Talking to a friend – Is artificial intelligence really intelligent?

In the last blogpost we had a look at how we as designers can use artificial intelligence (AI) in our work. For this post, we need to get back to the first question: will AI take over the world and kill us all?

I talked to my friend Michael Meindl, who is doing his PhD in the field of artificial intelligence. Right now, he is researching how robots and machines learn movement and how different parts of a machine can communicate with one another, just like the parts of a human body do. He uses machine learning to make the communication within the system possible. His research will probably be used in the medical field, for example for prostheses. He is the smartest friend I have (though the competition isn’t really hard, since I’m friends with lots of lovable idiots). I asked him what he thinks about the future of AI, what this means for us and, of course, whether humanity will get destroyed by this technology.

He stated that if we look at how AI is discussed in the media, we are talking about the wrong matters and trying to handle problems which might never arise. Our thinking about AI is shaped by science-fiction books and movies, and by the misconception that a machine might have human attitudes or interests. The following article is based on the conversation I had with Michi.

Often, we hear about the crazy short time it takes for an AI to learn a new game. People conclude from this that AI is a super quick way to learn things. But we need to take the years of research and programming into account. Even if you have two AIs which play different games and want to merge them together, it takes years of work to get that job done. Also, the way an AI learns new things seems kind of odd when we think about it. If a human wanted to learn chess by playing it a thousand times and just trying out moves over and over again to see if he could win the game like that, you’d consider him stupid. But that’s exactly what a machine learning algorithm does. And since we don’t even really understand how human intelligence functions, how are we supposed to create an artificial general intelligence (AGI)?
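The "stupid" trial-and-error learning described above can be sketched in a few lines: the program knows nothing about the game, it just tries moves many times and keeps statistics on which ones win more often. Everything here (the two moves and their hidden win rates) is invented for illustration, not taken from a real chess engine.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Two hypothetical opening moves with hidden win probabilities.
# The "learner" doesn't know these numbers; it can only try moves.
HIDDEN_WIN_RATE = {"move_a": 0.7, "move_b": 0.3}

def play_game(move):
    """Simulate one game: returns 1 for a win, 0 for a loss."""
    return 1 if random.random() < HIDDEN_WIN_RATE[move] else 0

# Try each move a thousand times and record the average result.
stats = {}
for move in HIDDEN_WIN_RATE:
    wins = sum(play_game(move) for _ in range(1000))
    stats[move] = wins / 1000

best = max(stats, key=stats.get)
print(best, stats)
```

After a thousand blind tries per move the statistics point to the better move, which a human would call obvious after a single explanation. That gap between brute repetition and understanding is exactly the point.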

Is this calculator an AI?

When calculators were new, people might have said yes; today, probably no. This example shows that the definition of intelligence is sometimes a very subjective matter. Some calculations we type in might be difficult to solve, but in the end this system just follows given commands. Is that intelligent?

We have a bit of a problem when we think about the definition of intelligence. Actually, an AI just does what it is told to do. It follows given commands. This sometimes looks to us as if the system were intelligent. The really intelligent thing about this instruction-following system is the algorithm which makes the system follow instructions. If a calculator doesn’t seem like an AI to you, then a self-driving car shouldn’t either. Just like a calculator, it follows commands and instructions.

Artificial intelligence as a design tool

For my research in the field of artificial intelligence I collected lots of information from different websites and videos. In the beginning I had trouble gathering everything in a nice manner. Positioning images in a Microsoft Word document seems like an undoable task. Pen and paper didn’t allow me to copy and paste anything. With InDesign I started laying out stuff which didn’t need a layout. After trying out some tools and being frustrated with their restrictions, I went looking for a tool which could help me save links and videos, structure information, and add text, thoughts and images. It should be easy to use during the process, and the output should look appealing without too much design effort. After trying out some online note tools, I found Milanote [1]. Milanote is a free online tool for organizing creative projects. It helps you gather information and structure it however you like. I fell in love instantly. You can have a look at my collection to see more examples of AI-driven designs and articles [11].

The tool Milanote helped me structure notes and gather all different kinds of information. View the whole collection at [11]

There is a lot to know about the typology, methods and kinds of AI. In the last blogpost I already explained the difference between machine learning, artificial intelligence and neural networks. The AI systems we know today are based on machine learning. How a simple machine learning algorithm works is not too difficult to understand. There are tons of YouTube videos which explain the basics; I recommend the video by 3Blue1Brown [2] because of its visual explanation, but anything will do.

I took a closer look at how I could use artificial intelligence in my own field of interest. As an interaction designer, there are many intersections where AI can help to create. I came across the website of Google AI Experiments [3], where different AI projects are shared. “AI Experiments is a showcase for simple experiments that make it easier for anyone to start exploring machine learning, through pictures, drawings, language, music, and more,” it says on the website. It’s a collection of work from Google teams and other creative teams in different fields which used AI to find a solution to a problem. You’ll find AI examples for learning, drawing, writing, music and other experiments. Just the sheer amount of creative work built with AI struck me.

I was especially impressed by the Teachable Machine by the Google Creative Lab [6]. They built a free online tool where you can create your own machine learning model in such an easy way that, to be honest, it almost feels rude how easy this tool makes machine learning seem. The video was very inspiring, showing all kinds of solutions and ideas built with pattern recognition. I think this is a huge step in the development of AI and machine learning. I tried out whether the tool could spot if I’m wearing glasses or not. First you need to gather pictures of what the model should recognize. Taking a few selfies of myself wasn’t too difficult. Secondly, by just clicking a button (yes, just clicking a button!!) you can train your model, and boom, that’s all.

The Teachable Machine makes machine learning crazy easy for non-programmers

This opens up a whole new world to non-programmers and will allow thousands of creative people to step into the field of machine learning/AI. I have the feeling that using this online tool for your own project might still be more difficult than it seems at first, since you need to set up the communication between your model and your code, but still, I’m impressed. Furthermore, this new technique of collecting data about an object opens up a whole new perspective. One of the annoying parts of training a machine learning model used to be that you had to feed it with tagged data. We all know those reCAPTCHA pictures from Google, where you need to click on the pictures which show traffic lights, cars, buses, road signs and stuff like that. What we are doing there is not only proving ourselves for a login, we are actually feeding an AI with very precious information [4]. (I sometimes click on the wrong images on purpose to confuse Google.)
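The "communication between your model and your code" usually boils down to this: the trained model hands your program one probability per class, and your own code has to turn those scores into a decision. Here is a minimal sketch of that glue logic; the class names and the confidence threshold are made up for the glasses example, not part of the Teachable Machine itself.

```python
# Hypothetical glue code: an image classifier typically returns a
# probability per class. Your own code decides what to do with them.
CLASSES = ["glasses", "no_glasses"]

def decide(scores, threshold=0.8):
    """Pick the most likely class, or None if the model is unsure."""
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best_index] < threshold:
        return None  # not confident enough to act on
    return CLASSES[best_index]

print(decide([0.93, 0.07]))   # confident: "glasses"
print(decide([0.55, 0.45]))   # unsure: None
```

The threshold is the interesting design choice: without it, the model always answers something, even when the picture shows neither of the two classes it was trained on.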

Furthermore, I made a list of how AI could be used in our field of work. This collection is shaped a lot by the technologies we used in our first semester.

  1. Use pattern recognition and build physical systems with Arduino
    – Use an Arduino to build a hardware solution where physical things are triggered.
    – Get the input of an event via image (computer vision) or via sound.
    – React to that event with your Arduino device.
    – Example: shown in the video of the Teachable Machine [5]. Could be used to switch on a light for a disabled person, open a pet door when the system sees the dog approaching, sort trash by filming it, etc.
  2. Use pattern recognition to control an object in virtual space
    – Use Unity to control an object in a virtual space.
    – Can track hand gestures or body movement to navigate or manipulate within a virtual 2D or 3D space.
    – Can be used for interactive applications in exhibitions.
    2.1 The more neural activity your brain has, the more likely you are to remember something. If you get the body to move, you can trigger muscle memory, and the user might find it easier to remember your content. For example: teach music theory not only with information or sound but by using the gaps between your fingers to visualize harmony theory.
    2.2 Higher immersion can lead to more empathy. For example, if you experienced being an animal in a burning jungle in virtual reality, you might feel more empathy for this concern. A lived experience is more likely to influence you emotionally than just being told that animals are dying in fires.
  3. Draw images with sound
    – Create “random” images which are drawn by events in the real world.
    – For the implementation you could use Processing or p5.js.
    – Example: you could film the street with your webcam and trigger an event when a blue car drives by. This could change how a picture is drawn by code. You could also use certain sounds instead.
  4. Visualizing data
    – Collect data and visualize it in images or in a virtual space.
    – Huge amounts of data can be classified and structured by machine learning to create strong Infographics.
    – Data can be very powerful and reveal insights which you wouldn’t think of. There are good examples where well-structured metadata revealed connections which didn’t seem related to the data itself. An episode of the design podcast 99% Invisible told how a list of e-mails within a company revealed who was a manager and who was probably dealing with illegal and secret projects, without reading a single e-mail [7]. Moreover, David Kriesel’s presentations give an impression of how powerful metadata is [8]. With the power of machine learning and AI we could reveal information which doesn’t seem obvious at first.
  5. UI design, recommendations and personalization
    – Use machine learning (ML) in your UI to make navigation easier and quicker.
    – Personalize systems for your user and create experiences where the user can move freely within your application.
    – Best practice found in article [9]:
    5.1. Count decisions as navigation steps
    Count how many decisions need to be made to navigate through the system. Reduce them with ML. The ML-generated suggestions shouldn’t make the user evaluate the options; otherwise the ML doesn’t make any sense here.
    5.2. A predictable UI is necessary when the stakes are high
    Do not use ML for critical navigation/finding. Humans work best in such cases. Consider using it for browsing and exploration.
    5.3. Be predictably unpredictable
    Basically, hide the new ML feature. I think it depends on the use case.
    5.4. Make failure your baseline
    ML will make mistakes. Build the system so that when mistakes happen, erasing them doesn’t take longer than just doing the job on your own in the first place.
  6. Use AI for creative exchange
    – Use AI as a conversation partner when creating new concepts.
    – AI is good at making links and connections to similar fields. Also, it’s good at bringing randomness into the game.
    – Example: writers who chat with an AI to boost their ideas. Since an AI is built with a neural network, it kind of works like our brain, so it’s capable of bringing fascinating ideas to the field it’s programmed for. And since it’s a machine and not a human, it can bring new perspectives into thinking (see the YouTube video “Precursors to a Digital Muse” below).
    – Example: the AI for the game Go played a move which seemed like a bad one to a human but maximized the winning probability, since it was interested in winning the whole game and not in conquering as much territory as possible. Professional Go players examined the game, which has been played since the 4th century, with a new perspective [10].
  7. Get rid of repetitive tasks
    I was so fascinated when I saw how the new iPhone automatically does all the photo editing which used to take me hours of work. Of course, it makes mistakes and is not as accurate, but come on, who enjoys cutting out a curly-haired person in Photoshop? Using a cropped image and putting it somewhere else is the fun part, not the cutting out. At least for me. When such tasks are done by a machine, we can concentrate on all the other ideas we have for that curly-haired-person image.
Example video for 6. Use AI for creative exchange.
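Idea 3 from the list above can be sketched without any camera at all: treat a frame as a collection of RGB pixels and fire an event when enough of them look blue. The "blueness" rule and the threshold below are arbitrary assumptions for illustration; a real Processing or p5.js version would read actual webcam frames instead of hand-made pixel lists.

```python
def is_blueish(pixel):
    """Very rough rule: blue channel clearly dominates red and green."""
    r, g, b = pixel
    return b > 120 and b > r + 40 and b > g + 40

def blue_car_event(frame, min_blue_pixels=4):
    """Trigger when a frame contains enough blueish pixels."""
    blue_count = sum(1 for pixel in frame if is_blueish(pixel))
    return blue_count >= min_blue_pixels

# Tiny hand-made "frames": grey street pixels vs. a patch of blue car.
empty_street = [(90, 90, 95)] * 20
with_blue_car = [(90, 90, 95)] * 14 + [(30, 40, 200)] * 6

print(blue_car_event(empty_street))   # False
print(blue_car_event(with_blue_car))  # True
```

Whenever `blue_car_event` returns True, your sketch could change a stroke colour, add a shape, or play a sound, which is exactly the "drawing driven by real-world events" idea.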

While researching where AI is and where designers are, you often read about the fear that AI will take away the jobs of designers. Since AI is capable of doing a lot of work which was dedicated to designers for a long time, that’s definitely true in some ways. But we need to evolve and adapt to technology. A lot of frustrating and repetitive tasks can be done by the machine; take advantage of this and start creating from that point. We can create much larger-scale projects when we can deal with such technologies.

  1. Free tool for collecting notes and structuring ideas:
  2. Machine Learning tutorial:
  3. Google AI experiments:
  4. Article on how recaptcha images help Googles algorithms:
  5. The teachable machine promotion video:
  6. The teachable machine website:
  7. Interesting podcast – the value of data:
  8. Interesting presentation on the power of data mining:
  9. AI in UI design:
  10. Documentary on the AlphaGo AI:
  11. Collection of articles, example videos and background information:

Why artificial intelligence is kind of stupid and will kill us all

Artificial intelligence (AI) is used in an enormous number of fields. It guides us through critical decisions in logistics, health, finance and which Netflix series to watch next. Even though we use AI a lot, many people do not know much about it. What people think about artificial intelligence ranges from ‘my fridge sends me smileys’ to ‘the end of the world is near’. While it’s currently used to decide how many apples stores should stock for next week, people are discussing how it will take over the world and kill humanity.

On a scale from 1 to 10, how scared should we be now?

The word AI is a dangerous buzzword. It is so abstract and broadly used that people mix up a lot of things while still telling the truth. The term is so difficult to define because we already have difficulties defining what intelligence exactly is. We need to differentiate between machine learning, neural networks and artificial intelligence to understand these broad assumptions about the technology.

Non-scientific graph of a timeline of possible futures with artificial intelligence, based on first impressions when you start researching this topic

Machine learning: In machine learning, the machine learns from its experience. You simply input data and it will output something. It will learn and understand after a while. But as soon as you change things a little, the system will fail. It is a technique for recognizing patterns: it can differentiate, this is a cat or a dog or the word ‘hello’, and then act according to what it has already learned.
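The "learns from experience but breaks when things change a little" behaviour can be shown with the simplest pattern recognizer there is, a nearest-neighbour classifier. The feature values and labels below are invented for illustration; the point is only the mechanism.

```python
import math

# Toy "experience": 2D feature points with labels (numbers are invented).
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((4.0, 4.2), "dog"),
    ((4.5, 3.8), "dog"),
]

def classify(point):
    """1-nearest-neighbour: answer with the label of the closest example."""
    def distance(example):
        (x, y), _ = example
        return math.hypot(point[0] - x, point[1] - y)
    _, label = min(training_data, key=distance)
    return label

print(classify((1.1, 0.9)))   # close to the cat examples: "cat"
print(classify((9.0, 0.1)))   # nothing like the training data, yet the
                              # system still answers with full confidence
```

That last call is the brittleness in miniature: the input resembles nothing the system has ever seen, but it has no way to say "I don't know" and confidently picks whichever old pattern happens to be least far away.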

Neural network: A neural network is one method of achieving machine learning. It is how you might build the actual software that does the process of learning. It is an algorithm which models how our brain works. When neurons are combined, you can have a set of inputs. Those inputs move further through the network until a certain threshold is met. This threshold can be tweaked and trained with a feedback system: you tell the system what the desired result is. This creates the ability to recognize patterns. So this whole process is a method for pattern recognition. There are other ways to do it too, but this is the most frequently used.
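The neuron-with-a-threshold idea can be written down as a single artificial neuron, a perceptron. The weights start at zero and the "feedback system" nudges them whenever the output is wrong. As a toy target it learns the logical AND function; the learning rate and epoch count are arbitrary choices for this sketch.

```python
# A single artificial neuron with a trainable threshold (perceptron),
# trained on the AND function as a toy example of feedback learning.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0            # the (negative) threshold, learned like a weight
learning_rate = 0.1

def predict(x):
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0   # fire only above the threshold

for _ in range(20):                # repeat the feedback loop
    for x, target in examples:
        error = target - predict(x)       # desired result vs. actual output
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

Real networks stack thousands of these neurons in layers and use a more refined feedback rule (backpropagation), but the principle is the same: wrong answers push the weights and thresholds around until the patterns come out right.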

Artificial intelligence (AI): While machine learning is about pattern recognition, AI is about mimicking intelligence. A computer can exhibit intelligence without actually being intelligent. An often-used example is the thought experiment of a man in a room with lots of information on the Chinese language. A Chinese speaker could be standing in front of that room, exchanging written messages with the man inside. Since the man has all the data and time he needs to answer the messages, the Chinese speaker outside would think he is writing to someone who is familiar with the language. But the man is just exhibiting the ability to speak Chinese without truly knowing the language. Furthermore, we need to differentiate between weak AI and strong AI. A weak AI exhibits the characteristics of being intelligent, while a strong one could possibly form a consciousness and be self-aware. This brings up questions of morality and ethics.

An interesting aspect of AI is shown in the work of the researcher Janelle Shane. In her TED talk she explains the power but also the limits of AI. She says: ‘The danger of artificial intelligence isn’t that it’s going to rebel against us, but that it’s going to do exactly what we ask it to do’. Computers and algorithms have no common sense. It’s difficult to think of every possible outcome, use case or scenario while programming.
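Shane's "it does exactly what we ask" can be shown with a deliberately silly example: ask a "model" only to maximize accuracy on data where 99 of 100 items are harmless, and the literal best answer is to always say "harmless". The data below is invented; the trap is real.

```python
# Invented, deliberately imbalanced data: 99 harmless items, 1 dangerous one.
items = list(range(100))
labels = ["harmless"] * 99 + ["dangerous"]

# We asked only for high accuracy, so the literal optimum is a model
# that ignores its input and always predicts the majority class.
def lazy_model(item):
    return "harmless"

correct = sum(lazy_model(item) == label for item, label in zip(items, labels))
accuracy = correct / len(items)
print(accuracy)  # 0.99: exactly what we asked for, and completely useless
```

The system never rebelled; it optimized the goal we wrote down, which is not the goal we had in mind (spotting the dangerous item). That is the gap common sense would normally fill.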

Further questions to be answered

The motivation behind this topic is to see what all the fuss is about: getting a clearer image of what is going on with this technology, learning how I can maybe use it in my own work, and finding out how we can deal with it in our everyday life. We need to understand how much it is already embedded in our everyday life and where this might take us. I want to dig deeper into how it works, what it can do and where the limits are. Furthermore, it might be appropriate to listen to what none other than Elon Musk and Bill Gates are warning about. Elon Musk is basically warning us that AI will take over the world and that we need regulations before it’s too late. He talked about AI as ‘an immortal dictator from which we would never escape’. Maybe we should all panic now.