Key Factors on Instagram

The use of Instagram has become essential for companies and other advertisers in recent years. Anyone who does not actively present their product on the platform today will get little attention, especially from 18- to 34-year-olds. Sponsored, paid advertising on Instagram is also becoming more and more common. These circumstances, the new user groups on Instagram and the ever-growing competition they create are among the reasons for the platform's changed algorithms.

https://de.statista.com/statistik/daten/studie/512308/umfrage/instagram-nutzerzerzahlen-fuer-oesterreich-nach-alter/

A few years ago, Instagram shared the key factors its then-current algorithm responded to: interest, relationship, timeliness, frequency, following and usage.

Today, the three most important points to focus on are relationships, interest, and the times at which you share content.

'Relationships' is above all about interactions, more precisely repeated interactions between specific profiles. Instagram favors regular exchange between users. You can either interact with a user directly (liking, commenting, mentioning) or share their content.

'Interest' is mainly about engagement. How many comments you receive, how often a post is shared, whether a post is saved and, of course, how often it is liked all influence the algorithm, which then shows the post to more users on the platform. The algorithm also works the other way around: the more often I interact with a profile, the more likely its posts and stories are suggested to me, or similar posts by other users are shown.

You can go into great detail here. For example, you influence the algorithm positively by replying to comments under your own post. Short words or emojis, however, count little or not at all. If you want to push your profile this way, you should reply with several words, at least four.

The most important interactions on the platform: liking, commenting, sharing, and how often a video is watched.

One thing not listed is 'saving posts', a relatively new feature that could gain more importance in the future.

When talking about the 'right times' on Instagram, it is also very much about frequency. Posting regularly is essential to gain followers efficiently on Instagram. But that alone is not enough: you also need to know when your target group is active on the platform, so that the chances of them seeing your post are higher.

Incidentally, Instagram adapts the posts it shows to each user.

If your followers are on the platform often and for long stretches, they are also shown older posts: the best posts created since they last used the app.

If they only use Instagram for short periods, they see the newest posts. This is not strictly chronological, because Instagram tries to find out what appeals to these users.
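To make the interplay of these three factors a bit more concrete, here is a minimal, purely illustrative sketch of how relationship, interest and timeliness signals might be combined into a single ranking score. All weights, field names and the formula itself are assumptions made for this example; Instagram has never published its actual scoring function.

```python
import math
import time
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    likes: int
    comments: int
    shares: int
    saves: int
    posted_at: float          # Unix timestamp

def rank_score(post, interactions_with_author, now=None):
    """Toy ranking score combining relationship, interest and timeliness.

    All weights are made up purely for illustration."""
    now = now or time.time()
    relationship = math.log1p(interactions_with_author)           # repeated contact
    interest = post.likes + 3 * post.comments + 4 * post.shares + 4 * post.saves
    age_hours = (now - post.posted_at) / 3600
    timeliness = math.exp(-age_hours / 24)                         # newer = better
    return 2.0 * relationship + 0.01 * interest + 1.5 * timeliness

# Example: a fresh post from an account the viewer often interacts with
# outranks an older, similarly engaged post from a stranger.
fresh = Post("friend", 120, 30, 10, 8, time.time() - 3600)
old = Post("stranger", 150, 25, 12, 9, time.time() - 48 * 3600)
print(rank_score(fresh, interactions_with_author=25) >
      rank_score(old, interactions_with_author=0))                 # True
```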

Types of Interaction on Instagram

Like: Show that you like a post

Comment: Interact by leaving a comment or replying to other users' comments

Sharing within the platform

  • Mention & link
  • Share privately (direct messaging): share with specific users within the platform

Sharing outside the platform

  • Send a link
  • Via other platforms (Facebook, etc.)

Save: Save posts privately to view them again later

Viewing duration: How long and how often you look at an image, video or story.

Reasons for User Research

Nowadays, many companies think they have no resources for user research, see no need for it, or fail to recognize its added value. So why conduct user research at all? You conduct user research when you pursue the following goals:

  • To create a product that offers real added value to its users
  • To create a design that is intuitive to use (usability) and enjoyable to use (joy of use)
  • To make it traceable how the return on investment comes about.

How do you achieve these goals?
You get the answers by conducting user research, applying various user research methods (I will go into the individual methods in more detail in my next blog post).

But what exactly does that mean? Let me explain with an example why user research is important:

Consider the market launch of e-scooters: the assumption was that e-scooters would make a city more environmentally friendly, that there would be fewer traffic jams and that people would be mobile without depending on public transport.² Yet anyone who has been to a city where e-scooters are available knows that they noticeably affect the cityscape.

This makes clear how important it is to conduct user research and to ask: Which problems arise here for people and their surroundings? How can we help to solve them?

This was just one example of many, but it shows how important user research is for our everyday lives.



References/Resources/Literature

¹ Semler, Jan / Tschierschke, Kira: App Design. Das umfassende Handbuch. 2nd, updated and expanded edition. Bonn: Rheinwerk 2019
² https://www.springerprofessional.de/mobilitaetskonzepte/mikromobilitaet/was-sie-ueber-e-scooter-wissen-muessen/17156852

Process for VR UI design

While most designers have figured out their workflow for designing mobile apps, processes for designing VR interfaces are yet to be defined. When the first VR app design project came through our door, the logical first step was for us to devise a process.

Traditional workflows, new territory

Interface-based VR apps work according to the same basic dynamic as traditional apps: Users interact with an interface that helps them navigate pages. We’re simplifying here, but just keep this in mind for now.

Given the similarity to traditional apps, the tried-and-tested mobile app workflows that designers have spent years refining won’t go to waste and can be used to craft VR UIs.

1. Wireframes

Go through rapid iterations, defining the interactions and general layout.

2. Visual design

At this stage, the features and interactions have been approved. Brand guidelines are now applied to the wireframes, and a beautiful interface is crafted.

3. Blueprint

Here, we’ll organize screens into flows, drawing links between screens and describing the interactions for each screen. We call this the app’s blueprint, and it will be used as the main reference for developers working on the project.

Tools

Before we get started with the walkthrough, here are the tools we’ll need:

  • Sketch. We'll use Sketch to design our interfaces and user flows. If you don't have it, you can download a trial version. Sketch is our preferred interface design software.
  • GoPro VR Player. GoPro VR Player is a 360-degree content viewer. It’s provided by GoPro and is free. We’ll use it to preview our designs and test them in context.
  • Oculus Rift. Hooking Oculus Rift into the GoPro VR Player will enable us to test the design in context.

Design for VR

“You can think of an environment as the world that you enter when you put on a VR headset”

Sam Applebee

Considering that VR, or virtual reality, is a technology whose construction demands attention to detail, the question of how design for VR applications works soon came up. What does it need? Where is it going? What is the most important requirement?

Many questions were raised, but I still think the essential question is: what does designing for VR actually mean?

Environments and Interfaces

Think about an environment as the world that you enter when you put on a VR headset — the virtual planet you find yourself on, or the view from the roller-coaster that you’re riding.

An interface, on the other hand, is the set of elements that users interact with to navigate an environment and control their experience. All VR apps can be positioned along two axes according to the complexity of these two components.

  • In the top-left quadrant are things like simulators, such as the roller-coaster experience linked to above. These have a fully formed environment but no interface at all. You’re simply locked in for the ride.
  • In the opposite quadrant are apps that have a developed interface but little or no environment. Samsung’s Gear VR home screen is a good example.

Designing virtual environments such as places and landscapes requires proficiency with 3D modelling tools, putting these elements out of reach for many designers. However, there’s a huge opportunity for UX and UI designers to apply their skills to designing user interfaces for virtual reality (or VR UIs, for short).

augmented reality – learning a language | 5

AR technology

There are several types of AR in use today. For a better understanding of AR, it is important to know the types and their advantages. There are marker-based and markerless technologies.

Marker-based AR is:

  • easy to produce
  • the most widely available technology (supports the biggest variety of devices)
  • commonly used for marketing and retail

Markers can be images or signs that trigger an augmented experience and act as anchors for the technology. A marker has to have enough unique visual points; images with corners or edges do especially well. Logos, packaging, QR codes, or the products themselves (an engine, a bottle, chocolate bars) can serve as markers. The technology uses natural feature tracking (NFT) and can deliver AR content like text, images, videos, audio, 2D/3D animations, objects, scenes, 360° videos and game mechanics. For image recognition there are license-based solutions (software development kits) on the market such as Vuforia, EasyAR, Wikitude and more.

image courtesy of Villeroy & Boch, image from https://learn.g2.com/augmented-reality
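As a rough illustration of what "enough unique visual points" can mean in practice, the following sketch uses OpenCV's ORB feature detector to count distinctive keypoints in a candidate marker image. The file name and threshold are invented; commercial SDKs such as Vuforia rate markers with their own, more sophisticated methods.

```python
import cv2

def marker_quality(image_path, min_keypoints=200):
    """Rough check whether an image has enough distinctive feature
    points to work as an AR marker (illustrative threshold only)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=1000)   # corner/edge-based feature detector
    keypoints = orb.detect(img, None)
    print(f"{image_path}: {len(keypoints)} keypoints")
    return len(keypoints) >= min_keypoints

# A logo or product photo with many corners and edges usually passes,
# a plain, uniformly coloured image does not.
marker_quality("candidate_marker.png")   # hypothetical file name
```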

Markerless AR is:

  • more versatile
  • not restricted to any particular surroundings
  • allows multi-person interaction in a virtual environment

Different styles and locations can be chosen by the user, who can also rearrange the surrounding virtual objects. The user's device collects information through the camera, GPS, digital compass and accelerometer to augment virtual content into the scene. It is restricted to certain devices: iOS 11 or later and Android 7.0 or newer. For placing objects in the real world, it uses plane detection, meaning that horizontal and vertical planes are detected. When ARKit or ARCore (Apple's iOS and Google's Android AR frameworks) analyse the real surfaces in the natural environment and detect a flat surface, they place a virtual object on the detected surface so that it appears to rest on the real surface. This works through new virtual coordinates that are related to the real coordinates in the environment.
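To make the last point about coordinates more tangible, here is a tiny, purely illustrative numpy sketch of the idea: a detected plane provides an anchor pose (a transform from plane-local to world coordinates), and a virtual object placed "on" the plane simply gets its local position mapped through that transform. The matrices and numbers are invented; ARKit and ARCore expose this through their own anchor APIs.

```python
import numpy as np

# Hypothetical anchor pose of a detected horizontal plane:
# identity rotation plus a translation in real-world coordinates.
plane_to_world = np.eye(4)
plane_to_world[:3, 3] = [0.4, 0.0, -1.2]   # plane centre, in metres

# Virtual object placed 10 cm above the plane, in plane-local coordinates.
object_in_plane = np.array([0.0, 0.10, 0.0, 1.0])

# The object's new virtual coordinates, related to the real coordinates
# of the detected surface.
object_in_world = plane_to_world @ object_in_plane
print(object_in_world[:3])   # -> [ 0.4  0.1 -1.2]
```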

The following types technically fall under markerless AR as well:

  • Location-based AR (Pokémon Go: characters in specific locations)
  • Superimposition AR (recreating or enhancing an object in the real world)
  • Projection-based AR (projectors instead of mobile devices: hologram look)
  • Outlining AR (outlining boundaries and lines, e.g. on a traffic lane)

A combination of both technologies — marker-based and markerless — in one application is possible as well.

Looking back at the last blog post (number 4) about current AR applications and the use of flashcards to learn sign language: the cards themselves serve as markers, and marker-based technology triggers the augmented content.

Development of a marker-based app

As an example, and to get to know the rough technicalities, this section describes how a mobile marker-based AR letter-recognition application was developed. Suman Deb, Suraksha and Paritosh Bhattacharya analysed how an AR application should be created and developed to improve deaf-mute students' sign language skills. The media cards used showed a specific Hindi letter and triggered the application to display a 3D animated hand motion for each letter.

A quick description:

  1. Upload pictures of markers: To make an app respond to certain images (markers), every image has to be stored in a library. The developers of the app uploaded them to the Vuforia library. As described above, the Vuforia Engine is a software development kit (SDK) that allows developers to add advanced computer vision functionality to an application, so that it can recognize images and objects and interact with spaces in the real world.
  2. Import the Unity package: The Unity package downloaded from Vuforia was imported, along with the Vuforia Android and image-targets packages, to generate the augmented reality application.
  3. Arrange the 3D hand models with the markers: To ensure the right model is shown, the models are matched and placed into an image target hierarchy, so that after scanning a media card (flashcard) the corresponding animated 3D hand model appears in the augmented reality interface.
  4. Include features: The menu offers features like zooming in and out and rotating the 3D hand. The scanned letters (markers) can be combined into words.

In use, the camera feed is analysed by the image-capturing module when it is pointed at the marker. Binary images are then generated and processed by the image-processing step. The marker-tracking module tracks the location of the marker and displays the corresponding 3D hand animation over the marker itself. The rendering module has two inputs at a time: the pose calculated by the tracking module and the virtual object to be augmented. It combines these inputs and renders the image on the screen of the mobile device.
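The paper's pipeline is built with Vuforia and Unity, but as a rough, framework-independent sketch of the same loop (capture a frame, detect the marker, estimate its pose, hand that pose to a renderer), here is a minimal example using OpenCV's ArUco markers (OpenCV 4.7+ API). The camera intrinsics and marker size are placeholders, and drawing coordinate axes stands in for rendering the 3D hand model.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics; a real app would use calibrated values.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
marker_len = 0.05   # marker side length in metres (assumed)

# 3D corners of the marker in its own coordinate system.
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float32) * marker_len / 2

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                              # image-capturing module
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # image processing
    corners, ids, _ = detector.detectMarkers(gray)     # marker tracking
    if ids is not None:
        img_pts = corners[0].reshape(-1, 2)
        _, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)  # marker pose
        # A renderer would draw the animated 3D hand at (rvec, tvec);
        # here, drawn axes stand in for the virtual object.
        cv2.drawFrameAxes(frame, K, dist, rvec, tvec, marker_len)
    cv2.imshow("AR marker demo", frame)
    if cv2.waitKey(1) == 27:                           # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```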

Sources

https://www.springerprofessional.de/plane-detection/16253564

https://www.howtogeek.com/348445/what-are-the-arcore-and-arkit-augmented-reality-frameworks/

https://learn.g2.com/augmented-reality

https://www.youtube.com/watch?v=qAaUSmVfpaU

https://www.youtube.com/watch?v=16jT1_MtTXs

Suman Deb et al. / Procedia Computer Science 125 (2018), p. 492–500: “Augmented Sign Language Modeling(ASLM) with interaction design on smartphone – an assistive learning and communication tool for inclusive classroom”

Augmented Reality Contact Lens

In the process of my research, I came across a company that has presented a prototype of an augmented reality contact lens. The company, Mojo Vision, was founded in California in 2015 and has since been trying to bring to market a contact lens that sits so close to the pupil that the eye cannot see the lens itself, allowing virtual elements to become visible. The Mojo Lens is currently still a prototype and must first pass testing to be certified and considered safe and effective to use. It will probably take a few more years before the first medical tests start.

Mojo Lens

What is fascinating about the prototype is that technical components such as chips, displays or sensors can be made so small that they can be combined in a contact lens. 

Visualization of the display of the lens through VR

The contact lens should only display information or items when they are wanted; the lens can therefore understand when it is needed and when it would be more of a nuisance. The current state of the prototype (shown in 2020), or more precisely of the Mojo Lens display, can be tried out with a VR headset. In this virtual world you can see, for example, the current weather or weather forecasts, the amount of traffic, or a calendar when you focus your eyes on a certain spot in the image.

The mission and goal of Mojo Lens, however, is to use this technology to help people who have severely impaired vision. The lens should be able to highlight important people or things or zoom in on objects to be of help. Only then would the lens be accessible to the public.

Plan of the past, present and future of the Mojo Lens

Sources

  1. Mojo Vision’s Augmented Reality Contact Lenses Kick off a Race to AR on Your Eye, Jason Dornier (17.1.2020), https://singularityhub.com/2020/01/17/mojo-visions-augmented-reality-contact-lenses-kick-off-a-race-to-ar-in-your-eye/
  2. Mojo Lens The World’s First True Smart Contact Lens, Mojo Lens (2021), https://www.mojo.vision/mojo-lens
  3. The making of Mojo, AR contact lenses that give your eyes superpowers, Mark Sullivan (16.1.2020), https://www.fastcompany.com/90441928/the-making-of-mojo-ar-contact-lenses-that-give-your-eyes-superpowers

Fact Checking Sites

As mentioned in recent posts, there are multiple reasons for the rise and existence of false or misleading information in the digital age. Some of them occur because of a data void, but there are also other reasons like the so-called filter bubbles, where an algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click behavior and search history. This term was coined by internet activist Eli Pariser in 2010 and also discussed in his 2011 book of the same name. As a result, users get isolated from information that might differ from their own opinion. This leads to less exchange of information and, again, might be harmful to our civic discourse.
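As a toy illustration of the mechanism Pariser describes, the following sketch ranks articles purely by how often the user has already clicked on their topic; the data and function are invented. Even this crude kind of personalization already pushes viewpoints the user never engaged with out of sight.

```python
from collections import Counter

# Hypothetical click history: topics the user engaged with before.
click_history = ["politics-right", "politics-right", "sports", "politics-right"]

articles = [
    {"title": "Tax cuts announced",  "topic": "politics-right"},
    {"title": "Election fact check", "topic": "politics-left"},
    {"title": "Local derby tonight", "topic": "sports"},
]

def personalized_feed(articles, history, k=2):
    """Rank articles by how often the user clicked that topic before.

    Topics the user never clicked drop to the bottom and, with a small k,
    out of the feed entirely: the filter bubble."""
    counts = Counter(history)
    ranked = sorted(articles, key=lambda a: counts[a["topic"]], reverse=True)
    return ranked[:k]

for article in personalized_feed(articles, click_history):
    print(article["title"])   # the fact check never makes it into the feed
```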

The extreme negative effects a filter bubble can have are shown in the following video, THE MISEDUCATION OF DYLANN ROOF (trigger warning: violence, racism and racial slurs, hateful language directed at religious groups).

https://www.splcenter.org/files/miseducation-dylann-roof

Here is a short video of things to look for when you are uncertain or just want to know what to look for when surfing the world wide web:

Spotting Bogus Claims

Even with the things to look for mentioned in the video, sometimes that is not enough. If you watched the video about the miseducation of Dylann Roof, you will clearly realize that websites which spread false information or hate speech are sometimes designed in a similar way to reliable news pages, which can make it difficult for less savvy users to identify propaganda and misinformation.

Since 2010, a lot of fact-checking sites have appeared. Most of them rely on the same principle: they fund their work through donations and write articles about current rumors.

FactCheck.org

Fact-checking sites like FactCheck.org or The Washington Post Fact Checker mostly comment on false information spread by politicians and the like. However, unlike social media platforms, they do not show or label content directly where false information is spread and shared across platforms. You will find statistics about that here.

Other sites like PolitiFact show statements and their truthfulness in the form of a "Truth-O-Meter". In my personal opinion, the design of the quotes and the "Truth-O-Meter" does not look particularly sophisticated or credible. In the next post I want to do a survey about the credibility of these sites and their designs.

PolitiFact

Another fact-checking organization or institute is the IFCN. Its website is really transparent and well designed. Its function is described as follows: "The code of principles of the International Fact-Checking Network at Poynter is a series of commitments organizations abide by to promote excellence in fact-checking. We believe nonpartisan and transparent fact-checking can be a powerful instrument of accountability journalism."

ifcncodeofprinciples.poynter.org

They use a clear and consistent design language, like corporate colors and fonts. The International Fact-Checking Network is a unit of the Poynter Institute dedicated to bringing together fact-checkers worldwide. They also use a corporate badge to verify organizations, which looks like this:

IFCN Badge

Around 100 fact-checking or news organizations all over the world use this service or way of validating, even though the application process is not easy. Besides the other reasons why implementing such verification is important, the good design clearly makes the site look more sophisticated.

Some Fact Checking Sites and Organizations:

https://www.factcheck.org/

https://www.washingtonpost.com/news/fact-checker/

https://www.politifact.com/truth-o-meter/

https://www.opensecrets.org/

https://www.ifcncodeofprinciples.poynter.org/

Next up: a survey about the credibility of fact-checking websites, systems/icons to label content, and which labels we need.

AR with social media

Young people in particular are tempted to use new, unknown technology, as they are more familiar with current trends and more open to new things. Even if it is no longer true that social media is only used by young people, the majority of users is still young. According to a survey by GfK, 45% of 19- to 28-year-olds and 49% of 29- to 38-year-olds said they would rather visit a retail store that offers AR or VR experiences than one that does not. At the same time, only 31% of 39- to 53-year-olds said the same.

If an AR application is well realized, a realistic effect can be achieved, for example to try out new lifestyle products like clothing or make-up.

Facebook
Since late 2018, Facebook and Instagram have offered an AR experience to their consumers through a few selected brands. For the first experiences, consumers were able to try on different products like make-up, lipstick or sunglasses directly in their news feed.
Through a simple call-to-action button, consumers can start the AR ad, provided by Facebook's own AR platform, Spark AR.

Snapchat
Snapchat, on the other hand, focused from day one on a mobile social network with a big selection of virtual stickers and soon also the possibility of interacting with augmented reality objects. Nowadays, Snapchat is unthinkable without its filters, which lie over your face and react interactively when you change your facial expression.
To be exact, Snapchat already presented the new feature "Lens" in 2015, which enabled consumers to take snaps using face-recognition technology. While everybody started sending snaps of themselves throwing up rainbows when opening their mouth, nobody knew back then how far this would go.

Release of Snapchat Lens 2015

Only two years later, in April 2017, Snapchat released the feature "World Lenses", which brought 3D-rendered objects to life. Now users were able to build a little character of themselves, a so-called Bitmoji, or simply use stickers or other objects, and with the help of AR these started to move, act and dance.

Release of Snapchat World Lenses 2017
3D Bitmoji

But those fun features of Snapchat were just the beginning. The business part of this AR hype is so-called Shoppable AR, a feature that lets consumers bring digital ads to life or even try out new products and buy them straight away.
The marketing strategy behind this feature is to let users discover brands' products in a fun way, with the focus not on the ad but on the fun of interacting with the products. Because of the high fun factor, consumers are tempted to send snaps with these product-placement filters to their friends, becoming a kind of brand representative.

Shoppable AR

Goals of AR ads in social media
There can be many reasons to choose AR for an ad, but on social media, for example, the following goals can be pursued:

• brand awareness
• traffic
• conversions
• reach

Some say social media is the main reason why AR established itself so fast and so well. For sure, the target group on social media is perfect for using this new technology and finding out its potential.

Since Zuckerberg described the AR features in Facebook as "phase one", we can be sure there is more to come. We will see what the future of AR and social media brings.


Sources

Facebook has made AR ads available to all marketers through its ad manager
FACEBOOK: GLOBALER ROLLOUT VON AUGMENTED REALITY ADS
Augmented Reality (AR) Ads
AR auf Social Media: Was kann Facebook, Instagram und Snapchat?

Artificial Intelligence (AI) | part 1

While Artificial Intelligence is being used more now than ever before, the concept is not new. John McCarthy was already talking about "the science and engineering of making intelligent machines" back in the 1950s. Because of his numerous contributions to the fields of Computer Science and AI, he has been called the father of AI. Artificial Intelligence is a sub-field of Computer Science and is about how machines imitate human intelligence. It is about being human-like rather than becoming human. AI is also commonly described as any task performed by a machine that would previously have been done by a human, but there are many different definitions, and they shift depending on the goal the AI system has to achieve.

“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

– Oxford Dictionary

“artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

– The Encyclopedia Britannica

Types of AI

Artificial Intelligence can be divided into different types. These types can either be based on the abilities/capabilities or on the level of intelligence/functionalities of the system. 

source: https://data-flair.training/blogs/ai-and-machine-learning/

Based on Capability

Narrow AI

Narrow AI is also known as Weak AI or Artificial Narrow Intelligence (ANI) and is focused on one single “narrow” task. This type is not able to do anything that was not programmed and targets only a single subset of cognitive abilities.

Artificial Narrow Intelligence (ANI) is the most common type of AI at the moment. It also includes more complex systems that are able to teach themselves with Machine Learning or Deep Learning. Most people nowadays already use this type of AI on a daily basis. It powers personal assistants like Siri and Alexa, chatbots on websites, translation software, the Google page-ranking system and much more. Narrow AI is also used in the health sector and can diagnose cancer and other illnesses with very high accuracy by analyzing MRI images.

General AI

General AI is also known as Artificial General Intelligence (AGI) and these systems will have the same capabilities as humans. They can learn, perceive and understand like a human being. But because there is currently not enough knowledge about the functionality of the human brain to develop these systems, they are still under development and will not be available anytime soon.

The best attempts at Artificial General Intelligence also include simulations on the fastest supercomputers. Back in 2011, the Fujitsu K computer was able to simulate a single second of neural activity in about 40 minutes. The successor of this supercomputer, Fugaku, is the fastest supercomputer at the moment and has a processing power of about 415 petaFLOPS. But the US government is already building an even faster supercomputer named Frontier. Frontier will have a processing power of about 1.5 exaFLOPS and will go online later this year. This supercomputer will also be the first machine with more processing power than the human mind (estimated at about 1 exaFLOP).

Super AI

Super AI is also called Artificial Super Intelligence (ASI) and will be more capable than any human. This technology is currently far away from becoming real but it would be the most capable form of intelligence on earth. Artificial Super Intelligence would also be able to perform incredibly well in creative tasks like design, decision making and even in emotional relationships. These systems will be better than any human at everything they do and may even take over the world.

source: https://www.jigsawacademy.com/what-are-the-different-types-of-ai/

Based on Functionality

Reactive Machines

Reactive Machines are the oldest form of Artificial Intelligence and therefore also have extremely limited functionalities. These systems do not have a memory and are not able to learn from previously gained experiences. Reactive Machines are only using present data for solving specific tasks.

One of the most popular examples is IBM's Deep Blue, the machine that defeated chess grandmaster Garry Kasparov in 1997. Deep Blue can identify the pieces on a chessboard, knows how each of them moves and makes predictions about the next moves. But it ignores everything that happened before the present moment: it looks at the chessboard after every move and starts deciding from there.
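Deep Blue's real search is vastly more elaborate, but as a minimal sketch of what "reactive" means here, the following toy minimax player (for tic-tac-toe rather than chess) evaluates only the board it is handed at that moment; it keeps no memory of earlier positions or past games.

```python
def check_winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Reactive decision-making: pick a move from the current board only."""
    winner = check_winner(board)
    if winner == 'X':
        return 1, None
    if winner == 'O':
        return -1, None
    if all(cell is not None for cell in board):
        return 0, None                      # draw

    best_move = None
    best_score = -2 if player == 'X' else 2
    for i, cell in enumerate(board):
        if cell is None:
            board[i] = player
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[i] = None                 # undo: nothing is remembered
            if (player == 'X' and score > best_score) or \
               (player == 'O' and score < best_score):
                best_score, best_move = score, i
    return best_score, best_move

board = ['X', 'O', 'X',
         None, 'O', None,
         None, None, None]
print(minimax(board, 'X'))   # best reply for X, computed from this board alone
```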

Limited Memory

Limited Memory is the most common type of functionality-based AI. It is able to learn from data and base its decisions on that data. This type of Artificial Intelligence uses data from big databases as training for future problems. Limited Memory is currently used for voice assistants, image recognition, chatbots and cars with autonomous or self-driving capabilities.

Theory of Mind

Theory of Mind will be the next level of AI systems. This type of Artificial Intelligence will be able to understand needs, emotions, beliefs and thought processes and will therefore be especially useful for researchers. Theory of Mind will be successful when systems are able to truly understand human needs. 

Self Awareness

Self Awareness will be the final stage of Artificial Intelligence and currently exists only hypothetically. Systems of this type will also have emotions, needs, beliefs and even desires of their own. Although this technology is decades away from becoming real, people are already thinking about these systems and whether they will take over humanity and enslave all humans.

source: https://www.jigsawacademy.com/what-are-the-different-types-of-ai/

Subsets of AI

Machine Learning

Machine Learning is one of the most popular and also most important subsets of AI. It helps AI systems learn and improve their capabilities without being explicitly programmed. Because of it, systems become better and better at specific tasks.

We are already using systems with ML in our daily lives. The most popular technologies powered by Machine Learning include personal assistants, targeted advertisements on social media, image recognition software and traffic predictions on services like Google Maps.

ML uses Neural Networks and other algorithms and can be divided into the following categories: Supervised Learning, Unsupervised Learning and Reinforcement Learning.

Neural Networks

Neural Networks are a subset of Machine Learning. They are artificial networks of connected nodes whose structure is loosely modeled on the human brain. Neural Networks are trained on databases with large sets of labeled data; the most common databases consist of images and their corresponding labels.

If you feed a Neural Network thousands of pictures of traffic signs together with their labels, it can inspect and analyze these pictures, learn a mapping from the data across its layers and finally sort the signs into different categories. A Neural Network trained like this can recognize common traffic signs next to the road, categorize them and, for example, show the driver the current speed limit.
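As a minimal sketch of this kind of supervised training, assuming a hypothetical folder of labeled traffic-sign images (one subfolder per category), a small convolutional network in Keras could look roughly like this. The architecture, image size and paths are illustrative only, not the setup of any real driver-assistance system.

```python
import tensorflow as tf

# Hypothetical dataset layout: traffic_signs/<class_name>/<image>.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "traffic_signs", image_size=(64, 64), batch_size=32)
num_classes = len(train_ds.class_names)

# Small convolutional network: early layers learn edges and shapes,
# later layers combine them into whole sign categories.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes),     # one output per sign category
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

model.fit(train_ds, epochs=10)   # learn to map labeled images to categories
```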

Deep Learning

Deep Learning is a Machine Learning technique that teaches machines how to learn. It is also called Deep Neural Learning and builds on Neural Networks with many layers ("deep" networks).

Deep Learning is also used in the automotive industry. Systems like driverless cars or voice assistants use it to analyze thousands of hours of video and images. Self-driving cars can learn how to drive and navigate specific roads by studying road patterns, the driving habits of human drivers and the other vehicles on the road. But this process also requires a lot of data to work properly.

source: https://serokell.io/blog/ai-ml-dl-difference

Artificial Intelligence is already used across nearly all industries. AI is completing our words as we type them, vacuuming our floors in every corner of the room, providing directions while avoiding high traffic roads, matching up passengers for ridesharing services and recommending what we should buy next on Amazon or watch next on Netflix.

Resources | part 1

https://www.forbes.com/sites/bernardmarr/2018/02/14/the-key-definitions-of-artificial-intelligence-ai-that-explain-its-importance/?sh=7fb6b6384f5d

https://www.britannica.com/technology/artificial-intelligence

https://www.simplilearn.com/tutorials/artificial-intelligence-tutorial/types-of-artificial-intelligence

https://interestingengineering.com/the-three-types-of-artificial-intelligence-understanding-ai

https://www.edureka.co/blog/types-of-artificial-intelligence/

https://www.ibm.com/cloud/learn/what-is-artificial-intelligence

https://builtin.com/artificial-intelligence

https://www.theverge.com/2019/5/7/18535078/worlds-fastest-exascale-supercomputer-frontier-amd-cray-doe-oak-ridge-national-laboratory

https://learn.g2.com/applications-of-artificial-intelligence

https://www2.deloitte.com/fi/fi/pages/technology/articles/part1-artificial-intelligence-defined.html

BPE of Science centers in Austria 2

Haus der Natur Salzburg

Permanent exhibition: the brain, intelligence, consciousness, emotion

I was particularly impressed by this exhibition. The space is really big and allows you to explore freely, as if entering a cabinet of curiosities. Each area of the exhibition focuses on a topic related to the brain; it deals with everything related to the organ. Visitors can explore it from all perspectives: anatomy, personality, learning, perception, consciousness and sleep. There are several hands-on exhibits where visitors can try games and play with their memory.

Entrance of the permanent exhibition

In contrast to the first BPE we described in the COSA center, I found the graphics in this museum much more comprehensible and legible, as you can see in the picture below:

Graphics used to show some history around the brain and big thinkers

When entering the exhibition, I was surprised to see a little robot in the middle of the path. Its aim is to guide visitors through the exhibition: it can talk and lead you through the stations while providing explanations. A lot of children were playing with it. I find the idea interesting because, given the coronavirus situation, it lets you get explanations without a human guide. I was also impressed that it attracted so many children.

Pictures of the KIM AI robot

This picture shows the setup for EEG sampling (electroencephalography). This is something that could be complex to explain, but is quite easy to show. It works by placing electrodes on the head; the recorded signals are then collected and processed by a computer.

Picture of the EEG setup
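As a tiny, invented illustration of what "processed by a computer" can involve, the following numpy/scipy snippet takes a synthetic EEG-like signal and computes its frequency spectrum, the kind of step used to separate alpha, beta and other brain-wave bands. Nothing here reflects the actual software used in the exhibit.

```python
import numpy as np
from scipy import signal

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)                # 10 seconds of data
# Synthetic "EEG": a 10 Hz alpha rhythm buried in noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

# Power spectral density: which frequency bands dominate the signal.
freqs, psd = signal.welch(eeg, fs, nperseg=2 * fs)
alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"mean alpha-band power: {alpha:.2e} V^2/Hz")
```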

The exhibition uses a lot of touch devices like this one. On this screen, you can learn about the anatomy and functions of the brain by simply tapping the area you want information about. The brain is represented in 3D, and you can look at it from different angles.

The next part deals with the topic of drugs and intoxication. I really liked how it was depicted, and the little windows that let you see what the substances look like. Since those products are usually prohibited, it is not easy to see them otherwise. I found the pictures on the wall really clear.

Graphics showing the effect of drugs on your body

This part of the museum was more about showing than hands-on, but there are a lot of hands-on experiments in the Science Center part. For me, the success of this first exhibition lies in the clear graphics and the 3D representations of complex neuroscience topics, which help you understand them without too much difficulty.

Hands-on experiments in the Science Center

This Science Center really focuses on hands-on experiments for children. It presents the themes: energy and lifting, acoustics and music, physics and technology, body and fitness. Every setup and activity is kept simple so that young children can interact with it.

Energy and Lifting

On the bottom floor of the Science Center, everything revolves around the theme of energy and lifting. Hands-on experiments make it easier to understand the law of the lever, hydraulic lifts, power transmission, and the generation of electricity from water power and solar energy.

The questions that must come to everyone's mind are: How do turbines work? How can we produce electricity with generators? Different solar energy experiments offer insight into the technology of photovoltaics. And the water experiment area invites us to get our hands wet, raise and dam water, and discover the incredible power hidden in water.

Acoustics and Music

The first floor of the Science Center is devoted to the phenomena of acoustics. From the wave nature of sound to the exploration of sounds and noises and the transfer of sound to the human ear, visitors can explore everything relating to the theme of sound here.

A special highlight is the “Feel Mozart” area: with a violin you can actually walk on, the vibrations of music can be not only heard, but also felt.

In the Target Singing area, you can check if you’ve hit all the right notes. And finally, in the screaming cabin you can test out the volume of your own voice.

One of the devices visible in the center. You can click and play any song and see the frequency/recording waves

Physics and Technology

The second floor of the Science Center offers a myriad of experiments in physics, technology, and mathematics. Simple experiments provide confirmation of great natural laws: build a bridge and then test it out right away, launch a rocket using compressed air, make a ball float as if by magic, feel the aerodynamic lift with your own body... here you'll learn what forces are capable of doing!

Questions come up and are answered by doing experiments. How do the fastest and slowest gears work? Why is fine powder used for cement and coffee? Nothing compares to experiencing it for yourself!

Body and Fitness

Also on the second floor, a large area of the Science Center is devoted to the dexterity, movement, and health of the human body. Our own body is the focus here with simple experiments and athletic contests.

These devices help to grasp the concepts of force and physical activity. What better way to understand how something works than trying it yourself?
By looking at the skeleton behind the glass, you can see which parts of your body are activated when you do a physical activity.
Physical device showing the process of teeth cleaning

Conclusion:

These two examples highlight the importance of good graphics and of haptics in science centers. Good graphics help to organize sometimes complex information and give the "big picture" of a subject, while haptic supports, whether highly technological or not, help to grasp concepts, test them in order to understand them, and facilitate memorization.

In the next article I will try to answer the following questions around learning strategies:

  • Which strategies are best for involving the public and improving learning?
  • How can we create fun but educational experiences?
  • Why is multisensory design particularly interesting for exhibition design and learning?

Source:

https://www.hausdernatur.at/en/