Haptics - Feedback - active - passive

What types of haptics are there?

Currently, the industry's main focus is on making screen interactions and virtual environments more tangible. At the same time, materials science is making great progress in altering the physical properties of objects, such as stiffness or surface textures. Beyond smartphone applications, active haptic feedback has only recently become increasingly popular in consumer electronics and everyday objects. The list shows various fields of application and, at the same time, illustrates the difficulty of finding common terms and definitions.

Passive haptics refers to the physical properties of a material. These include parameters such as the size, weight, surface texture, and temperature of an object.


Active haptics means that physical feedback is emulated.
Apple now builds trackpads with the so-called Taptic Engine into all of its MacBooks. It responds to press force and, using built-in actuators, can give the user context-based feedback.
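As a rough illustration, the context-based logic of such a pressure-sensitive trackpad can be sketched as follows. The force thresholds and event names are my own assumptions for this sketch, not Apple's actual implementation or API.

```python
# Hypothetical sketch of context-based haptic feedback: map a measured
# press force to a feedback event, roughly in the spirit of a
# force-sensing trackpad. Thresholds and event names are invented.

def haptic_event(pressure_newtons: float) -> str:
    """Map a press force (in newtons) to a haptic feedback event."""
    if pressure_newtons < 0.5:
        return "none"        # too light: no feedback
    elif pressure_newtons < 2.0:
        return "click"       # normal click pulse
    else:
        return "deep_click"  # stronger pulse for a firm press

print(haptic_event(1.0))  # click
```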

Taptic Engine – Detail
Apple MacBook – Trackpad

Compared to all other senses, haptic perception is bidirectional. To obtain information from our environment, we have to actively examine and/or manipulate it. We have to lift an object to determine its weight, or gently squeeze a strawberry with our fingers to judge its ripeness.

Jones, L.A., 2018. Haptics, The MIT Press essential knowledge series. 
The MIT Press, Cambridge, Massachusetts.

Apple Watch – Digital Crown
https://developer.apple.com/design/human-interface-guidelines/

Alexander Moser
https://www.alexander-moser.at/

Future of this Research (6)

What can come out of this research is quite broad. It has been a highly theoretical work, mostly focusing on "what is" and "how is". The interviews I held and the literature I read gave me viable examples of how accessibility is a necessity if we want to achieve good and inclusive design.
It is important to note that although accessibility elements are mostly focused on people with disabilities, the whole user group benefits from them, because these methods and guidelines are aimed at making an interface or design more usable and more human-centred. So it is safe to say that whatever we as designers do, if we do it accessibly, everyone benefits.

Hence an interaction method focused specifically on the elderly cannot be the main goal here. Universality is the main goal: being inclusive and designing accordingly, so that the elderly can use our design while it is still used in the same sense by the rest of the population.

This research can be expanded further with the goal of exploring more examples and establishing guidelines for designers who work in this area. Similar guidelines already exist, and they can be improved in terms of commonality and applicability. A similar outcome can be found in the article "Elderly in the Digital Era. Theoretical Perspectives on Assistive Technologies".

Another future outcome could be creating a digital interaction method that tackles certain issues faced by the elderly without losing universality. An example of this can be seen in the article "User interface based on natural interaction design for seniors".

Another interesting exploration could be comparing the two countries I am connected to in this matter: Turkey and Austria. How do they compare in their practices, and how do their senior populations compare? How can the differences and similarities between the two countries' populations affect a designer's point of view? Would these populations require totally different interaction methods, or could there be one single solution? How does the cultural aspect change this matter?

How Covid-19 redesigned the world

A couple of days ago, I read a book in which history was defined not as a chronicle but as a series of rare, striking events that we perceive as abnormal. In fact, such events change our world and drive history forward more than everyday life does.

Covid-19 has entered our lives and will not leave them soon. It seems that everything has already been written and said, but I propose to look at how it is redesigning our lives in real time.

The COVID-19 pandemic has brought major changes to our lives. Against this background, the digitalization process in particular has accelerated: within a few months, the number of people who work from home and shop on the Internet increased sharply. At the same time, technologies appeared to stop the spread of the virus, which are essentially technologies for effective government surveillance of citizens.

Usual meetings were replaced by Zoom parties; probably everyone has attended at least one. Not to mention Zoom birthdays, Zoom proms, Zoom weddings, and Zoom everything else. No wonder the company earned more in the second quarter of 2020 than in the whole of 2019. In this situation, video calls began to be used not only for work and study but also for leisure. At the beginning of the pandemic, online concerts were very popular; there were even online tours, online exhibitions, and online performances. But by the end of the year, this trend had lost its relevance.

In the spring of 2020, the world's demand for psychotherapy grew at a record rate. The jump occurred in late March and early April, when borders were closed and self-isolation regimes were declared. Sales of plant-based sedatives increased. Conflicts escalated in families that had not surfaced before, because people used to be busy outside the house all day. Another stress factor was economic instability. Neurologists found that people prone to anxiety and auto-suggestion began to detect false symptoms of the coronavirus.

Services went online. Cinemas, gyms, and cafés, where they operate at all, do so only outdoors.

Now people pay more attention to their relatives, take care of their own health, and appreciate the time spent together. To be sure, the pandemic has given a dramatic boost to changes none of us expected: new words have appeared in the lexicon, and technology and medicine have begun to develop more rapidly.

The world has undergone a drastic redesign, and we will see the consequences later.

AR for product information

The product is the basis and often the only connection between a brand and its customers. This is why it is important to give users the best experience with, or while using, the product. But before that, users need to buy the product, and above all they need reasons to buy it.
In the best case, this is achieved by convincing users that they need this particular product. Design thinkers would say you need to create a whole user experience around the product to make your clients happy.

This user experience starts with collecting information about the product: why does the consumer need it, and what does it have to offer?
While the typical media for this step used to be a seller with personal service, the internet, or friends and acquaintances, Augmented Reality is a new way to give users an excellent and individual experience for gaining information about a product.

This kind of experience can be delivered in many ways. Whether the customer's smartphone, glasses, or a headset is used, and whether it is available at home, on a website, or even in a shop, it will take product information to the next level. And not only in a fun way: it also conveys the image of an advanced, forward-looking, and competent brand or company.

Most new technological product-information media also tend to push users toward buying online. Think of the internet, where you can read blogs or recommendations for almost every product: buying the product afterwards with only one click is very comfortable, but it is also a big problem for local shops and for the economy of small brands and companies. Giving customers a digital experience via AR, but inside the shop, will probably bring them into the shops to try out this nice and fun experience. There is then obviously a good chance that they will buy the product locally in the shop afterwards.

A nice example of this is the Toyota AR experience of 2019, where the company presented its new C-HR model via an AR app. It gave customers a view of the inner systems of the car, overlaid on a real physical car. In addition, customers could even interact with the car to get more information about the motor, battery, and fuel tank.

Toyota – vehicle demo

Thinking one step further, this kind of product information could also help fix problems in the future. If there are minor problems with a product, the AR app could offer solutions while scanning it and provide the customer with a repair guide. This would not only shorten service time but also save costs for the customer and time for the brand.

The goal of product information with AR
Even a quick search on this topic turns up many applications that already offer some kind of product information. But this can also go in the wrong direction. Customers who take out their smartphone, perhaps even download an app, and scan the product will be disappointed if they only get information they could have found without scanning. Think of a food product whose AR view shows that it is gluten-free or dairy-free while exactly these labels are already printed on the package: the user will be frustrated and stop using the app. In the worst case, the user might even not buy the product because of that.

Likewise, sending users to a social media platform or something similar is fun the first time but will not make them use it regularly.

For products where a lot of information could be given but the surface of the packaging is limited, for example wine bottles, AR is a great way to provide as much information as needed to convince the customer to buy.

Given the current situation, in which touching a product in a supermarket that others may have touched before is exactly what you want to avoid, such AR product information takes on a new relevance.

So COVID-19 could have a big impact on using AR to present your product, and information about it, on the customer's own smartphone.

Product information with Augmented Reality
Smarter product information with augmented reality
Augmented Reality (AR) for holographic product information in times of COVID19

Artificial Intelligence (AI) | part 2

AI in the Automotive Industry

Artificial Intelligence is getting more and more important in the automotive industry. The value of AI in automotive is expected to approach 10 billion Euros in 2024. When talking about Artificial Intelligence and cars, most people are just thinking about self-driving cars. Despite the fact that AI is a key technology to enable cars to drive autonomously, there are a lot more AI-powered services available in modern connected cars.

All Advanced Driver Assistance Systems (ADAS) like emergency braking, blind spot monitoring and lane keeping systems also depend on Artificial Intelligence. These systems are not only providing more safety and more convenience on the road, they are also helping customers, automakers and regulators to build trust in AI. This trust will play an important role when AI takes over the control of the vehicle in self-driving cars.

But Artificial Intelligence is not limited to driving features. It has the potential to optimize every process along the automotive customer journey. Some processes are already relying on AI and would not be possible without it.

Driver Monitoring

Artificial Intelligence is not just able to monitor the road and surroundings. It is also improving the safety by keeping an eye on the driver. AI is able to analyze if the eyes are on the road, how distracted the driver is and if the driver is getting tired. Depending on the status it could inform the driver to keep the eyes on the road, propose a small break at the next gas station or even safely stop the car when the driver is not reacting because of a serious medical problem.
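A common technique behind such tiredness detection is PERCLOS, the share of recent video frames in which the driver's eyes are closed. The following is a minimal sketch under assumed thresholds; a real system would use camera-based eye tracking and many more signals.

```python
# Illustrative drowsiness check based on PERCLOS: the share of recent
# video frames in which the driver's eyes are closed. The window size
# and alert threshold are assumptions for this sketch.

from collections import deque

class DrowsinessMonitor:
    def __init__(self, window: int = 100, alert_ratio: float = 0.3):
        self.frames = deque(maxlen=window)  # 1 = eyes closed in that frame
        self.alert_ratio = alert_ratio

    def update(self, eyes_closed: bool) -> str:
        """Record one frame and return the current driver status."""
        self.frames.append(1 if eyes_closed else 0)
        perclos = sum(self.frames) / len(self.frames)
        return "propose_break" if perclos > self.alert_ratio else "ok"

monitor = DrowsinessMonitor(window=10)
for closed in [False] * 6 + [True] * 4:   # eyes closed in 4 of 10 frames
    status = monitor.update(closed)
print(status)  # propose_break
```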

Another example of improved safety could be the use of AI during accidents. Milliseconds before impact, Artificial Intelligence could adjust the seat to a safer position and control how the airbags deploy, depending on the position, height, and weight of the driver.

Driver Recognition

AI is also able to detect if there is a driver in the vehicle, which driver is in the car and if the driver is actually allowed to drive the car. This feature is especially helpful when different members of a family are sharing one car. The car recognizes the driver and automatically adjusts the seating position, mirrors, ambient lighting, default temperature, favorite playlist and many more. Artificial Intelligence will be one of the key factors of vehicle personalization in the future.

source: https://www.futurebridge.com/blog/driver-monitoring-from-essential-safety-to-passenger-wellness/

AI Cloud Services

Connected vehicles need a lot of data to deliver all their services. AI-powered platforms ensure that this data is available to the services at all times.

Traffic Forecasting

Artificial Intelligence is especially useful for analyzing a lot of data in a short time. AI-powered traffic forecasting takes traffic data from the past and predicts the future traffic situation based on data from similar days, times, and conditions. It also helps find faster options for avoiding unexpected traffic jams.
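A minimal version of this idea, predicting from the average of past observations on the same weekday and hour, can be sketched as follows. The data values are invented for illustration; production systems use far richer models and live data.

```python
# Minimal history-based traffic forecast: predict the travel time of a
# road segment as the average of past observations from the same weekday
# and hour. The observations below are invented for illustration.

from statistics import mean

history = [
    # (weekday, hour, travel time in minutes)
    ("Mon", 8, 34.0), ("Mon", 8, 40.0), ("Mon", 8, 37.0),
    ("Mon", 14, 18.0), ("Sun", 8, 12.0),
]

def forecast(weekday: str, hour: int) -> float:
    """Average travel time over similar past days; NaN if no data."""
    similar = [m for d, h, m in history if d == weekday and h == hour]
    return mean(similar) if similar else float("nan")

print(forecast("Mon", 8))  # 37.0
```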

Predictive Maintenance

Traditional cars alert their drivers with check-engine lights, oil lights, and other combinations of dashboard lights only when the damage has already happened. Sometimes this is just too late, and accidents occur because of faulty parts. Connected vehicles already monitor all sensors with the help of AI and detect problems before they affect driving. Artificial Intelligence can also monitor the wear and tear of critical parts based on driving style, road conditions, and mileage, and could inform the driver that a specific part is going to break soon and should be replaced before something happens. In addition to hardware maintenance, automakers can also provide over-the-air (OTA) software updates to fix bugs, improve the functionality of the ADAS, or change the design of the infotainment system without the need to visit a dealership first.
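The wear-monitoring idea can be sketched as a simple trend extrapolation: fit a linear trend to a part's wear readings and estimate when it will cross a failure threshold. The readings and threshold below are invented example values; real systems use far more sophisticated models.

```python
# Sketch of predictive maintenance as trend extrapolation: fit a linear
# trend to a part's wear readings and estimate how many inspection
# intervals remain until a failure threshold is crossed. All values are
# invented example numbers.

def remaining_useful_life(readings, threshold):
    """Least-squares slope of the readings, extrapolated to the threshold."""
    n = len(readings)
    x_mean = (n - 1) / 2
    y_mean = sum(readings) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(readings))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den
    if slope <= 0:
        return None  # no measurable wear trend
    return max(0.0, (threshold - readings[-1]) / slope)

wear = [0.10, 0.12, 0.14, 0.16]  # e.g. brake-pad wear per inspection
print(remaining_useful_life(wear, threshold=0.30))  # ≈ 7.0 intervals left
```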

source: https://www.vector.com/de/de/know-how/technologien/automotive-connectivity/automotive-ota/

Car Manufacturing

But the applications of AI in automotive are not limited to the vehicle itself. Artificial Intelligence also has the potential to optimize different processes during the manufacturing of the car.

Assembly Line Robots

While assembly line robots were already in use in the 1960s, they are now working with humans rather than merely alongside them on different steps of the process. Assembly line robots not only shorten the time a car spends on the assembly line, they also improve safety and help avoid injuries such as back problems caused by heavy lifting. In many factories, robots already move materials, car parts, and the cars themselves between assembly lines automatically. With the further development of AI, these robots will be optimized even further.

Quality Control

Another important part of the manufacturing process is quality control. AI can not only detect irregularities in materials, it can also identify faulty parts before they are used in a car and predict whether it is cheaper to repair or to replace a part completely. Image recognition also helps identify flaws that arise during manufacturing, such as scratches in the paint job or small damage to the bodywork.

Supply Chain Automation

Artificial Intelligence also allows automakers to improve their supply chain management. It can predict the materials needed for upcoming production based on orders, optimize storage in the warehouses, and even check the quality of delivered parts to determine whether they are good enough to be used in a car.

Automotive Insurance / Insurtech

Insurance companies are also starting to use AI for risk assessment. They create risk profiles based on personalized data from previously owned and rented cars, driving style, and accidents. Based on this risk profile, Artificial Intelligence can predict how safe a driver is going to be and give every driver a personalized offer. This process could significantly lower insurance rates for safe drivers, while others may have to pay more than they do now.
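A toy version of such a risk-based pricing rule might look like this. The features, weights, and pricing formula are made-up assumptions; a real insurer would learn them from historical claims data.

```python
# Toy usage-based risk score and pricing rule. The features, weights and
# pricing formula are made-up assumptions; a real insurer would learn
# them from historical claims data.

def risk_score(hard_brakes_per_100km: float,
               night_driving_share: float,
               past_accidents: int) -> float:
    """Higher score = riskier driver (illustrative linear model)."""
    return round(0.5 * hard_brakes_per_100km
                 + 2.0 * night_driving_share
                 + 1.5 * past_accidents, 2)

def premium(base: float, score: float) -> float:
    """Scale a base premium by the risk score (hypothetical rule)."""
    return round(base * (1 + 0.1 * score), 2)

safe = risk_score(1.0, 0.1, 0)    # 0.7
risky = risk_score(6.0, 0.5, 2)   # 7.0
print(premium(500, safe), premium(500, risky))  # 535.0 850.0
```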

AI will also help with filing claims after an accident. A special app could guide drivers with detailed instructions after an accident and explain exactly which videos, photos and descriptions will be necessary to process the claim as fast as possible. Correctly created claims could even be processed by AI again and give an immediate response about the next steps. It would even be possible that AI analyzes the videos and pictures of the damaged car and tells the driver which repair shop is able to fix the problem, how long it will take and what’s covered by the insurance. 

AI and In-Vehicle Infotainment Systems

In-vehicle infotainment systems (IVI), also known as in-car infotainment systems (ICI), provide a unique combination of information and entertainment. These systems are the most visible digital component of a car and are therefore becoming more important. Infotainment systems in modern cars include audio and video content, games, social media, navigation, phone calls, and even in-car office features. Despite this range of features, only 56 percent of car owners are currently satisfied with their IVI.

In-car infotainment systems are also a major factor when drivers are purchasing a new car. Modern vehicles have already evolved from hardware-driven machines to software-driven electronic devices. Because of this shift, AI is also becoming more important for ICI.


source: https://media.mercedes-benz.com/article/14bac18d-812f-4418-88ca-4e47b8231b77

Voice and Gesture Recognition

AI-powered personal assistants like Siri and Alexa have already changed the way people interact with technology in their homes and on their phones. These voice-controlled assistants are now also transforming the automotive industry. Voice- and gesture-controlled interfaces allow an easy and intuitive interaction with in-car infotainment systems. With the help of these systems, drivers can interact with their car without taking their eyes off the road.

MBUX (Mercedes-Benz) is a good example of a voice-controlled in-car personal assistant that can change nearly every setting of the infotainment system. The AI behind it learns the driver's habits and preferences and even improves over time. MBUX is also capable of indirect command recognition: it can interpret sentences like "Hey Mercedes, I'm cold" and automatically change the temperature.
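The idea of indirect command recognition can be illustrated with a minimal keyword-based sketch. A production assistant like MBUX uses trained natural-language-understanding models rather than keyword matching, and the rules below are my own invented examples.

```python
# Keyword-based sketch of indirect command recognition: map free-form
# utterances to cabin actions. The rules are invented examples; a real
# assistant uses trained natural-language-understanding models.

INTENT_RULES = [
    ({"cold", "freezing"}, "increase_temperature"),
    ({"warm", "hot"}, "decrease_temperature"),
    ({"dark"}, "turn_on_ambient_light"),
]

def recognize(utterance: str) -> str:
    """Return the first action whose keywords appear in the utterance."""
    words = set(utterance.lower().replace(",", " ").split())
    for keywords, action in INTENT_RULES:
        if words & keywords:
            return action
    return "unknown"

print(recognize("Hey Mercedes, I'm cold"))  # increase_temperature
```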

Individual Marketing

In-vehicle infotainment systems can also be used for individual marketing. With the help of Artificial Intelligence, drivers and passengers could even get personalized offers or suggestions based on their preferences, needs, and habits, displayed directly on the in-car infotainment. Companies could even target potential customers while they are driving past their shops. The possibilities of AI in the automotive industry are endless.

Resources | part 2

https://www.fleeteurope.com/en/autonomous/europe/features/top-10-ways-ai-impacting-mobility?a=FJA05&t%5B0%5D=Top%2010&t%5B1%5D=AI&t%5B2%5D=machine%20learning&t%5B3%5D=e-mobility&t%5B4%5D=Smart%20Mobility&curl=1

https://becominghuman.ai/how-ai-has-been-%D1%81hanging-the-automotive-industry-e3d3c5324e03

https://www.alten.com/in-vehicle-infotainment-challenges-automotive-engineers/

https://www.infineon.com/cms/en/applications/automotive/infotainment/

Skeuomorphism in digital music production programs | Part 1

Digitalization has brought not only a technical change that has affected almost all areas of life, but also a social and cultural change. Dealing with technology is taken for granted nowadays, because that is what modern life consists of. Everything should work as quickly and easily as possible, be intuitive to use, and, best of all, everyone should be able to use it themselves without a lot of external tools. But what does intuitive design mean for different industries, and at this point in time?

With this question in mind, and under the aspect of skeuomorphism and neumorphism, I discussed the music industry with a hobby music producer. Indeed, this very industry is characterized by skeuomorphic design elements in digital music production programs, mainly in plugins.

Music production is, of course, a field that is very hardware-related. It therefore made sense to take a skeuomorphic design approach in digital music production programs and represent digitally, often almost 1:1, how things work in analog. The target audience for programs such as FL Studio, Ableton, Cubase, or Logic Pro X is not amateurs who have never handled an instrument or a mixing console before. Musicians and producers with a passion for music and composition, and with knowledge of analog instruments and mixing consoles, are the ones who (should) use these programs.

However, this raises a question for me: as the research in my previous blog posts has shown, there has of course always been modernization, other design styles, and other approaches to design, especially at the level of interaction and intuition. Interfaces that we use every day are constantly changing and trying to adapt more and more to the needs of their target audience. However, I could hardly observe this change in digital music production programs, especially in plugins. These interfaces adapted to new technical requirements, but many of them remained almost unchanged in their operation and in the design elements used.

A knob still looks like a knob, a slider like a slider, and a virtual instrument is operated just like in real life, only via digital inputs or buttons. Cables provide the right connections and show how the digital elements would be wired together in an analog setup. Why is that?

For me, as a layman in this field, it is not really intuitive to use, as I partly feel I have to learn the instrument or mixing console first to be able to use the interface. Without a minimum of knowledge in this industry, or a lot of trial and error, I cannot make sense of the instruments or of the design elements based on the mixing console. In the interview with the hobby music producer, we talked about exactly this, and in some areas he simply could not imagine any design solution other than the current one, because it is intuitive for musicians and producers, even if a layman cannot understand it at first sight. For example, there are still ten elements next to each other, as on a real mixing console, rather than a flat dropdown menu for selecting the individual elements you want.

Intuitive does not mean the same thing in all industries; it depends on existing prior knowledge. Other types of intuition also require a different type of interface. However, I think it also has a lot to do with "never change a running system", which would explain the little change in music production programs and their plugins.

There are some changes going on and some other approaches, which I will talk about in my next blog entry. I am not interested in showing or telling what is better or more intuitive, because that is subjective. I want to explore how things are and have changed, how they could be different, and how other approaches affect people in the music business as well as laypeople.

Sources:

Links:

FL Studio: (30.01.2021)
https://www.image-line.com

Logic Pro X: (30.01.2021)
https://www.apple.com/de/logic-pro/

Cubase: (30.01.2021)
https://new.steinberg.net/de/cubase/

Ableton:(30.01.2021)
https://www.ableton.com/de/live/


Images:

Featured Image: Logic Pro X
https://www.apple.com/de/logic-pro/

Image01: FL Studio:
https://www.image-line.com

Image02: Plugin – Guitar-Amplifiers – Standalone / VST:
https://www.thomann.de/at/positive_grid_bias_fx_2_professional.htm

Image03:Plugin Sylenth1:
https://lennardigital.com/sylenth1/

A small insight into a few VR/ AR/ MR headsets, glasses and other things

We still do not know 100% what effects VR, for example, can have on our health or our brain in the long term. An MRI of the brain while using VR cannot be done, because the head has to be kept still, which turns out to be rather difficult in this case. Using VR has many good aspects: it can help overcome trauma, evoke a realistic level of empathy, reduce pain, or even cure phobias. However, stimulus overload can, in the worst case, also create a new trauma. The warnings, prompts to take breaks, age limits, and documents that must be signed before using VR must not be ignored or dismissed as overcautious under any circumstances. This is precisely why it is important to consider in advance which technology to use for which purpose and whether a positive goal can be achieved with it.

In the process of my research, I was mainly interested in headsets, glasses, or other devices that are relatively easy to put on, as safe as possible to use, and that perhaps do not completely exclude your own real environment, or are at least easy to take off. Which technology, headset, or glasses will end up being the best fit for my project will become clear over time. So here is a small selection of technologies that are interesting to me at the moment:

The Mixed-Reality-Headset Microsoft Hololens 2

  • This headset is put on and tightened with the help of a knob and a headband
  • Eye tracking
  • The headset does not need to be taken off because the visor can be folded upwards
  • Display does not have to be precisely aligned with the eyes to work due to the technology used (laser, mirror, waveguides) in the glasses
  • Not yet immersive enough for the normal consumer
MR Headset Microsoft Hololens 2

The first standalone mixed reality glasses Lynx-R1

  • Does not require external tracking sensors or other devices
  • Optical hand tracking
  • Digital elements can be added to a real-time filmed environment by two cameras on the display
  • VR and AR at the same time
  • Multiple cameras are used to film the environment
MR glasses Lynx-R1

Small VR glasses from Panasonic

  • Ordinary, commercially available VR glasses are much bigger and bulkier
  • Stereo speakers are integrated
  • Is put on like a normal pair of glasses 
  • Spatial tracking
  • Positional tracking through inside-out tracking cameras by tracking the position of the head mounted display and that of the controller
 Panasonic's "VR Glasses"

AR glasses Rokid Vision 2

  • Must be connected by cable to a smartphone, laptop, or tablet
  • Is put on like a normal pair of glasses
  • Has speakers for stereo sound
  • The glasses will be operated by voice control
  • There are specially developed scenarios, such as a Fantasy World: an immersive space in which the user can interact with the world through head, gesture, or voice control
  • The user can move freely in the virtual space through room tracking
  • No word yet on when it will hit the market
AR glasses Rokid Vision 2

VR Arcade Game from The VOID or Sandbox VR

The VOID and Sandbox VR are actually both on the verge of going out of business. Due to the Corona crisis, all arcades had to close, and at The VOID, Disney withdrew several important licenses, such as Star Wars, because the company could not pay for them due to expensive equipment and the associated debts. Still, the concept behind it is very exciting. Here are a few key points about The VOID:

  • Through a headset, motion capture cameras, 3D precision body tracking, haptic suits, and props like flashlights or blasters meant to represent weapons, one can explore the game in the physical environment and interact with a virtual world simultaneously
  • Fully immersive through VR and at the same time physical in the game by making virtual objects resemble physical objects
  • So you are immersed in the game as the main character, and depending on the virtual world and story, you have to complete certain tasks as a team
The VOID Trailer

Sources

  1. Microsoft’s Hololens 2: A $3,500 Mixed Reality Headset for the factory, not the living room, Dieter Bohn (24.2.2019), https://www.theverge.com/2019/2/24/18235460/microsoft-hololens-2-price-specs-mixed-reality-ar-vr-business-work-features-mwc-2019
  2. Lynx-R1: Erste autarke Mixed-Reality-Brille vorgestellt, Tomislav Bezmalinovic (4.2.2020), https://mixed.de/lynx-r1-erste-autarke-mixed-reality-brille-vorgestellt/
  3. CES 2021: Panasonic zeigt extra-schlanke VR-Brille, Tomislav Bezmalinovic (12.1.2021), https://mixed.de/ces-2021-panasonic-zeigt-extra-schlanke-vr-brille/
  4. Rokid Vision 2: AR-Brille kommt in neuem Design, Tomislav Bezmalinovic (15.1.2021), https://mixed.de/rokid-vision-2-ar-brille-kommt-in-neuem-design/
  5. The VOID (2021), http://www.thevoid.com/what-is-the-void/
  6. The Void: Highend-VR-Arcade steht vor dem Aus, Tomislav Bezmalinovic (17.11.2020), https://mixed.de/the-void-highend-vr-arcade-steht-vor-dem-aus/
Haptics - Feedback - Skin sensitivity

How do we perceive haptics?

Haptic perception is quite similar to auditory perception. Haptic feedback consists of frequencies that are perceived through the skin. Human skin responds to frequencies between 0.4 and 1000 Hz, and it is particularly sensitive in the range between 300 and 400 Hz.
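The perception ranges described above can be expressed as a small sketch; the classification labels are my own simplification.

```python
# Small sketch of the perception ranges above: skin responds to
# vibrations roughly between 0.4 and 1000 Hz and is most sensitive
# around 300-400 Hz. The labels are my own simplification.

def perceptibility(freq_hz: float) -> str:
    if not 0.4 <= freq_hz <= 1000:
        return "imperceptible"
    if 300 <= freq_hz <= 400:
        return "most sensitive"
    return "perceptible"

print(perceptibility(350))   # most sensitive
print(perceptibility(2000))  # imperceptible
```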

» At a location with high sensitivity a vibration can be of lower intensity to get the same perception of intensity. « 

Rovers, A.F., van Essen, H.A., n.d. Using Active Haptic Feedback in Everyday Products.

This graphic shows the most sensitive body regions:

https://journals.plos.org/plosone/article/figure?id=10.1371/journal.pone.0031203.g002

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0031203

Jones, L.A., (2018). Haptics, The MIT Press essential knowledge series. The MIT Press, Cambridge, Massachusetts.

Alexander Moser
https://www.alexander-moser.at/

Science centers' strategies for learning and engagement

Introduction:

As we saw in the previous articles, one of the roles of science centers is to introduce participatory experiences and provide effective learning content and techniques. While traditional museums emphasize static displays of objects and artifacts, science centers have followed the more dynamic philosophy of the Chinese proverb: "I hear and I forget, I see and I remember, I do and I understand" [1]. Since learning is a complex concept, we will try here to explain it, as well as the strategies used by science centers to address this goal.

Learning:

Learning is a dynamic process in which the learner uses sensory inputs and constructs meaning out of them. It is what people do when they want to make sense of the world around them. It may involve an increase in skills, knowledge, understanding, values, feelings, attitudes, and the capacity to reflect. Effective learning leads to change, development, and the desire to learn more.

People learn to learn as they learn: learning consists both of constructing meaning and constructing systems of meaning. The crucial act of constructing meaning is mental; it happens in the mind. Physical actions such as hands-on experience may be necessary for effective learning, especially for children. However, they are not sufficient: we need to provide activities that engage the mind as well as the hands.

Learning is a social activity: our learning is intimately associated with our connections to other human beings, our teachers, our peers, our families, as well as casual acquaintances, including the people in front of us or next to us at an exhibit. Learning is contextual: we do not learn isolated facts and theories in some abstract, ethereal land of the mind separate from the rest of our lives; we learn in relation to what else we know, what we believe, our prejudices and our fears [2].

Learning is divided into three categories:

  • Formal learning: school experiences led by a teacher or staff; might involve worksheets, is often passive and may involve assessment
  • Self-directed learning: led by the learner when they are interested in a subject or motivated by a specific need (a school project, a vocational interest)
  • Informal learning: unplanned, casual encounters that lead to new insights, ideas or conversations. The types of learning introduced in a museum setting are closely related to how well visitors understand and regulate their own thinking processes, as summarized by the following model.

Learning takes time: the four stages of the learning model:

  • Stage 1: Self-Awareness – Don’t know that you don’t know

This is the first stage of learning. The individual doesn’t understand or know how to do something and does not necessarily recognize the deficit. The length of time an individual spends in this stage depends on the strength of the stimulus to learn. You don’t know where you are or what you are doing.

  • Stage 2: Self-Appreciation – Know that you don’t know

The learner doesn’t understand or know how to do something, but recognizes the deficit. This is the most difficult stage, and it is where learning begins. A lot of mistakes will be made during this learning process.

  • Stage 3: Self-Engagement – Know about it, but you have to think about it

The individual understands or knows how to do something. However, demonstrating the skill or knowledge requires concentration and effort. This stage is easier than the previous one but still requires concentration.

  • Stage 4: Self-directed learning – Know it so well you don’t have to think about it

The individual has had a lot of practice with a skill, which has become second nature and can be performed easily. They may be able to teach it to other people, depending on how and when it was learned.

The Exploratorium’s model for learning

Science and children’s museums have followed the learning strategy model of the Exploratorium because it puts the visitor in a very active role as a learner: experimenting, hypothesizing, interpreting and drawing conclusions. This model integrates four important aspects of the learning environment: immediate apprehendability, physical interactivity, conceptual coherence and diversity of learning modes [3].

  • immediate apprehendability: the capacity to create effortless backdrops. The aim is to limit cognitive overload, also known as museum fatigue. Shettle found that the average visitor views an exhibit unit for 20 seconds and tours a complete exhibition for a maximum of 14 minutes, which means that science centers can hold a viewer’s attention only for a very limited time. To capitalize on that time, it is important not to require the reading of extensive text or concentration on visual aids that would try the patience of the average viewer [1]. This concept is close to the idea of affordance defined by Donald Norman.
  • physical interactivity: research on visitor learning in museums suggests that interactivity promotes engagement, understanding and recall of exhibits. Studies at the Exploratorium identified five common pitfalls in designing exhibits with high levels of interactivity or multiple interactive features, among them: multiple options with equal salience can overwhelm visitors, simultaneous use by multiple visitors can lead to disruption, and interactivity can disrupt the comprehension of the phenomenon.
  • conceptual coherence: one of the main goals of science centers is to give visitors the big picture around a subject. They use various techniques to make abstract concepts and themes more apparent to visitors. Achieving a high level of thematic clarity may be particularly difficult in an open exhibition environment.
  • diversity of learning modes

Howard Gardner developed a theory on the dissimilar ways in which individuals learn and process information, which he called the theory of multiple intelligences. According to Gardner’s theory, visitors may show well-developed learning skills in any of seven different style categories, as summarized in the following tables:

Through these different categorisations of learning profiles, Dawson tried to show how a museum’s communication of meaning affects these different types of learners:

Visitor Engagement

The concepts of visitor involvement and participatory exhibits have undergone some basic changes in recent years as a result of museum research on viewer attention span and of non-museum research on cognitive and affective processes. Participatory exhibits actively involve visitors in discovering information through their own participation in the demonstration process. Successful participatory learning devices are those that allow manipulation, experimentation and variation. For an instructionally efficient and effective exhibit, some feedback loop between the person and the object appears to be necessary [1].

In the video below, Nina Simon explains a few rules and best-practice examples of good visitor engagement. She also explains why affordance is particularly important when designing an exhibition.

The role of museums in lifelong learning

Lifelong learning is the ability to constantly update and expand your knowledge in a variety of fields. It helps you to survive, to engage, and to shape your vision of the world. Lifelong learning comprises all phases of learning, from preschool to post-retirement. Museums play a part in this learning and offer content for all age groups.

A literate person is one who has all the abilities needed to engage deeply with a specific topic. For this, she needs three components:

  • knowledge: about the specific topic
  • skills: to carry out the tasks or to apply the knowledge
  • volition: the will to engage and do something

Scientific literacy: a person who has the will to engage in current discourse about science and technology, which requires the competences to explain phenomena scientifically (knowledge), to evaluate and design scientific enquiries (skills), and to interpret data and evidence scientifically (skills and knowledge).

A visitor who really uses a museum’s content to its full extent is called a museum-literate person.

The eight dimensions of museum literacy:

  1. curiosity, motivation and volition = the interest and will to do something inside the museum
  2. information-processing competence = the skills to use the information that is presented
  3. social competence = being able to interact with the museum staff or with other visitors
  4. emotional competencies = self-regulation on the one hand, and allowing feelings during a museum exhibit on the other
  5. prior knowledge of a subject
  6. visual literacy = the ability to interpret signs and images
  7. location and behavior competence = the ability to orient oneself in a museum and manoeuvre through its different offers
  8. appreciation of the exhibit = valuing the objects of our cultural heritage

Application of those principles at the Dargis museum of Munich.

Conclusion:

The museum is in a position to decide which techniques and approaches are utilized with respect to specific communication goals. In order to learn, a visitor first has to be motivated, which is usually the case since visitors choose to go to science centers. Exhibition designers must then pay attention to providing immediate apprehendability, physical interactivity and conceptual coherence, and to addressing the multiple learning modes available through the use of different communication devices. It is always useful to evaluate visitors’ impressions after an exhibition in terms of learning and enjoyment, to judge whether the overall exhibition design experience was successful or not.

Sources:

[1] Kimche, L. (1978). Science centers: a potential for learning. Science 199, 270–273.

[2] Ahmad, S., Abbas, M.Y., Taib, Mohd.Z.Mohd., and Masri, M. (2014). Museum Exhibition Design: Communication of Meaning and the Shaping of Knowledge. Procedia – Social and Behavioral Sciences 153, 254–265.

[3] Allen, S. (2004). Designs for learning: Studying science museum exhibits that do more than entertain. Sci. Ed. 88, S17–S33.

[4] TED talks

New Features on Instagram

Instagram Stories

Briefly explained: Instagram Stories are short pieces of visual content in the form of images or videos. They disappear 24 hours after publication and are no longer visible to the platform’s users, except for their creator. Once a story expires, it is saved to an archive that only the account owner can view. From there, the owner can revisit all of their past posts at any time.

However, stories can be saved as so-called Highlights. Other users can view these Highlights on the creator’s profile at any time, making stories visible to others for longer.

Instagram primarily shows you the stories of users you interact with the most. Here in particular, Instagram aims to suggest the most recent content in order to keep users up to date with their favourite accounts.

If you interact with an account regularly, its stories are shown to you with priority. Even if you have already watched its latest stories, Instagram may still suggest that account.

That is why posting stories matters: they can significantly increase your influence on your followers, since you are shown to them more often. This in turn raises the chance that other users visit your profile, which improves your ranking in the algorithm.

Explore Page

On the Explore page, users are shown suggested content that Instagram believes reflects their current interests. Most of these posts come from profiles the user does not yet follow, unlike the Instagram feed, which shows posts from accounts the user follows. In this way, Instagram aims to encourage and trigger interactions.

Instagram Explore Page

To get your own posts onto the Explore page, you mainly need to hit topics that are currently trending among users. Strong captions and well-chosen hashtags work best for this. Hashtags primarily serve to categorise posts, making it easier for Instagram to assign them to certain topics.

If your posts then receive a lot of engagement, the probability of landing on the Explore page is no longer so small.

IGTV (Instagram TV) & Reels

Both of these formats are videos shown to users. While Reels are short videos comparable to TikTok videos, the videos posted on IGTV are long.

Both formats can also be suggested on the Explore page, and thanks to their moving format they usually keep users watching for longer.

IGTV videos are generally viewable on the user’s profile. However, you can create a one-minute preview that can also be posted in the feed. This draws more users’ attention to the video and gives it an extra push. As mentioned above, IGTV videos can be longer; for example, live videos can later be saved as IGTV videos and, like Highlights, remain viewable for other users after the live broadcast.

You can also go live on Instagram, via the Instagram story. Viewers can comment and ask questions, which can then be answered live in the video.

Since Reels are very short, they can be posted directly in your Instagram feed. Clicking on a Reel opens a Reel preview in which you can scroll down endlessly, just as on TikTok.

Instagram Reel