VR as Assistance for People with Prosopagnosia

The biggest motivation behind my decision to study design is to help people who are struggling with difficult life situations. Design makes it possible to support people, regardless of the medium. That is why my chosen topic revolves around the condition prosopagnosia, or face blindness, in combination with the use of VR. Prosopagnosia is a condition in which an affected person cannot recognise faces and therefore cannot identify other people.

Prosopagnosia

To go a little deeper into this condition: about 2 percent of the population have prosopagnosia, or face blindness. Because of this hereditary condition they cannot even recognise the faces of their own family members. To live as normally as possible, they rely on certain tricks: to cover up the condition, they sometimes use their cell phone as a safety mechanism or pretend to be deeply engaged in a book. In the worst case, it can lead to social anxiety. Many affected people also memorise certain characteristics of the person in front of them, for example facial features, jewelry, height, skin colour, body movement or weight. However, if a person has no distinctive characteristics, memorising things like hair colour, hair length or the shape of the mouth or nose does not really work, because many people resemble a certain type.

Affected people say that they are afraid to tell others about their condition. The reason is that they do not recognise others on the street and fear that this comes across as unfriendly or arrogant. If people with prosopagnosia are away from their familiar surroundings for a long time, they may at first not even recognise people they know well, because the features they learned become blurred in their heads. So they are constantly wearing a mask to cover up their situation, to not come across as rude and to not hurt anyone, because nobody wants to be forgotten. Being forgotten makes people question their own worth, and it makes them question the worth of the person with the condition, who seemingly could not even take the time to remember their face.

Best Cases

The vision I have for this topic is to help in some way. After I started thinking about how to approach it and which technology could be of help, I kept coming back to virtual reality. Nowadays, the use of VR or AR is becoming more and more popular in schools, at work and even in medicine. To mention a few companies that are interested in helping people with such conditions: one start-up, for example, wants to help people with prosopagnosia. It has developed an AR app that, in combination with smart glasses, reminds the wearer who the person in front of them is.

Augmented Reality To Tackle Face-Blindness (Start-up Social Recall)

Another company uses VR to immerse people with disabilities, who are limited in their movements or can only perform certain actions, in a virtual world. Through this gamification they automatically make larger movements than they otherwise would. This is physically beneficial to them, and they have fun because they can perform certain tasks and are therefore focused on solving them.

Virtual reality as a new therapy for patients with strokes (Start-up rewellio)

And there are many more companies and ideas like these that aim to help people. One company focuses on supporting people who have suffered a stroke; learning and practicing with VR or AR helps them remember the information much more easily and much faster.

My Vision

The problem I want to solve, and the vision I have, is to find a way or develop something that helps this group of people and provides assistance from the beginning. In a safe virtual environment, and with the help of this technology, the patient is not overwhelmed but can get used to the new situation at their own pace. Other advantages would be that their self-confidence increases and that the patient is perhaps a little better prepared to deal with the new situation in reality. I think these technologies have a lot of advantages and can and should be used to help in the best way possible.

This is a really well-produced short film I found about prosopagnosia:

Sources

  1. Die Zukunft der Arbeit/ AR und VR Trends für 2019 laut Experten, Vicki Lau (07.02.19), https://www.valamis.com/de/blog/die-zukunft-der-arbeit-ar-und-vr
  2. Mit VR gegen die Angst/ Invirto bietet App-Therapie und Virtual Reality, Julius Beineke (03.02.20), https://t3n.de/news/vr-gegen-angst-invirto-bietet-1248233/
  3. SocialRecall – Augmented Reality To Tackle Face-Blindness, Rebecca Hills-Duty (14.09.18), www.vrfocus.com/2018/09/augmented-reality-to-tackle-face-blindness/
  4. rewellio – Virtual Reality als neue Therapie für Schlaganfall-Patienten, Andrea Losa (27.12.19), https://futurezone.at/start-ups/virtual-reality-als-neue-therapie-fuer-schlaganfall-patienten/400705860
  5. WalkinVR – Virtual Reality for People with Disabilities, WalkinVRDriver (o.D.), https://www.walkinvrdriver.com
  6. TED Talk: VR Therapy, Unlocking the Potential of VR, Brian Boyle (22.10.19), http://www.youtube.com/watch?v=qxd-ppIDfjw
  7. Galileo: Mein Leben als Gesichtsblinde, Galileo (01.10.16), http://www.youtube.com/watch?v=bDGTKQAKHKY
  8. Prosopagnosia / Thriller Short Film, Hugo Keijzer (01.01.15), vimeo.com/129415238 
  9. TED Talk: How being Face-Blind made it easier to see people, Fleassy Malay (16.02.19), https://www.youtube.com/watch?v=q3sZaoPQSc4

History of Synaesthesia in the Audiovisual Sphere

Synaesthesia is one of those phenomena that has always been quite interesting to artists, yet it has never been explored as much as it should be. During my research, I was quite intrigued to find out that even Plato referred to synaesthesia in his writing: around 370 BC he wrote Timaeus, which connects the world’s essence with musical ratios. After that, Aristotle compared the harmony of sounds with the harmony of colours. After the chromatic scale was introduced, Franchino Gaffurio proposed a colour system for the scales in 1492, in which Dorian was crystalline, Phrygian orange, Lydian red and Mixolydian an undefined mixed colour. In 1646 Athanasius Kircher developed a colour system for musical intervals based on symbolism.

Sir Isaac Newton also delved into the relationship between music and colour. In his work Opticks, he revisited and expanded upon Aristotle’s relationships between sound and colour. He mathematically divided visible light into seven different colours, which had similar mathematical relationships to the notes of a musical scale. While Newton did not present these claims as scientific truth but rather as analogies, Louis Bertrand Castel firmly believed that the connection between light and the musical scale was a fact. He drafted a sketch prototype for a “clavecin oculaire”, an instrument that would produce the “correct” colour with each note played.

In 1875, the first colour organ was built by the American inventor Bainbridge Bishop. The organ worked by projecting coloured light onto a screen while it was being played. Unfortunately, all three of Bishop’s organs were later lost to fire.

Vincent van Gogh is a notable artist with synaesthetic abilities. His letters to his brother often mention synaesthetic experiences, which support this claim. In those letters he writes that certain colours, like yellow and blue, were like fireworks to his senses. Van Gogh probably had chromesthesia and essentially painted sounds the way he saw them. Yellow gave him an experience of joy, a song of hope that he otherwise felt was missing in his life.

Synaesthesia was a big subject of interest for certain Bauhaus artists as well, most notably Gertrud Grunow, who was a Bauhaus master and teacher from 1919 onwards. She was interested in incorporating movement and music into visual art, which in her words opened “new, original ways of experiencing”. She put a focus on instinctive and emotional expression. Grunow started developing a new style of curriculum but sadly died before she was able to finish it. Today it is hard to establish which thoughts in the curriculum were hers, as it was finished by her assistant, who was very good at mimicking her writing style. Grunow fostered the interest in synaesthesia in Gropius, Kandinsky and Itten.

Kandinsky’s works were all named as if they were musical compositions; he called them a “Symphony of Colours”. Just like Van Gogh, he found yellow to be particularly important. He also created an experimental, performance-based expression of synaesthesia for the theatre, titled “The Yellow Sound”.

Today, more and more synaesthetic works are being made with modern techniques like VR, 3D animation and similar tools. Even though interest in this field is steady, synaesthetic audiovisual experiences are not yet part of mainstream media. Personally, I believe this field needs a lot more attention and research, as it is still not entirely understood.

Intercultural Communications 5

In this post I would like to talk about an Austrian animation, analyze its technique and compare it with the Iranian animation from my previous post. Both were made roughly seven years ago, around the same time.

brats (Extended Play) | Alexander Hengl | 00:10:00 | 2013

From the website http://www.asifa.at/bestaustriananimation/

The „brats“ seem like a tribe of excited and sexless creatures connected through a collective spirit. Instead of using the classic storytelling format, artist/musician Alexander Hengl, member of „theclosing“, offers impressions that trigger subliminal emotions with this music-driven short.

Animation, Music, Editing: Alexander Hengl
Produced at: Academy of Fine Arts in Vienna – www.akbild.ac.at 
http://alexanderhengl.theclosing.net | www.lichenisierung.net

The philosophy of thinking about and analyzing individuals comes from Europe. Austria is especially famous for it because of its famous psychologist, Freud.

As a Middle Easterner, it was also a big culture shock for me to see how people analyze individuals, especially themselves. In Middle Eastern culture it is not common to say the word “I” or to use the sentence “I want something” very often. Things are mostly expressed through the group, or by asking the others whether they also want the same thing, and based on the result we say “we wanted” or sometimes “I wanted”.

This animation also focuses on individuals, showing them in a group and then again as individuals, with their feelings, fears, characters and so on.

The Iranian animation from the previous post, however, was mostly based on a narrative about a family and how dependent its members are on one another. It did not analyze each individual very closely.

In the Iranian animation, grey was also used, but in combination with red, blue and other lively traditional colours that are also used in Persian carpets. The sound design relied mostly on traditional Kurdish music.

Both animations deal with personal questions, relationships among people in society, and different events and their effects on people, yet in two very different styles. As the audience, we can understand the message behind both of them and learn something new about another culture, its way of thinking and its perception of art and life.

The Development of Music in Advertising

A study of sound design in relation to advertising impact, 1982–2018, using Nike as an example

For the advertising examples below, I used a total of three different commercials: what appears to be Nike’s first TV commercial from 1982, a recent spot for Nike’s Zoom Pegasus Turbo shoe from 2018, and finally a Nike Air commercial from 2017. The spots were selected mainly on the basis of the subjectively perceived congruence of image and sound. My goal in this exercise was to lay the music of the old 1982 spot over the images of the new commercial and vice versa. This is meant to illustrate how the choice of music and the sound design have changed over the years and how they affect people today. In my opinion, it becomes apparent that the new music has the potential to upgrade the old video, whereas the reverse has almost the opposite effect.

Finally, there are two quotes from a book by Sieglerschmidt (2008), which describe the congruence between advertising and the consumer as well as the membership of certain population groups as a target audience.

Nike’s first television commercial (1982) – New Music:

Nike Zoom Pegasus Turbo (2018) – Old Music:

Nike’s first television commercial – 1982 (Original):

Nike Zoom Pegasus Turbo 2018 (Original):

Nike’s first television Commercial 1982 – New Music (Nike Air 2017):

Nike Air Max 2017 Commercial:

“Different types of congruence may differ in their effect because they are perceived with different intensity. Relevant activation theories point to the importance of a stimulus’s strength for its effect (cf. Felser, 2001, p. 385f.). If congruence is regarded as a stimulus, this would mean: the more clearly a congruence becomes apparent in the eyes of a recipient, the stronger its potential influence on the processing of advertising content. Gunter et al. (2002) make corresponding considerations. They suspect that a stylistic congruence of advertisement and context may be insufficient on its own to produce an interference effect.” – Sieglerschmidt, Sebastian: Werbung im thematisch passenden Medienkontext: Theoretische Grundlagen und empirische Befunde am Beispiel von Fernsehwerbung. 2008. Web. p. 113

“Membership of a social class (Mayer & Illmann, 2000, p. 611ff.). However, these variables are highly correlated with one another, so that a clear influence of individual factors is difficult to prove (Steffenhagen, 2000, p. 19). A relatively well-established relationship exists between disposable income and media use, which in turn influences the effectiveness of advertising: the lower the disposable income, the more intensive the TV consumption (cf. e.g. Arbeitsgemeinschaft Fernsehforschung [AGF], 2002). However, this is not a factor that causes differences in the effect of a single advertisement; it influences the total amount of advertising messages received. In contrast, class-specific differences in the search for information do affect the impact of an individual advertisement; the search for product information is more intensive in higher social classes (Mayer & Illmann, 2000, p. 617). There are also differences between income groups in advertising-avoidance behaviour: individuals with higher incomes make a greater effort to avoid advertising (Speck & Elliott, 1997, p. 64).” – Sieglerschmidt, Sebastian: Werbung im thematisch passenden Medienkontext: Theoretische Grundlagen und empirische Befunde am Beispiel von Fernsehwerbung. 2008. Web. p. 34

Theremin

Since I wrote about the terpsitone last week, I find it interesting to also talk about the instrument it originated from: the theremin.

It is not only one of the oldest electronic musical instruments, but also the first musical instrument that is not even touched while being played.

Although similar signal generators existed elsewhere, this sound generator was the first electronic musical instrument based on high-frequency oscillating circuits to find widespread approval.

The theremin has been around since the 1920s. After its inventor Leon Theremin (Lev Sergeyevich Termen) had finished his military service, he worked at the Physico-Technical Institute in Petrograd, where he developed a device for measuring the electrical resistance of gases.

Because this measuring device works on the heterodyne principle, sounds can be produced and modulated by the approach of the human body. Thus the theremin was born. Leon Theremin gave a small concert with his new device; “Termen plays the voltmeter”, his colleagues joked about it. 🙂

In October 1921 he played for Lenin, who made sure that Leon could present his invention all over the Soviet Union in order to promote the “electrification” of the country.


In 1926 Termen received permission to present his instrument abroad as well. The first public demonstration in Germany took place in autumn 1927 as part of the International Music Exhibition in Frankfurt; further appearances followed in Berlin, Paris and London.

With the disappearance of its inventor, who was held captive in the Soviet Union for several decades from 1938 on, musicians and composers largely lost interest in the instrument. In the 1950s, however, it was able to conquer niches in film music and among hobbyists.

The theremin works on the principle of a capacitive distance sensor. The player’s hand, which acts as a ground through its own body mass, changes the LC resonant circuit of an oscillator via the respective electrode (“antenna”): it influences both the frequency and the quality factor of the circuit by altering its capacitive component and its damping.

High-frequency components are removed by a low-pass filter. The signal can then be amplified and played back through a loudspeaker.

For the volume control, only a single oscillating circuit with variable frequency is used. A band-pass filter causes the level of the signal to change depending on the frequency, and an envelope detector then generates the control signal for the amplifier.
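To make the heterodyne principle behind the pitch circuit a bit more tangible, here is a minimal sketch of my own (not from the sources above, and deliberately simplified): two radio-frequency oscillators are compared, and only their audible difference frequency is kept, which is what the low-pass filter of a real theremin does.

```python
# A minimal, simplified sketch of the heterodyne principle (my own illustration):
# the audible pitch is the difference between a fixed and a hand-detuned oscillator.
import numpy as np

SAMPLE_RATE = 44_100      # audio sample rate in Hz
FIXED_OSC = 500_000.0     # fixed reference oscillator frequency in Hz (assumed value)

def pitch_from_hand(detune_hz: float, duration: float = 1.0) -> np.ndarray:
    """Return the audible beat tone for a given detuning caused by the hand.

    `detune_hz` stands in for the tiny capacitance change the player's hand
    introduces into the variable oscillator.
    """
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    variable_osc = FIXED_OSC + detune_hz
    # Mixing both RF signals yields sum and difference frequencies; a real
    # theremin's low-pass filter keeps only the audible difference, so we
    # synthesize that difference tone directly here.
    difference = abs(variable_osc - FIXED_OSC)
    return np.sin(2 * np.pi * difference * t)

# Example: a detuning of 440 Hz produces a concert-pitch A.
tone = pitch_from_hand(440.0)
```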

The electronics enthusiast Robert Moog built his first theremin in the 1950s, while he was still in high school.

Because the theremin has been used in film scores again and again, for example in Spellbound (Alfred Hitchcock, 1945), The Lost Weekend (Billy Wilder, 1945), The Day the Earth Stood Still (Robert Wise, 1951), Ed Wood (Tim Burton, 1994) and Mars Attacks! (Tim Burton, 1996), it is mostly associated with “alien”, surreal and ghostly sounds, glissandi, tremolos and vibratos.

It is also widely used in various kinds of music, such as pop, jazz, rock and others.

What is Immersion?

In connection with virtual reality, the term “immersion” comes up frequently; it can also be used to describe characteristics of 360-degree videos and virtual tours. This post is therefore meant to explain what immersion means and why such content can feel particularly immersive.

Immersion means being absorbed into the contents of a medium, during which fiction is accepted. © Der_Typ_von_nebenan / pixabay.de

Mental Immersion vs. Physical Immersion

The term immersion describes being absorbed into the contents of a medium. It can be evoked both mentally and physically, which is why a distinction should be made first:

Mental Immersion

Most people know mental immersion from personal experience: it occurs when a recipient becomes deeply absorbed in a story and mentally dives into it. We experience it, for example, when reading a gripping book or watching a captivating film. It describes a state in which the user feels deep engagement, is highly involved and is willing to accept fiction (“suspension of disbelief”).

Physical Immersion

Immersion in 360-degree media and in virtual reality (for a more precise differentiation of the two terms, see the article “Sind 360-Grad-Videos Virtual Reality?”) can go even further and take place not only mentally but also bodily, which is then called physical immersion. A high level of physical immersion arises when input and output devices are used that address as many of the user’s senses as possible in a realistic way. The HMDs typical of virtual reality (head-mounted displays, colloquially also called VR headsets), for example, support a high level of physical immersion because they block out reality and ensure that the user perceives only the virtual world, no matter where they look. Gloves with haptic feedback, tracking treadmills and other input and output devices further strengthen physical immersion, which in turn has a positive effect on mental immersion.

Immersion and Presence

In the context of immersion, the term presence is also frequently used; it describes the user’s subjective feeling of actually being inside a virtual world. Depending on the degree of presence, the environment is perceived as more or less real by the viewer. This subjective feeling of presence arises when the person mentally dives into the environment, and it can be intensified by a high degree of physical immersion.

What is immersion? © Vertex / pixabay.de

The feeling of presence can be evoked through a place illusion, a plausibility illusion and the involvement of the user.

The place illusion describes the user’s feeling of being in another place, even though they know they are only there virtually and not actually. It is supported above all by (physically) immersive output devices and fostered in particular by three-dimensional presentation techniques and a user-centred point of view. A place illusion can arise, for example, in virtual reality tours that give the user the feeling of actually being at the depicted location.

The plausibility illusion describes the circumstance that the events of the simulated environment are perceived as if they had really happened (despite the certainty that they only take place in a virtual environment). In contrast to the place illusion, which is essentially produced by the manner of presentation, the plausibility illusion is based on the contents of the simulated world. The credibility of the virtual environment appears to be more important than sensory realism; a perfectly modelled virtual human who can only communicate in dull stock phrases, for example, could break this plausibility illusion. Such a break is also called a break in presence and can occur with both the place illusion and the plausibility illusion when the environment does not react as the user expects.

The involvement of the user refers to the degree of attention paid to, or interest in, the simulated world. Similar to the plausibility illusion, involvement is mainly evoked by the contents of the virtual world. A user might, for example, be given the feeling of being at the virtual location thanks to a convincing place illusion, yet still be bored and only slightly involved, so that only a weak feeling of presence arises. Conversely, a user could be highly involved and thus mentally immersed because of the content, while the feeling of presence can still fail to appear because of a missing place illusion.
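To make this interplay a little more tangible, here is a small toy model of my own (not from the cited literature): presence is treated as the product of place illusion, plausibility illusion and involvement, so that the overall feeling collapses as soon as any one component does. The attribute names and the multiplication are illustrative assumptions only.

```python
# A toy model (my own illustration, not from the cited sources) of how the
# three factors described above might combine into a rough presence score.
from dataclasses import dataclass

@dataclass
class VirtualExperience:
    physical_immersion: float   # 0..1, quality of HMD, haptics, tracking
    plausibility: float         # 0..1, how believably the content reacts
    involvement: float          # 0..1, attention and interest in the content

    def place_illusion(self) -> float:
        # Place illusion is mainly driven by (physically) immersive devices.
        return self.physical_immersion

    def presence(self) -> float:
        # All three components have to be present; if any one of them
        # collapses (e.g. a "break in presence"), the overall feeling drops.
        return self.place_illusion() * self.plausibility * self.involvement

# A convincing headset with boring content: high place illusion, low presence.
bored_user = VirtualExperience(physical_immersion=0.9, plausibility=0.8, involvement=0.2)
print(round(bored_user.presence(), 2))  # -> 0.14
```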

Immersive Marketing

While mental immersion and presence can be regarded as psychological phenomena, physical immersion can be understood as a technical and/or bodily phenomenon, since the senses are addressed through physical stimuli by the use of technology.

Finally, the following figure summarizes the terminology around the topic of immersion once more:

The interplay of mental immersion, physical immersion and presence. © Kiani / Berger (2017)

Sources:

  • Sherman, William R. / Craig, Alan B. (2003): Understanding Virtual Reality. Interface, Application, and Design. Morgan Kaufmann Publishers, San Francisco.
  • Dörner, Ralf / Broll, Wolfgang / Grimm, Paul / Jung, Bernhard (eds.): Virtual und Augmented Reality (VR/AR). Grundlagen und Methoden der Virtuellen und Augmentierten Realität. Springer Verlag, Berlin, Heidelberg.
  • Slater, Mel / Wilbur, Sylvia (1997): A framework for immersive virtual environments (FIVE): speculations on the role of presence in virtual environments. In: Presence: Teleoperators and Virtual Environments, 6(6), MIT Press, Cambridge, pp. 603–616.

The Future In-Vehicle Experience

The rapid development of new technologies, in combination with drastically changing consumer preferences, offers great potential for the digitization of the automotive industry. The preferences of the younger, tech-savvy generations are not only changing consumer behaviour but also the relationship between consumers and their vehicles. To meet these needs, automakers are currently prioritizing the in-vehicle experience like never before and are trying to optimize the customer experience across all facets of the car journey. From vehicle entry and ignition to parking at the final destination, everything is going to change during the ongoing digitization of the automotive industry.

source: https://www.lucidmotors.com/media-room

Nobody can anticipate what the industry will look like in 10 to 15 years, but the current industry leaders are able to shape this process. This evolution is currently being shaped by the following disruptive, technology-driven trends.

Autonomous Vehicles

Autonomous or self-driving vehicles are not yet publicly available, but a lot of different car companies are already testing semi-autonomous and autonomous cars on public roads. These self-driving cars will minimize the need for human drivers and transform the everyday mobility of millions of people. Advanced driver assistance systems (ADAS) are already slowly preparing consumers, regulators and insurance companies for the next step.

Driving automation is divided into six different stages (Level 0-5) defined by the SAE levels of automation. At the moment, most publicly available cars are limited to SAE Level 2 automation, where the human driver still has to monitor the environment. With the step to the next level, SAE Level 3 automation, the system is able to monitor the environment independently and the driver becomes a passenger for part of the trip. These semi-autonomous cars will drastically change the interaction between drivers and their cars. The driver’s role will shift from driving the car to supervising the automated driving system until they have to take back control of the steering wheel and pedals.
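As a compact reference, the levels can be written down in a small sketch (my own summary of the SAE classification referenced above; the helper function is illustrative only):

```python
# A compact sketch of the SAE levels of driving automation mentioned above
# and of who is responsible for monitoring the environment at each level.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human does everything
    DRIVER_ASSISTANCE = 1       # single assist feature (e.g. adaptive cruise control)
    PARTIAL_AUTOMATION = 2      # combined assists, driver must monitor constantly
    CONDITIONAL_AUTOMATION = 3  # system monitors, driver must take over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

def driver_must_monitor(level: SAELevel) -> bool:
    """Up to and including Level 2, the human driver has to watch the road."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_monitor(SAELevel.PARTIAL_AUTOMATION))      # True
print(driver_must_monitor(SAELevel.CONDITIONAL_AUTOMATION))  # False
```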

https://www.youtube.com/watch?v=EzylsrXtkxI

Technological challenges like the development of artificial intelligence, machine learning and deep learning will be the key factors for the speed of this transformation, while country-specific regulations and consumer acceptance are additional hurdles for the release of semi-autonomous and autonomous vehicles. By reducing accidents caused by human error or fatigue, self-driving cars will help to improve the safety of individual and public transportation in the short term, lead to more productivity in the mid term and to a much better traffic flow in the long term.

Fully autonomous cars will also allow their drivers to use the transit time during their ride for additional activities. Working while commuting, scrolling through social media on the way to the grocery store or even sleeping during long-distance road trips will become normal in the future.

Connected Cars

Modern cars are much more than a simple means of transportation for getting from point A to point B. For many people, a car is also an extension of themselves, and connected cars are extensions of the customer’s home, school or office. A modern car is equipped with hundreds of sensors and systems that are connected to the internet as well as to the surroundings in order to deliver more convenience to the driver. With these sensors, cars are already part of the Internet of Things and will get more and more connected features in the future. Current functions of connected cars include smartphone apps with the most important car data, automated maintenance planning based on driving habits, and automatic emergency calls after accidents. In the future, they will also be able to communicate with other cars, traffic lights, fuel stations or even toll stations to deliver the most convenient ride possible.

The digitization of the automotive industry will drastically change the way we interact with our vehicles, ranging from a fingerprint sensor on the door for unlocking the car and personalizing the seating position, lighting colour and social media channels, up to a completely immersive entertainment hub with new retail channels. The passenger experience will become the most important feature of connected cars in the future.

Electrification

The electrification of all vehicles is currently the best and most effective way to reduce emissions, and this process will also be the basis for making mobility 100% emission-free in the future. Since fossil fuels are finite and the harm they do to the environment is huge, the European Green Deal aims to make Europe climate-neutral by 2050.

Currently, electric vehicles (EVs) are too expensive and have limited battery capacity and range. The charging infrastructure is not where it should be, and most of the time the charging stations are not powered by renewable energy and are therefore not emission-free either. Falling battery costs and more charging stations open up new possibilities for all electrified vehicles. Electrified vehicles do not only include battery-powered electric cars like a Tesla; they also include hybrid cars, plug-in hybrids and hydrogen fuel cell cars. With their combination of classic combustion engines and electric motors, hybrid cars will make up a large percentage of future vehicle sales and be especially popular with people living in less densely populated areas.
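The drivetrain types mentioned above can be summarized in a short sketch (my own classification for illustration; the attribute names are assumptions, not taken from the cited reports):

```python
# A minimal sketch of the electrified drivetrain types named in this section.
from dataclasses import dataclass

@dataclass
class Drivetrain:
    name: str
    has_combustion_engine: bool
    can_charge_externally: bool
    energy_carrier: str

EV_TYPES = [
    Drivetrain("Battery electric (BEV)",    False, True,  "battery"),
    Drivetrain("Hybrid (HEV)",              True,  False, "fuel + battery"),
    Drivetrain("Plug-in hybrid (PHEV)",     True,  True,  "fuel + battery"),
    Drivetrain("Fuel cell electric (FCEV)", False, False, "hydrogen"),
]

for d in EV_TYPES:
    print(f"{d.name}: combustion engine={d.has_combustion_engine}, "
          f"external charging={d.can_charge_externally}, carrier={d.energy_carrier}")
```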

source: https://ocw.tudelft.nl/course-readings/2-2-2-lecture-notes-types-of-ev/

The biggest growth for all electric vehicles will take place in large or densely populated cities. Megacities like Tokyo or Paris, and cities with strict emission regulations and consumer incentives such as lower taxes or special parking and driving privileges, will profit more from the electrification of the automotive industry than rural areas.

Shared Mobility

Shared mobility is not really a new concept; the first car sharing was already offered back in the 1940s in Switzerland. But the tech-enabled shared mobility we know today only started a few years ago. It is an umbrella term for all services in which either the vehicle itself is shared, as in classic car sharing, or just the ride is shared, as in public transportation, taxis, carpooling, ride sharing and ride hailing.

Shared mobility will become a viable alternative to classic vehicle ownership in the future, especially in big cities. If people stop paying for the vehicle itself and start paying for mobility instead, they will also be able to choose the best solution for every specific purpose rather than using their cars as all-purpose vehicles. Everything will depend on the purpose: if you just want to commute a few kilometres to your office on a sunny summer day, you may only need an e-scooter or a small car, while you may need a bigger car for a shopping trip with your whole family. This focus on purpose may also lead to the development of vehicles optimized for a single specific task in order to fully meet the needs of consumers.

source: McKinsey – Auto Report 2030 (2016)

When self-driving cars become available to the public, shared mobility will be a key driver for the growth of the automotive industry. There are already early signs that the importance of car ownership in megacities is declining in favour of shared mobility. Owning a car in a big city has many disadvantages, from limited and more expensive parking to higher congestion fees and more traffic jams; everything gets more complicated when owning and using a personal car in a densely populated area. While people living in rural areas will still prefer owning a private car, this transformation opens up huge potential for completely new business models and new players in the industry.

Shared Autonomous Electric Vehicles (SAEV)

Shared autonomous vehicles will have the largest disruptive potential by simply combining all four trends into one concept. From the customer’s point of view, SAEVs will be more flexible and more personalized while emitting less exhaust and noise into the environment. They will also cost less time, because they move fully autonomously and passengers can do whatever they want during the ride. Shared autonomous vehicles will also be more accessible for people with physical disabilities or people without a driver’s licence, and more affordable, because you only pay for mobility when you really need it.

source: https://www.rubbernews.com/article/20181227/NEWS/181229990/for-autonomous-vehicles-to-gain-passengers-trust-communication-is-key

Conclusion

The future in-vehicle experience offers huge potential not only for automakers and industry leaders; it is also a great opportunity for UI and UX designers. While the in-vehicle user experience had been quite static for decades, automotive interaction designers have already fundamentally changed in-car experiences in the last few years. Automotive interaction designers will be the bridge between highly advanced technologies and the passengers of the future. Enabling passengers to access these technologies in a user-friendly and simple way will be the biggest challenge for interaction designers in the automotive industry. Possible research questions for interaction designers could include, but are not limited to:

  • How can autonomous vehicles (SAE Level 3-5) gain the trust of the passengers?
  • Is the current usage of ADAS in SAE Level 2 cars helping to gain trust in the systems and preparing passengers, insurance companies and regulators for the next step?
  • How is the role of the driver changing during the development of more advanced cars with more and more self-driving features?
  • How will we interact with our cars in 10 years?
  • How can we adapt and optimize the in-car experiences to the needs of different drivers and passengers?
  • Is the usage of touchscreens safer and better than the usage of classical knobs for different purposes?
  • What will the in-vehicle infotainment of the future look like and what will it offer to the different passengers?
  • What does the most convenient ride look like for different target groups?
  • How immersive is the future in-vehicle experience going to be?
  • Does the usage of extended reality in cars add value for the passengers?
  • Is artificial intelligence going to affect the design of the in-vehicle infotainment systems and how the passengers are interacting with it?
  • Is shared mobility going to skyrocket after the release of the first SAE Level 5 cars?
  • Is shared mobility a viable alternative to classic vehicle ownership throughout Europe? 

Resources:

Accenture – Mobility as a Service; Auto 2030 Report; StartUs Insights – Automotive Industry Report; CBInsights – Future Of In-Vehicle Experience; Frances Sprei – Disrupting Mobility; Experiences per Mile – 2030 Report; PWC – Five trends transforming the automotive industry;

Best practice examples for connected cars in 2020:

Tesla Infotainment, Rivian Infotainment, Lucid Air Infotainment, Byton Connected Car, Mercedes-Benz MBUX, Porsche Connect, Polestar Android Automotive, Audi MMI, BMW iDrive

Research in Austria:

Austrian Mobility Labs; Mobility Lab Graz; Aspern Mobil LAB Vienna; MobiLab OÖ; Thinkport Vienna; UML Salzburg; Austrian Mobility Research FGM-AMOR; Virtual Vehicle Research; FH JOANNEUM

Research worldwide:

R&D Centers of car manufacturers, Center for Automotive Research; Harman; McKinsey Center for Future Mobility; Accenture Automotive; Startus Insights Automotive; CBinsights Automotive; Deloitte Digital – Ericsson Automotive; IBM Institute for Business Value

The “Rules” in Film

Over the last 70 years, people in the industry have learned what usually works and what doesn’t, whether in storytelling, cinematography or post-production. Most mainstream films in cinema, especially those from Hollywood, seem to have a lot in common with each other, and we don’t seem to get many new ideas from there. The answer to why that is, is very simple: production companies love a guaranteed success! This is why most of what we see in the cinema nowadays feels pretty familiar, from story structure and character development to the way it’s filmed and edited. In this blog post I’d like to focus on the most common practices, or, as some people like to call them, rules, that are used in film.

Storytelling

It’s not necessary to reinvent the wheel in order to write the story for a box-office hit. In fact, it’s probably even better to stick to the rules of a three-act story structure, also described as a seven-point story structure, according to American screenwriter and author Blake Snyder. In “Save the Cat” Snyder gives clear instructions on how to write an entertaining story and even provides a so-called beat sheet. It determines roughly what should happen on which page of the script and is pretty strict about it. Although he is also criticized for his rigid approach, it’s clear that most films follow these guidelines. He also points out that he didn’t invent those rules; they come from his observations and from colleagues he met over the years. Basically, he is saying that these rules and guidelines for a good story were always there, he just wrote them down.

Blake Snyder’s beat sheet:1
Opening Image (p.1), Theme Stated (p.5), Set-Up (p.1-10), Catalyst (p.12), Debate (p.12-25), Act II (p.25-30), B Story (p.30), Fun & Games (p.30-55), Midpoint (p.55), Bad Guys Close In (p.55-75), All Is Lost (p.75), Dark Night of the Soul (p.75-85), Act III (p.85), Finale (p.85-110), Final Image (p.110)
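As a small illustration (my own sketch, not from Snyder’s book), the beat sheet can be written down as a data structure and scaled proportionally for scripts that are shorter or longer than the 110 pages Snyder assumes:

```python
# A selection of Snyder's beats with their approximate pages in a 110-page
# screenplay, as listed above; the scaling helper is my own illustrative addition.
BEAT_SHEET = {
    "Opening Image": 1, "Theme Stated": 5, "Catalyst": 12, "Act II": 25,
    "B Story": 30, "Midpoint": 55, "All Is Lost": 75, "Act III": 85,
    "Final Image": 110,
}

def scale_beats(script_pages: int, reference_pages: int = 110) -> dict[str, int]:
    """Proportionally map the beats onto a script of a different length."""
    return {beat: round(page * script_pages / reference_pages)
            for beat, page in BEAT_SHEET.items()}

print(scale_beats(90))  # e.g. the Midpoint lands around page 45 in a 90-page script
```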

Three-Act Story Structure2

Without context, the beat sheet is probably a bit meaningless to most people, but in the right hands it is a very strong tool for making an exciting and entertaining story. Blake Snyder also states that there are only ten types of movies and that every movie that exists can be assigned to one of those types. A few of his types are, for example, “Dude with a Problem”, “Superhero” or “Buddy Love”.

Composition

There are many ways to compose an image, yet some compositions just work and are a great basis to begin with. There’s no official rule book on this topic, but the following “rules” or suggestions are probably the most common ones used in film and television.

  1. Rule of Thirds
    Divides the picture into a 3×3 grid that serves as a guideline for framing objects, people or points of interest in your frame (see the small sketch after the image below).
  2. The 180 Degree Rule
    Defines the arc within which you should place the camera when shooting dialogue between people.
  3. Shot Types
    There are three basic types of shots with several variations each: the wide, the medium and the close-up. They define how much of the person or object is visible in the frame.
  4. Size Equals Power
    This rule is about the perception of size in the frame: whatever is important or powerful should be filmed in a close-up.
  5. Leading Lines
    Any objects, structures or textures can form lines in the frame. This rule says to frame the shot so that those lines lead into the point of interest, e.g. the actor. 3
Rule of Thirds in Harry Potter4
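As a quick illustration of the first rule (my own sketch, not from the cited guides), the gridlines and the four intersection points can be computed for any frame size:

```python
# My own illustration of the rule of thirds: compute the gridlines and the four
# "power point" intersections for a given frame resolution.
def rule_of_thirds(width: int, height: int):
    vertical = [round(width * x / 3) for x in (1, 2)]     # vertical gridlines (px)
    horizontal = [round(height * y / 3) for y in (1, 2)]  # horizontal gridlines (px)
    intersections = [(x, y) for x in vertical for y in horizontal]
    return vertical, horizontal, intersections

v, h, points = rule_of_thirds(1920, 1080)
print(points)  # [(640, 360), (640, 720), (1280, 360), (1280, 720)]
```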

Editing

Walter Murch, who is famous for his editing work with names like Francis Ford Coppola and George Lucas, describes in his book “In the Blink of an Eye” a good edit as one that respects “The Rule of Six”. But what exactly does he mean when he talks about “The Rule of Six”?

The “Six” he is referring to are Emotion, Story, Rhythm, Eye-trace, Two-dimensional plane of screen and Three-dimensional space of action. These are the six elements that, if respected, make an ideal cut for him. He explains that in traditional cinema, especially in the early days of sound film, the common practice was to always stay true to the positions of the actors between cuts. Back then, jump cuts were seen as a mistake, which, as we can see in films and shows nowadays, is no longer the case. For Murch, the most important thing in an edit is the emotion, as the only thing the audience will remember in the end is not the editing, camerawork or the performances but what they felt while watching. He goes on to assign each of his six criteria a percentage of importance for a good edit (a small weighting sketch follows the list below).

  • Emotion = 51%
  • Story = 23%
  • Rhythm = 10%
  • Eye-trace = 7%
  • Two-dimensional plane of screen = 5%
  • Three-dimensional space of action = 4% 5
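As a small sketch of my own (an illustration of the weighting Murch describes, not code from his book), a candidate cut can be scored by how well it satisfies each criterion:

```python
# Score a candidate cut by Murch's weighted criteria (illustrative sketch).
MURCH_WEIGHTS = {
    "emotion": 0.51, "story": 0.23, "rhythm": 0.10,
    "eye_trace": 0.07, "2d_plane": 0.05, "3d_space": 0.04,
}

def score_cut(ratings: dict[str, float]) -> float:
    """`ratings` holds a 0..1 judgement per criterion; returns the weighted score."""
    return sum(MURCH_WEIGHTS[k] * ratings.get(k, 0.0) for k in MURCH_WEIGHTS)

# A cut that nails the emotion but breaks spatial continuity still scores well:
print(round(score_cut({"emotion": 1.0, "story": 0.9, "rhythm": 0.8,
                       "eye_trace": 0.5, "2d_plane": 0.0, "3d_space": 0.0}), 2))  # -> 0.83
```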

Murch’s “Rule of Six” should help editors know what to look out for when putting a scene together, and it also gives a sense of prioritization. Apart from the “Rule of Six” there are also some other common techniques that go more into detail, for example:

  • The “J-Cut / L-Cut”
    Unlike the “Hard-Cut”, which cuts the audio and visuals at the same time from one clip to the next, the L-cut or J-cut lets the audio overlap the cut between two clips. It is often used for dialogue scenes or to make a cut more seamless.
L-Cut and J-Cut shown in a Timeline6
  • The “Third Person at the Table Technique”
    This technique is a powerful tool for getting a sense of when to cut between people having a dialogue. I learned this trick in school while working on a documentary but haven’t found a name for it on the internet, so I came up with this one. The “third person at the table” refers to the audience, which is in the position of the camera: when would the audience look where in the scene? Naturally, people don’t always look at the person speaking; sometimes they look for a reaction, and other times they stay on someone a bit longer before switching to the one speaking. Nothing happens immediately! To follow this technique, the editor has to imagine actually being in the room and cut between shots as if they were looking around. I recently also found this video from CineD going into further detail on this technique.
  • The “One Frame Trick”
    Another useful technique I learned during my bachelor years is the “One Frame Trick”. It states that when cutting to a beat, music or SFX, the visuals should always come (at least) one frame earlier than the audio. To most people it simply feels like a better match than cutting exactly on the beat (see the small calculation sketch after this list).7
  • Cutting Patterns
    Some patterns of switching between shot types (wide, medium, close-up) have established themselves as working better than others. The website “cuvideoedit” gives a breakdown of the most common cutting patterns:

    Conventional
    wide > medium > close-up (working closer towards the action)

    Reveal
    close up > medium or wide (slowly revealing more information)

    Matching Action
    cutting on movement for dynamic and seamless edits.8
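To illustrate the “One Frame Trick” mentioned above, here is a tiny calculation sketch of my own (the frame rate and beat times are assumed example values):

```python
# Place each visual cut one frame before the corresponding audio beat.
FPS = 25  # assumed project frame rate

def cut_frames(beat_times_s: list[float], lead_frames: int = 1) -> list[int]:
    """Convert beat timestamps (in seconds) into cut frames, shifted earlier."""
    return [max(0, round(t * FPS) - lead_frames) for t in beat_times_s]

beats = [2.0, 4.0, 6.0]      # beats of the music in seconds
print(cut_frames(beats))     # -> [49, 99, 149] instead of [50, 100, 150]
```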


Conclusion

I strongly believe that everything in this blog post is fundamental and important knowledge for everyone working in the field of film creation. Whether you want to call them rules or not is worth discussing; I’d rather call them something else, but “techniques that have already proven to work reliably” is quite a long way to phrase it. The more interesting question is whether you want to stick to those conventions or not, and even the professionals in the field don’t agree on this.

For example, let’s go back to Snyder and Murch. Blake Snyder is convinced of his strict approach to getting a working story. He sticks to what has already been done before him, and deviations from his instructions are seen by him as mistakes (which he clearly points out in his book). Walter Murch, on the other hand, is a lot more vague when giving instructions. He puts a strong emphasis on the emotional aspect of editing a film, which is very hard to define and break down. He also deviates considerably from the traditional way of editing a film, which (if you remember) was very strict about the position of the characters in space and about traditional cutting patterns. Before the French New Wave, most of the rules in this blog post had already been established, and back then they were without a doubt rules, no quotation marks needed.

Sources:

1) Snyder, Blake: Rette die Katze! Das ultimative Buch übers Drehbuchschreiben, 2. Auflage, Autorenhaus Verlag, Berlin 2015

2) https://blog.reedsy.com/three-act-structure

3) https://www.studiobinder.com/blog/rules-of-shot-composition-in-film/
https://www.diyphotography.net/five-composition-rules-filmmaking-break/
https://motionarray.com/learn/filmmaking/shot-composition-framing-rules/

4) https://mediamakeracademy.com/rule-of-thirds-in-film/

5) Murch, Walter: In the Blink of an Eye. A Perspective on Film Editing, 2nd Edition, pp. 17–18

6) https://www.techsmith.com/blog/how-to-edit-videos-l-cuts-and-j-cuts/

7) https://youtu.be/7E_mi_xNYOk

8) http://www.cuvideoedit.com/rules-of-editing.php

Intercultural Communications 0

I have chosen “intercultural communications” as my research topic. As a foreign student living in Europe, I have experienced many cultural differences as well as similarities between Middle Eastern and European culture. These differences and similarities show themselves in many ways: how people communicate with each other, how they decorate their houses, how they shop for groceries and cook, how they celebrate, how different relationships are defined; in other words, what “design” looks like in different fields. I suppose many international design students, such as myself, need some time to get to know the new culture in order to design for it. Personally, I often try to find a path between both cultures in order to create something unique as well as understandable for both.

There are many questions I ask myself almost all the time, for instance: What does beauty mean? What is the best technique to convey a specific message? How can I break boundaries and make something understandable without any geographical restriction? Is there a way for me to compile at least a small summary of design and beauty factors in different cultures and compare them?

There are many Iranian artists who have had more or less the same experiences, such as Marjane Satrapi, Shirin Neshat and Farhad Moshiri, who are inspirations for me and who have also tried to find answers to these questions.

I would like to start my research by defining intercultural topics and first finding out what really happens when two different cultures confront each other. After that, I would like to investigate visual cultural differences such as forms, colours and compositions; for example, how people in different cultures work on the same subject and how they use forms, colours and other visual elements.

mosque ceiling, Iran
church ceiling, Europe

The Use of Dream Sequences in Narrative Films

The connection between Freud’s psychoanalysis and states of intoxication was already discussed in the blog post “Die psychoanalytische Filmtheorie von Rauschzuständen im Film”. Dreams, too, can be understood as a kind of intoxicated state. Dream sequences in film are often used as narrative techniques to set interludes apart from the main story. These include dreams, flashbacks, visions, hallucinations, states of intoxication, altered states of consciousness and fantasies. Such interludes distinguish themselves through differences concerning space and time. They are often used to give a deeper insight into the thoughts or psyche of a character. Dreams are also used to give a character new ideas or to make them realize that what they experienced was not real. This process of becoming aware of something is also used to put unrealistic storylines back into context and to clarify that they were not reality. In most cases, these dream sequences in film are not lucid dreams: the person who is dreaming is not aware of it and only notices after the dream that it was one. In principle, a distinction is made between two different types of dream sequences.

  1. Clearly bracketed dream narratives: With these dream sequences, it is clear to the viewer from the beginning that they are watching a dream. Often the entry into and exit from the dream are even incorporated into the plot.
  2. Retrospectively constructed dream narratives: Here, the viewer only realizes afterwards that the events were not reality but a dream of the character.

Dream sequences also belong to the “spectrum of cinematic possibilities for depicting the subjective, which in some film narratives leads to dream sequences being used for the psychological characterization of figures and being designed according to Freudian criteria”.[1]

To separate dream sequences audiovisually from the main narrative strand, music, for example the sound of a harp, or the blurring of an image is often used. The dream itself is often highlighted through highly fantastic and surreal elements, a different colour palette or a lack of focus.

There are, however, also directors who deliberately avoid such common audiovisual demarcation techniques and prefer to use other approaches, means and techniques to separate such sequences from the main story. In 1945, Alfred Hitchcock directed one of the first films to deal with Sigmund Freud’s psychoanalysis. For Spellbound, Hitchcock chose a way of depicting dreams that was unusual at the time: in his opinion, dreams were very vivid and clear, which ruled out the foggy, blurred depiction that was otherwise used to mark dreams as such. In collaboration with Salvador Dalí, a dream sequence was created that manages to give a fascinating insight into the psyche of the main character. It uses motifs such as flowing time, veiled faces and playing cards. The scene sheds light on repressed memories and the reasons for the character’s memory loss. The artist Salvador Dalí was already mentioned in the blog post “Warum Walt Disney immer schon als Vorreiter im Bereich psychedelischer Szenen im Animationsfilm galt.”: he also worked with Walt Disney on dream sequences for animated films and repeatedly let surreal elements flow into his films.

Alfred Hitchcock’s Spellbound with Salvador Dalí

As can be seen, dreams, flashbacks, visions, hallucinations, states of intoxication, altered states of consciousness and fantasies can be set apart from the narrative storyline through various audiovisual characteristics. What matters is only that they are set apart at all and stand in drastic contrast to the real world. “The stylistic break between reality and dream is deliberately large in order to make the wishes appear unattainable and unworldly.”[2]


[1] N.N. (22.04.2012): Traumsequenz. In: http://filmlexikon.uni-kiel.de/index.php?action=lexikon&tag=det&id=7646 (last accessed on 11.12.2020)

[2] Schöpe, Maria: Traumsequenzen. Ästhetik sequenzieller Imagination im Film. Diploma thesis, Hochschule für Film und Fernsehen Konrad Wolf Potsdam-Babelsberg, Potsdam 2007, p. 95