Graphic Design Cake

Visual Languages as the Basis of Graphic Design

Imagine any design as a cake made of three layers:

  • Visual Language (the elements: illustration/graphics, photography, typography)
  • Graphic Design (the means, the process)
  • Visual Communication (the results)

The visual language, or imagery, makes up the cake's ingredients. If this visual language were verbalized, we would have sounds, words, pauses and intonation.

Graphic design is the finished, fully baked cake. If graphic design is verbalized, we have content and spoken messages.

Visual communication is the eating of the cake. In verbal terms, this corresponds to a conversation or dialogue between people.

This metaphor shows where illustration and photography belong as design elements, or image-making elements, namely in the visual-language layer, and what task they take on in the bigger picture. And depending on which elements you use, the cake tastes different.

The Visual Language of Illustration and Photography Compared

The following examples show at a glance how different the effect, i.e. the language and expression, of the two elements is. It must be added, however, that these are extremely reduced, flat and minimalist illustrations, which in itself already produces a very strong visual impact.

In general, illustration as a visual language often conveys more feeling, gives a design more character and more mood, often does not look real, and feels more playful, childlike, humorous and relaxed. Illustrations tend to represent an idea, a concept, or something abstract or fictional.

The following example shows how much influence the chosen style has and how the effect of an illustration can be shaped by style alone. Compared to documentary photography, illustration can also convey additional information. It lets you steer perception and effect; above all, the "mood" of an illustration can evoke specific feelings.

An illustration always has a stronger emotional dimension than pure photography, which is usually perceived as more matter-of-fact and therefore more distant and cool. Illustrations convey a certain kind of "humanity". Illustrations are usually perceived as either positive or negative, whereas a photo tends to be viewed more neutrally. The association with abstract and general subjects that we have with illustration gives the medium its stronger emotional quality. Concrete depictions, by contrast, are unambiguous and clear, and we tend to register them without thinking about them.

An illustration always has to be interpreted and is never as quick or self-evident as a photo. Illustrations are so popular, however, because they can stimulate thought, suggest, narrate, work narratively, abstract concepts and ideas, compress stories, and condense or sharpen statements. An illustration can often express far more than a photo.

The Lord of the Rings: The Fellowship of the Ring by Matt Chase


The Devastating Impact of COVID-19 on the Music Industry

It’s now been more than a year since COVID-19 first started spreading in China. The virus has not only killed those it infected; it has also taken many industries with it, and the music industry is one of its economic victims. In a nutshell, according to sources like Billboard and the Amsterdam Dance Event, the music industry had only just recovered from an earlier drastic downturn: in 2019, electronic music artists were making more than during the industry's 2017 peak. Pre-COVID yearly industry revenue was 7.3 billion; it has since fallen by almost 70%, a shocking drop. This article looks at the ways in which the industry is trying to cope with the loss.

The first major thing that happened was that over 90% of festivals were cancelled. Shortly after, clubs followed. This hurt artists a lot, because ever since the start of the 2000s, live performances have made up the majority of any musician's revenue. Everyone moved online and organised streams and virtual festivals, accompanied by fundraisers intended to support the staff put out of work by the virus (e.g. promoters, bartenders, sound crews). However, these efforts were not enough, and pretty quickly we saw some of the most iconic clubs in Europe close.

Then summer came, and Ibiza and Majorca did the unimaginable: they closed their seasons. These islands are famous for their clubbing scenes, and at least 80% of their economy relied on party tourism. This put many seasonal staff out of work.

Since standard festivals could no longer be held, organisers came up with the concept of "drive-in" and "socially distanced" festivals, but this is obviously not sustainable in the long run. Firstly, it is not cost-efficient, and tickets are quite expensive. On top of that, festivals rely on international visitors, and international travel is highly discouraged during a global pandemic.

One name that stood out for its charitable work and artist support during these times was Bandcamp. Its biggest announcement: every Friday, the company would waive its roughly 15% share of digital sales revenue, channeling that money back to independent artists. Beyond the company itself, numerous artists ran campaigns on the platform, donating 100% of their album revenue to COVID relief organisations.

Many people need mental support during these times, because quarantine and the loss of a normal life are hard to cope with. But there have been some positives: we are more united than ever, and everyone is trying to help each other. The Pioneer DJ company has made a documentary that gives insight into how music industry staff have been handling the situation. It showcases the stories of five prominent electronic DJs, who talk about their past and current experiences.

Resources

https://www.billboard.com/articles/news/dance/9419311/2020-ims-business-report-covid-19-impact-global-dance-music

https://vnpf.nl/media/files/20201021-ade-2020-report—the-electronic-music-industry-during-covid-19.pdf

https://www.latimes.com/entertainment-arts/music/story/2020-09-22/bandcamp-anti-spotify-streaming

https://www.pioneerdj.com/en/news/2020/distant-dancefloors-covid-19-and-the-electronic-music-industry/

Audiovisual Aesthetics of Sound and Movement in Contemporary Dance

I want to share this interesting experiment I just discovered on the relationship between dance and music.

“How do movement and sound combine to produce an audiovisual aesthetics of dance? We assessed how audiovisual congruency influences continuous aesthetic and psychophysiological responses to contemporary dance. Two groups of spectators watched a recorded dance performance that included the performer’s steps, breathing and vocalizations, but no music. Dance and sound were paired either as recorded or with the original soundtrack in reverse, so that the performers’ sounds were no longer coupled to their movements. A third group watched the dance video in silence. Audiovisual incongruency was rated as more enjoyable than congruent or silent conditions. In line with mainstream conceptions of dance as movement-to-music, arbitrary relationships between sound and movement were preferred to causal relationships, in which performers produce their own soundtrack. Performed synchrony Granger-caused changes in electrodermal activity only in the incongruent condition, consistent with “aesthetic capture”. Sound structures the perception of dance movement, increasing its aesthetic appeal.”

Method: Participants

Thirty-four participants (9 male, 25 female), ranging in age from 18 to 51 years (M = 27.41; SD = 7.3), volunteered to take part. Musicality ranged from 48 to 100 (M = 74.71; SD = 17.85) on the Goldsmiths Musical Sophistication Index v1.0 (GMSI) musicality scale. Musicality involves a wide range of musical behaviours and musical engagement such as understanding and evaluation (Müllensiefen, Gingras, Musil, & Stewart, 2014). The GMSI is a standardised instrument that provides a measure of musicality in the general population, with scores ranging from 18 to 126 and an average of 81.58 (SD = 20.62) across the general population (Müllensiefen, Gingras, Musil, & Stewart, 2013).

Dance experience was assessed using a custom-made questionnaire, which asked people how many years of dance experience they had and how often they watched recorded dance. Fifty-three percent of participants reported previous formal dance training, but none were professional dancers; see Table 1 for a demographic breakdown by experimental group. Participants volunteered in response to a Goldsmiths participation page on Facebook, and all signed a written consent form before taking part in the experiment. Participants were signed up to experimental groups based on their availability; allocation of experimental group to condition was randomized using numbered sealed envelopes containing randomization cards for the congruent, incongruent, or silent conditions. The groups included 10, 14 and 10 participants in the congruent, silent, and incongruent groups respectively. All participants were included in the physiological and qualitative analyses; however, the tablet data of 8 participants was lost due to technical problems, leaving only 8, 10 and 8 in the congruent, silent, and incongruent groups for the enjoyment analyses.

Sources:

Howlin, Vicari, Orgs – Audiovisual Aesthetics of Sound and Movement in Contemporary Dance

Sonic Stylistic Devices in Video Games (Part 2: Signals, Stereotypes, Subjectivization)

In the last blog post I covered the symbols, leitmotifs and key sounds described by Gerhard Hackl, which frequently appear in films and video games [1]. This post first identifies further general stylistic devices that have become established both sonically and visually. I then describe several purely sonic effects for subjectivization, which allow the viewer to put themselves in the characters' shoes more easily.

Signals

A signal is a sound object with a socially defined, communicative meaning. A signal conveys information that calls for action and/or serves as a warning, such as the wail of a siren.

Signals usually have a simple basic sound structure whose spectral center lies in a frequency range that is critical for humans and easy to hear. As a result, the sound can still be heard clearly even under unfavourable acoustic conditions.

In films and video games, signals often sound without the visual elements that cause them. The aim is to influence the observer on a higher affective level and evoke emotions such as fear, aggression or caution. In the Grand Theft Auto series, for example, sirens sound as soon as a crime has been committed and the police give chase. The player's goal is then to shake off the police and stay out of sight until they give up the pursuit.

Stereotypes

Apart from the fact that the term has negative connotations today, stereotypes serve an important function as an orientation aid. They let us cope with the differences of a complex outside world through our own inner simplifications.

In films and video games, stereotypes are created through frequent use across numerous titles so that they become anchored in the recipient's long-term memory. A typical example is the screech of an eagle, meant to reinforce the emptiness and vastness of a landscape, or, in the horror genre, the howl of a wolf at full moon, which symbolizes the danger of the night and is meant to evoke fear in the recipient.

Since stereotypes serve to reduce complexity, either the similarities of different characters and situations are overemphasized or their differences are strongly contrasted. Sonically, "evil" is often designed with sounds focused in the lower frequency range, which therefore sound muffled and trigger unease in the observer. "Good", by contrast, is represented with sounds in the mid and high ranges, which are familiar to the observer and accordingly seem more likeable. Such stereotypes make structures easier to convey and help the observer classify situations and characters.

Subjectivization

Stylistic devices that present a character's point of view, or depict altered perception, as in dreams, hallucinations, memories and visions, are called subjectivization. They originally appeared in film but are even more important in video games, where one often plays from the perspective of individual characters. Various sonic effects and/or the dissociation of image and sound are used for this. The alienation of the sound material, or a discrepancy between image and sound that creates a logical conflict for the recipient, is interpreted through cognitive effort and attribution and is accordingly perceived as a change in perception.

Silence

Silence in a situation that should actually be loud often depicts a loss of reality. The explanation is that humans are in constant sensory exchange with the outside world and register sounds, consciously or unconsciously, even in sleep. With the silence effect, the character thus separates from the soundscape and with it from reality.

Loudness

Loudness only gains meaning through contrast. A sudden increase in volume makes the viewer flinch reflexively and causes fright. Volume is also used to convey the degree of a character's aggressiveness.

In general, loudness must be considered together with duration. Long-lasting loud sounds and soundscapes cause stress and are also used in video games to depict a character's discomfort. Long, loud sounds can also affect other senses and induce balance disorders, dizziness, pain, or even positive trance-like states.

Reverb

Reverb describes a particular mental state of a character. It is often used to depict dreams or memories; reverberant sounds serve as acoustic flashbacks. Another desired effect of artificial reverb arises when the sonic spatial representation does not match what is seen. The unfamiliar acoustic environment gives the recipient a certain sense of insecurity.

Slow Motion

Slow motion is used to direct the observer's attention to specific moments. It plays on the phenomenon that time is perceived as subjectively elastic and is experienced as longer or shorter depending on events and their intensity. A further effect of slowing down sounds is an increase in their dramatic impact, since the processed sounds become lower in pitch and therefore more voluminous.

Magnification

This means highlighting one sound against the rest of the soundscape. It is achieved through increased volume, a larger reverb share, or sonic alienation. In some cases the source of the sound is even replaced by another. Magnification influences how a thing is valued, or how relevant it is, from the character's point of view.

Breathing and Heartbeat

These form a special kind of stylistic device and represent an extreme form of tension or mortal danger. They are emphasized when a character is about to die or is under extreme strain. In many games they are also used to suggest a particular action to the player. For example, when you are close to dying in a game, loud breathing or an intense heartbeat often signals that the character should take cover and recover.

Those were common stylistic devices and effects found in both films and video games. They serve different purposes and do not usually occur in this form in the real world. But they allow the viewer or player to immerse themselves in the story or gameplay more easily and at the same time more intensely, and to empathize with the characters.

Sources:

[1] https://phaidra.fhstp.ac.at/open/o:1779

Audio & Machine Learning (pt 2)

Part 2: Deep Learning

Neural Networks

Neural networks are a type of machine learning model inspired by the human brain, consisting of many interconnected nodes (neurons). The goal of using neural networks is to train a model, i.e. a file that is trained by machine learning algorithms to recognize certain properties or patterns. Models are trained on a set of data, and once trained they can be used to make predictions about new data. A neural network is split into different layers, to which different neurons belong: an input layer, an output layer, and one or more hidden layers in between. Mathematically, a neuron's output can be described as the following function:

f(wᵀx + b)

In the function above, w is a weight vector, x is a vector of inputs, b is a bias, and f is a nonlinear activation function. During training, the weights w and biases b are modified so that the model better describes a set of input data (a dataset). Multiple inputs then result in a sum of weighted inputs:

∑ᵢ wᵢxᵢ + b = w₁x₁ + w₂x₂ + … + wₙxₙ + b

The neurons take a set of such weighted inputs and through an activation function produce new values.
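As a sketch, the two formulas above can be combined into a few lines of Python; the sigmoid activation and the example values here are my own choices for illustration, not from the original post:

```python
import math

def neuron(x, w, b):
    """Single neuron: weighted sum of inputs plus bias,
    passed through a nonlinear activation (sigmoid here)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b   # w1*x1 + ... + wn*xn + b
    return 1.0 / (1.0 + math.exp(-z))              # sigmoid activation

# Example with two inputs: z = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```

Any other nonlinear activation (see the list further below) could be swapped in for the sigmoid.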

Training

When training, the network analyzes individual batches of data (examples) from the dataset, initializing the weights of its neurons with random values and the biases with zero. Neural network training consists of three parts. A loss function evaluates how well the algorithm models the dataset: the better the predictions, the smaller the output of the loss function. Backpropagation is the process of sending the error back through the network: the output of the loss function gives the difference between the current value and the desired value, and this error is propagated back layer by layer from the output to the input, with each neuron's weights changed according to their influence on the error. Gradients are used to adjust the neurons' weights based on the output of the loss function, by determining how the parameters have to change to minimize the loss (i.e. decrease the output of the loss function). To modify the weights, the gradients, multiplied by a defined factor (the learning rate), are subtracted from the weights. This factor is kept very small (e.g. 0.001) so that weight changes remain small and do not jump over the ideal value (close to zero).



Illustration of the learning rate
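To make the loss/gradient/learning-rate loop concrete, here is a minimal gradient-descent sketch. The one-weight model, the toy dataset and the hyperparameters are illustrative assumptions, not from the post:

```python
def train_weight(data, lr=0.001, epochs=1000):
    """Fit a single weight w so that w * x approximates y,
    using a squared-error loss and plain gradient descent."""
    w = 0.0  # an arbitrary starting weight
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of the loss (w*x - y)^2
            w -= lr * grad             # small step against the gradient
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
print(train_weight(data))  # converges toward 2.0
```

With a much larger learning rate the updates would overshoot the minimum, which is exactly the effect the illustration above depicts.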

Layers

As mentioned above, a neural network consists of multiple layers, which are groups of neurons. One or more of them are hidden layers. These compute new values from the previous layer's values with a specific function (the activation function).

deep neural network with two hidden layers

Hidden layers can be of various types, such as linear or dense layers, and the type of layer determines what calculations are made. As the complexity of the problems increases, the complexity of the functions must also increase. But stacking multiple linear layers in sequence would be redundant, since the result could be written out as a single linear function. Dense layers exist for this reason: by making use of activation functions, they can approximate more complex functions. Instead of being applied only to the input values, activation functions are applied at each layer, which, in addition to more expressive power, also leads to more stable and faster results.

The following list introduces some commonly used activation functions:

  • linear (input = output)
  • binary step (values are either 0 or 1)
  • sigmoid (s-shaped curve with values between 0 and 1, never reaching exactly 0 or 1)
  • tanh (like the sigmoid but maps to values between -1 and 1)
  • arcTan (maps the input to values between -pi/2 and +pi/2)
  • ReLU – Rectified Linear Unit (sets any negative values to 0)
  • leaky ReLU (does not remove negative values completely but drastically lowers their magnitude)
activation functions displayed in GeoGebra
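The listed functions are simple enough to write out directly. A plain-Python sketch follows; the 0.01 slope for the leaky ReLU is a commonly used value, assumed here:

```python
import math

# Scalar versions of the activation functions listed above
def linear(x):      return x
def binary_step(x): return 1.0 if x >= 0 else 0.0
def sigmoid(x):     return 1.0 / (1.0 + math.exp(-x))
def tanh(x):        return math.tanh(x)              # maps to (-1, 1)
def arctan(x):      return math.atan(x)              # maps to (-pi/2, pi/2)
def relu(x):        return max(0.0, x)
def leaky_relu(x, alpha=0.01):
    return x if x >= 0 else alpha * x                # keeps a small negative slope

for f in (linear, binary_step, sigmoid, tanh, arctan, relu, leaky_relu):
    print(f.__name__, f(-2.0), f(2.0))
```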

Deep Learning

As mentioned before, there are different layers in a neural network. What was not mentioned is that a neural network with more than one hidden layer is called a deep neural network, and the process of training a deep neural network's model is accordingly called deep learning. Deep neural networks have a few advantages over plain neural networks: as mentioned above, activation functions introduce non-linearity, and having many dense layers stacked after each other makes it possible to compute solutions to much more complex problems. Including audio (finally)!
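The stacking idea can be sketched end to end. The layer sizes and random weights below are made up for illustration, and ReLU hidden layers with a sigmoid output are common but assumed choices:

```python
import math
import random

def relu(v):    return [max(0.0, x) for x in v]
def sigmoid(v): return [1.0 / (1.0 + math.exp(-x)) for x in v]

def layer(x, W, b):
    # One dense layer: a weighted sum of all inputs, plus a bias, per neuron
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def forward(x, params):
    """Deep network with two hidden layers: each layer feeds the next,
    and the nonlinear activations keep the stack from collapsing into
    a single linear function."""
    (W1, b1), (W2, b2), (W3, b3) = params
    h1 = relu(layer(x, W1, b1))        # hidden layer 1
    h2 = relu(layer(h1, W2, b2))       # hidden layer 2
    return sigmoid(layer(h2, W3, b3))  # output layer

random.seed(0)
def rand_layer(n_in, n_out):
    W = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

params = [rand_layer(3, 4), rand_layer(4, 4), rand_layer(4, 1)]
print(forward([0.5, -1.0, 2.0], params))  # one output value between 0 and 1
```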


Read more:

http://www.deeplearningbook.org
https://www.techradar.com/news/what-is-a-neural-network
https://www.ibm.com/cloud/learn/neural-networks
https://www.mygreatlearning.com/blog/activation-functions/
https://algorithmia.com/blog/introduction-to-optimizers
https://medium.com/datathings/neural-networks-and-backpropagation-explained-in-a-simple-way-f540a3611f5e
https://medium.com/datathings/dense-layers-explained-in-a-simple-way-62fe1db0ed75
https://medium.com/deeper-learning/glossary-of-deep-learning-batch-normalisation-8266dcd2fa82
https://semanti.ca/blog/?glossary-of-machine-learning-terms

Difference between VR, AR and MR

To get an idea and some basic knowledge of the differences between VR, AR and MR, I started my research with the following:

Virtual Reality:

  • It is the most commonly used of the three
  • It allows users to fully immerse themselves in another world
  • With head-mounted displays, screens, sensors, gloves, etc., the images can be seen and interacted with
  • Simulations of real-life situations or entire environments can be entered and interacted with
  • To use VR, you need the required program or software, vision devices such as a TV, laptop, projector, smartphone, tablet, gaming console, OPI etc., and interactive devices such as a keyboard and mouse, haptic devices, a joystick, etc.

Augmented Reality:

  • It combines the direct or indirect physical environment of the real world with digital elements (like Pokémon Go, for example)
  • While users are in their real environment, virtual elements are generated in real time that they can interact with
  • Through a device, these virtual elements can be seen and interacted with
  • To use AR, you need a program that can mix the real and virtual worlds; capture devices such as a webcam, video camera or smartphone for object recognition or geolocation; devices that can display the images, such as a smartphone, laptop or computer display, or a projector; and activators that recognize at what time which virtual elements should be displayed
VR, AR and MR.

Mixed Reality:

  • Digital elements and the real environment are combined; it is a mixture of virtual reality and real life
  • Real objects in the physical world are masked by virtual objects so that it feels as if the virtual objects really exist
  • With VR, you are immersed in a virtual world that does not exist; with MR, the virtual world feels immersed in the real world, and the two merge
  • For MR, you not only need sensors, displays and headsets but, most importantly, an environment that exists in both the real and the virtual world
Mixed Reality is the result of blending the physical world with the digital world.

Sources:

  1. Virtuality Continuum's State of the Art, Héctor Olmedo (2013), https://www.researchgate.net/publication/257388578_Virtuality_Continuum%27s_State_of_the_Art
  2. What is Mixed Reality?, Brandon Bray (26.08.2020), https://docs.microsoft.com/en-us/windows/mixed-reality/discover/mixed-reality

How do we interact with misinformation? Part 1

An empirical user research questionnaire about how we interact with social media platforms and false or misleading content, how design influences us, and whether labeling content is helpful.

In 2018 the European Union conducted a survey on “The digital transformation of news media and the rise of disinformation and fake news”. The report states that misinformation, or fake news, is quite old: the first known case of fake news goes back to the 16th century. However people may take that, it is clear by now that social media and the spread of fake news and misinformation have become a problem. First of all, we need to define some terms:

Falsity

Falsity refers to inconsistency in claimed facts (Spears, 2015), for instance when a car manufacturer claims that the car’s gas mileage is higher than it actually is.

Misleading

Some content creates an impression of a product, a story or news that is untrue (fake), or of features, information or facts that do not exist.

Misleading and false content affects the choices of users and their opinions.

The Questionnaire

In this phase of the research, personal qualitative interviews based on a standardized questionnaire will be conducted with a few participants and analyzed. The main interest of this survey is how people interact with fake or misleading information and how we can change the appearance or the interaction process through design. Therefore, the following questions will be asked:

  1. Demographic data like gender, age, education level, employment and family status.
  2. Where do you usually watch/read news or get information on certain topics?
  3. Which social media platforms do you use?
  4. How often do you visit these platforms daily/weekly/monthly?
  5. Have you ever experienced misleading content on these platforms? If so, please elaborate.
  6. How do you interact with misleading or false content? Please elaborate.
  7. What is your reaction when you find out the content you found is misleading or false?
  8. What do you think about labeled content?
  9. Which additional (background) information about a statement/fact is important to you as a viewer and should be shown directly?
  10. What makes a website/content/information trustworthy?
  11. How trustworthy is social media in your personal opinion and why? (Scale 1 – 10)
  12. Do you think the design of information or content has an effect? If so, please elaborate.
  13. Do you want to add something?

Links/Reports:

(PDF) Impact of misleading/false advertisement to consumer behaviour. International Journal of Economics and Business Research, 2018 Vol.16 No.4, pp.453 – 465. Available from: https://www.researchgate.net/publication/328067596_Impact_of_misleadingfalse_advertisement_to_consumer_behaviour_International_Journal_of_Economics_and_Business_Research_2018_Vol16_No4_pp453_-_465 [accessed Dec 20 2020].

Study on fake news and disinformation from the European Commission’s Joint Research Centre.

Audio Adaptor Boards for Teensy

This audio board is an extension for your Teensy; versions exist for the Teensy 4 and Teensy 3. To use it, you solder it to your Teensy. It offers an enormous number of useful features, including high-quality 16-bit audio at a 44.1 kHz sample rate. It supports stereo headphone and stereo line-level output, as well as stereo line-level input or mono microphone input.

Teensy Audio Library

The Teensy Audio Library comes with a graphical design tool you can open in your browser to lay out the signal flow on the audio board and create all kinds of sophisticated audio applications. You can play multiple sound files, create synthesized waveforms, apply effects, mix multiple streams, and output high-quality audio to the headphones or line-out pins. To use the sketch with your board, you export the code, paste it into the Teensyduino software, and upload everything.

Blog Entry

This blog is devoted to redesign

Nowadays many companies have decided to change their styles and brands. I will try to explore topics such as why they decided to do it right now, what the purpose is, and how dangerous a drastic redesign can be. And, mainly: will we ever finish redesigning things, and what is the goal?

My main motivation for choosing this topic was the spirit of the topic itself.

A huge opportunity lies in the endless facets of redesign: redesigning apps, web pages, products, everyday things, interior items, appliances.

It will also be interesting to see how redesign changes our minds, and how it is connected with world history, mentality, our relationship with ourselves and with others, rivalry, and the simple change of trends over time.

I hope it will be useful and interesting to all of us.

Autonomous Vehicles

Autonomous vehicles are capable of sensing the environment and fully operating the vehicle without the help of human drivers. In fully autonomous cars, the driver is not required to do anything at all while still getting from point A to point B. While most cars that can be purchased at the moment offer advanced driver assistance systems (ADAS), autonomous cars are currently only being tested by various automakers and startups. Because distinguishing the different levels of automation is important as well, the SAE (Society of Automotive Engineers) defined six levels of driving automation back in 2014, ranging from Level 0 (fully manual) to Level 5 (fully autonomous).

source: https://www.synopsys.com/automotive/what-is-autonomous-car.html

The terms self-driving and autonomous are often used interchangeably, but they actually have slightly different meanings. Self-driving cars fall under SAE Level 3 or 4, because a human driver has to be present at all times and constantly ready to take over, whereas autonomous cars could go anywhere without human help and therefore correspond to Level 5.

Making these fully autonomous vehicles is harder and more expensive than most startups predicted. That is why self-driving startups are now teaming up with tech giants like Alphabet (Google) or Amazon and with big automakers like General Motors or Volkswagen. Aptiv has established a partnership with Hyundai, Waymo with Jaguar, Cruise with GM, Voyage with Fiat Chrysler, and Argo AI with Ford and Volkswagen. Currently, the only big exception building self-driving vehicles and operating fleets of them on its own is Tesla. But there are already rumors that Tesla is thinking about merging with or buying Daimler (Mercedes-Benz).

Waymo

Waymo is one of the best-known players in this industry. It started as the Google Self-Driving Car Project back in 2009. Waymo has already done millions of miles of testing in Arizona and California with additional safety drivers behind the wheel monitoring the system. In October 2020 it also launched a fully driverless taxi service in Phoenix, Arizona. While the Waymo vehicles are still a little too cautious around pedestrians, there are already a lot of very positive reviews of the driverless service.

source: https://waymo.com/press/

Cruise

Cruise, acquired by General Motors back in 2016, unveiled its shuttle van in San Francisco in January 2020. The orange, white and black van with sliding doors and two sets of three seats facing each other is called the Origin. It is meant to be shared by riders in a ride-hail service, starting in San Francisco. Cruise plans to offer its service significantly cheaper than current, human-powered ride-hail services like Uber and Lyft, and has also announced a separate Origin configured for carrying packages. Cruise is currently starting its self-driving-taxi service with five modified Chevrolet Bolts in San Francisco, with the plan to operate truly driverless cars before the end of the year.

source: https://medium.com/cruise/sharing-a-better-future-981ca839f4a5

Zoox

Zoox, acquired by Amazon for roughly $1.2 billion in June 2020, recently unveiled its robotaxi, which the company, founded in 2014, has been working on for the last six years. Zoox plans to use its autonomous vehicles on a ride-hail network in large cities like Las Vegas and San Francisco. Because of this, the vehicle can drive bidirectionally and offers a tight turning radius. These robotaxis will be able to run up to 16 hours on a charge and travel fully autonomously at speeds of up to 75 miles per hour (approximately 120 kilometers per hour). Zoox has not announced a release date yet, but the technology from its robotaxis has been tested with other cars in San Francisco since 2017. The design of the Zoox vehicle is similar to a lot of other autonomous vehicles revealed over the past few years, like the Cruise Origin. Due to its rectangular shape, the robotaxi platform could also be used for packages.

“We’ve made the decision to maximize the interior space and minimize the exterior space,”

Jesse Levinson, Zoox’s Co-Founder and CTO

Tesla

While Tesla’s Full Self-Driving (FSD) feature was released as a closed beta to a few selected vehicles in October 2020, Tesla is also aiming to release FSD for all compatible Teslas by the end of this year. Since the feature has only been rolled out to a limited number of people and is currently updated regularly, knowledge about its functionality and safety is also limited.

“Despite the name, the Full Self-Driving Capability suite requires significant driver attention to ensure that these developing-technology features don’t introduce new safety risks to the driver, or other vehicles out on the road,”

Jake Fisher, Senior Director of auto testing at Consumer Reports
source: https://www.tesla.com/de_AT/model3

Resources

https://www.synopsys.com/automotive/what-is-autonomous-car.html

https://www.wired.com/story/self-driving-cars-look-toasters-wheels/

https://www.wired.com/story/cruise-hit-san-francisco-no-hands-wheel/

https://www.wired.com/story/gms-sensors-room-6-no-steering-wheel/

https://www.wired.com/story/self-driving-tech-game-partnerships/

https://www.wired.com/story/guy-taking-viewers-driverless-rides-waymo/

https://www.caranddriver.com/news/a34306011/waymo-expands-driverless-taxi-more-people/

https://www.consumerreports.org/autonomous-driving/tesla-full-self-driving-capability-review-falls-short-of-its-name/

https://waymo.com/