Creating environmental documentaries as a one-person crew

There is a lot of information out there about filmmaking and documentary filmmaking, but not much about environmental, conservation or wildlife filmmaking. When thinking about making such a documentary, most people probably picture big productions like those from the BBC or NatGeo. Yet it is possible to create a documentary as a small crew or even alone. Films can have a great impact on their viewers, which is why well-made environmental documentaries are important, especially ones from independent filmmakers who have sound knowledge of the topic they are filming and no big production company breathing down their neck, telling them to stage things for the sake of better ratings.

For my master's thesis I want to write the small handbook I wish already existed. One that is easy to understand for everyone, where you can not only read about how to make a documentary, but also get an overview of what an environmental documentary is, its history, its power, ecocritical perspectives and environmental ethics. After establishing that understanding, I will go through every step of production, from research and finding protagonists to marketing, distribution and festivals. As an environmental filmmaker I think it is also very important to know the things you are making a film about, so there will also be chapters on what the environment is, conservation and conservation psychology. Obviously every project needs its own research on the specific topic the film will cover, but an understanding of the basics is important.

Alongside this book I am going to create a documentary film about the problems of the Mediterranean Sea. This documentary is currently in its pre-production phase: possible protagonists from different ocean conservation organizations have been approached and the first confirmations have come in. As a second, smaller part of my master's thesis I am going to document the making of this film as a real-life example, applying the information gathered in the first part.

Wanding

Wanding is a very important step in the calibration of the OptiTrack system.

A calibration wand is used, which is repeatedly waved in front of the cameras so that all cameras can see its markers.

The calibration wand must be brought into the capture volume and waved carefully through the entire volume. To collect samples with different orientations, it is best to trace figure-eight patterns.

For adequate sampling, we need to cover as much of the space as possible, at both low and high heights.

The wanding trails are shown in color in the 2D view. A table with the status of the sampling process is displayed in the calibration pane so that progress can be monitored.

After enough samples have been collected, the software can calculate the calibration. Generally, 2,000 to 5,000 samples are sufficient.
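As a small illustration of this rule of thumb, here is a minimal Python sketch that mimics the kind of progress table shown during wanding. It is not part of Motive or any OptiTrack API; the camera names and sample counts are invented for the example.

```python
# Minimal sketch (not OptiTrack's API): check whether each camera has gathered
# enough wand samples before asking the software to calculate the calibration.
# The 2,000-5,000 range follows the rule of thumb mentioned above.

MIN_SAMPLES = 2000
TARGET_SAMPLES = 5000

def wanding_status(samples_per_camera: dict[str, int]) -> None:
    """Print a simple progress table, similar to the calibration pane."""
    for camera, count in sorted(samples_per_camera.items()):
        if count >= TARGET_SAMPLES:
            state = "sufficient"
        elif count >= MIN_SAMPLES:
            state = "usable, keep wanding for a better result"
        else:
            state = "not enough samples yet"
        print(f"{camera:>10}: {count:5d} samples -> {state}")

# Example with made-up numbers
wanding_status({"Camera 1": 5400, "Camera 2": 3100, "Camera 3": 900})
```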

When done, the calibrated cameras are displayed in Motive's 3D viewer. However, the capture volume reconstructed by the software still has to be aligned with the coordinate system, because the ground plane is not yet fixed.

The final step needed to complete the calibration is setting the ground plane and the origin: the calibration square is placed in the volume at the spot where we want the origin to be and where the ground plane should be leveled, and the software is told where the calibration square is located.

The coordinate system is derived from the position and orientation of the calibration square, so the square must be aligned with the desired axis orientation.

The Power of Design

In this blog post I refer to the TED Talk by JD Hooge:

With great power comes great responsibility

Stan Lee

Technology
In the past few decades, technology has changed everything: news, transportation, shopping, health, friendships and family. On average, a person checks their cell phone 220 times a day (that's twice as often as we think we do). We want technology to empower us, but the digital world as it was designed tends to dictate the rhythm of our lives. Technology has changed our everyday lives faster than we expected and faster than the consequences could be foreseen. Designers in the digital sector can therefore exert more influence than ever before, so it is important to take responsibility for what you create and for the impact it has on people's lives.

Design gives identity
In addition to the music that makes people feel they belong to a wide variety of groups, strong visuals also create a feeling of community. Different groups have their own aesthetics, ethical codes and ideologies, and often this aesthetic is reflected in stickers, fashion and visuals. These symbols connect people: design gives identity.

Bauhaus
The Arts and Crafts movement emerged during the Industrial Revolution as a reaction to the rise of technology and machines, and was meant to offer a counterpoint to them. Meanwhile, a group of idealistic artists and designers who embraced technology came together in Germany. The Bauhaus no longer saw designers as mere 'decorators'; it wanted to combine technology, art and design. Its members used industrial materials for household furniture, placed emphasis on the social aspect of design and focused on affordability and the efficient use of materials and visual elements.


Dieter Rams spoke of designers as visual engineers. His philosophy revolved around a 'less but better' essentialism. He wanted to develop products that were more sustainable and less problematic for the environment. 'Design shouldn't dominate people, it should help people.' His aesthetics and ethics were aligned, and his design was human-centered. Design is a responsibility, because design shapes the future.

The impact of design
Design influences every moment of our lives. It evokes feelings, it is a strong social influencer, and it helps us understand complex systems. Today we need design to make technology work for us. In the new digital world there are plenty of problems that need to be solved. The success of technology these days is measured by the time users spend on it, not by the benefit it brings to people. Sometimes technology makes things better for one community while making them worse for another. There are many blind spots in technology, for example around racism, equality and oppression, and people are increasingly rethinking their relationship with it. Technology companies such as Apple and Google are also trying to focus more on people and their wellbeing, but that is only just beginning. The advantage designers have is deep insight into these systems and a place on the front lines. A designer's knowledge, values and experience flow directly into, for example, the user interface, every push notification and every icon they develop. This means that designers can directly influence the user. In order to live up to this responsibility, designers should engage with more than just their own area of expertise: world history, ethics, inclusion, systemic inequality, cognitive bias. It is important to learn from the past, and just as important to engage with different opinions and views.

Seek historical, cultural & economic context
Elevate design curricula
Build diverse teams to improve collective decision making
Empower individuals with responsibility

Surroundscapes

The next generation of audio

Surroundscapes: the power of immersive sound

Geoff Taylor, CEO, the BPI: “Cutting-edge new tech, such as immersive audio used in VR and other applications, give us a glimpse of how this exciting new world of consumption and entertainment will take shape.”

In the early days of silent film, music was performed live. There was a band or pianist that would play in front of the screen, serving as commentary for the narrative’s action and flow. In 1927, music and sound were recorded and printed onto film for the first time. Slowly, as film technology became more sophisticated, it encompassed multiple channels of sound, with a number of speakers placed around the theatre. Eventually, surround sound was developed, and became a critical factor in the success of the overall cinematic experience.

Immersive content is no different. If one thinks of virtual reality as an attempt to create an alternative reality, the brain needs audio cues to match the visuals to be able to buy into the illusion. The user has to feel present in the experience, and can only feel present if all the cues received are completely natural. If the sound is flat and coming from just one place, the spell will be broken.

When delivered successfully, immersive audio creates the sensation of height around the user, transporting them into a more thrilling experience. Because sound can alert users to something behind or above them, it helps them realise that they are able to move around within the immersive experience. When creating virtual reality (VR) and augmented reality (AR) experiences, the industry has for a long time focused on the visuals, but those are only part of the environment.

This year, at SXSW, Tribeca and the Venice Film Festival there was a noticeable rise in sound-led immersive experiences: sound has become a powerful storytelling tool.  

Last year, Bose launched Bose AR, and with it three products to transform AR audio. Launched alongside these products, the software to create AR content is now available with the world's first audio AR platform: Traverse.

At SXSW, Traverse's "From Elvis in Memphis", an AR-based piece of content, allowed users to experience the music of Elvis Presley by walking through a physical space. The experience is created in such a way that it's like being in the studio with Elvis; it's possible to walk right up to him and his band members.

In the UK, Abbey Road Studios is one of the most famous recording studios in the world. It has been in use since 1931, and has famously provided recording facilities for talents such as The Beatles, Pink Floyd and Aretha Franklin. Abbey Road is the only facility in the UK to offer both scoring and film sound post-production, while the focus on immersive technology grows year on year.

Our research has identified a number of companies in the UK which are creating sound-based tools and solutions. There are even more creating sound-led immersive experiences. Two companies from our CreativeXR programme this year are doing just that: Darkfield and Abandon Normal Devices. On last year's programme, Roomsize developed Open Space: a platform that enables the rapid construction of interactive audio experiences that occupy physical spaces. All this activity suggests that we are on the brink of a new generation of infrastructure to amplify sound in VR, AR and MR. Sound-led content will simultaneously open up new streams of possibilities for entertainment and media.

In partnership with the British Phonographic Industry (BPI) and its Innovation Hub, we are delighted to introduce Surroundscapes: The power of immersive sound. This is the latest Digital Catapult immersive showcase, which runs from July to October 2019. We will be shining a light on UK-based startups and scaleups that are either creating the latest solutions to amplify VR, AR and MR experiences with sound, or creating content that is specifically sound-led.

Surroundscapes: showcase

After a competitive selection process, Digital Catapult and the BPI welcome six of the most innovative immersive sound companies in the UK: 

1.618 Digital: is an award-winning creative sound design studio that provides audio production and post-production services; immersive and spatial audio solutions for 360 video content; and games and interactive VR/AR media. 

As part of this showcase, 1.618 Digital is proud to present three projects that are on the cutting edge of digital technology for modern education and storytelling and illustrate the innovative applications of immersive audio. Immersive environments allow interactions and user manipulation of objects and sounds, which has been proven to provide 40% more brain activity relating to storage and recall of information. The use of high spatial resolution audio and interactivity along with volumetric audio and 6DOF enables users to engage with stories and other content on a deeper level.

Darkfield: specialises in creating communal location-based immersive experiences inside shipping containers. These experiences place the audience in complete darkness and then deliver binaural 3D audio and other sensory elements, using the darkness to create a canvas for the imagination.

The company has a unique offer grown from over twenty years working in the immersive theatre industry, and over six years creating shows and experiences in complete darkness that use binaural audio, multi-sensory effects and content to place the audience in the centre of evolving narratives. The experiences’ greatest asset is the invitation to walk the line between what seems to be happening and what imaginations can conjure up.

MagicBeans: is a spatial audio company creating a new kind of AR audio. The company maps highly realistic audio ‘holograms’ to real-world locations, visual displays and moving objects, creating a new and emotive presence for experiential businesses and visitor attractions.

The experiences demonstrate how sound can be mapped to visual displays, to individual objects that can be picked up and interacted with, and to a full room-scale audio experience. Experience MagicBeans technology embedded in a next-generation silent disco, an immersive theatre production and a new kind of audio-visual display.

PlayLines: is an immersive AR studio that specialises in creating narrative-led immersive AR experiences in iconic venues. Its productions combine cutting-edge location-based AR technology with game design and immersive theatre techniques. The team’s work has been described as “Punchdrunk Theatre meets Pokemon Go”.

CONSEQUENCES is a groundbreaking immersive interactive audio-AR grime rap opera, created in collaboration with multi-award-winning MC Harry Shotta. Explore the AR Grime Club, follow the rhymes, and choose the ending. CONSEQUENCES gives audiences a brand new kind of night out that combines Secret Cinema, silent disco and ‘Sleep No More’. 

Volta: is a new way to produce music and audio, using space and movement as both a medium and an output. It is a VR application that makes spatial audio production not just easy but expressive, like a musical instrument, and easily integrates with audio production software.

Volta achieves integration by retaining the visual and interactive elements of producing spatial audio within the platform, while keeping all audio signal processing in the producer’s or engineer’s audio production application of choice. It uses a robust communication channel that allows the user to physically grab objects, move them in space, and record and automate that motion.

ZoneMe: ZONEME’s TRUE2LIFE™ object-based sound system provides a new way to control how audiences hear things by placing the sound at the point of origination. For example, words can appear to come from an actor’s mouth, not the speakers, or a gunshot can be made to sound as if it takes place outside the room. 

ZONEME aims to put the ‘reality’ into VR/AR/MR experiences by providing TRUE2LIFE™ sound. These experiences can be up to seven times more stimulating than visuals. Yet visuals have seen huge technological advances over the last 30 years that have not been matched by similar developments in audio. 

Naima Camara, Thursday 04 July 2019

The Power of Procedural Texturing

As with all the procedural techniques covered in more detail in my previous blog entries, this procedure is also used in the creation of textures and is nowadays an integral part of the CGI world.

But what is the advantage of procedural texturing?

Photographing a particular pattern or object with a camera promises an exact image of reality and can certainly deliver excellent and useful end results, but the big difference to a procedural texture lies in the further processing and especially in the flexible reuse of the material. The great advantage, and thus the decisive difference to a "photographed texture", is the dynamic and fast exchange of individual parameters, which in turn leads to a completely new appearance of the texture. Because procedural textures are built from a set of numerical parameters and thus work entirely digitally, the user can adjust them freely at any time, maintaining maximum flexibility in texture design.

Left: photographed texture. Right: procedural textures.

Further advantages are the small storage footprint and the potentially unlimited resolution, since only the procedure used to generate the texture is stored. This is particularly useful, and simplifies a lot, in areas that rely on real-time rendering, for example 3D computer games and similar applications.

Thanks to Perlin noise!

The origin of parametric texture synthesis is Perlin noise, a mathematical function developed in 1982 by Ken Perlin for the film "Tron" that makes texture images more varied through random distortion. By his own account, he was very annoyed that content produced with CGI at the time could clearly be distinguished from real images; in his opinion, computer-generated imagery had too obvious a "machine-like" look.

This problem was largely solved with the help of his Perlin noise. Perlin noise is a method that makes surfaces and their associated textures appear (more) natural through its "randomness formula". With the help of this function, graphic designers and 3D artists can depict many natural phenomena visually better and, above all, more realistically, because it is based on a pseudo-random system that the designer can manually control and steer at any time.

Perlin-Noise

Typical examples are synthetic textures such as fire, smoke or clouds, but the method can be applied to all kinds of textures and other object surfaces, including procedurally driven textures.

Landscape generated with Perlin noise
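To make the idea behind gradient noise more concrete, here is a minimal Python sketch of one-dimensional Perlin-style noise. It is only an illustration, not Ken Perlin's reference implementation: pseudo-random gradients sit at integer lattice points and are smoothly blended in between, and several octaves can be layered for a more natural, fractal look such as a simple terrain profile.

```python
import numpy as np

rng = np.random.default_rng(seed=42)          # fixed seed, reproducible "randomness"
gradients = rng.uniform(-1.0, 1.0, size=256)  # one pseudo-random gradient per lattice point

def fade(t):
    # Perlin's fade curve 6t^5 - 15t^4 + 10t^3 for smooth blending
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise(x):
    """1D gradient noise for a (possibly fractional) coordinate x."""
    i0 = int(np.floor(x)) % 256   # left lattice point
    i1 = (i0 + 1) % 256           # right lattice point
    t = x - np.floor(x)           # position between the two lattice points
    g0 = gradients[i0] * t        # contribution of the left gradient
    g1 = gradients[i1] * (t - 1)  # contribution of the right gradient
    return (1 - fade(t)) * g0 + fade(t) * g1

def fractal_noise(x, octaves=4):
    # Layering octaves at doubled frequency and halved amplitude gives a richer look.
    return sum(noise(x * 2 ** o) / 2 ** o for o in range(octaves))

heights = [fractal_noise(x * 0.1) for x in range(100)]  # e.g. a simple 1D terrain profile
```

Changing the seed, the number of octaves or the frequency factor immediately produces a different texture, which is exactly the flexibility described above.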

Sources:

https://de.khanacademy.org/computing/computer-programming/programming-natural-simulations/programming-noise/a/perlin-noise

https://de.m.wikipedia.org/wiki/Textursynthese

https://en.m.wikipedia.org/wiki/Procedural_texture

https://en.m.wikipedia.org/wiki/Perlin_noise

House with germs and bulletproof skin. BioDesign.

Society is increasingly trying not to subjugate nature but, on the contrary, to learn from it. With knowledge of the properties of certain organisms, we can revolutionize design: for example, grow bulletproof skin or create lamps that produce light in an alternative way.

BioDesign

Biodesign is associated with the design of hybrid forms of living organisms and modern technologies in order to enhance certain properties of organisms and increase their chances of survival.

In many ways, the task of a biodesigner is to address the challenges posed by the impending climate crisis. The field of biodesign cannot be imagined without the interaction between designers and scientists who know how the body works at the molecular level.

On the one hand, the question arises whether we have the right to interfere with the life of other living beings and transform it; on the other hand, biodesign, in theory, is guided by noble ideas and is looking for ways to make life easier (not only for people, but also for other living organisms) in extreme conditions.

Researchers and designers are studying the properties of various microorganisms, which are the first to develop cunning mechanisms for adapting to new conditions. For example, inside nuclear reactors scientists have found microbes able to protect and repair their DNA despite huge doses of radiation. Studying and using such a defense mechanism could help scientists looking for a cure for cancer. Another of these superhero microorganisms managed to survive on the hull of the ISS for 553 days, withstanding the cold of space. For biodesigners, such discoveries are a real find, given that up to 90% of the cells in and on the human body belong to symbiotic microbes. That is, theoretically, we can "add" tiny neighbors to ourselves and gain some of their super-abilities.

Biodesign is not equal to biomimicry.

The term "biomimicry" denotes an approach in which a design borrows or copies principles observed in nature. The biologist Janine Benyus detailed the main ideas of this approach in her 1997 book. One problem with the term biomimicry is that it is used too broadly. Often the connection with nature in such designs is limited to imitating a form or material for a symbolic, decorative effect. The form then turns out to be divorced from the core idea of biodesign: to live in harmony with existing ecosystems.

Examples:

BioConcrete

This experimental technology was invented by Henk Jonkers. Bioconcrete contains bacteria of the species Sporosarcina pasteurii, which naturally produce limestone under suitable conditions. Before the bioconcrete hardens, the bacteria are mixed in together with nutrients. Over time, as cracks begin to form in the concrete structure, the bacteria help fill them by producing limestone. This hybrid material extends the life of the man-made structure, reduces maintenance costs and minimizes the carbon footprint. Since concrete is one of the most common building materials, such an improved technology could significantly reduce the harmful human impact on the environment.

Botany as architecture

The architecture technique, invented by Ferdinand Ludwig, Cornelius Hackenbracht and Oliver Storz of the University of Stuttgart, uses trees as a supporting structure. Species with strong trunks and root systems, such as willow, are suitable for this purpose, and the branches and trunks in such a structure only become stronger over time. It is important for the authors of the idea to emphasize that this architecture is not static; like all the elements of nature around it, it is subject to change and transformation.

EcoCradle

This packaging material was developed by the American company Ecovative Design as an alternative to containers that are made from petroleum products and pollute the environment. The basis of EcoCradle is mycelium, the dense, root-like network grown by fungi. Waste from the local agro-industry can be used to create such eco-containers.

A microbial home

The concept of a microbial home was invented by the Dutch subsidiary of Philips. You could say it is a version of a smart home in which all the work is done and coordinated by microbes, bacteria and fungi. Thanks to them, devices for heating, cooling, growing food and processing food waste can function, and together they form a closed ecosystem. For example, a methane bio-digester helps solve the problem of recycling food waste and at the same time produces the gas that runs the stove.

LatroLamp

An experimental lamp design by Mike Thompson illustrates an alternative way of producing light. In this case, it is generated using gold nanoelectrodes into which algae cells have been implanted: thanks to photosynthesis, we get an electric current.

In addition, Jalila Essaidi is an artist, designer and researcher who invented and created bulletproof skin, a hybrid of human skin and spider silk:

The eye on the belly of a tadpole

The manipulation of the electrical status of cells allowed scientists at Tufts University, Massachusetts to grow fully formed eyes in the intestines and on the tail of tadpoles. As strange as it sounds, the demonstrated result is an important step towards the regeneration of complex organs and the evolution of design.

In earlier work, the researchers managed to grow a new tail for a tadpole to replace a lost one. To do this, they changed the electrical properties of cells by stimulating their uptake of salt. These and later results indicate that not only chemical but also physical factors play an important role in directing cell differentiation during the development of the animal organism.

The membranes of all cells have a certain electrical potential, which arises from the difference in the concentrations of charged sodium, potassium and calcium ions in the intracellular and extracellular space. The movement of ions across the membrane is regulated by so-called ion channels. For most cells, apart from nerve and muscle cells, the charges created by these currents are very small and are measured in millivolts. However, it turned out that the difference in potential between different cells plays an important role in cell migration and development.

The researchers found that about 19 hours after fertilization of a Xenopus laevis egg, the membrane potential of some cells in the embryo drops by about 20 millivolts. The eyes of the animal later form at the location of these cells. At the same time, introducing compounds that block this change in potential prevents the eyes from forming.

To test the hypothesis that the electrical properties of the cell play an important role in the formation of the eye, the scientists embedded ion channels that can produce the characteristic decrease in membrane potential into cells of the intestine and tail of tadpoles. As a result, each stimulation of these channels led to the formation of an additional eye at the chosen location.

Previously, experts believed that eyes can only form from head cells that express certain regulatory proteins. However, it turned out that the primary signal in this case is a change in potential, which somehow triggers the production of the necessary proteins.

The researchers believe that in the future, similar approaches could be used to stimulate the formation of organs from stem cells in the laboratory. However, some experts doubt the success of such experiments, since the course of the process of organ formation is most likely determined by a change in the electrical status of the cell relative to the surrounding cells, and not simply by reaching a certain level of membrane potential.

An eye formed in the intestine of a tadpole is circled in red.

Drum recording in When The Levee Breaks

Today we talk about Led Zeppelin, or rather “The Hammer of the Gods“, and their song When the Levee Breaks, included in their fourth masterpiece “Led Zeppelin IV”.

They are certainly one of the most important bands in the history of music; their innovative sound and incredible talent have been the key to their success, and the impact they have had on the world of music is impossible to quantify.

Another hallmark of the (best) band is the visionary mindset of their guitarist Jimmy Page, not only as a musician, but also as a sound engineer.

In December 1970 the recording sessions for some parts of the album (Black Dog, Four Sticks) and, above all, When The Levee Breaks took place at Headley Grange (Hampshire, England), a poorhouse from the 1790s that had been converted into a residence in the late 1800s.

After the first few songs they got a new drum set and told the delivery guys to leave it in the huge hallway.

Then the drummer, John Bonham, came out to test the kit. Jimmy Page was so amazed by the sound he heard in the hall that he literally said, "Let's not take the drums out of here!"

The production of the album was done together with Andy Johns; he and Page experimented a lot with how to capture that huge drum sound in a recording.

The solution? They hung the microphones (Beyerdynamic M160s) from the second floor, at the top of a stairwell in Headley Grange, with the drums set up at the bottom.

Heavy amounts of compression were applied to the microphone signals to bring out the excitement of the room.

Another curious fact: the drummer made some changes to the drums, so they didn't have to mic the kick!

Combined with John Bonham's powerful and unique sound, the result was amazing: something no one had ever done before.

This move continued Page’s philosophy of ambient miking for drums, rather than putting mics directly on instruments. According to him, the drums must breathe.

Here you can enjoy this Meisterwerk:

Information Theory

At the beginning of the 20th century, Gestalt psychology built on the scientific findings of the late 19th century. In the following years there were more and more objections to these findings, since they were too closely tied to subjective sensations and were difficult to measure. The mathematicians R. A. Fisher, C. E. Shannon and N. Wiener tried to remedy these shortcomings with the information theory they developed in the middle of the 20th century. The development of electronic news media at that time and the spread of data-collecting and data-processing computers demanded a theoretical foundation. In this information theory there is a sender who sends a message, in words and/or images, to a receiver. The messages consist of signs that can be known and recognized by both the sender and the receiver.
It is important to recognize that the substance of a message arises from the relationship between the unforeseen (the new) and what is already known and superfluous (the so-called redundancy).
The optimal message contains a slight excess of the new, which arouses interest and attention.

At this point Gestalt theory and information theory overlap. The connection between the two doctrines was established by the psychologist and sociologist Abraham Moles. He applied information theory, which had initially been developed from a physical-mathematical perspective and mainly applied to material systems, to human sensations, that is, to the problems of perception.

The unpredictable, hard-to-grasp part of a message, its originality or novelty content, cannot be quantified using the Gestalt laws. In information theory, however, this content can be measured in "bits", the unit for the smallest amount of information. A human has a processing capacity of roughly 16 bits per second. For the composition of an image, however, it is not necessary to assign a numerical value to its novelty content. What matters is recognizing that novelty value is a decisive factor in the development of an image.
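As a small, self-contained illustration of measuring information in bits, here is a Python sketch of Shannon's self-information and entropy. The symbol probabilities are invented for the example; the point is only that rare (new) symbols carry more bits than common (redundant) ones, which is exactly the balance described above.

```python
# Shannon's measure of information content in bits.
# Rare (unexpected) symbols carry more bits; common (redundant) symbols carry fewer.
import math

def information_content(p: float) -> float:
    """Self-information of one symbol with probability p, in bits."""
    return -math.log2(p)

def entropy(probabilities: list[float]) -> float:
    """Average information per symbol (Shannon entropy) of a source, in bits."""
    return sum(p * information_content(p) for p in probabilities if p > 0)

# Example: a source with one very common and three rarer symbols
probs = [0.7, 0.1, 0.1, 0.1]
print(information_content(0.7))  # ~0.51 bits: the expected symbol tells us little
print(information_content(0.1))  # ~3.32 bits: a surprising symbol tells us more
print(entropy(probs))            # ~1.36 bits per symbol on average
```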

The significance of Gestalt theory and information theory for image composition
Gestalt theory teaches us that our perceptual system constantly strives to simplify the recognition and understanding of our environment by ordering and grouping, so that we can find our way around it more easily.

Information theory teaches us to balance contrasting components of a piece of information, such as the new and the familiar, so that the result is a message that is both comprehensible and interesting. While Gestalt theory enables a formal-aesthetic assessment of images, information theory serves to assess their novelty content. There is, however, no absolute yardstick for this, because what is new to one person may have long been familiar to another. Originality depends essentially on the viewer's knowledge and experience. (Weber 1990)

An image message can never be an absolute reproduction of reality, because between sender and channel (object, photographer and photograph) there is an interrelation in which the photographer's subjectivity, the way he sees things, finds expression. It is left to the receiver, the viewer, to gauge, again subjectively, the degree of correspondence between reality and its depiction and to evaluate the image.

Source:
Weber 1990 Ernst A. Weber: Sehen, Gestalten und Fotografieren. Basel; Boston; Berlin: Birkhäuser, 1990
Images:
Weber 1990, page 28

The Figure-Ground Relationship

The eye and brain do not simply register like a camera. Images are not recorded, either briefly or permanently, as a fixed picture. "The process of seeing is rather a constructive process in which complete patterns are perceived and compared with patterns and experiences already stored in the brain, in order to arrive at a recognition and identification of what is seen." (Weber 1990, p. 15) With his 1886 book "Die Analyse der Empfindung", the Austrian physicist, psychologist and philosopher Ernst Mach provided the impetus for founding the school of Gestalt psychology. He recognized that, when we perceive objects that are not too complicated, the form as a whole dominates over further distinguishing features.

When we look at a tree, we do not see the individual leaves, branches and the trunk; the tree is perceived as a whole. The same is true of the dromedary.
(Weber 1990, p. 15)

Around 1910, the psychologists Max Wertheimer, Wolfgang Köhler and Kurt Koffka, the most important representatives of Gestalt psychology, went on to formulate the distinction between figure and ground as an essential criterion of Gestalt theory.

Gestalt
An object can be perceived when it stands out from its surroundings through contrast. This boundary of distinction forms the outer shape of the object. Its "Gestalt" arises through the spontaneous ordering and grouping of individual visual elements into a whole. "The Gestalt forms a clearly recognizable whole that is structured and closed and stands out clearly from its surroundings. [...] The whole is different from the sum of its individual visual elements." (Weber 1990, p. 16)

To illustrate the transition from the small parts to the larger whole, the Venus de Milo is shown here as an example. The torso of the Venus was assembled from individual letters. One can either recognize each individual letter, each individual visual element, or the grouping of the letters into a whole, the Gestalt. (Computing Center of the Technical University of Berlin)

The figure-ground relationship
The most important insight of Gestalt theory lies in the distinction between figure and ground.

When we look at an image, within the first 1/100 of a second we involuntarily select one object in front of the rest of the scene, the background, as the figure. In the first phase of perception we therefore distinguish between the figure (which seems important to us) and the background (which seems unimportant to us).

Unlike the free-standing sphere in the picture on the right, the sphere on the left is barely distinguishable as a figure from the ground. (After A. A. Moles, from "Kunst und Computer")

Five factors determine the distinction between figure and ground:
1. The figure must stand out from the ground.
2. The smaller area is usually seen as the figure, the larger one rather as the ground.
3. Figure and ground cannot be perceived at the same time.
4. Above all, visual elements that lie close together and resemble one another are grouped into a figure.
5. Symmetry and closed forms are preferentially perceived as the figure.

Source:
Weber 1990 Ernst A. Weber: Sehen, Gestalten und Fotografieren. Basel; Boston; Berlin: Birkhäuser, 1990
Images:
Weber 1990, pages 15, 16