EUCLIDEAN RHYTHM – Part I – Theory

The Euclidean rhythm, or rather the Euclidean algorithm, belongs to the family of polyrhythms. It is based on a concept by the mathematician Euclid of Alexandria: in essence, it is about finding the greatest common divisor of two numbers. Here is a sound example to start with:

Wouter Hisschemöller – Music Pattern Generator v2.1

A brief history:

As early as 2,300 years ago, the mathematician Euclid of Alexandria wrote his most important work, the "Elements". In it, he also describes a method for finding the greatest common divisor of two numbers. This method is known as the Euclidean algorithm.

Rhythm and mathematics:

In 2004, the Canadian computer scientist Godfried T. Toussaint discovered the relationship between the Euclidean algorithm and rhythms in music. With it, many well-known rhythms can be computed, for example from rock 'n' roll and from African and South American music.

What is a 'good' rhythm? One whose beats are distributed as evenly as possible across a given time interval, which is exactly what the Euclidean algorithm does.

The interesting thing about Euclidean rhythms is that they are not endlessly uniform but varied, and therefore more engaging for the listener. They strike a good balance between monotonous and overly complex while distributing all beats as evenly as possible.

How does the Euclidean rhythm work?

We always start with two numbers, e.g. 13 and 5. The smaller number is repeatedly subtracted from the larger one until no remainder is left.

Tip: The Euclidean algorithm works particularly well when you use prime numbers.

For the Euclidean rhythm, the journey is the destination, so to speak. From the numbers we can read off the beats (written as 1) and the rests (written as 0). It works like this:

13 – 5 = 8  [1 1 1 1 1]  [0 0 0 0 0 0 0 0]

8 – 5 = 3  [1 0]  [1 0]  [1 0]  [1 0]  [1 0]  [0 0 0]  five zeros are moved forward and placed after the 1s; three remain at the end

5 – 3 = 2  [1 0 0]  [1 0 0]  [1 0 0]  [1 0]  [1 0]  the three remaining zeros are moved forward and placed after the first [1 0] groups

3 – 2 = 1  [1 0 0 1 0]  [1 0 0 1 0]  [1 0 0]  the two [1 0] groups are moved forward and placed after the [1 0 0] groups; the rhythm is now complete

2 – 1 = 1  since a rhythm is cyclic, shifting groups forward would no longer make a difference from here on

1 – 1 = 0
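To make the procedure concrete, here is a minimal C++ sketch (my own illustration, not taken from any of the tools linked below). It uses the common "bucket" or accumulator formulation of the Euclidean rhythm, which distributes the beats as evenly as possible; note that it may return a rotation of the pattern derived by hand above, which is equivalent because the rhythm is cyclic.

#include <iostream>
#include <vector>

// Euclidean rhythm E(beats, steps): distribute `beats` ones over `steps`
// slots as evenly as possible. The accumulator gains `beats` each step,
// and a beat (1) is emitted whenever it overflows `steps`.
std::vector<int> euclideanRhythm(int beats, int steps) {
    std::vector<int> pattern;
    int bucket = 0;
    for (int i = 0; i < steps; ++i) {
        bucket += beats;
        if (bucket >= steps) {
            bucket -= steps;
            pattern.push_back(1);  // beat
        } else {
            pattern.push_back(0);  // rest
        }
    }
    return pattern;
}

int main() {
    // E(5, 13), the example worked through above (up to rotation)
    for (int v : euclideanRhythm(5, 13))
        std::cout << v << ' ';
    std::cout << '\n';  // prints: 0 0 1 0 0 1 0 1 0 0 1 0 1
}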

For an easier understanding, this page lets you see and try out visually how the beats and rests are distributed: https://dbkaplun.github.io/euclidean-rhythm/

There is also a list of number combinations and their rhythm patterns here: http://www.iniitu.net/Euclidian_Erd%C3%B6s_Deep_Aksak_rhythms.html

Finally, two examples to listen to:

Sources:

Ableton AG. “Don’t DJ über polymetrische, polyrhythmische, zirkuläre Musik | Ableton”. Accessed 15 December 2021. https://www.ableton.com/de/blog/dont-dj-moving-in-circles/.

bettermarks. “Euklid von Alexandria”. Accessed 15 December 2021. https://de.bettermarks.com/mathe/euklid-von-alexandria/.

Toussaint, Godfried. “The Euclidean algorithm generates traditional musical rhythms”. In Proceedings of BRIDGES: Mathematical Connections in Art, Music and Science, 47–56, 2005. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.1340&rep=rep1&type=pdf.

W., Bernd. “Euklidische Rhythmen – Ist die Mathematik ein guter Percussionist?” Tropone Sounds (blog), 18 October 2018. http://tropone.de/2018/10/18/euklidische-rhythmen-ist-die-mathematik-ein-guter-percussionist/.

Birdman’s Soundtrack

In this blog post, I would like to write about a fairly unusual film soundtrack: the one from Birdman or (The Unexpected Virtue of Ignorance).

Birdman is a 2014 American comedy-drama directed by Alejandro G. Iñárritu.

Here is a short summary, just so you know what it is about:

“Riggan Thomson is an actor who played the famous and iconic superhero ‘Birdman’ over 20 years ago. Now, in middle age, he is staging his Broadway debut, a drama called ‘What We Talk About When We Talk About Love,’ adapted from a Raymond Carver story. With the help of his assistant and daughter Sam and his producer Jake, he pushes the show toward its premiere, even as a talented actor he has hired, Mike Shiner, behaves erratically, internal conflicts simmer between him and the rest of the cast, his utmost efforts with the critics prove futile, and the unexpected voice of his old character, Birdman, pushes his sanity to the limit before the debut show.” [1]

Besides the movie itself, what I find really interesting is the soundtrack.

There are several classical music pieces, mainly from the 19th century (by composers such as Mahler, Tchaikovsky, Rachmaninov and Ravel), and several jazz compositions by Victor Hernández Stumpfhauser and Joan Valent. But those are just “outlines” of the real thing.

Most of the score consists of solo jazz percussion performances composed and played entirely by Antonio Sánchez.

It is a rather unusual choice for a film, as the drum kit is a pure percussion instrument: no harmony and (almost) no melody.

But why? As the director said:

“The drums, for me, were a great way to find the rhythm of the film … In comedy, rhythm is king, and not having the editing tools to determine time and space, I knew I needed something that would help me find the internal rhythm of the film.”

When the director contacted Sánchez and invited him to work on the film, the composer felt a little unprepared and surprised, as he put it:

“It was a scary proposition because I had no benchmarks on how to achieve this. There is no other film that I know of with a soundtrack like this.” Sánchez hadn’t worked on a movie before either.

He first composed “rhythmic themes” for each of the characters, but Iñárritu preferred spontaneity and improvisation, telling him: “Man, this is absolutely the opposite of what I was thinking. You’re a jazz musician. I want you to have a completely jazzy approach, one that improvises, with nothing too preprogrammed. I just want you to react to the story.” [2]

After Sánchez visited the set for a couple of days to get a better idea of the film, he and Iñárritu went to a studio to record some demos. During these sessions the director first talked him through the scene; then, as Sánchez improvised, he guided him by raising his hand to indicate an event, such as a character opening a door, or by describing the rhythm with verbal sounds. They recorded about seventy demos and, once shooting was finished, placed them in the rough cut.

Iñárritu liked the way the soundtrack complemented the action, but not how the drums actually sounded. Having been recorded in a New York studio, the audio was extremely crisp and clear, not quite the mood they wanted for a film steeped in tension and dysfunction.

So Sánchez headed to Los Angeles to re-record the tracks.

They wanted “rusty, out of tune drums that hadn’t been played in years”, so Sánchez purposely degraded his kit: “I didn’t tune the drums, I used mismatched heads, stuck duct tape on the heads to deaden them, and put things on the cymbals to make them sound a little broken. I also stacked two and three cymbals on top of each other, metal on metal. Instead of a sustained sound, you get a dry, trashy one. It worked a lot better.”

Iñárritu also pushed his sound design team hard.

In one scene, Riggan (Michael Keaton) and Mike (Edward Norton) pass a drummer on the sidewalk outside the theater. The drum sounds change multiple times during the sequence: first when Keaton leaves the quiet of the theater and steps out onto the New York City street, then again as he and Norton approach and move past the drummer, Nate Smith. Iñárritu wanted the volume of the sidewalk drummer to rise and fall as Keaton and Norton walked by, and they achieved it in the most authentic way possible.

“We actually brought the drums out onto the street near the studio,” Sánchez recalls. “There were a couple of sound guys a block away with mics that had really, really long cables. I started playing, and they walked the whole block, right past my drums, and kept walking to the next block. Then they came back. That’s how Alejandro approaches his work. Anybody else probably would have just turned the volume up and down.”

“The movie fed on the drums, and the drums fed on the imagery”.

The official soundtrack was released as a CD (77 min) in October 2014, and as an LP (69 min) in April 2015.

It won the Grammy Award for Best Score Soundtrack for Visual Media.

Resources

[1] Birdman Plot Summary https://www.imdb.com/title/tt2562232/plotsummary

[2] Neil Janowitz – “Birdman” composer drumming out the film’s soundtrack

[3] Wikipedia – Birdman

[4] Wikipedia – Birdman (film score)

The effect of an artificial sonic ambiance in office buildings

Have you ever thought about the sounds inside office spaces in different buildings? How does a concrete space feel compared to one made of glass and wood? And what if the sounds you heard were actually not generated by the space itself?

I’m on the 20th floor of an office building on Wall Street. One of the offices inside is equipped with about a dozen speakers, some sitting on plinths, others mounted on the ceiling. Aric Marshall, of audio software company Spatial, is leading a demonstration of a new soundscape designed for the workplace. Holding his phone, he says “Just listen,” and touches the screen. I ready myself to hear the soundscape come out through the speakers, but just the opposite happens. The sound I hadn’t consciously processed turns off, plunging the room into a cold, uncomfortable, almost metallic silence. Without me realizing it, a soundscape had been playing all along, in this case a muted, almost imperceptible pitter-patter of rain falling on the roof of a wooden cabin, coating the concrete office with a sort of soft, sonic balm.

Nowadays, our senses are bombarded from every side. Companies are competing for our attention any way they can, and many have now started using sound as a marketing strategy. Brands like Abercrombie & Fitch and Mastercard have begun using their own signature soundscapes in stores in order to stick in consumers’ minds.

The article author goes on: “This week, I experienced what an office could sound like if someone intentionally designed it that way. Here, that someone is in fact two companies: sonic-branding studio Made Music Studio and Spatial, an immersive audio-software platform. As companies continue with their quest to lure tenants back into the office, both are betting that bespoke soundscapes can provide a resounding advantage.”

Made Music Studio has been experimenting with implementing soundscapes that evoke an emotional response and deepen the immersion for customers. Imagine walking into a hotel that offers a “welcoming ambience,” a “focusing ambience” and an “energising ambience.”

Resources

https://www.spatialinc.com/news/fast-company-i-got-to-hear-what-an-office-can-sound-like-with-the-help-of-sonic-ambiance

Live Performance Environment: Camelot

This might be interesting for all of you who want to do a live performance! 🙂

Setlist manager, digital mixer, software instrument and effects host, PDF sheet music player, multitrack audio player, and highly advanced patchbay and MIDI router: these are the features of Camelot, a new live performance software developed by Audio Modeling.

It is also designed so that no manual is needed; everything is clear, and with just a quick guide you have access to all the features.

As they stated:

“Camelot is an application created to address the most complex live performance needs with a simple and guided workflow. What in the past could only be achieved with a complex configuration of several applications and devices connected together, Camelot achieves in a single comfortable and well-designed workstation. A Song in Camelot gathers all your configurations of hardware instruments and MIDI routing, software instruments and FX instances, sheet music and audio backing tracks.”

It is also quite affordable considering all of its features, and there is a free version as well, in which only a few features are missing.

If you are interested, you can take a look here:

https://audiomodeling.com/camelot/overview/

Bela Board

What is it?

Bela is an open-source embedded computing platform for creating responsive, real-time interactive systems with audio and sensors.

Bela was born in the Augmented Instruments Laboratory, in the Centre for Digital Music at Queen Mary University of London. Bela is now developed and produced by Augmented Instruments Ltd in London, UK.

It provides low latency, high quality audio, analog and digital I/O in a really small package that can be easily embedded into a huge range of applications.

It can be used, for example, to create musical instruments and audio effects.

Built on the BeagleBone family of open-source embedded computers, Bela combines the processing power of an embedded computer with the timing precision and connectivity of a microcontroller.

In computing, “real-time” refers to any system that guarantees response within a specified time frame. This is called the “real-time constraint”. Bela’s real-time constraint is processing audio frames at audio rate, which is 44,100 frames per second.

There are different types of real-time systems, often called soft, firm, or hard. A firm or soft real-time system can miss a few deadlines, but performance will degrade if too many deadlines are missed. A hard real-time system is one that is subject to stringent deadlines, and must meet those deadlines in order to function. Bela is a hard real-time system, meaning that it must process audio frames at audio rate in order to function.
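As a quick back-of-the-envelope illustration of what that deadline means in practice (the block size of 16 frames here is an assumed example; Bela supports several block sizes):

#include <iostream>

int main() {
    const double sampleRate = 44100.0;  // audio frames per second
    const int blockSize = 16;           // frames per processing block (assumed)
    // every block must be fully processed within this window, every time
    double deadlineMs = 1000.0 * blockSize / sampleRate;
    std::cout << deadlineMs << " ms per block\n";  // about 0.36 ms
}

Miss that window in a hard real-time system and the audio output glitches; that is the sense in which Bela must meet its deadlines in order to function.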

Bela features an on-board IDE that launches right in the browser, making it easy to get up and running without requiring any additional software.

It has just 0.5ms latency while retaining the capabilities and power of a 1GHz embedded computer running Linux.

Why choose Bela? Because no complicated setup or complex toolchain is needed. Just connect Bela to your computer, launch the on-board IDE in a web browser, and start coding right away.
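To give an idea of what Bela code looks like, here is a minimal sketch using Bela's C++ API with its setup()/render()/cleanup() callbacks; it plays a 440 Hz sine wave on all audio outputs. Treat it as an illustrative sketch rather than a tested project.

#include <Bela.h>
#include <cmath>

float gPhase = 0.0f;               // oscillator phase
const float gFrequency = 440.0f;   // sine frequency in Hz

bool setup(BelaContext *context, void *userData) {
    return true;  // nothing to initialize in this example
}

// render() is the hard real-time callback: it must fill each block of
// audio frames before the deadline discussed above.
void render(BelaContext *context, void *userData) {
    for (unsigned int n = 0; n < context->audioFrames; ++n) {
        float out = 0.8f * sinf(gPhase);
        gPhase += 2.0f * (float)M_PI * gFrequency / context->audioSampleRate;
        if (gPhase > 2.0f * (float)M_PI)
            gPhase -= 2.0f * (float)M_PI;
        // write the same sample to every audio output channel
        for (unsigned int ch = 0; ch < context->audioOutChannels; ++ch)
            audioWrite(context, n, ch, out);
    }
}

void cleanup(BelaContext *context, void *userData) {}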

Bela comes in two versions: the Bela Board and the Bela Mini.

Bela Board has 8 channels of 16-bit analog I/O, 16 digital I/O, stereo audio I/O, and 2 built-in speaker amplifiers.

Bela Mini is one third the size of the full-size board. It features 8 16-bit analog inputs and 16 digital I/O, as well as 2 channels of audio input and output.

It runs a custom audio processing environment based on the Xenomai real-time Linux extensions. Your code runs in hard real-time, bypassing the entire operating system to go straight to the hardware.

What’s really great is that you can code in various languages such as C++, SuperCollider, Csound and Pure Data. This makes it really user-friendly!

Bela is ideal for creating anything interactive that uses sensors and sound. So far, Bela has been used to create:

  • musical instruments
  • kinetic sculptures
  • wearable devices
  • interactive sound installations
  • effects boxes
  • robotic applications
  • sensorimotor experiments
  • paper circuit toolkits
  • e-textiles

Here are some examples:

Light Saber

of Nature and Things – de Shroom (the Shroom) – Sound Art Installation

Microtonal subtractive synthesizer (Bela Mini + Pure Data)

SoundInk (Bela Mini + Pure Data)

Noise do not Judge (Bela, Trill Craft + Pure Data)

Sonic Tree (Bela + SuperCollider)

Resources

bela.io

Music for Solo Performer

Music for Solo Performer is a composition by the American experimental music composer Alvin Lucier.

Mr. Lucier was a music professor at Wesleyan University in Middletown, Connecticut and also a member of the influential Sonic Arts Union (a collective of experimental musicians).

His work explores acoustic phenomena and auditory perception. Influenced by science, it probes the physical properties of sound itself: the resonance of spaces, phase interference between closely tuned pitches, and the transmission of sound through physical media.

He is best known for the piece “I Am Sitting in a Room” from 1969, in which he plays with the resonance of different rooms.

But the proper beginning of his compositional career is defined by his 1965 composition Music for Solo Performer.

This composition is also referred to as the ‘brain wave piece’ and is considered to be the first musical work to use brain waves to directly generate the resulting sound.

The mechanics of the piece are actually quite simple: alpha brain waves are picked up by electrodes attached to the performer’s scalp, and the low-frequency pulses (typically between 9 and 15 hertz) are first sent into amplifiers to boost their volume. A bandpass filter then cleans up the signal, which a second performer at a mixing board routes through a number of loudspeakers attached to percussion instruments and other objects, setting them into vibration with these massive, low-frequency thumps.
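As a rough illustration of the filtering stage in that chain, here is a small C++ sketch: a standard biquad bandpass (coefficients from the widely used RBJ "Audio EQ Cookbook") centered on the alpha band. The sample rate, gain and filter settings are illustrative assumptions, not Lucier's actual equipment settings.

#include <cmath>

// Biquad bandpass (RBJ Audio EQ Cookbook, constant 0 dB peak gain),
// here tuned to pass roughly the 9-15 Hz alpha band.
struct Bandpass {
    double b0, b1, b2, a1, a2;              // normalized coefficients
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // filter state

    Bandpass(double fs, double f0, double q) {
        double w = 2.0 * M_PI * f0 / fs;
        double alpha = std::sin(w) / (2.0 * q);
        double a0 = 1.0 + alpha;
        b0 =  alpha / a0;
        b1 =  0.0;
        b2 = -alpha / a0;
        a1 = -2.0 * std::cos(w) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    double process(double x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};

int main() {
    const double fs = 1000.0;            // assumed EEG-style sample rate
    Bandpass alphaBand(fs, 12.0, 2.0);   // center 12 Hz, Q = 2 (~9-15 Hz)
    const double gain = 50.0;            // stand-in for the amplifier stage

    double electrodeSample = 0.001;      // one raw EEG sample (illustrative)
    double out = alphaBand.process(gain * electrodeSample);
    // `out` would then be routed to loudspeakers driving the percussion
    (void)out;
}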

This work, however, does not attempt to use or demonstrate mind control, but rather shows how to creatively use the consequences of what the mind produces, as Lucier said:

“The idea was that I didn’t want to show mind control… Discovery is what I like, not control… So I completely eschewed that form… and let that alpha just flow out, and the composition was then how to deploy those speakers, and what instruments to use.”

What does this performance look like?

A man sits in a chair in the middle of a concert hall, perfectly still and dressed in a suit and tie, surrounded by a veritable orchestra of percussion instruments: timpani, gongs, bass and snare drums and cymbals, without performers to bring them to life.

Out of nowhere, thunderous bass sounds bring these instruments into vibratory oscillation, their sonic activity spreading around the space in an ever-morphing wash of sound. The man sits calmly at the center. His eyes are closed as he listens to this remarkable spectral orchestra, which is being played by his brain.

How did this idea start?

In 1964 Lucier was teaching at Brandeis University and he was trying to find his own path, to work on composition, on his own language.

At the time, physicist Edmond Dewan was working at a lab near Brandeis doing brain wave research for the US Air Force. Dewan was an amateur organist who used to visit the Brandeis music department and was eager to share his ideas and equipment. Lucier spent many hours alone in the Brandeis electronic music studio working with Dewan’s gear: two Tektronix Type 122 preamplifiers in series, a Model 330M Krohn-Hite bandpass filter set for a range of 9 Hz to 15 Hz, an integrating threshold switch and electrodes.

After learning how to produce alpha waves quite consistently, Lucier had to decide what he was going to do with them. They lie between 10 and 14 hertz, below the normal human hearing range, so they can only be perceived as physical rhythmic impulses. His colleagues suggested that he record the alpha waves and create a tape piece by manipulating the recording. Lucier decided against this: although he regularly used a recorded track of accelerated alpha waves during performances to bring the waves into the audio range, the main material of the piece remains raw alpha generated in real time.

This work was also influenced by the strong impression that a Trappist monk’s meditation practice made on Lucier:

“I remember going into the chapel and watching a Trappist monk in the act of contemplation… he was thinking – deeply. It looked like somebody just thinking as hard as he possibly could. I remember I went back an hour later – he was in the same attitude – and I thought, “Well, if there’s any such thing as pure thought, that guy is doing it.” And that impressed me a lot… So when I did the brain wave piece, you’ve got to sit and not think of anything; because if you create a visual image your alpha will block.”

Here is a video of the performance:

Resources

Volker Straebel, Wilm Thoben – Alvin Lucier’s Music for Solo Performer: Experimental Music Beyond Sonification

Wikipedia – Alvin Lucier

Wikipedia – Sonic Arts Union

Andrew Raffo Dewar – Inner Landscapes: Alvin Lucier’s Music for Solo Performer

ProWork3 – Product (Updates)

The semester is nearing its end and after a long journey with our personal projects, we have to conclude our research and present the results. The time really flew by and I feel a bit sad that we are finishing this portion of our studies. In this article, I wanted to give some updates on the current state of my research and product progress.

Recap

My project explores the possibility of training non-synaesthetic people to experience a weaker form of synaesthesia. I wanted to do it with an EP titled CMYK, where each song has a dominant colour based on my own synaesthetic perception.

Product Pt. 1 – Songs

I am in the process of polishing the final details of all four songs in my project collection. Two of the songs on the EP have a similar mood/style, while the other two differ somewhat in genre, with one being significantly darker than the rest.

Product Pt. 2 – Training videos

As part of the research and testing, I came up with a short training video that I will use in order to have a benchmark between different test groups and evaluate the validity of my hypothesis. I kept the video as short as possible because I knew I couldn’t expect unpaid subjects to focus on something for more than 3-4 minutes. Here is the video itself:

Test Groups

My project requires working with two different test groups, with the goal of comparing results and determining whether long-term training produces a positive difference. The first portion of the test has started: the training video is being watched daily for 20 days by 15 participants so far, and the goal is to recruit 15 more. The second test group will be a short-term group that is exposed to the video only once.

GameJam – Impressions

The Serious Game Jam was the first event of its kind that I attended. It opened up a whole new world for me, making me excited about all the possibilities of this competition format. I wanted to talk about my experience during the event and share some final thoughts. At the beginning, the energy was both good and bad: we were excited but also stressed and worried about meeting the deadline. Three people dropped out between Friday night and Saturday morning, so only three of us were left working, and we knew we had to seriously step it up in order to finish. Impressively enough, all three of us were incredibly productive and even managed to sleep enough! We were impressed with each other’s skills, and one outcome of this GameJam is that we will most probably work together on future projects. In the end, we managed to win the best sound design award. I went in with no expectations and came out of the event positively surprised. If it hadn’t been for the Game Jam, I wouldn’t have discovered a conference called Game Development Session. This event enabled “blind speed dating” with companies, and it resulted in one very interested future employer at a great company.


Game Title: Confusion Quest

Premise

This is a logic puzzle game that is supposed to illustrate how it feels to have ADHD. Even the smallest everyday task can seem like an epic medieval quest when a person with ADHD is having a bad mental health day. This is why we decided to go with a medieval aesthetic for the art/design.

Team

FH | Esma Kurbegovic | Character, Animation, Sound

bibibi | Bibi | Artwork, Level Design, Mechanics

grizeldi | Luka | Development

Presentation: https://docs.google.com/presentation/d/1BE4n3kT-cCeYDADyyrwmSB5jF6dVyemub62Gzr27DPc

Game Link: https://grizeldi.itch.io/gaming-gets-serious

Gameplay Video:   

Disney’s Imagineering – Sound

Disney is very well known for the immersive experiences it is able to offer in its parks. Every little aspect has to tie in with the rest in order to form the amazing whole that is Disneyland/Disney World. Since I am a sound designer, I was interested in how the Imagineers approach the problem of sound design specifically. It turns out that the parks offer a lot of unique challenges, as well as opportunities.

There is a whole team of people working on the sound alone. A few music researchers look up different music online, determine the mood of each park attraction and come up with a bunch of ideas and a mood board that is then given to the sound lead to create a piece. The pieces are made using different online music resources, as well as Disney’s own sound libraries and new sounds created specifically for the parks.

In order to sonify the whole park, a lot of big and small speakers are hidden everywhere around the park. They all play at different times and volumes in order to simulate different effects. It is also very important to account for the different times of day within a sound composition. The style of the sound piece must perfectly match the theme of each park attraction. There are also different styles of speakers, from very directional ones to ones with wider dispersion, and tons of materials with different levels of absorption around the park.

The sound has to attract visitors to explore the sections that appeal to them. The environment has to feel like a whole different realm. The sounds include anything from dinosaur roars, birds, fictional monsters, cicadas, wind, water and rainforest ambience to many, many others. As one approaches each ride, the ambient sounds become less and less audible; then a more robotic sound design is used to put the focus on the ride itself. As rides progress and take you into the centre of the action, all the incredible sounds come gushing back, hitting the visitors in the face and leaving them in awe.

The following video showcases all these principles and goes in depth on the topics I mentioned in this article:

MetaSynth

MetaSynth is a sound design and music production tool developed by U&I Software for the Macintosh operating system that lets you generate sounds from images. It was used for the film “The Matrix” and can also be heard on the single “Windowlicker” by the electronic artist Aphex Twin.

MetaSynth + Xx is the latest version, which also runs on all common operating systems.

The program consists of an image synthesizer, a spectrum synthesizer, image filters, effect rooms, an image sequencer, its own instruments, wavetables, oscillators and MetaTrack (a mixing room).

The program offers a huge range of possibilities for working with sound in unconventional ways. The software can be used for sound design or to create music through improvisation. In addition, its visual component is unique compared to other software plugins that offer some of the same functions.
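To illustrate the basic idea behind an image synthesizer of this kind, here is a small C++ sketch (my own simplification, not MetaSynth's actual engine): each image row drives a sine oscillator at a fixed frequency, each column is a time slice, and pixel brightness controls the partial's amplitude.

#include <cmath>
#include <vector>

// Toy image-to-sound synthesis: rows = partial frequencies (top row is
// the highest pitch), columns = time, brightness (0..1) = amplitude.
std::vector<float> imageToSound(const std::vector<std::vector<float>>& img,
                                double sampleRate = 44100.0,
                                double secondsPerColumn = 0.1,
                                double fLow = 110.0, double fHigh = 3520.0) {
    const size_t rows = img.size();
    const size_t cols = img[0].size();
    const size_t samplesPerCol = (size_t)(secondsPerColumn * sampleRate);
    std::vector<float> out(cols * samplesPerCol, 0.0f);
    std::vector<double> phase(rows, 0.0);

    // log-spaced frequencies, so rows correspond to equal musical intervals
    std::vector<double> freq(rows);
    for (size_t r = 0; r < rows; ++r) {
        double t = rows > 1 ? (double)r / (rows - 1) : 0.0;
        freq[r] = fLow * std::pow(fHigh / fLow, 1.0 - t);  // top row highest
    }

    for (size_t c = 0; c < cols; ++c)          // scan the image left to right
        for (size_t s = 0; s < samplesPerCol; ++s)
            for (size_t r = 0; r < rows; ++r) {
                phase[r] += 2.0 * M_PI * freq[r] / sampleRate;
                out[c * samplesPerCol + s] +=
                    img[r][c] * (float)std::sin(phase[r]) / rows;
            }
    return out;
}

Feeding it a tiny 2x2 "image" such as {{1, 0}, {0, 1}} would produce a high tone followed by a low one; programs like MetaSynth build filtering, stereo placement and much more on top of this basic principle.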

Here is an example of the synthesizer's versatility:

Sources:

MetaSynth + Xx