Autocorrelation

What is it? Correlation normally describes the mutual relationship that exists between two or more things.

In statistics, it refers to any association between a pair of variables, i.e. the degree to which they are linearly related.

We can say the same thing in the context of signals, where the correlation between two signals indicates how much one signal resembles the other.

There are two types of correlation in signal processing: autocorrelation and cross-correlation.

When a signal is correlated with itself, it is called autocorrelation. In this case the signal is compared with time-shifted versions of itself.

One way of computing it is the graphical technique.

Taking the first digital signal, already represented graphically, we shift the second (identical) signal sample by sample, overlapping it with the first one at each interval. At each shift we multiply the overlapping samples and add up the products.

This process is well represented in this image: [1]

As we can easily imagine, the first row represents the given signal, our reference. The second row represents its time-shifted version, compared sample by sample with the reference. The third row is the result of multiplying the first two rows, and the red number is the sum of those products.

So, we get these results: -1, 0, 6, 0, -1.

The maximum value is obtained when the overlapping signal best matches the reference one; here that happens when the time shift is exactly zero, and indeed an autocorrelation always has a peak at a lag of zero. The size of this peak is the energy of the signal.
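
To make this concrete, here is a minimal NumPy sketch of the shift-multiply-sum procedure, assuming a three-sample signal x = [-1, 2, 1] (an assumption on my part, but it reproduces exactly the values -1, 0, 6, 0, -1 above):

```python
import numpy as np

x = np.array([-1, 2, 1])  # assumed example signal, consistent with the results above

# Shift-multiply-sum, exactly as in the graphical method: for each lag,
# overlap x with its shifted copy and add up the products.
autocorr = []
for lag in range(-(len(x) - 1), len(x)):
    total = sum(x[n] * x[n - lag]
                for n in range(len(x))
                if 0 <= n - lag < len(x))
    autocorr.append(int(total))

print(autocorr)                         # [-1, 0, 6, 0, -1]
print(np.correlate(x, x, mode="full"))  # same result, computed by NumPy
print(int(np.sum(x ** 2)))              # 6 -> the lag-zero peak equals the signal energy
```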

References

[1] H. L. Sneha – Understanding Correlation. 2017

[2] Wikipedia – Correlation

Scapes

Scapes is a project by sound designer Francis Preve, who explores the belief that synthesizers can literally sound like anything.

This statement describes the project exactly:

“If we tend to recreate what we have heard, it is partly because we are synthesizing something we have listened to with some care. So why not go back to the richness and complexity of sound as we hear it in everyday life? Why not combine actively listening to a sound walk or a field recording with the art of producing something using synthesis, instead of a recording?”

Preve’s work goes in this direction: it does not contain a single sample. Entire ambiences, such as a rainy day in the city, a summer midnight, or birdsong, are synthetic.

These sounds are synthesized using just a few instances of Operator, Ableton’s stock synth, plus some internal effects.

It might not sound 100% realistic, but the resemblance to the natural world is stunning.

https://www.francispreve.com/scapes

Resources

Francis Preve – Scapes

CMD – What if you used synthesizers to emulate nature and reality?

Recreate the natural world with electronic music

There have been various attempts to achieve this, and here I would like to share with you this fascinating project from music producer Darren Cunningham.

What I personally find interesting is his approach to electronic music. He does not just try to emulate the sounds of nature with synthesizers; he wants to give us the same sensations from electronic sources. Not the sounds of nature from electronic sources, but the feelings of nature.

The way he uses field recordings is also quite inspiring: he treats them exactly like electronic sounds or synthesizers, taking them out of their natural context while letting them keep their feel, translating it into another, electronic world.

Here is the full article with related video:

https://www.npr.org/sections/therecord/2012/08/13/158695578/an-electronic-composer-reconstructs-the-natural-world?t=1620803583190

From Polyrhythm to Orchestral Soundtrack

There is a lot to say about Radiohead and all the amazing, interesting aspects of their music: math, technology, experimentation.

You can also find an article, Kid Algebra: Radiohead’s Euclidean and Maximally Even Rhythms by Brad Osborn, in which all these small, subtle details are analyzed.

He describes their music in terms of a Goldilocks principle: it inhabits the perfect “sweet spot” between mundane convention and pure experimentation.

Here I would like to share this interesting transformation of one of their songs into an orchestral soundtrack for the nature documentary series Blue Planet II, produced by the BBC Natural History Unit in 2017.

For this series, on which they worked with composer Hans Zimmer, they decided to use Bloom, from their eighth studio album, The King of Limbs (2011). The song was originally written by singer Thom Yorke and was itself inspired by the original Blue Planet series, so it was a perfect fit.

The song is rhythmically very complex, full of polyrhythms, which would seem to make it unsuitable for a documentary about the beauty and depth of the ocean. That was the challenge: how to turn it into an immersive soundtrack for the ocean?

Well, in this video Thom Yorke, Jonny Greenwood and Hans Zimmer explain the surprisingly simple yet effective technique they used to achieve this. A little spoiler: the answer lies in pointillism.

And this is the final, magical version of the song for the soundtrack.

Reference

1. B. Osborn – Kid Algebra: Radiohead’s Euclidean and Maximally Even Rhythms. 2014.

The Parallax effect in music

What is it about?

Parallax is a difference in the apparent position of an object viewed along two different lines of sight; because this difference can be measured as the angle between the two lines, it can be used to determine distances.

This principle is used, for example, in astronomy, where scientists use it to measure large distances (e.g. the distance of a planet from Earth).

There are many types of parallax effects used in many different fields; one of them is parallax scrolling, used in computer graphics.

Here, when background images move past the camera more slowly than foreground images, we get an illusion of depth, even though the scene is 2D.

In other words, this effect describes how depth affects our perception of movement.

This effect could also be used in music to change the way the listener perceives a song, creating depth and movement.
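
As a toy sketch of the idea (my own illustration, not from the original post): pan a foreground layer across the stereo field faster than a background layer, and the different rates of movement suggest depth, just as in parallax scrolling.

```python
import numpy as np
from scipy.io import wavfile

sr, dur = 44100, 8.0
t = np.arange(int(sr * dur)) / sr

# Two layers: a bright "foreground" tone and a darker "background" tone.
fg = 0.3 * np.sin(2 * np.pi * 440 * t)
bg = 0.3 * np.sin(2 * np.pi * 110 * t)

# Parallax: the foreground sweeps across the stereo field four times
# faster than the background drifts (pan positions in [0, 1]).
fg_pan = 0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
bg_pan = 0.5 + 0.5 * np.sin(2 * np.pi * 0.125 * t)

def pan(mono, pos):
    """Equal-power panning: pos = 0 is hard left, 1 is hard right."""
    return np.stack([mono * np.cos(pos * np.pi / 2),
                     mono * np.sin(pos * np.pi / 2)], axis=1)

mix = pan(fg, fg_pan) + pan(bg, bg_pan)
wavfile.write("parallax.wav", sr, (mix * 32767).astype(np.int16))
```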

Here is an example:

References

Wikipedia – Parallax

C. Prendergast – Parallax Scrolling: Examples and History

Wikipedia – Parallax scrolling

Euclidean Rhythms

Music and mathematics are closely linked, and one example of their common path is the so-called Euclidean rhythms.

As the name suggests, their roots go back to the Greek mathematician Euclid (around 300 BC). His text “Elements” describes a remarkably efficient algorithm for finding the greatest common divisor (GCD) of two integers.

Here is an example of how it works [2]:

[

The Euclidean Algorithm for finding GCD(A, B) is as follows:

If A = 0 then GCD(A, B) = B, since GCD(0, B) = B, and we can stop.

If B = 0 then GCD(A, B) = A, since GCD(A, 0) = A, and we can stop.

Otherwise, write A in quotient-remainder form (A = B·Q + R) and find GCD(B, R) using the Euclidean Algorithm, since GCD(A, B) = GCD(B, R).

Example: find the GCD of 270 and 192.

A = 270, B = 192. A ≠ 0, B ≠ 0.

Use long division to find that 270 / 192 = 1 with a remainder of 78. We can write this as: 270 = 192 · 1 + 78

Find GCD(192, 78), since GCD(270, 192) = GCD(192, 78).

A = 192, B = 78. A ≠ 0, B ≠ 0.

Use long division to find that 192 / 78 = 2 with a remainder of 36. We can write this as: 192 = 78 · 2 + 36

Find GCD(78, 36), since GCD(192, 78) = GCD(78, 36).

A = 78, B = 36. A ≠ 0, B ≠ 0.

Use long division to find that 78 / 36 = 2 with a remainder of 6. We can write this as: 78 = 36 · 2 + 6

Find GCD(36, 6), since GCD(78, 36) = GCD(36, 6).

A = 36, B = 6. A ≠ 0, B ≠ 0.

Use long division to find that 36 / 6 = 6 with a remainder of 0. We can write this as: 36 = 6 · 6 + 0

Find GCD(6, 0), since GCD(36, 6) = GCD(6, 0).

A = 6, B = 0. A ≠ 0, B = 0, so GCD(6, 0) = 6.

So we have shown:

GCD(270, 192) = GCD(192, 78) = GCD(78, 36) = GCD(36, 6) = GCD(6, 0) = 6

GCD(270, 192) = 6

] [2]
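
The same procedure takes only a few lines of code; a direct transcription of the steps above:

```python
def gcd(a, b):
    """Euclid's algorithm: replace (a, b) with (b, a mod b) until b is 0."""
    while b != 0:
        a, b = b, a % b  # a = b*q + r, and GCD(a, b) = GCD(b, r)
    return a

print(gcd(270, 192))  # 6, as in the worked example above
```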

Their application in music, and their formal discovery, is attributed to the computer scientist Godfried Toussaint in 2004.

He found that this algorithm could be translated into rhythms.

But how? The structure of the algorithm can be used to distribute a given number of onsets as evenly as possible across a given number of steps.

These rhythms are often represented with a circular visualization.

So, as we can see, a Euclidean rhythm takes a number of onsets (the filling) and distributes them uniformly over a number of steps (per measure).

Here’s an example of what it might sound like:

Combining several of them gives a polyrhythm: a groove that consists of two or more contrasting rhythms playing simultaneously.

What is truly amazing is that music and mathematics are not only connected theoretically, but also viscerally.

Indeed, many rhythms found across cultures and throughout history are actually naturally Euclidean (a few of them are generated in the sketch after this list):

Brazilian bossa nova = 5 onsets distributed across 16 steps.

Cuba’s Tresillo = 3 onsets distributed across 8 steps.

Turkey’s Aksak rhythm = 4 onsets distributed across 9 steps.

and many more.
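
These patterns are easy to generate. A minimal sketch (using a simple rounding scheme that yields Euclidean rhythms up to rotation; not necessarily the exact implementation described in [1]):

```python
def euclidean_rhythm(onsets, steps):
    """Distribute `onsets` as evenly as possible over `steps` slots."""
    return [(i * onsets) % steps < onsets for i in range(steps)]

def show(pattern):
    return "".join("x" if hit else "." for hit in pattern)

print(show(euclidean_rhythm(3, 8)))   # x..x..x.  -> Cuba's tresillo, E(3,8)
print(show(euclidean_rhythm(5, 16)))  # a rotation of the bossa nova, E(5,16)
print(show(euclidean_rhythm(4, 9)))   # a rotation of the aksak, E(4,9)
```

Playing two of these patterns with different step counts at the same time immediately gives the kind of polyrhythm described above.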

And, of course, an example in Radiohead’s music:

References

[1] Harrison Shimazu – What are Euclidean rhythms?

[2] Khan Academy – The Euclidean Algorithm

Karlax!

Karlax is a MIDI controller created by the French musical-instrument company Da Fact, designed to “re-establish the artist’s body at the heart of the performance.”

Named the most bizarre new instrument of 2010, it was designed to offer artists a large and diverse range of controls.

It captures all of the performer’s gestures, combining expressivity and intuitiveness: fingers, wrists, elbows, forearms, torso and the whole body are captured, analysed and sent to the computer, where they control the parameters of virtual instruments/patches.

Its sensors are embedded in precision mechanical components. Pistons, keys, switches, benders, triggers, a rotation axis and an inertial unit can be activated separately or simultaneously.

It is divided into two parts linked by a rotary axis: the top part for the left hand and the bottom part for the right hand. It also features a screen for displaying and editing settings, making it possible to leave the computer off-stage.

On the front of both parts there are 4 velocity-sensitive pistons, 5 continuous keys and 4 switches.

On the back of the top part there are 5 switches operated by the thumb and a five-position mini-joystick for browsing the user interface displayed on the screen.

4 additional switches are accessible on the back of the bottom part, and a thumb rest is placed on both the top and bottom parts.

Its wide range of continuous sensors features 12-bit native sampling at 1 kHz, velocity detection, 4 types of response curves (one user-editable) and adaptive latency (from 1 to 20 ms).

Regarding the motion sensor, it has:

Accelerometer = 3 axes, with simultaneous attack detection on each axis; sensitivity ±3 g.

Gyroscope = 3 axes, at 2000°/second.

Inertial sensor processing that delivers high-level output data for motion processing and recognition.

“Karlax pushes the boundaries of language,” says Nils Aziosmanoff, Chairman of LE CUBE, a creative centre devoted entirely to the digital arts.

Karlax uses a wireless network to communicate with its receiver, which is connected to the computer and delivers the data through USB, MIDI or OSC.
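
Since the data arrives as ordinary MIDI or OSC, mapping a Karlax gesture to a synth parameter is plain controller handling. A hypothetical sketch using the Python mido library (the CC number here is an assumption; actual assignments depend entirely on how you configure the instrument):

```python
import mido

BENDER_CC = 16  # hypothetical: whichever CC your Karlax setup sends for a bender

# Listen on the receiver's MIDI port and map one continuous
# controller to a normalized parameter value (0.0 .. 1.0).
with mido.open_input() as port:  # default input; pass a port name if needed
    for msg in port:
        if msg.type == "control_change" and msg.control == BENDER_CC:
            cutoff = msg.value / 127.0
            print(f"filter cutoff -> {cutoff:.2f}")
```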

Its precision hardware parts are manufactured from plain aluminium, anodized black for better protection of the surface.

It is PC & Mac compatible; its software enables the management and organization of scenes in the Karlax bank, and allows output monitoring of the instrument, flexible MIDI assignment, sensor-behaviour control and MIDI re-mapping.

Karlax can simply be used as an interface for your audio software, yet Da Fact also provides software to control the instrument, such as Karlax-Bridge and Karlax-View.


There is also Karl/Max, a complete composition and live-performance environment designed specifically for the Karlax.

Here are some comments from artists:

“I follow closely the innovations in music and Karlax seems to be a new instrument very well designed and adapted to new expressions of digital arts.”

Jean-François ZYGEL (Pianist, improviser, composer, professor at the Conservatory of Paris, producer of “La boîte à musique” on French TV and many other creative activities.)

“Karlax is an incredible link between music and visual arts, this magic wand redefines our expressivity. Gestures become sounds, and sounds connect with images. It is such a fun experience to feel the sounds spinning around the audience following your gestures and to see images come alive from your fingertips. For the first time, the artist can master in realtime the development of a movie! And this is just a start, my imagination already feeds numerous projects that will soon be achieved thanks to Karlax. It is an amazing dream machine…” Philippe GEISS (Saxophonist, composer, professor at the Academy of Strasbourg in France.)

“Karlax: and the dance becomes the master of musical work. A controller where movement is king. A playing partner with whom the dancer leads and projects music and sound worlds at will.” Hervé DIASNAS (Choreographer, dancer, musician)

Resources

Da Fact – Karlax. www.dafact.com

Synthtopia – Da Fact Karlax Is The Most Bizarre New Instrument Of The Year. (2010)

AudioMulch

AudioMulch is a somewhat forgotten, yet incredibly powerful, piece of music software and sound-design tool developed by Ross Bencina in Melbourne.

“It is software for live performance, audio processing, sound design and music composition.”

It brings together elements of traditional analogue routing with effects and control options only possible inside the computer.

It has a patcher interface similar to Pure Data or Max/MSP, which lets you think about and compose your music in a non-linear way thanks to its improvisational signal-flow approach.

A great feature of this software is the Metasurface, a powerful tool for gestural control that turns AudioMulch into an expressive musical instrument. The surface has no virtual knobs or faders; it is designed specifically for performing music with a computer. Instead of having to turn one knob at a time with the mouse, the Metasurface lets you blend smoothly between dozens of parameter settings on a two-dimensional plane. (You can even automate and loop your Metasurface gestures.)
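
Under the hood, the idea is essentially spatial interpolation between stored parameter snapshots. A rough sketch of one way such blending can work (inverse-distance weighting; AudioMulch’s actual interpolation scheme may differ):

```python
import numpy as np

# Parameter snapshots placed on a 2D surface:
# (x, y) position -> vector of parameter values (e.g. cutoff, delay mix, drive).
snapshots = {
    (0.1, 0.2): np.array([0.0, 0.3, 0.9]),
    (0.8, 0.3): np.array([1.0, 0.1, 0.2]),
    (0.5, 0.9): np.array([0.4, 0.8, 0.5]),
}

def blend(x, y, power=2.0, eps=1e-9):
    """Inverse-distance-weighted blend of all snapshots at point (x, y)."""
    weights, values = [], []
    for (sx, sy), params in snapshots.items():
        d = np.hypot(x - sx, y - sy)
        weights.append(1.0 / (d ** power + eps))
        values.append(params)
    w = np.array(weights)
    return (w[:, None] * np.array(values)).sum(axis=0) / w.sum()

print(blend(0.1, 0.2))  # at a snapshot: recalls it (~ [0.0, 0.3, 0.9])
print(blend(0.5, 0.5))  # between snapshots: a smooth mix of all three
```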

It also has multichannel input and output (up to 256 channels), so you can use it as a multitrack recorder or live mixer, to spatialize and diffuse music for surround sound, and so on.

(An amazing granulator is also featured!)

Many artists have used it, such as Nine Inch Nails, Four Tet and Girl Talk.

There is just one problem: the last update to AudioMulch was in July 2013, which means no 64-bit plugins can work in it. Rumour has it that a 64-bit beta for Mac and Windows should arrive soon; we’ll see.

Reference

www.audiomulch.com

Radiohead & Ondes Martenot

At the beginning of the 20th century many electronic instruments were invented. Some had better luck than others, and one of the unlucky ones is the Ondes Martenot.

This instrument was invented in 1928 by Maurice Martenot, a French cellist.

But what is it? It is something of a cross between an organ and a theremin.

Originally, the main interface was a metal ring worn on the player’s right index finger. Moving the finger up and down a wire creates a theremin-like tone. A four-octave keyboard was added later, but not a normal one: its movable keys create a vibrato when wiggled. All this is enclosed in a wooden frame that features a drawer allowing manipulation of volume and timbre with the left hand. Volume is controlled with a touch-sensitive glass “lozenge”, called the “gradation key”; the further the lozenge is depressed, the louder the volume.

Early models produce only a few waveforms. Later models can simultaneously generate sine, peak-limited triangle, square, pulse, and full-wave rectified sine waves, in addition to pink noise, all controlled by switches in the drawer.
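
Digitally, that waveform palette is easy to approximate. A rough NumPy sketch of those shapes (my own approximation, not the instrument’s actual circuitry; pink noise is omitted since it requires a filter):

```python
import numpy as np

sr, f = 44100, 220.0     # sample rate and a test pitch
t = np.arange(sr) / sr   # one second of samples
phase = (f * t) % 1.0    # normalized phase in [0, 1)

waves = {
    "sine":           np.sin(2 * np.pi * f * t),
    # a triangle whose peaks are clipped ("peak-limited")
    "ltd_triangle":   np.clip(1.5 * (2 * np.abs(2 * phase - 1) - 1), -1, 1),
    "square":         np.sign(np.sin(2 * np.pi * f * t)),
    "pulse":          np.where(phase < 0.125, 1.0, -1.0),  # narrow duty cycle
    "rectified_sine": np.abs(np.sin(2 * np.pi * f * t)),   # full-wave rectified
}
```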

The inventor was fascinated by the accidental overlapping of tones of military radio oscillators and wanted to build an instrument to replicate it, but with the same tonal expression as a cello.

Four speakers, called “diffuseurs”, were produced for the instrument.

The “Métallique” features a gong instead of a speaker cone, producing a metallic timbre.

The “Palme” has a resonance chamber laced with strings tuned to all 12 semitones of an octave; when a note is played in tune, it resonates a particular string, producing chiming tones.

The last one is a conventional speaker cabinet.

It has been used by composers such as Edgard Varèse, Pierre Boulez and Olivier Messiaen, but its “rebirth” in the modern era and its diffusion into popular music can be attributed to Jonny Greenwood, best known for his role as guitarist in Radiohead.

Jonny, a visionary musician and creator of a new way of thinking about and playing music, was so fascinated by the Ondes Martenot that he decided to integrate it into Radiohead’s music. This “journey” began with their amazing album Kid A (2000), and the instrument has appeared on some of their most important songs ever since. In live performances of Weird Fishes/Arpeggi they even use a group of six Ondes Martenot.

Here is a collection of Radiohead’s songs with Ondes Martenot

Greenwood also wrote Smear, a piece for two Ondes Martenot:

Thanks to Jonny Greenwood the instrument has found new light and many applications in modern popular music. For example, Yann Tiersen used it on the Amélie soundtrack, and Thomas Bloch, an Ondes Martenot virtuoso, has also played it on records by Tom Waits and Marianne Faithfull, and in Damon Albarn’s opera Monkey: Journey to the West.

References

[1] Wikipedia – Ondes Martenot.

[2] The Guardian – Hey, what’s that sound: Ondes martenot.

[3] Britannica – Ondes martenot.

Sound of Fire?

What is really the sound of fire? In general, the fire itself should not make any sound; what we hear is rather the burning object or the air itself, the so-called crackling and hissing.

Sound is defined as an oscillation in pressure, particle displacement and particle velocity, propagated in a medium that has density, such as air or water.

Thermal conduction, on the other hand, i.e. the transfer of heat (energy) resulting from temperature differences, occurs through the random movement of atoms or molecules.

Heat is the vibration of molecules, but it takes a very large number of molecules moving together in an organized way to create what we perceive as sound, because we perceive the vibration of the air as a whole, not of individual molecules.

Thus, combustion itself does not produce any sound; but because it releases a large amount of energy, nearby molecules acquire a greater random kinetic energy, which allows us to perceive a so-called Brownian noise.

Brownian noise (or brown noise) is a noise whose power density decreases by 6 dB per octave with increasing frequency.
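
One common way to approximate Brownian noise digitally is to integrate (cumulatively sum) white noise, i.e. a random walk, which loosely mirrors the random molecular motion described above. A minimal sketch:

```python
import numpy as np
from scipy.io import wavfile

sr, dur = 44100, 5.0
rng = np.random.default_rng(0)

white = rng.standard_normal(int(sr * dur))
brown = np.cumsum(white)       # integrating white noise gives a random walk
brown -= brown.mean()          # remove the DC offset
brown /= np.abs(brown).max()   # normalize to [-1, 1]

# Its power density falls roughly 6 dB per octave as frequency increases.
wavfile.write("brown_noise.wav", sr, (0.5 * brown * 32767).astype(np.int16))
```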

Normally we detect longitudinal waves, which consist of groups of particles oscillating together periodically with high amplitudes, so we can easily detect, for example, a human voice.

In the case of combustion, because the movement of the particles is disorganized, the power reaching the eardrum is lower and this noise is not audible. This means that we mainly hear the popping of the wood or the sound of the wind blowing as the air expands and rises.

Increasing the temperature leads to an increase in the average kinetic energy of the molecules, and thus increases the pressure.

Therefore, we could try to find the hypothetical temperature necessary for the Brownian noise produced by a fire to be audible.

In their work, Sivian and White [2] determined the minimum audible field (MAF), that is, the threshold at which a tone presented in a sound field is audible to a listener who is not wearing headphones.

There we can also find a formula for the thermal-noise pressure in a frequency band, expressed in terms of the following quantities:

P is the RMS sound pressure; let’s take P = 2 x 10^-2 Pa, i.e. 60 dB SPL (20 log10(2 x 10^-2 / 2 x 10^-5) = 60 dB), about as loud as a conversation;

𝜌 (rho) is the density of air (1.225 kg/m^3);

kB is the Boltzmann constant (1.38064852 × 10^-23 m^2 kg s^-2 K^-1);

T is the temperature, in kelvin;

c is the speed of sound in air (344 m/s);

f1 and f2 are the limits of the frequency band; let’s consider the audible range of 20-20,000 Hz.

So we need to invert the formula to find T:

After all the calculations, and after converting all the data to consistent units, the result is T = 88,144.20 K: an incredibly high temperature, far hotter than the surface of the Sun (5,778 K)!

Of course, we can hear Brownian noise simply by generating it in any DAW and playing it back at 60 dB, but then we would not be hearing noise caused by the random kinetic energy of air molecules.

References:

[2] L. J. Sivian, S. D. White – On minimum audible sound fields. 1933.