How Music Producers Find Their “Sound”

Do you ever catch yourself recognising whose track you are listening to while shuffling randomly through Spotify, even before you look at the artist name? That is because successful music producers work to make sure you can instantly recognise them. This pays off: their sound imprints itself on the listener’s mind, making listeners more likely to recognise and share the artist’s future releases with their network.

So how do musicians and producers do this? A few key points can help you understand this occurrence better.

1) There’s no shortcut! 

You know the 10,000-hour rule? Or, as some have put it in a musical context, the 1,000-song rule? There is really no way around it, and it applies to any skill in life, not just music. The end consumer rarely knows how many songs an artist never releases; those are all practice songs. For every release you see out there, there may be hundreds of unreleased songs done prior to it. If musicians just keep creating instead of getting hung up on one song, they will eventually grow into their own unique way of structuring and editing songs.

2) They use unique elements 

Many producers and musicians use samples from Splice, which leads to listeners feeling like they have already heard a song even when they haven’t. Songs get lost in the sea of similar musical works, but every now and then something with a unique flavour pops up, and it’s hard to forget. Musicians who make their own synth sounds, play exotic instruments or even build their own instruments are the ones that stick in our minds.

3) Using the same sound in multiple songs

This is the easiest and most obvious way in which musicians and producers show their own style. You might hear a similar bass or drum pattern in multiple tracks from the same musician. In rap/hip-hop you will also hear producer tags (e.g. “DJ Khaled” being said at the beginning of each track).

4) Great Musicians/Producers don’t stick to one style/trend

Music has existed for so long, and has progressed so fast lately, that it is hard to stand out, especially if you stick strictly to genres. Nowadays, great musicians come up with their own subgenres or mix a few different ones into a single piece. You won’t really remember the musicians or producers who are just following in the footsteps of the greats who already established a certain genre. If you can’t quite put your finger on why you like someone’s music so much and why they sound “different”, they are probably experimenting with a combination of genres.

Harmor

The Basics

The picture above shows Harmor’s interface. We can group the interface into three sections: the red part, the gray part and the window on the right. The easiest section to understand is the window on the right. Harmor is an additive synthesizer, which means the sounds it generates are made up of sine waves added on top of each other. The window on the right displays the frequencies of the individual sine waves played over the last few seconds. Secondly, the red window is where most of the sound is generated. It contains different subsections and color-coded knobs so you can identify what works together. Left of the center you can see an A/B switch: the red section exists twice, once for state A and once for state B, and these states can be mixed together via the fader below. Lastly, the gray area is for global controls; the only exception is the IMG tab, which we will cover a little later. As you can see there are many knobs, tabs and dropdowns. In addition, most of the processing can be altered with envelopes. These let the user draw a graph with arbitrarily many points and use it as an ADSR curve, an LFO, or map it to keyboard, velocity, X, Y & Z quick modulation and more. At this point it might already be clear that Harmor is a hugely versatile synth. It’s marketed as an additive / subtractive synthesizer and offers an immense number of features, which we will take a closer look at now.
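To make the envelope idea a bit more concrete, here is a minimal NumPy sketch of the classic ADSR shape. This is purely illustrative: Harmor’s envelope editor generalizes this by letting you place arbitrary points on the curve.

```python
import numpy as np

def adsr(attack, decay, sustain, release, note_len, sr=1000):
    """Piecewise-linear ADSR curve sampled at sr points per second."""
    a = np.linspace(0, 1, int(attack * sr), endpoint=False)       # rise to peak
    d = np.linspace(1, sustain, int(decay * sr), endpoint=False)  # fall to sustain level
    s = np.full(int((note_len - attack - decay) * sr), sustain)   # hold while key is down
    r = np.linspace(sustain, 0, int(release * sr))                # fade out after release
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.05, decay=0.1, sustain=0.7, release=0.3, note_len=1.0)
```

Multiplying a raw tone by such a curve shapes its loudness over time; mapping the same drawn curve to an LFO or to keyboard tracking is just a matter of where the synth applies it.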

Additive or Subtractive?

As mentioned above, Harmor is marketed as an additive / subtractive synthesizer. But what does that mean? While Harmor is built on additive synthesis as its foundation, the available features closely resemble those of a typical subtractive synth. Because Harmor is additive, however, no audio streams are processed directly. Instead, a table of frequency and amplitude data is manipulated, resulting in an efficient, accurate and at times very unfamiliar and creative way to generate audio. Harmor features four of these additive / subtractive oscillators; two can be seen in the image above in the top left corner. These can be mixed in different modes and then mixed again with the other two via the A/B switch. In addition to the four oscillators, Harmor can also synthesize sound from the IMG section: the user can drag and drop audio or image files in, and Harmor can act like a sampler, re-synthesizing audio or even generating audio from images drawn in Photoshop.
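The core idea of additive synthesis, summing sine partials from a table of frequencies and amplitudes, can be sketched in a few lines of NumPy (a conceptual illustration, not how Harmor is actually implemented):

```python
import numpy as np

def additive_tone(freqs, amps, duration=1.0, sr=44100):
    """Sum sine partials, each with its own frequency and amplitude."""
    t = np.arange(int(sr * duration)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))

# A sawtooth-like tone: harmonics of 220 Hz with 1/n amplitudes
n = np.arange(1, 33)
tone = additive_tone(220 * n, 1 / n)
```

Because the sound is defined by that frequency/amplitude table rather than by an audio stream, “filtering” becomes simply scaling table entries, which is why Harmor can offer subtractive-style features so cheaply.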

The Generator Section

As you can see, in addition to the different subsections being walled in by dotted lines, this section is color-coded as well. The Timbre section lets you select any waveform, again by drawing, and then morph between two of them with different mixing modes. Harmor allows you to import a single-cycle waveform to generate the curve, and you can even import any sample and generate a waveform from it. Here is an example where I dragged a full song into it and processed it with the internal compressor module only:

The blur module lets you generate reverb-like effects and also pre-verb. Tremolo generates the effect of a stereo vibrato; think of jazz organs. Harmonizer clones existing harmonics at the defined offset/octaves. And prism shifts partials away from their original relationship with the fundamental frequency: a little prism usually produces a detune-like effect, more of it usually metallic sounds. And here is the interesting part: as with many other parameters, you can edit the harmonic prism mapping via the envelopes section. This allows you to create an offset to the amount knob on a per-frequency basis. Here is an example of prism in use:

As you can see in the analyzer on the right, there is movement over time. In the harmonic prism envelope I drew a graph so that the knob does not modify lower frequencies but only kicks in from +3 octaves upwards.
The other options in this section (unison, pitch, vibrato and legato) should be familiar from other synthesizers.
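As a rough mental model of what prism does to the partial frequencies, here is a small NumPy sketch. The stretch function is my own illustration, not Harmor’s actual mapping:

```python
import numpy as np

def prism(partials, amount):
    """Stretch partials away from their harmonic positions.
    amount = 0 keeps the series harmonic; positive values push
    upper partials progressively sharper (detuned to metallic)."""
    n = np.arange(1, len(partials) + 1)
    return partials * n ** amount

f0 = 110.0
harmonics = f0 * np.arange(1, 9)      # 110, 220, 330, ... Hz
detuned = prism(harmonics, 0.05)      # slightly inharmonic
```

The per-frequency envelope described above corresponds to making `amount` a function of the partial index instead of a single knob value.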

The Filter Section

As seen above, Harmor features two filters per state. Each filter can have a curve selected from the presets menu; the presets include low-pass, band-pass, high-pass and comb filtering. Additionally you can draw your own curve, as explained in the Basics section above. Each filter also has controls for envelope mix, keyboard tracking, width, actual frequency and resonance. But the cool part is how these filters are combined: the knob in the middle lets you fade between only filter 2, parallel processing, only filter 1, filter 1 plus serial processing, and serial processing only. In the bottom half there is a one-knob pluck control as well as a phaser module with, again, custom-shaped filters.
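The difference between the routing modes is easy to see if you treat each filter as a gain curve over frequency. The sketch below is a simplified, discrete take on the morph knob, which in Harmor fades continuously between these configurations:

```python
import numpy as np

def combine_filters(g1, g2, mode):
    """Combine two filter gain curves (arrays over frequency)."""
    if mode == "f2":
        return g2                  # only filter 2
    if mode == "parallel":
        return (g1 + g2) / 2       # both filters side by side
    if mode == "f1":
        return g1                  # only filter 1
    if mode == "serial":
        return g1 * g2             # filter 1 feeding into filter 2
    raise ValueError(mode)

freqs = np.linspace(20, 20000, 512)
lowpass = 1 / (1 + (freqs / 1000) ** 2)   # toy low-pass gain curve
highpass = 1 - lowpass                    # toy high-pass gain curve
bandpass = combine_filters(lowpass, highpass, "serial")  # serial ~ band-pass
```

Serial routing multiplies the curves (a low-pass into a high-pass yields a band-pass), while parallel routing averages them, which is why the two halves of the knob sound so different.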

The Bottom Section

As you can see above, the bottom section features some general global functions. On the left side most should be clear. The XYZ coordinate grid offers a fast way to automate many parameters by mapping them to X, Y or Z and then just editing events in the DAW. On the top right, however, there are four tabs that open new views. We have already seen the ENV section, where you can modulate just about anything. The green tab is the image tab. We already know that Harmor can generate sound from images and sound (note that this is a different way of using existing sound: before, I loaded it into an oscillator; now we are talking about the IMG tab). On the right you can see a whole lot of knobs, some of which can be modified by clicking in the image. C and F are coarse and fine playback speed adjustments, and time is the time offset. The other controls change how the image is interpreted and could partially be outsourced to image editors. I’m going to skip this part, as it would make this post a whole lot more complicated. It is probably best to just try it out yourself.

The third tab contains some standard effects. These are quite good, but the compressor especially stands out, as it rivals the ease of use and usefulness of OTT.

And finally, the last section: Advanced (did you really think this was advanced until now? :P). Literally the whole plugin can be restructured here. I usually only go in here to enable perfect precision mode, threaded mode (which enables multi-core processing) and high-precision image resynthesis. Most of these features are rarely needed and seem more like debugging features, so I will not go into detail about them, but as before I encourage you to try them out. Harmor can be very overwhelming, and as many people mention in reviews: “Harmor’s biggest strength is also its greatest weakness, and probably why there are so few reviews for such an amazing synth. You can use Harmor for years, and still feel like a noob only scratching the surface. That makes writing a review difficult. How can you give an in-depth review, when you feel so green behind the ears? You only need to watch a few YT videos (e.g. Seamless) or chat with another user to discover yet another side to this truly versatile beast.”

Harmor on KVR ⬈
Harmor on Image-Line ⬈
Harmor Documentation ⬈ (a whole lot more details and a clickable image if you have more detailed questions)

Cultural body

When discussing the mind we also need to discuss our body. Interactions are not only processed in our minds but always use our body as the vehicle. In this article I refer to a paper that discusses the body and its cultural conjunction, as well as an interesting gesture controller design that shows the designed outcome of a philosophical point of view.

Sara Sithi-Amnuai has developed a custom-built MIDI glove interface called NAMI that is designed for live electro-acoustic musical performance. It is a tool that extends her own multicultural background. Nami utilizes a force-sensitive resistor, flex sensors, buttons, hall effect sensors, and a photoresistor. The purpose is to allow users to employ their own gestural vocabulary while remaining culturally general and flexible, valuing cross-cultural exploration and accommodating a variety of cultural gestural languages.

In Sithi-Amnuai’s paper “Exploring Identity Through Design: A focus on the cultural body via Nami” she discusses the cultural body, its consideration in existing gestural controller design, and how cultural methods have the potential to extend musical/social identities and/or traditions within a technological context. She starts off by describing the metaphysics of agency, identity, and culture. The identity of a human being is deeply linked to culture and sustained through the creation and performance of music and dance. Agency describes an “… active entity constantly intervening in the course of events ongoing around [them]”. Our mind is inseparable from our body, the body being the physical material through which we have access to the world. Sithi-Amnuai refers to a study by Steven Wainwright and Bryan Turner which shows that the identity of former dancers of the Royal Ballet in London is deeply rooted in their own body. Once their career has ended, it is exceedingly challenging for them to reinvent themselves outside the company.

Sithi-Amnuai’s take on gestural controllers is very philosophical. Once again, I was reminded how important it is to have a philosophical and meaningful foundation for the execution of a design. But the gadget itself is also very interesting and makes for an inspiring controller that enables users to put more of themselves into the music. She sparked my curiosity to explore and extend my own cultural body. In that regard, cultural etiquettes, movements, and experiences through sense, sight, and feel affect our instruments and tools as well.

Reference:
https://zenodo.org/record/4813188#.YOAqVS335QI

My thoughts on Vodhrán

The paper “Vodhrán: collaborative design for evolving a physical model and interface into a proto-instrument” was a short but interesting read into what could be explored more in the marriage of music making and interaction design.

The design itself is a box consisting of a touch-sensitive plate for interaction, a microcomputer for processing, and an amplifier for sound production. It was designed to create a new form of interaction for musicians and a new way of making music. It can also work connected to a computer with special software. The authors write about their experience, the tools and methods they used throughout the building of this device, and how it would function.

This type of new interaction for music making is, in my opinion, highly interesting for the field of interaction design. Since making music generally has predefined methods and fairly commonplace conceptions, it is both really interesting and difficult to experiment with a new form of interaction. This is why I was interested in this paper. Also, since I had the chance this semester to experiment with a new type of interaction in my project for this course, I could relate more to the process and findings of this paper. With the increasing amount of music production on computers it is hard to dismiss this topic. Human-computer interaction in this area is already highly saturated, with interaction methods that popped up in the last two decades but also with the digitalisation of centuries-old musical instruments. This will only expand with the increasing use of digital devices in the area, and hence the need and search for new and different interactions will always persist. I think this paper is the start of this kind of approach, where the authors wanted more out of the interaction and the ability to create through creation. As an interaction designer it is crucial to understand and reflect on areas that can feel settled and unchangeable. For me this paper does exactly that, and I highly appreciate it for that.

Selecting Films for Analysis

The theoretical part of the master’s thesis will describe an approach to categorising human-nature hybrids in animated film. This approach will attempt to sketch the broad spectrum of these hybrid forms.

The theoretical categorisation concept will be supported by short film examples as well as detailed film analyses. The selection of film examples follows three categories:

  • Western mainstream animation (with a focus on Disney)
  • Far Eastern, established animation (with a focus on Studio Ghibli)
  • Indie animation (various independent filmmakers, without geographical restriction)

While the two established animation groups are mainly represented by feature-length formats, indie animated films are characterised by their brevity. The practical part of the master’s thesis, an animated film, will also fall into this category.

Sound Design – What Does Magic Sound Like? A Look At How The Harry Potter Films Redefined The Sound Of Magic

Here is an interesting video on the sound design of magic in the Harry Potter series of films.

This video suggests that although there were some indications in literature as to what magic might sound like, until the Harry Potter films came along the medium of film had never seen such a formalisation of the sound of magic: such a variety of spells cast with specific gestures and feelings. Even if the filmmakers didn’t quite know what it all should sound like, they definitely knew that they didn’t want it to sound like shooting scenes from science fiction films.

In preparation for the first Harry Potter film, director Chris Columbus told supervising sound editor Eddy Joseph that he didn’t want anything modern, futuristic or electronic. Although the sound of magic did change and develop throughout the series, it is said that this was a mantra the filmmakers and sound designers continued to hold to.

Instead, if the spell being cast had a specific sound related to it, they would use that: water, fire, freezing, etc. Sometimes the sound of what the spell is impacting is all that is needed; when it comes to levitation, silence works just fine. But there are plenty of examples where the magic doesn’t have a specific sound attached, and this is where the sound designers get the chance to be creative.

There is no doubt that the sound of magic developed through the Harry Potter films, but there was a major change with the third film, The Prisoner of Azkaban: out went the explosions and whooshes and in came much softer sounds for most of the spells, which has the effect of making the magic less aggressive and more mysterious. That is the style the team built on for the key Patronus spell, which is built out of a chorus of voices.

Another development we see through the Harry Potter films is that the spell sounds become more personal and appropriate for each character, giving the impression that the spell comes out of the magician just like their breath.

Watch the video and hear and see the examples played out.

Collaborative learning processes

For this blog entry I chose to examine the paper “Collaborative Learning with Interactive Music Systems” by Adnan Marquez-Borbon, presented on the website of the International Conference on New Interfaces for Musical Expression (NIME-20). It is a bit off-topic but relevant for the lecture ‘Interaction Design’.

The paper offers a new perspective on how to learn a new instrument, more specifically an interactive digital music system. The designers of such instruments are often the only performers, and there are rarely many copies of a specific instrument. In general there is often no instructional information aside from a technical document (if one exists), which makes it hard to learn the instrument in a traditional way. The ‘traditional way’ means first learning musical notation and then applying it to the instrument: a very linear and structured process of learning. The teacher transmits knowledge and evaluates the student’s technical proficiency, musical accuracy and appropriateness of style. But this approach ignores the diversity of alternative musical practices and approaches. In an interactive digital music system, musical text and notation are not necessarily central to the practice. It lives from variable and numerous practices, and the form of documentation reflects that. The learning process is therefore more complex and probably not approachable in the traditional way of learning an instrument.

The author describes an open-ended, exploratory and collective learning approach for technologically mediated instruments, intended to overcome the traditional way of learning an instrument. This so-called ‘socially mediated learning’ process centres on collaborative learning and should provide a flexible and adaptable learning environment. This space is unlimited in musical exploration and creativity and welcomes the additional musical skills students bring in. Learning by imitating other learners can extend students’ own capabilities. The challenge of collective learning is to keep up motivation and let the students take responsibility within their freedom of learning (like setting their own objectives). The teacher becomes a guide and is no longer an authority.

As a method the author developed a new instrument with a 3D-printed case, two outward-facing speakers, four push buttons controlling pitches organised in a one-octave chromatic scale, two linear soft potentiometers (controlling pitch and more) and a force-sensing resistor controlling the output volume. The volunteers were three people with traditional music education and extensive performance experience, but with no experience in performing with musical technologies or interactive musical systems. They learned to play the instrument in group sessions over a period of six months.
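To give a feel for how such sensor readings might be turned into sound, here is a small illustrative mapping. This is my own sketch, not the firmware described in the paper; the base note and ranges are assumptions:

```python
def map_controls(button_index, pot_value, fsr_value):
    """Illustrative sensor-to-sound mapping (not the author's actual
    firmware): buttons select chromatic pitches, a soft potentiometer
    bends the pitch, and the force-sensing resistor sets the volume."""
    base_midi = 60                        # middle C as an arbitrary base note
    note = base_midi + button_index       # one semitone per button
    bend = (pot_value - 0.5) * 2          # pot 0..1 -> bend of -1..+1 semitone
    velocity = int(fsr_value * 127)       # pressure 0..1 -> MIDI velocity 0..127
    return note + bend, velocity

note, vel = map_controls(4, 0.5, 0.8)
```

Part of what makes such instruments hard to learn “traditionally” is exactly this layer: the mapping from gesture to sound is a design decision, not a convention shared across instruments.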

As a result, they actually started to learn from each other, imitated and/or extended each other’s findings and learned effectively together. Furthermore, they started to make up their own learning structures by noting down individual musical notations as a learning aid. They even came up with group exercises and created their own compositions in order to rehearse and perform together. The participants developed their own individual playing styles but also found a way to play together. Even without fixed learning structures they started to self-organise their capacities, activities and learning goals, which motivated and oriented them.

Due to the flexible exchange of ideas and techniques, the participants arrived at a unified conceptualisation and usage of the instrument. Both the group as a whole and the individual learners benefit from the rich learning opportunities of this open and flexible learning structure.

My takeaway: personally, I never thought about how we learn to play instruments, even though I myself learned musical notation and how to play the flute and the piano. Looking back, I realise that even as a kid the linear learning structure is very hard to cope with in the beginning. I am sure I would have loved to learn with the kind of open and collaborative approach that Adnan Marquez-Borbon suggests.

Keeping in mind that the approach is aimed at more complex and often unique technologically mediated instruments, I am sure this structure can also be helpful in other fields of computer-mediated channels, like new and unknown software. Since a lot of daily tasks are getting more complex day by day, I think we should see collaborative learning as a serious opportunity to enhance not only our learning processes but also ideation processes in innovative design. I recently started a summer job in the area of ‘making’: building, tinkering around and hacking all kinds of old electronic rubbish in order to create new stuff and new ideas, and to appreciate creativity from a new, more practical perspective. This kind of practical process also thrives on the collaborative exchange of ideas and knowledge, which is in my opinion a really fruitful way of creating.

The genius of Trent Reznor

One of the most influential bands of our time is certainly the American band Nine Inch Nails (NIN), founded by singer / composer / programmer / multi-instrumentalist / visionary / genius Trent Reznor in 1988.

Nine Inch Nails have sold over 20 million records, were nominated for 13 Grammys and won two. Time magazine named Reznor one of its most influential people in 1997, while Spin magazine once described him as “the most vital artist in music”.

Their concerts are characterized by extensive use of thematic visuals, complex special effects and elaborate lighting; songs are often rearranged to fit a given performance, and melodies or lyrics of songs that are not scheduled to be played are sometimes assimilated into other songs.

Reznor is also famous for the soundtracks he has composed together with his bandmate Atticus Ross.

They (he) deconstructed the traditional rock song, a bit like Sonic Youth did, but went in a more electronic and aggressive direction. Their early work is characterized by massive use of industrial sounds (although not as massive as the Berliners Einstürzende Neubauten), while lately the focus has been on analog and modular synths.

Sound design is a really important part of their composition, as important as harmony and melody. They have probably used every electronic instrument (and piece of software) they could find, turning them all into their signature and creating that industrial NIN sound. Reznor’s sound is always clearly identifiable. Some of that is due to his sound design, which includes digital distortion and processing noise from a wide variety of sound sources, but his harmonic and melodic choices play an equally large role.

What I find really impressive, besides the sound design and the beautifully dark lyrics, is the unique choice of harmony and melodic progression.

Nothing is predictable, and even in the simplest progression there is that one note that takes you deep into Reznor’s mind, away from any other musical world.

Reznor’s music has a decidedly subdued tone that sets the stage for his often obscure lyrics.

His use of harmony, chords and melody also has a huge impact on his sound. In the movie Sound City, Reznor explains that he has a foundation in music theory, especially in regard to the keyboard, and that this subconsciously influences his writing:

“My grandma pushed me into piano.  I remember when I was 5, I started taking classical lessons.  I liked it, and I felt like I was good at it, and I knew in life that I was supposed to make music. I practiced long and hard and studied and learned how to play an instrument that provided me a foundation where I can base everything I think of in terms of where it sits on the piano… I like having that foundation in there.  That’s a very un-punk rock thing to say. Understanding an instrument, and thinking about it, and learning that skill has been invaluable to me.”

Here are some examples of his writing process:

  • Right where it belongs

Here there is a continuous shift between D major and D minor, which also marks an emotional shift, going constantly from sad to happy and vice versa. This helps give the song its emotional vibe.

  • Closer

Here the melodic line ascends through the notes E, F, G, and Ab. The last note is intentionally ‘out of key’ to give a unique sound.

  • March of the Pigs

The harmonic and melodic choices of this song are simply impressive. They are exactly what an experienced musician would NEVER do, yet they work great.

The progression is unusual because the second chord is a tritone away from the first chord (that is, really dissonant: the kind of sound you would normally try to avoid). The melody is brilliant too. The song is (mostly) in the key of D minor (the notes of the D minor chord being D, F and A), but the vocal line sings an F#. Singing the major third over a minor key is supposedly the worst thing to do, and yet it sounds great.
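These intervals are easy to check with simple semitone arithmetic. The sketch below uses D to Ab as an example tritone pair (illustrative only; the post doesn’t name the song’s actual chords) and confirms that F# is a major third above D:

```python
# Semitone positions on the chromatic scale (C = 0 ... B = 11)
NOTES = {"C": 0, "C#": 1, "D": 2, "Eb": 3, "E": 4, "F": 5,
         "F#": 6, "G": 7, "Ab": 8, "A": 9, "Bb": 10, "B": 11}

def interval(a, b):
    """Semitones from note a up to note b (within one octave)."""
    return (NOTES[b] - NOTES[a]) % 12

tritone = interval("D", "Ab")       # 6 semitones: the most distant interval
major_third = interval("D", "F#")   # 4 semitones: the 'wrong' note over D minor
```

The tritone splits the octave exactly in half (6 of 12 semitones), which is why it sounds so rootless, and the F# clashes with D minor’s F natural by a single semitone.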

I must say that falling in love with their music helped me to “color outside the lines”. It is a wonderful feeling to know how things should be done and to consciously break those rules to follow the pure essence of your art.

For anyone interested in learning more about chord theory, here is the full article I was inspired by:

https://reverbmachine.com/blog/trent-reznor-chord-theory/

Rhythm, Emotions and Storytelling

The filmmaker and percussionist Ryan Oksenberg explains his way of connecting rhythm and filmmaking.

“I think about what I’m working on and translate it into beat and establish a pace. […] Ask yourself, what exactly is it that makes a song catchy, or makes us dance or feel something? How do people talk and relate information to each other? How long do we take to process good news and bad news? It’s all rhythm.”[1]

Ryan Oksenberg reads actors’ dialogue out loud to find the rhythm in their words, getting to know the speed and the pauses spectators need. When cutting a film, he thinks of it as a musical piece, establishing a pace and building towards a climax. Oksenberg uses long takes that “obscure an audience’s sense of time, but the real-time aspect of the long take makes you stressfully aware of the seconds ticking by.” The length of the shots gives a measure of time, which creates suspense as well.

“When editing is used thoughtfully, creatively and musically it not only produces a powerful sensual experience but also contributes to our deeper understanding of film”. For example, in the film Requiem for a Dream by Darren Aronofsky, quick cuts of widening pupils [2], needles and quick camera movements empower the “dramaturgical and emotional effect” [3]. The addiction is shown through repetitive visuals, a constant pattern that creates a rhythm.

Walter Murch writes in “In the Blink of an Eye” that emotion is the most important criterion when cutting footage. From emotion the criteria “descend with the story, rhythm, eye-trace, two-dimensional plane and lastly, three-dimensional plane. […] Murch prioritizes three of them; emotions, story, and rhythm.” [4] Catching reactions and making the audience feel along [5] will keep a film in the audience’s memory.

Rhythm helps create emotion by generating “full immersion” and “keeping the flow of storytelling uninterrupted”. [6]


[1] https://filmmakerfreedom.com/blog/rhythm-in-film [01.07.2021]

[2] Cf. Danijela Kulezic-Wilson: The Musicality of Narrative Film, p. 47

[3] Danijela Kulezic-Wilson: The Musicality of Narrative Film, p. 47

[4] https://unknownreel.home.blog/2019/09/17/a-perspective-on-film-editing-in-the-blink-of-an-eye-book-analysis/ [01.07.2021]

[5] Cf. https://filmmakerfreedom.com/blog/rhythm-in-film [01.07.2021]

[6] Danijela Kulezic-Wilson: The Musicality of Narrative Film, p. 46