Another application of music in the medical context

The power of music, as we already know, is astounding, and its possible applications in so many different contexts make it all the more fascinating.

Besides the healing qualities that music itself has, there are many technical applications, so to speak, that allow it to work hand in hand with science.

Here I would like to focus on its application in the medical field, in particular in physiotherapy (rehabilitation).

But how and why could it be useful?

Well, classical physiotherapy is sometimes repetitive (boring), far from cheap (due to one-to-one treatments) and cannot really be monitored at home.

So… could we solve these problems using music? Well, the answer is yes.

A project that particularly interests me in this field is PhysioSonic, developed at the IEM in Graz.

Using a tracking system, the human body is detected and its data is used to synthesize and/or transform audio files. This provides the patient with auditory feedback, which allows them to understand whether the exercise was performed correctly or not.

Here is their statement about the project:

“In PhysioSonic, we focus on the following principles: 

  • The tracking is done without a disturbance of the subject, as markers are fixed to an overall suit (and the subject does not, e.g., have to hold a device).
  • In collaboration with a sport scientist, positive (target) and negative (usually evasive) movement patterns are defined. Their range of motion is adapted individually to the subject. Also, a whole training sequence can be pre-defined.
  • The additional auditory feedback changes the perception including the proprioception of the subject, and the movements are performed more consciously. Eyes-free condition for both the subject and an eventual observer (trainer) free the sight as an additional modality. In addition to the target movement, pre-defined evasive movements are made aware.
  • We sonify simple and intuitive attributes of the body movement (e.g., absolute height, velocity). Thus, the subject easily understands the connection between them and the sound. This understanding is further enhanced by using simple metaphors: e.g., a spinning-wheel-metaphor keeps the subject moving and thus 'pushing' the sound; if the subject does not accumulate enough 'energy', the looping sound of a disk-getting-stuck metaphor is used.
  • The sounds are adapted to each subject. Music and spoken text that are used as input sounds can be chosen and thus enhance the listening pleasure. Also, the input sounds change over time and have a narrative structure, where the repetition of a movement leads to a different sound, thus avoiding fatigue or annoyance. (Still, the sonification design is well-defined, and the quality of the sound parameters changes according to the mapping.)”

The tracking system used is the VICON tracking system, which offers spatial resolution in the millimeter range and a temporal resolution and reaction time of less than 10 ms. This means that it detects movements very precisely.

The data is then processed in the VICON system and used in SuperCollider (a platform for audio synthesis and algorithmic composition) to synthesize and process sounds. The system allows very detailed, continuous temporal and spatial control of the sound produced in real time.
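To make the idea of such a mapping more concrete, here is a minimal sketch in Python. It is purely illustrative and not the actual PhysioSonic code (which runs in SuperCollider): simple movement attributes such as absolute height and velocity are turned into sound parameters, and the marker heights are invented example values standing in for the VICON data stream.

```python
# A minimal sketch (not the actual PhysioSonic code, which runs in SuperCollider)
# of the mapping idea: simple, intuitive movement attributes such as absolute
# height and velocity are turned into sound parameters. The tracker_frames list
# stands in for the stream of VICON marker positions (hypothetical values).
import math

def height_to_pitch(height_m, low=0.8, high=2.0, f_low=200.0, f_high=800.0):
    """Map the vertical position of a marker (in metres) onto a frequency in Hz."""
    h = max(low, min(high, height_m))
    return f_low + (h - low) / (high - low) * (f_high - f_low)

def velocity_to_amp(v_ms, v_max=2.0):
    """Map movement speed (m/s) onto an amplitude between 0 and 1."""
    return min(abs(v_ms), v_max) / v_max

# Hypothetical marker heights sampled at 100 Hz (one slow exercise repetition).
tracker_frames = [1.0 + 0.3 * math.sin(2 * math.pi * 0.5 * i / 100) for i in range(200)]
dt = 0.01
for prev, cur in zip(tracker_frames, tracker_frames[1:]):
    freq = height_to_pitch(cur)
    amp = velocity_to_amp((cur - prev) / dt)
    # In the real system these values would drive a synth, e.g. via OSC messages;
    # here we simply print them.
    print(f"freq = {freq:6.1f} Hz, amp = {amp:4.2f}")
```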

[1]

Some videos can be found at https://iem.at/Members/vogt/physiosonic

Currently, this system is set up at the hospital Theresienhof in Frohnleiten, Austria.

Reference:

[1] K. Vogt, D. Pirrò, I. Kobenz, R. Höldrich, G. Eckel – PhysioSonic – Movement sonification as auditory feedback. 2009.

NMT and Alzheimer’s

In the previous article I wrote about neurological music therapy (NMT) and how it works in general. Here I want to give you a small example of how useful it can be in treating patients with dementia.

As dementia progresses, different areas of the brain are damaged, which leads to a loss of the ability to think, process and remember information.

Many areas are affected, but many others remain healthy, among them the areas where music is processed.

Music can therefore reach and recruit alternative pathways, strengthening weak connections and at the same time bypassing non-functional areas, which improves the recall and recognition of information.

Music is a powerful link with memory, and accessing the still intact part of the person’s brain in the midst of the disease opens the door to other areas of functioning.

Another important factor is the direct connection between hearing a rhythm and producing a movement. This makes music a highly effective treatment tool for maintaining movement skills in people with neurological decline.

“When working with individuals with memory loss, combining movement with familiar music results in full brain activation, promoting engagement, interaction with others, and attention maintenance.

By addressing Alzheimer’s and dementia at the brain level, NMT can positively affect the skills and abilities a person needs to honor his/her identity, meaningfully relate to others, and engage in the highest possible quality of life” [1]

Reference:

[1] Neurologic Music Therapy Services of Arizona – NMT and Alzheimer’s/dementia

Neurologic Music Therapy (NMT)

It is a treatment based on music as therapy that emphasizes specific musical elements (such as rhythm, melody, dynamics and tempo) in the construction of the therapy in order to treat people with neurological disorders.

It is based on neuroscientific research into music perception and production. It also examines how the brain responds to music and harnesses neuroplasticity to help the brain recover from injury in the areas of movement, language and cognition.

Music and rhythm affect multiple areas of the human brain at a subconscious level, which means that rhythm can be used to build connections in the brain and thereby improve its functions. This leads to a more productive and functional life.

There is however a difference between Music Therapy and NMT.

Music therapy addresses broader aspects of patient need, such as emotional, physical and mental well-being.

NMT focuses on the physical effect of music and rhythm on the brain and its connections, delivered through NMT interventions, which are specific, research-based techniques.

Here is an example of an approach to neurological music therapy from the Spaulding Rehabilitation Network:

Their program focuses on three key areas:

  • Speech & Language: Aphasia, Apraxia, Dysarthria, Voice Disorders
  • Sensorimotor: Upper Extremity, Lower Extremity, Gait Training
  • Cognition: Attention, Memory, Initiation, Executive Function, Neglect training

Treatment areas include:

  • Traumatic Brain Injury
  • Acquired Brain Injury
  • Stroke
  • Huntington’s Disease
  • Parkinson’s Disease
  • Cerebral Palsy
  • Alzheimer’s Disease
  • Autism

Therapy sessions vary according to the patients' goals.

When working on speech, they may practice using their voice by singing along with rhythmic guitar playing, or prepare their vocal muscles by playing wind instruments.

When working on attention, they may play a musical instrument while concentrating on staying seated in their chair.

When working on walking or other physical movement, they may move around playing drums on the walking track or in the therapy room.

Many other forms are described in detail here: https://www.nmtsa.org/who-we-serve

References

Neurologic Music Therapy Services of Arizona – Neurologic Music Therapy

Spaulding Rehabilitation Network – Neurologic Music Therapy

Fight fire with… sound!

A relatively new invention could change the way we douse fires and, in particular, improve our approach to them.

Whenever something large burns (just think of a forest or building fire), the risk to human lives is truly enormous, and finding an alternative way to combat this problem would mean saving many lives.

Water, chemicals, flame inhibitors, fire extinguishers and aerosol suppressants are the most common ways to put out a fire, but they are still not ideal and cannot be used in every situation.

A revolutionary way to put out a fire is with sound waves. But how? Sound waves push oxygen away from the source of a flame and spread it over a larger surface. In this way the “fire combustion triangle” is broken: the three elements necessary for combustion are heat, fuel and oxygen.

The method does not rely on other agents such as water or chemicals; the frequencies of these sounds lie between 30 and 60 Hz.

The concept of using sound waves had already been studied by teams at West Georgia University and the US Defense Advanced Research Projects Agency, but without any success.

In 2015, two final-year students at George Mason University in Virginia, USA, Seth Robertson and Viet Tran, created an acoustic fire extinguisher as their senior design project.

It was originally created to put out small fires in the kitchen, but the current goal is to fight large fires.

They tested different sound frequencies on small fires and found that the lowest frequencies, 30-60 Hz, produced the desired effect.

The final prototype consists of an amplifier and a cardboard collimator that focuses the sound. It is a portable, mains-powered device weighing only 9 kg, able to quickly extinguish small alcohol-fuelled fires.
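Just to make the frequency range tangible, here is a small sketch that writes a few seconds of a 40 Hz test tone (inside the 30-60 Hz band mentioned above) to a WAV file, using only Python's standard library. Playing it through ordinary speakers will of course not extinguish anything; the prototype needed a powerful amplifier and the collimator to focus the sound.

```python
# Illustrative only: generate a 40 Hz sine tone and write it to a WAV file.
import math
import struct
import wave

SAMPLE_RATE = 44100
FREQ_HZ = 40.0        # within the 30-60 Hz band mentioned in the text
DURATION_S = 5.0
AMPLITUDE = 0.8       # fraction of full scale

with wave.open("low_tone_40hz.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for n in range(int(SAMPLE_RATE * DURATION_S)):
        sample = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
        frames += struct.pack("<h", int(sample * 32767))
    wav.writeframes(bytes(frames))
```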

They are currently working on further testing and refinement and studying the feasibility of applying the principle elsewhere. For example, it could be used in space, where fire is a serious hazard and sound waves can be directed without gravity.

The device contains no coolant and may not be able to prevent re-ignition after the sound has been turned off, but they are trying to improve it.

Resources:

PhysicsWorld: Dousing flames with low-frequency sound waves. 2015

DellTechnologies: Sound Waves to Fight Wildfires: How Does it Work?. 2020

Music with Wiimote?

Wii Remote, or Wiimote, is the controller of Nintendo’s Wii console, which has occupied an important place in the gaming industry since 2006.

It’s a wireless controller with motion sensing and traditional controls.

The Wii Remote uses only an accelerometer to detect movement. Other functions, such as using it as a pointer, rely on an infrared camera in the controller that tracks the lights of a “sensor bar” placed above or below the television.

The accelerometer is an Analog Devices ADXL330, which can measure accelerations of up to three times the force of gravity (±3 g) on each axis.

It not only detects motion, but also reports the angle at which the Wiimote is held when it is not in motion – and not just one angle, but three: vertical, horizontal and rotational.
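As an illustration of how an accelerometer can report such angles, here is a minimal Python sketch covering two of them: when the controller is (almost) at rest, gravity dominates the measured acceleration, so the direction of that vector gives the pitch and roll tilt. The sample values are made up, not real Wiimote readings.

```python
# A minimal sketch of turning a three-axis accelerometer reading into tilt
# angles when the controller is (nearly) at rest. The inputs are hypothetical
# example values, not actual Wiimote output.
import math

def tilt_from_accel(ax, ay, az):
    """Return (pitch, roll) in degrees from accelerations given in units of g."""
    pitch = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    roll = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return pitch, roll

print(tilt_from_accel(0.0, 0.0, 1.0))   # lying flat   -> (0.0, 0.0)
print(tilt_from_accel(0.5, 0.0, 0.87))  # rolled about 30 degrees to the side
```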

https://www.youtube.com/watch/ETAKfSkec6A

There are of course many accessories; among them we can find the Wii MotionPlus, which adds a more precise and sensitive gyroscope, a device that measures orientation and angular velocity.

But now… can we use it as a controller for our virtual instrument?

Well, yes.

It normally pairs via Bluetooth, which allows us to connect it easily to our computer.

Of course, we have to convert the Wiimote's data streams into a format that our music software can use, and we have several options for this.

On the Windows platform we can find GlovePIE, a free scripting language from Australian programmer Carl Kenner. It is fascinating for musicians because it includes a set of MIDI commands that work via the Windows MIDI Mapper and also commands for Open Sound Control (OSC), the communication protocol for sound and media processing.

Alternatively, we can access the data with Pure Data (Pd), an open-source visual programming language. We just need to download an external such as WiiSense or wiimote.
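The sketch below shows the same conversion idea in Python, independent of GlovePIE or Pd. The read_accel() function is a hypothetical placeholder (here it just simulates a slow wave) standing in for whatever Bluetooth library actually delivers the Wiimote data; the MIDI side uses the mido package.

```python
# A rough sketch of the Wiimote-to-MIDI idea: accelerometer values are scaled
# to MIDI control-change messages. read_accel() is a hypothetical placeholder;
# MIDI output uses mido (pip install mido python-rtmidi).
import math
import time
import mido

def read_accel():
    """Placeholder for the real Bluetooth read; here we simulate a slow wave."""
    t = time.time()
    return (math.sin(t), math.cos(t), 1.0)   # (ax, ay, az) in g

def to_cc(value_g, lo=-3.0, hi=3.0):
    """Map an acceleration in the +/-3 g range onto a 0-127 MIDI CC value."""
    value_g = max(lo, min(hi, value_g))
    return int(round((value_g - lo) / (hi - lo) * 127))

with mido.open_output() as port:             # default MIDI output port
    for _ in range(500):                      # ~10 seconds at 50 updates/s
        ax, ay, az = read_accel()
        # Tilt on two axes drives two continuous controllers (mod wheel, volume).
        port.send(mido.Message('control_change', control=1, value=to_cc(ax)))
        port.send(mido.Message('control_change', control=7, value=to_cc(ay)))
        time.sleep(0.02)
```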


Resources:

E. Wong, W. Youe, C. Choy: Designing Wii controller: a powerful musical instrument in an interactive music performance system. 2008.

Wikipedia: Nintendo Wii

Sound on sound: Nintendo’s Wii Remote As A MIDI Controller

Wikipedia: Wii Remote 

Tracking the human body?

Well, nowadays, as we all know, it is possible. We see its results in movies, video games and other fields of research but not everyone knows how this process works.

The technology that fuses real life and animation and makes it possible to film someone and transfer their data in computerized form is called Motion Capture (also Performance Capture or Mo-cap).

It is in some ways a descendant of an older animation technique, rotoscoping.

Rotoscoping is an animation technique in which animators trace over footage of moving images, frame by frame, to produce realistic action.

Motion capture is used in military, entertainment, sports, medical, and computer vision and robotics validation applications. In film production and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation.

It started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s and has expanded into other fields as the technology matured.

How does this work? Well, sensors are needed. These are placed all over a person's body to track and record their movements, which are then converted into data that creates a virtual skeleton in real time.

For example, optical systems use data acquired from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections.
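As a rough idea of what that triangulation step looks like, here is a minimal Python/NumPy sketch of linear (DLT) triangulation from two calibrated cameras. The camera matrices and the marker position are invented example values, not taken from any real motion capture system.

```python
# Minimal sketch of optical mocap triangulation: the same marker is seen by two
# calibrated cameras, and its 3D position is recovered by linear (DLT)
# triangulation. All numbers below are illustrative.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Triangulate one marker from two camera views (3x4 projection matrices)."""
    u1, v1 = pt1
    u2, v2 = pt2
    # Each image observation contributes two linear constraints on the 3D point.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize -> (x, y, z)

# Toy example: two cameras with shared intrinsics, one shifted 0.5 m sideways.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0, 1.0])      # marker 2 m in front of the cameras
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))           # ~ [0.2, 0.1, 2.0]
```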

One of the largest motion capture providers in the world is OptiTrack, developed by NaturalPoint, Inc., founded in 1996. It offers high-performance optical tracking, and its product line includes motion capture software and high-speed tracking cameras.

There are many additional accessories, but we may summarize the process like this: the human body is tracked by means of markers, which are identified by cameras, and the data is then processed in dedicated software, Motive.


This system is also used in live performances, enabling the human body to control electronic instruments through its movements.

An example is the composition Bodyscapes by V. Moar, G. Eckel and D. Pirrò.

References

Science World: How Motion Capture works 

Wikipedia: Motion Capture

OptiTrack: https://optitrack.com/

Wikipedia: Rotoscoping

How to Live Stream a music performance

We are experiencing hard times: the music industry is suffering a lot of damage and, due to the pandemic, all concerts have been cancelled. Thanks to the internet, there are some solutions we can adopt to survive these times; obviously they cannot make up for this loss completely, yet they can help us.

One of these is live-streaming performances. Bands can use this moment to promote themselves better on the web and, even if it is not possible to play live, organizing a good live stream can be an important step in the development of the band itself.

This can only work if it is done well, taking all precautions and using the right equipment.

But what exactly is live-streaming? Well, it is a method of broadcasting audio and video over the internet, which allows viewers to tune in from different devices (phone, computer, TV). This medium lets you reach more fans than ever and is also immediate: there is no need to record, edit, print or distribute materials.

What do you need?

Live-streaming can be as simple as hooking up a single camera to your computer, or it can be an elaborate process requiring a considerable amount of specialty equipment. When it comes to the video, an essential setup consists of two or three cameras, a video mixer, a reliable computer, and a live-streaming service that fits your needs.

But remember: nothing ruins good video faster than poor audio. The right microphones, positioned correctly, can make all the difference in the world. There are two basic directions you can go with audio: taking an auxiliary feed from the mixing board or using an all-in-one video mixer.

Auxiliary Send

If you simply stream the main mix from the board, your performance may not sound its best, because often only the quieter sources, such as vocals and keys, are miked through the PA, leaving drums and guitars out of the mix.

You could set up a couple of room mics, but a far better solution is to set up more instrument mics and send them to a dedicated auxiliary output that generates a mix tailored to your video stream.

All-in-one video mixer (the best solution)

These combine video cameras, other video sources and multichannel audio in a single unit.

Video mixers such as the Roland models (https://www.sweetwater.com/c1005–Roland–Video_Mixers) also typically include USB connections specifically intended for streaming and recording. 

What to look for in a video mixer:

  • Number and type of video camera inputs
  • USB and additional video inputs for slide shows (lyrics) and other sources
  • USB for recording and streaming
  • Audio I/O for mixing and sound reinforcement
  • Additional video processing such as transitions and effects

Streaming Platforms

There are plenty of great options out there, and some of the more popular names in live-streaming include DaCast, Ustream, Livestream, and Wowza – all considered standards. 

While there are free services, such as YouTube Live or Facebook, they won't offer you as much control over your content as a commercial service. Commercial services cost anywhere from $20 to $100 per month for entry-level plans, with the average being around $50 per month.

These platforms actually vary quite a bit, with some specializing in security while others offer more or less online storage.

Sources:

Sweetwater – Live Stream a Performance

Audiovisual Aesthetics of Sound and Movement in Contemporary Dance

I want to share with you this interesting experiment I just discovered on the relationship between dance and music.

“How do movement and sound combine to produce an audiovisual aesthetics of dance? We assessed how audiovisual congruency influences continuous aesthetic and psychophysiological responses to contemporary dance. Two groups of spectators watched a recorded dance performance that included the performer’s steps, breathing and vocalizations, but no music. Dance and sound were paired either as recorded or with the original soundtrack in reverse, so that the performers’ sounds were no longer coupled to their movements. A third group watched the dance video in silence. Audiovisual incongruency was rated as more enjoyable than congruent or silent conditions. In line with mainstream conceptions of dance as movement-to-music, arbitrary relationships between sound and movement were preferred to causal relationships, in which performers produce their own soundtrack. Performed synchrony Granger-caused changes in electrodermal activity only in the incongruent condition, consistent with “aesthetic capture”. Sound structures the perception of dance movement, increasing its aesthetic appeal.”

Method

Participants

Thirty-four participants (9 male, 25 female), ranging in age from 18-51 years (M = 27.41; SD = 7.3), volunteered to take part. Musicality, measured on the Goldsmiths Musical Sophistication Index v1.0 (GMSI) musicality scale, ranged from 48-100 (M = 74.71; SD = 17.85). Musicality involves a wide range of musical behaviours and musical engagement, such as understanding and evaluation (Müllensiefen, Gingras, Musil, & Stewart, 2014). The GMSI is a standardised instrument that provides a measure of musicality in the general population, which ranges from 18-126 with an average score of 81.58 (SD = 20.62) across the general population (Müllensiefen, Gingras, Musil, & Stewart, 2013). Dance experience was assessed using a custom-made questionnaire, which asked people how many years of dance experience they had and how often they watched recorded dance. Fifty-three percent of participants reported previous formal dance training, but none were professional dancers; see table 1 for a demographic breakdown by experimental group. Participants volunteered in response to a Goldsmiths participation page on Facebook, and all participants signed a written consent form before taking part in the experiment. Participants were signed up to experimental groups based on their availability; allocation of experimental group to condition was randomized using numbered sealed envelopes containing randomization cards for either the congruent, incongruent, or silent conditions. The groups included 10, 14 and 10 participants in the congruent, silent, and incongruent groups respectively. All participants were included in the physiological and qualitative analyses; however, the tablet data of 8 participants was lost due to technical problems, leaving only 8, 10 and 8 in the congruent, silent, and incongruent groups for the enjoyment analyses.

Sources:

Howlin, Vicari, Orgs – Audiovisual Aesthetics of Sound and Movement in Contemporary Dance

Theremin

Since I wrote about the Terpsiton last week, I find it interesting to also talk about the instrument it derives from: the theremin.

It is not only one of the oldest electronic musical instruments, but also the first musical instrument that you do not even touch while playing.

Although similar signal generators existed elsewhere, this sound generator was the first electronic musical instrument based on high-frequency oscillating circuits to gain wide popularity.

The theremin has been around since the 1920s. After its developer Leon Theremin (Lev Sergeyevich Termen) had finished his military service, he worked at the Physico-Technical Institute in Petrograd, where he developed a device for measuring the electrical resistance of gases.

Since this measuring device works on the heterodyne principle, sounds could be produced and modulated by the approach of the human body. Thus the theremin was born. Leon Theremin gave a small concert with his new device. "Termen plays the voltmeter," his colleagues joked about it. 🙂

In October 1921 he played for Lenin, who made sure that Leon could present his invention all over the Soviet Union in order to promote the "electrification" of the country.


In 1926 Termen received permission to present his instrument abroad as well. The first public demonstration in Germany took place in the autumn of 1927 as part of the International Music Exhibition in Frankfurt; further appearances followed in Berlin, Paris and London.

With the disappearance of its inventor, who from 1938 was held in the Soviet Union for several decades, musicians and composers largely lost interest in the instrument. In the 1950s, however, it managed to conquer niches in film music and among hobbyists.

The theremin works on the principle of a capacitive distance sensor. The player's hand, which through its own mass acts as a ground, alters the LC resonant circuit of an oscillator via the respective electrode ("antenna"): it influences both the frequency and the quality factor of the resonant circuit by changing its capacitive component and its damping.

High-frequency components are removed by a low-pass filter. The signal can then be amplified and sent to a loudspeaker.
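A small numerical sketch of this heterodyne principle: the hand adds a tiny capacitance to the pitch oscillator's LC circuit, its resonant frequency drops slightly, and the audible tone is the difference between this variable oscillator and a fixed reference oscillator. The component values below are illustrative, not taken from a real theremin schematic.

```python
# Illustrative numbers for the heterodyne (beat-frequency) principle.
import math

L = 2.0e-3          # inductance in henries (illustrative value)
C0 = 50.0e-12       # circuit capacitance in farads with the hand far away

def lc_frequency(L, C):
    """Resonant frequency of an LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f_reference = lc_frequency(L, C0)                   # fixed oscillator, ~503 kHz here

for hand_pf in (0.0, 0.2, 0.5, 1.0, 2.0):           # extra capacitance from the hand
    f_variable = lc_frequency(L, C0 + hand_pf * 1e-12)
    beat = f_reference - f_variable                 # what survives the low-pass filter
    print(f"hand adds {hand_pf:3.1f} pF -> audible tone ≈ {beat:7.1f} Hz")
```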

Only one resonant circuit with variable frequency is used. A band-pass filter ensures that the level of the signal changes depending on the frequency; an envelope detector then generates the control signal for the amplifier.

The electronics enthusiast Robert Moog built his first theremin in the 1950s, when he was still in high school.

Since the theremin has repeatedly been used in film scores – for example in Spellbound (Alfred Hitchcock, 1945), The Lost Weekend (Billy Wilder, 1945), The Day the Earth Stood Still (Robert Wise, 1951), Ed Wood (Tim Burton, 1994) and Mars Attacks! (Tim Burton, 1996) – it is mostly associated with "alien", surreal and ghostly sounds and with glissandi, tremolos and vibratos.

It is also widely used in various kinds of music, e.g. pop, jazz, rock and others.

Terpsiton

The Terpsiton is an almost forgotten musical instrument that is played with the whole body, without being touched.

It originated as a further development of the theremin, created in collaboration between the dancer and filmmaker Mary Ellen Bute, the theremin's inventor Leon Theremin and the musician and dancer Clara Rockmore.

In principle it works like the theremin: the electrical capacitance of the human body, placed in an electric field, influences an oscillator.

The Terpsiton turned the theremin's rod antenna into an insulated metal plate located underneath a large dance platform. The further one leans down towards the platform while dancing, thereby increasing the electrical capacitance, the lower the frequency of the tube oscillator and the deeper the resulting tone; standing on tiptoe, on the other hand, raises the frequency of the oscillator and thus the pitch. The tones are therefore modulated by dance movements, with pose and sound varying in synchrony. Even lowering an arm or raising a leg produces an audible change in the sound. Volume and timbre are controlled by a second person outside the Terpsiton.

In later models Leon Theremin added lights of different colours, which allowed the dancer to also see which note they were currently playing. Nowadays the whole technology is hidden in a small box that is mounted, together with a larger antenna, above the dancer.

Some Terpsitons were equipped with additional devices for playing background music, to which the dancers performed.

It is, however, much harder to play than the theremin, to the point that for most people it is almost uncontrollable.

Theremin, himself an enthusiastic dancer, had been carrying the idea of an instrument playable by dancing around with him for several years (from the mid-1920s) before he set about actually building the Terpsiton.

He began the concrete development for a planned demonstration of his instruments at New York's Carnegie Hall on 1 April 1932. Theremin searched long and unsuccessfully for a dancer able to play the Terpsiton to his standards, and had to turn to the theremin virtuoso Clara Rockmore, who not only had perfect pitch but was also very agile.

In the press the Terpsiton was repeatedly described as a failed or not yet mature project, which does not quite correspond to the facts.

In the USA, however, a dance group did work with several Terpsitons with some success. Lavinia Williams, a member of this group, was married to Leon Theremin.

Apart from the original device, two further Terpsitons built by Leon Theremin himself are known. He built one in 1966-1967 at the Moscow Conservatory; it has since been lost. He built a second one in the 1970s for Lidia Kawina. It has survived to this day and is the only known existing Terpsiton that Theremin built personally.

Andrej Smirnov | Terpsitone

N/O/D/E – Terpsitone

Recently, a group of hobbyists in Berlin rebuilt a Terpsiton – a project less to touch than to listen to, watch and dance to.

The builders added a few extensions: a vibrato generator and a quantization circuit that can optionally restrict the tones to a major scale.
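Such a quantization stage can be illustrated with a few lines of Python: a continuously varying frequency, as the body-controlled oscillator would produce it, is snapped to the nearest note of a major scale. This is only a sketch of the concept, not the group's actual (analogue) circuit.

```python
# Illustrative pitch quantization: snap an arbitrary frequency to the nearest
# note of a major scale (here C major by default).
import math

A4 = 440.0
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]      # semitone offsets within an octave

def quantize_to_major(freq_hz, root_midi=60):    # root_midi=60 -> C major
    """Return the frequency of the nearest major-scale note."""
    midi = 69 + 12 * math.log2(freq_hz / A4)     # continuous MIDI note number
    best = None
    for octave in range(-1, 10):
        for step in MAJOR_SCALE_STEPS:
            note = root_midi + 12 * octave + step
            if best is None or abs(note - midi) < abs(best - midi):
                best = note
    return A4 * 2 ** ((best - 69) / 12)

for f in (430.0, 450.0, 500.0, 523.0):
    print(f"{f:6.1f} Hz -> {quantize_to_major(f):6.1f} Hz")
```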

Sources:

Andrej Smirnov – Temperiertes Tanzen

Wikipedia – Terpsiton

Das Terpsiton: Musik aus dem Äther in XXL