HOW MUSEUMS USE MUSIC TO DRAW IN NEW AUDIENCES

In a previous article I talked about businesses drawing in new audiences by creating memorable soundscapes in their offices, but what is going on in museums? Soundscapes and sound installations are increasingly being used to present important and relevant topics and exhibitions.

Giles Martin, the son of Sir George Martin, the Beatles’ renowned producer, created an experience with a new Dolby Atmos mix that brought visitors back into the world that was created in Abbey Road Studios. This event showed visitors a new angle on the Beatles’ greatest hits. Its success led to a rise in the popularity of immersive sound experiences, with which we can explore both art and history in a fascinating new way. These experiences offer a unique and interesting way to learn about the past.

The Grammy Museum in LA opened a new exhibition called the “Mono to Immersive Experience Room”, in which visitors can relive the most famous Grammy performances. This audio-visual experience takes visitors on a journey from 19th-century phonographs to the immersive sound of today. One can explore the most important parts of music history and experience how the technology progressed over time.

One of the most innovative experiences was created by the London Design Museum and ran from April to July 2020. It was designed to “transport” visitors into the nightclubs of cities with a famous rave culture (e.g. Berlin, Detroit, Paris), using a combination of music, strobes and flashing lights to create a club atmosphere. The experience, called “Electronic”, used all of these techniques to let visitors explore the art, design and photography that captured and shared the atmosphere of electronic music.

The exhibit features works from some of the most famous techno artists, like Jeff Mills and Ellen Allien, along with the BBC’s Radiophonic Workshop. It also features photography by the famous German photographer Andreas Gursky, as well as the work of French DJ Laurent Garnier. The following video gives a bit more detail, with interviews and insight into the projects mentioned here.

Resources

https://www.soundoflife.com/blogs/design/how-museums-use-music-draw-new-audiences

CMYK Project Work – Updates

The semester is coming to an end and by now all projects should be wrapping up, with results already visible. For part 1 of my project experiment, I have finished collecting data from 30 test subjects. As mentioned in a previous post, my experiment required a training process: the subjects all watched a training video for 20 days. Before the first video and after the last one, they were asked to take a synesthesia test on Synesthesia.com. Afterwards, I sent one of my 4 tracks to each of the participants, making sure to get approximately the same number of answers for each track.

So far I have not interpreted all the data, but based on the synesthesia test I am getting mixed results, from a very slight improvement to a significant one. After analysing the answers to the questionnaires, I will see how much that change in answers affects the real-world answers. I presume that the real-world presentation will have slightly better results than the test itself.

In this article I want to explain what I did with the video and how I presented the songs. Each song was called “Track” with a number extension from 1 to 4. The numbers were shuffled so that they do not correspond to the order of the songs. Here is my training video to show you what it looked like. The subjects were instructed to watch it once a day for the duration of the test, in the hope that it would train their brains for better synesthetic perception.

The Aphex Face

Aphex Twin is one of the most influential and important contemporary electronic musicians.

He is famous for his experimentation and for his unique style, in which he combines elements from all kinds of genres, mainly electronic ones, using atypical structures and rhythms to create a song.

His face, grinning or distorted, is a recurring theme on his album covers, in his music videos and even in his songs. He said it began as a response to techno producers who concealed their identities:

“I did it because the thing in techno you weren’t supposed to do was to be recognized and stuff. The sort of unwritten rule was that you can’t put your face on the sleeve. It has to be like a circuit board or something. Therefore I put my face on the sleeve. That’s why I originally did it. But then I got carried away.”

He even put it in a song. If we open the second track of the Windowlicker release, commonly known simply as “Formula”, in a spectrogram visualizer, we can see his face with its typical grin.

Even though this looks like something really hard to do, at that time (1999) there was already a Windows program called Coagula that could transform any picture into sound with minimal effort. Aphex Twin himself used a Mac program called MetaSynth to create his images.
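The basic idea behind these tools is simple: treat the image as a spectrogram and resynthesize it, mapping image rows to frequencies, columns to time, and pixel brightness to amplitude. Below is a minimal Python sketch of that technique (not the actual Coagula or MetaSynth code); the file name “face.png”, the frequency range and the libraries used (Pillow, NumPy, soundfile) are assumptions made purely for illustration.

```python
# Minimal sketch: paint an image into a spectrogram by driving
# one sine oscillator per image row.
import numpy as np
from PIL import Image
import soundfile as sf

SR = 44100                 # sample rate
FRAME = 0.05               # seconds of audio per image column
F_LOW, F_HIGH = 200, 8000  # frequency band the image is painted into

img = np.asarray(Image.open("face.png").convert("L"), dtype=float) / 255.0
img = img[::-1]            # flip so the top of the image becomes the highest frequency

rows, cols = img.shape
freqs = np.linspace(F_LOW, F_HIGH, rows)      # one oscillator per row
t = np.arange(int(cols * FRAME * SR)) / SR    # time axis for the whole clip

audio = np.zeros_like(t)
for r in range(rows):
    # amplitude envelope: brightness of this row, stretched over time
    env = np.repeat(img[r], int(FRAME * SR))[: t.size]
    audio += env * np.sin(2 * np.pi * freqs[r] * t)

audio /= np.max(np.abs(audio))                # normalize to avoid clipping
sf.write("face_in_spectrogram.wav", audio, SR)
```

Opening the resulting WAV file in any spectrogram viewer should reveal the picture again, which is essentially what happens with the hidden face in “Formula”.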

He used the same technique on the first track of the release, hiding another image in its spectrogram.

Other artists did something similar around the same time; here are a couple of examples:

  • Plaid – 3 Recurring

  • Venetian Snares – Look

Resources

Wikipedia – Aphex Twin

Coagula – https://www.abc.se/~re/Coagula/Coagula.html

Bastwood – The Aphex Face

Atau Tanaka

Atau Tanaka is a composer and performer of live computer music and, at the same time, a professor of Digital Media at Newcastle University.

He was born in Tokyo but soon moved with his family to America, where he studied biochemistry and, later, music.

At Stanford he studied computer music, and he then continued his research at IRCAM in Paris. Later he was Artistic Ambassador for Apple France, then a researcher at Sony Computer Science Laboratory Paris, and an Artistic Co-Director of STEIM (Studio for Electro-Instrumental Music) in Amsterdam.

Tanaka had the good fortune of attending lectures by John Cage, who, according to him, was one of the first musicians to have a strong conceptual approach.

As Tanaka says, “He made contact with the visual art world”.

His research is in embodied musical interaction, at the intersection of human-computer interaction and gestural computer music performance.

This includes the use of physiological sensing technologies, such as muscle tension captured in the electromyogram (EMG) signal, and machine learning analysis of this data.

On the other hand, he studies user experience through ethnographic methods of participatory design, in which workshopping, scenario building, and structured brainstorming lead to an understanding of a medium’s affordances in bottom-up, emergent ways.

EAVI (Embodied AudioVisual Interaction) is a research group to which Atau belongs, focused on embodied interaction with sound and image. The group carries out research across topics including motion capture, eye tracking, brain-computer interfaces, physiological bio-interfaces, machine learning, and auditory culture.

Another group that features Atau Tanaka is Sensorband, a trio of musicians using interactive technology – ultrasound, infrared, and bioelectric sensors – as musical instruments. The other two members are Edwin van der Heide and Zbigniew Karkowski.

Atau plays the BioMuse, a system that tracks bioelectric signals such as the electromyogram (EMG), translating electrical signals from the body into digital data.

Edwin plays the MIDIconductor, machines worn on his hands that send and receive ultrasound signals, measuring the hands’ rotational positions and relative distance.

Zbigniew activates his instrument by the movement of his arms in the space around him. This cuts through invisible infrared beams mounted on a scaffolding structure. 

They bring a visceral, physical element to interactive technologies.

He has also created many installations. One of them is Global String, launched at Rotterdam’s fifth biennial Dutch Electronic Arts Festival and made in cooperation with the Ars Electronica Center in Linz.

Global String is a multi-site network music installation, connected via the internet. It is a musical instrument in which the network is the resonating body of the instrument, by means of a real-time sound synthesis server.

As he stated: 

“The concept is to create a musical string (like the string of a guitar or violin) that spans the world. Its resonance circles the globe, allowing musical communication and collaboration among the people at each connected site. The installation consists of a real physical string connected to a virtual string on the network. The real string (12 millimetres in diameter, 15 metres in length) stretches from the floor diagonally up to the ceiling of the space. On the floor is one end-the earth. Up above is the connection to the network, to one of the other ends somewhere else in the world. Vibration sensors translate the analog pulses to digital data. Users strike the string, making it vibrate.”

Here are some links to other works:

Bondage (Sonification and remix of a photo by Araki)

Biomuse (Biosignal sensor instrument performance)

Myogram (8 channel sonification of muscular corporeal states)

References

ataut.net

Music Hackspace – Atau Tanaka: Making music with muscle sensors

Goldsmiths University of London – Prof Atau Tanaka

Cafe Oto – Atau Tanaka

V2_Lab for the Unstable Media – Global String

V2_Lab for the Unstable Media – About Atau Tanaka

EUCLIDEAN RHYTHM – Part II – Practice

In this second part on the Euclidean rhythm, I would like to briefly introduce the Music Pattern Generator and XronoMorph.

Music Pattern Generator

The Music Pattern Generator is a free web application for creating musical rhythm patterns, programmed by Wouter Hisschemöller. The patterns are represented by animated graphics that make it easy to create and understand complex polyrhythms.

The Music Pattern Generator transmits its patterns as MIDI data, so the app can easily be integrated with other music software and hardware. It is available as a web application as well as an installer, so it can also be used as a regular desktop application.

All the information needed for downloading and using it can be found on the programmer’s GitHub.

Here is a short introduction to the Music Pattern Generator

The MPG in combination with external hardware

Download:

Online Application

Download

User Guide

XronoMorph

XronoMorph is a free macOS and Windows app for creating multi-layered rhythmic and melodic loops (hockets). Each rhythmic layer is visualized as a polygon inscribed in a circle, and each polygon can be constructed according to two different mathematical principles: perfect balance and well-formedness (also known as MOS). These principles generalize polyrhythms as well as additive and Euclidean rhythms. In addition, the rhythms can morph seamlessly into one another, and even irrational rhythms without a regular pulse can easily be constructed. The patterns can be exported both as MIDI and as audio files.

Introduction to XronoMorph:

XronoMorph: An Introduction

Perfectly Balanced Rhythms

“A rhythm or scale can be represented as points on a circle. The rhythm is perfectly balanced when the mean position (centre of gravity) of these points lies at the centre of the circle.

Complex, perfectly balanced rhythms can be created by adding together any number of simpler, perfectly balanced ‘elementary’ rhythms. XronoMorph includes a variety of these elementary, perfectly balanced patterns.”

perfectly balanced rhythms
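To make the balance criterion concrete, here is a minimal Python sketch that checks it numerically: each onset is placed on the unit circle, and the rhythm counts as perfectly balanced if the centroid of those points lies (numerically) at the origin. The function name, the tolerance and the example rhythms are my own illustrative choices, not part of XronoMorph.

```python
# Minimal sketch: test "perfect balance" of a rhythm given as onset
# positions within one cycle (0 <= p < 1).
import numpy as np

def is_perfectly_balanced(positions, tol=1e-9):
    """True if the onsets, placed on the unit circle, have their centroid at the origin."""
    angles = 2 * np.pi * np.asarray(positions, dtype=float)
    centroid = np.exp(1j * angles).mean()   # mean position of the points
    return abs(centroid) < tol

# A square (4 evenly spaced onsets) is perfectly balanced ...
print(is_perfectly_balanced([0, 0.25, 0.5, 0.75]))   # True
# ... and so is a triangle combined with a rotated square (7 onsets, no regular pulse).
triangle = [0, 1/3, 2/3]
square = [0.05, 0.30, 0.55, 0.80]
print(is_perfectly_balanced(triangle + square))      # True
# The Euclidean E(3,8) "tresillo" (onsets at 0, 3/8, 6/8) is NOT perfectly balanced.
print(is_perfectly_balanced([0, 3/8, 6/8]))          # False
```

This also illustrates why perfect balance and Euclidean evenness are different principles: a maximally even rhythm like the tresillo can still have an off-centre centroid.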

Well-formed Rhythms

“Well-formed rhythms (also called MOS) contain two beat sizes arranged so that the rhythm is as even as possible. Each of these rhythms has a parent rhythm that splits the original beats into two new sizes. In this way, a complex, nested hierarchy of rhythms can be created.

Well-formed rhythms are a superset of the Euclidean rhythms. For example, they also include rhythms without a regular pulse.”

Well-formed rhythms

Sources:

Hisschemöller, Wouter. “Wouter Hisschemöller Music Pattern Generator v2.1”. Accessed 15 December 2021. https://www.hisschemoller.com/blog/2019/music-pattern-generator-v2-1/.

Hisschemöller, Wouter. “Music Pattern Generator”. Accessed 15 December 2021. https://www.hisschemoller.com/mpg/.

Hisschemöller, Wouter. Music Pattern Generator. JavaScript, 2021. https://github.com/hisschemoller/music-pattern-generator.

Milne, Andy. “XronoMorph: Loop generator”. Dynamic Tonality, n.d. https://www.dynamictonality.com/xronomorph.htm.

Milne, Andrew, Steffen Herff, David Bulger, William Sethares, and Roger Dean. “XronoMorph: Algorithmic generation of perfectly balanced and well-formed rhythms”, 2016. https://www.researchgate.net/publication/302345911_XronoMorph_Algorithmic_generation_of_perfectly_balanced_and_well-formed_rhythms.

SOUNDS OF

Sounds of is a format in which a wide range of artists collect Foley sounds at special places and then go into the studio to turn them into a new song.

Sounds of Trailer

About Sounds of:

The format was originally initiated and hosted by Fynn Kliemann and the Kliemannsland. The first videos were still recorded in the old Kliemannsland studio. By now the format is hosted by Nisse Ingwersen and belongs to Funk, the content network of the German public broadcasters ARD and ZDF, but it is still produced by the Kliemannsland.

The episodes always follow the same pattern: the artists go to a special place of their own choosing and record its atmosphere as Foley sounds with a field recorder. Afterwards you get an insight into the production of the new songs and can listen to personal stories. The viewers also regularly get tips for their own productions.

Community:

The great thing about the format is that, as a viewer, you can download all the sounds yourself. From the episode with Severin Kantereit, for example, you get unique recordings from Ethiopia.

The viewers themselves are encouraged to produce new tracks using only the sounds from the episodes. These can be sent to the Kliemannsland and are then published on the SoundCloud playlist.

Sounds of Äthiopien

In one episode, Nisse even went to the Elbphilharmonie in Hamburg with Deichkind:

Sounds of Elbphilharmonie Hamburg

Sources:

Funk. “soundsof”. funk. Accessed 8 January 2022. https://www.funk.net/channel/soundsof-12144.

Kliemannsland GmbH. “Sounds of”. kliemannsland. Accessed 8 January 2022. https://www.kliemannsland.de/blogs/soundsof?page=3.

soundsof. “soundsof – YouTube”. Accessed 8 January 2022. https://www.youtube.com/.

EUCLIDEAN RHYTHM – Part I – Theory

The Euclidean rhythm, or rather the algorithm behind it, belongs to the family of polyrhythms. It is based on a concept by the mathematician Euclid of Alexandria; in principle, it is about finding the greatest common divisor. Here is a sound example right away:

Wouter Hisschemöller – Music Pattern Generator v2.1

A short history:

As early as 2,300 years ago, the mathematician Euclid of Alexandria wrote his most important work, the “Elements”. In it he also describes the concept for finding the greatest common divisor of two numbers. This concept is known as the Euclidean algorithm.

Rhythm and mathematics:

In 2004, the Canadian computer scientist Godfried T. Toussaint discovered the relationship between the Euclidean algorithm and rhythms in music. With it, many well-known rhythms can be calculated, for example from rock ’n’ roll and from African and South American music.

What is a ‘good’ rhythm? One produced by an algorithm that distributes the beats as evenly as possible over a given time interval.

The interesting thing about Euclidean rhythms is that they are not endlessly repetitive but varied, and therefore more exciting for the listener. They strike a good balance between monotonous and overly complex while distributing all the beats evenly.

How does the Euclidean rhythm work?

Two numbers are always given, e.g. 13 and 5. The smaller number is then repeatedly subtracted from the larger one until no remainder is left.

Tip: The Euclidean algorithm works particularly well when you use prime numbers.

For the Euclidean rhythm, the journey is the destination, so to speak. From the numbers we can read off the beats (written as 1) and the rests (written as 0). It works like this:

13 – 5 = 8  [1 1 1 1 1]  [0 0 0 0 0 0 0 0]

8 – 5 = 3  [1 0]  [1 0]  [1 0]  [1 0]  [1 0]  [0 0 0] – 5 zeros are moved forward to sit behind the 1s, 3 remain at the end

5 – 3 = 2  [1 0 0]  [1 0 0]  [1 0 0]  [1 0]  [1 0] – 3 zeros are moved forward to sit behind the [1 0] sequences

3 – 2 = 1  [1 0 0 1 0]  [1 0 0 1 0]  [1 0 0] – the two [1 0] sequences are moved forward to sit behind the [1 0 0] sequences. Now we have the finished rhythm

2 – 1 = 1 – since a rhythm is cyclic, it would no longer make sense to keep shifting from here on

1 – 1 = 0
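The same procedure can also be written as a small program. Below is a minimal Python sketch (a variant of the Bjorklund grouping algorithm, which formalises the steps shown above); the function name and the example calls are my own choices, not code from any of the cited sources.

```python
def euclidean_rhythm(pulses, steps):
    """Distribute `pulses` beats (1s) as evenly as possible over `steps` slots (rests are 0s)."""
    if pulses <= 0 or steps <= 0 or pulses > steps:
        raise ValueError("need 0 < pulses <= steps")
    # Start with `pulses` groups of [1] and `steps - pulses` groups of [0].
    ones = [[1] for _ in range(pulses)]
    zeros = [[0] for _ in range(steps - pulses)]
    # Repeatedly move the remainder groups forward, behind the leading groups,
    # exactly like the manual steps above.
    while len(zeros) > 1:
        n = min(len(ones), len(zeros))
        ones, zeros = (
            [ones[i] + zeros[i] for i in range(n)],
            ones[n:] if len(ones) > n else zeros[n:],
        )
    return [x for group in ones + zeros for x in group]

print(euclidean_rhythm(5, 13))  # [1,0,0,1,0, 1,0,0,1,0, 1,0,0], the pattern derived above
print(euclidean_rhythm(3, 8))   # [1,0,0,1,0,0,1,0], the classic "tresillo" pattern
```

Running it for 5 and 13 reproduces the pattern from the worked example, and the output list can be used directly as a gate sequence for a step sequencer.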

For easier understanding, you can visually see and try out how the beats and rests are distributed on this page: https://dbkaplun.github.io/euclidean-rhythm/

There is also a list of number combinations and their rhythm patterns here: http://www.iniitu.net/Euclidian_Erd%C3%B6s_Deep_Aksak_rhythms.html

Finally, two examples to listen to:

Sources:

Ableton AG. “Don’t DJ über polymetrische, polyrhythmische, zirkuläre Musik | Ableton”. Accessed 15 December 2021. https://www.ableton.com/de/blog/dont-dj-moving-in-circles/.

bettermarks. “Euklid von Alexandria”. Accessed 15 December 2021. https://de.bettermarks.com/mathe/euklid-von-alexandria/.

Toussaint, Godfried. “The Euclidean algorithm generates traditional musical rhythms”. In Proceedings of BRIDGES: Mathematical Connections in Art, Music and Science, 47–56, 2005. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.1340&rep=rep1&type=pdf.

W., Bernd. “Euklidische Rhythmen – Ist die Mathematik ein guter Percussionist?” Tropone Sounds (blog), 18 October 2018. http://tropone.de/2018/10/18/euklidische-rhythmen-ist-die-mathematik-ein-guter-percussionist/.

Birdman’s Soundtrack

In this blog post I would like to write about a fairly unusual movie soundtrack: the soundtrack of Birdman or (The Unexpected Virtue of Ignorance).

Birdman is a 2014 American comedy-drama directed by Alejandro G. Iñárritu.

Here is a short summary, just to give an idea of what the film is about:

“Riggan Thomson is an actor who played the famous and iconic superhero ‘Birdman’ over 20 years ago. Now, in his middle age, he is directing his Broadway debut, a drama called ‘What We Talk About When We Talk About Love,’ an adaptation of a Raymond Carver story. With the help of his assistant and daughter Sam and his producer Jake, he tries to bring the show to its premiere, even as a talented actor he has hired, Mike Shiner, behaves erratically, internal conflicts grow between him and the rest of the cast, his utmost efforts fail to win over the critics, and the unexpected voice of his old character, Birdman, pushes his sanity to the edge before opening night.” [1]

Besides the movie itself, what I find really interesting is the soundtrack.

There are several classical music pieces, mainly from the 19th century (such as Mahler, Tchaikovsky, Rachmaninov, Ravel) and several jazz compositions by Victor Hernández Stumpfhauser and Joan Valent. But those are just “outlines” of the real thing.

Most of the score, however, consists entirely of solo jazz drum performances composed and played by Antonio Sánchez.

It’s a rather unusual choice for a film, as drums are a pure percussion instrument: no harmony, (almost) no melody.

But why? As the director said:

“The drums, for me, were a great way to find the rhythm of the film … In comedy, rhythm is king, and not having the editing tools to determine time and space, I knew I needed something that would help me find the internal rhythm of the film.”

When the director contacted Sánchez and offered him to work on the film, the composer felt a little unprepared and surprised, as he put it:

“It was a scary proposition because I had no benchmarks on how to achieve this. There is no other film that I know of with a soundtrack like this.” Sánchez hadn’t worked on a movie before either.

He first composed “rhythmic themes” for each of the characters, but Iñárritu preferred spontaneity and improvisation, telling him: “Man, this is absolutely the opposite of what I was thinking. You’re a jazz musician. I want you to have a completely jazzy approach, improvised, with nothing too preprogrammed. I just want you to react to the story.” [2]

After Sánchez visited the set for a couple of days to get a better idea of the film, he and Iñárritu went to a studio to record some demos. During these sessions the director first talked him through the scene; then, as Sánchez improvised, he guided him by raising his hand to indicate an event – such as a character opening a door – or by describing the rhythm with verbal sounds. They recorded about seventy demos and, once shooting was finished, put them into the rough cut.

Iñárritu liked the way the soundtrack complemented the action, but not how the drums actually sounded. Having been recorded in a New York studio, the audio was extremely crisp and clear, not quite the mood they wanted for a film steeped in tension and dysfunction.

So Sánchez headed to Los Angeles to re-record the tracks.

They wanted “rusty, out-of-tune drums that hadn’t been played in years”, so Sánchez purposely degraded his kit: “I didn’t tune the drums, I used mismatched heads, stuck duct tape on the heads to make them weaker, and put things on the cymbals to make them sound a little broken. I also stacked two and three cymbals on top of each other, metal on metal. Instead of a sustained sound, you get a dry, trashy one. It worked a lot better.”

Iñárritu also pushed his sound design team.

In one scene, Riggan (Michael Keaton) and Mike (Edward Norton) pass a drummer on the sidewalk outside the theater. The drum sounds change multiple times during the sequence – first when Keaton leaves the quiet of the theater and steps onto the New York City street, then again as he and Norton approach and move past the drummer (Smith). Iñárritu wanted the volume of the sidewalk drummer to rise and fall as Keaton and Norton walked by, and they achieved it in the most authentic way possible.

“We actually brought the drums out onto the street near the studio,” Sánchez recalls. “There were a couple of sound guys a block away with mics that had really, really long cables. I started playing, and they walked the whole block, right past my drums, and kept walking to the next block. Then they came back. That’s how Alejandro approaches his work. Anybody else probably would have just turned the volume up and down.”

“The movie fed on the drums, and the drums fed on the imagery”.

The official soundtrack was released as a CD (77 min) in October 2014, and as an LP (69 min) in April 2015.

It won the Grammy Award for Best Score Soundtrack for Visual Media.

Resources

[1] Birdman Plot Summary https://www.imdb.com/title/tt2562232/plotsummary

[2] Neil Janowitz – “Birdman” composer drumming out the film’s soundtrack

[3] Wikipedia – Birdman

[4] Wikipedia – Birdman (film score)

The effect of an artificial sonic ambiance in office buildings

Have you ever thought about the sounds inside office spaces in different buildings? How does a concrete space feel in comparison to one of glass and wood? What if the sounds you heard were actually not generated by the space itself?

I’m on the 20th floor of an office building on Wall Street. One of the offices inside is equipped with about a dozen speakers, some sitting on plinths, others mounted on the ceiling. Aric Marshall, of audio software company Spatial, is leading a demonstration of a new soundscape designed for the workplace. Holding his phone, he says “Just listen,” and touches the screen. I ready myself to hear the soundscape come out through the speakers, but just the opposite happens. The sound I hadn’t processed turns off, plunging the room into a cold, uncomfortable, almost metallic silence. Without me realizing it, a soundscape had been playing all along—in this case, a muted, almost imperceptible pitter-patter of rain falling on the roof of a wooden cabin—coating the concrete office with a sort of soft, sonic balm.

Nowadays, our senses are bombarded from every side. Companies are competing for our attention any way they can, and many have now started using sound as a marketing strategy. Companies like Abercrombie & Fitch and Mastercard have begun using their own signature soundscapes in stores in order to stick in consumers’ minds.

The article author goes on: “This week, I experienced what an office could sound like if someone intentionally designed it that way. Here, that someone is in fact two companies: sonic-branding studio Made Music Studio and Spatial, an immersive audio-software platform. As companies continue with their quest to lure tenants back into the office, both are betting that bespoke soundscapes can provide a resounding advantage.”

Made Music Studio has been experimenting with implementing different soundscapes for companies, soundscapes that evoke an emotional response and increase immersion for customers. Imagine walking into a hotel that offers a “welcoming ambience”, a “focusing ambience” and an “energising ambience”.

Resources

https://www.spatialinc.com/news/fast-company-i-got-to-hear-what-an-office-can-sound-like-with-the-help-of-sonic-ambiance

Live Performance Environment: Camelot

This might be interesting for all of you who want to do a live performance! 🙂

Setlist manager, digital mixer, software instrument and effects host, PDF sheet music player, multitrack audio player, and a highly advanced patchbay and MIDI router: these are the features of this new live performance software, Camelot, developed by Audio Modeling.

It is also designed not to need a manual: everything is clear, and with just a quick guide you have access to all the features.

As they stated:

“Camelot is an application created to address the most complex live performance needs with a simple and guided workflow. What in the past could only be achieved with a complex configuration of several applications and devices connected together, Camelot achieves in a single comfortable and well-designed workstation. A Song in Camelot gathers all your configurations of hardware instruments and MIDI routing, software instruments and FX instances, sheet music and audio backing tracks.”

It is also quite affordable for all of its features, and there is a free version in which only a few features are missing.

If you are interested, you can take a look here:

https://audiomodeling.com/camelot/overview/