ERM – Eccentric rotating mass motors

The oldest method of generating haptic feedback is based on a motor with a small weight mounted eccentrically on its shaft. The force is generated in the two axes perpendicular to the motor's shaft. ERMs used to be found in every smartphone and produce the characteristic vibration. The amplitude (strength) is determined by the frequency (rotation speed) of the motor, so the two cannot be controlled independently. ERMs are driven by direct current.
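The coupling of amplitude and rotation speed can be illustrated with a quick sketch (my own illustration with made-up example values, not from the text above): the centripetal force of the eccentric mass grows with the square of the angular velocity, F = m·r·ω².

```python
import math

def erm_force(mass_kg, eccentricity_m, rpm):
    """Centripetal force of an eccentric rotating mass: F = m * r * omega^2."""
    omega = rpm * 2 * math.pi / 60  # angular velocity in rad/s
    return mass_kg * eccentricity_m * omega ** 2

# Hypothetical ERM: 1 g mass with a 2 mm offset
f_low = erm_force(0.001, 0.002, 6000)
f_high = erm_force(0.001, 0.002, 12000)
print(round(f_high / f_low, 1))  # doubling the speed quadruples the force
```

This is why an ERM cannot produce a strong but slow vibration: raising the amplitude always raises the frequency as well.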
LRA – Linear resonant actuators
In LRAs, a voice coil presses a magnet against a spring. When an alternating voltage is applied, the resulting oscillating movement produces vibrotactile feedback along a single axis. The feedback from an LRA is more targeted and cleaner compared to ERMs, which is why they have captured most of the market. An LRA is tuned to a specific frequency determined by its internal spring (the resonant frequency). This makes it possible to control the vibration amplitude without affecting the frequency, at least to a certain degree.
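The resonant frequency of such a spring-mass system follows the standard formula f = (1/2π)·√(k/m). A short sketch with hypothetical values (not taken from any datasheet):

```python
import math

def lra_resonant_frequency(spring_constant_n_per_m, moving_mass_kg):
    """Resonant frequency of a spring-mass system: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(spring_constant_n_per_m / moving_mass_kg) / (2 * math.pi)

# Assumed values: a 1 g moving mass on a 250 N/m spring
print(round(lra_resonant_frequency(250, 0.001)))  # -> 80 (Hz)
```

Driving the coil near this frequency yields the strongest vibration, which is why LRAs are specified for one nominal frequency.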
If you remove the membrane from a loudspeaker, you are left with the bare voice coil. Mounted on a surface, it converts an input signal into acoustic or, in this case, tactile feedback. Voice coils are very similar to LRAs and also require an alternating voltage.
A close relative of the LRA is the solenoid. Unlike the LRA, it does not rely on an oscillating movement; instead, it accelerates a mass until it hits a mechanical end stop. A spring then pushes the mass back to its origin. Solenoids are driven by direct current and, depending on their size, generate a very high impact force.
A mix of an LRA and a solenoid are accelerated rams, or linear transducers. They are larger in size and generate single impulses as well as vibrations by accelerating and stopping an internal mass in two directions by means of an electromagnetic field. In the case of the Tac-Hammer (Nanoport), one side has a mechanical end stop for a clicking sensation, while the opposite side has a magnetic end stop for a softer feedback. They are usually driven by an alternating voltage.
As part of the methodology, I held an interview with a student who has been studying sign language in Austria for three years to become an interpreter. The 21-year-old woman became interested in sign language six years ago. After watching YouTube videos of deaf people signing various vocabulary, she realized she wanted to learn the language, even though there was no one in her immediate surroundings who was hard of hearing or deaf. She was especially fascinated by the movements, which looked to her like an artistic dance, by the variety the language gains through facial expression and posture, and by the fact that it is so incredibly different from the spoken language she knew. To become proficient at signing, she attended a course at the Urania in Graz. When the course came to an end, she continued practicing with a private teacher so that she could finally take part in the selection process for the sign language bachelor's program, which she has now been studying for six semesters.
Very interesting was how she described her own learning process during her studies. At the beginning of her beginner course, she had to write down the vocabulary the lecturer was signing in order to learn it later at home. At first she did not know what to focus on or which movements are more common than others, so she simply wrote the movements down the way she had seen them and thought she would later understand, like "the left hand does this and the right one does that while the thumb of the left one does this…", but then could not remember the exact movements at home. As some lecturers do not provide the students with prerecorded videos of the vocabulary, the students must record the signs themselves in order to learn them. After some time, she learned that there are ways to transcribe the gestures more comprehensibly: classifying the movements by dividing them into components and answering questions like "in which direction do the palms point?", "is the movement of the hands symmetrical?", "which form does the hand have?", "what do the fingers do?", "are the shoulders involved in the movement?" and more.
In the beginning, the students spoke the words out loud while signing them and structured the sentences according to the grammar of the spoken language. After some time they stopped speaking alongside the signing and started to structure the signs according to proper sign language grammar. She also mentioned that the current first-year students no longer vocalize the signs at all; they sign right away and structure their sentences correctly from the beginning.
She explained that when she has to do homework or tasks for her studies, the students record themselves while signing sentences and vocabulary. To catch every detail, they have to set up the camera so that their whole upper body is visible. If there are movements outside the visible area, they either have to turn themselves to enable a view from the side or turn the camera itself. The lecturers who provide their students with videos record themselves from the front and, if needed, from the side as well, when a part of the movement is hidden between or behind the hands or other parts of the body.
One of the lecturers once advised her, after examining one of her tasks, to show the sign for a timeline from the other perspective: when she tried to explain at which point in the course of the day she had eaten or taken a break, it was not possible to see the space between the hands which represented the timeline, as it was hidden behind her hands. This showed that it is also possible to vary the sign itself and to show it from the other side.
For her it is common to borrow and search through physical dictionaries, which mainly consist of verbatim descriptions; some of the digital ones include photographs as well as videos. A few examples are LedaSila https://ledasila.aau.at/, spreadthesign https://www.spreadthesign.com/de.at/search/, signdict https://signdict.org/about and more. If you want to search by movement parameters, because there is no specific word or sentence to describe the vocabulary or meaning, the only help you will get when searching for these "special gestures" ("Spezialgebärden", like for example "not knowing how to start") is ögs Gänsefüßchen https://sites.google.com/site/oegsgaensefuesschen/, as it is the only site offering the possibility to search by descriptions of movements or gestures. Ögs states that "special gestures" are a category that does not exist from a linguistic point of view. These signs are described and highlighted as "untranslatable".
Difficulties she recognized over time are that there are many dialects even within Austria itself, and that signs in general can change quickly, so that communicating with deaf people her own age and signing with elderly people in a retirement home are completely different experiences, as the signs differ. Another point she mentioned is the lack of teaching materials: it is not easy to get material, and self-study is rather difficult in sign language. A study colleague of hers quit studying sign language after one year because he could not manage to imitate the movements, and it was not possible for him to train the spatial imagination required. She stated that some lecturers describe online teaching for study beginners in the first semesters, currently unavoidable in times of corona, as very difficult because it is only a two-dimensional experience.
When I told her about my ideas regarding the application, its possible features, and its visualization, she said that she liked the idea of an application for learning sign language. She can especially imagine using it in the vocabulary-learning phase, because in her eyes contact with native signers is an absolute necessity for reaching a good level of signing in the later stages of learning. Furthermore, she stated that a feedback feature is very important if there is no person next to you telling you whether a gesture is done wrong. Regarding the area that should be visible for gesturing in general, she showed me a few gestures that are not restricted to the main area from the hips to above the head, as she thinks the system might not be able to track some of them. For example, she showed me the sign for "eyebrow", where you actually slide your fingers along your own eyebrows, which could be difficult when wearing glasses; likewise the sign for "curious" (a touch on the bridge of the nose), as well as "glasses" or anything around the eyes and in the face that you physically touch to refer to it. Such signs could be hard to reach under glasses on the one hand, and on the other hand hard for the system to track and give feedback on if they are performed above the head or under the glasses.
There are classifiers, for example spatial classifiers, as in "passing the house with the bicycle", where you show your own position on the bicycle and how you move forward through space by pointing out positions in the area in front of your body while signing the house, the bicycle, and the passing. After you have signed "house" once, it serves as a box in the spatial area that you later simply refer to as a point or box in space. In the sense of "where it is located and in which direction something goes", one sometimes does not even show the sign for "house" at the beginning, because it occurred in the previous sentences and it is obvious from context that the house is meant.
Regarding tracking, she cannot imagine how certain scenarios (like "going to grandma's house but stopping at the bakery on the way", or pointing at friends standing nearby without signing their names again) and other classifiers could be tracked correctly. She thinks it will take many deaf people to help evolve the tracking systems and bring them to an adequate level of accuracy, so testing is indispensable. As an example, she mentioned that many deaf people cannot even understand what avatars are signing or trying to communicate, because they cannot relate the movements to their own style of signing, and the facial expressions of the avatars are too difficult to read, although they are decisive for the context. She and her study colleagues talked a lot about the facial expressions and naturalness of avatars. In their eyes, it is necessary to test avatars with many deaf people first, to help developers adjust the avatars' appearance so that deaf people can understand them. This would be useful, for example, for announcements at train stations (delays) or other short-notice information, which currently is in most cases not communicated to deaf people at all.
This interview was very helpful for reassuring myself of the outcome of my research from the first semester and this one, as many points that came up during it confirmed the results of my literature and internet research. It helped me find out whether there is a need for my application idea in the first place, and which aspects I must think about when developing it. As I did not have much knowledge about the process of formal education through universities or organizations that offer courses, I gained new insight into the phases of teaching and the methodological approach. I can also get more interview partners, as she offered to connect me with her study colleagues, especially one who is currently writing a bachelor's thesis on the topic of avatar appearance and comprehension, which will be great input for my own thesis: I originally imagined using avatars, but after her explanation I am now considering whether another visualization would be more suitable and understandable. In general, I plan to focus next semester on the visual appearance of the application and how I will structure it. I could send possible visualizations to her study colleagues for evaluation, as I see the visualization as a key factor in the willingness to use the application: it has to be understandable and aesthetic while being helpful.
The disappointing results from the user research in VR therapy brought me to a crisis point in my research, where I had to decide which direction would be best for my topic and what I wanted as its result. During this rethinking process, I tried different approaches to get onto a good path for developing the project. The first decision was whether or not to stay within the topic of mental health. The second was how I want to help, and which way is the best to do it. The last was to choose a new starting point for developing the project. All of this took weeks, and the result is something I am happy about.
As I described in the last topic of my thesis research, I did not get the best results during my first round of user research, which prompted me to rethink the topic. During this process, I realized that I did not want to leave the field of mental health. But it was difficult to find a way to help people without being directly involved in medicine. Being part of medicine would have meant certification requirements, medical regulations, and, at that moment, uncertainty about which users would actually use the app. This decision moved me from the treatment of mental illness towards its prevention. At the same time, it opened the way to alternative methods.
As mentioned before, alternative methods to prevent anxiety, stress, and depression are often used by people to keep their mental state positive. There are various such methods, like sports, hobbies, religion, and more. Each method helps in a different way, and each has a different efficiency and effect on our mental health. After some research, I decided to focus on meditation, because it shows the best results in preventing problems and improving people's mental health. As the article Effect of Transcendental Meditation on Employee Stress, Depression, and Burnout: A Randomized Controlled Study puts it: "Studies indicate that practice of TM reduces the psychological and physiologic response to stress factors, including decreased sympathetic nervous system and hypothalamic-pituitary-adrenal axis, and reductions in elevated cortisol (stress hormone) levels" (Perm, 2014).
Meditation is a practice with hundreds of years of history and a diverse cultural background. Each kind of meditation has a different effect on our minds and bodies, which is important to consider. The meditation process itself is simple to teach and to learn; what is hard is staying consistent with the practice. Meditation brings multiple health benefits, but it is important to structure the practice so that you get exactly what you are looking for. In the next part, I will explain how I aim to achieve this through an app.
At the beginning of the Des&Res class last semester, I decided to start researching mental health. The main topic of the research was VR for mental health, a methodology that has gained prestige in recent years due to advances in technology. During my research on this topic, I concentrated on investigating how it works and what its main fields of application are; for this, I read various medical articles related to these topics. In a second step, I looked into the history of VR, how long it has been used in the medical field, and for whom. Finally, I focused on the design of a VR interface and prototype, including user research, where I found something that changed the orientation of my topic.
At the start of this research, I thought that VR therapy for mental health could be a key aspect during COVID-19. With this idea in mind, I began researching VR for psychotherapy with a focus on mental health; the goal was a better environment for treating specific kinds of mental issues. During my research, I found out that VR can be useful for treating anxiety, stress, and depression. As Nigel Whittle said, "VR offers the opportunity to develop more personalized therapeutics, especially in mental healthcare. It is already being used to treat PTSD, phobias, and psychiatric conditions such as conversion disorder and showing excellent results." (WHITTLE, 2020). VR therapy showed more results during the last year of the pandemic, yet some users still do not trust this kind of technology, even though VR therapy has been on the market for around 20 years.
VR therapy started around 1950 for treating different kinds of phobias. The treatment used multiple sensors to let the user experience the same sensation that provokes the phobia. But it was not until the beginning of the 2000s that VR therapy was considered useful. In the article The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders, the authors mention that "The first study to formally investigate the efficacy of VR-based exposure therapy (VRE) focused on the treatment of acrophobia and results suggested that VRE was effective" (Rothbaum, Bunnell, Sae-Jin, & Maples-Keller, 2017). This early study encouraged the technology's continued development within medicine. After this series of studies, VR therapy was used in the military to treat PTSD and help soldiers recover from missions. Today, VR therapy is managed mainly by three big companies: Limbix, Psious, and OxfordVR. All of them focus on VR therapy, but their UI/UX is, in my opinion, a little bit dated and hard to use. This brings us to the next step: VR design and user research.
In the last part of last semester, when I had finally settled on my path, I decided to research design for VR and to conduct user research as well. I thought designing for VR would be hard; surprisingly it was not, since VR interfaces and design are still based on the same ideas as a normal smartphone or computer application. However, VR design has one important thing to be careful about: sound. VR sickness is one of the major issues in therapy, and it is usually provoked by a mismatch between the sound and the environment. To prevent this, sound designers use 8D sound, which helps users locate themselves in the environment and prevents the sickness from occurring.
The user research I then began turned out to be a disaster and quite disappointing. I interviewed 8 psychotherapists, and most of them saw this topic as negative rather than useful. The main reason was that they thought it would mean more learning and more work for them. Even after I told them about the studies and their results, the doctors did not change their minds. They also expected these therapies to be really expensive. This put me at a crisis point in my research, where I decided to change my path to something else that was still within mental health.
As I got to know hand tracking through the HoloLens 2 and a little through one of my lectures, "Interaction Design", I wanted to try out tracking with the Leap Motion Controller. The Leap Motion Controller from the company Ultraleap is an optical hand tracking module which captures the movements of hands and fingers even when they do not move parallel to each other. It is used for the development and deployment of various applications.
I borrowed a Leap Motion Controller from the media center at the FH and downloaded the accompanying software from the website.
You plug in the controller via USB and install the software. There are various possibilities to choose from when trying out the Leap Motion. As my topic of learning sign language deals with viewing and tracking mainly (though not only) hands, I wanted to see how I can interact with it and which movements can be tracked precisely. With the setup information for the Unity engine given in one of the many helpful YouTube videos, you can open Unity after the installation. In the program, you install the XR Plugin Management via the package manager to prevent problems when downloading the packages for the Leap Motion Controller. You then import the Unity packages provided by Ultraleap into the assets folder and can try out the different examples.
Try-out and conclusion
It was very helpful for me to try out specific movements to better understand how systems track hands and which parts of the hand provide the most valid information for a system that could possibly translate them into sign language.
I learned that the parallel tracking of both hands works very well, but the controller has some difficulties tracking precisely when the signs are too complex or involve many overlapping fingers. After just randomly moving and changing the positions of my hands and fingers, I tried to fingerspell my first name, which consists of five letters. The controller tracked three of them without any problems, but for the other two it could not recognize the position of the thumb or index finger, which is why it showed a different gesture than the one I was performing. I tried to readjust by turning the controller itself to another angle, as well as turning my hands around to present the gesture from the other side, but it always fell back to the wrongly tracked gesture.
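Why a single mis-tracked finger flips the recognized letter can be shown with a toy classifier (my own sketch; the curl values and letter templates are invented, and this is not how the Leap Motion software works internally). Each hand pose is reduced to one "curl" value per finger, and the nearest template wins, so an occluded thumb is enough to land on the wrong letter:

```python
import math

# Hypothetical, simplified templates: each letter is a list of fingertip
# "curl" values between 0 (extended) and 1 (fully bent), one per finger.
TEMPLATES = {
    "A": [0.2, 1.0, 1.0, 1.0, 1.0],  # thumb out, fingers curled
    "S": [0.9, 1.0, 1.0, 1.0, 1.0],  # thumb curled over the fist
    "B": [0.8, 0.0, 0.0, 0.0, 0.0],  # flat hand, thumb tucked
}

def classify(curls):
    """Return the template letter with the smallest Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda letter: dist(curls, TEMPLATES[letter]))

# A cleanly tracked "A" is classified correctly...
print(classify([0.25, 0.95, 1.0, 1.0, 0.9]))  # -> A
# ...but an occluded thumb (read as curled) flips the result to "S".
print(classify([0.85, 0.95, 1.0, 1.0, 0.9]))  # -> S
```

This mirrors my experience above: the letters that failed were exactly those where the thumb or index finger was hidden from the sensor.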
Heuristics for LeapMotion Interaction
As the Leap Motion developers already pointed out a couple of years ago, it is important to develop applications with a focus on the following points, which I can now relate to and understand better after trying out the Leap Motion:
1. consistency in tracking: working constantly on accuracy and consistency of tracking by having multiple people perform actions, motions and gestures
2. ease of detection: create a concept of how easily the motions are detected, and which obvious conditions define a motion while separating it from other movements around it that could also be detected
3. occlusion: the controller should be able to detect the environment, but if the sensor is covered by physical objects, tracking is inaccurate or not possible at all. If motions involve occluded parts, the system cannot see a part of the hand and makes assumptions based on the available data. A diagnostic visualizer can help test the detectable areas around the field of view to prevent occlusion.
4. ergonomics: improve posture and the working environment, not a physical object. Affordances and gestures must be adjusted to the movements of the human body so that the user is neither harmed nor strained, keeping the environment and comfortable positions in mind.
5. transitions: create a concept for the interaction with the application. Every interaction should be defined and distinct, so that the system can detect the gesture and the user can memorize it more easily. If actions have similar results, slightly similar interactions are acceptable; otherwise they should differ. Whether the user can perform an action in the air easily depends on where the motion begins and ends in the space in front of the body. Awkward switches should therefore be minimized by implementing an "initialization" or "resting" pose that initializes actions.
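The resting-pose idea can be sketched as a tiny state machine (my own illustration; the state and pose names are made up and not part of any Leap Motion API). An action only fires once the hand has passed through the resting pose, which prevents accidental triggers during transitions between gestures:

```python
# Minimal sketch of an "initialization pose" gate; names are hypothetical.
REST, ARMED = "rest", "armed"

class GestureGate:
    def __init__(self):
        self.state = REST

    def update(self, pose):
        """Feed the currently detected pose; return True when an action fires."""
        if pose == "none":
            self.state = REST            # hand lost: require re-initialization
        elif self.state == REST and pose == "resting_pose":
            self.state = ARMED           # hand seen in the neutral pose
        elif self.state == ARMED and pose.startswith("gesture_"):
            self.state = REST            # fire once, then require the pose again
            return True
        return False

gate = GestureGate()
print(gate.update("gesture_swipe"))  # False: not armed yet, ignored
print(gate.update("resting_pose"))   # False: hand armed via resting pose
print(gate.update("gesture_swipe"))  # True: action fires exactly once
```

Requiring the neutral pose between actions also makes each interaction's beginning well defined, which helps both detection and memorability.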
6. feedback: consider how to provide feedback from the application to the user performing the gesture, because the lack of hardware-based physical feedback in motion-based interactions can leave the user unaware of the current state of the application and the system. The user interface should communicate three things regarding the user's interaction: Where am I now? Where do I need to be to complete the action? How far and in what way do I need to move?
With the program Unity I can eventually build the whole setup of a possible application, as I informed myself last semester and this one about the development of the final product. There are many possibilities for doing so, which is why it is necessary to first develop a concept and design the experience and interactions beforehand.
In this blog entry, I am going to analyse a few apps that I consider the best examples for my project, with the purpose of seeing why they are successful in their market. Looking through these apps will also help me to be critical of a possible app design and to think about what could be the best fit for my project.
1. Headspace

Headspace is one of the most well-known meditation apps out there.
“There are hundreds of guided meditations, mini-meditations, sleep sounds, SOS meditations for emergencies, meditations for kids and animations to help you better understand meditation,” says Lindsey Elmore, PharmD, a pharmacist turned wellness expert (she’s known as “The Farmicist”).
2. Calm

The Calm app lets you tailor your meditation practice: it provides guided sessions ranging in length from 3 to 25 minutes. And with topics from calming anxiety to gratitude to mindfulness at work—as well as sleep sounds, nature sounds, and breathing exercises—you can really choose your focus. “There are new meditations every day, progress trackers, and seven-day and 21-day programs for beginners,” Elmore says.
3. Insight Timer
Experts across the board agree that Insight Timer is primo when it comes to choosing a meditation app.
“This app has many of the most experienced mindfulness teachers on it, and allows you the freedom to pick and choose depending on how long you have to practice, what style you’d like (e.g. body scan, loving-kindness, anxiety/stress-reducing, etc.), or just set a timer and sit without guidance,” Tandon says.
The app is a go-to because, in addition to the variety of guided meditations, it has a tracker that allows you to chart your progress and earn badges that keep you coming back for more.
4. Aura

Fans of Aura like it for its daily meditations, life coaching, nature sounds, stories, and music, which are all personalized based on the mood you select when you open the app. There’s also an option to track your moods and review patterns in how you feel, and to set reminders to breathe and take mindfulness breaks throughout the day.
5. Sattva

Sattva is a mindfulness app that draws its meditations from ancient Vedic principles. In addition to 6-minute-plus guided meditations, the app features “sacred sounds, chants, mantras and music by Sanskrit scholars.”
Sattva is perfect for anyone looking to get more in touch with the history and origin of meditation in addition to starting their own practice.
Indepth Sound Design is a sound design channel on YouTube that deals with the philosophy and techniques of sound design. To this end, examples from real films are shown and explained. Indepth Sound Design describes itself as a treasure trove of educational sound deconstructions, audio stem breakdowns, and other sonic inspiration. The channel was launched by Mike James Gallagher.
Examples of sound design deconstruction:
This example covers the film Independence Day, breaking the sound down into its various layers. The scene is 3:45 long and is played four times.
First with only the sound effects, then only dialogue and Foley, then only the music, and finally everything together in the final mix.
The second example is a 1:09 min scene from Terminator 2. This scene is likewise shown separately with the layers sound FX, ambience, Foley, music, and final mix.
Afterwards, the film's sound designer Gary Rydstrom also talks about the creation process of the sound design for this scene.
The goal of week 6 is once again to gain followers. That is why I am focusing this week mainly on Reels. Reels are short video postings (30 seconds maximum) that were copied from TikTok. You can find more information about Reels in a separate blog entry. There will also be stories again, not daily, but very often, to bring followers back to my profile.
In week 3, I asked my followers whether they would like a tutorial on creating an animal portrait in Photoshop. 90% voted yes, so I decided to post a tutorial in the form of a Reel. The poll post in week 3 had also gained a lot of reach and collected a total of 64 likes, making it the post with the most reach and the most likes on my profile so far.
I posted the Reel on 25 June at around 9 p.m. (actually much too late for my followers) and it received 1,248 views overnight. That is very strong for my profile with its now 131 followers. The Reel also received 34 likes and a total of 6 comments (as of 26 June).
Update (as of 30 June): 1,347 views, 40 likes, 6 comments
Because of the Reel's big success, I decided to post another Reel the following day. It shows work with acrylic ink in water, which creates a beautiful effect. The video was produced as part of a university course.
843 views, 46 likes, 4 comments (as of 15 July)
Using haptic feedback to signal to the users of a product that, for example, an action was carried out successfully can significantly influence the product as well as the brand experience.
As a comparison, two pens and two camera models are shown.
Above, two products of the same category are pictured; however, they exhibit different characteristics in terms of haptics.
The "Sharpie" pen is known for the distinct "click" it gives as feedback when the cap is closed. In this case, this creates a more professional and higher-quality impression and gives the product character.
The adjustment dials of the SIGMA camera feel very spongy, and the feedback feels undefined. On the Leica Q2, the controls have a very mechanical, precise, and clear character, which certainly fits the branding and the price level.
Do you catch yourself recognising whose track/song you are listening to when you’re just shuffling randomly through Spotify, even before you look at the artist name? This is because successful music producers have a way to make sure you can instantly recognise them. This is quite beneficial, because it imprints into the listener’s mind and makes them more likely to recognise and share the artist’s future releases with their network.
So how do musicians/music producers do this? There are some key points that can easily help you understand this occurrence better.
1) There’s no shortcut!
You know the 10,000-hour rule? Or, as some have put it in the musical context, 1,000 songs? There's really no way around it! This applies to any skill in life, not just music. However, the end consumer usually never knows how many songs an artist never releases. Those are all practice songs. For every release that you see out there, there might be hundreds of other unreleased songs done prior to it. If the musician just keeps creating instead of getting hung up on one song, they will eventually grow into their own unique way of structuring, as well as editing, songs.
2) They use unique elements
So many producers/musicians use samples from Splice, which leads to listeners feeling like they have already heard a song even if they haven't. Songs get lost in the sea of similar musical works, but every now and then, something with a unique flavour pops up and it's hard to forget. Musicians who make their own synth sounds, play exotic instruments, or even build their own instruments are the ones that stick around in our minds.
3) Using the same sound in multiple songs
This is the easiest and most obvious way in which musicians/producers show their own style. You might hear a similar bass or drum pattern in multiple songs/tracks from the same musician. In rap/hip-hop, you will also hear producer tags (e.g. "DJ Khaled" being said at the beginning of each track).
4) Great Musicians/Producers don’t stick to one style/trend
Music has existed for so long and progressed so fast lately that it is hard to stand out, especially if you stick strictly to genres. Nowadays, great musicians come up with their own subgenres or mix a few different ones into a musical piece. You won't ever really remember the musicians or producers who are just following in the footsteps of the greats who already established a certain genre. If you can't quite put your finger on why you like someone's music so much and why they sound "different", they are probably experimenting with a combination of different genres.