Interview – sign language student

As part of the methodology, I held an interview with a student who has been studying sign language in Austria for three years to become an interpreter. The 21-year-old became interested in sign language six years ago. After watching YouTube videos of deaf people signing various vocabulary, she realized she wanted to learn the language even though there was no one in her immediate surroundings who was hard of hearing or deaf. She was especially fascinated by the movements, which looked to her like an artistic dance, by the variety the language gains through facial expression and posture, and by how incredibly different it is from the spoken language she knew. To become good at signing, she attended a course at the Urania in Graz. When the course came to an end she continued practicing with a private teacher, so that she could finally take part in the selection process for the bachelor's program in sign language, which she has now been studying for six semesters.

It was very interesting to hear how she described her own learning process during her studies. At the beginning of her beginner course she had to write down the vocabulary the lecturer was signing in order to learn it later at home. At first she did not know what to focus on or which movements are more common than others, so she simply wrote the movements down the way she thought she would understand them later, for example “left hand does this and the right one does that while the thumb of the left one does this …”, but then could not remember the exact movements at home. As some of the lecturers do not provide the students with prerecorded videos of the vocabulary, the students have to record it themselves in order to learn it. After some time she learned that there are ways to transcribe the gestures more comprehensibly: by breaking each movement down into components and answering questions like “in which direction do the palms point?”, “is the movement of the hands symmetrical?”, “what shape does the hand have?”, “what do the fingers do?”, “are the shoulders involved in the movement?” and more.

In the beginning the students spoke the words out loud while signing them and structured the sentences according to the grammar of the spoken language. After some time they stopped speaking alongside the signing and started to structure the signs according to sign language grammar. She also mentioned that the current first-year students no longer vocalize the signs at all but sign right away and structure sentences correctly from the beginning.

own image: she signed it as shown on the right but should have done it as shown on the left

She explained that for homework and tasks in her studies, the students record themselves while signing sentences and vocabulary. To capture every detail, they have to set up the camera so that their whole upper body is visible. If there are movements outside the visible area, they either have to turn themselves to enable a view from the side or turn the camera itself. The lecturers who provide their students with videos record themselves from the front and, if needed, from the side as well when part of the movement is hidden between or behind the hands or other parts of the body.

One of the lecturers once advised her, after reviewing one of her tasks, to show the sign for a timeline from the other perspective: when she tried to explain when exactly in the course of the day she had eaten or taken a break, the space between her hands, which represented the timeline, was hidden behind the hands and could not be seen. This showed that it is also possible to vary the sign itself and to show it from the other side.

For her it is common to borrow physical dictionaries and search for vocabulary in them; these mainly consist of verbatim descriptions, while some of the digital ones include photographs as well as videos. A few examples are LedaSila (https://ledasila.aau.at/), Spread the Sign (https://www.spreadthesign.com/de.at/search/), SignDict (https://signdict.org/about) and more. If you want to search by parameters of the movement, because there is no specific word or sentence to describe the vocabulary or meaning, the only help you will get for these “special gestures” (“Spezialgebärden”, like for example “not knowing how to start”) is ÖGS Gänsefüßchen (https://sites.google.com/site/oegsgaensefuesschen/), as it is the only site offering the possibility to search by descriptions of movements or gestures. ÖGS Gänsefüßchen states that “special gestures” are a category that does not exist from a linguistic point of view. These signs are described and highlighted as “untranslatable”.

source: https://sites.google.com/site/oegsgaensefuesschen/home/suchmoeglichkeiten: search by pictures, parameters or correspondences

Difficulties she has noticed over time are that there are many dialects even within Austria itself and that signs in general can change quickly, so that communicating with deaf people her own age and visiting a retirement home to sign with elderly people are completely different experiences, because the signs differ. Another point she mentioned is the lack of teaching material: it is not easy to get material, and self-study is rather difficult when it comes to sign language. A fellow student of hers quit studying sign language after one year because he could not manage to imitate the movements and was unable to train the spatial imagination needed for them. She also said that some lecturers describe online teaching for beginners in the first semesters, currently necessary because of the corona pandemic, as very difficult, since it is only a two-dimensional experience.

When I told her about my ideas for the application, possible features and the visualization at the end, she said that she liked the idea of an application for learning sign language and could especially imagine using it in the vocabulary-learning phase, because in her eyes contact with native signers is an absolute necessity for reaching a good level of signing in the later stages of learning. She also stated that the feedback feature is very important if there is no person next to you telling you whether a gesture is done wrong. Regarding the area that should be visible for gesturing in general, she showed me a few signs that are not restricted to the main area between the hips and just above the head, which she thinks a system might not be able to track. For example, she showed me the sign for “eyebrow”, where you slide your fingers along your own eyebrows, which can be difficult when wearing glasses, as well as the sign for “curious” (a touch on the bridge of the nose) and the sign for “glasses”. Everything around the eyes and in the face that you physically touch to refer to it could be hard to reach under glasses on the one hand, and hard for the system to track and give feedback on, on the other, if the hands are above the head or under the glasses.

There are classifiers, for example spatial classifiers as in “passing the house with the bicycle”, where you show your own position on the bicycle and how you move forward in space by pointing out positions in the area in front of your body and signing the house, the bicycle and the passing. After you have signed “house” once, the house serves as a kind of box in the signing space that you later simply refer to as a point or box. In the sense of “where it is located and in which direction something goes”, one sometimes does not even show the sign for “house” at the beginning, because it occurred in the sentences before and it is obvious from the context that the house is meant.

own image

Regarding tracking, she cannot imagine how different scenarios (like “going to grandma's house but stopping at the bakery on the way” or, when talking about friends standing nearby, just pointing at them without signing their names again) and other classifiers could be tracked correctly, and she thinks it will take a lot of deaf people to help evolve tracking systems to a sufficient level of accuracy, so testing is indispensable. As an example, she mentioned that many deaf people cannot even understand what signing avatars are trying to communicate, because they cannot relate the movements to their own style of signing, and the avatars' facial expressions are too difficult to read even though they are decisive for the context. She and her fellow students have talked a lot about the facial expressions and naturalness of avatars. In their eyes it is necessary to test avatars with many deaf people first, to help developers adjust the avatars' appearance so that deaf people can understand them. This would be useful because avatars could then be used for announcements at train stations (delays) or other short-notice information, which currently is not even communicated to deaf people in most cases.

Conclusion

This interview was very helpful for reassuring myself about the outcome of my previous research from the first and the current semester, as many points that came up in it confirmed the results of my research in the literature and on the internet. It helped me find out whether there is a need for my idea of the application in the first place and which aspects I have to consider when developing it. As I did not know much about formal sign language education at universities or at organizations that offer courses, I gained new insight into the phases of teaching and the methodological approach. I can also find more interview partners, as she offered to connect me with her fellow students, especially one who is currently writing a bachelor's thesis on avatar appearance and comprehension, which will be a great input for my own thesis: I had imagined using avatars in the beginning, but after her explanation I am now considering whether another visualization would be more suitable and understandable. In general, I plan to focus on the visual appearance and the structure of the application in the next semester. I could send possible visualizations to her fellow students for evaluation, as I see the visualization as a key factor in the willingness to use the application at all; it has to be understandable and aesthetic while being helpful.

Sources

https://sites.google.com/site/oegsgaensefuesschen/

https://ledasila.aau.at/

https://www.spreadthesign.com/de.at/search/

https://signdict.org/about

Trying out the LeapMotion

As I already knew hand tracking a little from the HoloLens 2 and from one of my lectures, “Interaction Design”, I wanted to try out tracking with the Leap Motion Controller. The Leap Motion Controller from the company Ultraleap is an optical hand-tracking module which captures the movements of hands and fingers even when they move independently of each other. It is used for the development and deployment of various applications.

own image

Setup

I borrowed a Leap Motion Controller from the media center at the FH and downloaded the accompanying software from the website.

You plug the controller in via USB and install the software. There are various options you can choose from to try out the Leap Motion. As my topic of learning sign language deals with viewing and tracking mainly (but not only) the hands, I wanted to see how I can interact with it and which movements can be tracked precisely. Following the setup information for the Unity engine given in one of the many helpful YouTube videos, you can open Unity after the installation. In the project you install the XR Plugin Management via the package manager to prevent problems when downloading the packages for the Leap Motion Controller. You then import the Unity packages provided by Ultraleap into the assets folder and can try out the different examples.

https://www.youtube.com/watch?v=WwHrXwGyMt8
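Apart from the Unity examples, the tracking data can also be read directly from the Leap service. The following is only a minimal sketch assuming the legacy Python bindings of the older V2 desktop SDK (module and attribute names like Leap.Controller, frame.hands and palm_position come from that SDK and may be packaged differently in current Ultraleap releases); it simply polls frames and prints basic hand data.

```python
# Minimal polling sketch, assuming the legacy Leap Motion V2 Python bindings
# and a running Leap service. Prints palm position and grab strength per hand.
import time

import Leap  # legacy SDK binding; newer Ultraleap SDKs package this differently


def main():
    controller = Leap.Controller()
    print("Waiting for tracking data ...")
    while True:
        frame = controller.frame()  # most recent tracking frame
        for hand in frame.hands:
            side = "left" if hand.is_left else "right"
            pos = hand.palm_position  # millimetres, origin at the device
            print("%s hand  palm=(%.0f, %.0f, %.0f)  grab=%.2f  fingers=%d"
                  % (side, pos.x, pos.y, pos.z,
                     hand.grab_strength, len(hand.fingers)))
        time.sleep(0.1)  # ~10 readouts per second is enough for a quick test


if __name__ == "__main__":
    main()
```

For the fingerspelling test described below, the individual fingertip positions (finger.tip_position in the same bindings) would be the more interesting values to log, since they show exactly where the thumb and index finger confusions happen.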

Try out and conclusion

It was very helpful for me to try out specific movements to better understand how such systems track and which parts of the hand provide the most useful information for a system that is supposed to translate movements into sign language.

I learned that the parallel tracking of both hands works very well, but the controller has difficulties tracking precisely when the signs are too complex and involve many overlapping fingers. After just randomly moving and changing the positions of my hands and fingers, I tried to fingerspell my first name, which consists of five letters. The controller tracked three of them without any problems, but for the other two it could not recognize the position of the thumb or index finger, which is why it showed a different gesture than the one I was doing. I tried to readjust by turning the controller itself to another angle as well as turning my hands around so it could view them from the other side, but it always fell back to the wrongly tracked gesture.

own image

Heuristics for LeapMotion Interaction

As the Leap Motion developers already pointed out a couple of years ago, it is important to develop applications with a focus on the following points, which I can now relate to and understand better after trying out the Leap Motion:

1. consistency in tracking: work constantly on the accuracy and consistency of tracking by having multiple people perform the actions, motions and gestures

2. ease of detection: create a concept of how easily the motions can be detected and what is important to detect, i.e. clear conditions that define the motion and separate it from other movements nearby that could also be detected

source: https://medium.com/@LeapMotion/6-principles-of-leap-motion-interaction-design-98bddff77a53

3. occlusion: the controller should be able to detect the environment, but if the sensor is covered by physical objects, tracking is inaccurate or not possible at all. If motions involve occluded parts, the system cannot see part of the hand and makes assumptions based on the available data. A diagnostic visualizer can help to test a variety of detectable areas around the field of view to prevent occlusion.

4. ergonomics: pay attention to posture and the working environment even though there is no physical object to hold on to. Affordances and gestures have to be adjusted to the movements of the user's body so that they do not harm or strain it, keeping the environment and comfortable positions in mind.

5. transitions: create a concept for the interaction with the application. Every interaction should be defined and distinct, so that the gesture can be detected by the system and is more memorable for the user. If actions have similar results, the use of slightly similar interactions is acceptable; otherwise they should differ. Whether the user can perform an action in the air easily depends on where a motion begins and ends in the space in front of the body. Awkward switches should therefore be minimized, for example by implementing an “initialization” or “resting” pose that initializes actions.

6. feedback: consider how the application provides feedback to the user performing the gesture, because the lack of hardware-based physical feedback in motion-based interactions can leave the user not knowing the current state of the application and of the system. The user interface should communicate three things regarding the user's interaction: Where am I now? Where do I need to be to complete the action? How far and in what way do I need to move? (A small sketch of this kind of feedback follows below.)

source: https://medium.com/@LeapMotion/6-principles-of-leap-motion-interaction-design-98bddff77a53
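To make the feedback point more concrete, here is a purely hypothetical sketch of how an application could answer those three questions for a single hand pose: it compares the currently tracked fingertip positions with a stored target pose and reports a progress value plus a hint for the finger that is furthest off. The function name and the pose format are my own assumptions and not part of any Leap Motion API.

```python
import numpy as np

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]


def pose_feedback(current_tips, target_tips, tolerance_mm=15.0):
    """Compare tracked fingertips to a target pose (both 5x3 arrays in mm,
    relative to the palm) and answer: where am I, how far, in what way?"""
    current = np.asarray(current_tips, dtype=float)
    target = np.asarray(target_tips, dtype=float)
    errors = np.linalg.norm(current - target, axis=1)      # per-finger distance
    progress = float(np.clip(1.0 - errors.mean() / 100.0, 0.0, 1.0))
    worst = int(errors.argmax())
    hint, direction = None, None
    if errors[worst] > tolerance_mm:
        direction = (target[worst] - current[worst]).tolist()  # way to move
        hint = "move your %s finger by about %.0f mm" % (FINGERS[worst],
                                                         errors[worst])
    return {"progress": progress, "done": hint is None,
            "hint": hint, "direction": direction}


# toy usage with made-up coordinates: index to pinky are far below the target
current = [[0, 0, 0], [40, 10, 0], [50, 5, 0], [45, 0, 0], [35, -5, 0]]
target = [[5, 0, 0], [40, 60, 0], [50, 70, 0], [45, 65, 0], [35, 55, 0]]
print(pose_feedback(current, target))
```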

With Unity I can eventually build the whole setup of a possible application, as I already informed myself last semester as well as this one about the development of the final product. There are many possible ways to do so, which is why it is necessary to first develop a concept and design the experience and interactions beforehand.

source:

https://medium.com/@LeapMotion/6-principles-of-leap-motion-interaction-design-98bddff77a53

User research – interest in learning sign language

I asked around and gathered some information through qualitative research about who is willing to learn sign language or has thought about it, basically to find out whether there is an interest in getting educated in and about sign language or whether there is no need for such a tool. The questionnaire did not follow a strict form, and I adjusted my follow-up questions to the answers that were given to me.

The following questions came up most of the time:

Were you ever interested in learning sign language? Yes/No
if yes: Why did/do you want to learn it? open answer
if not learned yet: Why have you not learned it yet? open answer
How did you learn it (instructor, YouTube, apps, other options)? open answer
How did you improve the most? open answer
Do you generally learn new languages, and if so, how? open answer
Do you have someone in your surroundings with whom you would need sign language to communicate? Yes/No and open answer
How old are you and where are you from? open answer

Outcome

The following answers give a short insight into the level of interest. The persons presented are the people who are interested in sign language and one participant who has learned it.

Person A, 27 years old female:

A friend of hers wrote a paper on the topic of sign language, accompanied by a photo exhibition. She became interested in the topic through that, but she did not start to learn it because she would not know with whom she could practice, so the learned signs would fall into oblivion. She is planning to learn the most important signs like “hello”, “how are you?” and other common conversation pieces on her own, so that she can at least communicate with these simple sentences, as she does in other languages. Because she is interested in the topic, she thinks the tool would be helpful for getting to know a few sentences, but she finds it difficult if the learner has no one in their surroundings to talk to in sign language, because there is no motivation if you do not know whether you will really use it to communicate with another person.

Person B, 52 years old female:

She has thought about learning it about a hundred times since she was a child. Her aunt is deaf and she was willing to learn it, not just because of her aunt but also because she is interested in the language in general, but she has never done it. Her aunt can read lips, and person B learned to articulate clearly to make communication easier, so it was not necessary to learn it even though she had somebody in her surroundings she could have communicated with through sign language. She explains that her laziness gets in her way even when trying to learn spoken languages like English.

Person C, 25 years old female:

She is interested in the language because you can communicate with many people all over the world and because she would like to learn a language based only on movement, gesture and facial expression. She has no one in her surroundings with whom she would need to use sign language. So far she has only learned spoken languages, because she really loves the melodic sound of the different languages, which she has learned mostly in school and at university. She improved the most by spending time in countries where she had to speak the language.

Person D, 26 years old male:

He learned sign language when he was 18 because he thinks it is important to know how to communicate with all people, and sign language is in his opinion a truly universal language. He knew someone who communicated through sign language and used to work with Org, which in his opinion is always useful.
He learned a couple of gestures and hand positions with the help of YouTube and a foundation that teaches the language. He did not improve much because he did not practice.

Conclusion

In conclusion, the questionnaire helped me as a first small step to understand the position and situation of possible users and which difficulties or behaviors stand in the way. Most of the time people are interested, but their own laziness or the absence of a need gets in the way: there is no one in their community who cannot use spoken language, and even if they were interested and learned it, they would not have somebody to practice with. It is clear that people who sign and people who speak largely stay in their own communities and communicate with each other there, as it is a lot easier and the empathy is usually higher. The people I asked who were not even interested in learning sign language named similar reasons: they see no need for it and do not need it for communication. In order not to lose sight of the whole picture, it could be possible to implement features in the tool that counter these difficulties, for example a buddy feature where people can connect and exchange. Which of these solutions will be necessary or helpful will be discussed within the next months after more insight through the next steps, as this questionnaire should not be interpreted as representative due to the number, gender and age of the participants.

As a next step, a more precise and quantitative questionnaire will be needed to learn about the needs of actual users who have to learn the language, about different approaches to learning, and much more, in order to gather as much information as possible.

HoloLens 2 Interactions and Actions

I tried out the HoloLens 2 for the first time, as I had never used AR glasses before.

The experience itself was very immersive and I had great fun playing a game after adjusting the glasses to my head and vision. While shooting robot spiders served the purpose of having fun and had nothing to do with the project itself or an education app, it was interesting to get to know the navigation of the interface and to remember the different commands for navigating through the experience. It made me think about signs in general, about overlaps between navigation gestures and sign language signs, and about the whole interaction with AR objects through gestures.

own scribble

There are multiple human gestures and movements for free-form systems for navigating interfaces. I reviewed the book Designing Gestural Interfaces by Dan Saffer from 2008. While this book was published more than ten years ago, it still helped me get an overview thanks to the many pages with pictures of a person doing different movements with the hands and the whole upper body, so that, for example, the system reacts by cooling the temperature down. It was a good first impression of the many possibilities there are to interact with the body.

Today you can interact with the holograms through many different basic actions and gestures like dragging, dropping, clicking, holding and releasing to achieve effects like scaling, rotating, repositioning and more. The hand tracking in the Microsoft HoloLens 2 provides instinctual interaction, so that the user can select and position holograms using direct touch, as if touching them in real life. Besides direct touch, you can use hand rays to interact with holograms that are out of reach. I noticed that you have to get used to using your own body very quickly: for example, you often want to open the menu, so you have to tap on your arm. Another interaction I used often was lifting the arm to bring up the hand ray and then clicking on a hologram by bringing thumb and index finger together, as seen above.

mrtk hand coach: https://docs.microsoft.com/en-us/windows/mixed-reality/design/hand-coach

Looking into the interaction guides from Microsoft, you can see which components are or can be taught to users and how users are guided to learn gestures. This is done with a hand coach, a 3D-modeled hand that repeatedly shows the gesture until the user's hands are detected by the system and the user starts to imitate the movement. This means that if the user does not interact with the hologram/component for a period of time, the hand coach starts demonstrating the correct hand and finger placement again until the user understands the process. For my project it is interesting to see how the user is educated just by hands, and it was helpful to learn how I could create my own hand coach, as this information is provided in the guidelines as well, naming programs like Unity and offering a downloadable asset of the hands to create your own setup.

maya example hands asset: https://docs.microsoft.com/en-us/windows/mixed-reality/design/hand-coach
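The trigger logic behind such a hand coach is essentially an idle timer. The sketch below is only my own illustration of that behaviour (the class and method names are hypothetical, not MRTK API): the looping demonstration starts whenever no interaction has happened for a set time, and it is hidden again as soon as the user's hands are detected.

```python
import time


class HandCoach:
    """Hypothetical idle-timer logic behind a looping gesture demonstration."""

    def __init__(self, idle_seconds=10.0):
        self.idle_seconds = idle_seconds
        self.last_interaction = time.monotonic()
        self.playing = False

    def on_hands_detected(self):
        # The user's hands are tracked: hide the coach and reset the timer.
        self.last_interaction = time.monotonic()
        if self.playing:
            self.playing = False
            print("hand coach: hidden, user is interacting")

    def update(self):
        # Called once per frame: loop the demo again after too much idle time.
        idle = time.monotonic() - self.last_interaction
        if not self.playing and idle >= self.idle_seconds:
            self.playing = True
            print("hand coach: demonstrating the gesture again")


# toy usage: short idle threshold so the demonstration restarts immediately
coach = HandCoach(idle_seconds=0.1)
coach.update()          # nothing happens yet
time.sleep(0.2)
coach.update()          # idle time exceeded -> demonstration starts
coach.on_hands_detected()  # user joins in -> coach is hidden again
```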

It takes practice to learn to navigate these AR tools and systems, but it becomes clear that in this scenario people show the willingness and technical interest to imitate gestures in order to control something. They are willing to get used to navigating through gestures and to learn them over time until they remember them completely. Coming back to the issue the SAIL team discusses in my last post, that depth perception must be learned by first-time learners who are not used to using the space in front of their own body to communicate, this gives me hope that, with the right amount of information and instruction, AR will help people who are interested in new interaction methods get through the phase of learning to use their body for communication more easily.

Sources:

Saffer, Dan: Designing Gestural Interfaces, 2008, pp. 210–232

https://docs.microsoft.com/en-us/dynamics365/mixed-reality/guides/authoring-gestures-hl2

https://docs.microsoft.com/en-us/windows/mixed-reality/design/hand-coach

Sign learning education – depth perception and mirroring

Two important points that must be thought through for first-time learners of sign language are depth perception and mirroring. I came across this topic while reading the project description of the SAIL (Signing Avatars and Immersive Learning) project, which is led by a team from Gallaudet University in Washington, D.C., and engages with sign language education.

As depth perception must be learned by first-time learners who are not used to maneuvering in the space in front of themselves, this team helps users learn American Sign Language signs from both the first-person and the third-person perspective with the help of VR, using principles of embodied learning. Learners not only need practice to move their body in a 3D environment, they also have to mirror the instructor. The SAIL project team addresses this mirroring, which is unusual for first-time users, by showing both perspectives: a teacher standing in front of you, and superimposed hands that show the movement you should perform in front of your own body.

The team itself sees great potential in this new way of learning ASL (American Sign Language) from native ASL signers in the comfort of one's own home. Their next steps in the project are conducting EEG cognitive neuroscience experiments that should show the effects of embodied learning on ASL learning, and transferring their project ideas to an AR version once the VR version is fully developed.

The use of the first-person perspective helps learners especially in the beginning; I have personally noticed that I looked downwards and focused a lot on my hands and on the precise movement while trying to sign for the first time. Watching videos of people signing words while trying to sign yourself, unsure whether you have mirrored the sign correctly in your mind, would no longer be necessary, and the first-person view would reassure you about what you are doing. Including this would also help learners keep going, as they would feel more confident throughout the learning process.

source: https://stemforall2020.videohall.com/presentations/1720

augmented reality – learning a language | 5

AR technology

There are several types of AR in use today. For a better understanding of AR it is important to know these types and their advantages. There are marker-based and markerless technologies.

Marker-based AR is:

  • easy to produce
  • the most widely available technology (supports the biggest variety of devices)
  • commonly used for marketing and retail

Markers can be images or signs that trigger an augmented experience and act as anchors for the technology. A marker has to have enough unique visual points; images with corners or edges work especially well. Logos, packaging, QR codes, and products themselves (an engine, a bottle, chocolate bars) can serve as markers. The technology uses natural feature tracking (NFT) and can present AR content like text, images, videos, audio, 2D/3D animations, objects, scenes, 360° videos and game mechanics. For image recognition there are license-based solutions (software development kits) on the market like Vuforia, EasyAR, Wikitude and more.

image courtesy of Villeroy & Boch, image from https://learn.g2.com/augmented-reality

Markerless AR is:

  • more versatile
  • not restricted to any surrounding
  • allowing multi-person interaction in a virtual environment

Different styles and locations can be chosen by the user, who can also rearrange the virtual objects in their surroundings. The user's device collects information through the camera, GPS, digital compass and accelerometer to place augmentations into the scene. It is restricted to some devices: iOS 11 or later and Android 7.0 or newer. For placing objects in the real world it uses plane detection, which means that horizontal and vertical planes are detected. When ARKit or ARCore (Apple's iOS and Google's Android AR frameworks) analyse the real surfaces in the natural environment and detect a flat surface, a virtual object is placed on the detected surface so that it appears to rest on the real one. This happens with new virtual coordinates which are related to the real coordinates in the environment.
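As an illustration of that last step, the following sketch shows the coordinate math in isolation. It is framework-agnostic and hypothetical (not the ARKit or ARCore API): given the pose of a detected plane and a point picked on it, the virtual object's world position is the plane anchor plus the picked offset expressed in the plane's own axes, so the object appears to rest on the real surface.

```python
import numpy as np


def place_on_plane(plane_origin, plane_rotation, offset_on_plane):
    """Return world coordinates for a virtual object placed on a detected plane.

    plane_origin    -- 3D world position of the plane anchor
    plane_rotation  -- 3x3 rotation matrix: plane axes expressed in world axes
    offset_on_plane -- (u, v) position on the plane where the user tapped
    """
    origin = np.asarray(plane_origin, dtype=float)
    rotation = np.asarray(plane_rotation, dtype=float)
    u, v = offset_on_plane
    local = np.array([u, 0.0, v])     # y points "up" out of a horizontal plane
    return origin + rotation @ local  # world = anchor + rotated local offset


# toy usage: horizontal plane 1.2 m in front of the camera, tap 30 cm to the right
world_pos = place_on_plane(plane_origin=[0.0, -0.4, -1.2],
                           plane_rotation=np.eye(3),
                           offset_on_plane=(0.3, 0.0))
print(world_pos)  # -> [ 0.3 -0.4 -1.2 ]
```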

The following types technically fall under markerless AR as well:

  • Location-based AR ( Pokemon Go: characters in specific locations )
  • Superimposition AR ( recreating or enhancing an object in the real world )
  • Projection-based AR ( projectors instead of mobile devices: hologram look )
  • and Outlining AR ( outlining boundaries and lines, e.g. lane markings on the road )

A combination of both technologies, marker-based and markerless, in one application is possible as well.

Looking back at the last blog post ( number | 4 ) about current AR applications: where flashcards are used to learn sign language, the cards themselves serve as markers and marker-based technology triggers the augmented content.

Development of a marker-based app

As an example, I will describe how a mobile marker-based AR letter-recognition application was developed, to get to know the rough technicalities. Suman Deb, Suraksha and Paritosh Bhattacharya analysed how an AR application should be created and developed to improve deaf-mute students' sign language skills. The media cards used show a specific Hindi letter and trigger the application to display 3D animated hand motions for each letter.

A quick description:

  1. Upload pictures of markers: to make an app respond to certain images ( markers ), every image has to be stored in a library. The developers of the app uploaded them to the Vuforia library. As described above, the Vuforia Engine is a software development kit (SDK) that allows developers to add advanced computer vision functionality to an application, enabling it to recognize images and objects and interact with spaces in the real world
  2. Download the Unity package: the Unity package downloaded from Vuforia was imported, along with the Vuforia Android and Vuforia image-targets Android packages, to generate the augmented reality application
  3. 3D hand model arrangement with markers: to ensure that the right model is shown, the models are matched and placed into an image target hierarchy, so that after scanning the media cards ( flashcards ) the corresponding animated 3D hand model is shown in the augmented reality interface
  4. Include features: the menu offers features like zooming in and out and rotating the 3D hand. The scanned letters ( markers ) can be combined into words

At runtime, the camera feed is analysed by the image capturing module when the camera is pointed at a marker. Binary images are then generated and processed by the image processing step. The marker tracking module tracks the location of the marker, and the corresponding 3D hand animation is displayed over the marker itself. The rendering module has two inputs at a time: the pose calculated by the tracking module and the virtual object to be augmented. The rendering module combines these inputs and renders the image on the screen of the mobile device.
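This pipeline can be reproduced roughly with standard computer vision tools. The sketch below is not the authors' implementation; it is a stand-in using OpenCV's ArUco markers (the aruco API differs between OpenCV versions, and the camera intrinsics here are made-up placeholders that would normally come from calibration): detect the marker in the camera feed, estimate its pose, and draw something at that pose where the rendering module would place the animated 3D hand.

```python
import cv2
import numpy as np

# Placeholder intrinsics -- a real app would use calibrated values.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIZE = 0.05  # marker edge length in metres (assumed)

# 3D corners of a flat square marker centred at its own origin.
half = MARKER_SIZE / 2.0
object_points = np.array([[-half, half, 0], [half, half, 0],
                          [half, -half, 0], [-half, -half, 0]],
                         dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                             # image capturing module
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)    # marker tracking module
    if ids is not None:
        for marker_corners in corners:
            image_points = marker_corners.reshape(4, 2).astype(np.float32)
            ok_pose, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                               camera_matrix, dist_coeffs)
            if ok_pose:
                # Rendering module stand-in: draw the marker's axes where the
                # animated 3D hand model would be placed.
                cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                                  rvec, tvec, MARKER_SIZE)
    cv2.imshow("marker-based AR sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```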

Sources

https://www.springerprofessional.de/plane-detection/16253564

https://www.howtogeek.com/348445/what-are-the-arcore-and-arkit-augmented-reality-frameworks/

https://learn.g2.com/augmented-reality

https://www.youtube.com/watch?v=qAaUSmVfpaU

https://www.youtube.com/watch?v=16jT1_MtTXs

Suman Deb et al.: “Augmented Sign Language Modeling (ASLM) with interaction design on smartphone – an assistive learning and communication tool for inclusive classroom”, Procedia Computer Science 125 (2018), pp. 492–500

augmented reality – application for learning a language | 4

There are many devices and physical objects that try to include options for deaf or hard-of-hearing people. Current apps for learning sign language mostly work without AR, i.e. without objects or signing avatars, which are computer-animated virtual humans built from motion-capture recordings. As described in the previous blog posts, there are options that present the content with videos or pictures and predominantly show the hands. In videos in various apps and on YouTube, real individuals sign and their whole upper body is shown, which is more helpful for getting to know the language.

Overall there are four phases which are important when learning to sign:

  1. First, learn one chosen alphabet and fingerspell it
  2. Second, learn common signs
  3. Afterwards, or alongside phase 2, get to know and learn the grammar and structure of sentences
  4. Lastly, sign with other people

The final concept should educate on and help with phases 1, 2 and 3 to prepare for phase 4.

AR objects and avatars

The AR possibilities and concepts currently being developed to help sign language learners differ in the AR objects shown within the apps. As you can see in the following examples, some use flashcards, physical cards you have to buy beforehand. The flashcards carry different illustrations like hand gestures or fingerspelling for learning the alphabet. By hovering the smartphone over the cards, avatars start to sign letters and words, or augmented 3D objects appear to represent the sign (for example an augmented bear or heart appears when the corresponding sign is shown).

It was interesting to see that most avatars appear as whole figures. On the one hand, the lower body does not contain any relevant information when you are looking at the facial expression and arms, but on the other hand it personifies a real conversation with a whole communication counterpart. In my opinion the facial expressions of the avatars in these examples are not recognizable enough, even though they are essential. But it is difficult to animate mouth and facial movement precisely and realistically enough, and only with huge effort. After all, it is important to think about which information matters for communication and the learning process in the end, how objects or avatars should be displayed and animated, and whether they would be beneficial to include in the final concept. Furthermore, it should be analysed how AR could be used to support the learning process.

Examples

Sources

https://virtualrealitypop.com/learn-american-sign-language-using-mixed-reality-hololens-yes-we-can-e6e74a146564

https://child1st.com/blogs/resources/113559047-16-characteristics-of-kinesthetic-and-tactile-learners

This M’sian App Makes A Sign Language Class Out Of Cards, Complete With A Lil’ Teacher

augmented reality – application for learning a language | 3

Sign language is mainly used by the Deaf and hard-of-hearing community. But it is also beneficial for the communication of people with disabilities including autism, apraxia of speech, cerebral palsy and Down syndrome.

History

Today it is assumed that communication through signs developed even before vocal communication. Native Americans used hand gestures to communicate with other tribes and with Europeans. Benedictine monks were known to sign during their daily periods of silence.

The Spanish Benedictine monk Juan Pablo Bonet is considered the first creator of a formal sign language for the hearing impaired. In the early 17th century he taught deaf people to communicate with gestures. He then went on to develop an education method for the deaf in which they shaped letters with the hand in association with the phonetic sounds of their spoken voice. In 1755 the French Catholic priest Charles-Michel de l'Épée founded the first public school for deaf children. He added the signs of his pupils to a manual alphabet, creating a complex system with all relevant grammatical elements. His system became the first sign language and spread across Europe and to the United States of America. As new schools for the deaf were established throughout these countries, new signs from different households were brought to the schools and new alphabets and sign languages developed. Today, modern signing systems differ in pronunciation and word order and even express regional accents.

picture from https://kidcourses.com/asl-alphabet-handout/
picture from http://fakoo.de/

Fingerspelling

The use of the hands to represent the letters of the alphabet is called “fingerspelling”. It helps signers to name people, specific places and words that have no established sign within the language. For example, signers would spell out the word “oak” by fingerspelling, while there is a specific sign for “tree”. There are different sign language alphabets around the world, and due to small modifications each manual alphabet is unique. They are carried out either with one hand (as in American Sign Language (ASL)) or with two hands (as in British Sign Language (BSL) and Australian Sign Language (Auslan)).

Signs

There are between 138 and 300 different sign languages around the world. Due to the visual nature and grammar of sign languages, the word order differs from spoken languages. This is why it can be difficult for some signers to speak the words while they are signing. Speaking also has to be trained regularly, because signers have to get used to speaking while focusing on the breathing, volume and pauses of a voice they cannot hear.

Sources

https://www.nationalgeographic.com/history/magazine/2019/05-06/creation-of-sign-language/

augmented reality – application for learning a language | 2

Augmented reality and virtual reality in the educational sector are on the rise more than ever. While AR and VR face challenges like cost, effectiveness and health, both educators and students benefit from these new technologies, which bring interactivity and are currently developing and expanding their availability.

AR and VR in education provide the learner or student with a variety of functionalities at once and create a greater experience through immersion and overall visualization. A lesson no longer consists only of listening and looking at something, because with these technologies students experience a topic and participate in a given subject. Travelling through cities or showing the anatomy of the human body or of technical systems excites students, and the content becomes more memorable, tangible, understandable and learnable.

Students can see 3D objects and disassemble them instead of looking at a single 2D picture, which is limited to one perspective and does not offer the option to interact with the object itself. In general, AR encourages deeper engagement with the topic. Information is absorbed better if it is perceived visually, and students learn more efficiently, especially if the 3D models are designed appealingly and the students are engaged in the process.

image from https://www.cleveroad.com/blog/augmented-reality-in-education

Looking at the possibility of showing the signing movements of this visual language, learners might also select specific parts of the body to concentrate on the movement of only one hand or on a specific facial expression (like disassembling an engine into small pieces). To grasp every single aspect of a signing movement, which happens within a short time, the selection of specific parts, the possibility to slow the movement down and the option to view it from the side would benefit the learning process. Complex signs, which work through the movement of the whole upper body with fingers, hands, arms, shoulders, torso, head and face, could be understood and learned more easily and intuitively. This is more effective and understandable than reviewing 2D videos or pictures taken from one perspective, which are supposed to show the whole complexity of a sign, phrase or sentence.

Sources:

https://www.cleveroad.com/blog/augmented-reality-in-education

https://theappsolutions.com/blog/development/ar-vr-in-education/

augmented reality – application for learning a language | 1

Augmented reality (AR) has become an attractive trend in the field of language learning. Publications from the years 2014 to 2019 show the popularity of mobile-based AR, mainly for learning vocabulary, reading and speaking. There are multiple choices when searching for apps to learn English, Spanish, Portuguese and many more, but what if the language you want to learn in order to communicate with your counterpart is a visual language which depends on signing (forming words by gesturing with the hands)? The main reason for the chosen topic is to ease communication to and with deaf people by making this visual language more accessible in a learning application that implements augmented reality.

photo by google on https://www.bbc.com/news/technology-49410945

State of the art

Work on technologies, algorithms and language AI that track and translate hand gestures into speech has developed a lot in recent years. In the hope that developers will use them to build their own translation apps, Google published algorithms in 2019 that can perceive the shape and motion of hands. Problems the technology currently faces are hidden fingers and regionalisms in signs, meaning that signs differ between specific local areas. Furthermore, signing depends not only on hand signs; more importantly, the meaning can vary with facial expressions or the speed of signing.

To close the gap and enable communication with family members, innovators around the world have created their own solutions and technologies, like Roy Allela, who made a pair of gloves for his hearing-impaired niece. His application reads text aloud based on the movements of the haptic gloves.

Current apps use pictures of hand gestures or videos, and many do not include the upper body, which, as already mentioned, is important for the signing context.

image: https://gebaerdenlernen.de/index.php?article_id=112

Goal

The app should include different possibilities for both sides, people who hear and people who do not: for example, features for learning the gestures/signs with the help of augmented reality to get in touch with the actual setting and speed of gesticulation. For communication, the app should translate the video information of recorded movements into text or audio.

Sources

https://onlinelibrary.wiley.com/doi/10.1111/jcal.12486

https://gebaerdenlernen.de/index.php?article_id=112

https://www.geo.de/geolino/mensch/1854-rtkl-gebaerden-wie-gebaerdensprache-funktioniert

https://www.gehoerlosen-bund.de/faq/deutsche%20geb%C3%A4rdensprache%20(dgs)

https://www.bbc.com/news/technology-49410945