augmented reality – application for learning a language | 4

Many devices and physical objects try to include options for deaf or hard-of-hearing people. Currently, most apps for learning sign language work without AR, meaning without augmented objects or signing avatars (computer-animated virtual humans built from motion-capture recordings). As described in the previous blog posts, there are options that present the content with videos or pictures, predominantly showing only the hands. In videos within various apps and on YouTube, real individuals sign with their whole upper body visible, which is more helpful for getting to know the language.

Overall, there are four phases that are important when learning a sign language:

  1. First, learn a chosen alphabet and fingerspell it
  2. Second, learn common signs
  3. Afterwards, or alongside phase 2, get to know and learn the grammar and sentence structure
  4. Lastly, sign with other people

The final concept should educate on and help with phases 1, 2, and 3 to prepare for phase 4.

AR objects and avatars

AR possibilities and concepts currently being developed to help sign language learners differ in the AR objects they showcase within the apps. As you can see in the examples below, some use flashcards, physical cards you have to buy beforehand. The flashcards carry different illustrations such as hand gestures or fingerspelling to teach the alphabet. By hovering the smartphone over a card, avatars start to sign letters and words, or augmented 3D objects appear to represent the sign (for example, an augmented bear or heart appears when the corresponding sign is shown). A minimal sketch of this card-triggered interaction follows below.
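To make the flashcard interaction more concrete, here is a minimal sketch of how such image-triggered AR could work on iOS with ARKit and SceneKit. This is an assumption about the general technique (image tracking on printed cards), not the implementation of any of the apps mentioned; the asset group name "Flashcards" and the placeholder 3D text label are hypothetical stand-ins for the real card images and signing avatars.

```swift
import UIKit
import ARKit
import SceneKit

// Minimal sketch: image-triggered AR for sign language flashcards.
// Assumes an asset-catalog AR resource group named "Flashcards" that
// contains one reference image per printed card (hypothetical name).
class FlashcardARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Track the printed flashcards as reference images.
        let configuration = ARImageTrackingConfiguration()
        if let cards = ARReferenceImage.referenceImages(inGroupNamed: "Flashcards",
                                                        bundle: .main) {
            configuration.trackingImages = cards
            configuration.maximumNumberOfTrackedImages = 1
        }
        sceneView.session.run(configuration)
    }

    // Called when a tracked flashcard is recognized: place content above the card.
    // A simple 3D text label stands in here for the signing avatar or 3D object.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let label = SCNText(string: imageAnchor.referenceImage.name ?? "sign",
                            extrusionDepth: 0.5)
        label.font = UIFont.systemFont(ofSize: 4)
        let labelNode = SCNNode(geometry: label)
        labelNode.scale = SCNVector3(0.005, 0.005, 0.005)   // scale the text down to card size
        labelNode.position = SCNVector3(0, 0.02, 0)         // float slightly above the card
        node.addChildNode(labelNode)
    }
}
```

In a full concept, the text label would be replaced by an animated signing avatar or a 3D object matching the word on the detected card, so the learner sees the sign and its meaning anchored to the physical flashcard.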

It was interesting to see that most avatars appear as whole figures. On the one hand, the lower body carries no relevant information compared to the facial expressions and arms; on the other hand, it personifies a real conversation with a complete communication counterpart. In my opinion, the facial expressions of the avatars in these examples are not recognizable enough, even though they are essential. Animating mouth and facial movements precisely and realistically is difficult and takes considerable effort. In the end, it is important to consider which information matters for the communication and learning process, how objects or avatars should be displayed and animated, and which of them would be beneficial to include in the final concept. It should also be analysed how AR could support the learning process.

Examples

Sources

https://virtualrealitypop.com/learn-american-sign-language-using-mixed-reality-hololens-yes-we-can-e6e74a146564

https://child1st.com/blogs/resources/113559047-16-characteristics-of-kinesthetic-and-tactile-learners

This M’sian App Makes A Sign Language Class Out Of Cards, Complete With A Lil’ Teacher