Neural sign language translation for expressive avatars
Khwaja Monib Sediqi (Ph.D. Student)
The primary objective of this PhD thesis is to develop an advanced sign language avatar that uses Artificial Intelligence (AI) to automatically translate German text or spoken language into sign language. The project addresses the critical issue of delivering written or spoken content in emergency situations, where immediate access to human sign language interpreters may be unavailable. In these high-stakes scenarios, conveying a sense of safety and a nuanced understanding of individual emotional states becomes paramount. This research expands upon prior work by encompassing not only manual but also non-manual continuous signals enriched with emotional expressions. To this end, the project will use an intermediate symbolic representation for sign language, the Multi-Modal Sign Stream (MMS), extended with modalities and temporal information, which serves as a foundation for synthesizing the avatar's gestures. MMS extends common intermediate gloss representations with multiple parallel gloss channels, offering a more accurate description of sign language transitions and their semantic nuances. Furthermore, the project capitalizes on recent advances in machine learning, exploring end-to-end methodologies that learn a direct mapping from spoken language to multi-channel keypoint sequences; this mapping is instrumental in controlling the animations of the signing avatar. By comparing these two distinct approaches, the project promises valuable insights into the complementary strengths of manually and automatically generated representations.
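The internal details of MMS are not specified here, but the core idea of parallel gloss channels with temporal information can be illustrated with a minimal, hypothetical sketch. The `GlossToken` structure, channel names, and example glosses below are illustrative assumptions, not the actual MMS format: each symbol (a manual sign or a non-manual cue such as a facial expression) occupies one channel over a time interval, and channels run concurrently.

```python
from dataclasses import dataclass

@dataclass
class GlossToken:
    """One symbol on one channel of a multi-channel sign stream (illustrative)."""
    gloss: str     # e.g. "HELP" or "eyebrows-raised"
    channel: str   # e.g. "right_hand", "face"
    start: float   # onset in seconds
    end: float     # offset in seconds

def active_at(stream: list[GlossToken], t: float) -> dict[str, str]:
    """Return the gloss active on each channel at time t."""
    return {tok.channel: tok.gloss for tok in stream if tok.start <= t < tok.end}

# A toy stream: a manual sign overlapping a non-manual emotional cue,
# followed by a second manual sign.
stream = [
    GlossToken("HELP", "right_hand", 0.0, 0.8),
    GlossToken("eyebrows-raised", "face", 0.2, 0.8),
    GlossToken("COME", "right_hand", 0.8, 1.4),
]

print(active_at(stream, 0.5))  # both channels active simultaneously
```

Querying the stream at t = 0.5 s yields activity on both channels, which is precisely what a single linear gloss sequence cannot express: the overlap of manual and non-manual information in time.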
Primary Host: Elisabeth André (University of Augsburg)
Exchange Host: Ronald Poppe (Utrecht University)
PhD Duration: 01 March 2024 – 27 February 2027
Exchange Duration: Ongoing