Khwaja Monib Sediqi
PhD
Neural sign language translation for expressive avatars

The primary objective of this PhD thesis is to develop an advanced sign language avatar that uses Artificial Intelligence (AI) to automatically translate German text or speech into sign language. The project addresses the critical problem of delivering written or spoken content in emergency situations, where human sign language interpreters may not be immediately available. In such high-stakes scenarios, conveying a sense of safety and a nuanced understanding of individual emotional states is paramount. This research extends prior work by covering not only manual but also non-manual continuous signals, enriched with emotional expressions.

To this end, the project will use an intermediate symbolic representation for sign language, the Multi-Modal Sign Stream (MMS), extended with modalities and temporal information, as a foundation for synthesizing the avatar's gestures. MMS extends common intermediate gloss representations with multiple parallel gloss channels, offering a more accurate description of sign language transitions and their semantic nuances.

Furthermore, the project capitalizes on recent advances in machine learning, exploring end-to-end methods that learn a direct mapping from spoken language to multi-channel keypoint sequences; this mapping drives the animations of the signing avatar. By comparing these two distinct approaches, the project promises valuable insights into the synergistic advantages of manually and automatically generated representations.
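The abstract does not specify the internal structure of MMS, but a multi-channel representation with temporal information could be sketched roughly as follows. All names (`Segment`, `SignStream`, the channel labels, and the example glosses) are hypothetical illustrations, not the project's actual format: the idea is simply that each articulator channel (e.g. hands, face) carries its own timed gloss segments, so manual and non-manual cues can overlap in parallel.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    gloss: str        # symbolic label, e.g. a manual sign or a facial expression
    start: float      # onset in seconds
    end: float        # offset in seconds

@dataclass
class SignStream:
    # channel name -> list of Segment, e.g. "right_hand", "face"
    channels: dict

    def active_at(self, t: float) -> dict:
        """Return the glosses active on every channel at time t."""
        return {
            name: [s.gloss for s in segs if s.start <= t < s.end]
            for name, segs in self.channels.items()
        }

# Example: a manual sign overlaps with a non-manual emotional expression.
stream = SignStream(channels={
    "right_hand": [Segment("DANGER", 0.0, 0.8), Segment("LEAVE", 0.8, 1.5)],
    "face":       [Segment("fear", 0.0, 1.5)],
})
print(stream.active_at(0.5))  # {'right_hand': ['DANGER'], 'face': ['fear']}
```

Querying the stream at one instant returns the parallel glosses on every channel, which is the kind of information a gesture synthesizer would need to blend manual signs with simultaneous emotional expressions.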

Track: Academic Track
PhD Duration: March 1st, 2024 - February 27th, 2027