Sign Language Virtual Assistant

What's the project about?

The project aims to create an AI-powered Sign Language Virtual Assistant that addresses the challenges deaf or mute individuals face when using voice-based virtual assistants. The system will detect sign language gestures in real time using Google's MediaPipe and produce live captions so users can validate the recognized signs. It will also support multiple Indian languages, such as Hindi, Tamil, and Marathi, for linguistic flexibility. By integrating the Rasa conversational AI framework, the assistant will generate responses based on Google search results.
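To make the detection step concrete: MediaPipe Hands reports 21 (x, y, z) landmarks per detected hand, and a recognizer maps those landmarks to a gesture label. The sketch below shows one minimal, illustrative way to do that with a fingertip-position rule on already-extracted landmarks; the gesture vocabulary and the rule itself are hypothetical placeholders, not the project's actual model.

```python
# Illustrative sketch: classifying a static gesture from MediaPipe-style
# hand landmarks. Landmark indices follow the MediaPipe Hands layout
# (0 = wrist, 8/12/16/20 = index/middle/ring/pinky fingertips,
# 6/10/14/18 = the corresponding PIP joints). The gesture names and the
# threshold rule are placeholders for a trained model.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip indices
FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joint indices

def count_extended_fingers(landmarks):
    """Count fingers whose tip lies above (smaller y than) its PIP joint.

    `landmarks` is a list of 21 (x, y) tuples in normalized image
    coordinates, where y grows downward, as in MediaPipe.
    """
    extended = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        if landmarks[tip][1] < landmarks[pip][1]:
            extended += 1
    return extended

def classify_gesture(landmarks):
    """Map a finger count to a toy gesture label (placeholder vocabulary)."""
    labels = {0: "fist", 1: "one", 2: "two", 4: "open_palm"}
    return labels.get(count_extended_fingers(landmarks), "unknown")

# A synthetic "open palm": all four fingertips above their PIP joints.
palm = [(0.5, 0.9)] * 21
for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
    palm[tip] = (0.5, 0.2)
    palm[pip] = (0.5, 0.5)

print(classify_gesture(palm))  # open_palm
```

In the real system, a model trained on recorded sign data would replace this hand-written rule, but the interface stays the same: landmarks in, gesture label out.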

What's the goal of the project?

Virtual assistants have become part and parcel of our daily lives, but most of them are voice-operated. The most widely used are Amazon Alexa, Google Home, Apple Siri, and Microsoft Cortana. These assistants listen to users' queries and respond accordingly, making everyday tasks easier, and they have become an important part of home automation. Because they are purely voice-operated, deaf or mute individuals find it hard to use such technology. The goal of this project is to develop an interface that helps them use these virtual assistants easily. Such an interface will give them greater independence with these technologies and may boost their confidence in the digital age.

What's the outcome expected?

The expected outcome of the project is a significant advance in accessibility for individuals with hearing and speech impairments, enabling them to interact effectively with virtual assistant technology. We also hope to identify the questions most frequently asked through sign language and use them to simplify answer generation by the Rasa chatbot. The system will generate live captions corresponding to the detected sign language gestures, so users can verify that their signs were interpreted accurately.

Plan of Action (summary)

The plan of action begins with comprehensive research and data collection on sign language recognition and virtual assistant technologies. A real-time sign language detection model will then be developed and trained using MediaPipe and TensorFlow, together with a system that generates live captions for user validation. Linguistic flexibility will be ensured by supporting multiple Indian languages such as Hindi, Tamil, and Marathi. The system will be integrated with the virtual assistant platform, using the Rasa framework to construct replies based on real-time Google search results. Thorough testing, user interface development, user training and support, and evaluation and impact assessment are further key components of the plan. Finally, the project aims to deploy and release the AI-Powered Sign Language Virtual Assistant, improving accessibility and inclusivity for individuals with hearing and speech impairments.
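The caption-validation step in the plan can be sketched end to end: recognized gesture labels are buffered into a live caption shown to the user, and only a confirmed caption is forwarded for intent handling, which is the role Rasa plays in the real system. The class name, intent keywords, and canned replies below are hypothetical stand-ins, not Rasa's actual API.

```python
# Hypothetical sketch of the caption-then-respond flow: gesture labels
# accumulate into a live caption for user validation, and the confirmed
# caption is routed to an intent handler. In the real system, Rasa would
# perform the intent matching; here a keyword table stands in for it.

class CaptionSession:
    """Accumulates recognized signs into a live caption for validation."""

    def __init__(self):
        self.words = []

    def add_sign(self, word):
        """Append a recognized sign and return the updated live caption."""
        self.words.append(word)
        return self.caption()

    def caption(self):
        return " ".join(self.words)

def route_intent(caption):
    """Toy stand-in for Rasa intent matching: keyword -> canned reply."""
    intents = {
        "weather": "Fetching today's weather for you.",
        "time": "Here is the current time.",
    }
    for keyword, reply in intents.items():
        if keyword in caption.lower():
            return reply
    return "Sorry, I did not understand that."

session = CaptionSession()
session.add_sign("what")
session.add_sign("is")
live = session.add_sign("weather")
print(live)                # what is weather
print(route_intent(live))  # Fetching today's weather for you.
```

Separating caption accumulation from intent routing mirrors the validation requirement: the user sees and confirms the caption before any response is generated, so a misrecognized sign never silently triggers the wrong action.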
