Inspiration

We wanted to help the roughly 70 million deaf people around the world communicate independently in their daily lives.

How it works

The application uses the computer's camera to recognize sign language gestures in real time and displays the interpretation immediately.
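A minimal sketch of this real-time loop is below. It assumes OpenCV for webcam capture (not named in this write-up), a trained Keras model saved as "sign_model.h5", a placeholder label list, and a 64x64 input size; all of these are illustrative assumptions rather than the project's actual files or settings.

```python
# Minimal sketch of real-time sign recognition from a webcam.
# "sign_model.h5", CLASS_NAMES, and the 64x64 input size are placeholders.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

CLASS_NAMES = ["A", "B", "C"]          # placeholder label set
model = load_model("sign_model.h5")    # hypothetical model file

cap = cv2.VideoCapture(0)              # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame to match the CNN's expected input size.
    img = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(np.expand_dims(img, axis=0), verbose=0)[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    # Overlay the predicted sign on the live video feed.
    cv2.putText(frame, label, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Sign Language Interpreter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```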

Our development process

We developed a custom data set of American Sign Language (ASL) signs. We then trained a convolutional neural network (CNN) to recognize the signs in real time, implementing the deep learning pipeline in Python with the Keras library.
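The sketch below shows the kind of CNN classifier this describes, built with Keras. The specific layer sizes, the 64x64 input shape, and the 26-class output are illustrative assumptions, not the exact architecture we used.

```python
# Illustrative CNN for sign classification with Keras.
# Layer sizes, input shape, and class count are assumptions.
from tensorflow.keras import layers, models

def build_sign_cnn(num_classes=26, input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage with hypothetical training arrays:
# model = build_sign_cnn()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```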

Challenges

Building a data set detailed enough to train the convolutional neural network accurately was the biggest challenge. Learning and applying the Keras library, and implementing the solution on Google Cloud Platform, were also significant hurdles.

Accomplishments

We were able to classify the signs with an accuracy above 96%.

What we learned

We learned how to use a CNN for classifying live video data, and how to complete a project in a very short time.

What's next for Sign Language Interpreter

Deploying the model as a cloud API, adding more vocabulary, making it smarter with AI-based feedback, supporting different sign languages from around the world, and enabling two-way communication.
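As one possible shape for the planned cloud API, here is a hypothetical sketch using Flask (not named in this write-up). The endpoint path, model file name, and label list are all assumptions made for illustration.

```python
# Hypothetical prediction API for the sign classifier, sketched with Flask.
# The "/predict" endpoint, "sign_model.h5", and CLASS_NAMES are placeholders.
import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("sign_model.h5")    # placeholder model file
CLASS_NAMES = ["A", "B", "C"]          # placeholder label set

@app.route("/predict", methods=["POST"])
def predict():
    # Expect an image file in the multipart form field "image".
    img = Image.open(request.files["image"].stream).convert("RGB").resize((64, 64))
    arr = np.asarray(img, dtype="float32") / 255.0
    probs = model.predict(np.expand_dims(arr, axis=0), verbose=0)[0]
    return jsonify({"sign": CLASS_NAMES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```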

Built With

Python, Keras, Google Cloud Platform
