Sublynk

Teaching sign language through real-time AR/VR face-tracked subtitles and translation to 3D sign. I was the team's programmer on this finalist project at MIT Reality Hack. Check out our Devpost page here: https://devpost.com/software/sublynk

Showcase video (edited in DaVinci Resolve)

Demonstration video

The problem

To work around the Quest's passthrough privacy limitations, I mounted a webcam on top of the headset and ran computer-vision face tracking with Google MediaPipe's BlazeFace solution.

Photo of me developing and testing the speech-to-text dictation using the Windows Speech API

Animation capture workflow

Preview of our original recorded database of SEE (Signed Exact English) motion-captured animations

Team photo

Team photo 2

We made it to the finals! MIT Reality Hack 2024

Showing the project to a hackathon mentor
