Teaching sign language through real-time AR/VR face-tracked subtitles and translation to 3D sign. I was the team's programmer on this finalist MIT Reality Hack project. Check out our Devpost page here: https://devpost.com/software/sublynk
Showcase video (which I edited in DaVinci Resolve)
Demonstration video
the problem
To work around the Quest's privacy restrictions on passthrough camera access, I mounted a webcam on top of the headset and ran computer-vision face tracking on its feed using Google MediaPipe's BlazeFace solution.
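MediaPipe's face detection returns bounding boxes in normalized [0, 1] image coordinates, so anchoring a subtitle near a detected face means scaling those values to the webcam frame's pixel size. A minimal sketch of that conversion (the helper name `to_pixel_bbox` is mine, not from the project):

```python
def to_pixel_bbox(xmin: float, ymin: float, width: float, height: float,
                  img_w: int, img_h: int) -> tuple[int, int, int, int]:
    """Convert a normalized relative bounding box (as returned by
    MediaPipe face detection) to pixel coordinates on the frame."""
    return (int(xmin * img_w), int(ymin * img_h),
            int(width * img_w), int(height * img_h))

# Example: a detection centered in a 640x480 webcam frame.
print(to_pixel_bbox(0.25, 0.5, 0.5, 0.25, 640, 480))  # (160, 240, 320, 120)
```

The pixel box can then be used to place the subtitle quad just below the speaker's face in the AR view.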
Photo of me developing and testing the speech-to-text dictation using the Windows Speech API.
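Once dictation returns a running transcript, it has to be broken into short lines that fit a floating subtitle panel. A minimal sketch of that chunking step using Python's standard-library `textwrap` (the function name and line width are my assumptions, not the project's actual values):

```python
import textwrap

def to_subtitle_lines(transcript: str, max_chars: int = 32) -> list[str]:
    """Wrap a dictated transcript into short lines suitable for
    rendering as a floating subtitle in the headset."""
    return textwrap.wrap(transcript, width=max_chars)

print(to_subtitle_lines("this is a live dictated sentence from the speech api"))
```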
animation capture workflow
preview of our original recorded database: a motion-captured animation set of SEE (Signed Exact English) signs
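Because SEE keeps English word order, translating a transcript into sign playback can reduce to looking each word up in the recorded clip database, with letter-by-letter fingerspelling as a fallback for words that were not captured. A hypothetical sketch under that assumption (the clip names and dictionary are illustrative, not the project's actual assets):

```python
# Illustrative clip database: word -> recorded mocap animation clip.
SIGN_CLIPS = {"hello": "hello.anim", "my": "my.anim", "name": "name.anim"}

def words_to_clips(words: list[str]) -> list[str]:
    """Resolve each transcript word to a mocap clip, falling back to
    per-letter fingerspelling clips for words not in the database."""
    clips = []
    for w in words:
        w = w.lower()
        if w in SIGN_CLIPS:
            clips.append(SIGN_CLIPS[w])
        else:
            clips.extend(f"fingerspell_{ch}.anim" for ch in w)
    return clips

print(words_to_clips(["hello", "my", "name"]))
# ['hello.anim', 'my.anim', 'name.anim']
```

The resulting clip list would then be queued for playback on the 3D signing avatar.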