A few months ago we started developing a modular tracking stack called Shinra Meisin, focused on:
- Low-latency edge processing
- Privacy-first architecture
- Modularity and hardware flexibility
- XR-focused tracking pipelines
The system currently includes:
- Eye tracking
- Mouth tracking
- SLAM
- Inside-out full body tracking
- Experimental BCI integration
Today we’re showing our current eye tracking progress, and we plan to show off SLAM within the next two(ish) days.
Real-time landmark accuracy is 80-85% and gaze error is 1.0-2.0° (both should improve significantly once I get John, my dev, a better GPU and we train on a dataset from roughly 40 people).
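For anyone wondering what the 1.0-2.0° figure means: gaze error is normally reported as the angle between the predicted and ground-truth gaze direction vectors. Here's a minimal sketch of that standard metric (not our actual pipeline, just how the number is typically computed):

```python
import numpy as np

def gaze_angular_error(pred, gt):
    """Angular error in degrees between predicted and ground-truth gaze vectors."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Cosine similarity of the normalized vectors, clipped for numerical safety.
    cos_sim = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# Example: a prediction about 1.5 degrees off the true gaze direction.
gt = np.array([0.0, 0.0, 1.0])
pred = np.array([np.sin(np.radians(1.5)), 0.0, np.cos(np.radians(1.5))])
print(f"{gaze_angular_error(pred, gt):.2f} deg")  # ~1.50
```

So "1.0-2.0°" means the predicted gaze ray lands within one to two degrees of where the eye is actually pointing, averaged over test samples.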