We are excited to introduce our first developer tool for augmented reality, virtual reality, and mixed reality. It’s called Mixspace Lexicon, and it’s all about using your voice to affect the world around you.

Speech interfaces have come a long way over the last few years. We can ask Alexa/Cortana/Google/Siri to manage our calendars and grocery lists, to play our music, and to send messages on our behalf. These interfaces thrive on user discovery: users try phrases they think will work, and those phrases often do. They are truly intuitive, and they get better every year.

With Lexicon, we’d like to bring this intuition to the world of mixed reality. By merging voice input with other forms of input, such as gaze and hand tracking, we can let users interact in a way that is already familiar: look at something, point at it, and talk about it.
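To make that concrete, here is a minimal sketch of what fusing speech with gaze can look like. This is an illustration only, not Lexicon’s actual API: every type, name, and threshold below is hypothetical. The idea is that speech supplies the verb and its parameters, while gaze resolves a deictic reference like “that” to a specific object in the scene.

```typescript
// Hypothetical illustration of speech + gaze fusion (not Lexicon's API).

type Vec3 = [number, number, number];

interface SceneObject {
  name: string;
  position: Vec3;
  color: string;
}

interface GazeRay {
  origin: Vec3;
  direction: Vec3; // assumed to be normalized
}

// What a speech recognizer might hand us after parsing "paint that red".
interface SpeechIntent {
  action: string;      // e.g. "paint"
  color?: string;      // e.g. "red"
  usesDeixis: boolean; // true when the user said "that" or "this"
}

// Resolve "that" to the object nearest the center of the user's gaze:
// the one whose direction from the eye has the largest dot product with
// the gaze direction, above a minimum alignment threshold.
function resolveGazeTarget(gaze: GazeRay, scene: SceneObject[]): SceneObject | undefined {
  let best: SceneObject | undefined;
  let bestDot = 0.95; // hypothetical threshold: object must sit near the gaze center
  for (const obj of scene) {
    const to: Vec3 = [
      obj.position[0] - gaze.origin[0],
      obj.position[1] - gaze.origin[1],
      obj.position[2] - gaze.origin[2],
    ];
    const len = Math.hypot(to[0], to[1], to[2]);
    if (len === 0) continue;
    const dot =
      (to[0] / len) * gaze.direction[0] +
      (to[1] / len) * gaze.direction[1] +
      (to[2] / len) * gaze.direction[2];
    if (dot > bestDot) {
      bestDot = dot;
      best = obj;
    }
  }
  return best;
}

// Fuse the two channels: speech supplies the action and its parameters,
// gaze supplies the referent of "that".
function applyIntent(intent: SpeechIntent, gaze: GazeRay, scene: SceneObject[]): void {
  if (intent.action === "paint" && intent.usesDeixis && intent.color) {
    const target = resolveGazeTarget(gaze, scene);
    if (target) {
      target.color = intent.color;
    }
  }
}

// Usage: the user looks at the sphere and says "paint that red".
const scene: SceneObject[] = [
  { name: "cube",   position: [1, 0, 2], color: "white" },
  { name: "sphere", position: [0, 0, 2], color: "white" },
];
const gaze: GazeRay = { origin: [0, 0, 0], direction: [0, 0, 1] };

applyIntent({ action: "paint", color: "red", usesDeixis: true }, gaze, scene);
console.log(scene[1].color); // "red", since the user was looking at the sphere
```

The same fusion pattern extends to hand tracking: a pointing gesture could substitute for, or refine, the gaze ray when resolving the referent.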

Speech is also a unifying input. While every device seems to have its own controllers, gestures, and other affordances, speech is one input we can take with us across all of them. Learn the lexicon of an application on HoloLens, and it should work equally well on Oculus, ARKit, and beyond.

To learn more about Lexicon, check out the Documentation.

To watch Lexicon in action, go to our YouTube channel.