What Microsoft Mesh means for developers


Microsoft unveiled its new mixed-reality platform, Mesh, at its March 2021 Ignite event. The splashy launch didn’t go into significant technical detail, though it did show shared, cross-platform, virtual and augmented experiences and a HoloLens-based avatar conferencing service. There was a lot to see but not a lot of information about how we’d build our own code or use the service.

Despite the scant detail offered at Ignite, it’s relatively easy to make an educated guess about Mesh’s components. We’ve been watching Microsoft unveil most of the services needed to build Mesh over the last couple of years, and Mesh brings all those elements together, wrapping them in a common set of APIs and development tools. Unlike many other augmented-reality platforms, Microsoft has a lot of practical experience to build on, with lessons from its first-generation HoloLens hardware, its Azure Kinect 3D cameras, and the Mixed Reality framework built into Windows 10.

Building on the HoloLens foundation

If you look at the slides from the Mesh session at Ignite, it won’t be surprising that the scenarios it’s being designed for are familiar. They’re the same collaborative, mixed-reality applications Microsoft has shown for several years, from remote expertise to immersive meetings, and from location-based information to collaborative design services. While they’re all familiar, they’ve become more relevant thanks to the constraints COVID-19 has imposed on the modern work environment: remote work and social distancing.

Over the years that Microsoft has been building mixed-reality tools, it has noted a number of key challenges for developers building their own mixed-reality applications, especially when it comes to building collaborative environments. The stumbling blocks go back to the first shared virtual-reality environments: issues that prevented services like Second Life from scaling as initially promised, or that held back location-based augmented-reality applications.

First, it’s hard to deliver high-definition 3D images from most CAD file formats. Second, putting people into a 3D environment requires significant compute capability. Third, it’s hard to keep an object stable in a location over time and across devices. Finally, it’s hard to synchronize actions across multiple devices and geographies. Together, these issues make delivering mixed reality at scale a massively complex distributed-computing problem.

It’s all distributed computing

Complex distributed-computing problems are something the big public clouds such as Azure have gone a long way toward solving. Building distributed data structures like the Microsoft Graph on top of services like Cosmos DB, or using actor/message transactional frameworks like Orleans, provides a proven foundation that’s already supporting real-time events in games such as Halo.
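Orleans itself is a .NET framework, but the actor pattern it popularized is easy to sketch in any language. The following is a minimal, hypothetical illustration in Python (not Orleans’ actual API, and not Mesh code): each shared object in a session is modeled as an actor that owns its state and processes update messages one at a time, which is the property that lets many clients act on the same virtual object without racing each other. All names here (SharedObjectActor, Move, the client IDs) are invented for the example.

```python
# Hypothetical actor-model sketch: one actor owns one shared hologram's state
# and serializes all updates to it through a mailbox, the way an actor
# framework like Orleans serializes calls to a grain.
import asyncio
from dataclasses import dataclass


@dataclass
class Move:
    client_id: str
    x: float
    y: float
    z: float


class SharedObjectActor:
    """Owns the state of one shared object and applies updates in order."""

    def __init__(self, object_id: str):
        self.object_id = object_id
        self.position = (0.0, 0.0, 0.0)
        self._mailbox: asyncio.Queue = asyncio.Queue()

    async def send(self, message: Move) -> None:
        # Clients never touch the state directly; they only enqueue messages.
        await self._mailbox.put(message)

    async def run(self) -> None:
        # Messages are handled strictly in arrival order, so concurrent
        # clients can't corrupt the object's state.
        while True:
            msg = await self._mailbox.get()
            self.position = (msg.x, msg.y, msg.z)
            print(f"{self.object_id} moved to {self.position} by {msg.client_id}")


async def main() -> None:
    hologram = SharedObjectActor("engine-model")
    worker = asyncio.create_task(hologram.run())

    # Two clients in different locations update the same object; the mailbox
    # turns their concurrent sends into a single ordered stream of changes.
    await hologram.send(Move("hololens-london", 1.0, 0.5, 2.0))
    await hologram.send(Move("pc-seattle", 1.1, 0.5, 2.0))

    await asyncio.sleep(0.1)
    worker.cancel()
    try:
        await worker
    except asyncio.CancelledError:
        pass


if __name__ == "__main__":
    asyncio.run(main())
```

In a real service the actor would also broadcast the new state back to every connected device, but even this toy version shows why the model suits shared mixed-reality sessions: consistency comes from ownership and message ordering rather than from locks shared across clients.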

