Mixed Reality and the OS

I’ve been doing a lot of coding and pontificating about mixed reality over the past six months or so.  And my conclusion is that we’re converging on an obvious thing:  A mixed reality Operating System (OS) environment.

That may not seem like any kind of huge logical jump.  It may seem obvious.  Perhaps you feel I just blurted something out that’s at least a year and a half behind the times.  But I’ve gone into a bit more concrete detail than that.  More later.

The reason I went on this design/futurist hike is rather simple.  I didn’t have much money for an expensive Virtual Reality (VR) development kit.

I’ve been holding off getting involved in VR for some time.  Mostly because I thought it was too early.  I don’t have much in the way of resources.  I’m not going to go deep into hardware design and patents as a business model.  I’m not going to try and build a global distribution platform as a business model.  I’m not going to try and patent software algorithms as a business model.  I don’t have that kind of money to burn.

Further, the Venture Capital (VC) world is not the right place for me.  VC wants a quick turnaround.  VC wants flashy stuff as fast as possible to sell the entity off.  If VC doesn’t get it quickly, VC forces the entity into insolvency and cuts it off.  And while there’s a ton of VC available for VR these days, it’s a feeding frenzy.  VC is bad enough when it’s sane.  When it’s in a competitive feeding frenzy, it’s beyond dangerous to you and your entity.  It’s more irrational than usual when it’s like this.

Rather, my entry into VR needs to be VR content.  The right timing would have my VR content start to get appealing just about when there are enough VR consumers to make it profitable.  I waited until I saw that convergence on the horizon near enough that I could start work.

When the time was right to get started, I didn’t have money to burn.  And my assessment of the coming consumer VR storm suggested consumers also don’t have money to burn.  I therefore thought that mobile VR such as Google Cardboard would be a good place to start.  It would evolve to be just good enough and be cheap enough.  But I knew that Oculus Rift, the HTC Vive, and some kind of console-attached video-game VR was on the way too.  And further, that once console companies began subsidizing VR hardware the way they subsidize their console hardware, consumers would adopt it.

What I needed was to target my content both to the fairly minimal mobile VR world and also to the oncoming high-end VR world, without doubling my workload.

Unfortunately, at that time, every VR platform had its own proprietary software stack.  Distribution and integration with computer systems was extremely unclear.  How does one launch a reality exactly?  It still is murky even though we have some implementations.  They’re competing.  They’re all trying to lock you into their own app-store environments.  It’s a mess.

As such, I embarked on a long thought<->development<->re-thought<->re-development process, to try and work out how to abstract VR in such a way that I could focus on content and not worry about platform.
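To make that concrete, here is a minimal sketch of the shape such an abstraction layer might take.  Every name in it (RealityPlatform, Scene, CardboardPlatform, run_content) is hypothetical and purely illustrative, not the actual implementation:

```python
from abc import ABC, abstractmethod

class Scene:
    """Platform-neutral description of content.  Content code only touches this."""
    def __init__(self, objects=None):
        self.objects = objects or []

class RealityPlatform(ABC):
    """Hides platform-specific details (Cardboard, Rift, Vive, ...) behind one interface."""

    @abstractmethod
    def name(self) -> str:
        ...

    @abstractmethod
    def render_frame(self, scene: Scene) -> None:
        ...

class CardboardPlatform(RealityPlatform):
    """One concrete backend; a Rift or Vive backend would be another subclass."""
    def name(self) -> str:
        return "cardboard"

    def render_frame(self, scene: Scene) -> None:
        pass  # would hand the scene to the mobile-VR renderer here

def run_content(scene: Scene, platform: RealityPlatform) -> str:
    # Content is written once, against the abstraction; the platform is swappable.
    platform.render_frame(scene)
    return platform.name()
```

The point of the shape: content code depends only on RealityPlatform, so supporting a new headset means adding one subclass rather than touching the content.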

Another huge risk, I felt, was that my early content would quickly become incompatible with newer forms of VR.  Content obsolescence is a big problem for a content company.  A back-catalogue is an important thing to develop and maintain.

Therefore, I added the requirement that what I develop needed not just to be multi-platform, but future-proof.

When I added in a requirement that it be future-proof, I immediately had to consider things like Microsoft HoloLens, and whatever the heck Magic Leap may or may not release.  Those devices are not so much VR, as Augmented Reality (AR).

Since I needed to actually create a compatibility layer, I couldn’t just talk amorphously and abstractly about how VR and AR relate, as “thought-leaders” get to do (clap, clap, clap).  Rather, I needed to reason concretely and concisely about it and then engineer an abstraction.  And then, most importantly, I needed to implement behind that abstraction.  Such that I could actually get content development done.  Which is the point, after all.

Lastly, I needed to keep the workload down.  If I commit to developing and maintaining an abstraction layer, I may not have time left to focus on my primary business, which is content.

As such, I decided that my abstraction layer probably should be open-source, such that I can try to bring others into the fold and share resources when it comes to maintenance and development.  Adoption is key.

So, this post is the first step in publishing the fruits of my labors.  I intend to publish a series of concept/design articles that present my observations, conclusions and implementations.  Think of this/them as a primer.  In parallel, I’m taking my code that implements these ideas and getting it ready for release.

But I’ll start with my overall conclusion.

VR is a subset of AR and really, it just is AR.  They are not cousins.  They are not similar but separate things.  Literally, VR is a subset of AR.  You can turn any AR device into a VR device by turning out the lights.  Or more crudely, if you were to put duct-tape over the visor of a HoloLens in the right places, you’d have created a not-so-great and very expensive VR headset.

Our current problem is that our implementations of both VR and AR are neither complete nor ideal.  As such, right now it may not seem like VR is a subset of AR.  It may seem unique and better.  But from a long-term content perspective, it’s not.  It’s just a question of how much Real Reality (RR) you choose to block out of what will eventually be a near-perfect AR.  When you block out enough RR, you probably will consider it a VR rather than an AR.
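One way to picture this is to treat the amount of RR a device blocks out as a single parameter.  The toy function below (the name and the boundary treatment are purely illustrative, not any standard) classifies a device along that one axis:

```python
def classify(rr_blocked: float) -> str:
    """Classify a display by the fraction of Real Reality it blocks out.

    rr_blocked: 0.0 means the real world is fully visible, 1.0 means it is
    fully blocked.  On this view, VR is just the limiting case of AR where
    everything real has been blocked out -- duct tape over the visor.
    """
    if not 0.0 <= rr_blocked <= 1.0:
        raise ValueError("rr_blocked must be in [0, 1]")
    return "VR" if rr_blocked == 1.0 else "AR"
```

Everything short of total blockage is some flavor of AR; only the extreme of the dial earns the name VR.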

And further, there is another form of reality, which I term Ethereal Reality (ER).  ER is an information reality that exists beyond RR.  An easy example is a network service like a chat service.  When you think about it, a chat service is like telepathy.  It doesn’t exist physically.  It’s not a component of RR.  It’s ethereal.  It has no anchor to physical space like an AR.  Nor does it supplant physical space like a VR.

An AR or VR is a process.  Or sub-process.  Or thread.  And like any application, they need to consume services from a kernel and its drivers to do what they’re designed to do.

A depth camera might be an RR service.  A chat server would be an ER service.  An input device might be an RR service.  A combination of RR, AR and VR services might provide the anchors and spaces an AR needs to place itself.
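Sketched as code, this looks a lot like an ordinary operating-system service registry.  Everything below, from ServiceKind to the example service names, is a hypothetical illustration of the idea, not a real MR kernel API:

```python
from enum import Enum

class ServiceKind(Enum):
    RR = "real"      # services backed by the physical world: depth camera, input device
    ER = "ethereal"  # pure information services: chat, presence, messaging

class ServiceRegistry:
    """Toy kernel-side registry that AR/VR processes query for the services they consume."""
    def __init__(self):
        self._services = {}

    def register(self, name: str, kind: ServiceKind) -> None:
        self._services[name] = kind

    def lookup(self, kind: ServiceKind) -> list:
        # Return the names of all registered services of the given kind.
        return sorted(n for n, k in self._services.items() if k is kind)

registry = ServiceRegistry()
registry.register("depth-camera", ServiceKind.RR)
registry.register("hand-input", ServiceKind.RR)
registry.register("chat", ServiceKind.ER)
```

An AR process would then ask the registry for the RR services it needs to anchor itself, and for whatever ER services its content consumes, just as an ordinary application asks a kernel for files and sockets.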

All of this means we’re slowly headed toward a proper Mixed Reality (MR) Operating System.  And that such an operating system’s design must be a careful blend of existing OS design, with new MR concepts and abstractions.  And there will be mistakes.  And there will be re-designs.  But the better one can predict where it will eventually converge, the better one can develop timeless content for it.  Which is my goal.

For example:  I built these designs before Microsoft released their documentation and development guides for the HoloLens OS.  When they did so, I was happy to find that nothing in my thesis or code conflicted with what they had done.  They seem to have implemented a thing that’s less feature-ful than where I think we’re going.  But those features will likely be added over the years.  Just like every other MR device or platform I’m targeting will eventually converge to the inevitable.  For me, that means I don’t need to change a thing to add the HoloLens to my publishing targets.

I’m looking forward to sharing/publishing my concrete designs and code that have already stemmed from these conclusions in the coming months.  As a content developer, it’s important to feel secure in your ability to distribute your content well.  Hopefully, this will be a good solid step in that direction.
