VR Review: Unity’s EditorVR Experimental Build

I’ve been limiting my VR/AR/MR (Virtual, Augmented and Mixed Reality) reviews and critiques to the realm of VR Cinema.  For example, MR ROBOT or INVASION!

I hope to continue in that vein.  However, Unity Technologies has released something that I think is extremely important.  And for the moment, I want to broaden my scope to talk about it.

On their blog, Unity announced the release of an experimental build of their EditorVR.  Getting it set up fully for development (cloned dev branch from GitHub) is a little tricky and I’d refer you to their blog post as a starting point.  I won’t get into that aspect of things.  This is bleeding edge tech.  It’s only natural that it’s a little tricky to get set up.

I got it up and running and took it for a spin with my VIVE.  My mind was blown on many levels within minutes.

Simultaneous self-contradictory conclusions:

  1. It is clunky and nearly unusable.
  2. This is the future.  It is amazing!

Simultaneous emotional responses:

  1. I hate it.
  2. I love it.

Simultaneous contradictory desires:

  1. This needs a ton of work.  Throw most of it away.
  2. I want to adopt it fully.

Why am I acting like a schizophrenic?  Well, firstly, this is experimental.  I wouldn’t call it Beta.  I wouldn’t call it Alpha.  But also, it’s not really a product.  For the moment, it’s an open system with an experimental set of tools.  The meat of the thing is available on GitHub and it’s meant to be extended and expanded heavily.  MIT-X license.  Open Source!

As a product, it’s a mess and not so useful.  I don’t think the folks at Unity would really disagree with that.  In fact, I’ll go so far as to say: great work.  Get messy.  Break things.  If there is ever a time to just throw it out there, this is it.  We should all be applauding what you’ve built.

But, as an open system, it’s already revolutionary.  I’d say I feel like I’m looking at an early version of Visual Studio, before it was released.  But that doesn’t fully cover the potential of this platform in the near term or the long term.  This is IMPORTANT work.

A deeper dive into the code shows Unity may have already stumbled into some pitfalls.  But it’s not too late to get out of them.

A bit of disclosure here.  I have been using Unity for many years.  I am a C# developer as well as a 3D artist, among other things.  So, I’m no slouch when it comes to Unity from all angles.  Nor am I a slouch from a creative development or project management angle.  I can discuss it as a developer, 3D artist, content creator, filmmaker and general technologist at many scales.

Also, I have a lot of experience specifically in 3D pipeline and development for general entertainment, feature films and visual effects.  Tools for content creation in 3D space are my thing.

Therefore, a broad set of observations follows.  And then, a whole lot of deep technical detail.

EditorVR for Unity Scene Building

The tools and interfaces that ship with EditorVR are centered on the task of building a Unity Scene within a VR environment.  In a sense, you’re building a scene from within the scene.  No longer does it show up as little windows in front of you, which allow you to peer into it and manipulate it from afar.  Rather, it surrounds you and you call up interfaces and tools to manipulate your world, which is the currently loaded scene(s).  When done, take off your headset and hit “save.”

The tools, however, are primitive and clunky.  Partly because VR input is still very nascent.  And partly because these tools are super early and experimental.

The promise here is that, creatively, you’re going to make better choices when you’re actually standing in the environment than you would designing it from outside.

As a filmmaker I know that sets and props which look amazing in person often look dramatically different on film, and vice versa.  Production design is for the screen.  It takes extrapolation skills and experience to know how a thing is going to translate to the screen.  Often it can look ridiculous or wrong in person.

If you are developing for VR, then your creative choices are likely to improve dramatically.  You won’t need to extrapolate what a thing will feel like; you can work on it while actually feeling it.  This potentially shortens the creative iteration loop to a real-time loop.  Which should be invaluable going forward.

That being said, it doesn’t quite work right now, because the interface is so clunky and a lot of necessary tools are missing.

Examples follow:

  1. No snapping.  You can’t assemble snap-able prefabs together effectively.  Nor can you easily place them on a ground plane.
  2. Big ground objects highlight and drive you crazy as you move your controllers around.
  3. You can’t control hierarchy and scene organization effectively at all.
  4. Typing values into the inspector is very slow and difficult.
  5. Gorilla arm sets in very quickly.  Flailing your arms around while standing for hours at a time just isn’t reasonable day after day.  Think about all the ergonomics that go into modern assembly-line work, and all the injury that results when attention isn’t paid to this.
  6. Text is nearly unreadable and requires that things be blown up laughably large, which in turn makes it hard to filter through large numbers of things.  Obviously this will get better with higher resolution headsets over time.  But better layouts are needed.

That’s just a few.  And really, once you fix those things, you’ll likely just unearth another 20.  And so on.

In a sense, these tools should be learned from, thrown out, and replaced with a brand new iteration.  There’s just no reason to keep going with them.  They’re not all that successful as a usable product.  In many cases, they show what direction NOT to go in more than anything else.  They are a necessary first research experiment.  They’ve done their job.  They’re a successful experiment.  It’s time to design a new experiment.

I’d suggest thinking in terms of working contexts that change frequently.  Something like Softimage|XSI was built this way, with contexts entered and exited from the keyboard very rapidly.  This was a big part of why it was a much faster tool than something like Maya.

I don’t really need to be in VR to build prefabs and set up behaviors and such.  I really just need to be in there for scene building at different stages of the creative process.  Similar to the way some people switch layouts in traditional content creation software.  But with such limited input options, context switching would probably need to happen more often.

For example, I need to be in there to do basic layout.  I probably need to be in there to do lighting.  I probably need to be in there to do some material application and texturing.  But not all material building and texturing.  If someone builds a really good toolkit for house building, I need to be in there for that.

All of these are contexts that one enters and leaves.  And I’d strongly suggest that trying to force them all into a universal scene editing interface is a mistake.  Especially given the extremely limited user input possibilities that current VR input devices provide.

I do think, however, that a top level interface for changing editing contexts is worth building in.  Because I might want to enter the house building context at a place in the scene, and then exit it and enter a set decoration context once I’ve built the darned thing.  And I probably should be able to do that without taking off the headset.  Consider that Valve has done this with their SteamVR system.  They dedicated a button to jumping out of the application context, allowing you to either re-enter it or enter a different one.

Consider the idea that there are some contexts that may be better handled from a seated position at the desk.  Not all editing contexts must be full room contexts.  If a kind of editing is best accomplished by throwing the VR headset on at the desk, but with your hands on keyboard and mouse, that may be ideal.

Eventually, we’ll get good text entry within VR.  Possibly even a tracked VR keyboard that we can actually see in VR and type on.  Until then, consider the slightly hacky solution of depending on blind touch-typing skills.  Unity is a professional tool.  It’s probably okay to demand that users be able to touch type to be proficient in some contexts.  I see the virtual keyboards Unity has constructed.  I appreciate them.  They know as well as I do that they’ll not suffice for real text entry.

Unity is now the ultimate content creation and programming tool

So, let’s go forward with the concept that the existing Scene Editing tools and context are only one of many potential EditorVR contexts.  What does that mean?  If we believe that AR, VR and general MR are going to stick with us, then we have our very first true native MR content creation platform.  But it’s also our first true native MR programming platform.  That’s why I say it’s like Visual Studio and Maya.  I’d also say it’s like a video editing platform like AVID or a color timing platform like Resolve.  It could be like Photoshop if we want to expand further.

Humor me here.  Consider the following potential EditorVR capabilities and contexts.

Motion Picture Editing Suite

One can mimic the capabilities of AVID and Resolve within EditorVR contexts.  With a lot of work, obviously.  But it’s technically possible.  One can literally load in raw footage as assets and then open up an empty scene in which to edit it together, spawning editing interfaces such as timelines, bins and playback controls.  Literally a virtual editing room in which to work.  If you’re old enough, you worked on flatbed editors with physical rolling bins that held film clips.  Imagine an MR version of this where the physical devices are replaced by virtual digital interfaces.

One can also mimic the kinds of finishing tools one finds in a tool like Resolve. Open a new scene.  Fire up the finishing EditorVR context.  Spawn color timing consoles and vector scopes and project organization controls.  Reference the edit you made in the other room.  Again, if you work on professional projects, you know that the Digital Intermediate Suite is a physical room you go to with your project.  The virtual digital iteration is easy to imagine.

One can do this with full VR video and maybe even fully interactive VR, in which the room, which is Unity, is both the content and the ultimate run-time.  One can also do this with traditional screen-based material.  One simply needs to spawn a big old rectangular screen in their editing or color timing workspace.  Maybe more than one.

Finally, you’d have the ability to export to more traditional formats if need be, from your room.  Or bake it down to an intermediate or distributable format.  You can use Unity’s existing publishing tools to author a traditional or VR player.  Whatever is needed.

3D Modeling, Design and Sculpting

I’m not the first person to suggest 3D sculpting in VR.  There are already some early solutions for this.  But it’s worth mentioning.  A fresh scene can easily be turned into a modeling and sculpting studio with the right EditorVR contexts.  Such content can then be brought into other scenes and EditorVR contexts.

But think bigger. Digital clay is great.  Something with the power of ZBrush running inside of Unity EditorVR would be absolutely phenomenal.  But, if you’ve worked in traditional 3D Art tools such as Maya or Studio Tools, you know there are all kinds of modeling.

A set of contexts for good old NURBS patch modeling would be useful.  There are some really impressive tools out there for car and product design that still work this way.

If one works in Houdini, one understands the idea of building a set of procedural tools that stack to help construct complex models.  Such a building-design toolkit could let one define a multi-story building with a series of curves, sliders and selections.  Such a system built around an EditorVR context could be extremely powerful.  One could make such a toolkit as exacting as AutoCAD or as mushy as polygon models in Maya.
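To make that concrete, here’s a toy sketch of the parameter-driven core such a toolkit might have.  Everything in it is hypothetical and invented for illustration; a real system would drive these parameters from in-VR curves and sliders rather than plain fields.

    // Toy sketch of a parameter-driven building generator (hypothetical).
    // A real EditorVR context would expose these as in-VR sliders/curves.
    using UnityEngine;

    public class BuildingGenerator : MonoBehaviour
    {
        public int floors = 4;
        public float floorHeight = 3f;
        public Vector2 footprint = new Vector2(10f, 8f); // width x depth

        public void Generate()
        {
            for (int i = 0; i < floors; i++)
            {
                // Each floor is just a box here; a real toolkit would stack
                // procedural walls, windows and trim instead.
                var floor = GameObject.CreatePrimitive(PrimitiveType.Cube);
                floor.name = "Floor_" + i;
                floor.transform.SetParent(transform, false);
                floor.transform.localPosition = new Vector3(0f, (i + 0.5f) * floorHeight, 0f);
                floor.transform.localScale = new Vector3(footprint.x, floorHeight, footprint.y);
            }
        }
    }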

Go on and on.  Any kind of complex constructed thing can have a specialized set of EditorVR contexts for creating the ideal crafting and designing space.

Like say, pottery.

Word Processing and Publishing

Yea, obviously, you can also edit words and text, and even publish documents.  We want better text entry devices for MR to do this in the long run.  But consider the possibilities here when it comes to using EditorVR contexts and scenes to create ideal writing rooms and layout rooms in which to write and publish.

What about editing rooms for translating text for localization purposes?

What about editing rooms for writing and adapting interactive screenplays directly into Unity?

The Open System

As explained earlier, right now, EditorVR is available on GitHub as source code that you import into an experimental build of Unity.  Presumably, this experimental build has the nitty gritty technical bits necessary to enable VR editing.  The actual EditorVR package is designed as an open system.  I think this is wonderful, and it’s critical that it be maintained as such going forward.  It’s currently released under the MIT-X license.

There is a danger here that Unity decides to focus EditorVR on being a product: a set of tools for working on Unity scene-based content in VR, to the exclusion of anything bigger.

Ideally, Unity would keep the open system and a context-switching toolset as their top-level priority, such that Unity EditorVR becomes the de facto blue-sky content creation and programming platform for MR going forward.

However, the design of the code suggests that’s not what they’re doing.  More on that later.

Unity is the medium

Let’s get theoretical.

The reason all this is best done inside Unity is that MR is supposed to be ubiquitous going forward.

Users will construct their MR environments and work-spaces for themselves.

Microsoft seems to think this is best handled at the operating system level.  An exploration of their MR operating system demonstrates this.

However, I think Microsoft might be incorrect in that assertion.  Because all users are MR content creators now.  And in a sense, they may never exist purely in run-time again.  They may always experience their content in design-time, because they’re constantly redesigning their MR as they use it.  They are content developers that operate at a higher level, but always in the moment.

As such, the traditional content creation and publishing industries become subcontractors.  An MR piece of content is delivered to the user.  But they in turn put it in their MR and in context.  They constantly edit that MR.

That doesn’t mean all movies and games become fully editable by their viewers and users, which might destroy authorship itself.  To the contrary.  The piece of MR you publish to the user as “the movie” (for example) can certainly be very baked and un-editable at a low level.  But that doesn’t change the fact that the act of placing it and using it in a MR is an act of authoring.

If the user is to be a content creator, they will want the most powerful tools available to do that at all times.  Those tools may be put away in a toolbox.  But the user will always want the ability to open that toolbox and use them.

Microsoft and SteamVR both try to define polished, user-modifiable environments in which they execute pieces of MR.  But users are very limited in their ability to author those environments.  Feature upon feature will be added over the years to make those run-time environments more editable by the user.  SteamVR has rudimentary tools for this that have already been iterated multiple times.  More and more, interfaces will be authored to allow content creators to make their content more author-able at run-time by users of these platforms.

Unity on the other hand, has just managed to create a rather primitive (unpolished) environment which is infinitely modifiable and author-able by default.  They have started from the end-goal.  And content creators simply need to bake the stuff they mean to be baked, to maintain their status as authors in that medium.  In the end, the user will prefer the author-able environment because the user is the author of their own reality.

Unity is the medium.  A shared definition of reality for all authors to publish content to one-another.  There is no exhibition hall.

Deep Code Dive

A deep dive into the code has revealed a few things that are worth reviewing and critiquing.

VRView versus EditorVR

VRView is the low-level editor window that makes VR possible at edit time; it sits at the top of the VR editing stack.  It doesn’t actually have a way of spawning itself, though it assumes it will be spawned by something else.  This last bit about how it spawns is probably backward.

EditorVR, on the other hand, is a GameObject that gets spawned into a scene, and it is Unity’s current toolset.  Paradoxically, EditorVR is also the thing that spawns the VRView, via menu and hotkey entries.  Though that’s done statically, before the EditorVR GameObject is spawned.  It magically hooks itself up to be auto-spawned by VRView.

This would be fine if EditorVR were a top-level context switcher.  But it’s not.  VRView seems to have originally wanted to host many EditorVR-like things.  But then it gave up on that and instead decided to just be spawned by EditorVR directly, in a way that doesn’t allow for context switching out of EditorVR to anything else.
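To illustrate the pattern (a simplified sketch, not the actual EditorVR source), the spawn chain amounts to a static menu entry that opens the window, plus a single static hook the window calls back into:

    // Simplified illustration of the spawn pattern described above.
    // Class and member names are mine; place in an Editor folder.
    using UnityEditor;

    public class MyVRView : EditorWindow
    {
        // One static hook -- effectively hard-wiring a single occupant
        // (EditorVR) instead of letting many contexts register and switch.
        public static System.Action onEnable;

        [MenuItem("Window/EditorVR (illustration)")]
        static void Open()
        {
            GetWindow<MyVRView>("VRView");
        }

        void OnEnable()
        {
            if (onEnable != null)
                onEnable(); // EditorVR auto-spawns its GameObject here
        }
    }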

SpawnDefaultTools

When EditorVR starts itself, it runs a function called SpawnDefaultTools.  It immediately sets up the horrible selection, highlighting, transform and manipulator tools.  All great first stabs.  All terrible things.  They’re what are known as “permanent” tools.  This is a problem.

There is no abstract sub-classing here that suggests EditorVR is just a choose-able example implementation.  It’s written in such a way that it is intended to run automatically whenever one enters the VRView.  And likewise, the default permanent tools are always a thing.

The problem here is that it betrays Unity’s intent to have a single, top-level uber-editing-context for scene manipulation.  And their intent that it be what VR editing is.  And this is a horrible mistake.  This makes EditorVR a product for scene editing, rather than a tool for switching to different editing contexts and toolsets within the larger Unity editor and platform.

Unity’s normal editing interface makes no such demand or assumption.  You can always change to a new layout and fill it purely with custom editors and windows.  Not so in EditorVR.
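For contrast, here is how little desktop Unity demands of you.  A minimal, self-contained example (the window and its contents are mine) of a custom editor window that could fill an entire layout with no scene tools in sight:

    // A custom editor window that works on the project, not the scene.
    // No selection, no manipulators, no scene view required.
    using UnityEditor;
    using UnityEngine;

    public class ProjectDashboard : EditorWindow
    {
        [MenuItem("Window/Project Dashboard")]
        static void Open()
        {
            GetWindow<ProjectDashboard>("Dashboard");
        }

        void OnGUI()
        {
            GUILayout.Label("Project-wide tasks", EditorStyles.boldLabel);
            if (GUILayout.Button("Run asset audit"))
                Debug.Log("Auditing assets...");
        }
    }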

IExclusiveMode

The one bright spot here might have been IExclusiveMode, which makes the self-documenting claim:

“Make use of exclusive mode, which turns off any other tools (e.g. TransformTool, SelectionTool, etc.)”

But upon using it, one finds that’s not really true.
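For reference, opting in appears to be as simple as implementing a marker interface.  A hedged sketch; the interface shape is assumed from the doc comment quoted above, and it’s redeclared here purely for illustration:

    using UnityEngine;

    // Assumed shape, redeclared for illustration; the real interface
    // ships with EditorVR.
    public interface IExclusiveMode { }

    // A tool opts in simply by carrying the marker.
    public class MySculptTool : MonoBehaviour, IExclusiveMode
    {
        // In principle, this tag should shut off the SelectionTool,
        // TransformTool, menus and the rest, leaving the tool free to
        // define its own UI.
    }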

One would hope, upon entering an ExclusiveMode tool, that all the other menus and UI would go away and you’d be left with just a simple “go back” capability which you could expose to the user any way you’d like.

Instead, you still have Unity’s ridiculous controller-mounted tool-disabling menu button.  And further, it doesn’t turn off all the existing navigation, selection, and manipulation tools.

What this means is that you can’t really turn off Unity’s scene editing tools and enter a purely task-oriented editing context.  And this is a disaster.

Really, this ExclusiveMode idea should be replaced by a higher level context/layout switching solution that doesn’t put any restriction, assumption or UI on the editing context one enters without the developer explicitly adding it.

Tool01

I took the time to develop a tool.  I call it Tool01.  I’m very imaginative in my tool naming.

Tool01 was meant to be an ILocomote tool.  It’s a tool for moving around.

Since Unity implemented a Blink Navigation tool, I thought I’d make something different.

Most VR buffs will tell you that traditional translation and traveling tools make you sick in VR.  I’ve never really tried them.  So I thought I’d make one and see if they were right.

The basic design: old gamers used the W-A-S-D keys to move, left-shift to run, and the mouse to turn.  So I figured I’d use the left thumbpad to move, the right thumbpad to turn, and the left trigger to run, or toggle running.  This should trigger the sense memory of old gamers and feel familiar.
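The movement math would look roughly like the sketch below.  The input fields are hypothetical stand-ins, since EditorVR routes input through its own action maps; the math itself is plain Unity.

    // Sketch of the Tool01 locomotion scheme.  Input plumbing is a
    // hypothetical stand-in; in EditorVR it would arrive via action maps.
    using UnityEngine;

    public class Tool01Locomotion : MonoBehaviour
    {
        public Transform rig;            // the camera rig to move
        public float walkSpeed = 2f;     // meters per second
        public float runMultiplier = 3f;
        public float turnSpeed = 90f;    // degrees per second

        // Fed from the left/right thumbpads and the left trigger.
        public Vector2 leftPad;
        public Vector2 rightPad;
        public bool runHeld;

        void Update()
        {
            float speed = walkSpeed * (runHeld ? runMultiplier : 1f);

            // Left pad: strafe/forward relative to where the rig faces.
            Vector3 move = rig.forward * leftPad.y + rig.right * leftPad.x;
            move.y = 0f; // stay on the ground plane
            rig.position += Vector3.ClampMagnitude(move, 1f) * speed * Time.deltaTime;

            // Right pad X: yaw the rig, mimicking mouse-turn.
            rig.Rotate(0f, rightPad.x * turnSpeed * Time.deltaTime, 0f);
        }
    }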

Unfortunately, I found what I assumed must be a bug in EditorVR.  It filters out all ILocomote tools and considers them “permanent” tools when it adds tools to the menu.  I assumed it must be a bug because otherwise, Unity believes top-level locomotion tools are to be active at all times.  And that’s crazy.

Anyhow, I had to move on from Tool01 without testing it.  But it was worth doing to get used to how EditorVR works.

Tool02

Tool02 is, I think, a serious improvement in nomenclature over Tool01.

Tool02 is an attempt at any kind of creation tool besides the simple drag-and-place creation tools that come with EditorVR.

I used LineRenderer to create a fun ribbon drawing tool.  Kind of like Google’s TiltBrush, but far less useful or impressive.  Not bad for two hours’ work.

[Screenshots: “Squiggles!” and “VR Squiggles!”]

I didn’t bother attaching a material.  Hence the pink.

As with most VR, you can’t get a sense of the thing without being in VR.  Trust me, it’s just like drawing ribbons of stuff in TiltBrush.  It works fine.  Except of course, when you’re done, you have actual game objects with Line Renderers on them in your scene.  Which is pretty cool actually.
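The core of the approach looks roughly like this.  A hedged sketch rather than my exact code: the stroke triggering and controller transform are hypothetical stand-ins, while the LineRenderer usage is standard Unity.

    // Sketch of the Tool02 ribbon drawer.  Hook Begin/Update/EndStroke
    // to your trigger events; brushTip stands in for the controller.
    using System.Collections.Generic;
    using UnityEngine;

    public class Tool02Ribbon : MonoBehaviour
    {
        public Transform brushTip;       // controller tip transform
        public float minSegment = 0.01f; // meters between recorded points

        LineRenderer current;
        readonly List<Vector3> points = new List<Vector3>();

        public void BeginStroke()
        {
            // Each stroke becomes a real GameObject in the scene.
            var go = new GameObject("Squiggle");
            current = go.AddComponent<LineRenderer>();
            current.widthMultiplier = 0.02f;
            // No material assigned -- hence the default pink.
            points.Clear();
        }

        public void UpdateStroke()
        {
            if (current == null) return;
            Vector3 p = brushTip.position;
            if (points.Count == 0 || Vector3.Distance(points[points.Count - 1], p) > minSegment)
            {
                points.Add(p);
                current.positionCount = points.Count;
                current.SetPosition(points.Count - 1, p);
            }
        }

        public void EndStroke() { current = null; }
    }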

It was with Tool02 that I was able to use IExclusiveMode and confirm that it’s not what you would ever want it to be.  IExclusiveMode is somewhat broken.  But its design is just wrong, regardless of whether it works.

If I want the user to enter a TiltBrush-like mode to work for a bit, I don’t want the rest of Unity’s scene editing tools and navigation tools available to them.  I don’t even want Unity’s crazy weird idea of menus to show up.  I want the sculpting space to have its own UI allegories as I see fit, and I’ll make sure to put in the right way to exit that editing context.  This always needs to be possible.

Now, I could decide to bring a subset of Unity’s scene editing tools in.  I could decide to adopt the windows and the idea of “tools” and “workspaces.” I could decide to allow in some of their existing menus.  But I should be making that decision.  Unity should not be forcing it.  I should be importing their scene editing allegories and tools as I wish.

Further Evidence of Awkward Design

Earlier in this piece I mentioned that I had posted a bug with ILocomote.  I stated that:

This is probably a bug.  Otherwise, Unity believes all locomotion tools are to be active at all times.

Subsequent to that, Unity got back to me.  They stated that:

This is not an oversight. You’re simply seeing a feature that has not been implemented yet: a Settings menu. The Settings will be where you can set defaults for locomotion or selection or the transform tool. The idea is that all ILocomotors, ITransformers, etc. will be enumerated, so that a user can set their default. These are still permanent tools though as they are not toggled on/off like other tools.

This unfortunately confirms my fears regarding Unity’s desire to create some kind of super scene editor that has permanent navigation and transforming and such.  This is a terrible mistake and I once again strongly suggest Unity take a bit of a humility pill and consider that their scene editing paradigm is best exposed as one of many potential EditorVR editing contexts.

For reference, here’s a link to the bug.

Suggestion to Unity

Stop right now and fix EditorVR’s design priorities.

EditorVR’s top-level tools should be so minimal as to focus purely on launching into other toolsets and editing UI paradigms that have no prescribed UI at all.  Just a loose requirement to author an obvious, quick way to get back out.  Focus on being able to switch EditorVR contexts quickly, both from VR and the keyboard, such that a user can jump from full-body contexts, to seated contexts, to desktop contexts and back.  Keep it clean.

Don’t force a selection or highlighting paradigm on EditorVR developers and users at such an early stage.  Don’t even assume that what they want to do is select and highlight things.

Then, the existing scene editing tools can be refactored to be one such editing context, based on a universally available set of UI tools and allegories that can be used by other tools developers if they wish.

Consider: in Unity’s existing desktop interface, one can create a layout for editing that doesn’t even have a game view, scene view, hierarchy view or project view.  It might be completely populated by custom editor windows written by third parties that work on the project rather than the scene.  In such a layout, selection may mean absolutely nothing.  So why would you require that EditorVR immediately assume what you want to do is start poking about and selecting scene game objects?

From the top level, the user might say, “yea, let’s edit this scene!”  Heck, Unity can even develop multiple experimental toolsets for that purpose.  Which might be a good idea for something so nascent.  But please consider that the decision to interact with the scene at an object and hierarchy level is a contextual decision, and one that you should not necessarily assume of the user.

A huge amount of what has been done is extremely useful and valuable, but needs to be optionally consumed by an editing context that has been purposefully entered by the user.
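To be concrete about what I’m asking for, here is a purely hypothetical sketch of the kind of minimal, top-level context contract I have in mind.  None of these names exist in EditorVR; they’re invented for illustration.

    // Hypothetical context contract: the only prescribed behavior is a
    // guaranteed way in and a guaranteed way out.
    using UnityEngine;

    public interface IEditingContext
    {
        string Name { get; }
        void Enter(Transform rig);  // build your own UI, tools and input
        void Exit();                // tear it all down; leave no residue
    }

    public static class ContextSwitcher
    {
        static IEditingContext current;

        public static void Switch(IEditingContext next, Transform rig)
        {
            if (current != null) current.Exit();
            current = next;
            if (current != null) current.Enter(rig);
        }
    }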

Takeaways for Tools Developers

I’m going to be watching EditorVR very closely.  I’m not quite ready to adopt it fully though, until I see that I can clean out the clunkiness and work on something refined.  EditorVR should want me to be able to do that.  And I don’t think it’s wise to fight the design from the outside.

As such, I think it’s a bit too risky to commit to it until I see them iron out their design to be a bit more humble.

That does not mean it should be considered a failure.

To the contrary, I think it’s a triumph.  But yesterday’s triumphs are today’s antiquated systems.  That phrase is supposed to evoke images of feats over decades.  But in this case, it’s literally a matter of days.  EditorVR is a triumph the day it comes out, and the very next day it’s a roadblock to itself.

And as such, I sound a bit like a schizophrenic.