VR – Mass or Niche Market?
The Interface is a Key to Successful Deployment of VR
When you use a computer these days it is mostly quite straightforward, at least for familiar everyday programs. When you try a new application, again it is pretty familiar. If you have to consult a manual it is either because you’re trying something very unusual, or because the user interface to the program is not very good. Manuals are often incomprehensible, and anyway these days we don’t have much time to consult them. We expect everything to just ‘work’ and be easy to understand.

The interface is typically so good that we don’t even think about it – mostly it feels like we are operating on physical objects. Just as I might pick up an empty water bottle and move it from the table to the bin, so on the computer I ‘pick up’ an icon representing a file and move it to the ‘bin’. I’m hardly aware that to do this I use a mouse (or touchpad) to move a cursor over the icon, press and hold down the mouse button, move the mouse so that the icon attached to the cursor slides across the screen until it is over another icon representing a bin, check that the bin is highlighted in some way, and then let go of the mouse button – ‘poof!’, the file is in the bin and deleted. Every so often, just like the bin in my kitchen, I have to ‘empty’ the bin so that the things in it are finally ‘really’ gone – though, just as in physical reality, they are not really gone: they are scattered, and no one could any longer pinpoint where a particular item is located.
This direct interface, this transparency between our intentions, our actions and what is ‘really’ going on underneath, is not the natural order of things. These ideas were invented, researched, argued about, invested in, fought over, rejected and advanced over many years. Many of them go back to the 1960s and the work of Douglas Engelbart. Xerox PARC put about a decade of investment into them in the 1970s, working on bitmapped screens, the mouse, the cursor, menus, windows, icons and so on. Apple carried these ideas much further and eventually transformed the computer interface into the form we know today, by building the first really mass-market consumer products that embodied these concepts. The first Mac was released in January 1984. There is an excellent long read in The New Yorker which gives some of the history.
For VR to become a mass consumer product it has to be as usable as a modern computer or smartphone, with transparent interfaces. Unfortunately it has not yet reached that point, because the technology has not had the equivalent of a Xerox PARC – a single group working on common goals for a decade to really explore and create this new medium. Instead, the research has been scattered across labs and companies around the world.
There are two aspects to the interface problem
The first is the launching of the VR program itself, and the second is what happens while it is running. Setting up and running the program is a traditional 2D user interface problem. This may be on an external device such as a laptop or phone, or even within the VR itself. However, a 2D menu that pops up in VR doesn’t belong in that world.
In VR, 2D ideas such as cursors, the mouse and windows just don’t belong. Moreover, this highlights another problem. We can think of a user interface as a language. The fundamental elements of any modern screen-based user interface form a grammar. Moving the cursor (via the mouse or trackpad), pointing and clicking can be considered the verbs. There are abstractions, such as files and folders, applications, and their representations as icons, which can be considered the nouns. A sentence such as ‘open a folder’ strings together the words: move the cursor [to the icon], point and click, and the folder opens into a window.
Any user interface is like a set of sentences. In a way, any particular instance of using an interface is a story: “I opened this file, I put the file in that folder, I moved this file to the bin, I moved the cursor to that icon, I clicked it twice, it opened into a window, which displayed another user interface to an application, …”. The elements of this language – in fact this universal language, since it is understood almost anywhere in the world – were developed over many decades, as we have seen.
Just use your head
What is interesting, and remarkable, is that in VR there is only one language element that is universally understood: you can look around in VR by moving your head. This is the only universal! When you enter a VR application for the first time this is the only thing that you can expect. Nothing else at all is certain, and you will have to learn it. For example, you enter a VR and you want to change your position – move somewhere else in the virtual environment. How do you do it?
When you enter into a virtual environment you will not know how to move – you have to have learned it. Unlike moving a file from one folder to another on a 2D interface, which everyone knows how to do without thinking about it, there is no universal method for walking in VR. The same is true of another basic operation: selecting an object. So you’re in the virtual environment and you want to select an object – it may be for the purpose of grabbing it, or just to select it for some other reason. How should you do this? It won’t be obvious. Different applications use different techniques, and you will have to learn which technique is used and probably practise a lot before getting it right.
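To make this concrete, one common (though by no means universal) selection technique is ray-casting: a ray is cast from the controller or gaze direction, and the nearest object it hits becomes the selection. The following is a minimal geometric sketch of the idea, not code from any real VR toolkit; the object names and the sphere-only scene are hypothetical simplifications.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along a unit-direction ray to a sphere's surface, or None on a miss."""
    # Vector from ray origin to sphere centre.
    oc = [c - o for o, c in zip(origin, center)]
    # Projection of that vector onto the ray direction.
    t = sum(a * b for a, b in zip(oc, direction))
    # Squared perpendicular distance from sphere centre to the ray.
    d2 = sum(a * a for a in oc) - t * t
    if t < 0 or d2 > radius * radius:
        return None  # sphere is behind the ray or the ray passes it by
    return t - math.sqrt(radius * radius - d2)

def select_object(origin, direction, objects):
    """Return the name of the nearest object hit by the pointing ray, or None."""
    best, best_t = None, math.inf
    for name, center, radius in objects:
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None and t < best_t:
            best, best_t = name, t
    return best

# Controller at the origin pointing along +z; two spheres ahead of the user.
scene = [("door_handle", (0.0, 0.0, 2.0), 0.3),
         ("far_lamp", (0.0, 0.0, 5.0), 0.3)]
print(select_object((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene))
```

Real applications vary exactly this kind of detail – ray versus gaze versus direct touch, how the selection is confirmed, how the hit is highlighted – which is why the technique has to be learned per application.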
It’s easier to do things in VR with the right visual cues
In the physical world we have affordances – we see a flat horizontal surface of a certain height and we know that we can sit on it (if permitted). No one has to tell us this. We see a window, we know we can look through it. A switch or a button, we know we can press it, and would normally understand what would happen as a result of our actions. We function very well without people having to explain how to do things. Suppose, for instance, that we are on a train in a foreign country and wonder how to open the door when the train stops. What do we do? We instinctively look for a button to press or a lever to pull. We know the objects we are looking for and understand that they will give us the result we want.
Since VR often depicts scenes from reality (even if they are fantastic scenes that could not happen in reality, they still typically have some intersection with reality – like a floor, the ground, rooms, trees, etc.) there will be natural affordances visible in the environment. For example, a round object on a vertical flat surface that seems to be cut out a bit from the surrounding flat surface is likely to be a handle to a door, and probably if you touch it something will happen like the door opening. An archway beyond which you can see another environment beckons you to go through it.
The lesson here is that in the VR, participants should be able to simply infer what they are supposed to be doing and how they are supposed to act from the affordances and events in the environment. Do not make selections through a menu, but exploit what the environment itself offers. If you want people to pick up an object, give it a handle. If you want people to go to a specific place then make that place the obvious place to go through events in the scenario itself. Perhaps objects have certain associated actions or properties. An object can be given an inviting button to press so that when the participant is tempted to touch it, the object gives out its information with explanations or examples about how to use it.
Although not really affordances, participants can learn to do actions by example. Let’s consider a particular case. In order to make body tracking work, some programs require a calibration where the participant has to stand in a certain stance for a while. How can they know what to do? One obvious solution is to have an operator of the program who, once the participant is in the VR, tells them how to stand and gives them the instructions. This works very well – but requires an operator. Another possibility is that you go into the VR and see a screen of written instructions informing you, with a picture, about how you are supposed to stand. This could work, but requires you to read a lot of text, which is not particularly comfortable at the resolution of today’s HMDs.
Of course the text could be replaced by a voice to make it easier. But this is VR! Show a virtual human body, in the same space as you, near you, and standing as you are supposed to stand. The virtual human could be saying “Adopt the same posture as me” or whatever. The program could show a reflection of your body standing next to the virtual demonstrator, as its current calibration allows, and this process of adjusting your body and the testing of the calibration continues until convergence. This is using VR in order to set up the VR. There will be many other possibilities, but the important point is this – use VR itself as the interface to VR.
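The calibrate-by-imitation loop described above can be sketched very simply: each frame, compare the participant’s tracked joint positions against the demonstrator’s target pose, and declare convergence once every joint is within tolerance. This is a toy illustration only – the joint names, positions and tolerance are invented for the example and do not come from any real tracking SDK.

```python
import math

def pose_error(tracked, target):
    """Per-joint Euclidean distance (in metres) between tracked and target positions."""
    return {joint: math.dist(tracked[joint], target[joint]) for joint in target}

def calibrated(tracked, target, tolerance=0.05):
    """True when every joint is within `tolerance` metres of the demonstrator's pose."""
    return all(err <= tolerance for err in pose_error(tracked, target).values())

# The virtual demonstrator's T-pose (hypothetical joints, positions in metres).
target = {"head": (0.0, 1.70, 0.0),
          "left_hand": (-0.8, 1.4, 0.0),
          "right_hand": (0.8, 1.4, 0.0)}

# One frame of tracking data: the participant is close, but the left hand is off.
tracked = {"head": (0.0, 1.68, 0.0),
           "left_hand": (-0.7, 1.3, 0.0),
           "right_hand": (0.79, 1.41, 0.0)}

for joint, err in pose_error(tracked, target).items():
    print(joint, "off by", round(err, 2), "m")
print("calibrated:", calibrated(tracked, target))
```

In the scenario described in the text, the per-joint errors would drive the feedback shown on the participant’s reflection – the loop runs until `calibrated` returns true, i.e. until convergence.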
Test, test and test again
A most important message is that whatever interface you choose, someone will break it. Hence a lot of testing with participants is absolutely vital. People will come to interpret things in ways that the designers would never suspect, and they cannot know this without testing. Use natural affordances wherever possible, but remember that they have to be tested with lots of participants.
To conclude: VR has made enormous strides towards becoming a consumer product in recent years. However, there is still no universally recognised interface by which people can carry out activities within the virtual environment. There are many ideas, a lot of past research, there are many poor practices that are automatically carried over from 2D interfaces to 3D, but typically a participant has to learn how to operate in any particular application. This is so unlike normal everyday use of computers, where there is a universally recognised language with its own grammar – where typically you can start up any computer and have a pretty good understanding of how to use it without anyone explaining anything to you. And, dare we say it, without reading a manual.
For VR to become a mass consumer product that is straightforward to enjoy, this universal language has to be developed. Companies developing VR applications must put a huge amount of thought and effort into this, with experimentation and testing with naive participants, and also with participants who have become used to the application (since the needs of each group will be different). Companies must do this if VR is going to break out of its restricted shell into the mass market.
One day some company will get it spot on and develop an interface that everyone will adopt. It could be your company, it should be your company, unless of course you want to be paying royalties to another one.
A founder of Virtual Bodyworks.
Immersive Fellow, Digital Catapult.