VRUI
Posted: Wed Jul 02, 2014 11:29 am
I preordered an Oculus Rift. I want to do something fun and experimental with it.
I found this video on YouTube: "Concept for an Oculus Rift user interface."
That has got me thinking about interfaces designed for head mounted displays. I like experimental things, where you throw away every preconception of how something should work and try to design something different.
With head mounted displays, such as the Oculus Rift, you have added elements like stereoscopy (a different image per eye), rotation (being able to turn your head), and position tracking (being able to move your head to look around something). It gives the illusion of being immersed in a 3D environment. I think this could make for an interesting user interface.
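To make those three elements concrete, here is a minimal sketch (plain math, not the actual Oculus SDK) of how they might feed into a per-eye camera setup. The yaw/pitch angles, the head position, and the ~64 mm eye separation are all assumed values for illustration:

import numpy as np

def yaw_pitch_to_rotation(yaw, pitch):
    """Build a rotation matrix from head yaw (left/right) and pitch (up/down), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    yaw_m = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    pitch_m = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return yaw_m @ pitch_m

def eye_positions(head_pos, head_rot, ipd=0.064):
    """Stereoscopy: two cameras offset left/right of the tracked head position.
    An interpupillary distance of ~64 mm is a typical assumption."""
    right = head_rot @ np.array([1.0, 0.0, 0.0])
    return head_pos - right * (ipd / 2), head_pos + right * (ipd / 2)

# Rotation tracking: turning the head changes head_rot.
# Position tracking: moving the head changes head_pos, so you can look *around* things.
head_pos = np.array([0.0, 1.7, 0.0])  # roughly standing eye height, in metres (assumed)
head_rot = yaw_pitch_to_rotation(np.radians(30), np.radians(-10))
print(eye_positions(head_pos, head_rot))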
There are limitations - while you do have impressive freedom to view the world, most users' interactions will still be limited to the standard input devices - the keyboard and mouse. Some people have paired the Oculus Rift with alternative input devices like motion tracking gloves, Wiimotes, or Kinect cameras. I'm going to be talking specifically about UIs you could design with a head mounted display + keyboard + mouse, but it would also be interesting to talk about UIs you could design with alternative input devices.
The problem with representing things spatially is that we can often only see a part of what we're interacting with at once. Programs are complex systems that need to be both comprehended and interacted with.
I don't think comprehending programs spatially will be a challenge. Our brains are capable of comprehending things spatially even when we can only see a little at one time. We can easily comprehend the layout of a house - even a multi-story house with a three-dimensional layout - while only seeing a part of the interior at a time.
If a program is represented as a 3D object, we can look around it, inside it, and get a pretty good idea of how the major parts of it are laid out. It's easy to comprehend.
The next element is interacting with a complex system. We need to make the interface intuitive and accessible. We want all relevant information accessible conveniently. This does not mean it all has to be on screen at once, but that we can access it almost immediately when needed.
For example, when driving a car, we can only focus our vision on a limited range of inputs at one time - the dashboard, our mirrors, the road ahead. Yet we feel that all of that information is accessible, because we can simply turn our head or move our eyes and have any information relevant to driving the car available to us.
If we are multitasking, such as reading a book and watching a movie simultaneously, we want both things conveniently accessible. The most convenient action is rotating our head between the two items, to the point that both things feel accessible on demand. If the book was in one room and the movie was playing in another, having to walk between the rooms would lower the accessibility of both tasks, and we would lose the ability to switch from one to the other near instantaneously.
In a VR user interface, we don't necessarily need the entire interface on screen at once. We just want everything accessible near instantaneously. We could represent our interface mapped onto a sphere, and we would feel relatively unrestricted, as if we could see it all simultaneously and conveniently, because all we would have to do is rotate our head.
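Here's a minimal sketch of that sphere idea, assuming a handful of hypothetical panels placed at fixed yaw/pitch directions around the viewer. The panel names and angles are made up for illustration; the head tracker would supply the gaze angles, and the panel whose direction is nearest the gaze becomes the focused one:

import numpy as np

# Hypothetical panels on a sphere around the viewer, each at a (yaw, pitch)
# direction in degrees. Names and angles are illustrative only.
PANELS = {
    "editor":   (0.0,   0.0),    # straight ahead
    "terminal": (45.0,  0.0),    # glance right
    "docs":     (-45.0, 0.0),    # glance left
    "music":    (0.0,  -30.0),   # look down
}

def direction(yaw_deg, pitch_deg):
    """Unit view direction for a yaw/pitch pair (degrees), y up, -z forward."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([
        np.sin(yaw) * np.cos(pitch),
        np.sin(pitch),
        -np.cos(yaw) * np.cos(pitch),
    ])

def panel_under_gaze(head_yaw_deg, head_pitch_deg):
    """Return the panel whose direction is closest to where the head is pointing."""
    gaze = direction(head_yaw_deg, head_pitch_deg)
    return max(PANELS, key=lambda name: np.dot(gaze, direction(*PANELS[name])))

# Rotate your head 40 degrees to the right: the terminal panel becomes the focus.
print(panel_under_gaze(40.0, 0.0))   # -> "terminal"

The point isn't the picking math - it's that a head turn of a few degrees is enough to bring any part of the interface into focus, so nothing has to fight for space on a single flat screen.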
This is the same as multi-monitor setups. Even though we are only focusing on one screen at a time, it's more convenient to simply look between two screens than to switch windows on a single screen. It gives the illusion that both are accessible simultaneously.
[continued in next post]