Meet Google’s Next-Generation Virtual Reality Platform
Yesterday, at Google’s I/O conference, the company showed off the future of virtual reality, and it isn’t all made of cardboard. This is surprising, because Google’s previous ventures into the space have been, well, a little half-hearted.
Don’t get me wrong: Google Cardboard, a minimalist VR shell for smartphones, is undeniably neat, but its limitations are clear. It’s one of Google’s famous 20% projects, a labor of love by a handful of employees, and it shows. Cardboard has a narrow field of view, motion blur, no positional tracking, and sickening latency. It’s a recipe for poor immersion and VR sickness.
Google Cardboard doesn’t have a headband, and that’s not an oversight. Without a head strap, users have to hold it up to their faces, forcing them to turn with their torsos. That slows them down with a perceptible lag. And that’s okay; we’re talking about a headset made of cardboard. It’s still a valuable tool for getting people into VR, but it’s not a game changer.
All that is starting to change. Google employs a non-negligible fraction of the world’s brainpower, and at this year’s I/O conference it started to flex some of that might in the VR space. The announcements are threefold:
- First, there’s a significant refinement of the Cardboard headset.
- Second, there’s a VR camera designed to produce a seamless 3D experience.
- Finally, there’s Tango VR, a new VR/AR platform built around Google’s enigmatic 3D-sensing tablet.
Ready? Let’s dig in.
New Cardboard Headset
The new design for Google Cardboard is a definite improvement: it adds support for larger phones (up to six inches), includes iOS support, and adds a small physical lever on the side of the headset for input. It’s also designed to be easier to assemble.
Aside from that, it’s pretty much the same story: small off-the-shelf lenses, a cardboard shell, and plenty of motion sickness if you use it for too long. It’s a marked improvement, but we’re pretty close to the limit of what cardboard-style shells can do, and Google is in a holding pattern until it can build something more capable.
The Jump VR Camera
The new camera, developed in conjunction with GoPro, is called Jump. It consists of 16 GoPro cameras (total cost: around $10,000) arranged on a circular plastic mount. So far, so mundane: similar mounts for producing VR video are already on the market. Where it gets really interesting is the so-called “Jump Assembler,” a piece of software created by Google to eliminate jarring VR artifacts.
To understand this software, you have to understand the problem it’s trying to solve. Oculus’ John Carmack has talked about the problems with 3D panoramic capture:
“What they wind up doing is, you’ve got slices taken from multiple cameras, so straight ahead it’s the proper stereo […] and then over here it’s proper for this [angle]. But that means that if you’re looking at what was right for the eyes over here but you are looking at out of the corner of your eye over here, […] it’s not the right disparity for the eyes. And then, even worse, if you [roll your head], it gets all kind of bad, because it’s set up just for the eyes straight ahead.”
Jump is designed to fix this. Google uses machine vision to compare the overlapping images and work out the 3D geometry of the scene. The software then uses this reconstruction to render a clean, stitch-free perspective, with correct stereo from every angle.
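The underlying principle is the classic stereo relation: the farther a feature shifts between two cameras a known distance apart, the closer it is. Here is a minimal sketch of that relation with illustrative numbers; it is not the Jump Assembler itself, which solves a far harder multi-camera version of the problem:

```python
# Illustrative sketch of the core idea behind stereo reconstruction
# (not Google's Jump Assembler): with two cameras a known distance
# apart, the horizontal shift (disparity) of a feature between the
# two images reveals its depth.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: a feature shifted 20 px between cameras
# 6 cm apart, with a focal length of 1000 px.
print(depth_from_disparity(1000, 0.06, 20))  # 3.0 (metres)
```

Do this for every feature visible in two or more cameras and you get a point cloud of the scene, which is roughly the 3D reconstruction the Assembler builds on.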
Google plans to allow this sort of content to be viewed through YouTube using Google Cardboard — though there’s no word yet on support for more sophisticated VR headsets like the Oculus Rift.
That’s already a huge win. Yet the technology can be used to do more, a lot more. During the demo, Google showed the virtual camera moving around inside the ring, smoothly updating the perspective. Right now, Google bakes the 3D data down into stereo panoramas. The illusion holds as you turn your head, but breaks down if you roll it or move around.
In the future, though, Google could show you the reconstructed 3D geometry directly, giving you a full VR video experience. Maybe one that supports every axis of rotation, as well as a significant degree of positional tracking. This is not yet implemented, but it is a tantalizing possibility for a major step forward in VR video.
The Positional Tracking Problem
Google Cardboard has a lot of issues, but even the best example of mobile VR, Samsung’s Gear VR headset, still has one major drawback: no positional tracking.
Rotational tracking is pretty straightforward. Gyroscopes detect how much your head turns, and Earth’s magnetic field provides a convenient absolute reference to prevent drift. This amounts to a couple of dollars’ worth of sensors, which already come pre-installed in most smartphones.
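The drift-correction idea can be sketched with a simple complementary filter, a common technique for this kind of sensor fusion. The constants and numbers below are illustrative, not taken from any shipping headset:

```python
# Minimal sensor-fusion sketch (illustrative, not any vendor's code):
# the gyroscope is smooth but drifts over time; the magnetometer is
# noisy but gives an absolute heading. Blending the two keeps the
# estimate smooth short-term while pinning it to north long-term.
def fuse_heading(gyro_heading, mag_heading, alpha=0.98):
    """Complementary filter: mostly trust the gyro, but let the
    magnetometer slowly pull the estimate back toward truth."""
    return alpha * gyro_heading + (1 - alpha) * mag_heading

# Simulate: the gyro estimate has drifted 5 degrees off true north (0).
estimate = 5.0
for _ in range(200):
    # Each fusion step removes a small fraction of the drift.
    estimate = fuse_heading(estimate, 0.0)
print(round(estimate, 3))  # drift decays toward zero
```

Real headsets fuse accelerometer data as well and work with full 3D orientation, but the principle is the same: cheap sensors plus a little math.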
Positional tracking is a whole other plate of poutine. Different companies have tackled the problem in different ways. Oculus pairs an external camera with tracking dots on the headset, allowing the headset to be tracked using machine vision.
HTC and Valve take a different approach with their Vive headset, using “Lighthouse” base stations with rotating drums that sweep the entire room with laser light. The headset and controllers are covered in light sensors, which detect (with extremely precise timing) when they are struck by that laser. By comparing those timings and performing some basic geometric calculations, any three sensors can collaborate to determine the exact position of the device in space, relative to the base stations.
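The geometry behind this can be sketched in two dimensions: the time between a sync pulse and a laser hit gives a bearing angle from each station, and two bearings intersect at the sensor’s position. All numbers and function names below are hypothetical; real Lighthouse tracking works in 3D with many sensors per device:

```python
import math

# Toy 2D illustration of sweep-timing tracking (hypothetical numbers,
# not Valve's implementation). A drum spinning at 60 Hz sweeps a laser
# through the room; the delay after the sync pulse encodes the angle.
SWEEP_HZ = 60.0

def hit_time_to_angle(seconds_after_sync):
    """A full revolution takes 1/60 s, so elapsed time maps to angle."""
    return 2 * math.pi * SWEEP_HZ * seconds_after_sync

def triangulate(p1, a1, p2, a2):
    """Intersect two bearing rays from stations at p1 and p2 (2D)."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(a1), math.sin(a1))  # unit direction of ray 1
    d2 = (math.cos(a2), math.sin(a2))  # unit direction of ray 2
    # Solve p1 + t*d1 = p2 + s*d2 for t via a 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# A sensor at (2, 1): station A at the origin sees it at atan2(1, 2),
# station B at (4, 0) sees it at atan2(1, -2).
print(triangulate((0, 0), math.atan2(1, 2), (4, 0), math.atan2(1, -2)))
```

Because the only on-device electronics are cheap photodiodes and a timer, the headset does the heavy lifting itself, which is exactly why the base stations can be so simple.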
Both of these approaches have advantages, and we can debate their relative merits. However, both have a drawback that becomes crucial in the mobile world: you need external hardware to track the headset.
Needless to say, this is a problem for a mobile headset. The whole point of mobile VR is to free yourself from the cables and infrastructure needed for higher-end desktop experiences. So, what can we do about it? The answer, at least according to Google, is to use a technology called visual SLAM (Simultaneous Localization and Mapping), in conjunction with a depth sensor, to track points in the world. From these tracked points, the device builds a 3D mesh of its environment and uses it to determine where it is in space relative to its starting position.
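The point-tracking idea can be illustrated with a toy example: if the same tracked features appear shifted between two camera frames, the shift tells you how the view, and therefore the device, moved. Real visual SLAM estimates a full six-degree-of-freedom pose and builds a persistent map; this sketch only conveys the flavor:

```python
# Toy illustration of the idea behind visual odometry (not Tango's
# algorithm): given matched feature points in two consecutive frames,
# the average displacement is the least-squares estimate of a pure
# translation between them.
def estimate_translation(points_a, points_b):
    """Average the per-feature shift between two matched point sets."""
    n = len(points_a)
    dx = sum(b[0] - a[0] for a, b in zip(points_a, points_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(points_a, points_b)) / n
    return (dx, dy)

# Hypothetical feature positions in two frames: every feature has
# moved by (2, 3), so the camera view shifted by that amount.
frame1 = [(10, 10), (50, 20), (30, 40)]
frame2 = [(12, 13), (52, 23), (32, 43)]
print(estimate_translation(frame1, frame2))  # (2.0, 3.0)
```

Tango’s depth sensor makes this far more robust by giving each tracked point a true distance, so motion can be recovered in metric 3D rather than just in pixels.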
Project Tango
Google has already shipped some early hardware with this functionality in the form of the Tango tablet, which we’ve covered here before. Google thinks the sensor suite could revolutionize the way we interact with devices. As developer Johnny Lee puts it in the video below:
“Rather than force our entire lives into this very small rectangle [of a smartphone screen], Project Tango is developing the technologies to help everything and everyone understand precisely where they are. Anywhere. Not just where there’s good GPS coverage, good WIFI coverage […] The devices we use will share our understanding of space.”
This technology has exciting applications in virtual reality, and, recently, Google has begun to show them off.
This stuff is impressive, but it’s also early days. Google is using an off-the-shelf holster for its VR demo, along with honest-to-God NERF guns. The tablet itself isn’t designed for VR: it’s heavy, its latency is high, and the ergonomics are awful.
That said, it might be the most impressive mobile VR demo the world has ever seen, strictly because of the positional tracking it provides. The headset allows you to move freely through any indoor space (provided there’s enough light for the camera to see). That opens up some exciting possibilities for mixed-reality applications.
Many of these drawbacks should improve soon: Google has also announced that it’s partnering with processor-maker Qualcomm and has created a smartphone containing the full Project Tango sensor suite, plus the hardware to support it. These smartphones will serve as reference designs for future Tango-compliant handsets, which should be much better suited to VR applications.
As for the positional tracking itself, the tracking looks smooth. But what looks good on a computer screen can feel different when you’re inside of it. Microsoft’s HoloLens uses a similar technology to position itself, and has noticeable glitches in its stability.
In his presentation, Johnny Lee mentions that the tracking, under good conditions, is accurate down to “a few millimeters or a centimeter”. That’s impressive, but not quite up to the sub-millimeter standard for virtual reality. It sounds like it may take another generation or two before the tracking is good enough to give tethered desktop VR a run for its money.
The Future, Powered by Google
If you take all of this together, you can start to see the shape of something interesting emerging in two or three years. Imagine, for a moment, that a few years from now new smartphones come with Tango hardware, a VR holster, and a version of Android optimized for low-latency VR. In that future, it would not be uncommon to see people confidently strutting down the street, phone glued to their face, immersed in their own augmented reality.
In that future, VR cameras are everywhere, and you can, in a moment, jump into VR video from anything on Earth. VR is cheap, VR is ubiquitous, and VR is a consistently comfortable, immersive experience, available to millions of people. That’s the dream, and Google has brought it tantalizingly close to reality.
What do you think? Are you interested in VR video? Can you see yourself buying a mobile VR headset? Let us know in the comments!