My colleague Ben Lang, over at Road to VR, brought news my way of the latest acquisition by Oculus VR, following the company’s formal announcement on May 26th.
Surreal Vision is a UK-based company which grew out of Imperial College London and sits at the bleeding edge of computer vision technology. One of the founders is Renato Salas-Moreno, who developed SLAM++ (simultaneous localization and mapping) technology. As Ben explains in the Road to VR blog post:
Using input from a single depth camera, SLAM++ tracks its own position while mapping the environment, and does so while recognizing discrete objects like chairs and tables as being separate from themselves and other geometry like the floor and walls.
SLAM therefore offers the potential to take a physical environment, scan it, and literally drop it into a virtual environment, allowing people to interact with the virtual instances of the objects within it.
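To make that idea a little more concrete, here is a minimal, purely illustrative sketch, in Python, of the object-level map concept behind SLAM++: the camera pose is tracked incrementally while recognised objects are stored as discrete entries with their own poses, rather than as undifferentiated geometry. This is not Surreal Vision’s or Oculus’ code, and every class, function and value here is a hypothetical stand-in for what a real system would do with full pose-graph optimisation and 3D object recognition.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class ObjectInstance:
    """A recognised object (e.g. a chair) stored as a discrete map entry."""
    label: str                 # semantic class, e.g. "chair"
    pose_world: np.ndarray     # 4x4 homogeneous pose of the object in the world frame

@dataclass
class ObjectLevelMap:
    """Toy object-level map: a tracked camera pose plus recognised objects."""
    camera_pose_world: np.ndarray = field(default_factory=lambda: np.eye(4))
    objects: list = field(default_factory=list)

    def update_camera(self, delta_pose: np.ndarray) -> None:
        """Apply an incremental camera motion estimate (the tracking step)."""
        self.camera_pose_world = self.camera_pose_world @ delta_pose

    def add_detection(self, label: str, pose_in_camera: np.ndarray) -> None:
        """Lift an object detected in the camera frame into the world map."""
        pose_world = self.camera_pose_world @ pose_in_camera
        self.objects.append(ObjectInstance(label, pose_world))

# Hypothetical example: the camera moves 0.5 m forward, then sees a chair 2 m ahead.
if __name__ == "__main__":
    slam_map = ObjectLevelMap()
    forward = np.eye(4); forward[2, 3] = 0.5
    slam_map.update_camera(forward)
    chair_in_cam = np.eye(4); chair_in_cam[2, 3] = 2.0
    slam_map.add_detection("chair", chair_in_cam)
    print(slam_map.objects[0].pose_world[:3, 3])   # chair ends up at z = 2.5 in the world frame
```

The point of storing the chair as a single labelled entry, rather than thousands of anonymous surface points, is exactly what makes it possible to drop a recognised object straight into a virtual scene and let people interact with it.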
The other two founders of Surreal Vision are equally notable. Richard Newcombe is the inventor of KinectFusion, DynamicFusion and DTAM (Dense Tracking and Mapping), and worked with Salas-Moreno on SLAM++, while Steven Lovegrove co-invented DTAM with Newcombe and authored SplineFusion. All three will apparently be relocating to the Oculus Research facilities in Redmond, Washington.
The acquisition is particularly notable in that it follows on from Oculus VR’s acquisition of 13th Lab at the end of 2014, another company working with SLAM capabilities. 13th Lab was acquired alongside Nimble VR, a company developing a hand tracking system. However, at the time of those acquisitions, it was unclear which aspects of the work carried out by the two companies would be carried forward under the Oculus banner.

Surreal Vision seems to have been given greater freedom, with the Oculus VR announcement of the acquisition including a statement from the team on their hopes for the future, which reads in part:
At Surreal Vision, we are overhauling state-of-the-art 3D scene reconstruction algorithms to provide a rich, up-to-date model of everything in the environment including people and their interactions with each other. We’re developing breakthrough techniques to capture, interpret, manage, analyse, and finally reproject in real-time a model of reality back to the user in a way that feels real, creating a new, mixed reality that brings together the virtual and real worlds.
Ultimately, these technologies will lead to VR and AR systems that can be used in any condition, day or night, indoors or outdoors. They will open the door to true telepresence, where people can visit anyone, anywhere.

On May 21st, Oculus VR also confirmed that their 2nd annual Oculus Connect conference – Connect 2 – will take place between September 23rd and September 25th at the Loews Hollywood Hotel in Hollywood, CA.
The conference will feature keynote addresses from Oculus VR’s CEO, Brendan Iribe, Chief Scientist Michael Abrash, and CTO John Carmack. It promises to deliver “everything developers need to know to launch on the Rift and Gear VR”. As noted in the media and on this blog, the launch of the former is now set for the first quarter of 2016, while it is anticipated that the formal launch of the Oculus-powered Gear VR system from Samsung could occur around October / November 2015.
System specifications for the consumer version of the Oculus Rift were announced on May 15th, and caused some upset and disappointment, with the company indicating that the initial release of the headset would be for the Windows environment only – there would be no support for Linux or Mac OS X.
Atman Binstock, Chief Architect at Oculus and technical director of the Rift, issued a blog post on the system requirements the day they were announced, in which he explained the Linux / OS X decision thus:
Our development for OS X and Linux has been paused in order to focus on delivering a high quality consumer-level VR experience at launch across hardware, software, and content on Windows. We want to get back to development for OS X and Linux but we don’t have a timeline.
The Windows specifications were summarised as: NVIDIA GTX 970 / AMD 290 equivalent or greater; Intel i5-4590 equivalent or greater; 8GB+ RAM; compatible HDMI 1.3 video output; 2x USB 3.0 ports; and Windows 7 SP1 or later. All of which, Binstock said, are intended to allow the headset “to deliver comfortable, sustained presence – a ‘conversion on contact’ experience that can instantly transform the way people think about virtual reality.”
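For illustration only, the published minimums boil down to a simple checklist that can be expressed in a few lines of code. The sketch below (Python; not an Oculus tool, with hypothetical field names and a made-up sample machine) just compares a machine description against the RAM, USB, HDMI and operating system requirements listed above; in practice the CPU and GPU comparisons are the harder part, since “equivalent or greater” is a benchmark judgement rather than a simple number.

```python
# Minimal, illustrative check of a machine against the published Rift minimums.
# This is not an official Oculus tool; field names and sample data are hypothetical.

MIN_SPEC = {
    "ram_gb": 8,
    "usb3_ports": 2,
    "hdmi_version": 1.3,
}

def meets_minimum(machine: dict) -> list:
    """Return a list of requirements the machine fails to meet (empty list = OK)."""
    failures = []
    if machine.get("ram_gb", 0) < MIN_SPEC["ram_gb"]:
        failures.append("RAM below 8 GB")
    if machine.get("usb3_ports", 0) < MIN_SPEC["usb3_ports"]:
        failures.append("fewer than 2x USB 3.0 ports")
    if machine.get("hdmi_version", 0) < MIN_SPEC["hdmi_version"]:
        failures.append("no HDMI 1.3 compatible output")
    if not machine.get("windows_7_sp1_or_later", False):
        failures.append("OS older than Windows 7 SP1")
    return failures

# Hypothetical example machine
machine = {"ram_gb": 16, "usb3_ports": 2, "hdmi_version": 1.4,
           "windows_7_sp1_or_later": True}
print(meets_minimum(machine) or "meets the published minimum spec")
```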