How the BBC achieved a world’s first in live event VR streaming

The Rift is highly anticipated by the gaming community, and there’s a lot of interest from developers in building for this platform. We’re going to focus on helping Oculus build out their product and develop partnerships to support more games. Oculus will continue operating independently within Facebook to achieve this.

But this is just the start. After games, we’re going to make Oculus a platform for many other experiences. Imagine enjoying a court side seat at a game …..

Also sprach Zuckerberg (sorry, couldn’t resist; blame the evening wine) on March 25th, the day on which Facebook acquired Oculus VR amidst much wailing and gnashing of teeth. At the time, it seemed his vision of this VR utopia of court side seats for all at Wimbledon and all these other fine things was perhaps a decade away. Indeed, given some of the areas where the technology still needs time to mature, it may well still prove to be up to a decade away; but that hasn’t stopped the BBC from seeing how it all might work.

As many from the Commonwealth nations will likely know, we’ve just seen Scotland host the XX Commonwealth Games (for those who don’t know: to put it in a nutshell, think Olympics with fewer nations, and you’ll get the idea). The BBC were the primary broadcaster for the Games, and they used the opportunity to make Zuckerberg’s vision a reality, if only on an experimental scale, by transmitting elements of the gymnastics events at the Games in real-time as a VR experience – the very first time anywhere in the world that such a feat has been undertaken. The results of this effort were recently reported by the BBC’s digital magazine programme, Click, broadcast on the BBC News channel, and from which this article is largely drawn.

The experiment comprised three parts. First, a 360-degree, 7-lens video camera pod (6 lenses to record the view around the pod, the 7th to capture the overhead view) and a spatial microphone were set-up in front of the SSE Hydro Arena seats, the camera positioned at the same eye-level as spectators.

The 360-degree cameras (l) were installed at the same eye-level as people sitting in the arena seats

The video from all seven cameras and audio from the microphone was fed directly to the second element of the experiment, a computer system running software designed to stitch all seven video elements into a seamless whole, overlaid with the sounds from within the arena captured by the microphone. The finished film was then streamed to the third element in the experiment: a booth within the Glasgow Science Centre where members of the public could don an Oculus headset and a set of earphones and find themselves immersed in the Hydro Arena, watching the gymnastics as they happened.
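The stitching stage is the heart of that pipeline. As a rough illustration of the idea only (the BBC’s actual software wasn’t detailed in the programme), the sketch below cross-fades a ring of side-by-side camera frames into a single panoramic strip; a real 360-degree stitcher would also warp each frame into an equirectangular projection and align features across the seams:

```python
import numpy as np

def stitch_panorama(frames, overlap=32):
    """Naively blend a ring of side-by-side camera frames into one
    panoramic strip by cross-fading each overlapping seam.
    (Real 360-degree stitching also warps each frame to an
    equirectangular projection; that step is omitted here.)"""
    fade = np.linspace(0.0, 1.0, overlap)[None, :, None]  # ramp across the seam
    pano = frames[0].astype(np.float32)
    for f in frames[1:]:
        f = f.astype(np.float32)
        # blend the trailing edge of the panorama with the leading edge of the next frame
        seam = pano[:, -overlap:] * (1 - fade) + f[:, :overlap] * fade
        pano = np.concatenate([pano[:, :-overlap], seam, f[:, overlap:]], axis=1)
    return pano.astype(np.uint8)

# Six horizontal views, e.g. 480x640 RGB each, as from the camera pod:
views = [np.full((480, 640, 3), i * 40, dtype=np.uint8) for i in range(6)]
pano = stitch_panorama(views)
print(pano.shape)  # height stays 480; width is 6*640 minus 5 overlaps
```

Even this toy version hints at why the workload grows so quickly: every extra lens adds another full frame of pixels to blend on every tick of the stream.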

The result was predictable astonishment as most of those trying the system were exposed to immersive VR for the first time. “That’s amazing! … You can see everything!” was the reaction of one gentleman, a bright smile visible below the goggles as he turned slowly around, taking in the entire arena. A young boy referred to it as both “cool” and “weird!”, while an older lady found herself responding to the roar of the spectators and looking around in surprise to see what had just happened.

“That’s amazing!” was one man’s verdict as he found himself sitting inside the SSE Hydro Arena witnessing the Commonwealth Games gymnastics event as it happened – while actually located half-a-mile from the arena

As noted, this effort was very much an early experiment by the BBC’s R&D people into what might be possible with VR. While the film was only transmitted to a single location just half-a-mile from the Hydro Arena, it could just as easily have been transmitted anywhere, given a fast enough and stable enough internet connection. The distance in this case was simply a matter of convenience, the VR experiment being just one of a number of potential new broadcasting technologies the “Beeb” is investigating as a part of its multi-platform approach to television, and which were being showcased alongside the Games. In particular, the BBC wanted to poke at the potential issues this type of streaming will have to overcome if it is to become practical in the future.

One of the problems they hit was quality of processing versus speed of delivery. In order to keep the transmission as close to real-time as possible (remembering that the same events were being simultaneously broadcast via “traditional” methods as well as via other technologies being showcased at the Science Centre), the BBC wanted to avoid undue lag occurring in the VR feed when compared to the other mediums on display. This meant that the video / audio processing needed to produce the finished film for streaming had to be kept to around three or four seconds in order to achieve a smooth, continuous stream to the headset.

To achieve this, engineers had to downgrade the video quality being received by the processing software in order to reduce the amount of data the software had to handle in stitching the 7 elements of film together. This resulted in a loss of image definition which was noticeable when wearing the Oculus headset, as the video appeared somewhat grainy to the eye. The hope is that an increase in processing power may allow faster processing at a higher definition in the future. Obviously, had the “real-time” aspect of the experiment been removed from the equation, then the video could have been processed at its full quality for later streaming.
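The trade-off the engineers faced can be expressed as a simple budget calculation. This sketch (with entirely invented cost figures, purely for illustration) picks the largest resolution scale whose estimated per-frame stitching cost still fits a target latency budget:

```python
def pick_scale(ms_per_megapixel, width, height, n_cams, budget_ms,
               scales=(1.0, 0.75, 0.5, 0.25)):
    """Return the largest frame scale whose estimated stitching cost
    per frame fits within budget_ms. ms_per_megapixel would be measured
    against the actual stitching software; the figure used below is
    hypothetical."""
    for s in sorted(scales, reverse=True):
        # total pixels across all camera feeds at this scale, in megapixels
        megapixels = (width * s) * (height * s) * n_cams / 1e6
        if megapixels * ms_per_megapixel <= budget_ms:
            return s
    return min(scales)

# Seven HD feeds, a (hypothetical) 20 ms of stitch work per megapixel,
# and a 100 ms per-frame budget force the feed down to half resolution:
print(pick_scale(20, 1920, 1080, 7, 100))  # → 0.5
```

The same arithmetic run in reverse shows why the BBC’s hope is well-founded: halve the per-megapixel cost with faster hardware and the full-resolution feed comes back within reach.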

In order to stitch the video elements and the audio together and continuously stream the result smoothly as a live event, the BBC found they had to reduce the video quality being received by the processing software, resulting in a visible loss of resolution in the finished VR stream. Heftier processing power is going to be required if events are to be streamed in real-time like this in the future

Another issue the BBC found was that if they positioned the camera pod so that it was effectively looking down on the arena floor at an angle, rather than looking directly out at it at eye level, or if they placed the camera in the middle of the floor so that the action was going on all around the observer, people reported increased bouts of dizziness – something which didn’t seem to occur with the cameras positioned at a natural eye-level.

Certainly, it’s an interesting experiment, and this kind of use of VR may well prove far more attractive to a mainstream, mass audience than video games and virtual world style environments. After all, who wouldn’t want to have a (reasonably priced) seat at their favourite sporting event or concert, without all that tedious mucking about in cars or trains in order to get to a venue and then dealing with the crowds – and to be able to pause the show / event to refresh their beverages?

For those able to access it, the entire Click programme featuring the use of VR at the Commonwealth Games can be seen on the BBC iPlayer. This is worth watching not only for the coverage of the VR streaming experiment, but because it also features the work of Nonny de la Peña, whose work was featured in the Drax Files Radio Hour #24 (and which I somehow managed to miss reviewing at the time).


Images courtesy of the BBC.

Reflections on a prim: a potential way to create mirrors in SL

Update: just after pushing this out (slightly prematurely – thank you, Mona, for pointing out the error), Gwenners poked me on Twitter and reminded me of the 2006 experiments with reflections, and supplied some links to shots from those heady days.

The ability to have honest-to-goodness mirror surfaces in Second Life which could reflect the world – and avatars – around them has often been asked for over the years, but has tended to be avoided by the Lab as it’s been seen as potentially resource-intensive and not the easiest thing to achieve. As a result people have in the past played around with various means to try to create in-world mirrors.

Zonja Capalini posted an article on using linden water as an avatar mirror as far back as 2009

Zonja Capalini, for example, was perhaps one of the first to blog about using Linden water as a mirror (or at least the first I came across, thanks to Chestnut Rau and Whiskey Monday), and she certainly came up with some interesting results, as shown on the right, which I tried out for myself back in 2012.

However, achieving results in this way is also time-consuming and not always practical; you either have to purpose-build a set, or try shoving a jack under a region and hope you can persuade it to tip over on its side…

But there is hope on the horizon that perhaps we may yet see mirrors in SL (and OpenSim).

While it is still very early days, Zi Ree of the Firestorm team has been poking at things to see what might be achieved, and has had some interesting results using some additional viewer code and a suitable texture.

This has allowed Zi to define a basic way of generating real-time reflections, including those of avatars, on the surface of a prim. The work is still in its early days, and Zi points to the fact that she’s not a rendering pipeline expert, so there may be under-the-hood issues which have not come to light as yet. However, she has produced a number of videos demonstrating the work to date (see the sample below), and has raised a JIRA (STORM-2055) which documents the work so far; self-compilers can use the patch provided in the JIRA if they want to try things for themselves.

Currently, the code only works when the viewer is running in non-deferred rendering (i.e. with the Advanced Lighting Model turned off). This does tend to make the in-world view a little flat, particularly if you’re used to seeing lighting and shadows.

However, having tried a version of the SL viewer with the code applied to it, I can say that it is very easy to create a mirror – all you need is a prim and a texture, make a few tweaks to some debug settings, and a possible relog. The results are quite impressive, as I hope the picture below demonstrates (click to enlarge, if required).

I see you looking at me …

Performance-wise, my PC and GPU didn’t seem to take too much of a hit – no doubt helped by the fact the mirror effect only works in non-deferred mode at present. Quite what things would be like were this to be tried with ALM active, with shadows and lighting enabled and avatars moving around in real time, could be a very different story.

As the effect is purely viewer-side, it does run up against the Lab’s “shared experience” policy; not only do you need a viewer with the code to create mirror surfaces, you need a viewer with the code to see the results. People using viewers without the code will just see a transparent prim face (or if the mirror texture is applied to the entire prim, nothing at all while it is 100% transparent).

This means that in order for mirrors of this nature to become the norm in Second Life, the approach is going to have to be adopted by the Lab. Obviously, to be absolutely ideal, it would also be better if it worked with the Advanced Lighting Model active as well. Zi additionally notes that some server-side updates are also required in order for a simulator to be able to save things like the reflectiveness of a given mirror surface, etc.

It’s all done with mirrors, y’know … (click to enlarge, if required)

Whether this work could herald the arrival of fully reflective surfaces in-world remains to be seen. It’s not clear how much interest in the idea has been shown by the Lab, but hopefully with the JIRA filed, they’ll take a look at things. There’s little doubt that if such a capability could be made to happen, and without a massive performance or system hit, then it could prove popular with users and add further depth to the platform.

SL project updates week 32/1: server, viewer

The Snow Lion, Oceanside dAlliez; Inara Pey, May 2014, on Flickr (click for full size – blog post here)

Server Deployments – Week 32

As always, please refer to the server deployment thread for the latest updates and status on deployments.

  • There was no Main (SLS) channel deployment on Tuesday August 5th
  • On Wednesday August 6th, all three RC channels should receive the same maintenance update, which addresses some miscellaneous bugs, and fixes the JSON issue “Valid JSON numbers like 0e0 no longer valid after 14.06.26.291532” (BUG-6657) and includes changes from the current Main channel release.

SL Server

Release Viewer

The de facto release viewer was updated on Monday August 4th to version 3.7.13.292225, formerly the Group Ban RC viewer. For an overview of the group ban functionality, please refer to my article here (note this is based on the project viewer, although the functionality has not changed significantly).

All other viewers in the release and project channels remain as per my Current Viewer Releases page.

Oculus Rift Viewer and Oculus DK2

Versions of the Oculus Rift DK2 kit are beginning to appear, and those who have received the updated headset are noting that the current Oculus Rift project viewer (version 3.7.12.292141 – see the Alternate Viewers wiki page) is not compatible with the newer equipment, particularly with regards to the updated tracking system and the new display modes (see: RIFT-130).

Given the extent of changes between the DK1 and DK2 headsets, incompatibilities shouldn’t be that surprising, and the problems are not limited to SL; other applications built for the DK1 are also apparently having their share of niggles with the newer hardware. At the time of writing, it wasn’t clear whether or not the Lab had received their Oculus DK2, so it might still be a while before an updated version of the viewer with DK2 support appears.

User Group Meetings, Week 33

There will be no Open-source Developer meeting or Simulator User Group meeting on Monday August 11th or Tuesday August 12th respectively. This is because the Second Life technical team will be involved in a strategy / team building get-together in the physical world, and so will all be offline for in-world meetings as a result of travelling, etc.

Other Items

Off-line E-mail Issues

There are reports that offline IMs to e-mail are being blocked by certain ISPs, most notably United Internet in Germany, which operates a number of web-based e-mail services, including GMX, 1&1, web.de, mail.com and A1. The problems started in early July (see this forum thread and BUG-6591 and BUG-6717), and were initially seen as an issue within SL.

However, SL user MartinRJ Fayray, himself based in Germany, contacted 1&1 and received confirmation that certain IPs from SL have been blocked on account of the amount of “spam” they are generating.

As the blocking appears to be simulator IP-based, the situation has given rise to some confusion, as some offline e-mails do get through to users and others don’t (depending on where they were initiated within SL), leading to people viewing the problem as a Lab-based issue.

Things currently appear to be up in the air as to what might happen next: unblocking the IP ranges is something United Internet needs to do, although United Internet appear to be waiting for the Lab to contact them.

This is not the first time issues with IM to e-mail have been encountered. In 2013, many Gmail users found that their off-line e-mails were no longer being received, although that was due to a change in Gmail’s filtering policies which could be rectified by a settings change at the user’s end.

Getting more animated at High Fidelity

One of the things people have critiqued High Fidelity about is the look of their avatars. Yes, they can use 3D cameras to capture a user’s facial expressions and translate them into facial movements on an avatar but, well, the avatars just look a little odd.

Or at least, that’s an oft-heard or read comment. I’m not entirely in disagreement; SL avatars may not be technically up-to-snuff in many ways, but they can look good, and over the years, they have spoiled us somewhat.

However, High Fidelity is still only in an alpha phase; and things are bound to improve over time with the look and feel of their environments and their avatars. As a demonstration of their attempts to improve things, the HiFi team have recently released a couple of videos and a blog post from their animator, Ozan Serim, formerly of Pixar Studios.

In the post – which marks his first time writing for the blog – Ozan explains how he’s trying to bring more advanced animation to the platform’s avatars to, as he puts it, “make live avatars look really amazing – as close to what we see in animated films today.” This isn’t as easy as it sounds, as he goes on to note:

This is a big challenge – we have to do everything in a fraction of a second without the benefits of an animator (like me!) being able to ‘post-process’ the results of what is motion captured.  So I’ve been working on the ‘rigging’: how a live 3D camera and a motion capture package like Faceshift is able to ‘puppeteer’ an avatar.  With less accurate data, we have to be clever about things like how we move the mouth to more simplistically capture the phonemes that make up speech.

To demonstrate the result, Ozan includes a video of Emily Donald, one of the other HiFi staff members, singing.

As well as this video, using the “default” format of HiFi avatar, Ozan and members of the HiFi team have been working on improving the overall look of their avatars, and some early results of their efforts can be seen in another music video released at the start of August, and which is linked-to in the blog post.

This is again an experiment in rigging facial expressions to more fully match those of a human being, with special attention being paid to the “A”s and “M”s as the avatar (Ozan) lip-synchs to Freddie Mercury singing Queen’s Bohemian Rhapsody. This is another video where it’s worth watching the avatar’s mouth movements – and also the eye and eyebrow movements, which reflect a strong level of emotion.

Again, there’s a fair way to go here, but these early results are fascinating, and not just for the technical aspects of what is being done here: capturing, processing and rigging subtle facial expressions in real-time. As a commentator on the Bohemian Rhapsody notes, “cool but creepy” – a reflection of the fact that HiFi have taken a further step into the Uncanny Valley. It’s going to be interesting to see how well they fare in crossing it.


With thanks to Indigo Martel for the pointer.


The ethereal beauty of Somewhere in Second Life

Somewhere in Second Life
Somewhere in Second Life

I received a notice about a new exhibition by WuWai Chun which opened on Sunday August 3rd at the Rose Theatre & Art Gallery. I didn’t make it to the opening, sadly, due to other commitments, but managed to pop along as soon as time allowed.

The exhibition is in support of Feed A Smile, a project run by Live and Learn in Kenya (LLK), to provide nutritious warm lunches for over 400 children every day, paid for entirely from donations to the project (see my article on Feed A Smile, written to accompany Draxtor’s excellent World Makers video on the work).

Somewhere in Second Life
Somewhere in Second Life

Called Somewhere in Second Life, the display features selected images from WuWai’s travels across Second Life, which also appear in her Flickr photostream of the same name and which she describes as a personal destination guide. However, the pictures on display are not simply snapshots of in-world locations.

WuWai’s passion is to turn her pictures into paintings. Having taught herself the sometimes arcane art of post-processing, she labours over her scenic snapshots to give them the look and texture of watercolour or oil paintings. The results are images that are quite stunning in appearance, with many of them having an ethereal look to them which quite captivates the eye, drawing you into it.

Somewhere in Second Life
Somewhere in Second Life

These are pictures which really do immerse you in the sensation of visiting a gallery and slowly walking through its halls. There are no Constables or Browns or Balmers hanging on the walls between WuWai’s pictures but frankly, had I come across one or two, I wouldn’t have been the least bit surprised; her work is that evocative.

As the exhibition is in support of Feed A Smile, the pictures are available to buy – simply right-click on any that take your fancy.

Somewhere in Second Life
Somewhere in Second Life


Two years on: target in sight

August 5th 2014 marked the second anniversary of Curiosity’s remarkable arrival on Mars, in what was dubbed by members of the mission team as the “seven minutes of terror”.

It was one of the most anticipated touch-downs of a remote vehicle on another planet in history, and was followed minute-by-minute the world over via the Internet, with people watching NASA TV, following events on Twitter and even witnessing them in “real-time” through the unique focus of NASA’s Eyes on the Solar System simulator website (you can still replay the landing on the simulator).

Since then, Curiosity has done much, including meeting its primary science goal of finding evidence of environments which may once have been suitable for the nurturing of microbial life. (Curiosity isn’t able to detect any evidence of microbial life itself, past or present, as it has no direct means to identify organic compounds or minerals; that will be the role of the next rover mission, scheduled for 2020 – see later in this article.)

Most recently, the rover has been approaching its main exploratory goal, the large mound at the centre of Gale Crater which has been dubbed “Mount Sharp” by NASA, having been “on the road” for almost a year, driving steadily south, with the occasional stop-over at various scientific points of interest.

Since my last MSL update, Curiosity has achieved another mission milestone and another mission first. On June 27th, the day of my last update, the rover trundled over the boundary line of its 3-sigma landing ellipse. Then on July 12th, it captured new images of its onboard laser firing.

As to the first of these events, I’ll let Guy Webster of NASA’s Jet Propulsion Laboratory explain.

“You must be wondering, ‘What the heck is a 3-sigma landing ellipse?’ It is a statistical prediction made prior to landing to determine how far from a targeted centre point the rover might land, given uncertainties such as the atmospheric conditions on landing day. The ‘3-sigma’ part means three standard deviations, so the rover was very, very likely (to about the 99.9-percent level) to land somewhere inside this ellipse. Such 3-sigma ellipses get a lot of scrutiny during landing-site selection because we don’t want anything dangerous for a landing – such as boulders or cliffs – inside the ellipse.”
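Webster’s figure is easy to sanity-check. For a normally distributed error, the chance of falling within three standard deviations along a single axis comes straight from the error function (a simplified, one-dimensional illustration; the mission’s actual prediction was two-dimensional and built on its own uncertainty models):

```python
import math

def within_sigma(k):
    """Probability that a normally distributed error falls within
    k standard deviations of the mean (one dimension)."""
    return math.erf(k / math.sqrt(2))

print(round(within_sigma(3), 4))  # → 0.9973, i.e. "very, very likely"
```

Hence the shorthand: “3-sigma” is the ellipse the rover was roughly 99.7–99.9% certain to land inside.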

In Curiosity’s case, the 3-sigma ellipse marked a relatively flat area on the floor of Gale Crater some 7 x 20 kilometres (4 x 12 miles) in size, which was as close to the slopes of “Mount Sharp” as mission planners dared bring the rover in for landing without risking it coming down in either chaotic terrain or on a slope where it might slide or topple over as the Skycrane set it down. The landing zone was also relatively close to the areas of geological interest which became known as “Glenelg” and “Yellowknife Bay”, and which the rover spent a good part of a year exploring – achieving its primary science goal in the process.

The Mars Reconnaissance Orbiter was overhead at the time the rover crossed this imaginary line in the sands of Mars, and captured the moment using its High Resolution Imaging Science Experiment (HiRISE) camera.

Caught in its tracks: NASA’s Mars Reconnaissance Orbiter photographs Curiosity as the rover crosses the boundary (marked by the blue line) of its original landing ellipse (click any image in this article for full size)

Sol 687 (July 12th, 2014 PDT) was the day on which the rover captured images of its laser firing on a rock dubbed “Nova”.

The laser, which is a part of the ChemCam system mounted on the rover’s mast, is used to vaporise minute amounts of material on target rocks. Light from the resultant plasma is captured by ChemCam’s telescope for spectrographic analysis.

In all, the laser has been fired over 150,000 times in the two years since Curiosity arrived on Mars, and the results of firings have been seen in many “before and after” shots of rocks on the receiving end of a laser burst. What made this event special was that the burst firing at “Nova” was captured by the rover’s turret-mounted Mars Hand Lens Imager (MAHLI). This allowed NASA to produce a film showing the moment of impact of the laser shots.

In the first part of the film, the initial “spark” of a single laser pulse can be seen striking the surface of “Nova”. This is followed by an enhanced set of images showing the laser firing at 10 times a second, disrupting dust and minerals on the rock as the plasma cloud erupts.

Continue reading “Two years on: target in sight”