Virtual humans: helping us to talk about ourselves

Hi, I’m Ellie. Thanks for coming in today. I was created to talk to people in a safe and secure environment. I’m not a therapist, but I’m here to learn about people, and would love to learn about you. I’ll ask a few questions to get us started…

These are the opening comments from SimSensei, a virtual human application and part of a suite of software tools which may in future be used to assist in the identification, diagnosis and treatment of mental health issues. It does so by engaging people in conversation, and by using real-time sensing and recognition of nonverbal behaviours and responses which may be indicative of depression or other disorders.

SimSensei and its companion application, MultiSense, have been developed by the Institute for Creative Technologies (ICT) at the University of Southern California (USC) as part of wide-ranging research into the use of various technologies – virtual humans, virtual reality, and so on – in a number of fields, including entertainment, healthcare and training.

In 2013, SimSensei and MultiSense were the subject of an extensive study, the results of which have just been published in a report entitled It’s only a computer: Virtual humans increase willingness to disclose, which appears in the August 2014 volume of Computers in Human Behavior.

It is regarded as the first study to present empirical evidence that virtual humans can encourage patients to disclose information about themselves more honestly and openly than they might when directly addressing another human being. A human interviewer may be seen as passing judgement on what is being said, making patients less willing to reveal information they feel is embarrassing, or which may cause them emotional discomfort if mentioned.

Ellie is a virtual human, the “face” of SimSensei, designed to interact with human beings in a natural way and build a conversational rapport with them, as part of a suite of software which might be used to help in the diagnosis of mental ailments

SimSensei presents a patient with a screen-based virtual human, Ellie. The term “virtual human” is used rather than “avatar” because Ellie is driven by a complex AI programme which allows her to engage and interact with people entirely autonomously.

The focus of the software is to make Ellie appear as natural and as human as possible, in order for her to build up a rapport with the person talking to her. This is achieved by the software responding to subjects using both verbal and nonverbal communication, just as a human being would.

During a conversation, SimSensei will adjust its reactions to a real person’s verbal and visual cues. Ellie will smile in response to positive displays of emotion – happiness, etc. – and she will nod or offer appropriate verbal encouragement during pauses in the flow of conversation, and so on. Rapport is further built by the software being able to engage in small talk and give natural-sounding responses to comments. For example, when one subject mentioned he was from Los Angeles, her response was to say, “Oh! I’m from LA myself!”

SimSensei’s interaction with a patient is driven by MultiSense, technically referred to as a “multimodal perception software framework”. MultiSense uses a microphone and camera to capture and map the patient’s verbal and nonverbal responses to SimSensei (facial expression, the direction in which they look, body movements, intonations and hesitations in their speech pattern, etc.). This data is analysed in real-time, and feedback is then given to SimSensei, helping to direct its responses as well as allowing it to detect signs of psychological distress which might be associated with depression or conditions such as post-traumatic stress disorder (PTSD), and react accordingly.
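
The ICT has not published MultiSense’s internals, but the sense-analyse-respond loop described above can be illustrated in miniature. The following Python sketch is purely hypothetical – the Observation fields, thresholds and response names are invented for illustration, and the real framework analyses far richer signals – but it shows the basic architecture of mapping analysed cues to the virtual human’s behaviour.

```python
# A minimal, hypothetical sketch of the perception/response loop described
# above. None of these names come from MultiSense or SimSensei; they simply
# illustrate the architecture: capture signals, analyse them in real-time,
# and feed the results to the virtual human's response policy.

from dataclasses import dataclass

@dataclass
class Observation:
    smiling: bool         # from facial-expression analysis
    gaze_averted: bool    # from eye/head tracking
    pause_seconds: float  # from the audio stream

def choose_response(obs: Observation) -> str:
    """Map analysed cues to a response, as the article describes: smile
    back at positive affect, encourage during pauses in the conversation,
    and follow up gently on possible distress cues such as gaze aversion."""
    if obs.smiling:
        return "smile"
    if obs.pause_seconds > 2.0:
        return "nod_and_encourage"   # e.g. "Mm-hmm, go on..."
    if obs.gaze_averted:
        return "gentle_follow_up"    # flagged as a possible distress cue
    return "attentive_listening"

# One tick of the loop; a real system would build the Observation
# continuously from the camera and microphone feeds.
print(choose_response(Observation(smiling=False, gaze_averted=True, pause_seconds=0.5)))
```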

During the ICT study, SimSensei and MultiSense were used to screen 239 people aged between 18 and 65, who were invited to a setting where they could interact with SimSensei as if they were undergoing an interview for admission to a hospital or clinic. On their arrival, some of them were told that they would be interacting with a fully automated piece of software with no human involvement at all, while others were told they’d be interacting with a computer avatar operated by a human. In reality, participants were assigned a fully or semi-automated virtual human entirely at random and without regard to what they were told.

When analysing the results, researchers found that those who believed they were talking purely to a virtual human were far more willing to divulge information and answer personal questions than those who had been told another human being was directing the virtual human. Researchers also noted that those who believed they were addressing a computer programme showed more intense emotional responses in their facial expressions than those who had been told beforehand that there was a human being operating the SimSensei virtual human.

MultiSense tracks the subject’s facial expression, head and eye movements, and body language, as well as the tonal quality of their voice and what they are saying. Here, the subject’s discomfort in talking about something results in gaze aversion, a downward look and fractured comments, all noted by MultiSense (and marked in red), which then helps drive the SimSensei virtual human’s reactions and verbal responses (shown in the central information box outlined in blue)

Feedback was also gathered from the test subjects after their interviews, with those who believed they had been interacting with a computer programme indicating they felt far more comfortable revealing information about themselves than they would have when addressing a human being. By contrast, those who had been told that Ellie was being operated by a human being tended to indicate that they would have been more open in their responses to questions had they felt they were only addressing a software programme.

Jonathan Gratch is both co-author of the study’s report and the Director of Virtual Human Research at ICT. Commenting on the study in a July 2014 article published by USC News, he said, “We know that developing a rapport and feeling free of judgment are two important factors that affect a person’s willingness to disclose personal information. The virtual character delivered on both these fronts and that is what makes this a particularly valuable tool for obtaining information people might feel sensitive about sharing.”

Jonathan Gratch, who leads USC ICT’s research into the use of virtual humans and related technologies in a wide range of areas

Gratch and his colleagues are keen to stress that SimSensei and MultiSense are not intended to replace trained clinicians in dealing with people’s health issues. However, the results of the ICT’s study suggest that, given patients are more willing to disclose information about themselves both directly and through their nonverbal reactions to the software, the use of virtual humans could greatly assist in the diagnosis and treatment process.

Indeed, the ICT is already initiating a number of healthcare projects to further explore the potential of virtual humans and the SimSensei / MultiSense framework. These include helping to detect signs of depression, providing healthcare screening services for patients in remote areas, and improving communication skills in young adults with autism spectrum disorder. Research is also being carried out into the effective use of virtual humans as complex role-playing partners to assist in the training of healthcare professionals, as well as the use of the technology in other training environments.

As noted towards the top of this article, the SimSensei / MultiSense study is just one aspect of the ICT’s research into the use of a range of virtual technologies, including virtual reality and immersive spaces, for a wide range of actual and potential applications. I hope to cover some more of their work in future articles.

Images via the Institute for Creative Technologies and USC News.

How the BBC achieved a world’s first in live event VR streaming

The Rift is highly anticipated by the gaming community, and there’s a lot of interest from developers in building for this platform. We’re going to focus on helping Oculus build out their product and develop partnerships to support more games. Oculus will continue operating independently within Facebook to achieve this.

But this is just the start. After games, we’re going to make Oculus a platform for many other experiences. Imagine enjoying a court side seat at a game …

Also sprach Zuckerberg (sorry, couldn’t resist; blame the evening wine) on March 25th, the day on which Facebook acquired Oculus VR amidst much wailing and gnashing of teeth. At the time, it seemed his vision of a VR utopia of court side seats for all at Wimbledon and all these other fine things was perhaps a decade away. Indeed, given some of the areas where the technology still needs time to mature, it may well still prove to be up to a decade away; but that hasn’t stopped the BBC from seeing how it all might work.

As many from the Commonwealth nations will likely know, we’ve just seen Scotland host the XX Commonwealth Games (for those who don’t know, and to put it in a nutshell, think Olympics with fewer nations, and you’ll get the idea). The BBC were the primary broadcaster for the Games, and they used the opportunity to make Zuckerberg’s vision a reality, if only on an experimental scale, by transmitting elements of the gymnastics events at the Games in real-time as a VR experience – the very first time anywhere in the world that such a feat has been undertaken. The results of this effort were recently reported by the BBC’s digital magazine programme, Click, broadcast on the BBC News channel, and from which this article is largely drawn.

The experiment comprised three parts. First, a 360-degree, 7-lens video camera pod (six lenses to record the view around the pod, the seventh to capture the overhead view) and a spatial microphone were set up in front of the SSE Hydro Arena seats, the camera positioned at the same eye-level as spectators.

The 360-degree cameras (l) were installed at the same eye-level as people sitting in the arena seats

The video from all seven cameras and the audio from the microphone were fed directly to the second element of the experiment: a computer system running software designed to stitch all seven video elements into a seamless whole, overlaid with the sounds from within the arena captured by the microphone. The finished film was then streamed to the third element in the experiment: a booth within the Glasgow Science Centre where members of the public could don an Oculus headset and a set of earphones and find themselves immersed in the Hydro Arena, watching the gymnastics as they happened.
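
As a rough illustration of that pipeline – seven feeds in, one stitched panorama plus audio out – here is a conceptual Python sketch. It is emphatically not the BBC’s code: the function names are invented, simple concatenation stands in for real stitching (which warps and blends overlapping views), and the streaming step is a stub.

```python
# Conceptual sketch only: seven camera views are combined into a single
# panoramic frame, paired with a chunk of spatial audio, and handed to a
# streaming stub. All names and numbers here are illustrative assumptions.

import numpy as np

FRAME_H, FRAME_W = 1080, 960   # per-camera resolution (assumed)
FPS, AUDIO_RATE = 25, 48000    # assumed frame and audio sample rates

def stitch(side_views: list, top_view: np.ndarray) -> np.ndarray:
    """Join the six horizontal views into a 360-degree strip; a real
    stitcher would also warp the 7th (overhead) view into place rather
    than ignoring it as this toy version does."""
    return np.concatenate(side_views, axis=1)  # 1080 x 5760 strip

def stream_frame(frame: np.ndarray, audio_chunk: np.ndarray) -> None:
    """Stand-in for encoding and sending the frame to the viewing booth."""
    pass

# One tick of the live loop, with black frames standing in for camera feeds.
side_views = [np.zeros((FRAME_H, FRAME_W, 3), np.uint8) for _ in range(6)]
top_view = np.zeros((FRAME_H, FRAME_W, 3), np.uint8)
audio = np.zeros(AUDIO_RATE // FPS)  # one frame's worth of audio samples
stream_frame(stitch(side_views, top_view), audio)
```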

The results were predictable astonishment as most of those trying the system were exposed to immersive VR for the first time. “That’s amazing!  … You can see everything!” was the reaction of one gentleman, a bright smile visible below the goggles as he turned slowly around, taking in the entire arena. A young boy referred to it as both “cool” and “weird!”, while an older lady found herself responding to the roar of the spectators and looking around in surprise to see what had just happened.

"That's amazing!" was one man's verdict as he found himself sitting inside the SSE Hydro Arena witnessing the Commonwealth Games gymnastics event as they happened - while actually located half-a-mile from the arena
“That’s amazing!” was one man’s verdict as he found himself sitting inside the SSE Hydro Arena witnessing the Commonwealth Games gymnastics event as it happened – while actually located half-a-mile from the arena

As noted, this effort was very much an early experiment by the BBC’s R&D people into what might be possible with VR. While the film was only transmitted to a single location just half-a-mile from the Hydro Arena, it could just as easily have been transmitted anywhere, given a fast and stable enough internet connection. The distance in this case was simply a matter of convenience, the VR experiment being just one of a number of potential new broadcasting technologies the “Beeb” is investigating as a part of its multi-platform approach to television, and which were being showcased alongside the Games. In particular, the BBC wanted to probe the potential issues this type of streaming will have to overcome if it is to become practical in the future.

One of the problems they hit was quality of processing versus speed of delivery. In order to keep the transmission as close to real-time as possible (remembering that the same events were being simultaneously broadcast via “traditional” methods as well as via other technologies being showcased at the Science Centre), the BBC wanted to avoid undue lag occurring in the VR feed when compared to the other mediums on display. This meant that the video / audio processing needed to produce the finished film for streaming had to be kept to around three or four seconds in order to achieve a smooth, continuous stream to the headset.

To achieve this, engineers had to downgrade the video quality being received by the processing software in order to reduce the amount of data the software had to handle in stitching the seven elements of film together. This resulted in a loss of image definition which was noticeable when wearing the Oculus headset, as the video appeared somewhat grainy to the eye. The hope is that an increase in processing power may allow faster processing at a higher definition in the future. Obviously, had the “real-time” aspect of the experiment been removed from the equation, the video could have been processed at its full quality for later streaming.
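
To make that trade-off concrete, here is a toy calculation in Python. The assumption – that stitching time grows linearly with pixel count – and the throughput figure are both invented for illustration; nothing here comes from the BBC.

```python
# Toy model of the quality-versus-latency trade-off: if the stitcher can
# only process so many megapixels per second and the delay budget is a few
# seconds, the input has to be downscaled to fit. All figures are assumed.

PIXELS_FULL = 6 * 960 * 1080   # six side views at an assumed full resolution
SECONDS_PER_MEGAPIXEL = 1.2    # assumed stitcher throughput
BUDGET_SECONDS = 3.5           # the article's "three or four seconds" target

def required_scale(budget: float = BUDGET_SECONDS) -> float:
    """Return the linear downscale factor needed to fit the delay budget."""
    full_cost = (PIXELS_FULL / 1e6) * SECONDS_PER_MEGAPIXEL
    if full_cost <= budget:
        return 1.0  # full quality already fits the budget
    # processing cost scales with area, so the linear scale is a square root
    return (budget / full_cost) ** 0.5

print(f"downscale each axis to {required_scale():.0%} of full resolution")
```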

In order to stitch the video elements and the audio together and continuously stream the result smoothly as a live event, the BBC found they had to reduce the video quality being received by the processing software, resulting in a visible loss of resolution in the finished VR stream. Heftier processing power is going to be required if events are to be streamed in real-time like this in the future

Another issue the BBC found related to camera placement. If the camera pod was positioned so that it was effectively looking down on the arena floor at an angle, rather than looking directly out at it at eye level, or if it was placed in the middle of the floor so that the action was going on all around the observer, people reported increased bouts of dizziness – something which didn’t seem to occur with the cameras positioned at a natural eye-level.

Certainly, it’s an interesting experiment, and this kind of use of VR may well prove far more attractive to a mainstream, mass audience than video games and virtual world style environments. After all, who wouldn’t want a (reasonably priced) seat at their favourite sporting event or concert, without all that tedious mucking about in cars or trains to get to the venue and then dealing with the crowds – and with the ability to pause the show or event to refresh their beverages?

For those able to access it, the entire Click programme featuring the use of VR at the Commonwealth Games can be seen on the BBC iPlayer. It is worth watching not only for the coverage of the VR streaming experiment, but also because it features the work of Nonny de la Peña, whose work was featured in the Drax Files Radio Hour #24 (and which I somehow managed to miss reviewing at the time).

Images courtesy of the BBC.

Google: all you need for your own VR experience is … cardboard

Google have released their first foray into the world of immersive VR. They’re calling it Cardboard, because the do-it-yourself headset is made of … well, cardboard.

“Construct a VR viewer from everyday items you can find in your garage, online or at your local hardware store,” is the headline on the Google Cardboard website, complete with a picture of the necessary components.

Google Cardboard: build your own VR headset from cardboard, magnifying lenses and a few other bits, and use it with your Android smartphone

And before you laugh yourself silly thinking this is another little joke from those pranksters who brought us Gmail Blue in April 2013, it’s not. The heart of the system is a VR app designed to run on a smartphone, which can be mounted into the home-made headset.

Cardboard was unveiled at the Google I/O Developers Conference in San Francisco. The app takes advantage of a smartphone’s built-in accelerometers and gyroscopes to provide head tracking, and demonstration environments include a Hall of Mirrors and the opportunity to travel through Chicago. Users can also watch YouTube videos as if sitting in a movie theatre, explore 360-degree panoramic photos, or run a series of VR experiments using Google Chrome on their ‘phones. There’s also a software development kit which allows users to code their own immersive experiences.
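
Google doesn’t spell out the tracking maths here, but fusing gyroscope and accelerometer readings is commonly done with a complementary filter: integrate the fast-but-drifting gyroscope, and let the gravity direction from the accelerometer slowly correct the drift. A minimal sketch for the pitch axis alone, with all names and constants assumed rather than taken from Google’s code:

```python
# Hypothetical head-tracking sketch (not Google's code): a complementary
# filter blends integrated gyroscope rate with the accelerometer's
# gravity-derived pitch to get a responsive but drift-resistant estimate.

import math

ALPHA = 0.98  # trust the gyro short-term, the accelerometer long-term

def update_pitch(pitch, gyro_rate, accel_y, accel_z, dt):
    """One filter step: pitch in radians, gyro_rate in rad/s, accelerometer
    components in m/s^2, dt in seconds."""
    gyro_pitch = pitch + gyro_rate * dt          # integrate rotation rate
    accel_pitch = math.atan2(accel_y, accel_z)   # pitch implied by gravity
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Simulate one second at 100 Hz with the gyro reporting a steady 0.1 rad/s
# pitch-up while the accelerometer still reads the 'phone as level; the
# accelerometer term visibly tempers the pure gyro integration.
pitch = 0.0
for _ in range(100):
    pitch = update_pitch(pitch, gyro_rate=0.1, accel_y=0.0, accel_z=9.81, dt=0.01)
print(f"estimated pitch after 1 s: {math.degrees(pitch):.1f} degrees")
```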

“David Coz and Damien Henry at the Google Cultural Institute in Paris built a cardboard smartphone housing to prototype VR experiences as part of a 20% project. The results elicited so many oohs and ahs that they inspired a larger group to work on an experimental SDK,” the website explains, providing the “why” of the effort.

The finished headset with ‘phone ready to be fitted

Nor is the headset entirely low-tech. Although the ‘phone is almost completely encased in the headset, the instructions provide a guide to making a trigger from a metal ring and a magnet, which makes use of the ‘phone’s magnetometer. Flicking the ring downward as items come into view allows you to select them.
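
The trick works because the flick produces a sharp jump in the magnetic field strength the ‘phone’s magnetometer reports. A hypothetical sketch of the detection idea follows; it is not Google’s implementation, and the threshold is an invented value:

```python
# Hypothetical magnet-trigger detection: watch the magnetometer's field
# magnitude and treat any sudden jump as a "select" event.

import math

THRESHOLD = 30.0  # microtesla jump that counts as a flick (assumed value)

def detect_trigger(samples):
    """Yield the indices at which the field magnitude changes sharply."""
    prev = None
    for i, (x, y, z) in enumerate(samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if prev is not None and abs(magnitude - prev) > THRESHOLD:
            yield i  # the app would treat this as a click/select
        prev = magnitude

# A quiet field, then a spike as the magnet is flicked past the sensor.
readings = [(20.0, 5.0, 40.0)] * 5 + [(20.0, 5.0, 95.0)] + [(20.0, 5.0, 40.0)] * 4
print(list(detect_trigger(readings)))  # -> [5, 6]: the spike's rise and fall
```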

VR headsets for smartphones aren’t exactly a new idea. We’ve had Kickstarter campaigns for the likes of Altergaze, and there are items like the Durovis Dive. But Google Cardboard offers a fun approach to things – the company noting that it can be worn with glasses, but that “you may want to cut flaps into both sides of the viewer. There’s a fold line pre-cut into both sides of the viewer to make this easier.”

For those who don’t want to be bothered with gathering all the bits and cutting cardboard to create their own headset, and just want to put the thing together and start enjoying VR on their smartphone, a pre-cut kit with all the necessary parts can be purchased from Dodocase!

Now all we need is SL Go with the Oculus Rift viewer code 😉 .

Creating the VR metaverse

On Tuesday May 20th, at the SVVR conference in Mountain View, California, Second Life’s own Draxtor Despres (Bernard Drax, RL) hosted a panel discussion / Q&A session entitled Creating the VR Metaverse.

The panel comprised:

  • Stefano Corazza, Mixamo
  • Josh Carpenter, Mozilla
  • Ebbe Altberg, Linden Lab
  • Philip Rosedale, High Fidelity
  • Tony Parisi, co-creator of VRML

During the hour of the event, the panel discussed many aspects of the future of the metaverse, including identity and privacy, governance, whether the metaverse will be a single entity or many, content portability, the user interface, and more, before answering questions from both the Second Life and real-world audiences.

The discussion was recorded and posted to YouTube, courtesy of Brian Hart. The following transcript is taken from the point at which the discussion started, after each of the participants had been given the opportunity to introduce themselves.

L-to-R: Stefano Corazza, Josh Carpenter, Ebbe Altberg, Philip Rosedale, Tony Parisi (image: Ben Lang, The Road to VR)

As usual, please note that:

  • This is not a word-for-word transcript of the entire meeting. While all quotes given are as they are spoken in the audio, to assist in readability and maintain the flow of conversation, not all asides, jokes, interruptions, etc., have been included in the text presented here
  • If there are any sizeable gaps in comments from a speaker which resulted from asides, repetition, questions to others, etc., these are indicated by the use of “…”
  • Sound quality on the video is not ideal. There may therefore be the occasional misquote, although every effort has been taken to avoid this.

07:16 Bernard Drax (BD): Tony, you’ve been around for some time; what kind of deja-vu feeling is this, and what do you want to scream at these 23-year-olds that are making the goggles?

07:34 Tony Parisi (TP): For those of you who don’t know my background, 23 years and three months ago I created this technology called VRML, virtual reality modelling language … I don’t teach VRML any more, but I’m still very passionate about it as a product for connected devices and connected experiences, which is why we got together to build that technology two decades ago.

The principle behind it was, just like the other media that were getting sucked into the world-wide web, 3D would be a media type as a part of that as well; you could use it to build visualisations, you could use it to create virtual worlds, you could use it to heal the sick, feed the poor, and a whole bunch of really cool things.

Back then I was your age, 23 years old or a little bit older. We were very excited. There’s a lot of deja-vu for me in this conference, because it has a lot of the energy of the first couple of VRML get-togethers. We didn’t know what wasn’t possible; we had all kinds of high hopes and dreams and of course, years into it, reality crashed into us. We learned a lot, but it was definitely a bit early to try to deploy virtual experiences back then.

The one take-away I will offer to everyone here, and it’s been a continued theme in my work … I’ve heard a lot about Unity, I’ve heard a lot about game engines, I’ve seen insane experiences; Unreal (Engine), those Kite guys, I can’t think of their name, mind-blowing, incredible production value … but don’t ignore the web.


Oculus VR sued over alleged “misappropriated trade secrets”

On the day Linden Lab announced the arrival of the Oculus Rift capable project viewer, news also came that ZeniMax Media has pulled the trigger on a lawsuit against Oculus VR and company founder Palmer Luckey alleging, among other things, the misappropriation of trade secrets by Oculus VR.

The lawsuit, filed in the U.S. District Court for the Northern District of Texas and widely reported in the on-line tech media, makes for some heavy reading. As well as the claim of misappropriation of trade secrets relating to virtual reality technology, the Maryland-based company also alleges infringement of ZeniMax copyrights and trademarks, and asserts claims for breach of contract, unjust enrichment, and unfair competition against the defendants.

The move is the latest in a war of words which initially erupted in the form of public correspondence between ZeniMax and Oculus VR – who strenuously deny all the claims made against them. ZeniMax first informed Engadget of their intentions at the start of May 2014, specifically pointing at John Carmack’s involvement in the development of the Oculus Rift at a time when he was working for ZeniMax subsidiary id Software, as well as to a non-disclosure agreement (NDA) signed by Palmer Luckey in 2012 relating to the use of ZeniMax technology.

Oculus Rift: ZeniMax lawsuit specifically related to the early development of the headset, alleged use of their technology, possible IP infringements and breach of contract (image courtesy of BGR.com)

Carmack himself took to Twitter in an immediate rebuttal of the ZeniMax allegations, noting that while he recognises that any code he wrote while under ZeniMax’s employment is clearly theirs, the company never once patented any ideas arising from his work – placing the burden on ZeniMax to demonstrate that trade secrets / IP have been misappropriated where no patents exist.

John Carmack, Oculus VR’s CTO, used Twitter in an immediate rebuttal of ZeniMax’s claims at the start of the month.

With the claims following on the heels of Facebook acquiring Oculus VR, the latter also commented on the ZeniMax allegations, framing them in terms of the Facebook acquisition, stating:

It’s unfortunate, but when there’s this type of transaction, people come out of the woodwork with ridiculous and absurd claims. We intend to vigorously defend Oculus and its investors to the fullest extent.

Daniel Nye Griffiths, writing for Forbes Online, provides a solid examination of the initial claims made by ZeniMax and the response by Oculus VR, which, although somewhat superseded by the lawsuit’s filing, helps frame the two companies’ respective positions. In their response to the claims by ZeniMax, and without using the actual words, Oculus VR pretty much demanded that ZeniMax put up or shut up.


Oculus open-source competitor on the horizon, with multi-function controller

With the SVVR Conference and Expo underway in California, now is perhaps a timely opportunity to take a peek at what is being billed as an open-source competitor to the Oculus Rift.

TechCrunch, along with several other technology blogs / websites, covered the news a few days ago that a Chinese start-up, ANTVR Technology, is developing an open-source, cross-platform virtual reality gaming kit, called the ANTVR kit.

The kit is said to be compatible with games designed for the Oculus Rift and with most PC and console platforms. It can connect to any device offering direct HDMI output, or via an HDMI adapter if no direct HDMI output is available. Supported systems include computers, games consoles, iOS devices, Android devices, and even Blu-ray players.

The ANTVR kit headset design (image courtesy of ANTVR Technology)

The new headset is currently the subject of a Kickstarter campaign which, despite the backlash over the eventual acquisition of Oculus VR by Facebook, has already seen 450 people commit (at the time of writing) almost $170,000 of the $200,000 goal in just seven days. This suggests that if the rate of pledges is maintained, the ANTVR kit could end up going the same way as both Oculus VR and Technical Illusions’ castAR, and exceed its modest target by a good margin.

The headset unit has a 1920 x 1080 high-definition display with a 4:3 aspect ratio, offering a 100-degree diagonal field of view; split between the two eyes, that works out at 960 x 1080, or roughly 1.03 megapixels, per eye. A dual aspherical lens arrangement is apparently included to help eliminate image distortion when projecting standard ratio images. Like the Oculus Rift, it has an internal 9-axis Inertial Measurement Unit (IMU) for head rotation and movement tracking, and it can be worn with prescription glasses.

Comparing the ANTVR with Oculus SDK2 (which actually has a 5-inch screen) and Sony’s Morpheus (credit: TechInAsia)

A novel aspect of the headset is the inclusion of a “glance window”, a slide-up port on the front of the unit which can be pushed up to allow the wearer to re-orient themselves in the real world or find their keyboard. While still not a complete solution for those who need to use the keyboard but can’t re-orient their finger positions easily (no tactile indicators on F, J, and numeric pad 5, for example), it at least means the headset itself doesn’t need to be pushed up to the forehead to see things.

Is it a Controller? Is it a Joystick? Is it a Gun? It’s all Three – and more

A further interesting feature of the kit is the inclusion of a multi-function handset controller. When completely assembled, this forms a gun which can be used in first-person shooter games and the like. However, the “barrel” of the gun can be detached, and the “pistol grip” becomes a joystick suitable for use with flight simulators, etc., or as a Wii-style controller. This further opens out into a game controller handset.

The three-part handset (image courtesy of ANTVR Technology)

Another unique aspect of the handset unit is that it also includes a 9-axis IMU, which tracks body movement and actions, allowing the wearer to control a degree of on-screen character movement via both head and body movement, and to simulate a range of actions (crouching, jumping, throwing a grenade…).
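
ANTVR haven’t published how the handset’s motion data is interpreted, but conceptually the mapping from IMU readings to character actions might look like the sketch below; the thresholds, units and action names are all guesses for illustration.

```python
# Hypothetical sketch (not ANTVR's code) of mapping 9-axis IMU readings to
# character actions: a sharp upward jolt reads as a jump, steady downward
# motion as a crouch, and a fast swing of the handset as a throw.

def classify_action(vertical_accel, vertical_velocity, swing_speed):
    """All inputs in SI units; thresholds are illustrative guesses."""
    if swing_speed > 3.0:          # m/s: fast arm swing
        return "throw_grenade"
    if vertical_accel > 12.0:      # m/s^2 beyond gravity: sharp upward jolt
        return "jump"
    if vertical_velocity < -0.5:   # m/s: body moving steadily downward
        return "crouch"
    return "idle"

print(classify_action(vertical_accel=2.0, vertical_velocity=-0.8, swing_speed=0.4))
# -> "crouch"
```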

The dual 9-axis IMUs translate body movements into on-screen character movements (stills via ANTVR promotional video, YouTube)

An additional WHDI unit can be added to the assembled handset (and is shown in the image above), allowing for a reported low-lag (less than 1ms) fully wireless gaming experience. The WHDI unit is not supplied as standard, but the company states it will offer it for $200.

As with the Oculus Rift, a software development kit (SDK) is to be made available with the ANTVR kit. The open-source nature of the kit means there is the potential for it to be used with a range of systems beyond those it initially supports.

“We wanted to make a gaming system that is universal, but it’s very difficult to make your product compatible with every kind of gaming platform,” Qin Zheng, ANTVR Technology’s founder, said in the TechCrunch report. “We’ve worked on making it compatible with Xbox, PC, and PlayStation, but there are many other gaming systems. If there are developers with other gaming systems or just device developers, they can choose to modify the firmware inside our hardware.”

Qin Zheng, ANTVR Technology’s founder (image credit: TechInAsia)

The Kickstarter campaign is being run along very similar lines to that of Oculus VR, up to and including an opportunity to visit the ANTVR Technology studios in Beijing for those willing to pay out $5,000 (plus meeting their own airfares, etc.), which will also include guided tours of China’s capital. For $270-$300, supporters get the ANTVR kit and other goodies, while for $470-$500, supporters get the kit with a WHDI wireless unit as well. Those offering less than $270 get to choose from other reward options. Qin hopes that following the Kickstarter campaign, ANTVR Technology will be able to start shipping kits in September 2014.

The following promotional video examines the ANTVR kit, and shows it in use with the additional WHDI wireless adapter.
