Virtual humans: helping us to talk about ourselves

Hi, I’m Ellie. Thanks for coming in today. I was created to talk to people in a safe and secure environment. I’m not a therapist, but I’m here to learn about people, and would love to learn about you. I’ll ask a few questions to get us started…

These are the opening comments from SimSensei, a virtual human application which forms part of a suite of software tools that may in the future be used to assist in the identification, diagnosis and treatment of mental health issues. It does so by engaging people in conversation and by using real-time sensing and recognition of nonverbal behaviours and responses which may be indicative of depression or other disorders.

SimSensei and its companion application, MultiSense, have been developed by the Institute for Creative Technologies (ICT) at the University of Southern California (USC) as part of wide-ranging research into the use of various technologies  – virtual humans, virtual reality, and so on – in a number of fields, including entertainment, healthcare and training.

In 2013, SimSensei and MultiSense underwent an extensive study, the results of which have just been published in a report entitled It’s only a computer: Virtual humans increase willingness to disclose, which appears in the August 2014 volume of Computers in Human Behavior.

It is regarded as the first study to present empirical evidence that the use of virtual humans can encourage patients to disclose information about themselves more honestly and openly than might be the case when they are directly addressing another human being. Patients may regard a human interviewer as passing judgement on what they are saying, making them less willing to reveal information they feel is embarrassing, or which may cause them emotional discomfort if mentioned.

Ellie is a virtual human, the “face” of SimSensei, designed to interact with human beings in a natural way and build a conversational rapport with them, as part of a suite of software which might be used to help in the diagnosis of mental ailments

SimSensei presents a patient with a screen-based virtual human, Ellie. The term “virtual human” is used rather than “avatar” because Ellie is driven by a complex AI programme which allows her to engage and interact with people entirely autonomously.

The focus of the software is to make Ellie appear as natural and as human as possible, in order for her to build up a rapport with the person talking to her. This is achieved by the software responding to subjects using both verbal and nonverbal communication, just as a human being would.

During a conversation, SimSensei will adjust its reactions to a real person’s verbal and visual cues. Ellie will smile in response to positive displays of emotion such as happiness, and will nod or offer appropriate verbal encouragement during pauses in the flow of conversation. Rapport is further built by the software’s ability to engage in small talk and give natural-sounding responses to comments. For example, when one subject mentioned he was from Los Angeles, her response was to say, “Oh! I’m from LA myself!”

SimSensei’s interaction with a patient is driven by MultiSense, technically referred to as a “multimodal perception software framework”. MultiSense uses a microphone and camera to capture and map the patient’s verbal and nonverbal responses to SimSensei (facial expression, the direction in which they look, body movements, intonations and hesitations in their speech pattern, etc.). This data is analysed in real-time, and feedback is then given to SimSensei, helping to direct its responses as well as allowing it to detect signs of psychological distress which might be associated with depressive disorders or conditions such as post-traumatic stress disorder (PTSD), and react accordingly.
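The perceive-analyse-respond loop described above can be sketched in miniature. To be clear, this is a toy illustration only: the cue names, thresholds and response labels below are invented for the example, and are not taken from MultiSense or SimSensei.

```python
# Toy sketch of a perception-to-response loop: observed nonverbal cues
# are mapped to a listener action. All names and thresholds are invented.

def choose_listener_response(cues):
    """Map a dictionary of observed nonverbal cues to a listener action."""
    if cues.get("smiling"):
        return "smile"                  # mirror positive affect
    if cues.get("pause_seconds", 0) > 2.0:
        return "verbal_encouragement"   # fill a lull in the conversation
    if cues.get("gaze_averted") and cues.get("looking_down"):
        return "gentle_follow_up"       # possible discomfort: probe softly
    return "nod"                        # default back-channel feedback

# Example: a subject looks away and down while speaking.
print(choose_listener_response({"gaze_averted": True, "looking_down": True}))
# → gentle_follow_up
```

The real framework obviously does far more (continuous audio-visual tracking, statistical models of distress indicators), but the shape is the same: sensed cues feed a policy which steers the virtual human’s next reaction.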

During the ICT study, SimSensei and MultiSense were used to screen 239 people aged between 18 and 65, who were invited to a setting where they could interact with SimSensei as if they were undergoing an interview for admission to a hospital or clinic. On their arrival, some of them were told that they would be interacting with a fully automated piece of software with no human involvement at all, while others were told they’d be interacting with a computer avatar operated by a human. In reality, participants were assigned a fully or semi-automated virtual human entirely at random and without regard to what they were told.
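The key point of the design above is that what participants were told and what they actually received were drawn independently, giving a fully crossed set of conditions. A minimal sketch of that assignment scheme (group labels and sizes are my own illustration, not the study’s protocol):

```python
# Illustrative sketch of a fully crossed random assignment: the framing a
# participant is given is drawn independently of the condition they
# actually receive. Labels are invented for the example.
import random

def assign(participants, rng):
    conditions = []
    for p in participants:
        told = rng.choice(["told_automated", "told_human_operated"])
        actual = rng.choice(["fully_automated", "semi_automated"])  # independent draw
        conditions.append((p, told, actual))
    return conditions

groups = assign(range(239), random.Random(0))
```

Because the two draws are independent, any difference in disclosure between framing groups can be attributed to belief about the interviewer rather than to how the virtual human was actually operated.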

When analysing the results, researchers found that those who believed they were talking purely to a virtual human were far more willing to divulge information and answer personal questions than those who had been told another human being was directing the virtual human. Researchers also noted that those who believed they were addressing a computer programme showed more intense emotional responses in their facial expressions than those who had been told beforehand that there was a human being operating the SimSensei virtual human.

MultiSense tracks the subject’s facial expression, head and eye movements, and body language, as well as the tonal quality of their voice and what they are saying. Here, the subject’s discomfort in talking about something results in gaze aversion, a downward look and fractured comments, all noted by MultiSense (and marked in red), which then helps drive the SimSensei virtual human’s reactions and verbal response (shown in the central information box outlined in blue) – click to enlarge

Feedback was also gathered from the test subjects after their interviews, with those who believed they had been interacting with a computer programme indicating they felt far more comfortable revealing information about themselves than had they been addressing a human being. By contrast, those who had been told that Ellie was being operated by a human being tended to indicate that they would have been more open in their responses to questions had they felt they were only addressing a software programme.

Jonathan Gratch is both the co-author of the study’s report and the Director of Virtual Human Research at ICT. Commenting on the study in a July 2014 article published in the USC News, he said, “We know that developing a rapport and feeling free of judgment are two important factors that affect a person’s willingness to disclose personal information. The virtual character delivered on both these fronts and that is what makes this a particularly valuable tool for obtaining information people might feel sensitive about sharing.”

Jonathan Gratch leading the USC’s ICT research into the use of virtual humans and related technologies in a wide range of areas

Gratch and his colleagues are keen to stress that SimSensei and MultiSense are not intended to replace trained clinicians in dealing with people’s health issues. However, the results of the ICT’s study suggest that, given patients are more willing to disclose information about themselves both directly and through their nonverbal reactions to the software, the use of virtual humans could greatly assist in the diagnosis and treatment process.

In particular, the ICT is already initiating a number of healthcare projects to further explore the potential of virtual humans and the SimSensei / MultiSense framework. These include helping to detect signs of depression, providing healthcare screening services for patients in remote areas, and improving communication skills in young adults with autism spectrum disorder. Research is also being carried out into the effective use of virtual humans as complex role-playing partners to assist in the training of healthcare professionals, as well as the use of the technology in other training environments.

As noted towards the top of this article, the SimSensei / MultiSense study is just one aspect of the ICT’s research into the use of a range of virtual technologies, including virtual reality and immersive spaces, for a wide range of actual and potential applications. I hope to cover some more of their work in future articles.


Images via the Institute for Creative Technologies and USC News.

Of storms in teacups and dear diary articles

For what was a fairly minor piece on Second Life, Karyne Levy’s August 1st piece for Business Insider, Second Life Has Devolved Into A Post-Apocalyptic Virtual World, And The Weirdest Thing Is How Many People Still Use It, created quite a storm in a teacup, ripples from which continue to spread, with accusations that it is “negative” and “poorly researched”.

Yes, it is a tad lightweight, has a ridiculous title which has no bearing on the content, and gives every indication of being written in a hurry. It also gets a couple of things wrong: sex has always been a part of SL, rather than something that filled the void left by big business; and it isn’t actually as easy to see adult themed items in search as is suggested (not without setting the right Maturity ratings first).

But “negative”? Not really. Sure, it quotes William Reed Seal-Foss saying that SL has stagnated (a view actually shared by many in SL); however, Ms Levy counters this herself, pointing out that the platform is pretty much still as popular among its users as it ever was. She also notes that it is embracing new technology like the Oculus Rift, and references Chris Stokel-Walker’s excellent 2013 article on SL for The Verge (which I reviewed when it first appeared).

Nor is any failure to mention the likes of the LEA or live performances or any of the hundreds of photogenic regions in SL evidence of a lack of research on Ms. Levy’s part. The reason such places aren’t mentioned is simple: they’re not the focus of the article.

Karyne Levy: “Dear diary” article (image via Business Insider)

The bottom line is that the article isn’t supposed to be any kind of analysis or examination of Second Life; nor is it an exploration of the creative opportunities within the platform. It is simply this: a “dear diary” account of one person’s venture back into Second Life and her experiences in doing so, and to judge it as anything else is to entirely miss the point.

As it is, and given the way the piece demonstrates just how shoddy the new user experience is, with its sink-or-swim approach to new users, I’d suggest Ms. Levy is to be commended for not sitting down and dashing off an article along the lines of “after ignoring it for X years, I tried SL again. It still sucks”.

Let’s face it, she comes in-world, apparently negotiates the Learning and Social Islands (both of which are anything but), and gets herself to a role-play region only to find herself summarily ignored. As experiences go, it’s hardly great, and I suspect there are more than a few who can attest to having a similar experience when coming into SL for the first time.

Fortunately, rather than running off never to be seen again, Ms. Levy uses the assistance of an acquaintance – Judy – to help her on her way. How and where Ms. Levy may have contacted Judy isn’t that important given the nature of the piece; the fact that she at least had someone willing to help her is.

Ms. Levy met up with Judy at the Caledon Oxbridge new user orientation centre, where she was able to acquaint herself with the rudiments of Viewer 3.x

Nor does it particularly matter whether or not Judy took Ms. Levy to the “right” places in SL, or that Judy’s personal view of SL seems oddly slanted. What matters is that she was able to provide help, and enabled Ms. Levy to have a little fun whilst in-world.

That last part is actually quite important. Having fun is what is likely to bring newcomers back to Second Life – probably more so than bashing writers for producing something which fails to measure up to some preconception of what their article “should” be about.

At the end of the day, there is nothing intrinsically negative about the Business Insider piece. It doesn’t malign the platform, or cast judgement on the initial experience the writer had when in-world. It doesn’t poke an accusatory finger at anyone or mock Judy’s SL / RL relationship. The most that can really be said about it is that it overplays the adult / sex element; but that’s not bad research, that’s unfortunate titillation.

Would I have preferred something with more meat on it? Yes; I’m not about to deny that. But by the same standard, I’m also not about to start clubbing Ms. Levy about the head with a rolled-up version of her article because it doesn’t meet my expectations. As strategies go, that’s probably going to be a lot less successful in getting her to write something more considered in the future than, say, inviting her back in-world and showing her the things she might enjoy writing about.

Reflections on a prim: a potential way to create mirrors in SL

Update: just after pushing this out (slightly prematurely – thank you, Mona, for pointing out the error), Gwenners poked me on Twitter and reminded me of the 2006 experiments with reflections, and supplied some links to shots from those heady days.

The ability to have honest-to-goodness mirror surfaces in Second Life which could reflect the world – and avatars – around them has often been asked for over the years, but has tended to be avoided by the Lab as it’s been seen as potentially resource-intensive and not the easiest thing to achieve. As a result people have in the past played around with various means to try to create in-world mirrors.

Zonja Capalini posted an article on using Linden water as an avatar mirror as far back as 2009

Zonja Capalini, for example, was perhaps one of the first to blog about using Linden water as a mirror (or at least the first I came across, thanks to Chestnut Rau and Whiskey Monday), and she certainly came up with some interesting results, as shown on the right, which I tried out for myself back in 2012.

However, achieving results in this way is also time-consuming and not always practical; you either have to purpose-build a set, or try shoving a jack under a region and hope you can persuade it to tip over on its side…

But there is hope on the horizon that perhaps we may yet see mirrors in SL (and OpenSim).

While it is still very early days, Zi Ree of the Firestorm team has been poking at things to see what might be achieved, and has had some interesting results using some additional viewer code and a suitable texture.

This has allowed Zi to define a basic way of generating real-time reflections, including those of avatars, on the surface of a prim. The work is still in its early days, and Zi points out that she’s not a rendering pipeline expert, so there may be under-the-hood issues which have not come to light as yet. However, she has produced a number of videos demonstrating the work to date (see the sample below), and has raised a JIRA (STORM-2055) which documents the work so far; self-compilers can use the patch provided in the JIRA if they want to try things for themselves.

Currently, the code only works when the viewer is running in non-deferred rendering (i.e. with the Advanced Lighting Model turned off). This does tend to make the in-world view a little flat, particularly if you’re used to seeing lighting and shadows.

However, having tried a version of the SL viewer with the code applied to it, I can say that it is very easy to create a mirror – all you need is a prim and a texture, make a few tweaks to some debug settings, and a possible relog. The results are quite impressive, as I hope the picture below demonstrates (click to enlarge, if required).

I see you looking at me …

Performance-wise, my PC and GPU didn’t seem to take too much of a hit – no doubt helped by the fact the mirror effect only works in non-deferred mode at present. Quite what things would be like were this tried with ALM active, with shadows and lighting enabled and avatars moving around in real-time, could be a very different story.

As the effect is purely viewer-side, it does run up against the Lab’s “shared experience” policy; not only do you need a viewer with the code to create mirror surfaces, you need a viewer with the code to see the results. People using viewers without the code will just see a transparent prim face (or if the mirror texture is applied to the entire prim, nothing at all while it is 100% transparent).

This means that in order for mirrors of this nature to become the norm in Second Life, the approach is going to have to be adopted by the Lab. Ideally, it would also work with the Advanced Lighting Model active. Zi additionally notes that some server-side updates are required in order for a simulator to be able to save things like the reflectiveness of a given mirror surface.

It’s all done with mirrors, y’know … (click to enlarge, if required)

Whether this work could herald the arrival of fully reflective surfaces in-world remains to be seen. It’s not clear how much interest in the idea has been shown by the Lab, but hopefully with the JIRA filed, they’ll take a look at things. There’s little doubt that if such a capability could be made to happen, and without a massive performance or system hit, then it could prove popular with users and add further depth to the platform.

Lab delay introduction of new Skill Gaming Policy

On Wednesday July 9th, Linden Lab announced forthcoming changes to their Skill Gaming policy, which were due to come into force as from Friday August 1st, 2014. These would bring with them stricter controls over the operation of games of skill in Second Life, and see the introduction of a new region type – the Skill Gaming Region – which will only be accessible to those Second Life users who are of sufficient age and are located in a jurisdiction that Linden Lab permits for this kind of online gaming activity.

However, on Tuesday July 29th, 2014, the Lab issued a blog post stating that the new Skill Gaming policy will not now take effect until Monday September 1st, 2014, pointing to the number of applications received as being the reason for the delay.

The update on the introduction of the revisions to Skill Gaming in Second Life reads in full:

As we recently blogged, we have a new policy for Skill Gaming in Second Life. In short, skill games that offer Linden Dollar payouts will be allowed in Second Life, but each game, its creator, its operator, and the region on which it’s operated must be approved by Linden Lab.

Today, we are changing the date that the changes described in our previous blog post go into effect. Instead of starting on August 1, the updated Skill Gaming Policy will go into effect on September 1, 2014. The original blog post and the FAQs will also be updated to reflect this new deadline.

Since our original announcement, we’ve received many applications from Second Life users who want to become approved skill game creators and operators. By moving the date back, we’ll be able to process a larger number of applications and also offer creators more time to make necessary changes to their games.

If you would like to apply to become an approved skill games creator and/or operator, you can do so through Echosign.

Infrastructure support for the new Skill Gaming regions has already been deployed to the main grid as a part of the server deployments of weeks 28 and 29.

The Drax Files Radio Hour: lunch and Second Life

Episode #28 of The Drax Files Radio Hour was posted on Friday July 18th. After a measure of disappointment on my part that the promised interview with Jacquelyn Ford Morie didn’t appear – for fully understandable reasons – this segment makes up for it with a chat with long-term Second Life businessman Lislo Mensing, or as he’s known in the physical world, Stefan Weiss.

Stefan is the driving force behind a recreation of the heart of Munich in Second Life. He’s also the owner of Teledollar, a Linden Dollar Authorised Reseller, and he has some interesting first-hand experiences of trying to marry-up the virtual and physical worlds.

This is perhaps the most informal interview Drax has conducted for the radio show to date, taking place as it does over lunch in the physical Munich, where he met Stefan while enjoying his summer vacation in Germany. As such, it is both the first in a trio of informal “lunch with a lifer” interviews and something of an introduction to Bavarian cuisine!

Virtual Munich, which dates from 2007, is a recreation of the centre of Munich. It features many of the landmarks from its physical namesake, including several of the city’s churches and the old city gates. All of these, while prim builds, have been constructed using around 6,000 photos taken of the actual city of Munich, allowing as much as possible of the original’s essence to be recaptured within the virtual constraints of two regions in SL. Streets and plazas are faithfully recreated, and even a portion of the underground (subway) transit system has been reproduced (tickets L$69), which connects the heart of the city to the surrounding rural regions.

In developing the build, Stefan saw the potential for a symbiotic way of promoting the virtual in the physical and vice-versa. Approaching the Munich civic authorities, he put forward the idea that virtual Munich could be used as a means of promoting the physical Munich, offering people who might be considering a visit to the city the means to immersively learn about it and explore it prior to actually visiting. There was initially a positive response to the idea, and even talk of including the virtual version of Munich in documentation about the city’s 850th anniversary.

Unfortunately, all this came to nought when, in late 2007, German media outlets (and others around the world) followed the UK’s Sky News in running exposés on sexual ageplay rings within Second Life. Understandably, support for the project from both Munich’s civic authorities and from businesses rapidly declined in the wake of the reports; so much so that Stefan was refused permission to take photos of the non-public areas of some of the historic buildings which he had hoped to be able to share with people by recreating them in-world.

A view across the Munich skyline in Second Life

While there has always been much speculation as to the impact these and other such exposés had on the wider public and business view of Second Life, Stefan’s frank description of the situation he personally faced adds real perspective to one of the factors that undoubtedly led to SL’s fall from grace in the media’s eyes, and which may have had a far greater impact on the media’s perception of the platform than its inability to live up to the hype created around it.

Stefan Weiss as caught at a Munich SL users’ meet-up (image by Xphile Boucher, via The Drax Files Radio Hour website)

Beyond this, the conversation touches on the relevance of virtual worlds, with Stefan pointing out that really, not much has changed over the years where the broader attitude towards VWs is concerned. This, he suggests, is largely due to what I’m henceforth going to call the Pamela Effect (particularly after the re-run of Drax’s interview with her in segment #27 of the Drax Files Radio Hour): most of those in the “mainstream” market simply don’t see VWs as being in any way relevant to their physical and digital lives and activities, and so don’t see why they should bother giving VWs a go.

Not only is this attitude perhaps common among the vast majority of Internet users, it obviously also encompasses businesses, who have far more accessible means at their disposal for marketing their products and services and for reaching an audience. It is relevance – far more than issues of getting the keyboard and mouse “out of the way” – which is likely to be the major issue for anyone trying to drive a virtual world further into the mainstream consciousness, at least for the foreseeable future.

I’ve mentioned three reasons why I think this is the case in a previous article (although these are by no means the only reasons for my feeling this way). Stefan points to a couple more; things which are regularly discussed, at least among those of us already engaged in VWs: scalability (in terms of having an environment which can actually support compelling, mass audience / mass participation activities), and accessibility. In this latter respect, Tony Parisi is more than likely correct in his view that unless a virtual world embraces the ease of access presented by the web, it’s going to have trouble making its presence felt.

Relevance is also something that came to mind when the Oculus Rift received its obligatory mention in the interview. While much has been made of the potential of VR bringing about a renaissance in interest in virtual worlds, very little has really been said about the potential for VR to do exactly the reverse, and leave virtual worlds still locked into a narrow niche within the mainstream market.

Simply put, if VR brings about the kind of situation discussed in the show – allowing hundreds of thousands of people world-wide to sit down and witness a World Cup final as if they were there, or a Wimbledon championship, or take a ride into space, or stand on the surface of Mars, or explore the wreck of the Titanic, or do a hundred other things that might not otherwise be possible for them, and share the experience with others – then why should they even bother farting around with a virtual world?

Towards the end of the piece, things get a little confusing as other virtual environments, such as Twinity and Google Lively, are touched upon amidst some lunchtime chuckles. There’s also a brief overview of the Teledollar operation before time catches up with Stefan and Drax, and things are cut short by the needs of the physical world and work.

This is very much a curate’s egg of a conversation; there is a lot discussed and mentioned which offers food for thought. The over-the-lunch-table nature of the conversation lends a curious tilt to things, helping to add flavour to the proceedings and giving it an oddly intimate feel for the listener, as if we’re sitting at a table close by, overhearing their discussion as they eat; and what interesting listening it makes!

Second Life helps cane growers learn about sustainable farming practices and more

There is no doubting that Second Life is an excellent platform for teaching and learning. That’s been demonstrated time and again, with many and varied educational and distance learning programmes being run through the platform, and with many schools, colleges, universities and other organisations making use of Second Life for a wealth of education and learning activities over the years.

One of the more intriguing means of using the platform for educational purposes has recently been highlighted in an Australian Broadcasting Corporation website and video report, Queensland’s Cane Farmers Learn About Climate Change Via Virtual Reality World, which outlines a project initiated in 2012 by the University of Southern Queensland (USQ), Australia, and which is now being extended.

Sweet Success is a programme developed by the Australian Digital Futures Institute (ADFI) and the International Centre of Applied Climate Sciences (ICACS) at USQ. It uses machinima created in Second Life to encourage Queensland’s sugarcane farmers to consider sustainable farming practices (including their own environmental impact on the land), and to stimulate discussion about how to incorporate an understanding of climate risk into their decision-making.

Sweet Success sought to better inform sugar cane farmers on climate and environmental impact using digital techniques, including machinima filmed in Second Life

The videos are set in an environment typical of that found in Queensland’s cane growing region, and feature a number of individuals reflecting the character and disposition of Queensland cane farmers. Lasting some 3-5 minutes, the films serve both as a focal point for discussion and as a means to introduce the farmers to the climate information, interactive models, etc., which might be used to better inform their farming decisions.

The initial programme involved around 20 sugar cane farmers who were able to watch the films, study the material and discuss the issues and ideas raised. While there was some initial scepticism, the farmers admitted the videos were a positive means of passing on information on things they may not have thought about.

Dr. Helen Farley, one of the researchers involved in Sweet Success, and her SL alter-ego

Dr. Helen Farley, one of the Digital Futures faculty members involved in the project, and herself a long-term advocate for the use of virtual worlds for learning and teaching in higher education, describes the decision to use Second Life as being primarily a matter of finance and convenience: Second Life allowed the films to be put together at a far lower cost, and much more quickly, than would have been the case with live-action location shooting.

Matt Kealley, senior manager of environment and natural resources for the Canegrowers industry group, sees the approach as potentially offering the means to deliver a lot of information on farming, climate, weather and so on to his members. He also believes that once the novelty of being presented with a film shot in a virtual environment had worn off, his members found the information presented to be “compelling” in content and value.

In fact, such has been the success of the pilot programme that the project has now been expanded to include some 400 Queensland sugarcane growers.

Dr. Kate Reardon-Smith of the ACSC

While the cost-effective nature of using Second Life as a film medium might have been the primary consideration in using it for the Sweet Success films, Dr. Farley, together with fellow researcher, Dr. Kate Reardon-Smith, believes that the approach has other benefits as well.

Leading a series of presentations on the work, both Dr. Farley and Dr. Reardon-Smith point to the use of Second Life as being ideal for addressing matters of climate risk assessment, sustainable farming methods and so on for a wide variety of farming locations and systems, simply through the use of culturally appropriate clothing, language and design. In addition, the digital nature of the finished product makes it easy to package with the supporting material for dissemination anywhere in the world.

Nor is Sweet Success the only activity undertaken by USQ to use Second Life as a means of educating farmers. In 2010, ICACS, under its old title of the Australian Centre for Sustainable Catchments (ACSC), joined with the Asia-Pacific Network to use Second Life avatars as a means to present real world climate-based scenarios to farmers in the Andhra Pradesh region of India. The aim of the project was to challenge farmers about on-farm decisions that involve seasonal climate risk. As a distance learning project, it was delivered to Internet kiosks within the region where farmers could then discuss and debate the issues raised.

The ACSC-APN project in the Andhra Pradesh region of India also used Second Life as a means to engage farmers on the subject of seasonal climate risk and farming decisions

All told, both of these projects present a unique and fascinating extension of the use of Second Life as an educational medium and for distance learning.


All images via the University of Southern Queensland