Lab issues viewer with a revised log-in screen

The Lab has been experimenting with a revised log-in screen for the official viewer. The viewer, version 3.7.14.292660, is described as offering “a simple and clean login screen for new users.”

In actual fact, the viewer offers two log-in screens, although one of them (shown in the image below) will only be displayed the very first time a new user runs the viewer (or if an existing user performs a completely clean install of this release candidate).

The log-in splash screen new users will see when launching the viewer for the first time (or existing users will see following a clean install)

Those who have previously logged in to Second Life (or have not performed a clean install) will see a more familiar log-in screen on starting the viewer, and will immediately notice that the log-in credentials area has been relocated to the top of the screen (see the image below).

The keen-eyed may also notice that the Create Your Account option, which used to appear over on the right of the log-in credentials area and which was introduced as the Lab was making the viewer available through Steam, has been completely removed.

The new log-in splash screen most users will encounter sees the log-in credentials area moved to the top of the screen and the removal of the Create Your Account option

The new header area offers three independent log-in options:

  • At last location – as most users will be familiar with, this logs you in to your last location; you’ll also be logged in to that location if you type in an avatar’s name and password and tap ENTER, as per the current viewer log-in screen
  • My Favourite Places – a drop-down which lets you choose to log in to your home location, or to any landmark you have dragged and dropped into the viewer’s Favourites Bar / the My Favourites folder in your Inventory
  • The familiar Type a Location text entry box, allowing you to type in the name of a specific region / sim to which you want to log in.

Note that if you have the grid selection drop-down active, it appears to the right of the log-in options, as shown in the enlarged view below.

A closer look at the revised log-in area and the three separate options

Relocating the log-in area like this certainly makes it a lot more attention-grabbing for new users, although existing users will likely have to go through a period of muscle-memory retraining to get used to things, assuming this progresses to become the de facto release viewer.

I suspect the three log-in options, with their separate buttons, may generate a mixed response among existing users; I’m not altogether convinced by them myself. I assume things have been done this way due to the addition of the My Favourite Places drop-down, combined with feedback from new users as to what they’d like to see. However, taken as a whole, the approach comes over as clumsy and potentially less than intuitive, particularly when compared to the older version, which offered a logical left-to-right flow of information.

Outside of the log-in screen changes, this version of the viewer doesn’t appear to contain any additional functional updates, but it does include a fix to prevent the viewer from crashing when opening Preferences.

One thing I did notice while fiddling with this version of the viewer is that if you already have landmarks in your viewer’s Favourites Bar / the My Favourites folder, they may not actually appear in the drop-down in the log-in area until after the first time you’ve used this viewer to log in to SL. Similarly, should you subsequently log in with another version of the SL viewer, you will need to log in to SL at least once with this viewer to get your Favourites displayed in the drop-down again. Given most users don’t hop between different versions of the same viewer that often, this shouldn’t be a problem for those opting to grab a copy of this viewer and take it for a run.

At the time of writing, the viewer has yet to be added to the official Alternate Viewers wiki page, as it is experimental. I suspect it will appear there soon if the project is carried forward. In the meantime, please use the link to the release notes and download options at the top of this page if you wish to look at the viewer yourself.


Viewer release summaries 2014: week 32

Updates for the week ending: Sunday August 10th, 2014

This summary is published every Monday and is a list of SL viewer / client releases (official and TPV) made during the previous week. When reading it, please note:

  • It is based on my Current Viewer Releases Page, a list of all Second Life viewers and clients that are in popular use (and of which I am aware), and which are recognised as adhering to the TPV Policy. This page includes comprehensive links to download pages, blog notes, release notes, etc., as well as links to any / all reviews of specific viewers / clients made within this blog
  • By its nature, the summary presented here will always be in arrears; please refer to the Current Viewer Release Page for more up-to-date information

Official LL Viewers

  • Current Release: updated to version 3.7.13.292225 on August 4 (formerly the group ban RC) – download, release notes
  • Release channel cohorts (See my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself):
    • New Log-in Screen RC viewer version 3.7.14.292660 released on August 6 – a simple and clean login screen for new users (download and release notes)
    • Library Refresh RC viewer updated to version 3.7.14.292638 on August 6 – contains an update to a large set of libraries used by the viewer to provide security, stability and consistency improvements to this and future viewers (download and release notes)
  • Project viewers:
    • No updates.

LL Viewer Resources

Third-party Viewers

V3-style

  • No updates

V1-style

  • Cool VL Viewer updated on August 9 – Stable version 1.26.12.11 and Legacy version 1.26.8.69 – core updates: please refer to the release notes

Mobile / Other Clients

  • No updates

Additional TPV Resources

Related Links

Virtual humans: helping us to talk about ourselves

Hi, I’m Ellie. Thanks for coming in today. I was created to talk to people in a safe and secure environment. I’m not a therapist, but I’m here to learn about people, and would love to learn about you. I’ll ask a few questions to get us started…

These are the opening comments from SimSensei, a virtual human application and part of a suite of software tools which may in the future be used to assist in the identification, diagnosis and treatment of mental health issues. It does so by engaging people in conversation, and by using real-time sensing and recognition of nonverbal behaviours and responses which may be indicative of depression or other disorders.

SimSensei and its companion application, MultiSense, have been developed by the Institute for Creative Technologies (ICT) at the University of Southern California (USC) as part of wide-ranging research into the use of various technologies – virtual humans, virtual reality, and so on – in a number of fields, including entertainment, healthcare and training.

In 2013, SimSensei and MultiSense underwent an extensive study, the results of which have just been published in a report entitled It’s only a computer: Virtual humans increase willingness to disclose, which appears in the August 2014 volume of Computers in Human Behavior.

It is regarded as the first study to present empirical evidence that the use of virtual humans can encourage patients to disclose information about themselves more honestly and openly than might be the case when directly addressing another human being. Patients may regard a human interviewer as passing judgement on what they are saying, making them less willing to reveal information about themselves which they feel is embarrassing, or which may cause them emotional discomfort if mentioned.

Ellie is a virtual human, the “face” of SimSensei, designed to interact with human beings in a natural way, and build a conversational rapport with them as a part of a suite of software which might be used to help in the diagnosis of mental ailments

SimSensei presents a patient with a screen-based virtual human, Ellie. The term “virtual human” is used rather than “avatar” because Ellie is driven by a complex AI programme which allows her to engage and interact with people entirely autonomously.

The focus of the software is to make Ellie appear as natural and as human as possible in order for her to build up a rapport with the person who is talking to her. This is achieved by the software responding to subjects using both verbal and nonverbal communication, just like a human being.

During a conversation, SimSensei will adjust its reactions to a real person’s verbal and visual cues. Ellie will smile in response to positive displays of emotion (happiness, etc.), nod or offer appropriate verbal encouragement during pauses in the flow of conversation, and so on. Rapport is further built by the software being able to engage in small talk and give natural-sounding responses to comments. For example, when one subject mentioned he was from Los Angeles, her response was to say, “Oh! I’m from LA myself!”

SimSensei’s interaction with a patient is driven by MultiSense, technically referred to as a “multimodal perception software framework”. MultiSense uses a microphone and camera to capture and map the patient’s verbal and nonverbal responses to SimSensei (facial expression, the direction in which they look, body movements, intonations and hesitations in their speech pattern, etc.). This data is analysed in real time, and feedback is then given to SimSensei, helping to direct its responses, as well as allowing it to detect signs of psychological distress which might be associated with depressive disorders or conditions such as post-traumatic stress disorder (PTSD), and react accordingly.
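
For those curious about how such a sense-infer-react loop might hang together in code, the sketch below gives a minimal, purely illustrative outline in Python. It is not based on the actual MultiSense or SimSensei software; every name in it (PerceptionFrame, analyse, respond) and every threshold is a hypothetical stand-in for what is, in reality, a far more sophisticated system working on continuous audio and video streams.

    from dataclasses import dataclass

    # Hypothetical sketch of a multimodal perception feedback loop, loosely
    # modelled on the MultiSense -> SimSensei pipeline described above.
    # None of these names or thresholds come from the actual software.

    @dataclass
    class PerceptionFrame:
        """One time-slice of sensed nonverbal cues."""
        gaze_averted: bool      # subject is looking away from the camera
        smile_intensity: float  # 0.0 (neutral) to 1.0 (broad smile)
        speech_pause_s: float   # length of the current pause, in seconds

    def analyse(frame: PerceptionFrame) -> str:
        """Map raw cues to a coarse affective state (hugely simplified)."""
        if frame.gaze_averted and frame.speech_pause_s > 2.0:
            return "distress"
        if frame.smile_intensity > 0.5:
            return "positive"
        return "neutral"

    def respond(state: str) -> str:
        """Choose the virtual human's reaction to the inferred state."""
        return {
            "positive": "smile back",
            "distress": "nod and offer gentle verbal encouragement",
            "neutral": "continue listening",
        }[state]

    # Example: a subject pauses and looks away while discussing something
    # difficult, so the virtual human nods and offers encouragement.
    frame = PerceptionFrame(gaze_averted=True, smile_intensity=0.1,
                            speech_pause_s=2.5)
    print(respond(analyse(frame)))

However simplified, the basic shape is the one described above: sense the subject’s cues, infer a state from them in real time, and use that inference to drive the virtual human’s next reaction.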

During the ICT study, SimSensei and MultiSense were used to screen 239 people aged between 18 and 65, who were invited to a setting where they could interact with SimSensei as if they were undergoing an interview for admission to a hospital or clinic. On their arrival, some of them were told that they would be interacting with a fully automated piece of software with no human involvement at all, while others were told they’d be interacting with a computer avatar operated by a human. In reality, participants were assigned a fully or semi-automated virtual human entirely at random and without regard to what they were told.

When analysing the results, researchers found that those who believed they were talking purely to a virtual human were far more willing to divulge information and answer personal questions than those who had been told another human being was directing the virtual human. Researchers also noted that those who believed they were addressing a computer programme showed more intense emotional responses in their facial expressions than those who had been told beforehand that there was a human being operating the SimSensei virtual human.

MultiSense tracks the subject’s facial expression, head and eye movements, and body language, as well as the tonal quality of their voice and what they are saying. Here, the subject’s discomfort in talking about something results in gaze aversion, a downward look, and fractured comments, all noted by MultiSense (and marked in red), which then helps drive the SimSensei virtual human’s reactions and verbal response (shown in the central information box outlined in blue)

Feedback was also gathered from the test subjects after their interviews, with those who believed they had been interacting with a computer programme indicating they felt far more comfortable in revealing information about themselves than had they been addressing a human being. By contrast, those who had been told that Ellie was being operated by a human being tended to indicate that they would have been more open in their responses to questions if they had felt they were only addressing a software programme.

Jonathan Gratch is both the co-author of the study’s report and the Director of Virtual Human Research at ICT. Commenting on the study in a July 2014 article published by USC News, he said, “We know that developing a rapport and feeling free of judgment are two important factors that affect a person’s willingness to disclose personal information. The virtual character delivered on both these fronts and that is what makes this a particularly valuable tool for obtaining information people might feel sensitive about sharing.”

Jonathan Gratch, who is leading the USC ICT’s research into the use of virtual humans and related technologies in a wide range of areas

Gratch and his colleagues are keen to stress that SimSensei and MultiSense are not intended to replace trained clinicians in dealing with people’s health issues. However, the results of the ICT’s study suggest that, given patients are more willing to disclose information about themselves both directly and through their nonverbal reactions to the software, the use of virtual humans could greatly assist in the diagnosis and treatment process.

In particular, the ICT is already initiating a number of healthcare projects to further explore the potential of virtual humans and the SimSensei / MultiSense framework. These include helping to detect signs of depression, providing healthcare screening services for patients in remote areas, and improving communication skills in young adults with autism spectrum disorder. Research is also being carried out into the effective use of virtual humans as complex role-playing partners to assist in the training of healthcare professionals, as well as the use of the technology in other training environments.

As noted towards the top of this article, the SimSensei / MultiSense study is just one aspect of the ICT’s research into the use of a range of virtual technologies, including virtual reality and immersive spaces, for a wide range of actual and potential applications. I hope to cover some more of their work in future articles.

Related Links

Images via the Institute for Creative Technologies and USC News.