VWBPE 2014: Philip Rosedale keynote – but is technology really the key to mass adoption?

On Wednesday April 9th, the 2014 Virtual Worlds Best Practices in Education conference opened with a keynote address by Philip Rosedale. In it, he covers a lot of the ground he laid out at the SVVR meet-up at the end of March (which I’ve covered here) in terms of communications in virtual worlds, although things were at times couched in more general terms, rather than being specifically framed by demonstrations of some of what High Fidelity has been doing.

The following is the official video of the presentation, recorded by Mal Burns on behalf of VWBPE. Timestamps within the notes indicate the points at which Philip’s exact comments can be heard.

After a brief introduction by Kevin Phelan (Phelan Corrimal in SL), Philip provides a short overview of his own attraction to virtual worlds – born out of a desire to “build crazy things”, which extended into imagining what it would be like to build a virtual world able to mimic the richness of the real world. In this, Ernest Cline’s Ready Player One is mentioned, as is Second Life’s role as a pioneer and a validation of what might be achieved were the right tools in place to enable a billion, rather than a million, people to engage in virtual world spaces.

[12:30] He particularly sees the mouse and the keyboard as major barriers to entry, as they require complex manipulation (keystrokes and mouse movements) to achieve avatar movement, while limiting communications by disallowing facial expressions and / or natural gestures. In this, he points to emerging hardware such as the Razer Hydra, Sixense STEM and 3D cameras as overcoming these limitations and opening the floodgates to virtual world adoption.

[23:55] Latency is also raised as a bugaboo issue. While I agree that reducing the level of latency is good for communications, I’m not convinced by all of the arguments put forward (for example, I doubt most people using a mobile ‘phone are even aware of the 500 millisecond delay, much less find it a reason to loathe using their cellphone), which is not to say I think that latency isn’t an issue worthy of being addressed as far as might be possible.

[31:50] Identity disclosure – and our right to determine what is disclosed of our identity, and how – is very much a fundamental part of trusted use of any system, and as such is key to the future of virtual worlds. This is something he spoke about at SVVR and has blogged on as well, indicating that the level of trust sought and identity given should, as with real life, be more fluid, depending upon what we’re doing and where we’re going. This spills over into areas of commerce, and into the idea of having the freedom to move between the kind of multiple worlds the metaverse is envisaged as comprising – doing so with confidence and trust in the different environments, and with control over what we are willing to reveal to them, rather than having them determine what they should take.

[35:07] For virtual worlds to really grow, he believes they need to be more like the Internet, with people running their own servers and links between them operating much as the web does today, allowing for complete, continuous interconnectedness between servers and worlds, built upon open-source software (again: trust), and which can be properly scaled – such as through High Fidelity’s examination of distributed computing (again, as I point out in covering the SVVR talk – think SETI@home).

[49:50] Q&A.

The presentation is interesting, and couched in general terms rather than being specific to High Fidelity – which is not inappropriate for the venue. Little of it comes over as hype or a sales pitch. I found the comments on identity, together with the statements made at SVVR and in the High Fidelity blog post, to be very much on-message and highly relevant. The distributed computing approach is an interesting idea as well, and possibly one with a lot of potential if the right value proposition is offered to people – such as rewarding them with crypto-currency credits they can spend on goods and services (or even cash-out over time?).

Where I do perhaps have an issue with things is in the view that the barrier to the mass adoption of VWs is primarily one of technology. Technology can certainly enhance our experiences once we’re in a virtual world, no doubt about that. There is also no denying that with something like SL, more needs to be done to reduce the initial learning curve for someone entering the environment.

Are motion controllers and the like really the key to unlocking people’s ability to recognise virtual worlds as a value proposition for their time, or is something else actually required? (image courtesy of Razer)

However, like it or not, springboarding VWs into mainstream adoption isn’t purely a technical issue; there is a social element as well. There need to be compelling reasons to encourage people to turn to VWs instead of other possible options. Facial recognition software and motion controllers may well be wonderful for translating your expressions and gestures to an avatar when communicating with someone on the other side of the world – but, frankly, so are a webcam and monitor screen. As such, for many, the technology alone will not be the value proposition that encourages them to become more involved in VWs. There needs to be something more.

The need for a real value proposition is perhaps most clearly exemplified by Pamela in the 8th segment of The Drax Files Radio Hour. She dismisses any involvement in a virtual world because she sees no advantage in it compared to what she can do now. Hers is unlikely to be a minority attitude.

That Philip Rosedale dismisses this social element so readily in the Q&A session isn’t entirely surprising – he is a technologist, after all – but given his experience in the field, it is disappointing. Technology can and will make immersive VW environments a lot easier to use, for sure. But I suspect the company or group that really cracks the nut of presenting VWs in terms of compelling, mainstream activities people come to see as a daily part of their lives will actually be more responsible for unlocking the door to mass adoption than the company or group that provides a technologically superior means of accessing a VW.

LL Terms of Service: Ebbe – “we’re working on it…”

Update, April 13th: the full transcript of Ebbe’s VWBPE 2014 address is now available.

On Friday April 11th, Ebbe Altberg, Linden Lab’s CEO, addressed a packed amphitheatre at the 2014 Virtual Worlds Best Practices in Education (VWBPE) conference in Second Life. Some 200 people were in attendance at what was an approximately 90-minute session, which comprised an opening statement from Ebbe, followed by a Q&A / discussion session.

I’ll have a full transcript of the meeting available shortly. However, as a part of his opening statement, Ebbe made a series of comments relating to the Lab’s Terms of Service, which I think are worth highlighting on their own. So here is a full transcript of his comments on the subject:

Terms of Service. I am working with my Legal Counsel to try to figure out how we can make it more obvious – or very obvious – that the creators of the content own the content, and we obviously have no intent of ever stealing your content or profiting off of your content independently of the creators in some fashion.

The current terms might indicate that we might somehow have some plan to steal people’s content and somehow profit from it for ourselves, without benefitting the creator, and that’s obviously not our intent at all. It would be very damaging to our business if we started to behave in that way because this whole platform is all about the content you all create. And if you can’t do that, and trust that it is yours, that’s obviously a problem. So I’m working on that, and I can ask you right now to trust us that we’re not going to do what the current clause might suggest we’re going to do, but we’re working on some simple tweaks to the language to make that more explicit.

We also have no interest in locking you in; any content that you create, we feel you should be able to export, and take and save and possibly if you want to move to another environment or OpenSim, that should be possible. So we’re not trying to lock you in either. Obviously, it’s very important to us to get content both in and out, so I just want to put that right out there.

Quite what will come out of this obviously remains to be seen, as will whether or not the changes successfully quell all concerns. However, it would appear that the wheels are finally in motion, and that hopefully, an equitable resolution will be forthcoming.

SL projects updates 15/2: group bans; group chat and more

SL Server Deployments week 15 – recap

There were no deployments.

Upcoming Releases

As noted in part 1 of this report, week 16 should see a server maintenance update deployed which includes a fix for BUG-5533, “llTeleportAgent() and llTeleportAgentGlobalCoords() can break any script in any attached object that contains a change event.”

Commenting on this at the Server Beta User Group meeting on Thursday April 10th, Maestro Linden said:

Kelly found some race conditions which likely lead to the breakage, and the fix appears to work. Right now it’s on Aditi, in the DRTSIM-251 channel. Ahern is on that channel, though I found out that almost all of Ahern’s parcels have scripts disabled. However, “Tehama” is also on that channel, and does allow scripts on some of its parcels. It’s public access, so that’s a good place to check the fix out if you’re interested.
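To illustrate the class of script affected, here is a minimal, hypothetical example (not from the JIRA report) of the kind of attached-object script BUG-5533 could break – any attachment containing a changed event, however simple, was vulnerable when the wearer was moved via llTeleportAgent():

```lsl
// Hypothetical minimal attachment script containing a changed() event.
// Under BUG-5533, scripts like this in any attached object could be
// broken by llTeleportAgent() / llTeleportAgentGlobalCoords().
default
{
    state_entry()
    {
        llOwnerSay("Attachment script running");
    }

    changed(integer change)
    {
        // CHANGED_TELEPORT is raised in attachments when the wearer teleports
        if (change & CHANGED_TELEPORT)
        {
            llOwnerSay("Teleport detected; now in " + llGetRegionName());
        }
    }
}
```

Wearing something like this on Tehama (DRTSIM-251) and being teleported by a script with the appropriate Experience permissions would be one way to check whether the fix holds.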

Server Beta Meeting, Thursday April 10th

SL Viewer

On Wednesday April 9th, the Lab released the VoiceMO release candidate viewer, version 3.7.6.288881 (release notes), into the viewer release channel. This RC combines the updates found in the Voice RC viewer (3.7.5.288516) and the Merchant Outbox RC viewer (3.7.5.288408), both of which also remain in the release channel for the time being.

Also on Wednesday April 9th, the Lab issued a new Maintenance viewer RC, version 3.7.6.288799, which includes some 54 MAINT fixes from the Lab – see the release notes for details. Notably, this includes a fix for the Male Avatar Chest Rendering Broken / Mesh Clothes Breaking on Male Avatars (BUG-5537) issue I covered in week 13. The fix for this is given as “MAINT-3896 Male Avatar chest rendering broken – chest shows bigger for onlookers, than to oneself – effects attachments and all worn mesh types”. This update also includes a change to remove the “Start Second Life now?” dialogue which is displayed after installing the viewer, and which resulted in the viewer being started with system admin privileges on Windows systems.

Commenting briefly on viewer matters, Simon Linden indicated that he has also fixed the bug whereby terrain textures aren’t updated when you change the heights or textures, although I gather this has yet to reach a viewer.

Group Chat

A further Group Chat test was run on Aditi. “I’m looking into the overhead for group chat on maintaining that list of people in the chat,” Simon explained as the test kicked off. “I think it becomes heavy load in a group with a lot of people on-line.” The test proceeded along similar lines to those carried out previously.

Group Bans

Baker Linden reports that most of the viewer-side bugs related to group bans have been fixed. However, Caleb Linden found an ugly bug in the back-end code whereby only the first name on a list of people being invited into a group would be checked to see if that person had previously been banned (and thus prevented from joining); anyone else further down the list who had previously been banned would be allowed to re-join on receiving the invitation.

A fix has yet to be written, but Baker doesn’t see it as a hard task to do. “My fix will check each one and if you’re banned, it will gracefully skip the banned agent’s invite,” he said when describing how things will work.

Other Items

Aditi Log-in Issue / Inventory Update Issue

The script which should synchronise people’s passwords and inventories between Agni (the main grid) and Aditi (the beta grid) is not functioning correctly (see BUG-5563). As I’ve previously reported, this means that following a password change, people are finding they must continue to use their old password to access Aditi and are not seeing their inventory update. Commenting on the issue, Maestro Linden said, “We’re not sure why it’s not working – the script which is supposed to synchronize the accounts says that it updates them on Aditi.” So at the moment, any fix is dependent on the Lindens carrying out further checks to ascertain what is going on.

Feature Request: Hide All Objects Outside Parcel

A request for a parcel control feature which, when set, would cause the viewer to ignore and not render all objects outside of a parcel has been put forward (see BUG-5671) and is drawing strong debate.

On the one hand, people feel this could greatly enhance performance when entering regions that are massively loaded with objects and textures (because the viewer would simply ignore everything outside of the parcel where the option is enabled).

On the other hand, some people feel that such an option could negatively impact the feeling of SL as a place (imagine teleporting around regions and all you see are the objects on the parcel you’re in, the rest is just terrain).

One suggestion put forward is that rather than making this a parcel option (and thus forcing it on anyone entering a parcel where it is enabled), such a capability be, if possible, made a viewer-side option, so that people have a more flexible choice as to what is rendered / what they see. I have to say that this potentially offers the most attractive option, were such a capability to be taken up by LL.

New blog banner – thank you, Loverdag!

I had a lovely surprise on Thursday April 10th. Loverdag, a Second Life photographer I greatly admire, dropped me a Flickr mail with a link to a photo of me she’d taken (in secret!) while we both happened to be visiting Done Wiv A Twist.  The photo simply stunned me and left me with my jaw hanging open – I say this without any vanity on my part – it’s simply gorgeous.

I’d been thinking about revamping the header at the top of this blog for a while, and seeing Loverdag’s photo was simply too good an opportunity to pass up. So, with her permission (and due credit in the banner!) it’s now a part of my new blog banner.

Thank you, again, Loverdag for taking such a marvellous photo of me – and for allowing me to use it like this 🙂 .

Lab seeks to make buying clothes that fit easier … sort-of

The Commerce team have issued a blog post and Knowledge Base article aimed at helping people ensure the clothes they buy will actually fit their avatar.

I’ll be honest and admit that I hadn’t realised there was a particular issue with clothing that needed any clarification; but I’m also biased, in that I’ve been around SL, and reporting on it, long enough that understanding the various clothing types doesn’t actually present me with a problem. However, I can understand a new arrival being confused by terms such as “system clothing” or “clothing layers”, “mesh clothing”, “fitted mesh clothing”, “rigged mesh clothing” and so on, and wondering what the heck it is all about and where the differences lie.

The blog post is aimed at content creators, and is intended to encourage them to define the clothing they produce in terms of three avatar types, and to label their clothing accordingly with icons.

However, to get a clearer understanding of what is being proposed, it is perhaps best to refer to the Knowledge Base article, which provides far more comprehensive information.

Essentially, it has been decided that clothing should be defined in terms of avatar categories. These are defined by the Lab as:

  • Classic – Classic avatars are the original default Second Life avatars.  They have a modifiable humanoid shape, and can wear clothing in the form of textures and attachments added to that shape. Most of a classic avatar’s appearance and clothing can be modified by pressing the Appearance button in the Second Life Viewer, but cannot take advantage of newer graphical features such as normal and specular maps.
  • Standard mesh – A standard mesh avatar is a classic avatar that is wearing a rigged mesh attachment, usually a full-body avatar, and whose classic body is hidden by a full body alpha mask.  It is classified as “standard” if it was created using the standard fitted mesh model available on the Second Life wiki.
  • Custom/branded – A custom avatar is a classic avatar that is hidden by a full body alpha mask and is wearing a customized rigged mesh attachment or attachments that otherwise replace the classic avatar body.  These avatars can come in a wide variety of shapes and sizes, and each model typically requires clothing specifically designed to work with such an avatar.

Hints to help a consumer determine what category of avatar they are using are also provided.

In addition, the Lab is asking that creators define their clothing as one of four types in order to indicate which categories of avatar it is most likely to be compatible with:

  • Classic only – The “layer-based” textured clothing applied directly to classic avatars.  This clothing type only displays properly on classic avatars and is rendered completely invisible by the alpha mask worn by most mesh avatars.
  • Mesh only – An attachment that is designed to appear as clothing on a standard mesh avatar.  It may appear to be a layer-based texture, but does not work properly on classic avatars.  Mesh only clothing must be created outside Second Life in a 3D modeling tool.
  • Classic/Mesh – Attachments primarily designed for standard mesh avatars that can be made to work on a classic avatar.  In order to be classified as classic/mesh, the clothing must include an appropriate alpha mask designed to hide the affected parts of a classic avatar.
  • Branded – A catch-all term meant to encompass the many possible custom avatar designs.  Such avatars can typically only wear clothing specifically designed for that specific avatar; therefore each custom designed avatar and its compatible clothing may be considered a “brand”.  Likewise, clothing designed for a custom avatar shape should not be expected to work properly with classic or standard mesh avatars, or even other custom avatars.

In order to help shoppers find clothing that properly fits their avatars, merchants are additionally being asked to use one of two label images when advertising their clothes, and to update any clothing they have listed on the SL Marketplace so that it is defined by one of the three avatar categories (that is, as being compatible with Classic avatars or Mesh avatars or, in the case of a specific custom avatar, by the avatar’s brand name).

The two logos the Lab are requesting content creators use to denote their clothing are:

images © Depositphotos.com/i3alda

Note these are copyrighted stock images, requiring the use of the label, “© Depositphotos.com/i3alda” with each.

Further details can be obtained directly from the Knowledge Base article, which also includes notes on why custom avatar types should ideally have a unique brand associated with them.

The new definitions do appear to be perhaps as confusing as the current terminology (“system”, “fitted mesh”, etc.); as such, it will be interesting to see the response to this proposal / request, and how well things work in practice.

Talking castAR and High Fidelity

The Silicon Valley VR (SVVR) Meet-up at the end of March featured a series of presentations from people within the VR field, including those by Brian Bruning, VP of Business Development and Marketing at Technical Illusions (castAR) and Philip Rosedale of High Fidelity.

The full video of the presentations is provided below, and I’ve included notes on each of these two presentations in particular. When reading, please be aware that these are notes, and not a full transcript.

Brian Bruning – castAR

Brian Bruning’s presentation commences at the 0:05:48 mark.

Image courtesy of Technical Illusions

I’ve covered the early work on castAR in the past, some of which is touched upon at various points in the presentation, so I don’t want to repeat things here. What is interesting is that the system’s development has been following a similar route to that of the Oculus Rift: Technical Illusions have been out attending technology shows, conferences, exhibitions, etc., to gain visibility for the product, and they ran a successful Kickstarter campaign for castAR which raised $1,052,110 against a $400,000 target.

[07:10] castAR has three modes of operation:

  • Projected augmented reality (AR), which presents a 3D hologram image projected onto a retro-reflective surface in front of you, allowing you to interact with it via a “wand”
  • Augmented reality of a similar nature to that of Google Glass
  • Virtual reality of the kind seen with the Oculus Rift.
castAR projected AR gaming with the castAR wand (image via Technabob)

The emphasis is that the headset is a natural, comfortable-looking pair of glasses with three product capabilities built-in. As a result of the Kickstarter, the company has now grown to 10 people, and the technical specifications for the system have been decided:

Glasses

  • Less than 100 grams in weight
  • Fits over most prescription glasses
  • Ultra flexible micro coax cable
  • Active shutters with 50% duty cycle
Projectors

  • 1280 x 720 resolution per eye
  • 120Hz refresh rate per eye
  • 24 bits of color per pixel
  • 65 degree horizontal field of view
  • 93% fill factor
Tracking System

  • 110 degree FOV
  • 120Hz update rate
  • 8.3ms response time
  • 6 degrees of freedom
  • Absolute positioning
  • Over 200 unique tracking points
  • 0.07mm accuracy at 1.5m
AR & VR Clip-On

  • 90 degree horizontal FOV
  • Very low distortion freeform optics
  • 5mm by 8mm eye box
  • Removable flip-up shutter for AR mode

[11:20] castAR has its roots within the gaming environment and has been developed with the games market in mind (again, as had pretty much been the case with Oculus Rift), although they had recognised the potential for wider applications – they just hadn’t anticipated that someone like Facebook would step into the VR / AR arena and potentially add impetus to the wider applications for VR / AR.

[11:45] One of the benefits seen with a combined approach to VR / AR is that there are situations in work, in education, and in research / medical fields where a completely occluded view of the real world – as required by head-mounted displays (HMDs) such as the Oculus Rift – is simply not appropriate (Mr. Bruning jokes that there are even some activities associated with gaming where an HMD is inappropriate – such as simply trying to eat a snack or take a drink without interrupting the game flow!). In these situations, the projected AR or the “Google Glass-like” AR are seen as more beneficial, hence the drive to address all three modes of operation.

[13:20] Technical Illusions believe that many of the challenges faced by AR and VR content creators are similar in nature – such as dealing with UI issues (both seeing UI elements and interacting with them), or dealing with physical objects which may be placed within a VR / AR scene. As such, Technical Illusions are focused on educating content creators about the needs of immersive / augmented environments, and are producing dev kits to assist them in developing suitable environments / games / activities which take such issues into account.

[14:57] Current planning is for Technical Illusions to have their dev kits and the Kickstarter sets shipped in summer 2014, and to have the consumer version ready to ship by the fourth quarter of 2014; it is indicated that the price-point for consumer kits (glasses, tracking components, retro-reflective surface and input wand) will be “sub-$300”.

The castAR update is an interesting, fast-paced piece, primarily focused on the projected AR capabilities of the glasses. Little or nothing was said regarding the ability of the system to be used as a VR system, and no details were given on the VR clip-on system.

This is apparently a deliberate decision on the part of the company, in that they are allowing VR HMD-focused companies to promote the potential of VR, while Technical Illusions focus on the potential of projected AR capabilities. While an interesting approach to take, I can’t help but feel that (assuming the VR clip-on is at a “feature complete” status) promoting all of castAR’s capabilities would be better, as they would help present the product as a more versatile tool.
