The Wall Street Journal WSJ.D Live conference has just wrapped up for 2016, having taken place in Laguna Beach, California.
Attending the event, Linden Lab's CEO demonstrated using a VR headset and controllers within a Sansar scene, showing how the controllers can be used to manipulate objects. The video is available on the WSJ YouTube channel, and I've embedded it at the end of this article. The Sansar scene itself is relatively simple, and the aim appears to be to show how reasonably easy it is to move content around when defining a space, rather than to offer any in-depth look at the fidelity of the platform's graphics.
As we know, the actual editing environment in Sansar is quite separate from the run-time environment. While the latter doesn't permit "in-world" building, it has been indicated that users will be able to move content around within it – thus allowing them to personalise spaces and arrange scenes. Given the overall context of the presentation – which also includes a look at Sansar avatars – I'm assuming this presentation was using the run-time environment, rather than the editing environment.
The video includes a brief shot of the in-world controller / menu, but the motion is such that determining anything of import from it is difficult.
Angel investor Benjamin Rohé was at the presentation, and tweeted a short video of Sansar avatars. As we know from Lab Chat sessions, these are liable to go through further development as Sansar progresses, so it's hard to judge how close they are to the look that will greet those stepping through the doors when Sansar allows public admission from early 2017, but I'm guessing it's not too far off base. What will be interesting is to see just how customisable they will become.
Beyond the look, nothing really new is said about the platform – the number of users engaged with it through the closed alpha and the Creator Preview has reached a "few hundred", and the public release is still anticipated in terms of Q1 2017.
The video leaves a lot of unanswered questions – how are tasks like walking and running handled, for example, when using Sansar via an HMD? Will it be point-and-hop, which others will see as a fluid walking motion (remember that they'll be seeing your avatar, whereas you won't)? Will those entering Sansar without VR headsets, etc., be able to see their own avatar in third-person, as we're accustomed to doing in Second Life (and which is actually part of the attraction of spaces like SL)? And more besides. So judging the platform on the strength of clips like this might not be entirely fair.
But it does add to the list of questions for the next set of Lab Chats!