On Thursday, July 6th, Linden Lab released a further Sansar preview video, this one focusing on the work of content creator Ria; I've embedded it at the end of this article.
Given we're now not that far away from the doors to Sansar opening to a wider audience in the "creator beta" (or whatever the Lab finally calls it), the video can be seen as a further ratcheting-up of things – alongside recent media articles – ready for the opening. At the same time, the past week has seen a further batch of invites into the Creator Preview find their way to those who have applied to access Sansar.
At 99 seconds in length, the video is an engaging enough piece, showing Ria's experience from both within and without. Her creation takes the form of an immersive story involving a little girl and her toys, utilising three locations linked by teleports. Kudos to Drax for presenting a means of suggesting the potential of VR immersion by overlaying images from within the experience with shots of Ria looking around her creation while using an HMD. It may not be as immersive as "the real thing", but it's a lot better than the intercut views of heads with HMDs strapped to them bobbing and weaving in front of computer screens we've seen in the past.
Those looking for details on Sansar are going to be disappointed, however – this is a promotional video, after all. That said, there are some interesting shots of the edit environment and what appears to be the fully realised run-time space. Again, given it is a promo video, reading too much into what "is" or "isn't" said would be a mistake.
Some have found a couple of statements in the video objectionable. The first is the idea that "there is nothing even remotely like Sansar out there" – and I admit to finding it questionable myself. While it may not be as deeply immersive as a "true" VR experience, the fact remains that SL offers pretty much everything Sansar promises, and has done for a good while now; the fact that it doesn't support headsets doesn't change that. And in terms of VR, there is High Fidelity to consider as well…
The second is that Sansar will achieve “broad appeal” when launched. This has been pooh-poohed on the basis that VR itself has yet to achieve a significant market share. However, “broad appeal” needn’t necessarily mean “mass market” – and the two seem to be getting conflated.
I personally don't think VR (and by extension Sansar) will be "mass market". However, as I've oft said, there are markets where VR could have a significant role, and Sansar could be ideally positioned to leverage them. Design, architecture, training, simulation, education, healthcare, for example; plus, as friend and content creator Dassni pointed out to me in a lengthy conversation, it might even appeal to indie game / game modding enthusiasts. Taken together, these could facilitate the kind of "broad appeal" needed for Sansar to generate a comfortable level of revenue for the Lab – in time.
How much time? Well, therein lies the rub. Sansar itself is going to need a lot more development work once the gates open to a wider audience, and even among the markets already looking at VR, the preference might be to wait until headsets have improved in capability and looks and come down in price – something which could be around 2-3 years away.
The following notes are taken from the Content Creation User Group meeting, held on Thursday, July 6th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.
Audio extracts are provided where relevant. Note that this article is organised (as far as possible) by topic, and does not necessarily reflect the chronological order in which items were discussed. Medhue was a little late to the meeting, and so missed the first 15 minutes. However, his video is embedded at the end of this article, and time stamps to it, where applicable, are provided and will open the video at that point in a separate browser tab for ease of reference.
New Starter Avatars
The Lab issued new starter avatars on Wednesday, July 5th. Six out of the eight classic avatars utilised Bento extensions for rideable horses or wings. See my write-up on them for more.
Work is continuing on trying to get linksets to work correctly. This is focused on ensuring there is sufficient back-end code to correctly handle multiple animation requests from different elements within an animated object.
Some general questions related to animated mesh were asked at various points in the meeting, these are addressed below.
Will animated objects use the Bento skeleton – yes.
[5:07] Will animated mesh allow the return of mesh UUID flipping (removed due to the ability being badly abused) – very unlikely.
[6:12] Where will animations for animated objects be stored? Within the object (or elements of the object) itself, and called via the object's own scripts – much as scripted attachments on avatars are handled.
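For comparison, this is roughly how a scripted attachment plays an animation held in its own inventory today – presumably animated objects would follow a similar pattern, minus the avatar permission step. A minimal sketch; the animation name "walk-cycle" is hypothetical:

```lsl
// Sketch: a scripted attachment playing an animation from its own
// inventory. "walk-cycle" is a hypothetical animation name.
default
{
    attach(key id)
    {
        if (id != NULL_KEY)
        {
            // Ask the wearer for permission to animate their avatar
            llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
        }
    }

    run_time_permissions(integer perms)
    {
        if (perms & PERMISSION_TRIGGER_ANIMATION)
        {
            llStartAnimation("walk-cycle");
        }
    }
}
```

An animated object driving its own skeleton would, per the discussion, skip the permissions round-trip, since it is animating itself rather than an avatar.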
[7:15] Will animated objects use an AO? Not in the sense of an avatar AO, as animated objects will not make use of the basic system animations / locomotion graph. There was some debate over the effectiveness of using the AO system, although it was pointed out it could make it easier when having pets following you, running when you run, etc. One suggestion was that pathfinding might be adopted to act as a pseudo-AO.
[29:02] There is still no date on when an animated objects project viewer will be available.
Attaching Avatars and Animated Objects To One Another
There is obviously considerable interest in enabling avatars and animated objects to attach to one another. For example, being able to walk up to a free-roaming horse and then mount it and ride it, or having a pet running around on the ground you could "pick up" and have it sit on your shoulder, move between your shoulders, look around, lie around your neck, etc.
Achieving this raises numerous issues: how should two skeletal objects attach to one another, how are the independent animation sets handled, how are they kept in sync, and how is the hierarchy managed (which is the parent, which is the child), and so on.
Some options have been suggested for allowing avatars to attach to animated objects – such as by having a specific "sit bone" which could be targeted and then used as an anchor point to help maintain some semblance of synchronisation between the animated object and the avatar's own animations. Feature request BUG-100864 offers a similar suggestion, utilising a scripted approach. Vir has suggested that this feature request perhaps be used as the basis for further discussion, and welcomes JIRAs on alternative approaches.
“First Pass” at Animated Objects
[09:59] Vir reminded people that the current work is only a first pass at animated objects, designed to provide basic, usable functionality. Providing more NPC-like capabilities: animated objects with locomotion graphs and using the system animations; attaching animated objects to avatars / avatars to animated objects; giving animated objects the notion of an inventory and wearables, etc., are all seen as potential follow-up projects building on the initial capability, rather than attempting to do everything at once.
Caching / Pre-loading Animations
Sounds and animations can suffer a noticeable delay on a first-time play if they have to be fetched directly at the time they're needed. For sounds, this can be avoided by using LSL to pre-cache them (e.g. using llPreloadSound) so they are ready for the viewer to play when needed, but there is no similar capability for animations.
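The sound pre-caching mentioned above looks something like this in practice – a minimal sketch, with "door-chime" standing in for a sound held in the prim's inventory:

```lsl
// Sketch: pre-loading a sound so the first llPlaySound() call doesn't
// suffer a fetch delay. "door-chime" is a hypothetical sound name.
default
{
    state_entry()
    {
        // Asks viewers in range to fetch and cache the sound now
        llPreloadSound("door-chime");
    }

    touch_start(integer total_number)
    {
        // By the time someone clicks, the sound should already be cached
        llPlaySound("door-chime", 1.0);
    }
}
```

The feature request discussed here (BUG-7854) is essentially asking for an equivalent of that state_entry step for animations.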
A feature request (BUG-7854) was put forward at the end of December 2015, but has not moved beyond Triage. Vir's view is that pre-loading animations in a manner similar to sounds makes sense, should be relatively straightforward, and could help with syncing animations in general. However, whether or not it might / could be done within the animated objects project is TBD.
Sample Code and Code Libraries
[11:39-27:45] Medhue Simoni opens a discussion on code availability, noting that Pathfinding had suites of example code which appear to have vanished, and suggesting that if the Lab provided more complex examples of how new capabilities could be used, and made them available to everyone, it could help creators leverage such capabilities more effectively.
From this came ideas of open-sourcing more of the Lab’s own code for experiences (like Linden Realms), the potential for abuse this could present (people developing cheats for games), the complexities (or otherwise) of LSL coding, the fact that often when the Lab develops something, they’re not aware of exactly what directions creators will take it, and so masses of example code might be of limited value, etc., – although code demonstrating how to do specific things would undoubtedly be of benefit.
Vir points out that the Lab's resources for coding are finite, and that an alternative might be a more recognised open-source repository through which documented code and examples could be stored, referenced and obtained. There are already libraries and resources on the SL wiki, but these aren't necessarily easy to navigate. There is also the LSL wiki – although this may be in need of an update – as well as resources on a number of forums.
[25:47] Within this conversation, the question was asked if the 64Kb limit on scripts could be raised, and the short answer – as Vir doesn't deal directly with the scripting side of things – is: unknown.
[29:56-end] This conversation then spins out into the technical limitations of Second Life (CPU core usage, etc.) when compared to other platforms, as seen by one creator. Some of the broader comments in voice and text seem predicated on misunderstandings (e.g. the Lab is investing in newer hardware where possible, but is hamstrung by a need to ensure backwards compatibility with existing content, which sometimes limits just what can be done; or the idea that the new starter avatars are No Mod – they're fully mod, etc.). It also touches on the basic need for education on content creation (e.g. responsible texture sizing and use), before spinning out into general concerns on overall security for content in SL.