Sansar Profile 6: In the halls of the dwarven king

David Hall’s Dwarven Fortress – a sense of scale can be gained from my avatar, in white, standing on the platform towards the lower left and visible between the columns – click for full size

The sixth Sansar Creator Profile video arrived on Wednesday, August 9th, featuring creator David Hall. A 3D creator “for the past couple of years”, David describes his work as more of a passion than a vocation – although he admits he’s always wanted to be a builder of worlds. As such, he is perhaps representative of the broader audience Linden Lab would like to attract: those not so much interested or invested in the wider aspects of virtual worlds and their multiplicity of opportunities and interactions, but rather those who want to sculpt and create the environments they wish to build, and then share them purely with the people they know or believe will share their passion.

David’s featured experience in the video is very much a reflection of this: a vast Dwarven Fortress, which could feature as an artistic statement, an immersive meeting place, or eventually a role-play environment or similar. However, it is not his only Sansar experience; David has also created Sunrise, which, as the name suggests, captures that first early morning period when birds have started their songs and the sun has just risen above distant hills to cast a soft yellow glow over the world. It’s perhaps not as involved as the Dwarven Fortress, but it is no less immersive, and the sensation of walking through the trees to the look-out point, surrounded by birdsong, is delicious.

Hot air balloons over water – Sunrise by David Hall

The Dwarven Fortress itself is impressive, but again – from an experience consumer perspective – illustrates the issue in opening Sansar’s doors to the general public: there is actually very little to do other than wander around / take photos. While some interaction within experiences is possible (to a greater degree when using VR systems than when operating in Desktop mode), this lack of interactions – whether intentional on the part of the experience creator or as a result of the platform still awaiting the necessary capabilities – will continue to be a source of negative feedback towards Sansar.

For those curious about content creation with Sansar, and the tools available within the platform for object placement, lighting, atmospherics, etc., the video offers some insights, along with the use of external tools for the physical creation of models – in David’s case, Maya and Substance Painter. He provides a concise thumbnail description of the steps involved in creating a scene, whilst the video footage gives those who have not tried the editing tools within Sansar a feel for what is currently available.

Working in Sansar’s Edit mode

What I found interesting in this video is David’s sheer passion for his creativity, coupled with his ability to turn that passion into almost lyrical comments. In doing so, through this video he both touches on Sansar’s potential as a platform for personal creativity and sharing, and on its potential to really spark the imagination in a manner that could become very compelling for many seeking a new creative outlet. The platform is – more so than Second Life and virtual worlds like it – a truly blank 3D canvas without any fixed context of “land”, “water” or “air”, upon which people are almost entirely free to paint their deepest imaginings.

Freed from this larger “world” context, Sansar spaces are, for their creators, potentially far more liberating than any default feeling of geographical rooting – unless that is what is desired. There is simply no need to consider the context of a wider pre-defined “world environment”. Sansar spaces are simply that – spaces to be filled and utilised howsoever the creator wishes, and in any manner which best serves their ultimate intended use.

In the halls of the Dwarven king – David Hall’s Dwarven Fortress in Sansar

Of course, it would be easy to point to the reliance on external tools with which to create; but let’s be honest here. Learning to build well within Second Life, even with prims, is not an easy task – nor is it entirely divorced from requiring tools and skills from outside of the platform (think custom textures here, or materials creation). The skills used in building within SL are acquired and refined over time – which, other than the broader complexity involved in tools like Maya or Blender, is really no different to sitting down and acquiring the skills to use those tools. So the need to harness something like Blender if you wish to make truly unique content for Sansar isn’t necessarily a huge hurdle for those with the desire and passion to be immersively creative.

At just over two minutes in length, this is one of the slightly longer pieces on Sansar, and it packs a lot into it. We’ve still a long way to go before Sansar is offering the kind of environments, capabilities and activities users are accustomed to in SL and elsewhere. But if David is typical of the creators sinking their teeth into the platform, and providing things are built out at a steady rate going forwards and without devastatingly long lead times, it will be interesting to see where Sansar’s growing capabilities might lead people in the coming months.


SL project updates week 32/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, August 10th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. These notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, so the provided time stamps may appear to be out of sequence in places. All time stamps are provided as links which will open the video in a separate browser tab, allowing the discussion to be heard in full.

Note: Due to Vir’s time on vacation, the next official CCUG meeting will be on Thursday, August 31st. Details will be posted on the wiki page.

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
    • Be adjustable using the avatar shape sliders
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents)
  • It will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.
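As a sketch of how such scripted triggering might eventually look, the following LSL fragment uses purely hypothetical function names – the actual commands had not been published at the time of writing, and this assumes only that an animation named “wag_tail” is held in the object’s contents, per the summary above:

```lsl
// Hypothetical sketch only: llStartObjectAnimation() and
// llStopObjectAnimation() are illustrative names, by analogy with the
// existing avatar-focused llStartAnimation()/llStopAnimation() calls.
// Assumes an animation called "wag_tail" is in the object's inventory.
default
{
    touch_start(integer num_detected)
    {
        llStartObjectAnimation("wag_tail"); // hypothetical trigger call
    }

    touch_end(integer num_detected)
    {
        llStopObjectAnimation("wag_tail");  // hypothetical stop call
    }
}
```

Note that, per the summary, the animation plays on the object’s own skeleton rather than on an avatar, so no animation permissions request would be involved.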

Recent Progress

[06:05] Alexa Linden is leading the product side of the animated objects project, and is working on build documentation for the viewer, test plans and related information, etc., for LL’s internal use.

[30:32-31:45] Will it be possible to attach a rigged mesh (e.g. clothing) onto an animated object? If it is rigged mesh, it doesn’t actually need to be an attachment; it can just be specified as a part of the linkset comprising the animated object, and animated against the same skeleton. However, static attachments will not be initially supported with animated objects.

[33:28-34:56] Animesh and sliders:  it’s unlikely that slider support will be implemented for animated objects in the sense that you right-click on an animesh, edit its shape and then adjust the sliders as with an avatar. What would be more likely is to allow body shapes which already contain all the slider settings to be taken and applied to animated objects to give them a desired shape.

This work will likely follow on from the current project and the work with the baking service, as it would require baking service support to work correctly, just as body shapes for avatars are currently supported through the baking service.

[36:12-36:58 and 37:44-38:10] There is no time frame on when the viewer will appear, but the Lab wants to build on Bento’s experience: get a test viewer out, gain feedback and suggestions, and then improve on it. This doesn’t mean everything people would like to see associated with animated mesh will reach the viewer – at least not in a single release – but the emphasis is very much on collaborative efforts to develop the capability. Internal testing of the viewer has revealed a couple more things which need to be tackled before it is made more generally available (and, of course, test regions need to be established on Aditi).

[57:16-58:05] Performance impact with animated objects won’t really be understood until more widespread testing begins with a public project viewer. There will be some limitations placed on animesh intended to help reduce any negative impact (e.g. render cost, land impact, maximum number allowed in a region, etc.), but these are all still TBD at this point in time.

Rendering Cost Calculations

[07:29 – 08:25] Related to the above (but not confined to animesh) and as has been previously noted in several of my SL project updates, the Lab is re-visiting how the rendering cost calculations are handled within Second Life, and Vir has most recently been involved in this work. The aim is to make the calculations a lot more reliable and accurate when establishing the render cost of objects, and thus possibly encourage people to make more efficient content. This work will involve both internal testing by the Lab and “external” testing involving users.

Project EEP (Environment Enhancement Project)

Project Summary

To enhance windlight environment settings and capabilities, including: making environment settings an inventory asset (so they can be sold / bought / swapped); the ability to define the environment (sky, sun, moon, clouds) at the parcel level; LSL scripted support for experience environments / per agent; extended day settings (e.g. having a 24-hour day for a region and 7-day cycles) and extended environmental parameters (possibly including Godrays, distance fog, etc).

[12:44-13:50] Rider has been busy with other projects since the work was first announced, but will hopefully provide updates when the work resumes.

[44:15-46:32] Further summary of the work by Rider.

Bakes On Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures. This may lead to a reduction in the complexity of mesh avatar bodies and heads.

Recent Progress

[22:33-23:12] Work is progressing. The updates to the baking service to support 1024×1024 textures are currently in internal testing by the Lab using at least one of the development grids. It’s in a “pre-Aditi” (Beta grid) state, but will hopefully be moving forward soon.

Other Items

Note that some of the following are the subject of more extensive commentary in local chat.

[11:46-12:40] Adjustable walk / run speeds: (see: feature request BUG-7006 for example) nothing happening on this “immediately”. The JIRA has been pulled in by the Lab as it may tie in with some work being considered for animation playback. However, things are unlikely to be looked at until the next round of animation updates, which will include supplemental animations. The specs for this work have yet to be fully determined.

Alexa Linden: Product Manager for Animated Objects (Animesh)

[16:29-17:00] Increasing script memory limits: not currently on the roadmap.

[18:00-19:35 and 21:36-22:15] Development kits for the default mesh avatars: in short, nothing planned on the Lab’s part at present. There are, of course, various models and samples available through the Bento wiki pages which might be useful as teaching tools.

[23:14-29:56] Adding further bones to the avatar skeleton for clothing, etc. / custom skeletons: adding further bones to the avatar skeleton is unlikely. As it is, the additional Bento bones – if carefully used – can be re-purposed for a wide variety of uses beyond their default names, including in clothing, etc., although custom animations will be required as well. However, this can – within limits – allow creators to build semi-customised skeletons.

A particular consideration with custom skeletons is the issue of compatibility between different objects wanting different skeletons: it makes it much harder to ensure different avatar parts work together smoothly (e.g. a pair of wings from one avatar working with the quadruped body of another).

[25:33-26:02] Near-term roadmap: the current near-term roadmap for content creation features is: animated objects (animesh), then bakes on mesh, then a follow-on to allow bakes on mesh to be used on animesh objects together with some additional features, in order to enable more NPC-like character creation.

[49:44-50:34] Dynamic mirrors: (see STORM-2055) these continue to be periodically raised at meetings, but the Lab remains disinclined to implement anything on the grounds of performance impact, particularly as dynamic reflective surfaces would, in all likelihood, be used indiscriminately by many.

[50:45-51:41 and 53:24-54:54] Terrain texture resolution and adding materials to terrain: SL terrain textures suffer from having a relatively large pixel size / low pixel density, resulting in terrain looking blurred. This can be exacerbated when default terrain is mixed with mesh terrain, where the latter can use the same textures and benefit from the use of materials. Currently, there is nothing on the SL roadmap for making changes to SL terrain textures. The pixel size / density issue is seen as a non-trivial fix, given the impact it would have on terrain as a whole and how it may affect those using custom textures on their land.

[59:10-1:01:20] Lab-provided building learning centres: the question was raised about the Lab providing more in-world locations where people could learn about building in SL (“building islands”). There are already a good number of user-provided areas in SL; however, the idea here is to provide more of a self-teach facility (think the Ivory Tower of Prims) rather than one which relies on classroom-based teaching, and which includes best practices, access to test models, etc. Alexa said she’d run the idea past the LDPW team.