2018 Sansar Product Meetings week #34: permissions system

No Spectators: The Art of Burning Man – Truth is Beauty, by Marco Cochrane

The following notes are taken from the Sansar Product Meeting held on Thursday, August 23rd. These Product Meetings are open to anyone to attend, and are a mix of voice (primarily) and text chat. Dates and times are currently floating, so check the Sansar Atlas events section each week.

Attending this meeting were Eliot, the Sansar Community Manager, with Bagman Linden (Linden Lab’s Chief Technology Officer, Jeff Petersen), Birdman, SeanT and Nyx Linden.

Edit Mode Move to Server-Side

  • Edit mode became server-centric (rather than client-local) on Wednesday, August 22nd.
  • The one drawback to this move is that when editing a scene for the first time, there will be a delay in accessing Edit mode while the server is spun up and the scene loads.
  • The move paves the way for the introduction of the new licensing / permissions / supply chain system.
  • It will also in time allow for things like creators being able to work collaboratively within the same scene.
    • This is indicated as being “pretty far down the line”, and unlikely to appear before 2019.

Licensing / Permissions / Supply Chain System Deployment

It had been intimated in previous meetings that the licensing / permissions / supply chain system could start to be deployed in the September release. However, Nyx Linden was a little more cautious in addressing when the deployment might occur.

  • The Lab is still working on some bug fixes and wishes to ensure the first stage of deployment is smooth and successful.
  • Due to the way in which things are interlinked, the deployment would be pretty much all of the core supply chain / licensing / permissions system, although further extensions to the capability may be added in the future.
  • The system will include a “Save to Inventory” option.
    • This will initially only allow for objects to be “added to” – so an object can have a script or lighting capabilities added to it and then saved back to inventory. It will not initially allow for disparate objects (e.g. walls and floor and roof) to be combined into a single object.
    • It will however allow creators to use licensed components (e.g. scripts, sounds, lights) from other creators in their own items, and then sell those items, with the component creators also receiving payment.
    • The ability to combine disparate objects into a single unit will be added over time.
  • All items sold through the Sansar Store prior to the permissions system deployment will be set to “no resale” to prevent them being wrongly re-used / re-sold.
  • Items uploaded after the system is deployed will always be available for re-use in other people’s objects.
    • However, the original creator will have the ability to set whether or not their creations can be re-sold. So, if an item flagged as “not for re-sale” is used as a component in another creator’s object, that creator will not be able to place the combined object for sale (see the sketch after this list).
  • Clothing and avatar accessories will not be included in the initial permissions system deployment.
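
Linden Lab has not detailed how the re-sale flag will be implemented; the following is a minimal conceptual sketch (in Python, with entirely hypothetical names) of the behaviour described above: a composed object can only be listed for sale if every licensed component permits re-sale, and component creators share in the proceeds.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Component:
    """A licensed creation (script, sound, light, mesh, etc.) - hypothetical model."""
    creator: str
    resale_allowed: bool = True   # set by the original creator at upload

@dataclass
class ComposedObject:
    """An object assembled from licensed components via 'Save to Inventory'."""
    owner: str
    components: List[Component] = field(default_factory=list)

    def can_be_listed_for_sale(self) -> bool:
        # If any component is flagged "not for re-sale", another creator
        # cannot place the combined object on the Store.
        return all(c.resale_allowed for c in self.components)

    def revenue_recipients(self) -> List[str]:
        # Component creators also receive payment when the item sells.
        return sorted({c.creator for c in self.components})

# Example: a chair using a licensed script whose creator blocked re-sale
chair = ComposedObject(owner="CreatorB", components=[
    Component(creator="CreatorB"),
    Component(creator="CreatorA", resale_allowed=False),  # licensed script
])
print(chair.can_be_listed_for_sale())   # False - blocked by CreatorA's flag
print(chair.revenue_recipients())       # ['CreatorA', 'CreatorB']
```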

Experience Loading

  • The Lab is continuing to look at experience loading and when to “drop” the loading screen and allow people to start moving around within an experience.
  • One option being considered is to have everything within immediate viewing range of a spawn point cached by the client prior to dropping the loading screen, then having lower-resolution textures on the faces of more distant objects or those that are initially “out of sight”, which are then progressively swapped out for high-resolution textures during the first one or two minutes the user is in the experience (a conceptual sketch follows this list).
  • This would allow access to experiences to be somewhat quicker for some, although it would mean that spawning followed by immediate rapid teleporting might result in seeing some of the lower-grade textures prior to them being swapped out.
    • However, the experience wouldn’t be like SL, where actual objects are still being rendered (resulting in an avatar walking into a wall that has yet to render in their view); all of the physical objects would be visible in Sansar, some might just briefly have lower-quality textures.
  • Sansar cache sizes: Sansar uses a 10 GB “large” cache and a 10 GB “small” cache for smaller files. Both of these will be user-configurable in the future.
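
To illustrate the loading approach described above, here is a small conceptual sketch (Python, hypothetical names and values, not Sansar’s actual loader): objects within an assumed “immediate viewing range” of the spawn point get full-resolution textures before the loading screen drops, while everything else starts at low resolution and is upgraded nearest-first afterwards.

```python
# Hypothetical texture-streaming sketch of the strategy described above.
SPAWN_RADIUS_M = 30.0          # assumed "immediate viewing range"

def initial_resolution(distance_from_spawn_m: float) -> str:
    """Texture resolution used before the loading screen is dropped."""
    return "full" if distance_from_spawn_m <= SPAWN_RADIUS_M else "low"

def upgrade_order(objects: dict) -> list:
    """After spawn, swap low-res textures for high-res, nearest objects first."""
    pending = [name for name, dist in objects.items()
               if initial_resolution(dist) == "low"]
    return sorted(pending, key=lambda name: objects[name])

scene = {"spawn_plaza": 5.0, "fountain": 25.0, "maze_wall": 90.0, "far_tower": 180.0}
print({name: initial_resolution(d) for name, d in scene.items()})
print(upgrade_order(scene))    # ['maze_wall', 'far_tower'] swapped in over time
```

Someone teleporting away immediately after spawning would therefore briefly see the low-resolution textures on objects that had not yet been upgraded, which matches the trade-off noted above.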

Disabling Capabilities in Run-Time

  • Some types of experience would benefit from having certain capabilities, such as free camera movement or teleporting, disabled at run-time (e.g. blocking the ability for someone to avoid traps in a game by teleporting past them, or using the free cam to cheat their way around a maze).
  • Bagman confirmed the back-end technology, as it stands, doesn’t allow for this, but it is something the Lab is aware of, and something they do want to address and make possible. However, it is not as high on the priority list right now as some other aspects of interactivity and options for creators the Lab wants to add to the platform.

VR Avatar Options

The Lab is working on “switching on” the avatar in VR, including the ability to see your hands / body in first-person VR (one would hope this is also extended to Desktop mode as well), and the ability to use hand gestures (e.g. give a thumbs-up, clench a fist, etc.) through the VR controllers.

In Brief

  • Avatar collisions: The R25 update should include the ability to disable the avatar collision capsule, making it possible for other avatars to come as close to you as possible (and even pass through your own).
  • Finding people within an experience: this has been previously discussed, and the Lab is looking at options (e.g. “teleport to friends”, a teleport request option, or the ability to be directly teleported to friends on accessing an experience, etc.).
  • Voice indicator: another long-standing request – a means to more easily identify who is speaking on Voice – also being looked at.
  • Object hand-off: the ability to directly pass an object from one avatar to another is also being looked at by the Lab.
  • VR Options:
    • VR Look Book: this will be coming “soon” to Sansar, allowing VR users to change outfits and swap avatars without having to come out of VR.
    • Tactile feedback: a small vibration is being added to VR hand controllers when picking up or dropping objects.
    • Ability to change the client settings from within VR: this isn’t currently being looked at, but is seen as perhaps needing to be moved up the priorities list.
    • Text chat in VR: is seen as requiring a more technical solution – a virtual keyboard, etc., – although it is on the UI team’s radar.
  • Server crash: There are occasions when an experience server can crash, leaving the local “instance” of the scene running on the client. When this happens, the user has no idea the server has crashed – and nor, initially, does the client. As there can be latency and other network delays between server and client, the Sansar client has a very long time-out while waiting for updates (around 90 seconds; see the sketch below). During this time, the only indicator that something has happened is that other avatars in the experience will not move or respond to voice / chat. Bagman has indicated the Lab will see if there is anything that can be done to make such crashes clearer to the user when they occur, rather than just waiting on the time-out.
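
As a purely illustrative sketch of the time-out behaviour described in the last bullet (hypothetical names, not Sansar’s actual client code), a client-side watchdog of this kind only declares the server lost after a long silence:

```python
import time

# Illustrative watchdog: the client keeps running its local copy of the scene
# and only concludes the server is gone after a long quiet period (~90 s in
# Sansar's case, to tolerate ordinary latency and network hiccups).
SERVER_TIMEOUT_S = 90.0

class ServerWatchdog:
    def __init__(self):
        self.last_update = time.monotonic()

    def on_server_update(self):
        # Called whenever any state update arrives from the server.
        self.last_update = time.monotonic()

    def server_presumed_down(self) -> bool:
        # Until this returns True, the only visible symptom is that other
        # avatars stop moving and stop responding to voice / chat.
        return time.monotonic() - self.last_update > SERVER_TIMEOUT_S
```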


Abstract and surreal in Second Life

La Maison d’Aneli: Cullum Writer

Now open at La Maison d’Aneli Gallery, curated by Aneli Abeyante, and located in the gallery’s sky exhibition space, is a series of exhibitions which – with one exception – might be described as exercises in the surreal and the abstract, mixed with a little geometry. The artists sharing the space are Cullum Writer, JudiLynn India, Senka Beck and 9Volt Borkotron, and Aneli Abeyante herself. Four of these artists are exhibited on the upper level of the gallery space, and one on the lower level, where she shares the space with Megan Prumier, who completes the current set of artists.

“My work is entirely intuitive,” JudiLynn says of her paintings. “I get lost in the layering of texture and colour. My work embodies my spirit and personality [and] my goal is to allow you to experience the image with your own mind’s eye.” The result of this approach is a set of highly individual paintings, rich in colour, abstract – sometimes surreal – in nature, which are by turns wonderfully primal and, despite their abstract nature, very natural.

La Maison d’Aneli: JudiLynn India

This latter aspect is through the layering of colour to which JudiLynn refers, and the colours themselves, offering a rich foundation of what might be called earth colours – greens, blues, browns – which are overlaid and blended with bright, vibrant yellows, oranges, reds, golds and more, to create images that can be so richly interpreted by the imagination.

Facing JudiLynn’s exhibit is that by Aneli Abeyante. Fully embracing geometry in their form and motion (most of the pieces are animated), these are glorious pieces of modern abstract art, mesmerising in form and movement. A small display, true – just seven pieces; but one not to be missed.

Maison d’Aneli: Aneli Abeyante

Between the two, and to one side, is Detoxomania, an immersive 3D art piece of abstract form by Senka Beck and 9Volt Borkotron. In terms of colour, this is again a primal installation in many respects, the colours and motion within it intended to elicit an emotional response. It’s also ethereally tactile: moving (or camming) through it, it is as if the various elements can be felt as one passes them.

“It isn’t about substance abuse,” Senka explains of the piece, which might be seen as a surreal landscape, “but about the mania of interpreting our lives in terms of toxicity. Toxic people, toxic relationships, toxic environments, toxic thoughts … Please enter, reflect and detoxify if you may.” To aid those wishing to do so are places within the installation to sit and contemplate.

Maison d’Aneli: Senka Beck and 9Volt Borkotron

Born in Porto Alegre, Brazil, Cullum Writer found her artistic inspiration through Second Life. From in-world snapshots, her expressionism has grown to encompass fractals, collages, and digital art with a defined geometric foundation. She presents some 14 pieces at La Maison d’Aneli on the lower floor of the exhibition space. All of them are abstract in nature and exceptional at capturing the eye. Some appear to be traditional paintings in form, others more digital in origin, with a stylistic flow from left to right as you face her display area.

Also on the lower level, and standing quite apart from the more abstract exhibitions, is a small monochrome exhibition of Megan Prumier’s always evocative avatar studies.

La Maison d’Aneli: Megan Prumier

Overall, an interesting, eclectic selection of art across five exhibitions.

SLurl Details

2018 SL UG updates #34/2: CCUG summary with audio

“That’s no moon…” – Rider Linden teases with possibilities whilst talking Environmental Enhancement Project (EEP). Credit: Rider Linden

The following notes are taken from the Content Creation User Group (CCUG) meeting, held on Thursday, August 23rd, 2018 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc., are usually available on the Content Creation User Group wiki page.

The choppiness in some of the audio segments where Vir’s voice drops out is due to issues with SL Voice.

Animesh

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation. It involves both viewer and server-side changes.

Resources

Release Candidate Viewer

As noted in my Simulator User Group updates and Viewer Release summaries, the Animesh viewer was promoted to release candidate status in week #33 with the release of version 6.0.0.518579 on August 13th, 2018. The three key points of this are that:

  • Animesh is one step away from being the de facto viewer release – although this is dependent on a range of factors, including the other RC viewer in the pipeline, the viewer’s performance whilst at RC, etc.
  • Animesh capabilities will now naturally be available more widely among users of the official viewer.
  • Third-party viewers are now officially allowed to incorporate Animesh in their offerings.

Server-Side

Animesh has been on general release server-side across the grid for the last couple of weeks, and thus far, no problems have surfaced with it. However, this could be due to relatively low usage of the capability up until now, given it has only been available within project viewers.

Sample Content Rotation

As noted in past CCUG summaries and updates in these pages, a workaround for a rotation issue with uploaded models now means that some of the Animesh sample content provided by the Lab – such as the raptors used in the GIF file I often use to banner these reports – now crab sideways rather than walking forward when seen in the Animesh viewer. Following a suggestion put forward at the meeting, Vir is going to see if the affected content can be updated so that it moves correctly and can be provided as samples for people wanting to play with Animesh, without potentially confusing them.

It’s also been pointed out that the scripts used within the sample content are very specific to that content, and so could lead to problems if people try to pull the scripts and use them as templates. However, Vir is uncertain as to how generic the scripts can be made in order for them to work as templates.

Bugs

There are still a number of bugs the Lab is working on in relation to Animesh. These should be listed in the JIRA filter (link above), and are related to some Animesh editing issues, animation stop / restart issues, and some lagging, possibly due to the underlying skeleton lagging behind the animation playback (so the Animesh isn’t properly oriented as an animation starts to play, for example). Not all of these issues may be fixed before Animesh reaches de facto release status, but Vir is continuing to try to work through them.

Environment Enhancement Project (EEP)

Project Summary

A set of environmental enhancements, including:

  • The ability for region / parcel owners to define the environment (sky, sun, moon, clouds, water settings) at the parcel level.
  • New environment asset types (Sky, Water, Day) that can be stored in inventory and traded through the Marketplace / exchanged with others.
    • Day assets can include four Sky “tracks” defined by height: ground level (which includes altitudes up to 1,000m) and (optionally) 1,000m and above, 2,000m and above, and 3,000m and above, plus a Water “track” (see the sketch after this list).
  • Experience-based environment functions
  • An extended day cycle (e.g. a 24/7 cycle) and extended environmental parameters.
  • There are no EEP parameters for manipulating the SL wind.
  • EEP will also include some rendering enhancements and new shaders (being developed by Graham Linden), which will allow for effects such as crepuscular rays (“God rays”).
    • These will be an atmospheric effect, not any kind of object or asset or XML handler.
  • The new LSL functions for finding the time of day according to the position of the windlight Sun or Moon have been completed, and are more accurate than the current options.
  • EEP will not include things like rain or snow.
  • It will still be possible to set windlight local to your own viewer.
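
As a purely conceptual illustration of the altitude-banded Sky tracks described above (hypothetical names in Python, not the actual viewer logic), selecting the active track for a given camera altitude might look something like this, with undefined optional tracks falling back to the next track below:

```python
from typing import Optional

def active_sky_track(altitude_m: float,
                     tracks: "dict[int, Optional[str]]") -> str:
    """tracks maps each band's base altitude (0, 1000, 2000, 3000) to a Sky
    setting name, or None if that optional track was not defined."""
    chosen = tracks[0]                      # ground track is always present
    for base in (1000, 2000, 3000):
        if altitude_m >= base and tracks.get(base):
            chosen = tracks[base]
    return chosen

# Example Day asset with only the ground and 2,000 m+ Sky tracks defined
day = {0: "Coastal Sunrise", 1000: None, 2000: "Thin Cloud", 3000: None}
print(active_sky_track(500, day))    # Coastal Sunrise
print(active_sky_track(2500, day))   # Thin Cloud
print(active_sky_track(3500, day))   # Thin Cloud (3,000 m track not defined)
```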

Resources

Current Status

Alexa Linden from the product team has been working on internal testing with Rider, and is now in the process of moving the pieces into place to have EEP deployed to Aditi (the beta grid). Rider believes that a project viewer should be surfacing “very, very soon” as a result. The plans for the project viewer are:

    • Get the asset support into the first release.
    • Hopefully include crepuscular rays (“God rays”) and work on shaders for distance fogging and atmospheric density in the first release of the project viewer as well. However, as these are in fact another shader, they may not make it into the initial cut of the project viewer, depending on how things go.
    • Add the scripted ability to manipulate EEP assets at a later date, but before the viewer progresses to RC status.

As well as discussing project progress, Alexa and Rider also indicated some of the things they’ve been playing around with while testing EEP.

Cthulhu the sun, Cthulhu the sun / And I say it’s all right – as the Beatles never sang: Rider Linden has fun with EEP. Credit: Rider Linden

Bakes On Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures, and may in time lead to a reduction in the complexity of mesh avatar bodies and heads.

This work does not include normal or specular map support, as these are not part of the existing Bake Service.

Resources

Current Status

Still awaiting the AIS updates (see below). Once these are deployed, there are a couple of other back-end pieces that need to be put into place.

AIS Update

Both EEP and Bakes on Mesh are awaiting an AIS (Advanced Inventory System) update that will provide the necessary back-end support for both projects on Agni (the Main grid), as each is adding new inventory asset types to Second Life. Currently, it is believed that the work on support for Bakes on Mesh is a little further along, allowing it to take advantage of the AIS update, which may start being deployed in week #35 (commencing Monday, August 27th, 2018).

In Brief

  • In-world meshes bigger than 64m on an axis: there are no current plans to allow the upload of in-world meshes that exceed the 64m limit. The main reasons for this are:
    • Vertex resolution: the larger the object, the greater the distance each vertex integer has to span, leading to inaccuracies in positioning (think prim drift for mesh), together with resolution degradation (see the worked example after this list).
    • More particularly, the griefing potential and possible performance issues (e.g. rendering load on the viewer and possible implications for the physics engine).
  • Sim surrounds: partially in line with the above (and also ARCTan, below, given the use of sculpts), the Lab is interested in learning more about how people might like sim surrounds to be addressed. Some ideas offered include: making surrounds a specific form of large mesh with physics automatically disabled at upload (possibly used with a special field within the Estate / Region floater they can be “dropped into” to be applied); or continuing with sculpts (with their 1 LI advantage over other potential land impact loads, and their potential to have a relatively low number of vertices and thus form relatively economical content).
  • Project ARCTan: the project to recalculate object and avatar complexity will be progressing; however, it requires input from Graham Linden, who is currently focused on the shader / rendering work for EEP. In the meantime, data collection is continuing.
    • People are starting to question what this will mean for sculpt(ie)s in SL – the majority of which tend to be horribly inefficient in terms of rendering. The short answer to this at the moment is the overall impact hasn’t been determined.
  • Real-time bounding box tracking: the question was asked whether the new bounding box calculations introduced as a part of Animesh will include “unused” joints (e.g. if the hind legs in the skeleton are not rigged to, are they included in the bounding box calculations?). The short answer is no, with the exception of the original (pre-Bento) base skeleton joints (as used by the system avatar).
  • Next Meeting:  Thursday, August 30th, 2018.
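
To put a rough number on the vertex-resolution point above, here is a back-of-the-envelope sketch which assumes (for illustration only) that vertex positions are quantised to 16-bit integers across each axis of a mesh’s bounding box; the per-axis step size grows with the span, so a larger object has coarser vertex placement.

```python
# Illustration only: assumes 16-bit quantisation of vertex positions per axis
# of the bounding box, so positional resolution per axis is span / 65535.

def vertex_resolution_mm(span_m: float, bits: int = 16) -> float:
    steps = (1 << bits) - 1
    return span_m / steps * 1000.0

for span in (10, 64, 256):
    print(f"{span:>4} m span -> ~{vertex_resolution_mm(span):.2f} mm per step")
# 64 m span  -> ~0.98 mm per step; 256 m span -> ~3.91 mm per step, i.e. a 4x
# larger object places vertices 4x more coarsely - hence the "prim drift"-like
# positioning errors and resolution degradation mentioned above.
```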