2022 week #42: CCUG meeting summary: PBR Materials

Storybook, August 2022 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, October 20th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

PBR: Materials and Reflections

  • Please also see previous CCUG meeting summaries for further background on this project.
  • Work is now entering a “public alpha” mode which will commence with the release of a formal project viewer, in order to put the viewer before a wider group of creators and obtain their input / feedback in addition to that gained during the “closed alpha” with those on the Discord channel.
    • This project viewer could be available in the next week(ish), subject to QA testing.
    • It is hoped this “alpha testing” will show that minimal changes to behaviour / functionality are required, but it will allow LL time to gather broader stats on performance (particularly on hardware specs they have not been able to test), and allow for additional tuning and optimisation.
    • LL are aware there are some performance regressions within the viewer (together with some issues with AMD drivers) that need to be addressed before it can proceed beyond project viewer status, but these are not seen as blockers to this wider testing, as they can be noted within their viewer release notes.
    • It is also noted that permissions will be “a little bit broken” to start with, and some edge-case rendering may appear broken – but issues like this will be documented in the project viewer release notes.
    • Once these issues have been ironed out, it is anticipated that there will be no FPS loss when comparing the PBR Materials viewer to the current release viewer.
  • Following this “open alpha”, the viewer will move to “beta mode” (presumably as it reaches RC status), when the focus will be on bug fixes rather than adding / changing features; as such, the project viewer phase is seen as crucial in obtaining feedback encompassing concerns over things like file formats, functionality, changes to the rendering pipe, etc.

Reflection Probes

  • LL is concerned that reflection probes may prove to be an Achilles’ heel for the new capabilities, simply because the technique for using them, as adopted by the industry as a whole, is somewhat counter-intuitive.
  • The core issue is getting people’s heads around the idea that the “shiny thing” is not itself the probe; rather, the probe must be correctly set up, including its influence volume.
    • Typical documentation on setting-up reflection probes, such as that provided by Unreal and Unity – both of which share the same concepts around the influence volume as the SL implementation (although the latter has fewer parameters) – doesn’t always make for easy reading.
    • The influence volume is a particular issue for SL due to the use of the automatic probes which exist within a region, and which are unlikely to always be in the right place, so may need to be countered by manually-placed probes.
  • One significant concern arising from this is that content will be sold with its own reflection probe – precisely the wrong thing to do, as it could lead to dozens of probes in a single room, all with influence volumes that eat into viewer resources, when a single probe for the room as a whole would achieve the same results.
    • A way around this would be to offer the means to disable the reflection probe as supplied within individual objects sold by creators.
    • However, this raises the question of what happens if said content is sold No Mod, blocking any ability to disable the associated reflection probe. Would it be sufficient to let market forces determine the success of such items?
  • No definitive solution for this has been determined – other than the need for full and proper education of creators (with documentation) – both of which assume creators will a) want to be educated; b) RTFM. The meeting drew to a close whilst this was still under discussion, so the topic will likely be picked up at the next meeting.

Forward Rendering + Deprecating OpenGL 2.0

  • As has been previously stated by LL (and covered in these CCUG summaries), a key change the PBR work will bring to the viewer is the removal of the forward rendering pipe – otherwise known as “disabling ALM”.
    • To help with this, LL have implemented additional optimisation within the PBR viewer designed to help low-end systems perform better with ALM (Advanced Lighting Model) always on, and more optimisations are to be added to the viewer.
    • However, one of the areas of feedback the Lab would like to obtain during the “alpha testing” is how well low-end hardware they may not have been able to test functions when running the PBR viewer.
    • Obviously, the removal of forward rendering will mean the final end of invisiprims, which have for the last 10+ years required ALM to be turned off in order to mask other elements (e.g. system body parts, Linden Water, etc.).
  • It is acknowledged that forward rendering currently offers better anti-aliasing than ALM, but a means to improve this under ALM is being looked at.
  • The PBR Materials viewer will effectively see support for OpenGL 2 finally deprecated in the viewer, the minimum requirement being OpenGL 3 (OpenGL 4 being required for reflection probes).
    • This is thought to impact around 0.5% of all Windows / Linux users (stats are still collected for Linux, and many of those might be bots utilising older clients), with the majority of users – even those on elderly hardware – trending towards OpenGL 4 (which has been available for around a decade).
    • Given that even basic GPU cards such as the Nvidia GTS 250 (introduced in 2009) and many laptops with integrated graphics can support at least OpenGL 3, it is hoped that even those still running OpenGL 2 will be able to update to OpenGL 3 to avoid issues.
      • If you are concerned about your hardware supporting OpenGL 3, go to Help → About in your viewer, and you can check the OpenGL version you are running from there – most people will likely find they are running a version of OpenGL 4. Those on Mac OS systems might also use this tool to check.
    • The testing period will be sufficient for LL to determine whether they need to address the lack of OpenGL 2 support in future iterations of the viewer.
If you are worried about your older hardware supporting PBR Materials correctly, use Help → About to check the version of OpenGL you are using; you will likely find it is at least OpenGL 3, if not OpenGL 4+

Alpha Modes

  • With the PBR / Materials viewer, alphas are (as previously reported) moving to linear colour space, which may cause problems for some pre-PBR content.
  • This has not been an issue for those creators working within the closed Discord channel group using pre-release viewers (particularly given some TPVs already use linear colour space), but LL expects to get fuller feedback on the move once the project viewer is available on a much broader basis.
  • If the move doesn’t work for pre-PBR content with alphas, the result might be that legacy content and PBR content will not alpha sort against each other, forcing an artificial sorting (e.g. rigged mesh alphas sorted first, then non-rigged over them).

In Brief

  • The question was asked about SL supporting the Open Asset Import Library for mesh imports.
    • It was described as “cool but bloated”, and “great when you want broad spectrum asset import and don’t really care about doing it super well for any particular format, not so great when you want to support one thing really well.”
    • As such, LL would rather focus on just supporting glTF mesh. As part of this, as the viewer progresses and glTF mesh import is added, support for importing COLLADA files will likely be discontinued in the official viewer, and creators will be pointed to a COLLADA-to-glTF converter.
  • Terrain textures: LL is still looking at options for a future project to improve terrain textures.
    • One user-generated suggestion that has been accepted for consideration is BUG-232769.
    • The Graphics Team are also looking at materials paint maps for terrain.
      • If this approach were to be adopted, it might comprise a set of four materials per parcel, which could be painted onto the terrain “at some fidelity” (e.g. around 1/4 m or so), over the existing elevation-based texturing.
      • This would allow a parcel owner to, for example, “paint” something like a cobblestone path through a part of their parcel or add high-quality grass for a lawn, etc.
      • In particular, it could enable creators to provide “terrain ready” materials packs others can purchase and use in their parcels.
    • Note that currently, these are ideas. There is no actual project for revamping terrain textures in progress at the moment.
  • It was reiterated that there is an internal move to update the Second Life minimum system requirements to reflect the versions of required APIs which must be run on client hardware rather than specifying specific hardware (CPU, GPU, memory, etc.) as is currently the case.
  • The hoary old complaint of aviators having to fly through parcels with different EEP settings was again raised as an “issue” (while not perfect, applying a preferred EEP setting to your avatar – Fixed Sky or Day Cycle – still seems the easiest way to overcome this, rather than working to re-jig the viewer as a whole).
  • A reminder: Local KVP / “Linkset Data” should be deployed to the simhost RC channels in week #43. I’ll have a post out on this during that week.
    • When testing this capability, it is important to note that data committed to a linkset will not survive the move of the linkset to a simulator channel that does not have the necessary support for the capability; ergo, all testing must take place within the defined RC channels following the initial deployment.
    • There will be a special article on this capability in these pages once it has rolled to RC.
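For those planning to test, a minimal sketch of the capability might look like the following (this assumes the llLinksetDataWrite / llLinksetDataRead function names from the Linkset Data specification; exact behaviour on the RC channels may differ):

```lsl
default
{
    state_entry()
    {
        // Commit a key-value pair to the linkset's data store...
        llLinksetDataWrite("greeting", "hello");
        // ...and read it back. The data persists with the linkset, but
        // (per the note above) not across moves to non-RC channels.
        llOwnerSay("greeting = " + llLinksetDataRead("greeting"));
    }
}
```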

Next Meeting

  • Thursday, November 3rd, 2022.

2022 Puppetry project week #41 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and an audio recording of the Thursday, October 13th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

New Viewer Version – 6.6.3.575529 Dated October 12th

  • This viewer uses a different, more efficient data format for sending updates up to the region, and from the region to viewers.
    • The new and old formats and viewers are not compatible; someone on the new project viewer will be unable to see puppetry rendered for someone using the older viewer version, and vice-versa.
    • It is hoped that severe breakages between viewer versions like this will be avoided going forward, but this change was deemed necessary.
  • This viewer also includes a crash (deadlock) fix, and puppetry animations should fade in / out when starting or explicitly stopping (animations may stop abruptly should the LEAP plug-in crash, the data stream be lost, etc.).
  • Those self-compiling viewers with the puppetry code should ensure they are pulling the updated code from the 6.6.3.575529 (or later, as new versions appear) repositories.

Protocol Overhaul

Leviathan Linden noted the project team is going to overhaul the Puppetry / LEAP protocol.

  • The intent is to replace all the current LEAP commands (“move”, “set_this”, “set_that”, etc.) with just two commands: “set” and “get”.
  • On the “set” side:
    • It will be possible to set avatar joint transforms or specify IK targets, and also to set various configuration settings as necessary.
    • These set commands will be “incremental” in nature (so that changes can be made to reach the final state), and once set, they stay at the defined value until modified, cleared, or the plug-in “goes away”.
  • On the “get” side:
    • get_skeleton and any other get_foo commands (if used) will be replaced with {get: [skeleton, foo, …]}.
    • A message will be generated and sent back to the viewer making the Get request, but the form of the message is still TBD.
  • Meanwhile, the viewer will only do IK for your own avatar, and will transmit the full parent-relative joint transforms of all puppeted joints through the server to other viewers, and LL will make it possible for a plug-in to just supply full parent-relative joint transforms if desired (e.g. no IK, just play the data).
  • This overhaul will also provide:
    • A way to move the Pelvis. This will include both a pre-IK transform (which is just setting the Pelvis transform) and also a post-IK transform, in case the avatar is to be moved after setting all the joints.
    • A “terse” format for the LEAP / Puppetry protocol to simplify some “set” commands and reduce data going over the LEAP data channel. It will be possible to mix these “terse” commands with long-form explicit commands.
  • Leviathan plans to break all of this work down into a set of Jira issues and place them on the kanban board for ease of viewing.

The overall aim of this overhaul is to make the protocol more easily extendible in the future.
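Purely as an illustrative sketch (the message format is explicitly still TBD, and every field name here other than “set” and “get” themselves is an assumption on my part), the two commands might look something along the lines of:

```
# Hypothetical "set": incremental, and persists until modified, cleared,
# or the plug-in goes away
{ "set": { "joint_state": { "mElbowLeft": { "rot": [0.0, 0.0, 0.1] } } } }

# Hypothetical "get": replaces get_skeleton and any other get_foo commands
{ "get": [ "skeleton" ] }
```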

To the above, Simon Linden added:

The data stream is radically different than what we started with. Essentially your viewer will do the work for your avatar: send[ing] all data needed for your puppetry animations [so] the people seeing you just have to use those positions – no IK or significant processing. That should help out in the long run with crowds 

Example Script

Simon Linden has produced a simple example script that has been pushed to the LEAP repository:

  • It reads a JSON file and sends that puppetry data to the viewer.
  • Using it, it is possible to edit some values, save the JSON text file, and see the avatar’s bones move in response.

In Brief

  • BUG-232764 “[PUPPETRY] [LEAP] Puppetry should be able to ‘Get’ and ‘Set’ avatar camera angle” has been raised to go with the protocol overhaul, and while it has yet to be formally accepted, has been viewed as a good idea by the Puppetry team.
  • Puppetry does not support physics feedback or collisions as yet, and work for it to do so is not on the short list of “things to do next”.
  • There is currently an issue of “near-clipping” when using a first-person (e.g. Mouselook) view with puppetry (so, for example, holding a hand up in front of your avatar’s face in Mouselook results in the hand being clipped and not fully rendering). This is believed to be an artefact of the viewer still rendering the head (even though unseen when in first-person view), and this interfering with rendering near-point objects like hands. The solution for this is still TBD.

Date of Next Meeting

  • Thursday, October 27th, 2022, 13:00 SLT.

2022 week #40: CCUG meeting summary

Sweetwater Valley, August 2022 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, October 6th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

No changes through the week, leaving the current crop of official viewers as:

  • Release viewer: version 6.6.4.575022 – hotfix for Crash at ~LLModalDialog() – promoted September 15 – no change.
  • Release channel cohorts:
    • Maintenance 3 RC viewer, version 6.6.5.575257, September 23.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.5.575055 September 19.
  • Project viewers:
    • Performance Floater / Auto-FPS project viewer, version 6.6.5.575378, October 4.
    • Puppetry project viewer, version 6.6.3.574545,  issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.

The Performance Floater / Auto-FPS project viewer includes a merge between the performance improvements from Firestorm integrated with the Lab’s auto-FPS capabilities.

PBR: Materials and Reflections

  • Please also see previous CCUG meeting summaries for further background on this project.
  • Test viewers continue to be made available to those on the Content Creation Discord channel. Requests to join that channel should be made in person at CCUG meetings. I am no longer able (at LL’s request) to furnish such information.
  • Viewer notes:
    • In order to make it clear when someone is working with PBR materials assets, there is an additional option within the viewer’s Build floater which, when selected, will open a dedicated PBR Materials editor, rather than munging the editor controls into the Build floater.
    • This also allows options like Glow (not a part of the glTF specification) to be retained and potentially used as an overlay on PBR materials.
    • Render work has seen the removal of the stencil buffer. This means that those build tools relying on the stencil buffer will be changing or completely going away from the viewer (e.g. the show grid cross-section checkbox, which has been broken for some time).
  • Work is in progress to integrate Linden Water into the new rendering pipe and support reflection probes.
    • This is also seeing some additional work on underwater refraction and on water reflections (e.g. no longer necessarily real-time reflections) so as to lighten the performance load.
    • The changes will mean Linden Water will look a little different to how it looks at the moment.
  • Additional scripting functionality has been added to the glTF test regions on Aditi to allow the configuring of reflection probes:
    • llSetPrimitiveParams([ PRIM_REFLECTION_PROBE, /*integer*/ enabled, /*float*/ ambiance, /*float*/ clip_distance, /*integer*/ flags ])
    • llGetPrimitiveParams([ PRIM_REFLECTION_PROBE ])
    • Link set variants of these functions should also work. These are the flags you can use in the last parameter:
      • PRIM_REFLECTION_PROBE_BOX – flag that determines whether the probe is a box or a sphere.
      • PRIM_REFLECTION_PROBE_DYNAMIC – flag that determines whether the probe will cause avatars to be shown in its reflections.
  • Overall, it is felt:
    • That a plan / process is in place to future-proof the work for the addition of further glTF parameters down the road.
    • The test viewer is moving rapidly towards a point where a public project viewer will debut.
  • The aspirational goal for this work is to have the viewer fully released by the end of 2022. However, this is dependent upon feedback and reaction to some of the compromises made, once the viewer gets to project status and reaches a wider audience.
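By way of a hedged illustration of the probe functions listed above (the parameter values here are purely illustrative, and the signatures as given at the meeting may change before release), a script might configure its prim as a dynamic box probe like so:

```lsl
// Illustrative sketch only: make this prim a box-shaped reflection
// probe that also shows avatars in its reflections.
default
{
    state_entry()
    {
        llSetPrimitiveParams([
            PRIM_REFLECTION_PROBE,
            TRUE,   // enabled
            0.5,    // ambiance (illustrative value)
            0.0,    // clip_distance (illustrative value)
            PRIM_REFLECTION_PROBE_BOX | PRIM_REFLECTION_PROBE_DYNAMIC
        ]);

        // Read the settings back for confirmation
        llOwnerSay(llList2CSV(llGetPrimitiveParams([PRIM_REFLECTION_PROBE])));
    }
}
```

Note this would only work on the glTF test regions on Aditi where the functionality is deployed.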

In Brief

  • Planar mirrors: if developed, these are seen as being similar to mirrors found in VRChat, etc., where they can reflect the local scene in detail, although at a performance impact.
    • There are potential ways to help reduce the impact – such as by limiting the distance at which a mirror is seen as “active” via the viewer (e.g. if you are within 5 metres, reflections are generated in the mirror; if you are beyond 5 metres from it, no reflections are generated) or by having mirrors touch-activated.
    • Ideally, such mirrors would have a specific function, and not merely another item of set dressing for a scene, and reflection probes will be used for generating environmental reflections (e.g. those seen on the shine of a car body or piece of silverware), rather than trying to make general object surfaces planar mirrors.
    • Note that any of this work would be for a future project, and is not part of the current PBR Materials + reflections work.
  • Custom pivot points: this work is described as currently “stalled out” due to investigations into supporting full hierarchies and similar, which have proven more complicated than first thought – although having a full hierarchy is seen as a major benefit.
  • LOD Clamping, etc. (please refer to my previous CCUG / TPVD summary for background on this): discussions are still in progress, and nothing definitive has been decided as yet. However, enforcing a hard clamp on setting LOD factors (including via the RenderVolumeLODFactor debug setting) is seen as a potential first step.
    • Were this to be done, TPVs would be given a suitable lead time to encourage creators to make suitable adjustments to their content.
    • This would apply only to in-world objects, although it is recognised avatars are a problem in their own right.
    • The latter have had some amelioration applied, as the Performance Improvement code ignores rigged attachment scaling and only pays attention to the avatar bounding sphere, so a) LODs should be selected based on the size of the avatar; b) rigged attachments should change LOD depending on your camera distance from them.
    • The above does not apply to rigged meshes with no LODs – so it is possible the Lab will start auto-generating LODs for these in the future.
  • Avatar imposters: SL currently leans heavily on the avatar imposter system to reduce the load of rendering avatars. It has been noted that a preferable route would be to generate mesh proxies for avatars at a distance. However, whilst discussed within the Lab, this is not  – yet – a project.

Next Meeting

  • Thursday, October 20th, 2022.

2022 week #39: CCUG & TPVD meetings – PBR and LOD

Sweetwater Valley, August 2022 – blog post

The following notes were taken from:

  • My audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, September 29th 2022 at 13:00 SLT.
  • My notes and the video from the Third-Party Viewer Developer (TPVD) meeting held on Friday, September 30th, 2022 at 13:00 SLT. The video is provided by Pantera – my thanks to her for recording it – and it can be found at the end of this article. Time stamps to the video are included where relevant in the following notes.

Both meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

[TPVD Video: 0:00-1:00]

No changes through the week, leaving the current crop of official viewers as:

  • Release viewer: version 6.6.4.575022 – hotfix for Crash at ~LLModalDialog() – promoted September 15 – no change.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Maintenance 3 RC viewer, version 6.6.5.575257, September 23.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.5.575055 September 19.
  • Project viewers:
    • Puppetry project viewer, version 6.6.3.574545,  issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
    • Performance Floater project viewer, version 6.5.4.571296, May 10.

General Viewer Notes

  • The next likely promotion to de facto release status will be the Maintenance 3 RC viewer.
  • The Performance Floater project viewer (which includes UI updates and the Lab’s new Auto-FPS feature) has been undergoing a lot of work to reconcile the Lab’s auto-FPS work with that of Firestorm (by Beq Janus and released in Firestorm 6.5.3, March 2022), and so an updated version should be appearing Soon™, possibly in week #40.
  • [Video: 28:10-32:35] The move to use Visual Studio 2022 in the Windows builds of the official viewer is moving ahead. Licenses are now in place, and an internal viewer (DRTVWR-568) built using VS2022 is being tested, and a project viewer may appear off the back of this.
    • In addition to this work, and as part of the migration to github, the third-party libraries used by the build process will be updated. This work will not include Clang.

Autobuild

[TPVD Video: 4:43-8:30]

  • A new version of Autobuild has been released with some features TPV developers may be interested in:
    • zstandard, xz, gzip compression of package archives.
    • blake2b hash support.
    • Support for downloading packages from restricted sources such as private GitHub Releases and GitLab packages.
    • CPU count exported as AUTOBUILD_CPU_COUNT for build scripts.
  • Signal Linden would like developers using a forked version of Autobuild to let him know what they need to be able to use the upstream version of Autobuild, so that it is “simple to use” and has all the features TPV devs need to build their viewers.
  • This discussion included a conversation on using WSL in place of Cygwin and on setting credentials to protect build packages that are not supposed to be redistributed.

PBR: Materials and Reflections

  • Please also see previous CCUG meeting summaries for further background on this project.
  • Test viewers continued to be made available to those on the Content Creation Discord channel, with work now focused on bringing the viewer more into line with the release viewer so that it can move forward to project viewer status for wider distribution.
    • Requests to join that channel should be made in person at CCUG meetings. I am no longer able (at LL’s request) to furnish such information.
  • It currently looks as though the route to be taken in aligning the PBR / Materials viewer with the current viewer code is that users will not be able to disable PBR rendering, but will be able to turn off the new reflections capabilities. This means that:
    • Objects with PBR materials on their faces will continue to show those materials, they just will not respond to reflection probes when the reflections capability is disabled.
    • Legacy materials (those we currently have today) should continue to look pretty much as they do at the moment.
  • A major change between the PBR / Materials viewer and the current viewer is the former performs the majority of alpha blending rendering in linear colour space.
    • This can cause some different results to be displayed with alpha blending and the haze in some EEP settings.  However, the majority of colours should render the same as, or close to, how they appear now.
    • However, the benefit is it reduces the amount of work the GPU / CPU has to do in converting between different colour spaces (e.g. linear and sRGB).
  • Linden Water still has to be incorporated into the new render pipe (notably the reflection and refraction paths, which currently require the forward rendering (i.e. non-ALM) path – a path being disabled in the viewer as a part of this work).
  • [TPVD video: 1:34-3:15] texture overrides are likely to be handled via specifying a glTF-compliant JSON blob per texture entry – although which fields will be supported is still TBA. It is hoped this approach will allow for rapid front-end / back-end support of features.
  • Reflections: the blending between reflection probes is still “not great” so this may cause some issues with presenting reflections across large surfaces (such as the face of a large skyscraper or glass building), with the suggestion being to manually place additional probes.

LSL Support

  • New PBR / Materials related LSL functions are to be introduced to allow for setting PBR materials on prim / object faces.
  • These functions currently comprise:
    • llGetRenderMaterial(sideNum) – Returns materialNameOrID
    • llSetRenderMaterial(materialNameOrID, sideNum)
    • llSetLinkRenderMaterial(linkNum, materialNameOrID, sideNum)
    • llGetPrimitiveParams([PRIM_RENDER_MATERIAL, sideNum]) – Returns [ materialNameOrID ]
    • llGetLinkPrimitiveParams(linkNum, [PRIM_RENDER_MATERIAL]) – Returns [ materialNameOrID, … ]
    • llSetLinkPrimitiveParamsFast(linkNum, [PRIM_RENDER_MATERIAL, sideNum, materialNameOrID])
  • The standalone functions are seen as being in line with llSetTexture, and are less verbose to type compared to passing a list as with Set / GetLinkPrimitiveParams.
  • All of these functions work similarly to the functions for setting textures on the faces of prims (ex: llSetTexture), but instead of referencing an image asset, they reference a material, such as can be created with the Material Editor.
  • materialNameOrID can be the material UUID string, or the name of a material item in the prim’s inventory.
  • These functions are currently deployed on the Aditi PBR test regions (Rumpus Room and Materials Sandbox regions) for testing.
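As a hedged sketch of the above (“Tile Material” is a hypothetical material item assumed to be in the prim’s inventory; a material UUID string would work equally well, and these functions are only live on the Aditi test regions):

```lsl
// Illustrative sketch only: apply a PBR material to face 0 on touch,
// then report back what that face is now using.
default
{
    touch_start(integer total_number)
    {
        llSetRenderMaterial("Tile Material", 0);  // hypothetical inventory item
        llOwnerSay("Face 0 material: " + llGetRenderMaterial(0));
    }
}
```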

Land Impact / LOD Clamping

[TPVD video 39:15-meeting end + CCUG Meeting]

The core of both the CCUG and TPVD meetings was the issue of the user experience, in-world mesh LODs, Land Impact, and what might be done to improve things.

Side note: it was acknowledged that many of the issues raised also apply to mesh avatar clothing and avatar accessories, but due to the manner in which avatars are handled in general, this is seen as a separate issue, deserving of its own discussion and potential routes to improve.

The Problem

  • Around 30% of SL users – including many entering SL for the first time – are on systems that require a reasonable LOD factor (e.g. no more than 2) in order to maintain a reasonable frame rate. Unfortunately, this leaves them with a “broken” view of the world, as a result of a lot of in-world mesh items being built such that they need to be seen at a higher LOD setting even at reasonable camera distances.
  • This is the result of a combination of issues, including (but not necessarily limited to):
    • The Land Capacity / Land Impact (LI) system, and the need to manage the impact (LI) in-world builds have.
    • The failure / unwillingness of some creators to properly optimise the Level of Detail (LOD) generation of their models, despite knowing they should, using the lowest LOD options they can in order to minimise LI (and thus having their models decimate – fall apart – even when seen from relatively close distances).
    • The ability to force the viewer to fully render any LOD model of an in-world object, no matter how poorly optimised, in full detail via the unsupported RenderVolumeLODFactor setting, with creators then telling customers to set their viewers to a high LOD factor (sometimes double figures) – something which can severely impact frame rates.
      • “Unsupported” is here a deliberate choice of words. As Runitai Linden noted at both meetings, debug settings, whether exposed as a UI element by TPVs or not, are not regarded as being a core, supported part of the viewer and thus are subject to change / removal by the Lab.
    • Issues within the mesh uploader cost calculations which appear to penalise properly modelled LODs by increasing the upload cost of a model with “decent” LODs.
  • It is an issue seen as needing to be addressed, simply because new users are coming into SL on lower-performing systems and having a bad visual experience. The question is how best to address it.

Possible Routes to Help Alleviate

  • Enforced clamping of the RenderVolumeLODFactor debug setting to no more than 4.00 for all viewers. This has been the case for some time in the official viewer (with the Graphics Preferences slider clamped to a maximum of 2.00), a practice also employed by some TPVs.
    • There was a general level of support for such a move, the view being it would force those creators who persist in trying to circumvent LOD modelling in favour of gaining a lower LI on their items to no longer do so, and encourage those coming into SL mesh content creation to properly model LODs.
  • Overhauling the LOD calculations for how objects are seen and rendered by the viewer, so that instead of only looking at the number of degrees on-screen an object’s bounding sphere takes up, the viewer scales its calculations in accordance with screen resolution.
    • This is seen by the Lab as a potentially good idea.

Other Points Raised in the Discussions

  • [TPVD Video 51:41-53:26] – Proper LODs appear to be penalised with higher LI values. This is likely to be down to how LI is calculated across a region, as explained by Runitai, and the math involved is unlikely to be changed.
  • [TPVD Video: 55:21-56:51] – Issues of render cost vs. download cost (getting all the asset data to the viewer for rendering) and what is seen as an imbalance between the two when rendering multiple copies of the same object. However, for the reasons given in the video, this is also unlikely to change.
  • [TPVD Video: 57:25-58:38] – RenderDynamicLOD is a debug setting (again, unsupported), that, when set to FALSE, forces the viewer to select a LOD model for an in-world object, based on its size, and always renders that LOD model, irrespective of camera distance.
    • As such, it cannot be gamed to avoid LODs per se.
    • It can, in some circumstances, result in an improvement (perhaps only slight) in FPS. As such, it is possible this setting might be presented as an option in the Advanced Graphics Preferences at some point (thus making it a supported feature).
  • [TPVD Video: 58:59-59:46] – In response to a suggestion made in chat that LL provide some form of “mesh inspection” service to ensure mesh items are decently optimised / modelled:
    • This was seen as antithetical to SL being a platform for content creation, as it would bottleneck the creative process and potentially deter creators.
    • It would also raise the question of how to review and “accept / refuse” all existing content within SL.
    • Instead, the preferable route is seen as trying to provide a means for creators to use the platform whilst ensuring they are encouraged to produce good-looking, performant content.
  • [TPVD Video 59:49-60:43] – However, it was observed that at the end of the day, if content creators are unable / unwilling to adhere to building principles which allow the world to scale well by providing properly optimised LODs, there is always the option of replacing all creator-generated LODs with auto-generated LODs.
    • This is something which may (please note the emphasis!) be done in the case of avatar clothing and accessories.
    • It is also seen as something which might help enable SL to run graphically on mobile devices.

CCUG In Brief

  • There was some confusion over LL providing “instanced” regions, with some at the meeting being convinced it was a product offering indicated as “coming” or “premium”.
    • Currently, there are no clear plans for this to happen – the nearest to “instancing” the Lab offers is the cloning of event regions.
    • Instancing and on-demand products have been discussed at the Lab, but as pointed out in the meeting, providing them is not a certainty at present, and there are questions about what might happen WRT AWS fees, etc., should LL start to offer such a product (fees may not actually go down, as a result of unpredictability of use).
  • Alpha masks for the additional AUX wearable channels – a feature request has been received and accepted for these to be implemented, but there is no time frame on possible delivery, due to the need for both viewer and simulator updates as part of the implementation.
  • The question was asked of those attending the meeting as to which they would prefer to see: improvements to the in-world building tools or improving inter-operability with 3D tools.
    • This was something of a loaded question, inasmuch as those attending the CCUG are, for the most part, commercial content creators – people focused on generating income from their work. As such – and as demonstrated by the responses to the question (which included a claim that in-world builders are “leeching” off of others – hardly a fair categorisation) – inter-operability proved to be the more popular.
    • It was, however, acknowledged by Lab staff at the meeting that there are other creators in Second Life who are not necessarily driven by commercial aims but who can still contribute to the wider community in multiple ways and who still utilise the in-world tools; as such, their feedback should also be sought.

TPVD In Brief

  • [Video: 8:51-9:51] Multi-Factor Authentication: there is an upcoming update which will see MFA enforced viewer-side. When implemented, it will mean users who have opted-in to MFA will only be able to log in to SL on viewers with MFA support; they will no longer be able to switch between viewers with / without MFA support.
  • [Video: 9:59-11:00] Inventory Updates: discussed in previous meetings, it has been confirmed that as part of this work the AIS2 API will be deprecated and will “go away at some point”, and the viewer fully transitioned to AIS3 only.
    • This means that any new inventory fields added as a part of any forthcoming inventory project will only be accessible via AIS3.
  • [Video: 12:11-17:05] Legacy Profiles:
    • It has been noted that the URL Profile field is now completely missing from the Lab’s legacy Profiles viewer code. A Jira for this has been requested.
    • Viewers with the legacy Profile code now also incorrectly report an avatar’s rezday (listing it one day early). This is a known issue and will be addressed, but it requires a back-end update.
  • [Video 32:51-39:14] A discussion on viewer code signing (e.g. for recognition of executables being from a trusted source) – please refer to the video.

Next Meetings

  • CCUG: Thursday, October 6th, 2022.
  • TPVD: Friday, October 28th, 2022.

2022 Puppetry Project weeks #36 and #38 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recordings of the September 8th and September 22nd Puppetry Project meetings held at the Castelet Puppetry Theatre on Aditi. These meetings are:

  • Generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).
  • A mixed Voice / text chat format – attendees are not obligated to use voice when asking questions, but will need to listen to voice to hear the entire meeting.

Notes in these summaries are not intended to be a full transcript of every meeting.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Further Information

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Summary of September 8th Meeting

Note: timing issues on my part meant I was unable to attend the first third of this meeting.

  • It is acknowledged that the current Puppetry viewer (viewer branch DRTVWR-558) is somewhat crashy and subject to some looping issues.
  • One aspect of Puppetry that should be highlighted is the ability for it to work alongside / in concert with existing SL animations – so you can be running a dance animation and still wave to a friend using puppeteering without the two animations clashing.
  • It is acknowledged that to ensure some reasonable smoothness of movement and to prevent things like movement conflicts between joints, there will need to be a more formalised animation constraints system. The current plan is to make this configurable via XML.
  • It is also acknowledged that tracking in general needs to be tightened within the plug-in code.
  • Puppetry does not currently interact with the Havok physics system (puppetry is largely viewer-side; physics – with the exception of some special use sub-libraries – is largely simulator-side).
  • The protocols which are used server-side to support Puppetry are not set in stone at this point; cases which require additional messaging, etc. can be discussed with the Puppetry team members from the simulator / server side of LL (e.g. Rider and Simon Linden).
  • Direct avatar interactions (e.g. shaking / holding hands, swinging a tennis racket to strike a ball, etc.): the IK system could help enable this, but it would also require a lot more work on the avatar / world mapping system to be fully possible, and this work has yet to be tackled (if it is to be tackled as a part of this initial Puppetry work).
  • The project is, at this point, fairly open as to where it might go: these initial project meetings are geared towards developers who may be interested in contributing and pushing elements of the project forward (e.g. support for full body tracking, etc.). Obviously, at some point, constraints will be placed on what is to be initially delivered.
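Regarding the XML-configurable animation constraints system mentioned above, no format has been published; purely as a speculative illustration, a constraints file might look something like the following (all element and attribute names here are invented):

```xml
<!-- Purely hypothetical sketch: element and attribute names are invented,
     not part of any published Puppetry configuration format. -->
<constraints>
  <joint name="mElbowLeft">
    <!-- Limit rotation so the elbow cannot hyper-extend. -->
    <rotation axis="pitch" min="0" max="150"/>
  </joint>
  <joint name="mWristLeft" priority="2">
    <!-- Higher-priority joints win when two animations conflict. -->
    <rotation axis="yaw" min="-80" max="80"/>
  </joint>
</constraints>
```

The joint names shown are from the SL avatar skeleton; everything else is a guess at how per-joint limits and conflict priorities might be expressed.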

Plugins (Pros and Cons)

  • Requests were made for the Puppetry system to support OpenXR (as well as LEAP). It was indicated that OpenXR would be considered as a default if a suitable plug-in were to be developed and contributed to Linden Lab for proper vetting and formal inclusion in the viewer.
  • The fact that the Puppetry project is using plug-ins raised concerns over system security. Plug-ins are executable, and so if accepted to run, a malicious plug-in could do considerable harm to a person’s system.
    • LL is aware of this, and is actively trying to minimise risk as far as possible.
    • However, safety also lies with users – do not download viewers from unofficial sites / sites that cannot be trusted; do not accept and run plug-ins that are passed around through forums, etc.
  • The benefits of using plug-ins were summarised as:
    • Speed of internal development / testing: there is no need to run a complete viewer build simply because a couple of lines of code have been changed in testing; only the plug-in needs to be updated.
    • Extensibility: plug-ins allow for more flexible support of additional creation tools, or for adding support for additional data formats (e.g. as with OpenXR) / hardware / programming languages (e.g. Python, C++, etc.).
    • Performance: plug-ins allow the required additional processing, such as webcam capture, processing and translation, to be handed off to processing threads separate from the viewer, preventing the latter from losing performance by having to do the processing itself.
    • User assurance: removing things like the webcam controls to a plug-in that is not run by default as a de facto part of the viewer’s processing will (hopefully) remove fears about webcams somehow being used to “spy” on users.

Summary of September 22nd Meeting

  • It is hoped an updated version of the Puppetry Project Viewer will be available via the Alternate Viewers page in week #39 (commencing Monday, September 26th). This includes fixes and updates to the motion logic that should make avatar motion more predictable.
  • In terms of device support for puppeteering, any device that can be recognised as a joystick should be supportable within the Puppetry viewer (utilising the existing Joystick support options through Preferences) – although some refinement to the controls may be required by LL.
  • LSL support for puppeteering: nothing has been defined at present, but there are some ideas as to what might be needed / nice to have. It has been suggested LSL support is a subject for discussion at the next meeting.
  • Simon Linden has pushed a couple of capabilities:
    • A simple poser contained in a side branch of the LEAP repository. This reads a basic JSON file with bone positions (rotations) for all 133 bones in the avatar skeleton and sends it as LEAP data to the viewer for animating the avatar. This file can be live-edited, and is designed to help those working with puppeteering to experiment with it in an easy format – it will not be an end feature for the project.
    • A further branch added to the Puppetry viewer repository, called DRTVWR-558 Data Packing. This converts the data going from the viewer to the server and onwards to a more efficient format, allowing the full animation data set to be contained in a single packet for transmission.
      • However, this format is incompatible with the existing data format used within viewers built via DRTVWR-558; viewers built using the newer code will not be able to show puppeteering from those using the older format, and vice-versa.
      • Those involved in experimenting with Puppetry should therefore switch to the viewer using the updated data format once it is made available through the Alternate Viewers page, as it will replace the current data format going forward.
  • Leviathan Linden has suggested that if LL can transmit all bone data in compressed format, then they may not need to send IK targets and have the viewer manage the IK for all avatars in a scene, but rather have the viewer run the IK for a user’s avatar and then stream the avatar’s entire state, reducing the load on the viewer.
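The JSON file read by the experimental poser described above has not been documented in the meetings; conceptually, a file of per-bone rotations might look like this (the bone names are from the SL skeleton, but the layout itself is a guess, not the actual format):

```json
{
  "mPelvis":       { "rotation": [0.0, 0.0, 0.0] },
  "mShoulderLeft": { "rotation": [0.0, 0.0, 45.0] },
  "mElbowLeft":    { "rotation": [0.0, 30.0, 0.0] }
}
```

The appeal of such a format for experimentation is that it can be live-edited in any text editor while the poser streams it to the viewer as LEAP data.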

Pelvis Movement / Full Body Tracking / OpenXR Support

  • There was initial discussion about supporting local joint offsets and particularly off-setting the avatar pelvis to allow for subtle movements without actually moving the avatar.
    • This is somewhat similar to scripted animations, such as stands within an AO system – the avatar appears to step forward / back / walk in a circle, but it is not physically moving as far as the simulator is concerned – the motions are the result of the avatar pelvis being offset from its actual position as seen by the simulator, and the animations running based on that offset.
    • There was some initial confusion between this and physically moving the avatar; as such, it was suggested this be referred to as “pelvis movement”, rather than “offsetting joints / bones”.
  • Part of the reason for this discussion is because several non-Linden developers have been experimenting with partial and full-body tracking via OpenXR, and have found that not being able to move the pelvis within Puppetry can lead to issues of floating, etc., when an avatar kneels or crouches (as seen within existing SL animations) – the result of the legs being pulled up towards the pelvis, rather than the pelvis being moved towards the ground.
  • In addition, this work has noted:
    • If Second Life were to return the “full” appearance data for an avatar (i.e. after all mesh transforms, slider data, baked appearance information, etc., have been applied), rather than the “raw” skeletal appearance, better calculations could be made around the pelvis height from the floor.
    • The approach works equally well with partial body tracking via a Rift S headset, and full body tracking using Valve headsets and Kinect devices.
    • However, it currently uses Blender as a conduit for translating movement within an OpenXR rig to the Second Life puppeteering rig, and would benefit enormously from a dedicated OpenXR plug-in; the developers are willing to provide the data gathered from the work they have completed thus far to help facilitate this.
    • Separately to this, OPEN-363 “[Puppetry] [LEAP]: Add native OpenXR plugin” has been raised, but is (at the time of writing) awaiting review.
  • The above formed the nucleus of discussion for much of the meeting, with the ability to move the avatar pelvis now being seen as more of a priority requirement; Leviathan Linden indicated they will try to look specifically at this between now and the next meeting.

Date of Next Meeting

2022 week #37: CCUG meeting summary – updated

Endless: Birdlings Flat, July 2022 – blog post

Update, September 19th, 2022: in response to the discussion concerning the upcoming mesh new starter avatars and encouraging creators to help develop an ecosystem of clothing and accessories for them (see In Brief, below), Alexa Linden forwarded the following comment to me:

Linden Lab plans to release devkits for our new Default Avatars after we’ve gone live with them. We’re waiting to make sure everything goes smoothly and no tweaks are needed. Once we’re happy with the results we’ll work on the documentation and devkits to make them available for ALL creators interested in supporting these avatars.

Alexa Linden, September 19th, 2022

 

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, September 15th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

  • Release viewer: version 6.6.4.575022 – hotfix for Crash at ~LLModalDialog() – promoted September 15 – NEW.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.4.574750, issued September 6.
    • Maintenance 3 RC viewer, version 6.6.4.574727, September 1.
  • Project viewers:
    • Puppetry project viewer, version 6.6.3.574545, issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
    • Performance Floater project viewer, version 6.5.4.571296, May 10.

Materials and PBR Work

Please also see previous CCUG meeting summaries for further background on this project.

  • Tangent spaces:
    • The glTF 2.0 standards are very specific about where tangent spaces are expected to be and how they behave. To this end, Second Life will use mikkTSpace tangents, at least for PBR support.
    • There are internal discussions going on at the Lab as to whether this should be retroactively applied to existing meshes. If so, meshes will have to be re-uploaded in order to have the correct tangents, due to the way bounding boxes are currently handled (the DAE upload changes the coordinate frame of the original mesh, resulting in tangents being generated which do not match the normal map).
    • As it is, use of mikkTSpace tangents does increase Land Impact (LI), as the tangents are being stored inside the mesh alongside the normal information. However, LL are currently running experiments to see if the tangents can be generated and stored without altering LI.
    • Until it is shown whether or not these experiments work, LL is requesting holding off on discussions about possible LI increases until they know what the likely scenario is.
    • Alongside of mikkTSpace, this project will use per pixel binormal generation.
  • There is a mismatch between the colour space handling swatch in the viewer’s Material Editor and the glTF specification; the latter calls for these colours to be in linear space, while in the viewer they are in sRGB space. This means that the values displayed by the swatch do not match the values which would be typed into a tool like Substance Painter. It is not currently clear how this mismatch will be handled.
  • Viewer:
    • Transparency and double-sided materials are now supported in the PBR development viewer.
    • To maintain commonality with the glTF specification, albedo in the PBR viewer is now called base colour.
    • Access to the PBR development server is via the Content Creation Discord channel, and requests to join that channel should be made in person at CCUG meetings. I am no longer able (at LL’s request) to furnish such information.
  • Again, this project is only focused on the materials elements of the glTF 2.0 specification; it does not include glTF mesh model import – although there is potential traction for this to be a follow-on project, with the goal of eventually replacing COLLADA support for mesh uploads.
  • Ideally, the Graphics team would like to structure work such that elements of the glTF 2.0 specification can be selected and then implemented within Second Life, thus presenting an “extensible” means of supporting the standard.

Puppetry Update

Please also refer to:

Notes:

  • Jiras from the Lab and users will now be available in the public Jira for ease of reference.
  • Meetings are held on Aditi (the beta grid) at the Castelet region Puppetry Theatre on alternate Thursdays at 13:00 SLT. The next meeting will be on Thursday, September 22nd, 2022.
  • Some users have been looking at using OpenXR for Puppetry and enjoying some success.

In Brief

  • The New Linden Lab mesh New Starter avatar (see here for more): the project has reached the stage of the Lab (through the LDPW) generating a fixed set of content to support the avatar, which will be made available through the Library.
    • There is – as Patch Linden has indicated – the potential for content creators to be encouraged to support this new body and potentially provide a support ecosystem of clothing and accessories – although this will require a dev kit.
    • It’s been suggested that some existing clothing / accessory creators be invited to help test the new avatar.
    • Mention of the new mesh avatar triggered an extended discussion on the current state of (and, for creators, the pitfalls of) the multi-body / head marketplace, providing content, content efficiency, etc. As a more philosophical discussion (at this point in time) than a clear set of opportunities for improvement, it largely falls outside of this summary.

Next Meeting

  • Thursday, September 15th, 2022.