2022 week #40: CCUG meeting summary

Sweetwater Valley, August 2022 – blog post

The following notes were taken from  my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, October 6th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

No changes through the week, leaving the current crop of official viewers as:

  • Release viewer: version 6.6.4.575022 – hotfix for Crash at ~LLModalDialog() – promoted September 15 – no change.
  • Release channel cohorts:
    • Maintenance 3 RC viewer, version 6.6.5.575257, September 23.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.5.575055 September 19.
  • Project viewers:
    • Performance Floater / Auto-FPS project viewer, version 6.6.5.575378, October 4.
    • Puppetry project viewer, version 6.6.3.574545,  issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.

The Performance Floater / Auto-FPS project viewer integrates the performance improvements from Firestorm with the Lab’s auto-FPS capabilities.

PBR: Materials and Reflections

  • Please also see previous CCUG meeting summaries for further background on this project.
  • Test viewers continue to be made available to those on the Content Creation Discord channel. Requests to join that channel should be made in person at CCUG meetings. I am no longer able (at LL’s request) to furnish such information.
  • Viewer notes:
    • In order to make it clear when someone is working with PBR materials assets, there is an additional option within the viewer’s Build floater which, when selected, will open a dedicated PBR Materials editor, rather than munging the editor controls into the Build floater.
    • This also allows options like Glow (not a part of the glTF specification) to be retained and potentially used as an overlay on PBR materials.
    • Render work has seen the removal of the stencil buffer. This means that those build tools relying on the stencil buffer will be changed or removed from the viewer (e.g. the show grid cross-section checkbox, which has been broken for some time).
  • Work is in progress to integrate Linden Water into the new rendering pipe and support reflection probes.
    • This is also seeing some additional work on underwater refraction and on water reflections (e.g. no longer necessarily real-time reflections) so as to lighten the performance load.
    • The changes will mean Linden Water will look a little different to how it looks at the moment.
  • Additional scripting functionality has been added to the glTF test regions on Aditi to allow the configuring of reflection probes:
    • llSetPrimitiveParams([ PRIM_REFLECTION_PROBE, /*integer*/ enabled, /*float*/ ambiance, /*float*/ clip_distance, /*integer*/ flags ])
    • llGetPrimitiveParams([ PRIM_REFLECTION_PROBE ])
    • Link set variants of these functions should also work. The flags which can be used in the last parameter are (a short usage example follows this list):
      • PRIM_REFLECTION_PROBE_BOX – flag that determines whether the probe is a box or a sphere.
      • PRIM_REFLECTION_PROBE_DYNAMIC – flag that determines whether the probe will cause avatars to be shown in its reflections.
  • Overall, it is felt:
    • That a plan / process is in place to future-proof the work for the addition of further glTF parameters down the road.
    • The test viewer is moving rapidly towards a point where a public Project viewer will debut.
  • The aspirational goal for this work is to have the viewer fully released by the end of 2022. However, this is dependent upon feedback and reaction to some of the compromises made, once the viewer reaches project status and a wider audience.
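
By way of example, the following is a minimal LSL sketch using the reflection probe parameters listed above. It is based solely on the signatures given at the meeting: the ambiance and clip distance values are arbitrary, and the capability is currently only live on the Aditi glTF test regions, so behaviour may change.

// Minimal sketch based on the PRIM_REFLECTION_PROBE parameters described above.
// Values are arbitrary examples; only functional on the Aditi glTF test regions.
default
{
    touch_start(integer total_number)
    {
        // Enable a box-shaped probe on this prim which also reflects avatars,
        // with an ambiance of 0.5 and a clip distance of 10 m.
        llSetPrimitiveParams([
            PRIM_REFLECTION_PROBE,
            TRUE,   // enabled
            0.5,    // ambiance
            10.0,   // clip_distance
            PRIM_REFLECTION_PROBE_BOX | PRIM_REFLECTION_PROBE_DYNAMIC
        ]);

        // Read the probe settings back and report them to the owner.
        llOwnerSay(llDumpList2String(llGetPrimitiveParams([PRIM_REFLECTION_PROBE]), ", "));
    }
}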

In Brief

  • Planar mirrors: if developed, these are seen as being similar to mirrors found in VRChat, etc., where they can reflect the local scene in detail, although at a performance impact.
    • There are potential ways to help reduce the impact – such as by limiting the distance at which a mirror is seen as “active” via the viewer (e.g. if you are within 5 metres, reflections are generated in the mirror; if you are beyond 5 metres from it, no reflections are generated) or by having mirrors touch-activated.
    • Ideally, such mirrors would have a specific function, and not merely be another item of set dressing for a scene, and reflection probes would be used for generating environmental reflections (e.g. those seen on the shine of a car body or piece of silverware), rather than trying to make general object surfaces planar mirrors.
    • Note that any of this work would be for a future project, and is not part of the current PBR Materials + reflections work.
  • Custom pivot points: this work is described as currently “stalled out” due to investigations into supporting full hierarchies and similar, which have proven more complicated than first thought – although support for full hierarchies is seen as a major benefit.
  • LOD Clamping, etc. (please refer to my previous CCUG / TPVD summary for background on this): discussions are still in progress, and nothing definitive has been decided as yet. However, enforcing a hard clamp on setting LOD factors (including via the RenderVolumeLODFactor debug setting) is seen as a potential first step.
    • Were this to be done, TPVs would be given a suitable lead time to encourage creators to make suitable adjustments to their content.
    • This would apply only to in-world objects, although it is recognised avatars are a problem in their own right.
    • The latter have had some amelioration applied, as the Performance Improvement code ignores rigged attachment scaling, paying attention only to the avatar bounding sphere, so a) LODs should be selected based on the size of the avatar; b) rigged attachments should change LOD depending on your camera distance from them.
    • The above does not apply to rigged meshes with no LODs – so it is possible the Lab will start auto-generating LODs for these in the future.
  • Avatar imposters: SL currently leans heavily on the avatar imposter system to reduce the load of rendering avatars. It has been noted that a preferable route would be to generate mesh proxies for avatars at a distance. However, whilst discussed within the Lab, this is not  – yet – a project.

Next Meeting

  • Thursday, October 20th, 2022.

2022 week #39: CCUG & TPVD meetings – PBR and LOD

Sweetwater Valley, August 2022 – blog post

The following notes were taken from:

  • My audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, September 29th 2022 at 13:00 SLT.
  • My notes and the video from the Third-Party Viewer Developer (TPVD) meeting held on Friday, September 30th, 2022 at 13:00 SLT. The video is provided by Pantera – my thanks to her for recording it, and it can be found at the end of this article. Time stamps to the video are included where relevant in the following notes.

Both meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

[TPVD Video: 0:00-1:00]

No changes through the week, leaving the current crop of official viewers as:

  • Release viewer: version 6.6.4.575022 – hotfix for Crash at ~LLModalDialog() – promoted September 15 – no change.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Maintenance 3 RC viewer, version 6.6.5.575257, September 23.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.5.575055 September 19.
  • Project viewers:
    • Puppetry project viewer, version 6.6.3.574545,  issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
    • Performance Floater project viewer, version 6.5.4.571296, May 10.

General Viewer Notes

  • The next likely promotion to de facto release status will be the Maintenance 3 RC viewer.
  • The Performance Floater project viewer (which includes UI updates and the Lab’s new Auto-FPS feature) has been undergoing a lot of work to reconcile the Lab’s auto-FPS work with that of Firestorm (by Beq Janus and released in Firestorm 6.5.3, March 2022), and so an updated version should be appearing Soon™, possibly in week #40.
  • [Video: 28:10-32:35] The move to use Visual Studio 2022 in the Windows builds of the official viewer is moving ahead. Licenses are now in place, and an internal viewer (DRTVWR-568) built using VS2022 is being tested, and a project viewer may appear off the back of this.
    • In addition to this work, and as part of the migration to github, the third-party libraries used by the build process will be updated. This work will not include Clang.

Autobuild

[TPVD Video: 4:43-8:30]

  • A new version of Autobuild has been released with some features TPV developers may be interested in:
    • zstandard, xz, gzip compression of package archives.
    • blake2b hash support.
    • Support for downloading packages from restricted sources such as private GitHub Releases and GitLab packages.
    • CPU count exported as AUTOBUILD_CPU_COUNT for build scripts.
  • Signal Linden would like developers using a forked version of Autobuild to let him know what they need in order to use the upstream version of Autobuild, so that it is “simple to use” and has all the features TPV devs need to build their viewers.
  • This discussion included a conversation on using WSL in place of Cygwin, and on setting credentials to protect build packages that are not supposed to be redistributed.

PBR: Materials and Reflections

  • Please also see previous CCUG meeting summaries for further background on this project.
  • Test viewers continue to be made available to those on the Content Creation Discord channel, with work now focused on bringing the viewer more into line with the release viewer so that it can move forward to project viewer status for wider distribution.
    • Requests to join that channel should be made in person at CCUG meetings. I am no longer able (at LL’s request) to furnish such information.
  • It currently looks as though the route to be taken in aligning the PBR / Materials viewer with the current viewer code is that users will not be able to disable PBR rendering, but will be able to turn off the new reflections capabilities. This means that:
    • Objects with PBR materials on their faces will continue to show those materials; they just will not respond to reflection probes when the reflections capability is disabled.
    • Legacy materials (those we currently have today) should continue to look pretty much as they do at the moment.
  • A major change between the PBR / Materials viewer and the current viewer is that the former performs the majority of alpha blending rendering in linear colour space (see the sketch after this list).
    • This can cause some different results to be displayed with alpha blending and with the haze in some EEP settings. However, the majority of colours should render the same as, or close to, how they appear now.
    • However, the benefit is that it reduces the amount of work the GPU / CPU has to do in converting between different colour spaces (e.g. linear and sRGB).
  • Linden Water still has to be incorporated into the new render pipe (notably the reflection and refraction paths, which currently require the forward rendering (i.e. non-ALM) path – a path being disabled in the viewer as a part of this work).
  • [TPVD video: 1:34-3:15] texture overrides are likely to be handled via specifying a glTF-compliant JSON blob per texture entry – although which fields will be supported is still TBA. It’s hoped that this approach will allow for rapid front-end / back-end support of features.
  • Reflections: the blending between reflection probes is still “not great” so this may cause some issues with presenting reflections across large surfaces (such as the face of a large skyscraper or glass building), with the suggestion being to manually place additional probes.
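
As a purely illustrative aside on the linear colour space point above (this is not viewer code), the standard sRGB-to-linear conversion shows why a blend performed in linear space produces a slightly different result to the same blend performed directly on sRGB values:

// Illustrative only – not viewer code. Standard sRGB <-> linear conversions,
// used to show why blending in linear space differs from blending sRGB values.
float srgb_to_linear(float c)
{
    if (c <= 0.04045) return c / 12.92;
    return llPow((c + 0.055) / 1.055, 2.4);
}

float linear_to_srgb(float c)
{
    if (c <= 0.0031308) return c * 12.92;
    return 1.055 * llPow(c, 1.0 / 2.4) - 0.055;
}

default
{
    state_entry()
    {
        float a = 0.25;
        float b = 0.75;
        // 50/50 blend performed directly on the sRGB values...
        float srgb_blend = (a + b) * 0.5;
        // ...versus the same blend performed in linear space, converted back for display.
        float linear_blend = linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) * 0.5);
        llOwnerSay("sRGB blend: " + (string)srgb_blend + " | linear blend: " + (string)linear_blend);
    }
}

The viewer does this work on the GPU, of course; the point is simply that keeping the maths in linear space throughout avoids repeated per-operation conversions, which is the saving referred to above.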

LSL Support

  • New PBR / Materials related LSL functions are to be introduced to allow for setting PBR materials on prim / object faces.
  • These functions currently comprise:
    • llGetRenderMaterial(sideNum) –  Returns materialNameOrID
    • llSetRenderMaterial(materialNameOrID, sideNum)
    • llSetLinkRenderMaterial(linkNum, materialNameOrID, sideNum)
    • llGetPrimitiveParams([PRIM_RENDER_MATERIAL, sideNum]) – Returns [ materialNameOrID ]
    • llGetLinkPrimitiveParams(linkNum, [PRIM_RENDER_MATERIAL]) – Returns [ materialNameOrID, … ]
    • llSetLinkPrimitiveParamsFast(linkNum, [PRIM_RENDER_MATERIAL, sideNum, materialNameOrID])
  • The standalone functions are seen as being in line with llSetTexture, and as less verbose to type than passing a list as with llSetLinkPrimitiveParams / llGetLinkPrimitiveParams.
  • All of these functions work similarly to the functions for setting textures on the faces of prims (ex: llSetTexture), but instead of referencing an image asset, they reference a material, such as can be created with the Material Editor.
  • materialNameOrID can be the material UUID string, or the name of a material item in the prim’s inventory.
  • These functions are currently deployed on the Aditi PBR test regions (Rumpus Room and Materials Sandbox regions) for testing.
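
As a simple usage sketch of the functions listed above: the UUID below is a placeholder (the name of a material item in the prim’s inventory could equally be used), and these calls will only work on the Aditi PBR test regions noted above.

// Minimal sketch of the new PBR material functions listed above.
// The UUID is a placeholder; a material item name from the prim's inventory
// could also be used. Only functional on the Aditi PBR test regions.
string gMaterial = "00000000-0000-0000-0000-000000000000";

default
{
    touch_start(integer total_number)
    {
        // Apply the material to face 0 of this prim...
        llSetRenderMaterial(gMaterial, 0);

        // ...and to face 0 of link 2 in the same link set.
        llSetLinkRenderMaterial(2, gMaterial, 0);

        // Read back what is now applied to face 0 and report it.
        llOwnerSay("Face 0 material: " + (string)llGetRenderMaterial(0));
    }
}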

Land Impact / LOD Clamping

[TPVD video 39:15-meeting end + CCUG Meeting]

The core of both the CCUG and TPVD meetings was the issue of the user experience, in-world mesh LODs, Land Impact, and what might be done to improve things.

Side note: it was acknowledged that many of the issues raised also apply to mesh avatar clothing and avatar accessories, but due to the manner in which avatars are handled in general, this is seen as a separate issue, deserving of its own discussion and potential routes to improve.

The Problem

  • Around 30% of SL users – and a lot of those entering SL for the first time – are on systems that require a reasonable LOD factor (e.g. no more than 2) in order to achieve a reasonable frame rate. Unfortunately, this leaves them with a “broken” view of the world, as a result of a lot of in-world mesh items being built so that they need to be seen at a higher LOD setting even at reasonable camera distances.
  • This is the result of a combination of issues, including (but not necessarily limited to):
    • The Land Capacity / Land Impact (LI) system, and the need to manage the impact (LI) in-world builds have.
    • The failure / unwillingness of some creators to properly optimise the Level of Detail (LOD) generation of their models, despite knowing they should, and using the lowest LOD options they can in order to minimise LI (and thus have their models decimate – fall apart – even when seen from relatively close distances).
    • The ability to force the viewer to fully render any LOD model of an in-world object, no matter how poorly optimised, in full detail via the unsupported RenderVolumeLODFactor setting, with creators then telling customers to set their viewer to a high LOD factor (sometimes double figures) – something which can severely impact frame rates.
      • “Unsupported” is here a deliberate choice of words. As Runitai Linden noted at both meetings, debug settings, whether exposed as a UI element by TPVs or not, are not regarded as being a core, supported part of the viewer and thus are subject to change / removal by the Lab.
    • Issues within the mesh uploader cost calculations which appear to penalise properly modelled LODs by increasing the cost of a model with “decent” LODs to upload.
  • It is an issue that is seen as needing to be addressed, simply because new users are seen as coming into SL on lower-performing systems and having a bad visual experience. The question is how best to address it.

Possible Routes to Help Alleviate

  • Enforced clamping of the RenderVolumeLODFactor debug setting to no more than 4.00 for all viewers. This has been the case for some time in the official viewer (with the Graphics Preferences slider clamped to a maximum of 2.00), a practice also employed by some TPVs.
    • There was a general level of support for such a move, the view being it would force those creators who persist in trying to circumvent LOD modelling in favour of gaining a lower LI on their items to no longer do so, and encourage those coming into SL mesh content creation to properly model LODs.
  • Overhauling the LOD calculations for how objects are seen and rendered by the viewer, so that instead of only looking at the number of degrees on-screen the bounding sphere of an object takes up, the viewer scales its calculations in accordance with screen resolution (an illustrative sketch follows this list).
    • This is seen by the Lab as a potentially good idea.
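
To make the previous point a little more concrete, here is a purely illustrative LSL sketch of the general idea – LOD selection driven by the apparent on-screen size of an object’s bounding sphere, scaled by a LOD factor. The function and thresholds are invented for illustration and are not the viewer’s actual calculations (which, as noted above, might also be scaled by screen resolution).

// Illustrative only – not the viewer's actual math. LOD selection driven by
// the apparent angular size of an object's bounding sphere, scaled by a LOD factor.
integer pickLOD(float radius, float distance, float lod_factor)
{
    // Approximate angular size of the bounding sphere, in degrees.
    float degrees = 2.0 * llAtan2(radius, distance) * RAD_TO_DEG;
    float apparent = degrees * lod_factor;
    // Invented thresholds: bucket into LOD levels 0 (high) to 3 (lowest).
    if (apparent > 20.0) return 0;
    if (apparent > 10.0) return 1;
    if (apparent > 4.0)  return 2;
    return 3;
}

default
{
    state_entry()
    {
        llOwnerSay("LOD for a 1 m radius object at 15 m, factor 1.25: "
            + (string)pickLOD(1.0, 15.0, 1.25));
    }
}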

Other Points Raised in the Discussions

  • [TPVD Video 51:41-53:26] – Proper LODs appear to be penalised with higher LI values. This is likely to be down to how LI is calculated across a region, as explained by Runitai, and the math involved is unlikely to be changed.
  • [TPVD Video: 55:21-56:51] – Issues of render cost vs. download costs (getting all the asset data to the viewer for rendering) and what is seen as an imbalance between the two when rendering multiple copies of the same object. However, for the reasons given in the video, this is also unlikely to change.
  • [TPVD Video: 57:25-58:38] – RenderDynamicLOD is a debug setting (again, unsupported), that, when set to FALSE, forces the viewer to select a LOD model for an in-world object, based on its size, and always renders that LOD model, irrespective of camera distance.
    • As such, it cannot be gamed to avoid LODs per se.
    • It can, in some circumstances, result in an improvement (perhaps only slight) in FPS. As such, it is possible this setting might be presented as an option in the Advanced Graphics Preferences at some point (thus making it a supported feature).
  • [TPVD Video: 58:59-59:46] – A suggestion was made in chat that LL provide some form of “mesh inspection” service to ensure mesh items are decently optimised / modelled.
    • This was seen as antithetical to SL being a platform for content creation, as it would bottleneck the creative process and potentially deter creators.
    • It would also raise the question of how to review and “accept / refuse” all existing content within SL.
    • Instead, the preferable route is seen as trying to provide a means for creators to use the platform whilst ensuring they are encouraged to produce good looking, performant, content.
  • [TPVD Video 59:49-60:43] – However, it was observed that at the end of the day, if content creators are unable / unwilling to adhere to some building principles which allow the world to scale well by providing properly optimised LODs, there is always the option of replacing all creator-generated LODs with auto-generated LODs.
    • This is something which may (please note the emphasis!) be done in the case of avatar clothing and accessories.
    • It  is also seen as something which might help enable SL to run graphically on mobile devices.

CCUG In Brief

  • There was some confusion over LL providing “instanced” regions,  with some at the meeting being convinced it was a product offering indicated as “coming” or “premium”.
    • Currently, there are no clear plans for this to happen – the nearest to “instancing” the Lab offers is the cloning of event regions.
    • Instancing and on-demand products have been discussed at the Lab, but as pointed out in the meeting, providing them is not a certainty at present, and there are questions about what might happen WRT AWS fees, etc., should LL start to offer such a product (they may not actually go down as a result of unpredictability of use).
  • Alpha masks for the additional AUX wearable channels – a feature request has been received and accepted for these to be implemented, but there is no time frame on possible delivery, due to the need for both viewer and simulator updates as part of the implementation.
  • The question was asked of those attending the meeting as to which they would prefer to see: improvements to the in-world building tools or improving inter-operability with 3D tools.
    • This was something of a loaded question, inasmuch as those attending the CCUG are, for the most part, commercial content creators – people focused on generating income from their work. As such – and as demonstrated by the responses to the question (which included a call of in-world builders “leeching” off of others – hardly a fair categorisation) – inter-operability proved to be the more popular.
    • It was, however, acknowledged by Lab staff at the meeting that there are other creators in Second Life who are not necessarily driven by commercial aims but who can still contribute to the wider community in multiple ways and who still utilise the in-world tools, and as such, their feedback should also be sought.

TPVD In Brief

  • [Video: 8:51-9:51] Multi-Factor Authentication: there is an upcoming update which will see MFA enforced viewer-side. When implemented, it will mean users who have opted-in to MFA will only be able to log-in to SL on viewers with MFA support; they will no longer be able to switch between viewers with / without MFA support.
  • [Video: 9:59-11:00] Inventory Updates: discussed in previous meetings, it has been confirmed that as part of this work the AIS2 API will be deprecated and will “go away at some point”, and the viewer fully transitioned to AIS3 only.
    • This means that any new inventory fields added as a part of any forthcoming inventory project will only be accessible via AIS3.
  • [Video: 12:11-17:05] Legacy Profiles:
    • It has been noted that the profile URL field is now completely missing from the legacy Profiles viewer code from the Lab. A Jira for this has been requested.
    • Viewers with the legacy Profile code now also incorrectly report an avatar’s rezday (listing it one day early). This is a known issue and will be addressed, but requires a back-end update.
  • [Video 32:51-39:14] A discussion on viewer code signing (e.g. for recognition of executables being from a trusted source) – please refer to the video.

Next Meetings

  • CCUG: Thursday, October 6th, 2022.
  • TPVD: Friday, October 28th, 2022.

2022 Puppetry Project weeks #36 and #38 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recordings of the September 8th and September 22nd Puppetry Project meetings held at the Castelet Puppetry Theatre on Aditi. These meetings are:

  • Generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).
  • A mixed Voice / text chat format – attendees are not obligated to use voice when asking questions, but will need to listen to voice to hear the entire meeting.

Notes in these summaries are not intended to be a full transcript of every meeting.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Further Information

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Summary of September 8th Meeting

Note: timing issues on my part meant I was unable to attend the first third of this meeting.

  • It is acknowledged that the current Puppetry viewer (viewer branch DRTVWR-558) is somewhat crashy and subject to some looping issues.
  • One aspect of Puppetry that should be highlighted is the ability for it to work alongside / in concert with existing SL animations – so you can be running a dance animation and still wave to a friend using puppeteering without the two animations clashing.
  • It is acknowledged that to ensure some reasonable smoothness of movement and to prevent things like movement conflicts between joints, there will need to be a more formalised animation constraints system. The current plan is to make this configurable via XML.
  • It is also acknowledged that tracking in general needs to be tightened within the plug-in code.
  • Puppetry does not currently interact with the Havok physics system (puppetry is largely viewer-side; physics – with the exception of some special use sub-libraries – is largely simulator-side).
  • The protocols which are used server-side to support Puppetry are not set in stone at this point; cases which require additional messaging, etc. can be discussed with the Puppetry team members from the simulator / server side of LL (e.g. Rider and Simon Linden).
  • Direct avatar interactions (e.g. shaking / holding hands, swinging a tennis racket to strike a ball, etc.): the IK system could help enable this, but it would also require a lot more work on the avatar / world mapping system to be fully possible, and this work has yet to be tackled (if it is to be tackled as a part of this initial Puppetry work).
  • The project is, at this point, fairly open as to where it might go: these initial project meetings are geared towards developers who may be interested in contributing and pushing elements of the project forward (e.g. support for full body tracking, etc.). Obviously, at some point, constraints will be placed on what is to be initially delivered.

Plugins (Pros and Cons)

  • Requests were made for the Puppetry system to support OpenXR (as well as LEAP). It was indicated that OpenXR would be considered as a default if a suitable plug-in were to be developed and contributed to Linden Lab for proper vetting and formal inclusion in the viewer.
  • The fact that the Puppetry project is using plug-ins raised concerns over system security. Plug-ins are executable, and so if accepted to run, a malicious plug-in could do considerable harm to a person’s system.
    • LL is aware of this, and is actively trying to minimise risk as far as possible.
    • However, safety also lies with users – do not download viewers from unofficial sites / sites that cannot be trusted; do not accept and run plug-ins that are passed around through forums, etc.
  • The benefits of using plug-ins were summarised as:
    • Speed of internal development / testing: there is no need to run a complete viewer build process simply because a couple of lines of code have been changed in testing; only the plug-in needs to be updated.
    • Extensibility: plug-ins allow for more flexible support of additional creation tools or to add support for additional data formats (e.g. as with OpenXR) / hardware / programming languages (e.g. Python, C++, etc.).
    • Performance: using plug-ins allows the required additional processing, such as webcam capture, processing and translation, to be handed off to processing threads separate from the viewer, thus preventing the latter from losing performance by having to do the processing itself.
    • User assurance: removing things like the webcam controls to a plug-in that is not run by default as a de facto part of the viewer’s processing will (hopefully) remove fears about webcams somehow being used to “spy” on users.

Summary of September 22nd Meeting

  • It is hoped an updated version of the Puppetry Project Viewer will be available via the Alternate Viewers page in week #39 (commencing Monday, September 26th). This includes fixes and updates to the motion logic that should make avatar motion more predictable.
  • In terms of device support for puppeteering, any device that can be recognised as a joystick should be supportable within the Puppetry viewer (utilising the existing Joystick support options through Preferences) – although some refinement to the controls may be required by LL.
  • LSL support for puppeteering: nothing has been defined at present, but there are some ideas as to what might be needed / nice to have. It has been suggested LSL support is a subject for discussion at the next meeting.
  • Simon Linden has pushed a couple of capabilities:
    • A simple poser contained in a side branch of the LEAP repository. This reads a basic JSON file with bone positions (rotations) for all 133 bones in the avatar skeleton and sends it as LEAP data to the viewer for animating the avatar. This file can be live-edited, and is designed to help those working with puppeteering to experiment with it in an easy format – it will not be an end feature for the project.
    • A further branch has been added to the Puppetry viewer repository, called DRTVWR-558 Data Packing. This converts the data going from the viewer to the server and onwards to a more efficient format, allowing the full animation data set to be contained in a single packet for transmission.
      • However, this format is incompatible with the existing data format used within viewers built via DRTVWR-558; viewers built using the newer code will not be able to show puppeteering from those using the older format, and vice-versa.
      • Those involved in experimenting with Puppetry should therefore switch to the viewer using the updated data format, once this is made available through the Alternate Viewer page, as it will be replacing the current data format going forward.
  • Leviathan Linden has suggested that if LL can transmit all bone data in compressed format, then they may not need to send IK targets and have the viewer manage the IK for all avatars in a scene, but rather have the viewer run the IK for a user’s avatar and then stream the avatar’s entire state, reducing the load on the viewer.

Pelvis Movement / Full Body Tracking / OpenXR Support

  • There was initial discussion about supporting local joint offsets and particularly off-setting the avatar pelvis to allow for subtle movements without actually moving the avatar.
    • This is somewhat similar to scripted animations, such as stands in an AO system – the avatar appears to step forward / back / walk in a circle, but it is not physically moving as far as the simulator is concerned – the motions are the result of the avatar pelvis being offset from its actual position as seen by the simulator, and the animations running based on that offset.
    • There was some initial confusion between this and physically moving the avatar; as such, it was suggested this be referred to as “pelvis movement”, rather than “offsetting joints / bones”.
  • Part of the reason for this discussion is because several non-Linden developers have been experimenting with partial and full-body tracking via OpenXR, and have found that not being able to move the pelvis within Puppetry can lead to issues of floating, etc., when an avatar kneels or crouches (as seen within existing SL animations) – the result of the legs being pulled up towards the pelvis, rather than the pelvis being moved towards the ground.
  • In addition, this work has noted:
    • If Second Life were to return the “full” appearance data for an avatar (i.e. after all mesh transforms, slider data, baked appearance information, etc., have been applied), rather than the “raw” skeletal appearance, better calculations could be made around the pelvis height from the floor.
    • The approach works equally well with partial body tracking via a Rift S headset, and full body tracking using Valve headsets and Kinect devices.
    • However, it currently uses Blender as a conduit for translating movement within an OpenXR rig to the Second Life puppeteering rig, and would benefit enormously from a dedicated OpenXR plug-in; the developers are willing to provide data gathered from the work they’ve thus far completed to help facilitate this.
    • Separately to this, OPEN-363 “[Puppetry] [LEAP]: Add native OpenXR plugin” has been raised, but is (at the time of writing) awaiting review.
  • The above formed the nucleus of discussion for much of the meeting, with the ability to move the avatar pelvis now being seen as more of a priority requirement; Leviathan Linden indicated they will try to look specifically at this between now and the next meeting.

Date of Next Meeting

2022 week #37: CCUG meeting summary – updated

Endless: Birdlings Flat, July 2022 – blog post

Update, September 19th, 2022: in response to the discussion concerning the upcoming new mesh starter avatars and encouraging creators to help develop an ecosystem of clothing and accessories for them (see In Brief, below), Alexa Linden forwarded the following comment to me:

Linden Lab plans to release devkits for our new Default Avatars after we’ve gone live with them. We’re waiting to make sure everything goes smoothly and no tweaks are needed. Once we’re happy with the results we’ll work on the documentation and devkits to make them available for ALL creators interested in supporting these avatars.

Alexa Linden, September 19th, 2022


The following notes were taken from  my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, September 15th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

  • Release viewer: version 6.6.4.575022 – hotfix for Crash at ~LLModalDialog() – promoted September 15 – NEW.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.4.574750, issued September 6.
    • Maintenance 3 RC viewer, version 6.6.4.574727, September 1.
  • Project viewers:
    • Puppetry project viewer, version 6.6.3.574545,  issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
    • Performance Floater project viewer, version 6.5.4.571296, May 10.

Materials and PBR Work

Please also see previous CCUG meeting summaries for further background on this project.

  • Tangent spaces:
    • The glTF 2.0 standards are very specific about how tangent spaces are expected to be generated and how they behave. To this end, Second Life will use mikkTSpace tangents, at least for PBR support.
    • There are internal discussions going on at the Lab as to whether this should be retroactively applied to existing meshes. If this is the case, meshes will have to be re-uploaded in order to have the correct tangents, due to the way bounding boxes are currently handled (the DAE upload changes the coordinate frame of the original mesh, resulting in tangents being generated which do not match the normal map).
    • As it is, use of mikkTSpace tangents does increase Land Impact (LI), as the tangents are being stored inside the mesh alongside the normal information.  However, LL are currently running experiments to see if the tangents can be generated and stored without altering LI.
    • LL is requesting that discussions about possible LI increases be held off until it is known whether or not these experiments work and what the likely scenario is.
    • Alongside mikkTSpace, this project will use per-pixel binormal generation.
  • There is a mismatch between the colour space handling swatch in the viewer’s materials editor and the glTF specification; the latter calls for these colours to be in linear space, while in the viewer they are in sRGB space. This means that the values displayed by the swatch do not match the values which would be typed into a tool like Substance Painter. It’s not currently clear how this mismatch will be handled.
  • Viewer:
    • Transparency and double-sided materials are now supported in the PBR development viewer.
    • To maintain commonality with the glTF specification, albedo in the PBR viewer is now called base colour.
    • Access to the PBR development server is via the Content Creation Discord channel, and requests to join that channel should be made in person at CCUG meetings. I am no longer able (at LL’s request) to furnish such information.
  • Again, this project is only focused on the materials elements of the glTF 2.0 specification; it does not include glTF mesh model import – although there is potential traction for this to be a follow-on project, with the goal of eventually replacing COLLADA support for mesh uploads.
  • Ideally the Graphics team would like to structure work such that elements of the glTF 2.0 specification can be selected and then implemented within Second Life, thus presenting an “extensible” means of supporting the standard.

Puppetry Update

Please also refer to:

Notes:

  • Jiras from the Lab and users will now be available in the public Jira for ease of reference.
  • Meetings are held on Aditi (the beta grid) at the Castelet region Puppetry Theatre on alternate Thursdays at 13:00 SLT. The next meeting will be on Thursday, September 22nd, 2022.
  • Some users have been looking at using OpenXR  for Puppetry and enjoying some success.

In Brief

  • The New Linden Lab mesh New Starter avatar (see here for more): the project has reached the stage of the Lab (through the LDPW) generating a fixed set of content to support the avatar, which will be made available through the Library.
    • There is – as Patch Linden has indicated – the potential for content creators to be encouraged to support this new body and potentially provide a support ecosystem of clothing and accessories – although this will require a dev kit.
    • It’s been suggested that some existing clothing / accessory creators be invited to help test the new avatar.
    • Mention of the new mesh avatar triggered an extended discussion on the current state of (and, for creators, pitfalls of) the multi-body / head marketplace, providing content, content efficiency, etc. As a more philosophical discussion (at this point in time) than a clear set of opportunities for improvements, the discussion largely falls outside of this summary.

Next Meeting

  • Thursday, September 29th, 2022.

2022 week #35: CCUG + TPVD meetings summary

WillowWood, July 2022 – blog post

The following notes were taken from:

  • My audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, September 1st 2022 at 13:00 SLT.
  • My notes and the video from the Third-Party Viewer Developer (TPVD) meeting held on Friday, September 2nd, 2022 at 13:00 SLT. The video is provided by Pantera – my thanks to her for recording it, and it can be found at the end of this article. Time stamps to the video are included where relevant in the following notes.

Both meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

[TPVD video: 1:00-2:20]

  • Release viewer: version 6.6.3.574158 – formerly the Profiles RC viewer, dated August 18, promoted August 30.
  • Release channel cohorts:
    • Izarra Maintenance RC, version 6.6.4.574724, September 1.
    • Maintenance 3 RC viewer, version 6.6.4.574727, September 1.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.3.573877 issued August 15.
  • Project viewers:
    • Puppetry project viewer, version 6.6.3.574545,  issued on August 30.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
    • Performance Floater project viewer, version 6.5.4.571296, May 10.

General Viewer Notes

  • LL are likely going to be updating the Windows viewer build tools to use Visual Studio 2022.
  • This will likely be ahead of the move to use Github as the main viewer repositories, as outlined at the previous TPV Developer meeting.

Materials and PBR Work

Please also see previous CCUG meeting summaries for further background on this project.

  • Overall, the work on the viewer side of things – rendering in support of the glTF 2.0 standards (and consistency of results when going from a tool like Substance Painter through the uploader to displaying in SL) – is now “near complete”.
  • It is hoped that it will “not be long” now before a project viewer is more generally available, although there is still additional back-end work to be completed, together with adding support for things like transparency, ensuring PBR rendering works under Linden Water, and similar.
  • Again, the focus of this work for the first pass is “core” glTF 2.0 support.
    • Ratified (under ISO) extensions may be up for inclusion in future enhancements to the capability.
    • Non-ratified extensions will not be up for inclusion in future updates.
  • In order to be compliant with glTF, tangents are going to have to be generated in mikkTSpace where normal maps are applied. This means that existing normal maps within Second Life / normal maps generated without using mikkTSpace may not look correct when rendered via the PBR pipe.
  • [TPVD video: 28:41-30:55]:
    • Runitai Linden noted that this project has been a valuable experiment in real-time collaboration between LL devs and members of the community through the Discord server.
    • He expressed thanks to the TPV developers and the creators who have assisted the graphics team in both the development of the PBR rendering path and the reflection probe work, in terms of code contributions and of helping to identify and address edge-case issues.
    • He further noted it is hoped more projects might be run this way.

Textures: Handling

[TPVD video: 5:19-10:22]

  • Also pulled into this work are improvements to texture handling (previously DRTVWR-559); this involves better core utilisation and VRAM usage.
  • For Windows, this work includes an API which:
    • More accurately tracks texture memory use in the viewer and reports it back to the client operating system.
    • Should ensure all available video memory (i.e. that not being used by other applications) on Windows systems is used by the viewer prior to any texture paging occurring.
    • Works with both Intel and AMD hardware (the latter is important because the OpenGL extensions commonly used by TPVs to achieve more efficient use of VRAM apparently no longer work correctly on AMD hardware).
  • For Mac OSX, the new method is to use internal accounting to attempt to track how much video memory is free, and then estimate a value of available memory for textures from that.
    • This is because the operating system will not simply report the amount of free video memory (only how much is installed), ruling out the use of a more scientific approach.
  • Once available in production viewers, these changes should mean those running systems with more recent video cards and decent amounts of free video memory should see much improved texture fetching and loading, and a reduction in textures being paged out to cache (the blurring / sharpening / blurring of textures seen when the viewer thinks it is using all available / allowed video memory).
  • A further change is to specify the maximum amount of system memory the viewer can use for textures (16 GB, if available on 64-bit systems; 4GB on 32-bit systems).

Puppetry Update

Please also refer to:

Notes:

  • The discussion on puppetry mentioned in the above articles will be the first such meeting; if there is demand for it, there will be similar meetings on alternate Thursdays from September 8th onwards, to be held in the theatre on Aditi’s Castelet region.
  • These meetings will (initially) be very development focused rather than creator / user focused, given the overall status of the project.
  • It is advisable that attendees use the Puppetry project viewer when attending these meetings (available from the Alternate Viewers page), so that they might see any demonstration which may take place during meetings.
  • [TPVD video 12:50-14:53]:
    • It’s important to note that what has been made available is a very early stage “alpha” release.
    • The choice of  the  LLSD Event API Plug-in (LEAP) system means that it should be fairly easy to write third-party code to support capture devices (e.g. from Leap Motion through to (potentially) full body trackers – something Vru Linden is already tinkering with).
    • The Thursday meetings are being established to discuss precisely these kinds of opportunities and the potential for things like multi camera support, etc.
  • [TPV video: 31:09-33:24] Given the success of the real-time collaboration with PBR / Reflection Probes, it is likely the Puppetry project will also follow a similar approach and utilise a Discord channel for discussion and contributions, etc., over and above the fortnightly meetings on Aditi.

TPVD In Brief

  • [TPVD video: 2:26-4:15] Inventory Updates:
    • This is something the Lab is considering, and has been looking for feedback from users on possible approaches – see also the previous CCUG / TPVD meetings summary.
    • If / when this work goes ahead, it will also involve some general code and other technical tidying-up,  including:
      • Reducing the number of different AIS APIs currently in use.
      • Deprecating (and eventually removing) UDP messaging paths for inventory, together with outdated inventory caps (particularly where the latter are superseded).


Next Meetings

  • CCUG: Thursday, September 15th, 2022.
  • TPVD: Friday, September 30th, 2022.

2022 CCUG meeting week #33 summary

Missing Melody, July 2022 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, August 18th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

  • Release viewer: version 6.6.2.573358 – formerly the Maintenance 2 RC viewer, dated August 1, promoted August 4 – no change.
  • Release channel cohorts:
    • Profiles RC viewer updated to version 6.6.3.574158, on August 18.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.3.573877 issued August 15.
    • Izarra Maintenance RC, version 6.6.3.573920, August 15.
    • Maintenance (N)omayo RC viewer, version 6.6.3.573882, August 5.
  • Project viewers:
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
    • Performance Floater project viewer, version 6.5.4.571296, May 10.
    • Mesh Optimizer project viewer, version 6.5.2.566858, dated January 5, issued after January 10.
    • Copy / Paste project viewer, version 6.3.5.533365, dated December 9, 2019.

Materials and PBR Work

Please also see previous CCUG meeting summaries for further background on this project.

  • The back-end updates are now “all there”, and the focus is now on “tightening up” the graphics, with the image-based side of things now looking “pretty good”.
  • Internal testing currently involves Second Life, the new PBR / Materials viewer, the Khronos glTF 2.0 standard and Adobe Substance tools (Painter, Stager) to ensure that results displayed within Second Life are consistent with expectations when working within Substance Painter and with glTF.
  • Some inconsistencies with using directional lights created in Blender have been noted and are subject to further testing.
  • Those with access to the Content Creation Discord server will be able to obtain an updated viewer soon. This will lack transparency and LSL support; it will also have some “rough edges” around the UI and inventory support.
    • This is a test viewer only, and not for general consumption.
    • A more public Project viewer will be made available through the Alternate Viewers channels when the work is more stable and suited for wider consumption.
  • Once the initial work on PBR  Materials is released, the graphics team will likely work on some quality of life improvements (e.g. bug fixes) for the graphics system, rather than launching into a new project immediately.

Possible New Inventory Fields

Whilst not solely related to content creation, the Lab has been discussing the potential of adding new inventory fields. Ideas being considered or also put forward at the meeting comprise:

  • Providing a thumbnail image of the inventory item, rather than having to rely solely on descriptive text.
  • A means of “tagging” inventory items (e.g. to define what they are in terms of being an attachment or not, and whether the attachment is / is not rigged, etc.), rather than just simply leaving them as a list of orange boxes.
  • Providing a formal means of “archiving” items that are not regularly used but which are not yet ready to be deleted (other than boxing things up and creating more orange boxes….).
  • Splitting head shapes and body shapes to make it easier for people who use different heads with the same body (or vice-versa).

In Brief

  • Requests are again surfacing for texture animation support in particles (see feature request BUG-5307). Those interested in seeing such capabilities should consider adding feedback to this Jira.
    • This led to questions about a complete overhaul of the SL particle system, which is not something currently under consideration as a possible future project at the Lab – which is not to say incremental updates are ruled out. Again, specific requests for incremental updates should be made via Jira.
    • For texture animations on particles, for example, the Lab would likely consider adopting the existing texture animation system for use with particles, rather than rebuilding the particle system to handle texture and other animations (see the sketch after this list).
  • There was a brief discussion of a viewer-side Animation Override (AO) system (e.g. similar in nature to the Firestorm approach). This has been raised in the past at TPV Developer meetings, where it appears to get more robust discussion.
  • The question was raised of having support for user-defined custom shaders in Second Life. The short answer is “no”, as there are too many variables: a custom shader for a single scene may work, but what happens when 30 people utilise their own shaders / shaders made by others and all congregate at a single club? The rendering will not scale. Also, with people all creating their own shaders, how can a consistent result be ensured? And what about the risk of malicious shaders being used with content?
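
For reference, the “existing texture animation system” mentioned in the particles discussion above is the face-level animation driven by llSetTextureAnim, sketched below; a particle-side equivalent does not currently exist and remains speculative.

// The existing face texture animation capability: plays a 4x4 sprite sheet
// across all faces of the prim at 10 frames per second, looping continuously.
// A particle-side equivalent does not currently exist.
default
{
    state_entry()
    {
        llSetTextureAnim(ANIM_ON | LOOP, ALL_SIDES, 4, 4, 0.0, 0.0, 10.0);
    }
}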

Next Meeting

  • Thursday, September 1st, 2022.