2023 week #20: SL CCUG meeting summary: mirrors and PBR terrain

Small Town Green, March 2023 – blog post †

The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, May 18th, 2023 at 13:00 SLT. 

These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are usually chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar.

Notes:

  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • There is a general introduction / overview / guide to authoring PBR Materials available via the Second Life Wiki.
  • Substance Painter is also used as a guiding principle for how PBR materials should look in Second Life.
  • Up to four texture maps are supported for PBR Materials: the base colour (which includes the alpha); normal; metallic / roughness; and emissive, each with independent scaling.
  • Given the additional texture load, work has been put into improving texture handling within the PBR viewer.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry, they only have the materials array, and there is only one material in that array.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • To provide support for reflection probes and cubemap reflections.
  • The viewer is available via the Alternate Viewers page.
  • Please also see previous CCUG meeting summaries for further background on this project.
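A minimal sketch of what such a materials-only glTF asset might look like, based purely on the core glTF 2.0 specification. The exact structure LL uses server-side is not public, so treat this as illustrative; the texture indices are placeholders standing in for the four supported maps:

```python
import json

# Hypothetical materials-only glTF asset: no nodes / geometry, just a
# "materials" array containing a single material (per the meeting notes).
# Texture index values are placeholders for the four supported maps.
material_asset = {
    "asset": {"version": "2.0"},
    "materials": [
        {
            "name": "example_pbr_material",
            "pbrMetallicRoughness": {
                "baseColorTexture": {"index": 0},  # base colour (includes alpha)
                "metallicRoughnessTexture": {"index": 2},
                "metallicFactor": 1.0,
                "roughnessFactor": 1.0,
            },
            "normalTexture": {"index": 1},
            "emissiveTexture": {"index": 3},
            "emissiveFactor": [0.0, 0.0, 0.0],
        }
    ],
}

print(json.dumps(material_asset, indent=2))
```

Note there is exactly one entry in the materials array and no nodes, meshes, or scene data, matching the near-term scope described above.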

Status

  • The RC glTF / PBR viewer was updated to version 7.0.0.580085 on Tuesday, May 16th.
  • Work is continuing to try to fix the issues with pre-PBR skies looking “broken” in the PBR viewer. This is seen as the last major fix required to the viewer, the rest of the required work being seen as “maintenance” fixes rather than major breakages.
    • The probable solution for the sky issue is to use the reflection probe ambience as a hint as to how the sky should be rendered. If the ambience setting is zero, various environment sliders such as Brightest (currently referred to as Gamma in the viewer) should respond in much the same way as they do in pre-PBR viewers. Otherwise the sky will be rendered as PBR / HDR.
    • This is acknowledged as being something of a kludge, but is seen as the easiest way to maintain rendering for non-glTF / PBR scenes.
  • Water reflections: the glTF / PBR viewer includes updates to the rendering of Linden Water reflections.
    • These are seen as being “not as good” as water reflections seen in non-PBR viewers, but they run with a lot less in the way of overhead on the viewer.
    • The reduction in reflection quality is the result of no longer doing a full rendering pass on Linden Water, and the decision to do this was made to offset the cost of reflection probes and planar mirrors (see below).
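The ambience-as-hint heuristic for the sky can be sketched as a simple branch (the function name and return values here are hypothetical; the actual implementation lives in the viewer's C++ renderer):

```python
def choose_sky_mode(probe_ambiance: float) -> str:
    """Illustrative sketch of the heuristic described in the meeting:
    a reflection probe ambience of zero hints that the sky should be
    rendered like a legacy (pre-PBR) sky, so sliders such as
    Brightest / Gamma respond as they do in pre-PBR viewers;
    any other value means the sky renders as PBR / HDR."""
    if probe_ambiance == 0.0:
        return "legacy"
    return "hdr"

print(choose_sky_mode(0.0))   # → legacy
print(choose_sky_mode(0.25))  # → hdr
```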

Screen Space Reflections and Mirrors

  • Screen Space Reflections (SSR) – (non-SL overview):
    • Now supports both glossy and “stupid” roughness values, with “good” performance, and supports adaptive sampling rates.
    • SSR will apply to “everything” in a scene.
    • The same approach taken with SSR will also be used for planar mirrors.
  • Planar mirrors:
    • Geenz Linden is working on both occlusion culling (at what distance from an avatar / camera should mirrors render) and general mirror constraints (how many mirrors should be active for an avatar at any given time). There are currently no plans to limit mirrors on the basis of size.
    • The latter will most likely initially be set to one, and if there are multiple mirrors within range of an avatar, only the nearest will have reflections actively rendered, the rest will simply render as glossy surfaces.
    • The distance culling is unlikely to exceed 12 m, and there are cases (e.g. some Linden Homes regions) where it would be best set a lot lower, to avoid situations where someone has a mirror in their home which impacts their neighbours’ rendering.
    • The rate of update for mirrors might be throttled; a decision has not been taken on this as yet.
    • The limitations are unlikely to have debug overrides (although this might change for some as testing progresses) in order to prevent people from inadvertently crippling their frame rates. In this latter regard, Geenz notes: “The performance sucks, the use cases are limited, and you should plan accordingly.”
    • Rendering takes the form of: reculling the scene from the perspective of the mirror, re-rendering that scene into the G-Buffer, re-shading that scene in the deferred pass, and finally re-rendering semi-transparent objects – all of which is intensive.
  • The intent: LL want to get to a point where SSR and reflection probes should be sufficient for most reflection use-cases. Mirrors should only ever be used in exceptional cases where SSR and reflection probes cannot achieve the desired effect (i.e. the shiny metal coffee pot on the table and the “glass” on the picture frame on the wall should both utilise SSR / reflection probes for the reflections on their surfaces and not be set as a mirror).
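The four-stage per-mirror rendering described above can be sketched in pseudocode (all names here are hypothetical; the actual viewer pipeline is C++, and this only illustrates why each active mirror is so expensive):

```python
def render_mirror_frame(scene, mirror):
    """Illustrative sketch of the per-mirror work described in the
    meeting notes: each step is an extra full pass over the scene."""
    passes = []
    # 1. Re-cull the scene from the mirror's point of view.
    visible = [obj for obj in scene if mirror_can_see(mirror, obj)]
    passes.append("cull")
    # 2. Re-render the culled scene into the G-Buffer.
    passes.append("gbuffer")
    # 3. Re-shade that scene in the deferred pass.
    passes.append("deferred_shade")
    # 4. Finally, re-render semi-transparent objects.
    passes.append("alpha")
    return passes, visible

def mirror_can_see(mirror, obj):
    # Placeholder visibility test; the real check would use the
    # mirror's frustum and occlusion data.
    return True

print(render_mirror_frame(["chair", "table"], "wall_mirror")[0])
# → ['cull', 'gbuffer', 'deferred_shade', 'alpha']
```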

PBR Terrain

  • As per my TPVD summary for Friday, May 12th, there is a project underway to provide PBR support for terrain.
  • This is seen as a means of leveraging PBR Materials to offer some quality improvements to terrain ahead of any longer-term terrain project which might yet be considered / actioned.
  • The idea is to enable the use of Materials asset IDs in place of the usual texture IDs and apply them to the ground.
  • An initial alpha build of a viewer supporting this work is available through the content creation Discord server. However, note that it is only alpha and unsupported outside of the project at this time.
    • Please also note that at the request of Linden Lab, I am unable to publish details on how creators can obtain access to the content creation Discord server. Those who are interested should either a) attend a Content Creation User Group meeting and request access there; or b) contact Vir Linden to request details on how to request access.
  • As this is purely a viewer-side change, it does not require a server-side update, but for testing, the viewer should preferably be used on Aditi (the beta grid), where there are materials available within the PBR regions expressly for testing the capability. There is also a debug setting in the viewer which allows it to be used “anywhere”, but this is described as currently “hacky”.
  • This work also sees an increase in the overall texel density for terrain, raising it to 1024×1024, and the texture repeat has been doubled. The latter may only be a temporary move, with discussions at LL revolving around various ideas such as hex tiling.
An example of PBR materials applied to Second Life Terrain. Via Niri Nebula, original by Animats
  • Important notes with this work:
    • It is not terrain painting. It is the application of PBR materials – terrain painting is described as “something that’s on the radar” at LL.
    • The work does not include support for displacement maps.
    • The work is currently only viewer-side, with no corresponding server-side support, the idea here being to prototype what might be achieved and testing approaches / results.
    • It is viewed as a “mini project”, which can potentially be built upon to include elements such as simulator support (including EM tools, etc.).
    • Given the above point, there are also discussions on how best to handle the default grass texture for land (which is just a basic diffuse map) if the PBR terrain work is to go mainstream. Currently, updating this is not part of the mini-project.

glTF and the Future

  • glTF “phase one”: PBR materials (current project).
  • glTF “phase two”: mesh asset uploads support and scenes (up to, but not including animations).
  • glTF “phase three”: animations, morph targets, etc.

Next Meeting

  • Thursday, June 1st, 2023.

† The header images included in these summaries are not intended to represent anything discussed at the meetings; they are simply here to avoid a repeated image of a gathering of people every week. They are taken from my list of region visits, with a link to the post for those interested.

2023 SL Puppetry project week #19 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

Note: this has been updated to include comments made at the TPV Developer meeting on Friday, May 12th.

The following notes have been taken from chat logs and audio recording of the Thursday, May 11th, 2023 Puppetry Project meetings held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General Project Description as Originally Conceived

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth-of-field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Additional Work Not Originally In-Scope

  • Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
  • Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
  • Enhanced LSL integration for animation control.
  • Adoption of better animation standards – possibly glTF.
  • Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer Progress

  • An updated version of the viewer – version 6.6.12.579958 – was released on Thursday, May 11th.
    • This update includes access to Rider Linden’s experimental attachment point tracking & forwarding to the server feature.
    • It also includes various incremental improvements to handling puppetry, such as support for parsing binary LEAP data from the LEAP script.
  • Avatar attachment point tracking (per the TPVD meeting discussion on May 12th):
    • This allows the tracking of joints (using attachment points) using a script.
    • Using visible attachment points (i.e. those on the avatar, NOT any of the screen-based HUD attachment points) cuts down on the amount of data that has to be handled at both ends.
    • The speed at which the attachment point movement is read back is such that it could not be exploited to create a copy of an animation with any real fidelity.
    • This is a deliberate move to ensure that animation creators are not left feeling uncomfortable about LSL animation tracking.
    • There are combined throttle / sleep time elements to tracking attachment points: the throttle limits the number of attachment points that can be tracked over a certain period of time; the script sleep time is designed to allow an animation to move those attachment points forward sufficiently before a further tracking record is made. Thus, it is next to impossible to track and record a coherent animation frame.
  • It was noted that previously, joint constraints had been hard coded in C++, but their configuration has been moved into a human-readable LLSD file which can be modified without rebuilding the viewer.
  • Two areas of focus going forward are:
    • Improving the Inverse Kinematic (IK) system within the viewer – something Leviathan Linden is already working on. This will include overall improvements to IK constraints as well as to positioning, with the existing source-code constraints replaced by a further config file – “constraints” here being in terms of joint rotation / movement.
    • Providing .FBX animation file import and Mixamo skeleton re-targeting.
  • The IK work is still being thrashed out (and subject to much more discussion at meetings), but is seen as a priority over other elements of work, such as the animations streaming idea Leviathan Linden had been working on. The hope is that by improving IK, it will play into streaming and “live” animations a lot more robustly and smoothly. It is also seen as a foundational piece of work for further opening up puppetry and animation work.
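The throttle-plus-sleep scheme for attachment point tracking can be illustrated with a small sketch. Everything here is an assumption (the actual limits, window sizes and implementation are not public); it only shows why rapid, coherent sampling of an animation is impractical:

```python
class AttachmentPointThrottle:
    """Hypothetical sketch of the anti-ripping scheme described above:
    cap how many attachment-point reads are allowed within a time
    window, with the script additionally forced to sleep after each
    read so the animation has moved on before the next sample."""

    def __init__(self, max_reads: int = 4, window_s: float = 1.0,
                 sleep_s: float = 0.2):
        self.max_reads = max_reads  # reads permitted per window
        self.window_s = window_s    # length of the rolling window
        self.sleep_s = sleep_s      # enforced script sleep per read
        self._reads = []            # timestamps of recent reads

    def try_read(self, now: float) -> bool:
        # Drop timestamps that have aged out of the rolling window.
        self._reads = [t for t in self._reads if now - t < self.window_s]
        if len(self._reads) >= self.max_reads:
            return False  # throttled: too many reads this window
        self._reads.append(now)
        return True  # caller must then sleep sleep_s before re-sampling

throttle = AttachmentPointThrottle(max_reads=2, window_s=1.0)
print([throttle.try_read(t) for t in (0.0, 0.1, 0.2, 1.5)])
# → [True, True, False, True]
```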

General Notes

  • It was noted that for animation import, LL is looking towards using / supporting Assimp (github: https://github.com/assimp/assimp), which supports some 30-40 animation formats, converting them to its own format for ease of import to multiple platforms. Notably, it supports .FBX and glTF, so it fits with the Lab’s goal of utilising glTF for materials, mesh imports, etc.
  • [TPVD meeting, May 12th] This will not alter the existing internal format for animation. It is just  to allow the import of other formats.
  • It is acknowledged that alongside this, the Lab will require a retargeting system for animations, although what form this will take is still TBD.
  • The core of the meeting was a general discussion of things that might / could be done in the future, and what technologies LL might look towards.

Date of Next Meeting

  • Thursday, May 25th, 2023, 13:00 SLT.

2023 week #18: SL CCUG meeting summary – PBR

Elvion, March 2023 – blog post †

The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, May 4th, 2023 at 13:00 SLT. 

These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are usually chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar.

Notes:

  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and other using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

Official Viewer Status

There were no viewer updates ahead of the meeting, leaving the official viewer pipelines as:

  • Release viewer: Performance Floater / Auto FPS RC viewer, version 6.6.11.579629, promoted April 25.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
  • Project viewers:
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • There is a general introduction / overview / guide to authoring PBR Materials available via the Second Life Wiki.
  • Substance Painter is also used as a guiding principle for how PBR materials should look in Second Life.
  • Up to four texture maps are supported for PBR Materials: the base colour (which includes the alpha); normal; metallic / roughness; and emissive, each with independent scaling.
  • Given the additional texture load, work has been put into improving texture handling within the PBR viewer.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry, they only have the materials array, and there is only one material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • To provide support for reflection probes and cubemap reflections.
  • The viewer is available via the Alternate Viewers page.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • The viewer is now at Release Candidate status, per the viewer update list above. HOWEVER, the server-side support for glTF / PBR is still awaiting deployment to the Preflight RC channel on the main grid, so for the time being, the viewer still only works on Aditi (the Beta grid), on the following regions: Materials1, Materials Adult, Rumpus Room and Rumpus Room 2 through 4.
  • The viewer will remain in RC for some time to allow for broader feedback to be gained, particularly once the server support has been deployed to simhosts on Preflight (and, most likely, Snack as a follow-on), making it more amenable to testing by a wider group of users / creators.
  • As always, those who do find significant issues in using the viewer in RC are asked to report them via a BUG report ASAP.
  • Runitai Linden (aka Dave P), has been working on avatar performance with PBR, hoping to up the performance a little further, as well as continuing to refine reflection probes.
  • Brad Linden continues to work on bug fixing, improving network traffic overheads, etc.
  • A new addition to the PBR viewer is a reflection probe visualisation debug tool, allowing the volume of space specific probes are influencing to be seen, so people can better understand where reflections on surfaces are coming from, etc.
  • Application priorities: if a surface has either only PBR Materials applied, or PBR overlaying the “traditional” SL materials, it will be rendered according to the glTF specification. If an object has faces with different materials types (e.g. PBR Materials on some faces – such as the sides of a prim cube – and “traditional” SL materials on the others), the viewer will render the PBR faces via the PBR renderer, and those faces with the older materials in a manner consistent with how they should appear if rendered on a non-PBR viewer.
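That per-face priority rule can be sketched as a simple selection function (the names are hypothetical; the real logic lives in the viewer's C++ render pipeline):

```python
def pick_renderer(face) -> str:
    """Illustrative per-face renderer selection per the notes above:
    any face with a PBR material (alone or overlaying legacy
    materials) goes through the glTF / PBR path; faces with only
    legacy materials render as they would in a non-PBR viewer."""
    if face.get("pbr_material") is not None:
        return "pbr"
    return "legacy"

# A prim cube with a PBR material on one face, legacy materials on another.
cube_faces = [
    {"name": "side1", "pbr_material": "metal_mat"},
    {"name": "side2", "pbr_material": None, "legacy_material": "wood"},
]
print([pick_renderer(f) for f in cube_faces])  # → ['pbr', 'legacy']
```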

PBR Resources for Testing

  • There have been some requests for content to test PBR Materials against. Content has been provided (by LL and some of the creators testing PBR Materials already) on Aditi, and some of this could potentially be ported.
  • One suggestion was to make a sandbox available for PBR testing, allowing creators to build / import their own content and test it under different EEP settings (e.g. their own / those in the Library), using the Apply Only to Myself option.
  • Custom EEP settings are one particular area of testing that EEP creators might want to look at (both in terms of the PBR viewer and also with any PBR Materials test content). This is because there have been some changes made to the environment rendering in the PBR viewer which might impact some custom EEP settings, which may require them to be adjusted / updated and / or BUG reports raised against significant issues.
  • For those wishing to gain familiarity with PBR Materials in general, there is the SL Wiki entry for it, and it has been suggested some general test content could be provided through that page.

Future glTF Work

  • Geenz Linden is actively working on real-time mirrors as a future follow-on project from the PBR Materials work, as well as working on Screen Space Reflections.

Next Meeting

  • Thursday, May 18th, 2023.

† The header images included in these summaries are not intended to represent anything discussed at the meetings; they are simply here to avoid a repeated image of a gathering of people every week. They are taken from my list of region visits, with a link to the post for those interested.

2023 week #16: SL CCUG meeting summary (abbreviated)

OCWA Experience The Ocean, February 2023 – blog post †

The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, April 20th, 2023 at 13:00 SLT. 

These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are usually chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar.

Notes:

  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and other using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

Additional note: unfortunately, my audio recording died whilst saving to disk, leaving me with just the first 10 minutes of audio from the meeting available for playback / summary. Given responses to questions in text are supplied in Voice, it is impossible to provide any reasonable summary beyond the point at which the recording save failed, so this is therefore a very foreshortened report, and not representative of the entire meeting.

Official Viewer Status

The Performance Floater / Auto FPS RC viewer updated to version 6.6.11.579629 on April 20th.

The rest of the official viewers remain as:

  • Release viewer: Maintenance R viewer, version 6.6.10.579060, dated March 28, promoted March 30th.
  • Release channel cohorts:
    • Maintenance T(ranslation) RC viewer, version 6.6.11.579154, April 6th.
    • Maintenance S, version 6.6.11.579153, March 31st.
  • Project viewers:
    • The PBR Materials / reflection probes project viewer, version 7.0.0.579401, April 10th.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

The Performance Floater / Auto FPS viewer looks set to become the next viewer to be promoted to de facto release status in the coming week.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry, they only have the materials array, and there is only one material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • Viewer:
    • The viewer is now very close to being promoted to Release Candidate status. Issues within the viewer build farm prevented it from getting fully cleared by QA, who are currently taking one more look at it.
    • It is believed that all the significant showstoppers thus far found have been dealt with (although more may show up as a result of it becoming available to a wider audience in RC).
    • The above should include the issue of objects in the camera’s view failing to render unless occlusion culling is disabled and the issue of some mesh items “exploding” in the viewer both being fixed.
    • Tone mapping has been updated so there is no longer the ability to change / turn off tone mapping. This has been done in the name of “trying to keep things consistent” with older content that has tone mapping built-in. This does lead to a few edge cases (such as not being able to get totally pitch black environments), but also fixes some issues around general exposure (e.g. preventing full bright objects changing brightness depending on camera distance).
    • It is likely that the work on exposures will eventually be fed into the snapshot tool, so photographers can adjust the exposure settings for their pictures.
    • There is still a collection of minor issues / bugs to be resolved – such as gaining parity with some of the current sky settings – which will be dealt with in RC as the viewer progresses.
    • Those who do find significant issues in using the viewer in RC are asked to report them via a BUG report ASAP.
  • When the viewer does go to RC, it is likely the server-side support will be deployed to one (Preflight) or possibly two (Preflight and Snack) small simulator RC channels to allow for testing on Agni (the Main grid). Details of the available regions will be published in my project summaries as and when available.

Future glTF Work

  • There is an internal (to LL) design document in development which is intended to set-out the next steps in the glTF work; however, this is not currently ready for public release.
  • Past indicators have been that it is possible the near-term work for glTF could include planar mirrors (with some controls around their use) and also glTF mesh uploads.

Next Meeting

  • Thursday, May 4th, 2023.

† The header images included in these summaries are not intended to represent anything discussed at the meetings; they are simply here to avoid a repeated image of a gathering of people every week. They are taken from my list of region visits, with a link to the post for those interested.

2023 SL Puppetry project week #15 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, March 23rd, 2023 Puppetry Project meetings held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General Project Description as Originally Conceived

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth-of-field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
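A LEAP plug-in is an ordinary external process that exchanges serialised LLSD messages with the viewer over its stdin / stdout, each message prefixed with its byte length and a colon. A minimal sketch of that framing in Python (the `frame` / `unframe` helper names are hypothetical, and the payload here is placeholder bytes rather than real serialised LLSD):

```python
def frame(payload: bytes) -> bytes:
    # LEAP frames each message as b"<decimal length>:<payload>"
    return str(len(payload)).encode("ascii") + b":" + payload

def unframe(stream: bytes) -> tuple[bytes, bytes]:
    # Split one framed message off the front of a byte stream,
    # returning (payload, remaining bytes).
    length, _, rest = stream.partition(b":")
    n = int(length)
    return rest[:n], rest[n:]

# Round-trip a placeholder payload (a real plug-in would serialise LLSD here):
message = frame(b"{'pump':'puppetry','data':{}}")
payload, remainder = unframe(message)
```

An actual plug-in would read framed messages from stdin in a loop and write framed replies to stdout.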

Additional Work Not Originally In-Scope

  • Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
  • Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
  • Enhanced LSL integration for animation control.
  • Adoption of better animation standards – possibly glTF.
  • Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer Progress

  • An updated version of the project viewer is still “close” – it is currently awaiting clearance by QA.
  • This will include the attachment point tracking discussed in previous meetings (e.g., a step towards being able to “pick things up” in SL). The simulator support for this is already in place on the Aditi Puppetry regions.
  • However, when available, it will not include:
    • LSL animation control (which has yet to be added to the simulator code anyway). Rider Linden believes he has a good protocol for single avatar animation, but would rather work on it some more.
    • Any IK improvements, as Leviathan Linden is still working on these.
    • Any extended LEAP API functionality (the added features for getting world position/orientation, lookat position/orientation, camera position/orientation/target). This will be coming in a future viewer update.
  • Another change in this release will be the switch to llsd_binary for the LEAP messaging protocol. This will be in the release notes, but to use it with Python scripts you will want to update to llbase 1.3.1 and/or llsd 1.2.0. Messages from LEAP scripts to the viewer will still work with the older Python libraries, but messages from the viewer to the script will not be parsed correctly.
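Given the llsd_binary change, a Python LEAP script might want to check the installed library versions before relying on binary parsing of viewer messages. A hedged sketch using only the standard library (`meets_minimum` is a hypothetical helper; the package names and minimum versions are those given above):

```python
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(package: str, minimum: tuple[int, ...]) -> bool:
    # True if `package` is installed at version `minimum` or newer.
    try:
        parts = version(package).split(".")
        installed = tuple(int(p) for p in parts[:len(minimum)])
    except (PackageNotFoundError, ValueError):
        return False
    return installed >= minimum

# Viewer-to-script messages use llsd_binary, so require the newer parsers:
can_parse_binary = meets_minimum("llsd", (1, 2, 0)) or meets_minimum("llbase", (1, 3, 1))
```

A script could fall back to a warning (or refuse to start) when `can_parse_binary` is false.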

Server-Side Work

  • The LSL function API has been published to the Content Creation Discord group (sorry, I’ve been asked by LL not to publish details on joining the server – if you are a content creator interested in joining it, please contact Vir Linden or attend a meeting (Content Creation / Puppetry) and ask in person).
  • Getting attachment point positions has been given a throttle, in part so as not to make it trivial to use LSL to rip an animation, and in part to prevent the server from being overwhelmed. The rate of throttling is variable and can change as load increases / decreases. However, as Rider Linden noted, there would always be some delay and some disagreement about the actual position of the attachment point between LSL and all the observing viewers. As such, the function is not meant for high-fidelity use. Collision volumes on the attachment points will be a better solution in this respect, but that is functionality which is still down the line.
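The variable-rate throttling described above can be illustrated with a token bucket, where the refill rate can be turned down as load rises. This is a toy Python sketch, not LL's actual server implementation:

```python
import time

class TokenBucket:
    # Allows a burst of up to `capacity` requests, then refills at
    # `refill_per_sec` tokens per second (which can be lowered under load).
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller is throttled and must retry later

bucket = TokenBucket(capacity=2, refill_per_sec=0.5)  # ~1 query per 2s sustained
results = [bucket.allow() for _ in range(4)]          # a rapid burst of four queries
```

Under load, the server would lower the refill rate – the “variable” behaviour noted above.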

General Notes

  • Leviathan Linden’s work on streaming the full avatar animation state has stalled, as it essentially hijacks the main puppetry data channel to send everything through LEAP, even when not running a puppetry script. As such, Leviathan thinks it needs to be moved to its own experimental viewer.
  • Simon Linden’s work on allowing animation uploads of new/different formats has been decoupled from the Puppetry project’s codebase, and is now being built on the main viewer branch, allowing it to move forward without dependencies on Puppetry.
  • OpenXR support as a LEAP plug-in is still seen as desirable, since it would allow support for a broader range of devices. However, it is seen as a little more “down the road”, as there is some core infrastructure that needs to finish being vetted prior to work starting on this.

My thanks to Jenna Huntsman for the chat transcript from the meeting, and you can see her video recording of the session here.

Date of Next Meeting

  • Thursday, April 27th, 2023, 13:00 SLT.

2023 week 14: SL CCUG meeting summary

Perpetuity, February 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, April 6th, 2023 at 13:00 SLT.

These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are usually chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar.

Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.
Additional note: unfortunately, physical world matters meant I missed the initial part of the meeting, and as it is held in voice, there is little by way of chat transcript to reflect initial discussions prior to my arrival.

Official Viewer Status

On April 6th:
  • Maintenance T(ranslation) RC viewer, version 6.6.11.579154, was issued.
  • The PBR Materials / reflection probes project viewer updated to version 7.0.0.579241.
The rest of the current official viewers remain as:
  • Release viewer: Maintenance R viewer, version 6.6.10.579060, dated March 28, promoted March 30th.
  • Release channel cohorts:
    • Maintenance S, version 6.6.11.579153, March 31st.
    • Performance Floater / Auto FPS RC viewer updated to version 6.6.11.579238, April 4th.
  • Project viewers:
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry, they only have the materials array, and there is only one material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1; Materials Adult; and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.
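As noted above, a near-term glTF Materials asset is a glTF file containing only a one-entry materials array, with no nodes or geometry. A minimal hypothetical example of such an asset (the material name and factor values are illustrative only):

```json
{
  "asset": { "version": "2.0" },
  "materials": [
    {
      "name": "example_material",
      "pbrMetallicRoughness": {
        "baseColorFactor": [1.0, 1.0, 1.0, 1.0],
        "metallicFactor": 0.0,
        "roughnessFactor": 0.8
      },
      "emissiveFactor": [0.0, 0.0, 0.0]
    }
  ]
}
```

In a fuller asset, baseColorTexture, normalTexture, metallicRoughnessTexture and emissiveTexture entries would reference the textures / images arrays; the factors-only form keeps the example self-contained.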

Status

  • The viewer has been updated to maintain parity with the release viewer, and work continues to get the viewer to a position where it can move to RC status.
    • Once it does go to RC status, it is expected to remain there for “a few months”.
  • Currently, the viewer is at a point where creators who wish to make content using PBR tools such as Substance Painter can do so, working to the rule-of-thumb that if it looks the same in both Substance Painter and the glTF viewer, then all is well and good – BUT, if the SL version looks noticeably different in the viewer, then a bug report should be filed; the issue should not be worked around.
  • Getting the simulator support for glTF moved to Agni is now being considered.
  • With regards to Bakes on Mesh, glTF Materials work in a similar manner to the current materials – the result of the BoM process gets fed into the base colour (+ the emissive map) like it does with the diffuse map for materials at present.
    • This does not mean BoM is glTF materials enabled; that still requires an update to the Bake Service to support materials data.
    • Updating the Bake Service is still seen as a “high value” future project.
  • The midday position of the Sun has been adjusted so that it is no longer directly overhead, but is angled to appear as it would at a latitude of around 40ºN/S in spring.
Left: the glTF viewer repositions the midday Sun so it is in similar position as it would appear in the physical world at a latitude of around 40ºN/S in the spring, as opposed to being directly overhead as seen in the image on the right. Credit: Runitai Linden
  • Automatic alpha masks are turned off in the PBR viewer, and are likely to remain this way unless a compelling reason emerges for this not to be the case. So the Develop(er) → Rendering → Automatic Alpha Masks option for deferred rendering is off (and the one for forward rendering removed, as the glTF viewer does not support forward rendering).

HDRi Sky Rendering

  • Achieving parity with High Dynamic Range Imaging (HDRi) environment maps has meant that the sky as rendered in the glTF viewer is essentially HDR with added dynamic exposure. Without this change, the sky was lighting everything as if it were a “giant blue wall” rather than a bright sky.
  • This has impacted EEP (the Environment Enhancement Project), and means that the sky can look over-exposed under some settings.
  • LL is trying to zero in on a set of sky parameters acceptable for most EEP settings. However, the issue is particularly noticeable for EEP settings which use “day for night” (e.g. they utilise dark sky tinting, etc., and replace the Sun texture with a planet or moon or some such), because the HDR rendering assumes that because the Sun is “up”, the sky should be rendered with brighter lighting.
  • The choice here is:
    • Should the parameters be adjusted for uniformity (meaning some EEP settings will require adjustment), or
    • Should additional controls be supplied to allow sky brightness, etc., to be adjusted to deal with EEP settings where the above issues occur?
  • The problems with this second approach are that:
    • It would “severely” fragment the expected colour space, leaving content creators having to work with multiple lighting models (e.g. as with working with ALM on or off at present).
    • It is akin to LL removing the ability to disable ALM in the PBR viewer and removing the older forward rendering code, only to then implement another “button” to alter the environment rendering, rather than keeping things uniform.
  • This topic has been the subject of heated debate within the Content Creation Discord channel.
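The tension between dynamic exposure and “day for night” settings can be illustrated with a toy sketch: if exposure is driven by the scene's average luminance, a bright HDR sky pulls exposure down, while a darkened sky pushes it up – so a tinted “night” sky with the Sun still “up” ends up brighter than intended. This is a simplified Python illustration using a log-average / mid-grey key, not the viewer's actual tone-mapping code:

```python
import math

def auto_exposure(luminances, key=0.18):
    # Exposure scale mapping the scene's log-average luminance to a mid-grey "key".
    log_avg = math.exp(sum(math.log(max(l, 1e-6)) for l in luminances) / len(luminances))
    return key / log_avg

bright_sky_scene = [5.0, 5.0, 0.2, 0.1]      # genuine daytime: bright sky dominates
night_tinted_scene = [0.05, 0.05, 0.2, 0.1]  # "day for night": dark sky tint
exposure_day = auto_exposure(bright_sky_scene)
exposure_night = auto_exposure(night_tinted_scene)
# The darkened scene receives a much higher exposure, brightening the "night" sky.
```

The hypothetical luminance values stand in for per-pixel scene luminance; only the relative behaviour of the two exposures matters here.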

In Brief

  • Priorities for graphics / content creation work after glTF Materials are currently planar mirrors and then glTF mesh imports.

Next Meeting

  • Thursday, April 20th, 2023.