2023 week 11: SL CCUG meeting summary

Gothbrooke Forest, January 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, March 16th, 2023 at 13:00 SLT.  These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry; they have only the materials array, and that array contains a single material (a minimal sketch follows this list).
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.
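
By way of illustration, here is a minimal sketch of what such a node-less, single-material glTF “materials scene” looks like under the core glTF 2.0 specification. The material name and factor values are purely illustrative, and the exact asset wrapping LL uses may differ.

```python
import json

# A glTF 2.0 "materials-only" document: no nodes or geometry, just one entry
# in the materials array, using the PBR metallic-roughness model (spec 3.9).
material_asset = {
    "asset": {"version": "2.0"},
    "materials": [
        {
            "name": "ExampleMaterial",  # hypothetical name
            "pbrMetallicRoughness": {
                "baseColorFactor": [1.0, 0.8, 0.6, 1.0],
                "metallicFactor": 0.2,
                "roughnessFactor": 0.7,
            },
        }
    ],
}
print(json.dumps(material_asset, indent=2))
```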

Status

  • The PBR Materials project viewer was updated to version 7.0.0.578792 on March 15th, 2023. Note that this viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Texture handling / management:
    • As a result of data gathered by the Lab revealing that a lot of users only have around 1 GB of texture memory, Dave P (Runitai Linden) has been making another pass through texture handling to make loading faster and memory use more efficient.
    • VRAM management has been improved to more selectively release texture memory on systems which might otherwise “run low” on available VRAM.
    • The hope is that these changes will reduce texture thrashing issues (textures repeatedly blurring and clearing) in the future for those so affected (a minimal sketch of the eviction idea follows this list).
  • Geenz Linden continues to work on the Mac side of the PBR work; Comic Linden is finalising UV treatment; and Bed Linden is working on the one remaining server-side bug the team is aware of, as well as on atmospherics and issues with rendering them in linear space.
  • Brad Linden is working on a series of bugs in PBR materials handling in which edits made via LSL or manually are dropped rather than applied in various edge cases and situations.
    • The simulator-side fixes for these issues are in place; fixes within the viewer are awaiting inclusion in an upcoming viewer update.
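
As an illustration of the texture memory management described above, here is a minimal sketch of least-recently-used eviction against a fixed VRAM budget. It is not LL’s code; the class shape and names are assumptions, and the 1 GB figure simply echoes the data point noted above.

```python
from collections import OrderedDict

class TextureBudget:
    """Track texture residency and evict least-recently-used entries."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.textures = OrderedDict()  # tex_id -> size, kept in LRU order

    def touch(self, tex_id, size):
        """Record a texture use this frame, evicting LRU entries as needed."""
        if tex_id in self.textures:
            self.textures.move_to_end(tex_id)  # now most-recently-used
            return
        self.textures[tex_id] = size
        self.used += size
        while self.used > self.budget and len(self.textures) > 1:
            _, old_size = self.textures.popitem(last=False)  # evict LRU
            self.used -= old_size
            # a real viewer would release the GPU-side texture here

cache = TextureBudget(1024**3)  # a 1 GB budget, per the figure above
```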

In Brief

  • glTF format support for geometry (mesh), animations, etc., is something the Lab does want to do, but it will take the form of follow-on project(s) from the current glTF PBR materials work.
    • Supporting glTF geometry imports is seen as a major project, as it will likely require handling of arbitrary hierarchies, which is not something SL currently handles – although it is acknowledged that, once done, it will offer a lot of benefits.
  • There was a general discussion on terrain improvements. This is something LL had been considering, but content creators attending the CCUG meeting favoured the PBR work and graphics updates, so the terrain updates have had to be put back onto the road map. Where they would slot in is not clear, as the desire from creators is to see the glTF work continue with geometry import support, etc., as noted above.
  • Another major graphics project waiting in the wings is the introduction of support for the Vulkan graphics API / MoltenVK (for Mac). This would likely take priority over any significant terrain work.

Next Meeting

  • Thursday, March 30th, 2023.

2023 SL Puppetry project week #10 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, March 9th, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General Project Description as Originally Conceived

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK – a minimal solver sketch follows this list) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special is needed beyond the project viewer to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
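
For context on the IK mentioned above, the following is a minimal two-bone solver of the kind commonly used for arm tracking: given upper / lower arm lengths and a 2D target, it returns shoulder and elbow angles via the law of cosines. This is generic textbook maths, not LL’s solver.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Solve shoulder and elbow angles so the arm tip reaches (tx, ty)."""
    d = math.hypot(tx, ty)
    d = max(min(d, l1 + l2), abs(l1 - l2))  # clamp target to reachable range
    # elbow bend: deviation from a straight arm, via the law of cosines
    cos_e = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_e)))
    # shoulder: angle to the target, minus the triangle's interior angle
    cos_s = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_s)))
    return shoulder, elbow

# e.g. a 0.3 m / 0.25 m arm reaching for a point in front of the shoulder:
print(two_bone_ik(0.3, 0.25, 0.4, 0.2))
```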

Additional Work Not Originally In-Scope

  • Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
  • Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
  • Enhanced LSL integration for animation control.
  • Adoption of better animation standards – possibly glTF.
  • Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer Progress

  • An updated version of the project viewer is due to be made available once it has cleared LL’s QA process. This includes:
    • Using the binary protocol for the LEAP module communication, with new logic which causes LEAP modules to only be loaded by the viewer when they are used (a minimal sketch follows this list).
    • The AgentIO LEAP module adds the ability to adjust the look-at target, viewer camera and agent orientation.
    • Support for sending the joint position of your avatar to the server, which is then available in LSL.
      • The code reports the post-animation location of attachment points, allowing the server to know where things like hands and wings are; this in turn allows LSL to query where an attachment point is in space and how it is rotated.
  • HOWEVER, the animation streaming code (see previous Puppetry meeting notes) will not be in the next viewer update.
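
As an illustration of the load-on-first-use behaviour described above, here is a generic lazy-loading sketch. The module names and loader shape are assumptions for illustration, not LL’s LEAP implementation.

```python
import importlib

class LazyModules:
    """Import a registered module only the first time it is requested."""

    def __init__(self, names):
        self._names = set(names)
        self._loaded = {}

    def get(self, name):
        if name not in self._loaded:
            if name not in self._names:
                raise KeyError(f"unknown module: {name}")
            self._loaded[name] = importlib.import_module(name)  # first use
        return self._loaded[name]

# e.g. modules = LazyModules(["agent_io", "webcam_tracker"])  # hypothetical
# nothing is imported until modules.get("agent_io") is first called
```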

Server-Side Work

  • The simulator code now has llGetAttachmentPointAnim() support, which should be recognised by the upcoming viewer update.
  • The Aditi puppetry regions are to be merged with the updated code so this can be tested.
  • While there has been some work completed on animation imports since the last meeting, there was nothing significant for LL to report on progress at this meeting.

General Notes

  • There is additional work going on to try to improve the IK system, with the aim of having the basics working better than is currently the case – better stability, etc. This work may appear in the viewer update after the one currently being prepared to go public.
  • Performance:
    • To prevent puppetry generating too much messaging traffic (UDP) between the viewer and simulator, a throttle is being worked on so that when the simulator is under a heavy load from multiple viewers running puppetry code, it can tell them all to tone down the volume of messages.
    • There will also be some switches and logic put into place that can be used when needed, helping to protect regions in case the load gets overwhelming.
    • A further suggestion made was to ensure the simulator does not broadcast puppetry messages for avatars which are seated and not using the code (such as an audience at a performance) to further reduce the volume of messaging; this is viewed as a potentially good avenue of work to consider.
    • There is also a threshold in place – if an attachment point does not move beyond it, it is not considered as moved – which will hopefully also reduce the amount of messaging the simulator has to handle (a minimal sketch follows this list).
  • LSL Integration:
    • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
    • This work is now paused. Rider Linden developed a proof of concept, but found that in order to better manipulate parameters within the constraints, a configuration file should be used. He is therefore refactoring the code to do this before proceeding further.
    • The configuration file will be called avatar_constraints.llsd and it will live alongside avatar_lad.xml in the character directory.
  • Questions were again raised on whether Puppetry is for VR / will enable the viewer to run VR.
    • It was again pointed out that while Puppetry lays more foundational work which could be leveraged for use with VR headsets, that is not the aim of the Puppetry project.
    • Providing VR headset support is a much broader issue, which would require the involvement of other teams from LL – Product, the Graphics Team, the viewer developers, etc.
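
Returning to the movement threshold noted under Performance above, the following is a minimal sketch of how such a filter suppresses update messages for attachment points that have not meaningfully moved. The epsilon value and names are assumptions, not LL’s code.

```python
def significant_moves(last_sent, current, epsilon=0.01):
    """Yield (joint, position) pairs worth sending; update last_sent in place."""
    for joint, pos in current.items():
        prev = last_sent.get(joint)
        if prev is None or sum((a - b) ** 2 for a, b in zip(pos, prev)) > epsilon**2:
            last_sent[joint] = pos
            yield joint, pos

sent = {}
# the first report always goes out; sub-epsilon jitter afterwards is suppressed
print(list(significant_moves(sent, {"mHandLeft": (0.0, 0.0, 0.0)})))
print(list(significant_moves(sent, {"mHandLeft": (0.001, 0.0, 0.0)})))
```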

Date of Next Meeting

  • Thursday, March 23rd, 2023, 13:00 SLT.

2023 week 9: SL CCUG meeting summary – PBR

Cloud Edge, January 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, March 2nd, 2023 at 13:00 SLT.  These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

Official Viewers Summary

The PBR Materials project viewer was updated to version 7.0.0.578526 on March 3rd, 2023. Note that this viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.

Available Viewers

General Viewer Notes

  • The Maintenance R and the Performance Improvements / Auto-FPS RC viewers are both now apparently in line for promotion to de facto release status, although both may go through further RC updates prior to being promoted.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry; they have only the materials array, and that array contains a single material.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • Work continues on viewer-side bug fixes.
  • Tone mapping: work is progressing on implementing the Krzysztof Narkowicz variant of ACES tone mapping, which should – depending on the monitor being used / viewer preferences set – produce better graphical results. As the result can vary by monitor / eye, this will include both an exposure slider and an option to disable the tone mapping entirely (a sketch of the curve follows below).
  • Geenz Linden is working on the Mac side of the PBR work; Comic Linden is finalising UV treatment; Bed Linden is working on the one remaining server-side bug the team is aware of; and Dave P (Runitai Linden) is working on atmospherics and issues with rendering them in linear space.
  • Linear space alpha blending: there are still issues with this, particularly at either end of the scale (high colours / high transparency and low colours / low transparency). This is being worked on, but may end up with a debug setting to disable linear space alpha blending for those who need to, with a warning that this is not how scenes are intended to be viewed.
A scene imported by Nagachief Darkstone and WindowsCE to demonstrate reflection probes (note the reflections on the knight’s armour – these are not generated by attached environment lights, but by a reflection probe within the building structure). Image courtesy of Rye Cogtail.
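
For reference, the Narkowicz ACES curve mentioned above is a widely published approximation; a sketch of it follows. How LL wires it to the planned exposure slider is an assumption here.

```python
def aces_narkowicz(x):
    """Krzysztof Narkowicz's ACES filmic fit: linear HDR value -> [0, 1]."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return max(0.0, min(1.0, y))

def tonemap(rgb, exposure=1.0):
    # exposure scales the linear colour before the curve is applied
    # (assumed behaviour for the planned viewer slider)
    return tuple(aces_narkowicz(ch * exposure) for ch in rgb)

print(tonemap((1.5, 0.8, 0.2), exposure=1.2))
```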

In Brief

  • It now looks as if the move away from the OpenGL API will be to Vulkan for Windows (/Linux?) and MoltenVK for Mac.
  • LL is interested in implementing something similar to the Firestorm Local Mesh capability by Beq Janus and Vaalith Jinn (see here and here for more), possibly as a result of a code contribution.
  • Land Impact:
    • Some creators are using the Animesh checkbox on upload to try to get around large mesh objects having heavy Land Impact values. LL gave notice at the meeting that this is regarded as an exploit, and it will be patched – so those doing so should really cease, in order to avoid people facing unplanned object returns when their parcels start reporting they are over capacity.
    • In terms of Land Impact overall, it was acknowledged that while updated to allow for mesh, etc., the formula does still have some shortfalls; however, redressing this would require work which also involves bandwidth and server memory, and is not currently on the cards.
    • It is hoped that the move to support glTF mesh imports will offer a means to address LOD issues and Land Impact, as it will bring with it a fundamental shift in the data model.
  • Cull distance volumes: one way to reduce the render load on a system is to have cull distance volumes. The PBR reflection probes are seen by LL as a means to test data gathering which can eventually be used in cull distance volumes (e.g. so you can set up a volume inside a room and have the viewer not start rendering anything within that room until a camera is within X metres of it) – a minimal sketch follows this list.
    • This could potentially make Land Impact more dynamic in terms of content streaming costs, based on the use of cull volumes / camera position.
    • It could also be used to assist in privacy matters (e.g. “don’t render what’s in this room unless people are in this room”).
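
A minimal sketch of the cull distance volume idea described above: the contents of a volume are only rendered once the camera comes within X metres of it. Names and shapes here are illustrative, not an actual SL API.

```python
import math

def should_render(volume_centre, volume_radius, camera_pos, cull_distance):
    """True when the camera is within cull_distance of the volume's surface."""
    d = math.dist(camera_pos, volume_centre)
    return d - volume_radius <= cull_distance

# e.g. a 5 m room volume whose contents render only within 10 m of its surface:
print(should_render((128, 128, 22), 5.0, (140, 128, 22), 10.0))  # True (12 - 5 <= 10)
```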

Next Meeting

  • Thursday, March 16th, 2023.

2023 SL Puppetry project week #8 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, February 24th, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General Project Description as Originally Conceived

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special is needed beyond the project viewer to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Additional Work Not Originally In-Scope

  • Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
  • Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
  • Enhanced LSL integration for animation control.
  • Adoption of better animation standards – possibly glTF.
  • Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

General Progress

  • LSL Integration:
    • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
    • Rider Linden has not been able to get a lot done on the scripted control due to being out of the office. He does have the LSL function discussed in the last meeting correctly sending the necessary data down to the agent’s viewer.
    • He is now working on how to feed that into the IK, and has a general framework, although he notes it’s been slow progress.
  • Simon Linden has been working on animation importing. This is additional work in terms of the Puppetry project, but comes as a result of discussions at previous meetings.
    • He is looking to add additional .BVH support, and possibly .FBX (e.g. .FBX using some specific skeletons and settings; the goal is to be able to get data out of animation tools and into SL without requiring 2 years of Blender skills). Given the general move towards glTF, that format is seen as preferable (there is a possible appetite within LL for a re-write of the animation system, although it is not on the immediate horizon – or a visible horizon at present).
    • Requests are still being made to allow animation priorities to be changed post-upload and to edit animation values dynamically – it is not clear how much of this will be touched.
    • Changing the manner in which animation priorities currently work is not something LL is planning on touching.
    • Right now, the messages that transmit which animations to play do not have a way to specify a priority, just the animation’s asset ID, with the viewer getting the priority from the asset. This may change in the future, but the focus right now is on getting scripted animation control improved.
  • Leviathan Linden is continuing to work on animation streaming, but progress has been delayed due to bug hunting and fixing. However, he hopes to get the code into the Puppetry project viewer branch sooner rather than later. He has noted that this is very sensitive to bad framerates on the sender and on the simulator, which probably means that before animation streaming and / or puppetry could be “delivered”, some technical debt will need to be addressed, on the server at least.
  • The focus at the moment is on putting everything that has been worked on together, and then making sure it all works within the viewer. After that comes the issue of making sure that things work between viewers (e.g. that 20 people running animation streaming in a scene does not result in the viewers collapsing or being unable to play back all the streams), and ensuring the new capabilities play nicely with existing “canned animation” systems (e.g. dance machines).

In Brief

  • It’s been noted that moving the simulators to 64-bit is being worked on.

Date of Next Meeting

  • Thursday, March 9th, 2023, 13:00 SLT.

2023 week 7: SL CCUG and TPVD meeting summaries: Mirrors!

Under the Northern Lights, December 2022 – blog post
The following notes were taken from:
  • My audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, February 16th 2023 at 13:00 SLT.
  • My chat transcript and the video recording of the Friday, February 17th TPV Developer’s meeting, recorded by Pantera Północy and embedded at the end of this article. My thanks, as always, to her for recording these meetings.
These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

Official Viewers Summary

Available Viewers

There have been no further updates to the currently available official viewers since the PBR Materials viewer was updated at the start of the week, as reported in my week #7 SUG meeting summary. Therefore the pipelines remain as follows:
  • Release viewer: Maintenance Q(uality) viewer, version 6.6.9.577968 Thursday, February 2, 2023.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.578161, February 14, 2023. This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

General Viewer Notes

  • It is hoped that the Performance Floater RC viewer will be promoted to de facto release status within the week, which would allow all official viewers to leverage Visual Studio 2022 on Windows builds going forward.
  • There are some changes to be made to GitHub handling, due to all the pull requests (PRs) going to branches which can change over time, causing issues as they do so. In the future, it is likely that PRs will go into the Main branch (which only changes on a per-release basis) and from there be moved into their intended branch.

CCUG – glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry; they have only the materials array, and that array contains a single material.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • Viewer:
    • Work continues on bug fixes.
    • A major new bug is the discovery that the UV treatment is off-specification. This appears to be due to OpenGL putting the 0,0 coordinate in the lower left corner of the image rather than the top left. This does mean that all PBR materials uploaded to Aditi (the beta grid) prior to the fix going into the viewer will effectively be “broken” post-fix.
    • The lighting model for water in the project viewer has been updated to use the glTF specification lighting model, so that reflection probes can be used to generate reflections on water. However, trying to adapt the “old” water shader to use the glTF lighting model is proving difficult, due to the “bonkers” way things like fresnel offset and scale have been implemented. This issue is to be addressed.
    • It is believed that most existing content should render reasonably faithfully under PBR / glTF, with the exception of the known issue of alpha blending on colour curves. Runitai Linden has a couple more ideas as to how this might be improved but, overall, it might come down to having to explain that the colour space is changing for glTF, and as a result some alpha blended content will need to be adjusted in order to render correctly (a worked example of the colour space difference follows this list).
    • As the Advanced Lighting Model (ALM) will be enabled all the time in the PBR viewer (the Forward renderer will be disabled), the viewer’s quality settings are being updated so that Shadows will be disabled by default across a much wider range of settings, as these are what cause the significant performance hit when ALM is enabled, rather than ALM itself (Shadows can still be manually enabled).
    • This viewer also causes instrumentation regressions within the Performance Floater viewer, which will likely be addressed when the code is ready to be merged with the release version of the viewer.
  • It is hoped that the simulator-side support can be deployed to an RC on the Main grid (Agni) in the near future in order to further advance viewer testing as that moves from project to RC status as well.
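
As a worked example of why the colour space change matters for alpha blends (generic sRGB maths, not LL’s shader code): blending 50% white over black gives a visibly different pixel depending on whether the blend happens in gamma (sRGB) space or in linear space.

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

fg, bg, alpha = 1.0, 0.0, 0.5  # white over black at 50% alpha
gamma_blend = fg * alpha + bg * (1 - alpha)  # blend in sRGB space: 0.50
linear_blend = linear_to_srgb(
    srgb_to_linear(fg) * alpha + srgb_to_linear(bg) * (1 - alpha)
)  # blend in linear space, expressed back in sRGB terms: ~0.74
print(round(gamma_blend, 2), round(linear_blend, 2))
```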

CCUG – Mirrors(!)

  • The “very next thing” LL plans to implement after PBR Materials reaches Release Candidate status is – mirrors!
  • These will be planar mirrors, so best suited to flat surfaces such as the face of a cube, rather than curved or spherical surfaces.
  • Mirrors will effectively be a real-time 1:1 rendering of what is seen within the scene being reflected, but with some limitations to cater for performance (a sketch of the underlying planar-reflection technique follows this list). The limitations / controls under discussion at the Lab include:
    • The mirror effect will only be generated in viewers that are very close to it.
    • Perhaps limiting the number of mirrors which can be active within a viewer to just one per scene (so if there are two mirrors close by your avatar, only one will be active at a time), or allowing the user to select the number of mirrors they wish to see “working” at any given time.
    • Adding a viewer Preferences option to enable / disable mirrors, depending on the user’s needs.
    • Nevertheless, even with precautions such as the above, there will be a performance impact in having real-time mirrors active in the viewer.
  • Mirrors will likely support LSL control over them.
  • It is already being recommended that mirror surfaces are only used as mirrors, not as a means of generating “reflections” in general – which should be left to reflection probes / cube maps.
  • It is hoped that the mechanism for rendering reflections onto a mirror surface will use the same channels as reflection probes – so when the mirror is seen from a distance, it uses the reflection rendering based on the local reflection probes, but when approached, the reflection probe rendering would fade out and the real-time planar mirror reflection rendering would fade in.
  • That said, precisely HOW real-time mirrors will work is still subject to discussion and planning: at the moment, the focus has only been on ensuring the PBR work does not block opportunities for adding real-time reflections, and that they will play nicely with the PBR Materials work when they are developed.
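
To ground the above, here is a sketch of the standard planar-mirror technique – reflect the camera across the mirror plane and render the scene again from that virtual camera to obtain the 1:1 real-time reflection. This is generic graphics maths, not confirmed LL code, and the function name is illustrative.

```python
import numpy as np

def reflection_matrix(plane_point, plane_normal):
    """4x4 matrix reflecting points across the plane through plane_point."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = -np.dot(n, np.asarray(plane_point, dtype=float))
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)  # Householder reflection of directions
    M[:3, 3] = -2.0 * d * n            # translation for planes off the origin
    return M

# e.g. reflecting a camera at (0, 0, 2) across the z = 0 plane gives (0, 0, -2):
cam = np.array([0.0, 0.0, 2.0, 1.0])
print(reflection_matrix((0, 0, 0), (0, 0, 1)) @ cam)
```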

CCUG – Avatars / New Start Avatars / Ecosystem

  • A question was raised about the upcoming new mesh starter avatars previewed at SL19B in June 2022. These have yet to be released, and are not intended to compete with existing mesh avatars; rather, LL hopes creators will help develop an ecosystem in support of the avatars as the devkits for them are released – there is no confirmed release date for the avatars.
  • The above led to a general discussion on the learning curves involved in getting to grips with avatar bodies and heads, trying to match heads to bodies, etc., the need for more discussions on avatar capabilities, helping people understand the avatar content creation process so they can join the ecosystem, and so on.
  • There was agreement that more discussion is needed on avatar-related content creation; real and perceived limitations of the avatar system – particularly rigging clothing and attachments and the reliance on additional toolsets (e.g. AvaStar, MayaStar, etc.); and issues with the supporting information available through the SL Wiki / Knowledge Base. See In Brief for more on discussions / potential new meetings.
  • There are internal discussions going on at the Lab concerning avatar physics: enabling the simulator to “know” more about the avatar and how it is being animated, having the simulator-side physics engine fully recognise the avatar body as a physical object (rather than just a simplified capsule), etc., via the likes of the Puppetry project and elsewhere – but solutions are still TBD.

In Brief

  • CCUG: Alpha blending issues on avatars – there was a general discussion on alpha stacking/ordering and blending issues, with Beq Janus’ blog post on the subject relating to avatars / outfits being referenced as a good primer on the issue and steps to mitigate problems.
  • TPVD: work is continuing on the Inventory thumbnails work, but nothing ready for any form of public release.
  •  TPVD: it has been suggested that LL might want to add code to the new Group Chat History functionality to indicate the end of historic Group chat within a Group chat tab / panel, as people appear to be getting confused as to why they are opening Group chat to find past conversations displayed (due to word about the new functionality taking time to spread).
  • TPVD: concern was raised that allowing lossless Normal Maps under PBR will lead to a lot of abuse, with people using it to upload lossless textures as well, which it was feared would hit people’s VRAM. Runitai pointed out that lossless does not necessarily hit VRAM, but does impact caching and bandwidth. This sparked a general conversation on textures, resolution, quality, etc. However, the risk of people abusing the upload was acknowledged, and storage will be monitored for unexpected spikes in usage after the release of PBR.
  • TPVD: a discussion on viewer development in support of AAA game-style rendering; please refer to the video for details.
  • Both meetings: user on-boarding – at both the CCUG and the TPVD meeting it was suggested that there needs to be a regular user group meeting to discuss user on-boarding, engagement and retention and how to address these on an ongoing basis.
    • This led to a lengthy discussion on the issues of engagement + retention which illustrated one of the core issues in just discussing it: everyone has a different opinion on what “the problem” is with engagement / retention. Some see it as primarily being an expense issue (the cost of creating a good-looking avatar); some see it as people being unable to find interesting things to do; some see it as being performance / hardware / the overall appearance of SL.
    • The problem with the above (as demonstrated particularly at the TPVD meeting) is that it can lead to very siloed outlooks, where disagreements as to “the problem” become the focus of conversations, rather than agreement that all of these issues can play a role, and that solutions perhaps need to be more “holistic” in nature, encompassing all of the perceived pain points.
    • It has been suggested that an upcoming CCUG or TPVD meeting could be utilised as a kick-off session for broader discussions about on-boarding, etc.

Next Meetings

  • CCUG: Thursday, March 2nd, 2023.
  • TPVD: Friday, March 17th, 2023.

2023 SL Puppetry project week #6 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, February 9th, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General description of the project and its inception:

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing special is needed beyond the project viewer to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • This project is taking in a lot of additional ideas – animation standards, improving the current animation system, enabling truer avatar / avatar and avatar / object interactions – such that it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements.
  • As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Animation Streaming

  • Leviathan Linden has been experimenting with animation streaming over the viewer’s animation channel, such that whatever is sent from the controlling viewer is played directly by all receiving viewers without any further processing of the animations by those receiving viewers.
  • This had been discussed in previous meetings as a potential means of lightening the load of animation data processing individual viewers would have to perform, reducing the potential performance impact in situations where animation synchronisation is important. It also lays something of a further foundation for more procedural-based animation processing, allowing viewers to work smarter – sending less data more frequently, which will in turn help enable synchronised animations through puppetry, such as spontaneously sharing a high five.
  • This initial test, featuring just one viewer using puppetry and one receiving it, actually revealed a more noticeable lag in streaming compared to individual processing and playback of received animation data. It is not clear at this time whether this would worsen in situations where multiple puppetry animations are being streamed / received.
  • The videos were initially posted to the restricted-access Second Life Content Creation Discord server, and then combined into a single video by Kadah Coba, which is reproduced here as an animated GIF – my thanks to Kadah for the work in combining the videos.

Puppetry streaming test: top – animation played in one viewer (large image) with data sent for processing by a receiving viewer (inset). Bottom: the same animation played on the same viewer and then streamed to the receiving viewer (inset) and played on receipt without any additional animation processing.

  • Leviathan notes this is a very quick and dirty test, requiring “some hackery” within the viewer’s animation code, but does not (as yet) require any changes to the server-side puppetry management code.
  • Further refinement of the code is required, together with further testing to see if the approach can be smoothed / improved; as such, the work is not currently fit for integration into the Puppetry project viewer, although it has been suggested it might be offered as a temporary, separate test viewer to allow broader testing.
  • One potential issue is that the streaming is currently dependent on the reliability of the originating viewer; if it is running at a low FPS, receiving viewers may see a “choppier” result, with a mix of lagging and smooth animations within the stream.

LSL Integration

  • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
  • Rider Linden has been working on an LSL API to control animation on an avatar.
Conceptually it would behave like LEAP from LSL. The simulator will send the animation instructions down to the targeted viewer which will perform the IK and animation, and then send the results back as though they were coming from any other LEAP plugin. Targeting objects should be possible with the API (although I’ll add a world position/rotation parameter so you don’t have to do the math yourself).

– Rider Linden

  • This work is possibly best described as moving a step towards enabling an avatar using puppetry to reach out and pick up an apple by allowing a script to position the avatar’s hand at the location of the apple, from where the user can use a supported capture tool to “pick up” the apple.
  • The envisioned actions would be: the user moves their avatar’s arm towards the apple; the apple detects the collision between the avatar’s hand and itself, and attaches to the hand as if it had been directly picked up.
  • Further work is required involving collisions between the apple and the avatar’s hand, so the apple knows it is being “grabbed”. This might be achieved by using an existing collision event as the trigger for attachment, or an entirely new event (a toy sketch follows this list).
  • One problem is avoiding having multiple extra collision objects bouncing around the physics engine for every single attachment point on an avatar (55 in total), which would add up, performance-wise, very quickly.
    • One suggestion for mitigating this is that, as the region knows where your hand is (which is true with the attachment update stream), it could be possible to implement a new “grab” action that works in the physics simulation for picking up small objects; however, it would likely need some hint / magic on the viewer side to render the object “at the hand” rather than “near the hand”.
  • Beyond this, there is also additional work to allow avatar-to-avatar interactions via puppetry – such as the aforementioned high five – which involves addressing some permission issues.
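
A toy sketch of the grab flow discussed above – purely illustrative Python, not LSL and not LL’s design: a script steers the hand towards the apple each frame, and a contact check (standing in for a collision event) triggers the attachment.

```python
import math

GRAB_RANGE = 0.05  # metres within which contact counts as a grab (assumed)

def step_hand_towards(hand, target, speed=0.02):
    """Move the hand a small step towards the target each frame."""
    d = math.dist(hand, target)
    if d < 1e-6:
        return target
    t = min(1.0, speed / d)
    return tuple(h + (g - h) * t for h, g in zip(hand, target))

def grabbed(hand, apple):
    """Contact check: attach once the hand is within GRAB_RANGE of the apple."""
    return math.dist(hand, apple) <= GRAB_RANGE

hand, apple = (0.0, 0.0, 0.0), (0.3, 0.1, 0.0)
while not grabbed(hand, apple):
    hand = step_hand_towards(hand, apple)
print("grabbed at", tuple(round(c, 3) for c in hand))  # rendered "at the hand"
```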

In Brief

  • Concern was raised that an emphasis on puppetry over traditional canned animation assets could make SL inaccessible for some (because of the need for additional motion capture hardware, a possible need for more powerful client computers, etc.). In response, those at the meeting pointed out:
    • The approach being taken by the Lab is not new – it has been a common factor (albeit implemented in a variety of ways) within games for well over a decade, and is used in multi-player games without participants being “lagged” to the point where gameplay is broken.
    • What is being proposed with Puppetry is not even “new” (in the broadest sense); rather, it is adding a further layer of animation capabilities to Second Life which can enable a greater sense of interactivity on the platform.
    • In terms of hardware, it was further pointed out that while some at the meeting are using VR hardware – headsets and peripherals – all that is actually required to start leveraging the capabilities (as LL have demonstrated in the animated GIF forming the banner of this summary) is a basic webcam.
  • In a more general conversation, it was pointed out by those at the meeting and the Lab engineers that:
    • Whilst things like streaming puppetry animations may at times result in more visible lag / animation desynchronisation, it offers so much more in the way of avatar interaction with the world that it would be more than worthwhile.
    • This work is purely about puppetry and interactivity; it does not actually alter the way more general animations – walking, standing, etc., work, as the underpinning locomotion engine within the simulator and how the viewer calculates motion based on data from the simulator is not being altered.
    • Instead, the LSL API (and the LEAP API?) will enable general avatar orientation and attachment point orientation / movement, to ensure that an arm correctly reaches out to “grab” the apple mentioned above, by effectively running in conjunction with the locomotion engine.

Date of Next Meeting

  • Thursday, February 23rd, 2023, 13:00 SLT.