2023 SL Puppetry project week #6 summary

Puppetry demonstration via Linden Lab – see below.  Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, February 9th, 2023 Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General description of the project and its inception:

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • This project is taking in a lot of additional ideas – animation standards, improving the current animation system, enabling truer avatar / avatar and avatar / object interactions – such that it is likely to evolve into a rolling development, with immediate targets for development / implementation as they are agreed upon, to be followed by future enhancements.
  • As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
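To illustrate the 2D tracking limitation noted above, here is a minimal sketch in plain Python – the function name, frame convention and reach values are my own illustrative assumptions, not project code. A normalised webcam landmark is mapped to an avatar-frame target, with the forward / back axis pinned to a constant because a single webcam cannot reliably recover depth:

```python
# Hypothetical sketch: mapping a normalised 2D webcam landmark (u, v in [0, 1])
# to an avatar-frame IK target. Because a single webcam yields no reliable
# depth, the forward / back axis is pinned to a constant, which is why tracked
# motion currently reads as movement in a 2D plane.

def webcam_landmark_to_target(u, v, reach=0.6, fixed_depth=0.3):
    """Map a normalised landmark to (x, y, z) metres in a hypothetical
    avatar-centred frame: x = right, y = up, z = forward."""
    x = (u - 0.5) * 2.0 * reach      # side-to-side tracks the webcam
    y = (0.5 - v) * 2.0 * reach      # up / down tracks the webcam (v grows downward)
    z = fixed_depth                  # forward / back cannot be recovered from one camera
    return (x, y, z)
```

Whatever the user does, the z value never changes – which is exactly the behaviour seen in the demos.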

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board of public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Animation Streaming

  • Leviathan Linden has been experimenting with animation streaming over the viewer’s animation channel, such that whatever is sent from the controlling viewer is played directly by all receiving viewers without any further processing of the animations by those receiving viewers.
  • This had been discussed in previous meetings as a potential means of lightening the load of animation data processing individual viewers would have to perform, reducing the potential performance impact in situations where animation synchronisation is important. It also lays something of a further foundation for more procedural-based animation processing, allowing viewers to work smarter – sending less data more frequently, which will in turn help enable synchronised animations through puppetry, such as spontaneously sharing a high five.
  • This initial test, featuring just one viewer using puppetry and one receiving it, actually revealed a more noticeable lag in streaming compared to individual processing and playback of received animation data. It is not clear at this time whether this would worsen in situations where multiple puppetry animations are being streamed / received.
  • The videos were initially posted to the restricted-access Second Life Content Creation Discord server, and then combined into a single video by Kadah Coba, which is reproduced here as an animated GIF – my thanks to Kadah for the work in combining the videos.
Puppetry streaming test: top – animation played in one viewer (large image) with data sent for processing by a receiving viewer (inset). Bottom: the same animation played on the same viewer and then streamed to the receiving viewer (inset) and played on receipt without any additional animation processing.
  • Leviathan notes this is a very quick and dirty test, requiring “some hackery” within the viewer’s animation code, but does not (as yet) require any changes to the server-side puppetry management code.
  • Further refinement of the code is required, together with further testing to see if the approach can be smoothed / improved. As such, the work is not currently fit for integration into the Puppetry Project viewer, although it has been suggested it might be offered as a temporary, separate test viewer to allow broader testing.
  • One potential issue is that streaming is currently dependent on the performance of the originating viewer; if it is running at a low FPS, receiving viewers may see a “choppier” result, with the stream alternating between lagging and smooth animations.
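As a rough sketch of the difference between the two approaches discussed above (all names here are hypothetical; this is not LL’s code), the first function has every receiving viewer solve the animation from raw puppetry data, while the second has the originating viewer solve once and stream the result for receivers to play verbatim:

```python
# Illustrative sketch (not LL code) of the two distribution models discussed:
# in "local processing" every receiver solves the animation from raw puppetry
# targets; in "streaming" the originating viewer solves once and receivers
# play the solved joint data verbatim.

def solve_animation(targets):
    """Stand-in for the per-viewer IK / animation solve (hypothetical)."""
    return [("mWristRight", t) for t in targets]  # pretend joint rotations

def distribute_local(targets, receivers):
    # Raw targets fan out; the solve runs once per receiving viewer.
    return {r: solve_animation(targets) for r in receivers}

def distribute_streamed(targets, receivers):
    # The originating viewer solves once; receivers replay the result.
    solved = solve_animation(targets)
    return {r: solved for r in receivers}
```

The outputs are identical; the saving is that the solve runs once rather than once per receiver – at the cost of making everyone dependent on the originating viewer’s frame rate, as noted above.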

LSL Integration

  • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
  • Rider Linden has been working on an LSL API to control animation on an avatar.
Conceptually it would behave like LEAP from LSL. The simulator will send the animation instructions down to the targeted viewer which will perform the IK and animation, and then send the results back as though they were coming from any other LEAP plugin. Targeting objects should be possible with the API (although I’ll add a world position/rotation parameter so you don’t have to do the math yourself).

– Rider Linden

  • This work is possibly best described as moving a step towards enabling an avatar using puppetry to reach out and pick up an apple by allowing a script to position the avatar’s hand at the location of the apple, from where the user can use a supported capture tool to “pick up” the apple.
  • The envisioned actions would be: the user moves their avatar’s arm towards the apple; the apple detects the collision between the avatar’s hand and itself, and attaches to the hand as if it had been directly picked up.
  • Further work is required involving collisions between the apple and the avatar’s hand, so the apple knows it is being “grabbed”. This might be achieved by using an existing collision event as the trigger for attachment, or an entirely new event.
  • One problem is to avoid having multiple extra collision objects bouncing around the physics engine for every single attachment point on an avatar (55 in total), which would add up, performance-wise, very quickly.
    • One suggestion for mitigating this is that as the region knows where your hand is (which is true with the attachment update stream), it could be possible to implement a new “grab” action that works in the physics simulation for picking up small objects; however, it would likely need some hint / magic on the viewer to render the object “at the hand” rather than “near the hand”.
  • Beyond this, there is also additional work to allow avatar-to-avatar interactions via puppetry – such as the aforementioned high five – which involves addressing some permission issues.
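For a sense of the IK calculation involved when a script asks for the hand to be placed at the apple’s position, the following is a generic two-bone analytic IK sketch in plain Python – not LL’s solver, and the bone lengths and angle conventions are purely illustrative. Given upper and lower arm lengths, it returns shoulder and elbow angles that place the wrist on a 2D target, via the law of cosines:

```python
import math

# Minimal two-bone analytic IK sketch (illustrative only): find the shoulder
# and elbow angles that put the wrist on a 2D target, given the two bone
# lengths. Targets beyond full reach are clamped to the arm's extent.

def two_bone_ik(target_x, target_y, upper=0.3, lower=0.3):
    d = math.hypot(target_x, target_y)
    d = min(d, upper + lower - 1e-9)           # clamp: target out of reach
    # Law of cosines gives the interior elbow angle...
    cos_elbow = (upper**2 + lower**2 - d**2) / (2 * upper * lower)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # ...and the shoulder is the aim angle minus a correction for the bend.
    cos_corr = (upper**2 + d**2 - lower**2) / (2 * upper * d)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_corr)))
    return shoulder, elbow
```

A real solver works on the full 3D skeleton with joint limits, but the core idea – solve for joint angles from a target position, rather than playing a canned animation – is the same.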

In Brief

  • Concern was raised that an emphasis on puppetry over the traditional canned animation assets could make SL inaccessible for some (because of the need for additional motion capture hardware, a possible need for more powerful client computers, etc.). In response, those at the meeting pointed out:
    • The approach being taken by the Lab is not new – it has been a common factor (albeit implemented in a variety of ways) within games for well over a decade, and is used in multi-player games without participants being “lagged” to the point where gameplay is broken.
    • What is being proposed with Puppetry is not even “new” (in the broadest sense); rather it is adding a further layer of animation capabilities to Second Life which can bring a greater sense of interactivity to the platform.
    • In terms of hardware, it was further pointed out that while some at the meeting are using VR hardware – headsets and peripherals – all that is actually required to start leveraging the capabilities (as LL have demonstrated in the animated GIF forming the banner of this summary) is a basic webcam.
  • In a more general conversation, it was pointed out by those at the meeting and the Lab engineers that:
    • Whilst things like streaming puppetry animations may at times result in more visible lag / animation desynchronisation, it offers so much more in the way of avatar interaction with the world that it would be more than worthwhile.
    • This work is purely about puppetry and interactivity; it does not actually alter the way more general animations – walking, standing, etc., work, as the underpinning locomotion engine within the simulator and how the viewer calculates motion based on data from the simulator is not being altered.
    • Instead, the LSL API (and the LEAP API?) will enable general avatar orientation and attachment point orientation / movement to ensure the arm correctly reaches out to “grab” the apple mentioned above, by effectively running in conjunction with the locomotion engine.

Date of Next Meeting

  • Thursday, February 23rd, 2023, 13:00 SLT.

2023 week 5: SL CCUG meeting summary

Jitters Coffee Shop, Heterocera – December 2022 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, February 2nd, 2023 at 13:00 SLT. These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, and are chaired by Vir Linden. Dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments / ask or respond to comments, but note that you will need Voice to be enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

Official Viewers Summary

Available Viewers

  • On Thursday, February 2nd, 2023:
    • The Maintenance Q(uality) viewer, version 6.6.9.577968 was promoted to de facto release status.
    • The PBR Materials project viewer updated to version 7.0.0.577997.
  • On Friday, February 3rd, 2023 the Maintenance R RC viewer updated to version 6.6.10.578087 – translation updates and the return of slam bits.
The remaining official viewer pipelines are unchanged, as follows:
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Performance Floater / Auto-FPS RC viewer, version 6.6.9.577251, January 4, 2023.
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.577780, January 25, 2023 – This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

General Viewer Notes

  • It is believed the next viewer due for promotion will be the Performance Floater / Auto FPS viewer, at which point all viewer releases will officially be based on VS 2022 for Windows, as this is the first RC viewer merged with the VS 2022 release process (previously a separate viewer build fork).

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are glTF scenes that don’t have any nodes / geometry; they only have the materials array, and there is only one material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.
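For reference, a “materials-only” glTF document of the kind described above can be sketched in a few lines. This is a hypothetical helper using property names from the core glTF 2.0 specification; how LL actually packages and serialises the asset internally is not documented here:

```python
import json

# Sketch of a "materials-only glTF scene": a minimal glTF 2.0 document with
# no nodes or geometry, just a one-element materials array. Property names
# follow the core glTF 2.0 spec; the helper itself is illustrative.

def minimal_material_asset(base_color=(1.0, 1.0, 1.0, 1.0),
                           metallic=0.0, roughness=1.0):
    doc = {
        "asset": {"version": "2.0"},           # required by the glTF spec
        "materials": [{                        # exactly one material, no nodes / meshes
            "pbrMetallicRoughness": {
                "baseColorFactor": list(base_color),
                "metallicFactor": metallic,
                "roughnessFactor": roughness,
            }
        }],
    }
    return json.dumps(doc)
```

The absence of `nodes` / `meshes` arrays is what makes such an asset purely a tradable material rather than renderable geometry.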

Status

  • Viewer:
    • The latest update to the viewer includes improvements to probe light blending (e.g. between overlapping manually-placed probes), additional work on radiance and irradiance, and the resolution quality of reflections set back up to 256×256 per face (although this will downgrade to 128×128 if the viewer has less than 2 GB of available VRAM). All of this should leave the visual quality of reflection probes “pretty close” to what will be seen in the RC / release version of the viewer.
    • However, there is still a hard transition line between overlapping automatic probes and manually-placed probes, and LL want to encourage those using probes to use full scene manually-placed probes to ensure a much better quality of reflections.
    • Automatically-placed probes have had parallax correction removed; this should prevent the issue of people seeing the ray-traced sphere of the probe rather than the reflection of the environment. Parallax correction is now only available with manually-placed probes.
    • Screen Space Reflections (SSR) have also been integrated with the reflection probes, so that when the viewer does a look-up on a reflection probe and SSR is enabled, if there is an available SSR sample that gives a better result than the (generally automatic) reflection probe’s sample, it will be used instead.
    • In addition, there has been further work on optimisation and frame rate smoothing.
    • Still to be addressed in the viewer:
      • Some general artefacts still requiring clean-up, notably water reflections / light speckling, objects being rotated in reflections, and some remaining issues with colour curves on HUD attachments.
      • Further UI work.
      • There are some glTF materials caching issues which have yet to be addressed.
  • LL is in the process of finalising compliance with the glTF standards. This does not impact viewer and glTF testing on Aditi (the beta grid), but will require updates to the viewer which are likely to come into force during RC / beta testing, meaning that only the updated viewer should be used from that point forward. However, it should not impact / alter overall functionality within the viewer.
    • This is the result of an incorrect assumption that some of the data required by the glTF specification would “always be there”, when in fact the filtering process LL uses to ensure uploaded glTF files do not contain anything malicious was dropping some of that data (so the visuals were correct, but the data was “wrong” compared to the standard).
  • In an effort to have media on a prim work on glTF materials in a similar manner to the current SL materials (where setting a face to media on an object overwrites the diffuse (texture) map), setting media on a PBR materials face will overwrite both the diffuse and emissive maps, whilst still allowing media to be viewed on a shiny surface, etc., as per the current behaviour.
  • There is concern about the impact of allowing “double-sided” materials as a part of glTF PBR (e.g. the risk of over-use, potential performance impacts, etc.).
    • LL acknowledged that a lot of samples they’ve tested from Sketchfab do contain a lot of “unnecessary” double-sided surfaces, and so are considering implementing a check and warning on import where this occurs.
    • There is currently a checkbox for enabling / disabling double-sided materials in the PBR viewer’s materials editor, but LL’s view is that allowing double-sided content is not going to “ruin SL forever” – a view not necessarily shared by some creators.
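Two of the behaviours described under Status above reduce to simple decision logic, sketched below in illustrative Python only (not viewer code): the probe face resolution drops when there is less than 2 GB of available VRAM, and an SSR result takes precedence over a probe sample – with “better result” simplified here to mere availability:

```python
# Illustrative logic only (not viewer code) for two decisions described in
# the Status notes: probe face resolution falling back below 2 GB of free
# VRAM, and SSR results taking precedence over a probe sample when available.

def probe_face_resolution(free_vram_gb):
    # 256x256 per face by default; 128x128 on lower-VRAM systems.
    return 256 if free_vram_gb >= 2.0 else 128

def pick_reflection_sample(ssr_enabled, ssr_sample, probe_sample):
    # "Better" is simplified here to the mere availability of an SSR hit;
    # the real viewer compares sample quality.
    if ssr_enabled and ssr_sample is not None:
        return ssr_sample
    return probe_sample
```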

Vulkan

  • It is possible that 2023 will see the resumption of work to add support for the Vulkan / MoltenVK (for OS X) API alongside OpenGL, as the latter is growing increasingly long in the tooth and is gradually being deprecated / no longer used or supported on a number of fronts.
  • Within LL, it is believed that implementing this support will not only get SL past the issue of OpenGL’s status, but also offer performance improvements within the viewer (e.g. allowing SL to be less CPU-bound and make more use of the GPU, reducing the volume of draw calls, etc.).

In Brief

  • LL have been experimenting with pulling purchased content using glTF materials from Sketchfab and importing it to SL. The process is not easy, and it is acknowledged that more work needs to be done to smooth this out at some point in the future – it will not be improved before the current project moves to a release status.
    • This raised the question of licensing, rights, mesh uploads to SL and the Terms of Service, with a note from LL that this likely needs to be reviewed as a whole, and guidelines provided to specify requirements for uploading content purchased by / built via 3rd-party sites and things like license compliance (together with the need for LL to determine the means of gatekeeping things like license compliance).
    • Routes to upload from the likes of Sketchfab would be beneficial, as it would mean creators used to building for those platforms could pull their content into SL without having to learn a further (esoteric) set of content creation requirements.
  • There was a general discussion on texture compression with glTF (which remains in place for all maps other than normal maps, where lossless compression is viewed as important), and on glTF allowing backward-facing normal maps (unsupported with the current SL materials system), and the possible problems / benefits these might lead to.
  • In terms of which parts of the glTF specification SL is supporting, LL will likely produce a living document that will detail the initial support, and which will be updated when further support is added / certified by Khronos.
  • For future work, it appears at present that supporting glTF mesh imports would take priority over implementing glTF materials extensions from the specification.
  • Related to the above point, discussions are in progress within LL on how to continue to support COLLADA (.DAE) mesh imports into SL without actively supporting COLLADA format.
    • Reasons for sunsetting direct COLLADA import to SL include:  there are still a lot of traps creators can fall into with the SL implementation of COLLADA support which can be a barrier to entry for those wishing to engage in SL as creators (similar traps do not exist within the glTF standard); maintaining the COLLADA importer adds cost and overheads, as every importer update has to be tested against it and adjustments made where they break COLLADA uploads.
    • One option under consideration is the use of the Open Asset Import Library (Assimp). This acts as an intermediary (so .DAE files get converted to the glTF mesh format), allowing a route of upload for COLLADA files to SL without LL having to maintain the COLLADA import mechanism directly, reducing the overheads of having to both test against it and maintain COLLADA import support.
    • This route would also allow other mesh formats (e.g. .FBX) to be “supported”, with the glTF specification support document mentioned above available for creators to check which elements within the glTF specification they may need to add in order to get a like-for-like between their build format (.DAE, .FBX, etc.) and glTF.
    • Concern was raised about what the removal of direct COLLADA import might do to the market for full permission .DAE files which can be purchased through the SL Marketplace for upload to SL (although it is not clear how big this market is).
    • It is hoped that any approach taken would offer at least as robust a means of importing COLLADA meshes as the current importer.
  • Also under internal discussion is how to support hierarchies in the context of glTF assets and what that means (e.g. import an asset with a node graph hierarchy, attach it to a mesh, and then manipulate the hierarchy via the edit tools or LSL).

Next Meeting

  • Thursday, February 16th, 2023.

2023 week 3: SL CCUG and TPV Developer meetings summary

Where Our Journey Begins, November 2022 – blog post
The following notes were taken from
  • My audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, January 19th 2023 at 13:00 SLT.
  • Pantera’s video of the Third-Party Viewer Developer (TPVD) meeting held on Friday, January 20th, 2023 at 13:00 SLT, embedded at the end of this article.
These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar; also note that the following is a summary of the key topics discussed in the meetings and is not intended to be a full transcript of all points raised. Note: the TPVD meeting was abbreviated to 20 minutes.

Official Viewers Status – TPV Meeting

Available Viewers

  • On Thursday, January 19th, 2023, the Maintenance R RC viewer was updated to version 6.6.9.577678 – translation updates and the return of slam bits.
  • On Wednesday, January 18th, 2023:
    • The Maintenance (Q)uality RC viewer was updated to version 6.6.9.577581 – new Debug settings UI, quality of life improvements.
    • The PBR Materials project viewer updated to version 7.0.0.577610, on January 19, 2023 – SSR support.
      • This viewer has been reported as “broken” when running on systems with AMD GPUs, a situation that was being investigated as these notes were being written.
This leaves the rest of the currently-available official viewers as:
  • Release viewer: Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.8.576863 Monday, December 12, 2022.
  • Release Candidate viewers:
    • Performance Floater / Auto-FPS RC viewer, version 6.6.9.577251, January 4, 2023.
  • Project viewers:
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

General Viewer Notes

  • Vir pointed out that as Microsoft ended support for Windows 8 on January 10th, 2023, it is no longer regarded as a supported operating system for running Second Life, and the viewer will not be tested against it (the System Requirements page has been updated to reflect this, specifying Windows 10 with the most recent service pack as the baseline supported Windows OS).
  • The second phase of the Github work is on-going, notably updating all the viewer build libraries.

Inventory Enhancement Project – Both Meetings

Linden Lab is looking to enhance the Inventory system.
  • The first element of this work is to be the addition of a fixed-resolution thumbnail preview capability, allowing users to see a small image of a given object (where this makes sense – so the likes of note cards and scripts would be excluded) within inventory, with these thumbnails either being of individual items or entire folders.
  • This work has now started, but it will be “some time” before there is anything user-facing to show.
  • A code contribution from Kitty Barnett (Catznip) for an inventory texture tool tip / preview may well be folded in to this work.
  • Once the thumbnail preview work has been completed, it is possible the Lab will look to further enhancements to inventory management. One future enhancement under consideration is support for folders to be included in the Contents inventory of individual objects.

glTF Materials and Reflection Probes – CCUG

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • Viewer:
    • Screen Space Reflections (SSR) has now been integrated into the Project Viewer (January 19th onwards), although work around this is still being finalised, notably mixing Reflection Probes with SSR.
    • Some fixing is required to the default colour curves in the viewer.
    • Stability issues have been noted on Intel integrated graphics hardware, and these are being investigated / fixed.
    • There have now been a total of two further weeks focused on optimising the viewer’s performance, and the results of this work should be available in the next update to the Project Viewer.
    • UI updates: the viewer will likely see some UI updates as it progresses:
      • The next update will likely include a new dedicated icon on the Build floater for creating Reflection Probes.
      • The drop-down in the Build floater’s Textures tab (currently Materials and Media on the release viewer) may be re-labelled for PBR (where it is Materials, Media, PBR) to more clearly differentiate between the current materials maps system and the “new” PBR materials.
      • Similarly, the new Materials Inventory folder in the Project Viewer may also be renamed to avoid the potential for confusion between the use of PBR materials assets and the current materials maps system.
      • Further, the menu option Build → Upload → Materials (in the Project Viewer) may be renamed to more clearly reflect it is for PBR materials, not material maps.
      • Exactly what terms will be used is TBD, but the general push (from LL’s perspective) is to use meaningful labels on options and menus which can be Googled with some relevance by those wanting to know more about the underpinning file formats / standards, etc.
  • Work is going into messaging between the back-end and the viewer to ensure it will scale once the viewer starts to see widespread use.

In Brief

  • As the PBR Materials project is approaching a possible RC release, the question has been asked as to what is next on the graphics front. While no formal decision has been made, options include:
    • A return to looking at adding something like Vulkan / MoltenVK (for OS X) API support alongside of the OpenGL API (which has been slated for deprecation by Apple and is growing increasingly long in the tooth). This work is being looked at more as a performance optimisation rather than a visual boost to rendering (e.g. allowing SL to be less CPU-bound and make more use of the GPU, reducing the volume of draw calls, etc.).
    • Further expansion of the glTF work to include mesh (and moving away from Collada .DAE), animations, etc. But again, no definitive plans / direction has been agreed within the Lab.
      • While there is an appetite at the Lab to support as much of the glTF 2.0 specification as possible, some aspects will be excluded – mentioned at the meeting were setting the filter mode and sampler state of textures, both of which were described as “problematic” for implementation within SL.
    • Overhauling / updating support / options for ground textures.
  • The CCUG meeting saw a general discussion on animations, the skeleton and all that it involves (bones, attachment points, collision volumes, rigging, etc.), and the rules / policies surrounding them, couched initially in the potential for expanding the skeleton / avatar system – for which there are currently no plans beyond what has already been done.
  • Pivot points: (related to the above) past CCUG meetings saw some extended discussion on implementing pivot points on the skeleton, and LL indicated they would investigate this. Since then, it has become increasingly apparent that a node hierarchy would be beneficial and potentially easier to implement; ergo, pivot points have been pulled into the pot for a future hierarchy project.
  • There is a known bug where Premium Plus users  – who gain free texture uploads – are being charged for textures included in mesh uploads. An upcoming simulator-side maintenance update should correct this.
  • A request was made at the CCUG meeting to allow mesh uploads greater than the 64m single object size limit, to make it easier to import large structures / scenes without having to cut them into sections (and thus avoid potential issues in fit, scale, and updates, which can also result in Interest List-related rendering issues).
    • LL is unlikely to change the limit, but would prefer to provide improved tools that can help content creators / scene builders.
    • One such tool discussed – which is currently not in development – is a “scene view” tool which identifies every individual instance of an object in a scene, and provides a mechanism to allow them (subject to permissions, obviously) to be swapped out to fix things like “broken” items / those for which there is an update / replacement.
 

Next Meetings

  • CCUG: Thursday, February 2nd, 2023.
  • TPVD: Friday, February 17th, 2023.

2023 SL Puppetry project week #2 summary

Puppetry demonstration via Linden Lab – see below.  Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, January 12th, 2023 Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General description of the project and its inception:

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new, improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • This project is taking in a lot of additional ideas – animation standards, improving the current animation system, enabling truer avatar / avatar and avatar / object interactions – such that it is likely to evolve into a rolling development, with immediate targets for development / implementation as they are agreed upon, to be followed by future enhancements.
  • As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

LSL Integration

  • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
  • Rider Linden is starting to look at LSL integration – the first step being to make the simulator aware of what is actually animating.
  • Currently, the code he has developed lets the server know the position of an avatar’s attachment points; this sends details of 55 points (HUD points excepted). Attachment points have been selected over bones, as the simulator already has a solid concept of attachment points, and it avoids complications with rigged meshes “doing their own thing” with bone positions.
  • A concern with this is the number of updates being sent to the server for processing.
    • One idea is to refine the code so that only the attachment points which change relative to the avatar centre (avatar frame / Local Position relative to the avatar) actually send information to the server, in order to reduce the number of updates being generated.
    • Another idea might be to only send updates every n frames, rather than every frame. This would reduce the fidelity of movement, but could still provide sufficient data while reducing the load on the simulator, particularly where multiple avatars in a region are using puppetry.
  • This issue also relates to synchronising puppetry actions across multiple viewers – a long-standing issue, given that animation playback is viewer-side, and not genuinely synchronised across viewers (the resync function found in some TPVs only does so locally).
  • All of the above led to a discussion of ways and means to best allow LSL integration with animations and ensure a reasonable transmission of results, together with decent synchronisation between the viewer and the simulator, whether by frame count or time stamp, in order to ensure predictability of results across multiple viewers.
  • In addition, the discussion covered the advantage of enhancing Second Life to support procedural animations alongside the current canned animations.
  • Rider is also looking into a script enhancement to register collisions.
  • There was some conflating of ideas during the discussion – immediate first steps in opening Puppetry to LSL, and more far reaching goals – setting position, registering collisions (per the above), defining better interpolation for positioning (e.g. as defined in the Khronos glTF specification), etc., which caused a degree of confusion.
  • However, the openness towards making Puppetry a good foundation for future enhancement (such as moving more to procedural-based animations, enabling SL to support “industry standard” animation workflows to encourage animators into the platform, etc.) remains, together with (hopefully) enabling more realistic avatar / avatar and avatar / object interactions.
  • That said, Simon Linden did offer a note of caution to all discussing the work:
Not to pop the bubble, but everyone please keep in mind all the stuff we’ve talked about is experimental and really interesting. I have no idea what we can make into real features and what can work with crowds and all the other interesting problems to make it happen well – we’ll see what we all can do this year 🙂

– Simon Linden
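The two throttling ideas discussed above can be sketched in outline. The following Python fragment is purely illustrative – it is not viewer code, and the epsilon and frame-interval values are invented for the example – but it shows the combined effect of only reporting attachment points that have moved in the avatar’s local frame, and only considering an update every N frames:

```python
# Illustrative sketch (not actual viewer code) of the two throttling ideas
# discussed: only send attachment points that have moved relative to the
# avatar's local frame, and only consider sending every N frames.
from dataclasses import dataclass, field

EPSILON = 0.01   # metres; minimum local-frame movement worth reporting (invented value)
SEND_EVERY = 3   # only consider an update every N frames (invented value)

@dataclass
class AttachmentThrottle:
    last_sent: dict = field(default_factory=dict)  # point name -> last transmitted local position
    frame: int = 0

    def updates_to_send(self, local_positions: dict) -> dict:
        """Return only the attachment points whose avatar-local position
        has changed (beyond EPSILON) since the last transmitted update."""
        self.frame += 1
        if self.frame % SEND_EVERY != 0:
            return {}  # skip this frame entirely
        changed = {}
        for point, pos in local_positions.items():
            prev = self.last_sent.get(point)
            if prev is None or max(abs(a - b) for a, b in zip(pos, prev)) > EPSILON:
                changed[point] = pos
                self.last_sent[point] = pos
        return changed
```

Either measure alone reduces the update count; combined, a mostly-stationary avatar generates almost no traffic, at the cost of the reduced movement fidelity noted above.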

Date of Next Meeting

  • Thursday, January 26th, 2023, 13:00 SLT.

2023 week #1: SL CCUG meeting summary

Mullein Woods, November 2022 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, January 5th, 2023 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar; also note that the following is a summary of the key topics discussed in the meeting and is not intended to be a full transcript of all points raised.

Official Viewers Status

  • On Wednesday, January 4th, 2023:
    • The Maintenance Q(uality) RC viewer updated to version 6.6.9.577418.
    • The Performance Floater / Auto-FPS RC viewer updated to version 6.6.9.577251.
  • Both the VS 2022 Build RC viewer and the LMR6 project viewer have been withdrawn.
This leaves the rest of the currently-available official viewers as:
  • Release viewer: Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.8.576863, Monday, December 12, 2022.
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.577157, December 14, 2022. Note: this viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

Inventory Enhancement Project

Linden Lab is looking to enhance the Inventory system.
  • The first element of this work is to be the addition of a fixed-resolution thumbnail preview capability, allowing users to see a small image of a given object (where this makes sense – so the likes of note cards and scripts would be excluded) within inventory, with these thumbnails either being of individual items or entire folders.
    • The first phase of the work is determining how to generate the thumbnail images and ensure they maintain an association with the objects to which they are related (e.g. so if an item is sold or transferred to another user, the thumbnail goes with it).
    • Once this has been decided, the next phase will be to build-out the UI so that such thumbnails can be viewed from inventory.
    • This work will not replace the Outfit Folder image capability nor will it prevent creators from including high resolution images with their products if they wish.
  • Once the thumbnail preview work has been completed, it is possible the Lab will look to further enhancements to inventory management. One future enhancement under consideration is support for folders to be included in the Contents inventory of individual objects.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • The focus remains on bug and regression fixing within the viewer, plus quality of life improvements, particularly as wider grid testing has found the PBR viewer can only generate single-digit FPS in some regions.
  • Screen Space Reflections (SSR): Geenz Linden continues to work on integrating SSR into the PBR viewer, but is encountering issues.

Animation System Enhancements – A Discussion

In response to requests for the animation system to be improved (e.g. via CCUG meetings, as a result of the Puppetry project, etc.), Vir Linden asked those at the meeting to state what they see as the most important changes / updates they would like to see. Responses included those expressed at the Server User Group meeting earlier in the week:
  • A procedural animation system to allow creators / users to set the rules of how avatars walk, run and jump; their timings, and how animations play priority-wise and mixing-wise in the series, all able to be packaged up into an item – it has been suggested that whilst “old”, the SL animation system is well-placed to be folded into a procedural animation system.

  • Improved animation formats and easier means of animation import into Second Life.
  • The ability to dynamically set animation priorities for more fluid animation integration (e.g. when you are holding and pointing a gun, you continue to point it as you walk, rather than the avatar’s arm dropping to a walking animation when moving).
  • Viewer-side animation editor.
  • Better support for inverse kinematics.
  • Collaboration between the Puppetry team, the glTF team and any animation project to ensure consistency of decision-making about formats, proper LSL support / calls, etc.
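To illustrate the dynamic priority request above: in the current animation system each animation carries a priority, and for any given joint the highest-priority animation driving that joint wins. The sketch below (animation names, joint names and priority values are all invented for the example, not SL internals) shows how per-joint resolution lets a high-priority “aim” pose hold the arms while a lower-priority walk drives the legs – dynamically raising or lowering those priorities would let the blend change on the fly:

```python
# Illustrative sketch of per-joint animation priority resolution: for each
# joint, the highest-priority playing animation that drives it wins. With
# dynamically adjustable priorities, a "point gun" pose could keep control
# of the arms while a walk animation drives the legs.

def resolve_joints(playing):
    """playing: list of (name, priority, joints) tuples.
    Returns a dict mapping each joint to the winning animation's name."""
    winner = {}  # joint -> animation name
    best = {}    # joint -> highest priority seen so far
    for name, priority, joints in playing:
        for joint in joints:
            if joint not in best or priority > best[joint]:
                best[joint] = priority
                winner[joint] = name
    return winner

# Walk drives everything at priority 2; the aim pose holds the arms at 4.
playing = [
    ("walk", 2, ["hip", "l_leg", "r_leg", "l_arm", "r_arm"]),
    ("aim_gun", 4, ["l_arm", "r_arm"]),
]
```

Without the higher-priority arm pose (or if its priority were dropped at runtime), the walk animation would reclaim the arms – which is exactly the “arm dropping to a walking animation” behaviour the request seeks to avoid.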
This discussion covered a lot of ground, including the potential for the implementation of an “animation 2.0” system which could potentially operate alongside the existing system (much like PBR materials and “legacy” materials); the benefits of greater adherence to emerging standards – particularly in the area of avatar / skeleton formats and capabilities – and the fact that SL is both well-placed to be a part of defining those standards whilst also being hampered by the fact the existing SL avatar format is a niche product / approach, and more. However, the two key points of the discussion might be summarised as:
  • Changes to the animation / avatar systems are not projects the Lab is working on at present.
  • However, it demonstrates that, as with recent projects, the Lab is looking seriously at enhancing SL and moving it towards more readily understood standards. As such, it is taking the time to ascertain options that are exciting to creators and users and which might be seen as benefiting the platform and its future growth, and so might be formalised into active projects – and include user engagement where appropriate in their development.
In terms of what might be attempted by way of “small-scale” improvements to the animation system, the viewpoint from LL is that the ability to dynamically set animation priorities and adding scaling support to the animation format are seen as providing users / creators with recognisable benefits.

Next Meeting

  • Thursday, January 19th, 2023.