2023 SL Puppetry project week #2 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, January 12th, 2023 Puppetry Project meetings held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), at the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General description of the project and its inception:

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • No other special needs beyond the project viewer are required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • This project is taking in a lot of additional ideas – animation standards, improving the current animation system, enabling truer avatar / avatar and avatar / object interactions – such that it is likely to evolve into a rolling development, with immediate targets for development / implementation as they are agreed upon, followed by future enhancements.
  • As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
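As a simple illustration of the inverse kinematics mentioned above: given a 2D target for an end effector (say, a wrist position estimated from webcam tracking), the joint angles of a two-bone limb can be solved analytically with the law of cosines. The sketch below is purely illustrative and hypothetical – it is not code from the project viewer or the LEAP system.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic two-link IK in the 2D plane: given a target (x, y) for the
    end effector (e.g. a wrist position derived from webcam tracking) and
    bone lengths l1, l2, return (shoulder, elbow) angles in radians."""
    d = math.sqrt(x * x + y * y)
    # Clamp unreachable targets to the edge of the reachable annulus.
    d = max(abs(l1 - l2), min(l1 + l2, d))
    # Law of cosines gives the elbow bend.
    cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: direction to target, minus the offset the bend introduces.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

For example, with unit-length bones and a target at (1, 1), the solver returns a straight shoulder (0 radians) and a 90° elbow bend, which places the end effector exactly on the target when run through forward kinematics.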

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

LSL Integration

  • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
  • Rider Linden is starting to look at LSL integration – the first step being to make the simulator aware of what is actually animating.
  • Currently, the code he has developed lets the server know the position of an avatar’s attachment points; this sends details of 55 points (HUD points excepted). Attachment points have been selected over bones, as the simulator already has a solid concept of attachment points, and it avoids complications with rigged meshes “doing their own thing” with bone positions.
  • A concern with this is the number of updates being sent to the server for processing.
    • One idea is to refine the code so that only the attachment points which change relative to the avatar centre (avatar frame / local position relative to the avatar) actually send information to the server, in order to reduce the number of updates being generated.
    • Another idea might be to only send updates every n frames, rather than every frame. This would reduce the fidelity of movement, but could still provide sufficient data while reducing the load on the simulator, particularly where multiple avatars in a region are using puppetry.
  • This issue is related to synchronising puppetry actions across multiple viewers as well; a long-standing issue, given that animation playback is viewer-side, and not genuinely synchronised across viewers (the resync function found in some TPVs only does so locally).
  • All of the above led to a discussion of ways and means to best allow LSL integration with animations and ensure a reasonable transmission of results, together with decent synchronisation between the viewer and the simulator, whether by frame count or time stamp, in order to ensure predictability of results across multiple viewers.
  • In addition, the discussion touched on the advantage of enhancing Second Life to support procedural animations alongside the current canned animations.
  • Rider is also looking into a script enhancement to register collisions.
  • There was some conflating of ideas during the discussion – immediate first steps in opening Puppetry to LSL, and more far reaching goals – setting position, registering collisions (per the above), defining better interpolation for positioning (e.g. as defined in the Khronos glTF specification), etc., which caused a degree of confusion.
  • However, the openness towards making Puppetry a good foundation for future enhancement (such as moving more to procedural-based animations, enabling SL to support “industry standard” animation workflows to encourage animators into the platform, etc.) remains, together with (hopefully) enabling more realistic avatar / avatar and avatar / object interactions.
  • That said, Simon Linden did offer a note of caution to all discussing the work:
Not to pop the bubble, but everyone please keep in mind all the stuff we’ve talked about is experimental and really interesting. I have no idea what we can make into real features and what can work with crowds and all the other interesting problems to make it happen well – we’ll see what we all can do this year 🙂

– Simon Linden
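The two load-reduction ideas in the notes above – sending only attachment points that have changed relative to the avatar frame, and sending updates every n frames rather than every frame – could be sketched roughly as follows. This is a hypothetical illustration; the class, thresholds, and data shapes here are assumptions for the sake of the example, not the project’s actual code or protocol.

```python
class AttachmentUpdateFilter:
    """Hypothetical client-side filter: only emit an attachment point's
    local position (relative to the avatar frame) when it has moved more
    than `epsilon` since the last send, and at most once every `n` frames."""

    def __init__(self, epsilon=0.01, every_n_frames=2):
        self.epsilon = epsilon
        self.n = every_n_frames
        self.frame = 0
        self.last_sent = {}  # point name -> (x, y, z) last transmitted

    def updates_for_frame(self, positions):
        """positions: dict of point name -> (x, y, z) in the avatar frame.
        Returns the subset of points worth sending this frame."""
        self.frame += 1
        if self.frame % self.n != 0:
            return {}  # off-frame: send nothing at all
        out = {}
        for pid, pos in positions.items():
            prev = self.last_sent.get(pid)
            if prev is None or _dist(prev, pos) > self.epsilon:
                out[pid] = pos
                self.last_sent[pid] = pos
        return out

def _dist(a, b):
    """Euclidean distance between two (x, y, z) tuples."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
```

With `every_n_frames=2`, half of all frames generate no traffic at all, and on the frames that do fire, sub-epsilon jitter (e.g. webcam tracking noise) is suppressed – at the cost of the fidelity trade-off the notes mention.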

Date of Next Meeting

  • Thursday, January 26th, 2023, 13:00 SLT.

Comments

  1. I appreciate your posting the puppetry status! My purely selfish interest in puppetry is to use my Second Life avatar as my virtual avatar for streaming, in this case to YouTube. I already have a working set-up where the lip movements of the avatar track my voice, as they do for everybody else in the standard viewer when enabled. What’s missing is my avatar’s head movement using my webcam tracking my head, versus using my mouse tracking my hand. I did download the special viewer and read the requirements to add Python and plug-ins. If someone already has this working, I would like to set a time to meet them in the test viewer and see how their webcam tracking looks. Cheers!


    1. Thanks! I confess to being a little out of my depth with some of the meeting discussions, and am hoping to turn to someone with appropriate expertise to offer an overview / examination of the project once it gets to something like RC status and is (hopefully) more accessible to users. Currently, the meetings are more discussions on potential and directions / foundational work, which is all feeding into a better definition of what the first tranche of development work will aim to deliver overall.

      In this, things have already expanded well beyond the initial outline of the work – and it is good to see how much a) the Lab is responding to input from animators and content developers attending the meeting, and b) how those same content developers are pushing for puppetry to embrace the same over-arching specification / “standards” LL is already adopting in other significant projects.


    1. Potentially, although the focus of Puppetry is not primarily on VR headsets and those who have them; the applications are pretty broad. That said, OpenVR has been mentioned as an API route to take alongside LEAP.


      1. This new FS viewer update that came out is fast enough for VR! So all they need is to add in VR support. I think someone already did work on a modded version of FS in the past that had VR support, but I think they let the code get out of date, so the VR part would need some fixing to work right. But I bet it would not be hard for the FS team to do. It would be great if they released it as an official part of the next FS release (in a few months or so).


        1. Given what is happening with Puppetry – which, as you know, is still at the formulation / kicking-ideas-around stage based on the very early cuts of the code – it’ll likely be a lot easier for TPVs to wait and see what comes out of LL by way of core code that can be merged (as happened with the performance boost FS gained with the 6.6.3 release, tweaked in the current 6.6.8 release) and then make a determination (if any) on what they might consider doing in addition (and as a potential code contribution back to LL).

