
Note: this has been updated to include comments made at the TPV Developer meeting on Friday, May 12th.
The following notes have been taken from the chat log and audio recording of the Thursday, May 11th, 2023 Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).
Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.
Project Summary
General Project Description as Originally Conceived
LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However, since Puppetry’s announcement, what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions.
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?
Leviathan Linden
- Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic the physical world actions of their owners (e.g. head, hand, arm movements) through tools such as a webcam and technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system (see the sketch at the end of this list).
- Note that facial expressions and finger movements are not currently enabled.
- Most movement is in the 2D plane (e.g. hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
- The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
- Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
- Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
- There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
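For those unfamiliar with LEAP: the viewer launches an external script, and the two exchange length-prefixed LLSD messages over stdin / stdout. Purely as a hedged sketch of what the sending side of a puppetry plug-in might look like – the llsd Python package, the pump name and the “move” message schema below are illustrative assumptions on my part, not the official API:

```python
#!/usr/bin/env python3
# A minimal sketch of the LEAP wire protocol for a puppetry plug-in.
# Assumptions: the "llsd" Python package is installed, and messages are
# framed as "<payload-length>:<llsd-notation>". The pump name and the
# "move" command schema below are illustrative, not authoritative.
import sys
import llsd

def leap_send(pump, data):
    """Serialize an LLSD map and write it to stdout with a length prefix."""
    payload = llsd.format_notation({"pump": pump, "data": data})
    sys.stdout.write("%d:%s" % (len(payload), payload.decode("utf-8")))
    sys.stdout.flush()

# Hypothetical update: steer the left wrist toward a body-relative target.
# A real plug-in would first read the viewer's hello message from stdin to
# learn the reply pump; that handshake is omitted for brevity.
leap_send("puppetry.controller", {
    "command": "move",
    "mWristLeft": {"position": [0.1, 0.2, 0.0]},
})
```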
Additional Work Not Originally In-Scope
- Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
- Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
- Enhanced LSL integration for animation control.
- Adoption of better animation standards – possibly glTF.
- Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
Bugs, Feature Requests and Code Submissions
- For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
- There is also a public facing Kanban board with public issues.
- Those wishing to submit code (plug-ins or other) or to offer a specific feature that might be used with Puppetry should:
- Discuss it with the Puppetry team and work with them to ensure a proper convergence of ideas.
- Be signed up to the Lab’s contribution agreement in order for submitted code to be accepted for review / use:
- Contributor Agreement notes – SL wiki.
- Contributor Agreement FAQ – SL wiki.
- Code Contributor agreement – PDF form.
Further Information
- Introducing Second Life Puppetry – Linden Lab blog post (August 30th, 2022).
- Puppetry: How it Works – Second Life Knowledge Base.
- Second Life Puppetry wiki index page – note that more subjects / categories are / will be added over time.
- Second Life Public Calendar – meeting dates.
- Alternate Viewers page – for the latest version of the Puppetry viewer.
Meeting Notes
Viewer Progress
- An updated version of the viewer – version 6.6.12.579958 – was released on Thursday, May 11th.
- This update includes access to Rider Linden’s experimental feature for tracking attachment points and forwarding the data to the server.
- It also includes various incremental improvements to puppetry handling, such as support for parsing binary LEAP data from the LEAP script.
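As an aside, binary LLSD is attractive here simply because it is more compact and cheaper to parse than the text notation otherwise used over LEAP. A small sketch, assuming the Python llsd package (the viewer itself uses its own C++ codec):

```python
# The same joint update, serialized as LLSD notation text and as binary
# LLSD: the binary form is smaller and faster to parse.
import llsd

update = {"mWristLeft": {"rotation": [0.0, 0.0, 0.0, 1.0]}}

as_notation = llsd.format_notation(update)
as_binary = llsd.format_binary(update)
print(len(as_notation), len(as_binary))  # binary is typically smaller

# Round trip: parsing recovers the original structure.
assert llsd.parse(as_binary) == update
```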
- Avatar attachment point tracking (per the TPVD meeting discussion on May 12th):
- This allows joints to be tracked from a script, via their attachment points.
- Using visible attachment points (i.e. those on the avatar, NOT any of the screen-based HUD attachment points) cuts down on the amount of data having to be handled at both ends.
- The rate at which attachment point movement can be read back is such that it could not be exploited to create a copy of an animation with any real fidelity.
- This is a deliberate move to ensure that animation creators are not left feeling uncomfortable about LSL animation tracking.
- There are combined throttle / sleep elements to tracking attachment points: the throttle limits the number of attachment points that can be tracked over a given period of time, while the script sleep time is designed to allow an animation to move those attachment points onwards before a further tracking record can be made. Thus, it is next to impossible to track and record a coherent animation frame.
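By way of a rough, back-of-the-envelope illustration of the point above (the figures are my own assumptions, not Linden Lab’s actual limits):

```python
# Suppose an animation plays at 30 frames per second while each tracked
# read of an attachment point incurs a 0.2-second enforced script sleep.
# The snooping script can then sample only a small fraction of frames.
animation_fps = 30.0
sleep_per_read = 0.2                      # assumed enforced script sleep
reads_per_second = 1.0 / sleep_per_read   # 5 samples per second at best
coverage = reads_per_second / animation_fps
print(f"at best {coverage:.0%} of frames sampled")  # ~17% -- too sparse to copy
```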
- It was noted that previously, joint constraints had been hard-coded in C++, but their configuration has been moved into a human-readable LLSD file which can be modified without rebuilding the viewer (a hypothetical example of such a file follows at the end of this list).
- Two areas of focus going forward are:
- Improving the inverse kinematics (IK) system within the viewer – something Leviathan Linden is already working on. This will include overall improvements to IK constraints as well as to positioning, with the existing source-code constraints replaced by a further config file – “constraints” here being in terms of joint rotation / movement.
- Providing .FBX animation file import and Mixamo skeleton re-targeting.
- The IK work is still being thrashed out (and subject to much more discussion at meetings), but is seen as a priority over other elements of work, such as the animations streaming idea Leviathan Linden had been working on. The hope is that by improving IK, it will play into streaming and “live” animations a lot more robustly and smoothly. It is also seen as a foundational piece of work for further opening up puppetry and animation work.
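To make the constraints-file idea concrete, here is a purely hypothetical sketch of what such an LLSD file might contain, and of reading it with the Python llsd package – none of the joint names, constraint types or limits are taken from the actual viewer:

```python
# A made-up LLSD constraints entry limiting elbow rotation, loaded the way
# a viewer might read it at start-up. The schema shown here is invented
# for illustration; the real file's layout is defined by the viewer.
import llsd

config_text = b"""<?xml version="1.0" ?>
<llsd><map>
  <key>mElbowLeft</key>
  <map>
    <key>type</key><string>hinge</string>
    <key>min_degrees</key><real>0.0</real>
    <key>max_degrees</key><real>150.0</real>
  </map>
</map></llsd>"""

constraints = llsd.parse(config_text)            # auto-detects the XML variant
print(constraints["mElbowLeft"]["max_degrees"])  # 150.0
```

The attraction of this approach is exactly what was described at the meeting: tuning a joint limit becomes an edit to a data file rather than a C++ change and a viewer rebuild.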
General Notes
- It was noted that for animation import, LL is looking towards using / supporting Assimp (github: https://github.com/assimp/assimp), which supports some 30-40 animation formats, converting them to its own format for ease of import to multiple platforms. Notably, it supports .FBX and glTF, so it fits with the Lab’s goal of utilising glTF for materials, mesh imports, etc. (see the sketch at the end of these notes).
- [TPVD meeting, May 12th] This will not alter the existing internal format for animation. It is just to allow the import of other formats.
- It is acknowledged that alongside that, the Lab will require a retargeting system for animations, although what form this will take is still TBD.
- The core of the meeting was a general discussion of things that might / could be done in the future, and what technologies LL might look towards.
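For a feel of how an Assimp-based import path behaves, here is a brief sketch using the pyassimp Python bindings (an assumption on my part – the viewer would call Assimp’s C++ API directly, and the file name is hypothetical):

```python
# Load an animation file via Assimp and list its animations; each carries
# per-joint channels which would then need to be retargeted onto the SL
# skeleton (e.g. from a Mixamo rig).
import pyassimp

scene = pyassimp.load("dance.fbx")  # Assimp auto-detects FBX, glTF, etc.
try:
    for anim in scene.animations:
        print(anim.name, "-", len(anim.channels), "joint channels")
finally:
    pyassimp.release(scene)
```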
Date of Next Meeting
- Thursday, May 25th, 2023, 13:00 SLT.