
The following notes were taken from the chat log and audio recording of the Thursday, March 23rd, 2023 Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).
Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.
Project Summary
General Project Description as Originally Conceived
LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions.
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?
Leviathan Linden
- Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system (a minimal sketch of the LEAP message flow is given after this list).
- Note that facial expressions and finger movements are not currently enabled.
- Most movement is in the 2D plane (e.g. hand movements from side-to-side, but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
- The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
- Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
- Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
- There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
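For those wondering what the LEAP side of this looks like in practice, the general flow is: the viewer launches a plug-in as a child process and exchanges length-prefixed LLSD notation packets with it over stdin / stdout. The following is a minimal sketch only, assuming the Python llsd package; the pump name (“puppetry”), the “set” command, and the joint / field names (“mWristRight”, “position”) are illustrative assumptions, not the confirmed protocol.

```python
import sys
import llsd  # PyPI "llsd" package, for LLSD notation serialisation

def send(pump, data):
    """Frame one LEAP packet as "<length>:<LLSD notation>" on stdout."""
    payload = llsd.format_notation({'pump': pump, 'data': data})
    sys.stdout.buffer.write(str(len(payload)).encode() + b':' + payload)
    sys.stdout.buffer.flush()

# Hypothetical puppetry update: ask the viewer's IK to place the right
# wrist near a position in the avatar's local frame. The joint and
# field names here are assumptions, for illustration only.
send('puppetry', {
    'command': 'set',
    'data': {'mWristRight': {'position': [0.3, -0.2, 0.1]}},
})
```

A real plug-in would also read the viewer’s startup packet from stdin to learn its reply pump, and would send updates continuously as tracking data arrives.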
Additional Work Not Originally In-Scope
- Direct avatar / object / avatar-avatar interactions (e.g. “picking up” an apple; high-fives; etc.).
- Animation streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers, without any further processing of the animations by those viewers.
- Enhanced LSL integration for animation control.
- Adoption of better animation standards – possibly glTF.
- Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
Bugs, Feature Requests and Code Submissions
- For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
- There is also a public facing Kanban board with public issues.
- Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:
- Discuss them with the Puppetry team and work with them to ensure a proper convergence of ideas.
- Be signed-up to the Lab’s contribution agreement in order for submitted code to be accepted for review / use:
- Contributor Agreement notes – SL wiki.
- Contributor Agreement FAQ – SL wiki.
- Code Contributor agreement – PDF form.
Further Information
- Introducing Second Life Puppetry – Linden Lab blog post (August 30th, 2022).
- Puppetry: How it Works – Second Life Knowledge Base.
- Second Life Puppetry wiki index page – note that more subjects / categories are / will be added over time.
- Second Life Public Calendar – meeting dates.
- Alternate Viewers page – for the latest version of the Puppetry viewer.
Meeting Notes
Viewer Progress
- An updated version of the project viewer is still “close” – it did not make it past QA, but it is hoped it will be available soon.
- Leviathan Linden is working on making the IK system more robust, so it can handle the kind of work Rider Linden is doing. This is not yet at the point of being ready for delivery into a viewer; the hope is to have something ready for the viewer update after the one currently awaiting clearance for project release.
Server-Side Work
- Following the meeting, the current Puppetry test regions on Aditi were due to be updated with a simulator version which merges the server-side Puppetry code with the latest release of the simulator code.
- Rider Linden is continuing to work on llSetAttachmentPoint, intended to allow avatars to “pick up” objects using puppetry. At the time of the meeting he was completing work on ANIM_POS/ROT_TARGET, which is intended to keep the attachment point directed at an object in-world (as opposed to a fixed location or a location relative to the avatar).
- This uses a command to tell the viewer to carry out the necessary IK work to move an attachment point as close to a location as it can get, using the target object’s UUID / location as reported by the simulator (a generic illustration of this kind of IK solve is given after this list).
- The idea is that this functionality will work not only with hands / arms but also with other appendages (e.g. wings).
- In theory, this should also allow avatars to touch attachment points on other avatars (e.g. holding hands). However, exactly how this works within the framework of the Permissions system – in terms of others accepting / refusing any direct interaction through something like a dialogue request, as we see today – has yet to be worked out.
- This led to a broader discussion on attaching objects to avatars, the core of which is summarised below.
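As an abstract illustration of the viewer-side IK step described above (this is generic code, not Second Life source): given a chain of joints ending at an attachment point, the solver iteratively adjusts the chain so the end effector lands as close to the reported target as the chain’s reach allows. The sketch below uses the FABRIK algorithm; joint positions and the target are plain 3-vectors.

```python
import numpy as np

def fabrik(joints, target, tolerance=1e-3, max_iters=20):
    """Move a joint chain so its end effector approaches `target`.
    joints: list of 3-vectors, root first; bone lengths are preserved
    and the root stays fixed. Returns the solved joint positions."""
    joints = [np.asarray(j, dtype=float) for j in joints]
    target = np.asarray(target, dtype=float)
    lengths = [np.linalg.norm(joints[i + 1] - joints[i])
               for i in range(len(joints) - 1)]
    root = joints[0].copy()

    # Target beyond reach: stretch the chain straight toward it
    # ("as close to a location as it can get").
    if np.linalg.norm(target - root) > sum(lengths):
        direction = (target - root) / np.linalg.norm(target - root)
        for i in range(1, len(joints)):
            joints[i] = joints[i - 1] + direction * lengths[i - 1]
        return joints

    for _ in range(max_iters):
        # Backward pass: pin the end effector to the target.
        joints[-1] = target.copy()
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # Forward pass: re-pin the root and rebuild the chain.
        joints[0] = root.copy()
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + d / np.linalg.norm(d) * lengths[i]
        if np.linalg.norm(joints[-1] - target) < tolerance:
            break
    return joints

# e.g. a shoulder -> elbow -> wrist chain reaching for an object:
arm = [[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.6, 0.0, 0.0]]
print(fabrik(arm, [0.4, 0.3, 0.1]))
```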
Object Parenting vs. Temp Attachments
- The idea of being able to use Puppetry to reach out and grasp a (suitably scripted) object in-world – for example, an apple or a bottle – raised questions on how the process will work.
- Currently, “temp” attachments can be made to avatars (e.g. via an Experience), but this still requires the object being temporarily transferred to the avatar’s inventory (where it does not show up) and from there attached to the relevant attach point (e.g. a hand).
- This is somewhat slow and cumbersome – particularly if you want to do something with the object (e.g. if it is a ball, throw it), as the object needs to be picked up, held, follow the throwing motion of the avatar’s arm, detach at the point of release, resume its status as a physical object in-world, have direction and velocity applied, and then move in the direction of the throw.
- The suggestion was made that to simplify things, a concept of avatar-to-object parenting needs to be introduced to SL – so when the ball is picked up, it immediately becomes a child of that avatar – no need for the passing to inventory, attaching from there, detaching to inventory / deletion, etc., as seen with temp attachments.
- Developing a hierarchy scheme for SL is already being discussed through the Content Creation User Group, so it was suggested this could help present a road to object parenting with avatars. However, as it will take some time for any hierarchy system to be developed and implemented – and given it falls outside of the Puppetry project – it might mean that existing mechanisms have to be accepted, even if they do place some limitations on what might be achieved until such time as a hierarchy system can be introduced.
- As an alternative, it was suggested that the physics engine might be used to attach a degree of object parenting to an avatar:
- The physics engine allows actions to be created, which are updated every sub-step; so in the case of a physical object, it should be possible to write an action that says, “Follow the hand of the avatar picking you up” (a rough sketch of such an action is given after this list).
- Then, as long as the physics engine knows the position of the “holding” hand, the object could move with it; while there would be a small degree of physics lag, as long as the viewer knows to render the object at the avatar’s hand, rather than where the physics updates say the object is, this should not be visually noticeable.
- This approach would not require an explicit hierarchy system, but it would require the viewer to send updates on the avatar’s hand position to the simulator’s physics engine – a mechanism which is already available.
- The idea is modelled on the old physics action that perhaps most famously allowed a beach ball to be grabbed and pushed / lifted up onto a table as part of the orientation process incoming new users used to go through, and which can still be used today to push suitably scripted objects around.
- If possible, this approach would also need some form of constraints and permissions (e.g. you don’t really want to make your entire building capable of being grabbed and shunted around willy-nilly).
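As a rough illustration of the “follow the hand” action idea (again generic code, not Havok or simulator source): a callback run once per physics sub-step steers the object toward the hand position most recently reported by the viewer, while the engine continues to integrate the object normally. The class, function and constants here are invented for the sketch.

```python
from dataclasses import dataclass, field

SUBSTEP = 1.0 / 45.0  # assumed physics sub-step length, in seconds

@dataclass
class PhysicalObject:
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    velocity: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

def follow_hand_action(obj, hand_position, stiffness=60.0, damping=12.0):
    """Run once per physics sub-step: apply a damped spring pulling
    the object toward the latest hand position from the viewer."""
    for axis in range(3):
        error = hand_position[axis] - obj.position[axis]
        obj.velocity[axis] += (stiffness * error
                               - damping * obj.velocity[axis]) * SUBSTEP
        obj.position[axis] += obj.velocity[axis] * SUBSTEP

# Each sub-step the engine calls the action with the newest hand sample;
# the object converges on (and then tracks) the hand.
ball = PhysicalObject()
for _ in range(90):  # roughly two seconds of sub-steps
    follow_hand_action(ball, hand_position=[0.5, 0.2, 1.1])
print(ball.position)
```

The small lag this introduces is exactly the lag noted above, which the viewer could hide by rendering the object at the hand rather than at its physics position.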
General Notes
- There was a general conversation on Permissions within avatar-to-avatar interactions and where they need to fall.
- As noted above, there will need to be some form of explicit granting of permission for Puppetry-generated hugs, handshakes, high-fives, etc.
- However, concern was raised about permissions being needed for basic one-way interactions – such as pointing at someone. The concern here is that LookAt debug targets have become so conflated with ideas of “privacy” tools (“Stop looking at me, perv! You don’t have permission!”) that TPVs have surfaced the means to disable LookAts being sent via a viewer, so that people do not get shouted at if their LookAt cross-hairs happen to rest on another avatar. The worry was that if a similar kind of visual indicator is used for Puppetry, it might result in a similar reaction.
- The short answer to this was: no, it should not be an issue, as avatar locations can already be obtained through LSL (e.g. via llGetObjectDetails) without the need for a visual indicator being generated by the viewer.
Date of Next Meeting
- Thursday, April 13th, 2023, 13:00 SLT.