2023 SL Puppetry project week #6 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, February 9th, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General description of the project and its inception:

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • This project is taking in a lot of additional ideas – animation standards, improving the current animation system, enabling truer avatar / avatar and avatar / object interactions – such that it is likely to evolve into a rolling development, with immediate targets for development / implementation agreed as the project progresses, to be followed by future enhancements.
  • As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Animation Streaming

  • Leviathan Linden has been experimenting with animation streaming over the viewer’s animation channel, such that whatever is sent from the controlling viewer is played directly by all receiving viewers, without any further processing of the animations on receipt (see the sketch after this list).
  • This had been discussed in previous meetings as a potential means of lightening the animation data processing load individual viewers would have to perform, reducing the potential performance impact in situations where animation synchronisation is important. It also lays something of a further foundation for more procedural-based animation processing, allowing viewers to work smarter – sending less data more frequently – which will in turn help enable synchronised animations through puppetry, such as spontaneously sharing a high five.
  • This initial test, featuring just one viewer using puppetry and one receiving it, actually revealed more noticeable lag in streaming when compared to individual processing and playback of received animation data. It is not clear at this time whether this would worsen in situations where multiple puppetry animations are being streamed / received.
  • The videos were initially posted to the restricted-access Second Life Content Creation Discord server, and then combined into a single video by Kadah Coba, which is reproduced here as an animated GIF – my thanks to Kadah for the work in combining the videos.
Puppetry streaming test: top – animation played in one viewer (large image) with data sent for processing by a receiving viewer (inset). Bottom: the same animation played on the same viewer and then streamed to the receiving viewer (inset) and played on receipt without any additional animation processing.
  • Leviathan notes this is a very quick and dirty test, requiring “some hackery” within the viewer’s animation code, but does not (as yet) require any changes to the server-side puppetry management code.
  • Further refinement of the code is required, together with further testing to see if the approach can be smoothed / improved. As such, the work is not currently considered fit for integration into the Puppetry project viewer, although it has been suggested it might be offered as a temporary, separate test viewer to allow broader testing.
  • One potential issue is that the quality of the streaming is currently dependent on the performance of the originating viewer; if it is running at a low FPS, receiving viewers may see a “choppier” result, with the stream alternating between lagging and smooth animation.
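To illustrate the difference being tested, here is a minimal conceptual sketch of the two receive paths – streamed playback versus local re-processing. The dictionary layout and function names are assumptions for illustration, not the viewer’s actual data structures.

```python
# Conceptual sketch only: the message layout is an assumption, not the real wire format.

def apply_streamed_frame(pose, frame):
    """Streamed path: the sending viewer has already solved IK, so receivers
    simply copy the final joint rotations into their local pose."""
    pose.update(frame["joint_rotations"])
    return pose

def apply_target_frame(pose, frame, solve_ik):
    """Current path: each receiving viewer re-runs IK from the raw targets,
    repeating work the sender has already done."""
    pose.update(solve_ik(frame["ik_targets"]))
    return pose
```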

LSL Integration

  • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
  • Rider Linden has been working on an LSL API to control animation on an avatar.
Conceptually it would behave like LEAP from LSL. The simulator will send the animation instructions down to the targeted viewer which will perform the IK and animation, and then send the results back as though they were coming from any other LEAP plugin. Targeting objects should be possible with the API (although I’ll add a world position/rotation parameter so you don’t have to do the math yourself).

– Rider Linden

  • This work is possibly best described as moving a step towards enabling an avatar using puppetry to reach out and pick up an apple, by allowing a script to position the avatar’s hand at the location of the apple, from where the user can use a supported capture tool to “pick up” the apple (a conceptual sketch of the round trip Rider describes is included at the end of this section).
  • The envisioned actions would be: the user moves their avatar’s arm towards the apple; the apple detects the collision between the avatar’s hand and itself and attaches to the hand as if it had been picked up directly.
  • Further work is required involving collisions between the apple and the avatar’s hand, so the apple knows it is being “grabbed”. This might be achieved by using an existing collision event as the trigger for attachment, or an entirely new event.
  • One problem is to avoid having multiple extra collision objects bouncing around the physics engine for every single attachment point on an avatar (55 in total), which would add up, performance-wise, very quickly.
    • One suggestion for mitigating this is that, as the region knows where your hand is (which is true with the attachment update stream), it could be possible to implement a new “grab” action within the physics simulation for picking up small objects; however, this would likely need some hint / magic on the viewer side to render the object “at the hand” rather than “near the hand”.
  • Beyond this, there is also additional work to allow avatar-to-avatar interactions via puppetry – such as the aforementioned high five – which involves addressing some permission issues.
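As a purely conceptual sketch of the round trip Rider describes – the simulator handing an instruction to the targeted viewer, the viewer doing the IK, and the result flowing back as if from a LEAP plug-in – something along the following lines could be imagined. The message fields and helper functions here are hypothetical, not part of the actual API.

```python
# Hypothetical sketch only; field names and helpers are illustrative assumptions.

def handle_script_puppetry_request(request, solve_ik, send_to_simulator):
    """Viewer-side handling of an LSL-originated puppetry instruction:
    run IK locally, then return the result as though it came from a LEAP plug-in."""
    # e.g. {"target_joint": "mWristRight", "world_position": [x, y, z]}
    joint = request["target_joint"]
    goal = request["world_position"]

    joint_rotations = solve_ik(joint, goal)   # viewer-side IK pass

    # The result travels back over the same channel a LEAP plug-in would use.
    send_to_simulator({"joint_state": joint_rotations})
```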

In Brief

  • Concern was raised that an emphasis on puppetry over traditional canned animation assets could make SL inaccessible for some (because of the need for additional motion capture hardware, a possible need for more powerful client computers, etc.). In response, those at the meeting pointed out:
    • The approach being taken by the Lab is not new – it has been a common factor (albeit implemented in a variety of ways) within games for well over a decade, and is used in multi-player games without participants being “lagged” to the point where gameplay is broken.
    • What is being proposed with Puppetry is not even “new” (in the broadest sense); rather, it is adding a further layer of animation capabilities to Second Life which can bring a greater sense of interactivity to the platform.
    • In terms of hardware, it was further pointed out that while some at the meeting are using VR hardware – headsets and peripherals – all that is actually required to start leveraging the capabilities (as LL have demonstrated in the animated GIF forming the banner of this summary) is a basic webcam.
  • In a more general conversation, it was pointed out by those at the meeting and the Lab engineers that:
    • Whilst things like streaming puppetry animations may at times result in more visible lag / animation desynchronisation, the degree of avatar interaction with the world it offers would make it more than worthwhile.
    • This work is purely about puppetry and interactivity; it does not actually alter the way more general animations – walking, standing, etc. – work, as the underpinning locomotion engine within the simulator, and the way the viewer calculates motion based on data from the simulator, are not being altered.
    • Instead, the LSL API (and the LEAP API?) will enable general avatar orientation and attachment point orientation / movement to ensure that the arm correctly reaches out to “grab” the apple mentioned above, by effectively running in conjunction with the locomotion engine.

Date of Next Meeting

  • Thursday, February 23rd, 2023, 13:00 SLT.

2023 SL Puppetry project week #2 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, January 12th, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General description of the project and its inception:

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • This project is taking in a lot of additional ideas – animation standards, improving the current animation system, enabling truer avatar / avatar and avatar / object interactions – such that it is likely to evolve into a rolling development, with immediate targets for development / implementation agreed as the project progresses, to be followed by future enhancements.
  • As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

LSL Integration

  • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
  • Rider Linden is starting to look at LSL integration – the first step being to make the simulator aware of what is actually animating.
  • Currently, the code he has developed lets the server know the position of an avatar’s attachment points; this sends details of 55 points (HUD points excepted). Attachment points have been selected over bones, as the simulator already has a solid concept of attachment points, and it avoids complications with rigged meshes “doing their own thing” with bone positions.
  • A concern with this is the number of updates being sent to the server for processing.
    • One idea is to refine the code so that only the attachment points which change relative to the avatar centre (avatar frame / Local Position relative to the avatar) actually send information to the server, in order to reduce the number of updates being generated.
    • Another idea might be to only send updates every n frames, rather than every frame. This would reduce the fidelity of movement, but could still provide sufficient data while reducing the load on the simulator, particularly where multiple avatars in a region are using puppetry (see the sketch at the end of this section).
  • This issue is also related to synchronising puppetry actions across multiple viewers; a long-standing issue, given that animation playback is viewer-side and not genuinely synchronised across viewers (the resync function found in some TPVs only does so locally).
  • All of the above led to a discussion of ways and means to best allow LSL integration with animations and to ensure a reasonable transmission of results, together with decent synchronisation between the viewer and the simulator (whether by frame count or time stamp), in order to ensure predictability of results across multiple viewers.
  • In addition, the discussion included the advantages of enhancing Second Life to support procedural animations as well as the current canned animations.
  • Rider is also looking into a script enhancement to register collisions.
  • There was some conflating of ideas during the discussion – immediate first steps in opening Puppetry to LSL, and more far-reaching goals: setting position, registering collisions (per the above), defining better interpolation for positioning (e.g. as defined in the Khronos glTF specification), etc. – which caused a degree of confusion.
  • However, the openness towards making Puppetry a good foundation for future enhancement (such as moving more to procedural-based animations, enabling SL to support “industry standard” animation workflows to encourage animators onto the platform, etc.) remains, together with (hopefully) enabling more realistic avatar / avatar and avatar / object interactions.
  • That said, Simon Linden did offer a note of caution to all discussing the work:
Not to pop the bubble, but everyone please keep in mind all the stuff we’ve talked about is experimental and really interesting. I have no idea what we can make into real features and what can work with crowds and all the other interesting problems to make it happen well – we’ll see what we all can do this year 🙂

– Simon Linden
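As a rough sketch of the two throttling ideas above (send only the points that have moved, and send only every n frames), the filtering might look something like the following. The thresholds, data shapes and function names are assumptions for illustration, not the actual update protocol.

```python
# Illustrative sketch of the throttling ideas discussed above; shapes and
# thresholds are assumptions, not the actual update protocol.

UPDATE_EVERY_N_FRAMES = 3        # hypothetical stride between updates
MIN_MOVEMENT = 0.01              # metres, relative to the avatar centre

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_update(frame_number, current, last_sent):
    """Return only the attachment points worth sending this frame, or None."""
    if frame_number % UPDATE_EVERY_N_FRAMES != 0:
        return None                          # idea 2: skip intermediate frames
    changed = {}
    for point_id, local_pos in current.items():
        previous = last_sent.get(point_id)
        if previous is None or distance(local_pos, previous) > MIN_MOVEMENT:
            changed[point_id] = local_pos    # idea 1: only points that moved
    return changed or None
```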

Date of Next Meeting

  • Thursday, January 26th, 2023, 13:00 SLT.

2022 Puppetry project week #49 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, December 8th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer and Plug-in Updates

  • A new version of the Puppetry project viewer – version 6.6.8.576972 – was issued on December 8th, available on the Alternate Viewers page (until a further version is issued).
    • This version includes an overhaul to the protocol used between LEAP plug-ins and the viewer. For example, Inverse Kinematics calculations are done earlier in the process which will make viewer performance better when more than one avatar is using Puppetry.
    • Please be certain to use both a new viewer and a new set of plug-ins from https://github.com/secondlife-3p, and update any projects or code you might be working on.
    • This version may have a crash bug.
  • Leviathan Linden is still working on updating the wiki documentation to reflect the new API.
Leviathan Linden demonstrating the use of puppetry to move his avatar whilst doing the “pane of glass” mime in front of a suitable capture device. Note that his legs remain static (moving in line with his hips) as puppetry does not (yet) support full body tracking
We changed the way Puppetry expects to get its data for two reasons: 1) we want to only do IK for your own avatar, then just send the joint rotations to everybody else; 2) if someone writes a plug-in that happens to know best what all the joint rotations should be (e.g. it has done its own IK, or is doing full mocap) then it can just specify all parent-frame rotations of the joints. So, now that THAT plug-in mode is unblocked, we can start trying to fix our own IK.

– Leviathan Linden explaining the changes to the way puppetry data is managed by the viewer
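For the plug-in mode Leviathan describes – a plug-in that has already done its own IK (or full mocap) simply supplying parent-frame joint rotations – a message might be assembled roughly as sketched below. The field names are assumptions for illustration; the real format is what the updated wiki documentation will describe.

```python
# Minimal sketch of the "plug-in supplies the rotations" mode; field names
# are illustrative assumptions, not the documented protocol.

def build_puppetry_message(mocap_frame):
    """mocap_frame maps SL joint names to parent-frame rotations (quaternions)."""
    return {
        "joint_state": {
            joint: {"rotation": list(quat)}   # played by the viewer as-is, no IK
            for joint, quat in mocap_frame.items()
        }
    }

example = build_puppetry_message({
    "mShoulderLeft": (0.0, 0.0, 0.383, 0.924),
    "mElbowLeft":    (0.0, 0.0, 0.259, 0.966),
})
```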

In Brief

  • There are currently no updates to the Inverse Kinematics (IK); it is described as being “hard”.
  • It has been suggested that the viewer could “do more” in respect of IK, etc.
    • However, this information needs to be transmitted to other viewers so they can “see” what is going on, which requires messaging and updates through the simulator; this can lead to the viewer being less than trustworthy in terms of what it is showing versus what is actually going on (due to missed updates and the like caused by the bandwidth load).
    • But, if the simulator is tasked with managing all the computations for IK and sending the results to connected viewers (reducing the amount of traffic and potential loss of messaging), it puts a potentially high compute load on the simulator (imagine the simulator trying to manage the IK for 50+ avatars at an event, tracking the movement, interactions, etc.).
    • A potential trade-off here is to have a viewer run the IK calculations for the avatar it is controlling, and package that information for streaming to other viewers connected to the region’s simulator, with the simulator performing minimal sanity checking (e.g. to ensure the avatar’s position in the viewer is properly constrained to within a few metres of its location as calculated by the simulator). On receipt, the sanity-checked data can then be played back without the receiving viewer having to carry out IK calculations for the avatar(s) it is “watching”, beyond some minimal sanity checks of its own.
  • Rider Linden is investigating having the simulator track the motion from a puppetry viewer in a way that does not impact simulator performance “too badly”. The options he’s looking at are:
    • Having the simulator suck the data out of the puppetry messages as they are sent through it, or
    • Using a new message the viewer can use to report the locations of its attachment points, with the simulator tracking these – which Rider sees as the preferred option.
    • This latter method could – among other things – be expanded to work with animations in general. In addition, if tied in to the entire animation system (the viewer computing its animation frame from puppetry + legacy animations) to produce results, then the results could be made available to scripts.
    • In regards to interfaces for this, Rider is of the opinion that scripts should reference attachment points, since those can act as proxies for bone locations, most general creators, scripters and residents are familiar with them, and it avoids having to introduce a new concept of bones into LSL.
  • Collisions: the above spawned a related discussion on providing additional data such as geometry information (e.g. spherical bounding radius or a shape approximation) to allow collisions to be enabled from IK, and allowing “snap to” functionality (e.g. you reach for a glass and the avatar hand snaps to it on detecting the collision).
    • However, rather than allowing physics collisions on attachment points generally (which might over-complicate the avatar model in the Havok physics engine), Rider suggested having a property (a sphere) that could be set on an attachment to enable it as a physics volume (see the sketch at the end of this section).
My thought is that collision aware attachment points, along with being able to detect and set their positions in space will be enough to get us 90% of the way towards being able to hold hands in world.

– Rider Linden

  • Leviathan noted there was a bug where an avatar with non-unity scale on its bones would be broken under puppetry (misalignment of bones). This should now be fixed, but it allowed him to add the theoretical ability to modify the scale of a joint in its parent-frame (an example plug-in script to demonstrate how this works has yet to be written).
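Rider’s “sphere on an attachment” suggestion can be pictured as an opt-in property per attachment point, rather than a blanket physics shape for all 55 points. This is only a sketch of the idea as discussed; the property name and values are assumptions, and nothing here is implemented.

```python
# Sketch of the "opt-in collision sphere per attachment point" idea; the
# property name and default are assumptions, not implemented behaviour.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AttachmentPoint:
    name: str
    local_position: Tuple[float, float, float]
    collision_radius: Optional[float] = None   # None = not a physics volume

    def is_collidable(self) -> bool:
        """Only points explicitly given a sphere take part in collisions,
        so the Havok avatar model does not gain 55 extra shapes by default."""
        return self.collision_radius is not None

right_hand = AttachmentPoint("Right Hand", (0.0, 0.0, 0.0), collision_radius=0.08)
```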

Date of Next Meeting

  • Thursday, January 19th, 2023, 13:00 SLT.

2022 Puppetry project week #45 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, November 10th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer and Plug-in Updates

  • The puppetry team is working on updating the viewer and LEAP plug-in, and an update to the project viewer is liable to be released in week #46.
  • This viewer includes:
    • The ability to move the avatar pelvis.
    • The ability to stretch other bones – although this is awaiting testing at the time of writing. However, the reference frame scale is that of the normal puppetry targets, so you would have to scale the data correctly; therefore additional work is required to provide a way for the plug-in to get the data necessary to know how to scale individual joint bones (e.g. change their parent-relative positions) – see the sketch below.
  • It still won’t be possible to clear puppetry target / config data, which remains on the team’s “to do” list.
  • Aura Linden noted the new LEAP module initialises on demand rather than via instantiation (as with puppetry). LL will provide demos of using the new module.
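As a rough illustration of the scaling problem mentioned above, a plug-in that knew both the default bone length and the avatar’s actual (stretched) bone length could rescale its parent-relative positions accordingly. The numbers and function name below are assumptions, purely for illustration.

```python
# Rough sketch of rescaling parent-relative joint positions for stretched
# bones; values and names are illustrative assumptions.

def rescale_parent_relative(position, default_length, actual_length):
    """Scale a parent-relative joint position by the bone's stretch factor."""
    factor = actual_length / default_length
    return tuple(component * factor for component in position)

# e.g. an avatar whose forearm is 20% longer than the default skeleton
stretched = rescale_parent_relative((0.28, 0.0, 0.0), 0.28, 0.336)
```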

Kinect v2 Support

  • Simon Linden has been working on an experimental plug-in taking inputs from a Kinect v2 device.
  • He describes the code as being “pretty rough” and using only basic geometry, but it allows avatar elbows / arms to be moved around (a rough illustration of the kind of mapping involved follows at the end of this section).
  • This work in part utilises the data syntax described in OPEN-366 “Simplify Puppetry Configuration Through LEAP”, the new protocol proposed by Leviathan Linden as per previous meeting notes.
  • The code is not ready to be pushed to a public branch as yet, and doing so is somewhat dependent on feedback from developers / creators.
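To give a feel for the “basic geometry” involved, the sketch below maps captured sensor-space joint positions into avatar-frame puppetry targets by expressing them relative to the shoulder and normalising by arm span. This is not Simon’s plug-in; the capture step is abstracted away, and all names and values are assumptions.

```python
# Illustrative mapping from captured joint positions to puppetry-style
# targets; not Simon's plug-in, and the field names are assumptions.

def to_avatar_frame(sensor_pos, origin_pos, arm_span):
    """Express a captured joint position relative to an origin joint and
    normalise by the subject's arm span so it fits the avatar's reach."""
    return tuple((s - o) / arm_span for s, o in zip(sensor_pos, origin_pos))

def build_targets(captured):
    """captured: sensor-space positions keyed by Kinect joint name."""
    shoulder = captured["ShoulderLeft"]
    span = 0.6   # placeholder arm span, in sensor units
    return {
        "mElbowLeft": {"position": to_avatar_frame(captured["ElbowLeft"], shoulder, span)},
        "mWristLeft": {"position": to_avatar_frame(captured["WristLeft"], shoulder, span)},
    }
```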

Avatar Constraints / Interactions

  • OPEN-368 “[Puppetry] [LEAP]: Location Constraints” – LL have indicated there is “much” within this Jira they would like to support “eventually”.
  • The feeling at the Lab is that constraints can “definitely” be improved – although what this may look like has yet to be properly determined. However, the general feeling is that there should be constraint data associated with a given skeleton, for example, so that a purely human-centric model is not simply imposed on the SL avatar.
  • A good portion of the meeting was given over to a general discussion of how best to handle puppetry and avatar animations – and the potential need to move away from canned animations and provide a more direct means of avatar animation.
  • Avatar-object interactions are a potentially complex issue: how can an avatar accurately take and hold an in-world object – say, an apple – through puppetry? If the apple is a physical object, does it collide when held? Does it become an attachment? If the latter, how is this registered, and how is it properly released from the attachment system afterwards?
    • A suggestion for handling avatars holding objects is to have some form of temp-attach system, or to use a keyframed motion (KFM) system to match the object’s position to the avatar’s hand, allowing the avatar to hold the object without directly “owning” it (thus also avoiding permission system issues).
  • Collisions also raise questions: avatar arms currently do not collide, and so would not under puppetry. So what about cases of simple interactions, such as flicking a light switch? These are not “proper” collisions per se, but are rather event-triggered; how can this be managed if there is no actual collision between the scripted object and an avatar’s arm / hand to trigger the associated event?

In Brief

  • It has been suggested that a version number be included in puppetry-related messaging, so that changed message formats are not read by viewer versions unable to parse them, thus reducing the risk of crashes during development / testing.
  • It has been indicated that puppetry will eventually have LSL support for LEAP, although what form this will take and how the simulator will track things is still TBD, as animations are currently entirely viewer-side and untracked by the simulator.
  • There is concern that the potential of the puppetry project isn’t being fully understood by creators (and others), as it is being seen more as a “VR thing” than as an opportunity to greatly improve avatar animations and their supporting systems / constraints, including the IK system.
  • How to manage network latency also formed a core discussion, together with making better use of the Havok physics sub-licence to allow the viewer to do a lot more of the work and simply stream the results through the simulator to other viewers.

Date of Next Meeting

  • Thursday, December 8th, 2022, 13:00 SLT.

2022 Puppetry project week #43 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, October 27th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Protocol Overhaul

At the previous meeting, Leviathan Linden noted the project team is going to overhaul the Puppetry/LEAP protocol. Since then:

OpenXR Support

Leviathan Linden asked for feedback on what the requested “OpenXR support” means to those requesting it – e.g.: is it to run an OpenXR app and have a VR experience in SL, or is it to run an OpenXR app as a plug-in to provide measurement input to Puppetry?

The general response was a mix of both:

  • To generally provide the means for “proper” hardware support for motion capture, such that puppetry isn’t just a “best guess” response via a webcam.
  • To allow for more accurate interactions between avatars and objects, eventually moving to provide full support for VR headsets and controllers (requiring the ability to interact with scripted devices, operating levers, controls, etc., which could be correctly interpreted and acted upon by said scripts).

Currently, LL are more willing to consider OpenXR support as a part of the Puppetry work whilst regarding it as a potential step towards wider VR support in SL in the future.

Avatar Constraints / Interactions

The above question led to a broader discussion on avatar-to-avatar and avatar-to-object interactions starting with the avatar constraints / collision system.

  • As they stand right now, avatar constraints and collisions within SL have been adequate for the platform, but lacking (collisions, for example, have no concept of the avatar’s arms / legs, limiting interactions between them and other objects).
  • OPEN-368 “[Puppetry] [LEAP]: Location Constraints” is a feature request outlining the benefits of overhauling the SL avatar constraints system to allow better interactions with objects, etc. This is currently open to those wishing to add further comments and feedback.
  • The question was raised as to how “fast” / reliable the required communications (including all the required bone interactions) could be made in order to ensure adequate / accurate response times for actions (e.g. so that when shaking hands, the hands of each avatar arrive at the same point at the same time and are seen as shaking in both viewers).
  • Also discussed was determining how “reactions” might best be defined – could it be as “simple” as a pre-set animation?
  • One issue with this – interactions, OPEN-368, etc. – is that direct hooks from Puppetry to LSL had been seen as outside the scope of the project, simply because puppetry and the LEAP API are entirely viewer-side, and LSL is simulator-side. However, the discussion opened a debate on whether some means for this interaction should be provided, with two options being put forward:
    • Broadening the LEAP protocol, essentially using it to make the viewer scriptable with plug-ins that run on their own threads.
    • Providing a specific LSL function that would enable LSL to be able to communicate / interact with the LEAP protocol / JSON (as is the case with the RLV / RLVa APIs used by some third-party viewers).
    • Both of these approaches were seen as potentially “doable”, if beyond the intended scope of the puppetry project.
  • A further issue with interactions and bone tracking (which would be required for accurate avatar-based interactions) is that bone tracking via LSL is at best limited, if not non-existent; this raised the subject of possibly using attachment points as a proxy.
    • An additional problem here is whether or not it is possible to track the location of the attachment points in 3D space relative to any animation the avatar is playing (e.g. if an animation causes the avatar to raise their arm, is it possible to check the position of the wrist point?). This is currently something of an unknown (see the sketch at the end of this section), as it would either:
      • Require the simulator to inject a lot of additional calculations for joint and attachment positions;
      • Or require a new (optional) protocol where the viewer would just supply its in-world positions at some frame rate – which would require some calculation overhead on the part of the viewer;
      • Or – given work is in hand to add the in-world camera position relative to the viewer, and also the avatar’s world orientation and look-at target – provide a straight dump of the animation mixdown together with the skeleton data, enabling the processing to be carried out in a module rather than the viewer.
  • As a result of these discussions, time has been requested to investigate the various options (which will likely include a determination of what, if anything, is to be included in the current project in terms of these additional capabilities).
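For context on the first two options, knowing “where the wrist point is while an animation plays” amounts to composing each joint’s animated local transform from the root of the skeleton down to the attachment’s parent joint. The sketch below shows that composition in isolation; the data structures are assumptions, not viewer code.

```python
# Forward-kinematics sketch: composing per-joint local transforms to find an
# attachment point's world position. Structures are illustrative assumptions.

import numpy as np

def world_position(chain, local_transforms):
    """chain: joint names from the skeleton root down to the attachment's parent.
    local_transforms: per-joint 4x4 matrices for the current animation frame."""
    world = np.eye(4)
    for joint in chain:
        world = world @ local_transforms[joint]
    return world[:3, 3]                      # translation part = world-space position

# e.g. wrist_pos = world_position(
#          ["mPelvis", "mTorso", "mChest", "mCollarLeft",
#           "mShoulderLeft", "mElbowLeft", "mWristLeft"],
#          current_frame_transforms)
```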

Date of Next Meeting

  • Thursday, November 10th, 2022, 13:00 SLT.

2022 Puppetry project week #41 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, October 13th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the  LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

New Viewer Version – 6.6.3.575529 Dated October 12th

  • This viewer uses a different, more efficient data format for sending updates up to the region, and from the region to viewers.
    • The new and old formats and viewers are not compatible; someone on the new project viewer will be unable to see puppetry rendered for someone using the older viewer version, and vice-versa.
    • It is hoped that severe breakages between viewer versions like this will be avoided going forward, but this change was deemed necessary.
  • This viewer also includes a crash (deadlock) fix, and puppetry animations should fade in / out when starting or explicitly stopping (animations may stop abruptly should the LEAP plugin crash, the data stream be lost, etc.).
  • Those self-compiling the viewer with the puppetry code should ensure they are pulling the updated code from the 6.6.3.575529 (or later, as new versions appear) repositories.

Protocol Overhaul

Leviathan Linden noted the project team is going to overhaul the Puppetry/LEAP protocol.

  • The intent is to replace all the current LEAP commands (“move”, “set_this”, “set_that”, etc.) with just two commands: “set” and “get” (illustrative message shapes are sketched after Simon’s comment below).
  • On the “set” side:
    • It will be possible to set avatar joint transforms, or specify IK targets, and also set various configuration settings as necessary.
    • These set commands will be “incremental” in nature (so that changes can be made to reach the final state), and once set, they stay at the defined value until modified, cleared, or the plug-in “goes away”.
  • On the “get” side:
    • get_skeleton and any other get_foo commands (if used) will be replaced with {get: [skeleton, foo, …]}.
    • A message will be generated and sent back in response to the get request, but the form of that message is still TBD.
  • Meanwhile, the viewer will only do IK for your own avatar, and will transmit the full parent-relative joint transforms of all puppeted joints through the server to other viewers; LL will also make it possible for a plug-in to just supply full parent-relative joint transforms if desired (e.g. no IK, just play the data).
  • This overhaul will also provide:
    • A way to move the Pelvis. This will include both a pre-IK transform (which is just setting the Pelvis transform) and also a post-IK transform, in case the avatar is to be moved after setting all the joints.
    • A “terse” format for the LEAP/Puppetry protocol to simplify some “set” commands and reduce the data going over the LEAP data channel. It will be possible to mix these “terse” commands with long-form explicit commands.
  • Leviathan plans to break all of this work down into a set of Jira issues and place them on the kanban board for ease of viewing.

The overall aim of this overhaul is to make the protocol more easily extendible in the future.

To the above, Simon Linden added:

The data stream is radically different than what we started with. Essentially your viewer will do the work for your avatar: send[ing] all data needed for your puppetry animations [so] the people seeing you just have to use those positions – no IK or significant processing. That should help out in the long run with crowds 
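By way of illustration, the “get” side follows the {get: [skeleton, …]} pattern described above; the “set” payload below is only an assumed shape based on the description (joint transforms, IK targets, configuration), since the exact layout was still being worked out at the time.

```python
# Illustrative request shapes only. The "get" list form is as described above;
# the keys inside "set" are assumptions, not the finalised protocol.

get_request = {"get": ["skeleton"]}

set_request = {
    "set": {
        "joint_state": {                      # explicit parent-relative transform
            "mWristLeft": {"rotation": [0.0, 0.0, 0.383, 0.924]},
        },
        "ik_targets": {                       # or hand the viewer an IK goal instead
            "mWristRight": {"position": [0.3, -0.1, 0.2]},
        },
    }
}
```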

Example Script

Simon Linden has produced a simple example script that has been pushed to the Leap repository:

  • It reads a JSON file and sends that puppetry data to the viewer.
  • Using it, it is possible to edit some values, save the JSON text file, and see the bones move in response.
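The core of such a script is a simple watch-and-forward loop, sketched below. This is not Simon’s script: the send() function here is a stand-in for the actual LEAP transport (which the real plug-ins take from the framework in the secondlife-3p repositories), and the message wrapper is an assumption.

```python
# Not the actual example script: a sketch of the same watch-and-forward idea.
# send() stands in for the real LEAP transport; the message shape is assumed.

import json, os, time

def send(message):
    print(message)                       # placeholder for the LEAP channel

def watch(path, interval=0.5):
    last_mtime = 0.0
    while True:
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:          # file was saved: forward its contents
            last_mtime = mtime
            with open(path) as f:
                send({"puppetry_data": json.load(f)})
        time.sleep(interval)

# watch("puppetry_pose.json")
```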

In Brief

  • BUG-232764 “[PUPPETRY] [LEAP] Puppetry should be able to ‘Get’ and ‘Set’ avatar camera angle” has been raised to go with the protocol overhaul, and while it has yet to be formally accepted, has been viewed as a good idea by the Puppetry team.
  • Puppetry does not support physics feedback or collisions as yet, and work for it to do so is not on the short list of “things to do next”.
  • There is currently an issue of “near-clipping” when using a first-person (e.g. Mouselook) view together with puppetry (so, for example, holding a hand up in front of your avatar’s face in Mouselook results in the hand being clipped and not fully rendering). This is believed to be an artefact of the viewer still rendering the head (even though unseen when in first-person view), and this interfering with rendering near-point objects like hands. The solution for this is still TBD.

Date of Next Meeting

  • Thursday, October 27th, 2022, 13:00 SLT.