2022 Puppetry project week #49 summary

Puppetry demonstration via Linden Lab – see below. The demo video carries the LL comment: “We have some basic things working with a webcam and Second Life, but there’s more to do before it’s as animated as we want.”

The following notes have been taken from the chat log and audio recording of the Thursday, December 8th Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG) meeting, on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system (a rough sketch of what a plug-in streams to the viewer is given after this list).
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g. hand movements from side-to-side, but not forward / back), due to limitations in tracking depth through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • No special requirements beyond the project viewer are needed to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
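For those curious about what a Puppetry plug-in actually passes to the viewer, the following is a minimal, purely illustrative sketch: a plug-in repeatedly sends small per-joint updates (here, a moving target for the left wrist) which the viewer turns into avatar motion. The joint name, dictionary keys and the send_to_viewer() helper are placeholders of my own, not the real LEAP module – refer to the official plug-in examples at https://github.com/secondlife-3p for the actual format.

```python
# A minimal, purely illustrative sketch of the sort of data a Puppetry LEAP
# plug-in streams to the viewer. The joint name, dictionary keys and the
# send_to_viewer() helper below are placeholders for illustration only --
# see the official plug-in examples at https://github.com/secondlife-3p
# for the actual module and message format.
import json
import math
import time

def send_to_viewer(payload: dict) -> None:
    """Stand-in for the real LEAP transport between the plug-in and the viewer."""
    print(json.dumps(payload))

def main() -> None:
    t = 0.0
    for _ in range(10):                      # ten sample "frames"
        # A real tracker (e.g. webcam hand detection) would supply this
        # position; here we simply wave a left-hand target side to side.
        x = 0.2 * math.sin(t)
        payload = {"mWristLeft": {"target": [x, 0.3, 0.1]}}   # assumed keys
        send_to_viewer(payload)
        t += 0.1
        time.sleep(0.1)

if __name__ == "__main__":
    main()
```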

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board of known issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other), or to offer a specific feature that might be used with Puppetry, should:

Further Information

Meeting Notes

Viewer and Plug-in Updates

  • A new version of the Puppetry project viewer – version 6.6.8.576972 – was issued on December 8th, and is available on the Alternate Viewers page (until a further version is issued).
    • This version includes an overhaul of the protocol used between LEAP plug-ins and the viewer. For example, Inverse Kinematics calculations are now done earlier in the process, which should improve viewer performance when more than one avatar is using Puppetry (the new plug-in data flow is sketched after Leviathan’s comments below).
    • Please be certain to use both a new viewer and a new set of plug-ins from https://github.com/secondlife-3p, and update any projects or code you might be working on.
    • This version may have a crash bug.
  • Leviathan Linden is still working on updating the wiki documentation to reflect the new API.
Leviathan Linden demonstrating the use of puppetry to move his avatar whilst doing the “pane of glass” mime in front of a suitable capture device. Note that his legs remain static (moving in line with his hips) as puppetry does not (yet) support full body tracking
We changed the way Puppetry expects to get its data for two reasons: 1) we want to only do IK for your own avatar, then just send the joint rotations to everybody else; 2) if someone writes a plug-in that happens to know best what all the joint rotations should be (e.g. it has done its own IK, or is doing full mocap) then it can just specify all parent-frame rotations of the joints. So, now that THAT plug-in mode is unblocked, we can start trying to fix our own IK.

– Leviathan Linden, explaining the changes to the way puppetry data is managed by the viewer
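To make the change a little more concrete, below is a rough sketch of the plug-in mode Leviathan describes: rather than handing the viewer end-effector targets to solve via IK, a plug-in that already knows every joint’s orientation (e.g. from its own IK or full mocap) supplies parent-frame rotations directly. The joint names, the “rotation” key and the quaternion packing are my assumptions only – treat the updated plug-in examples at https://github.com/secondlife-3p and the revised wiki documentation as authoritative.

```python
# Illustrative only: sending per-joint parent-frame rotations instead of IK
# targets. Joint names, the "rotation" key and the (x, y, z, w) packing are
# assumptions, not the confirmed Puppetry message format.
import json
import math

def euler_to_quaternion(roll: float, pitch: float, yaw: float):
    """Convert Euler angles (radians) into a quaternion (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        sr * cp * cy - cr * sp * sy,
        cr * sp * cy + sr * cp * sy,
        cr * cp * sy - sr * sp * cy,
        cr * cp * cy + sr * sp * sy,
    )

# Hypothetical mocap output: parent-frame Euler rotations per joint (radians).
mocap_frame = {
    "mShoulderLeft": (0.0, 0.4, 0.0),
    "mElbowLeft":    (0.0, 0.9, 0.0),
    "mWristLeft":    (0.1, 0.0, 0.0),
}

payload = {joint: {"rotation": euler_to_quaternion(*angles)}
           for joint, angles in mocap_frame.items()}

print(json.dumps(payload))   # stand-in for the actual LEAP send to the viewer
```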

In Brief

  • There are currently no updates on the Inverse Kinematics (IK) work; it is described as being “hard”.
  • It has been suggested that the viewer could “do more” with respect to IK, etc.
    • However, this information then has to be transmitted to other viewers so they can “see” what is going on, which requires messaging and updates through the simulator; missed updates and the resulting bandwidth load can leave the viewer less than trustworthy in terms of what it is showing versus what is actually going on.
    • But if the simulator is instead tasked with managing all of the IK computations and sending the results to connected viewers (reducing the amount of traffic and the potential loss of messaging), it places a potentially high compute load on the simulator (imagine the simulator trying to manage the IK for 50+ avatars at an event, tracking their movements, interactions, etc.).
    • A potential trade-off is to have each viewer run the IK calculations for the avatar it is controlling, and package that information for streaming to other viewers connected to the region’s simulator, with only minimal sanity checking (e.g. ensuring the avatar’s position as reported by the viewer is constrained to within a few metres of its location as calculated by the simulator – a rough sketch of such a check is given at the end of this section). On receipt, the sanity-checked data can be played back without the receiving viewer having to carry out IK calculations for the avatar(s) it is “watching”, beyond some minimal sanity checks of its own.
  • Rider Linden is investigating having the simulator track the motion from a puppetry viewer in a way that does not impact simulator performance “too badly”. The options he’s looking at are:
    • Having the simulator suck the data out of the puppetry messages as they are sent through it, or
    • Using a new message by which the viewer reports the locations of its attachment points, with the simulator tracking these – which Rider sees as the preferred option.
    • This latter method could – among other things – be expanded to work with animations in general. In addition, if it were tied in to the entire animation system (the viewer computing its animation frame from puppetry + legacy animations) to produce results, then those results could be made available to scripts.
    • In regard to interfaces for this, Rider is of the opinion that scripts should reference attachment points, since these can act as proxies for bone locations, most general creators, scripters and residents are already familiar with them, and it avoids having to introduce a new concept of bones into LSL.
  • Collisions: the above spawned a related discussion on providing additional data, such as geometry information (e.g. a spherical bounding radius or a shape approximation), to allow collisions to be enabled from IK, and to allow “snap to” functionality (e.g. you reach for a glass and the avatar’s hand snaps to it on detecting the collision – a simple sketch of this follows the quote below).
    • However, rather than allowing physics collisions on attachment points themselves (which might over-complicate the avatar model in the Havok physics engine), Rider suggested having a property (a sphere) that could be set on an attachment to enable it as a physics volume.
My thought is that collision aware attachment points, along with being able to detect and set their positions in space will be enough to get us 90% of the way towards being able to hold hands in world.

– Rider Linden
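For illustration, the “snap to” idea amounts to a simple geometric test: if a hand (or other attachment point) driven by IK ends up inside an object’s bounding sphere, it is snapped onto the sphere’s surface. The sketch below is just that geometry – it uses no Second Life API and is not drawn from any actual implementation.

```python
# Purely illustrative: snap a hand position onto an object's spherical
# bounding radius when it penetrates the sphere (i.e. a collision).
import math

def snap_to_sphere(hand, centre, radius):
    """Return the hand position, snapped onto the sphere's surface when it
    lies inside the sphere; otherwise return it unchanged."""
    dx = hand[0] - centre[0]
    dy = hand[1] - centre[1]
    dz = hand[2] - centre[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist >= radius or dist == 0.0:
        return hand                          # no collision (or degenerate case)
    scale = radius / dist
    return (centre[0] + dx * scale,
            centre[1] + dy * scale,
            centre[2] + dz * scale)

# Example: a hand 5 cm from the centre of a glass with a 10 cm bounding
# sphere snaps outward onto the sphere's surface.
print(snap_to_sphere((0.95, 0.0, 1.10), centre=(1.00, 0.0, 1.10), radius=0.10))
```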

  • Leviathan noted there was a bug whereby an avatar with a non-unity scale on its bones would be broken under puppetry (misalignment of bones). This should now be fixed, and fixing it allowed him to add the theoretical ability to modify the scale of a joint in its parent frame (an example plug-in script to demonstrate how this works has yet to be written).
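Finally, returning to the sanity checking mentioned earlier in these notes: the sketch below shows the kind of minimal check described – constraining a viewer-reported avatar position to within a few metres of the position the simulator itself has calculated before the data is relayed or played back. The tolerance value and function are purely illustrative, not taken from any Second Life code.

```python
# Illustrative only: clamp a viewer-reported position so it cannot drift
# more than a few metres from the simulator's own position for the avatar.
MAX_DRIFT = 3.0   # metres -- an assumed tolerance, not an official value

def sanity_check_position(reported, simulated, max_drift=MAX_DRIFT):
    """Return the reported position if it is within max_drift metres of the
    simulated position; otherwise pull it back onto that limit."""
    delta = [r - s for r, s in zip(reported, simulated)]
    dist = sum(d * d for d in delta) ** 0.5
    if dist <= max_drift:
        return tuple(reported)
    scale = max_drift / dist
    return tuple(s + d * scale for s, d in zip(simulated, delta))

# A position reported 8 m away from the simulator's own is pulled back to 3 m.
print(sanity_check_position((10.0, 4.0, 22.0), (2.0, 4.0, 22.0)))  # (5.0, 4.0, 22.0)
```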

Date of Next Meeting

  • Thursday, January 19th, 2023, 13:00 SLT.