2022 week #50: CCUG and TPVD meetings summary

Power up for Charge, October 2022 – blog post
The following notes were taken from:
  • My audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, December 15th 2022 at 13:00 SLT.
  • My chat log transcript and the video recording by Pantera Północy of the Third-Party Viewer Developer (TPVD) meeting held on Friday, December 16th, 2022 at 13:00 SLT. My thanks to her for the video (embedded towards the end of this article).
These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar. Please note that the following is a summary of the key topics discussed in the meetings, not a full transcript of either meeting.

Official Viewers Status

[Video: 1:44-2:13]

Available Viewers

  • On Friday, December 16th, Linden Lab issued the Maintenance Q (Quality) RC viewer, version 6.6.9.577220, with several new features and various fixes.
  • On Wednesday, December 14th, the Love Me Render (LMR) 6 graphics improvements project viewer updated to version 7.0.0.577157.
The rest of the current crop of official viewers is as follows:
  • Release viewer: version 6.6.8.576863 – formerly the Maintenance P (Preferences, Position and Paste) RC viewer – promoted on Monday, December 12 – NEW.
  • Release channel cohorts:
    • Performance Floater / Auto-FPS RC viewer, version 6.6.8.576737, November 28.
    • VS 2022 RC viewer, version 6.6.8.576310, issued November 4 – utilises Visual Studio 2022 in the Windows build tool chain.
  • Project viewers:
    • Puppetry project viewer, version 6.6.8.576972, December 8.
    • PBR Materials project viewer, version 7.0.0.576966, December 3.
      • This viewer will only function on the following Aditi (beta grid) regions: Materials1, Materials Adult, and Rumpus Room 1 through 4.

CCUG Specific

glTF Materials and Reflection Probes

Project Summary
  • To provide support for PBR materials using the core glTF 2.0 specification (Section 3.9) and mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold (a minimal example of such a material appears after this list).
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1, Materials Adult, and Rumpus Room 1 through 4.
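
By way of illustration only – how LL will package such data as a tradeable SL asset is their own design – this is a minimal glTF 2.0 metallic-roughness material of the kind Section 3.9 of the specification describes, expressed as the Python dict its JSON form deserialises to:

    import json

    # Minimal glTF 2.0 metallic-roughness material (spec Section 3.9).
    # The "index" values reference entries in the glTF file's textures array.
    material = {
        "name": "BrushedMetal",
        "pbrMetallicRoughness": {
            "baseColorFactor": [1.0, 1.0, 1.0, 1.0],
            "baseColorTexture": {"index": 0},
            "metallicFactor": 1.0,
            "roughnessFactor": 0.35,
            "metallicRoughnessTexture": {"index": 1},
        },
        "normalTexture": {"index": 2},  # tangent-space normals; hence mikkTSpace tangents
        "emissiveFactor": [0.0, 0.0, 0.0],
    }

    print(json.dumps({"materials": [material]}, indent=2))
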
Status
  • Please also see previous CCUG meeting summaries for further background on this project.
  • The focus remains on bug and regression issue fixing within the viewer and quality of life improvements.
  • There has been discussion on revising the workflow for setting reflection probes, and on providing a debug setting to make things easier to see when manipulating reflection probes.
    • This has been prompted by those testing the PBR viewer setting up their own reflection probes incorrectly (e.g. by making the probe sphere / cube shiny prior to converting it to a reflection probe – which should NOT be done), failing to achieve the anticipated results, and then filing bug reports.
    • It is possible that some of the required workflow could be factored into the UI itself to offer a more intuitive sense of how reflection probes are supposed to work.
    • An alternative to this might be to introduce a new “reflection probe” prim type, which includes all the core parameters – however the additional work and messaging required to achieve this would lead to a further lengthening of the project’s development time.
    • Tutorials to help people get to grips with reflection probes (and, I presume, PBR as a whole) are “in the works”.
  • Screen Space Reflections (SSR): Geenz Linden is working on this within the PBR viewer. Whilst a checkbox for SSR has been added to the viewer, there is further work to be done on the integration and rendering sides (e.g. getting SSR to work on water & transparent surfaces).
  • Hardware profiling / optimising the viewer is still on-going (although it is likely that whatever is done, SSR will result in a noticeable performance hit if enabled).
  • [TPV Video: 9:11-10:44] The decision has yet to be made on whether or not to axe the forward rendering (i.e. non-ALM) path from the viewer.
    • This decision is awaiting more information on hardware performance on lower-spec systems using the updated deferred rendering (i.e. ALM) rendering path.
    • It is known that Mac rendering performance on the PBR viewer is particularly bad, which may be down to a configuration issue.

In Brief

  • Mirrors (in some capacity) are described as the “very next thing” the graphics team will commence work on after the PBR / reflection probes work gets to RC status (as Vir Linden pointed out, Mirrors currently require more reflection on how they are to be implemented!).
  • The meeting in places broadened into general discussion on content breakage (and the need, where possible, to maintain the functionality of content users have purchased in the expectation of its longevity); the benefits / disadvantages of morph bones versus blend shapes for customising the avatar shape (and the possible routes to an “avatar 2.0”); the need for a better animation system (or even an actual animation system); etc.
    • Much of this was more esoteric in nature at this point in time, although the likes of Puppetry are laying the foundations for broader animation work; as is the glTF 2.0 specification, which is the baseline for the PBR / reflection probes work and will be expanded upon in future projects – such as mesh uploads.
    • In terms of animation systems, some of the groundwork is already in place, inasmuch as SL already effectively treats morph bones and animated bones as individual animation tracks, offering the potential for this to be “unwrapped” and moved to a glTF approach to animation management.
  • Some of the above conversation touched on the ideas of market renewal / fragmentation. For example: the introduction of reflection probes offers a degree of market renewal for creators of buildings, skyboxes, etc., through the provisioning of new builds (or new versions of existing builds) leveraging the capability; however, the adoption of an upload schema for fully customised avatar skeletons might lead to greater market fragmentation (how do you ensure, for example, that human animation/pose A will work equally on custom human (style) avatar skeletons W, X, Y and Z, or will they each require custom animations?).

TPVD Meeting Specific

Inventory Thumbnails

[Video: 2:25-7:08]
  • A new project (commencing in 2023) to add thumbnail previews to inventory, allowing users to see a small image of a given object within inventory (thumbnails can be set for individual items or, if preferred, an entire folder).
  • The first phase of the work is determining how to generate the thumbnail images in the first place, on a manual or (preferably) automated basis, and then ensuring each thumbnail maintains an association with the object to which it relates (e.g. so if an item is sold or transferred to another user, the thumbnail goes with it) – an illustrative sketch follows this list.
  • Once this has been decided, the next phase will be to build out the UI so that such thumbnails can be viewed from inventory.
  • It is likely that thumbnails will have a fixed / limited resolution, and will not be subject to any upload fee.
  • This work:
    • Is not an adjunct to (or related to) the Outfit previews currently available in the viewer.
    • Will not prevent creators from including high resolution images with their products if they wish.
    • Will only apply to inventory objects where it makes sense (e.g. textures and notecards would likely be excluded).
    • Is seen as a possible foundational piece for adding new fields to the inventory database, which could open the door to further information fields being added to inventory in the future.
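
How LL will actually generate and store these thumbnails is, as noted, still to be determined. Purely as an illustration of the fixed-resolution idea, an automated thumbnailing step might look like the following in Python with Pillow (the size is a hypothetical value, not one confirmed by LL):

    from PIL import Image  # pip install Pillow

    THUMB_SIZE = (256, 256)  # hypothetical fixed resolution

    def make_thumbnail(src_path: str, dst_path: str) -> None:
        """Shrink an image to fit THUMB_SIZE, preserving aspect ratio."""
        with Image.open(src_path) as im:
            im.thumbnail(THUMB_SIZE)  # in-place; never enlarges
            im.save(dst_path, "PNG")

    make_thumbnail("object_snapshot.png", "object_thumb.png")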

GitHub Move “Phase 2”

[Video: 7:12-8:18]
  • Following the switch-over to using GitHub for viewer code repositories on Monday, November 21st, 2022, work is now progressing on “Phase 2”.
  • This involves replacing the remaining TeamCity operations with GitHub Actions for viewer builds (so those pulling an official viewer repo will be able to build it directly).
  • This work will also incorporate the rebuilding of those third-party libraries involved in the viewer build process, as and where required.

In Brief

  • [Video 16:10-18:41] LL has been digging into viewer crash rates by operating system, with the note that the number of crashes on macOS appears to be disproportionately high. It is hoped that if the underlying causes can be readily identified, these issues can be fixed quickly. It was not clear at the meeting whether those TPVs supporting macOS are seeing a similarly elevated crash rate.
  • [Video 20:40-21:35] It is estimated that the split between viewer operating systems for the official viewer is roughly:
    • 1-2% Linux (although this does not include running the Windows viewer under emulation on Linux).
    • 6% Mac.
    • The rest: Windows.
    • Firestorm appears to mirror the above.
    • [Video: 35:08-36:10] It is possible that the work on the viewer library refresh will allow LL to look again at the issue of providing a Linux viewer build.
  • The meeting included a general discussion on operating systems and their variants (64-bit vs. 32-bit), and the lack of an official Linux build.
  • A reminder that Microsoft ceases official support for Windows 8 on January 10th, 2023. This means that from that date, Windows 8 will no longer be officially supported by Second Life as a viewer operating system, and LL will not guarantee the viewer will run as expected on Win 8 going forward from that date.

Linden Lab Holiday Closure

A reminder that Linden Lab will be effectively closed (outside of Support cover) from end of business (PST) on Friday, December 23rd, 2022 through until start of business (PST) on Monday, January 2nd, 2023.

Next Meetings

  • CCUG: Thursday, January 12th, 2023.
  • TPVD: Friday, January 20th, 2023.

2022 Puppetry project week #49 summary

Puppetry demonstration via Linden Lab – see below. The demo video carries the LL comment, “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from the chat log and audio recording of the Thursday, December 8th Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam, using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system (a very rough sketch of the plug-in side appears after this summary).
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
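
For those unfamiliar with LEAP: a plug-in is an external program the viewer launches, with the two exchanging length-prefixed LLSD messages over the plug-in’s stdin / stdout. As a very rough sketch of the plug-in side – the framing is as I understand it, the message content below is purely illustrative, and the secondlife/leap repository remains the authoritative reference:

    import sys

    def send(llsd_notation: str) -> None:
        # LEAP frames each message as "<byte length>:<LLSD payload>".
        payload = llsd_notation.encode("utf-8")
        sys.stdout.buffer.write(str(len(payload)).encode("ascii") + b":" + payload)
        sys.stdout.buffer.flush()

    # Illustrative message only - not the actual puppetry schema.
    send("{'pump':'puppetry','data':{'joint':'mWristLeft'}}")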

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board of open issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer and Plug-in Updates

  • A new version of the Puppetry project viewer – version 6.6.8.576972 – was issued on December 8th, and is available on the Alternate Viewers page (until a further version is issued).
    • This version includes an overhaul of the protocol used between LEAP plug-ins and the viewer. For example, Inverse Kinematics calculations are done earlier in the process, which will improve viewer performance when more than one avatar is using Puppetry.
    • Please be certain to use both a new viewer and a new set of plug-ins from https://github.com/secondlife-3p, and update any projects or code you might be working on.
    • This version may have a crash bug.
  • Leviathan Linden is still working on updating the wiki documentation to reflect the new API.

Leviathan Linden demonstrating the use of puppetry to move his avatar whilst doing the “pane of glass” mime in front of a suitable capture device. Note that his legs remain static (moving in line with his hips), as puppetry does not (yet) support full body tracking.

We changed the way Puppetry expects to get its data for two reasons: 1) we want to only do IK for your own avatar, then just send the joint rotations to everybody else; 2) if someone writes a plug-in that happens to know best what all the joint rotations should be (e.g. it has done its own IK, or is doing full mocap) then it can just specify all parent-frame rotations of the joints. So, now that THAT plug-in mode is unblocked, we can start trying to fix our own IK.

– Leviathan Linden, explaining the changes to the way puppetry data is managed by the viewer
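
In other words, a plug-in that has already solved its own IK (or is doing full mocap) can now hand the viewer a complete set of parent-frame joint rotations. Purely to illustrate the shape of such data – the field and joint layout below is illustrative, not the actual LEAP schema:

    # One puppetry update from a hypothetical "plug-in does the IK" module:
    # parent-frame rotations as [x, y, z, w] quaternions, keyed by joint name.
    update = {
        "joint_state": {
            "mShoulderLeft": {"rotation": [0.0, 0.0, 0.259, 0.966]},
            "mElbowLeft":    {"rotation": [0.0, 0.191, 0.0, 0.982]},
            "mWristLeft":    {"rotation": [0.0, 0.0, 0.0, 1.0]},
        },
    }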

In Brief

  • There are currently no updates on the Inverse Kinematics (IK) work; it is described as being “hard”.
  • It has been suggested that the viewer could “do more” in respect of IK, etc.
    • However, this information needs to be transmitted to other viewers so they are able to “see” what is going on, which requires messaging and updates through the simulator; this can lead to what the viewer shows being less than trustworthy (due to updates being missed as a result of the bandwidth load, etc.).
    • But, if the simulator is tasked with managing all the computations for IK and sending the results to connected viewers (reducing the amount of traffic and potential loss of messaging), it puts a potentially high compute load on the simulator (imagine the simulator trying to manage the IK for 50+ avatars at an event, tracking the movement, interactions, etc.).
    • A potential trade-off here is to have a viewer run the IK calculations for the avatar it is controlling, and package that information for streaming to other viewers connected to the region’s simulator, with the simulator applying minimal sanity checking (e.g. to ensure the avatar’s position in the viewer is properly constrained to within a few metres of its location as calculated by the simulator – a minimal sketch of such a check appears at the end of these notes). On receipt, the sanity-checked data can then be played back by a receiving viewer after some minimal sanity checks of its own, without it having to carry out IK calculations for the avatar(s) it is “watching”.
  • Rider Linden is investigating having the simulator track the motion from a puppetry viewer in a way that does not impact simulator performance “too badly”. The options he’s looking at are:
    • Having the simulator suck the data out of the puppetry messages as they are sent through it, or
    • Using a new message the viewer can use to report the locations of its attachment points, with the simulator tracking these – which Rider sees as the preferred option.
    • This latter method could – among other things – be expanded to work with animations in general. In addition, if tied in to the entire animation system (the viewer computes its animation frame from puppetry + legacy animations to produce results), the results could be made available to scripts.
    • With regard to interfaces for this, Rider is of the opinion that scripts should reference attachment points, since those can act as proxies for bone locations and most general creators, scripters and residents are familiar with them; it also avoids having to introduce a new concept of bones into LSL.
  • Collisions: the above spawned a related discussion on providing additional data such as geometry information (e.g. spherical bounding radius or a shape approximation) to allow collisions to be enabled from IK, and allowing “snap to” functionality (e.g. you reach for a glass and the avatar hand snaps to it on detecting the collision).
    • However, rather than allowing physics collisions on attachment points (which might over-complicate the avatar model in the Havok physics engine), Rider suggested having a property (sphere) that could be set on an attachment to enable it as a physics volume.
My thought is that collision aware attachment points, along with being able to detect and set their positions in space will be enough to get us 90% of the way towards being able to hold hands in world.

– Rider Linden

  • Leviathan noted there was a bug whereby an avatar with non-unity scale on its bones would be broken under puppetry (misalignment of bones). This should now be fixed, and it allowed him to add the theoretical ability to modify the scale of a joint in its parent-frame (an example plug-in script to demonstrate how this works has yet to be written).
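
As a minimal sketch of the kind of positional sanity check discussed above (the tolerance is a hypothetical stand-in for “a few metres”; the actual mechanism remains undecided):

    import math

    MAX_DRIFT_M = 3.0  # hypothetical "few metres" tolerance

    def constrain_root(reported, simulated, max_drift=MAX_DRIFT_M):
        """Clamp a viewer-reported avatar position to within max_drift
        metres of the simulator's own estimate before relaying it."""
        delta = [r - s for r, s in zip(reported, simulated)]
        dist = math.sqrt(sum(d * d for d in delta))
        if dist <= max_drift:
            return tuple(reported)          # plausible: relay unchanged
        scale = max_drift / dist            # pull back onto the tolerance sphere
        return tuple(s + d * scale for s, d in zip(simulated, delta))

    print(constrain_root((128.0, 97.0, 24.0), (120.0, 97.0, 24.0)))  # -> (123.0, 97.0, 24.0)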

Date of Next Meeting

  • Thursday, January 19th, 2023, 13:00 SLT.

2022 week #48: CCUG meeting summary

Village de Roqueblanche, October 2022 – blog post
The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, December 1st 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar. This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript. Note: unfortunately, my recording software failed at the 50-minute mark of the meeting, so the end of the meeting and the after-meeting discussion were not recorded; as a result, some discussion points are missing from this summary.

Official Viewers Status

Available Viewers

On Friday, December 2nd, the PBR Materials project viewer updated to version 7.0.0.576966. This viewer will only function on the following Aditi (beta grid) regions: Materials1, Materials Adult, and Rumpus Room 1 through 4. The following reflects the rest of the current official viewers available through Linden Lab.
  • Release viewer: version 6.6.7.576223 – MFA and TOS hotfix viewer – November 1 – No change.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Performance Floater / Auto-FPS project viewer, version 6.6.8.576737, November 28.
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.8.576812 on Monday, November 28.
    • VS 2022 RC viewer, version 6.6.8.576310, issued November 4 – utilises Visual Studio 2022 in the Windows build tool chain.
  • Project viewers:
    • Puppetry project viewer, version 6.6.3.575529, issued on October 12.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.

General Viewer Notes

  • The Lab is aiming to get the Maintenance P RC viewer promoted to release status before year-end.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification (Section 3.9) and mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1, Materials Adult, and Rumpus Room 1 through 4.

Status

  • Please also see previous CCUG meeting summaries for further background on this project.
  • Bugs and regressions continue to be reported via the project viewer on Aditi, and it is now unlikely the project will advance to a release state before early 2023 (end of 2022 always seemed ambitious).
  • Despite hopes to the contrary, a lot of legacy content is impacted when rendered via the glTF PBR path, largely as a result of using linear alpha blending.
    • As a result, LL are working on trying to smooth over some of the edge cases without having to introduce a code fork in the viewer between “legacy” alpha handling and PBR alpha handling.
    • The aim remains to preserve as much of the appearance of legacy content under PBR rendering and reflection probes as possible, without necessarily being slaved to preserving its looks over time. The major exception to this would be situations where PBR rendering reveals seams between layers.
    • However, if it proves necessary, an “opt out” button will be provided to switch out of linear alpha blending.
  • mikkTSpace tangents affect all objects rendered in the PBR viewer. Unfortunately, due to the way mesh data has been stored prior to the PBR viewer, creators wishing to get exactly the same results from their mesh models in-world as they saw in their tool of choice (e.g. Blender) when building them will need to re-upload those models. This might result in a slight increase in an object’s Land Impact, if done.
    • This does mean, however, that all tangents on uploaded meshes will be correctly handled going forward.
  • Capabilities will change with the PBR viewer: for example, there will be a basic glTF materials editor within the viewer; textures will rotate around a corner, rather than the centre (per the glTF specification extension for texture transforms – see below); and the uploader will allow for the upload of individual materials from a glTF file.
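
The extension in question is KHR_texture_transform, which applies rotation about the UV origin – a corner of the texture – rather than its centre. A material’s texture reference carries it like so (again shown as the Python dict form of the JSON):

    # A baseColorTexture reference carrying a KHR_texture_transform:
    # rotation is in radians, applied about the UV origin (a corner).
    base_color_texture = {
        "index": 0,
        "extensions": {
            "KHR_texture_transform": {
                "offset":   [0.0, 0.0],
                "rotation": 0.5236,      # ~30 degrees
                "scale":    [2.0, 2.0],  # repeats per face
            }
        },
    }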

Bakes on Mesh and Materials

  • Providing materials support for Bakes on Mesh has been a long-standing request, thus far resisted by the Lab on the grounds of the impact it would have on the Bake Service – both in terms of code updates and the potential number of servers used.
  • However, at the CCUG, Runitai and Vir floated the idea of materials on Bakes on Mesh being added – but only for the PBR rendering path. This would “drastically” cut down on the amount of back-end work required to make materials on BOM possible, and would mean that all avatar wearables would be Materials-capable.
  • This is not something currently road-mapped for implementation by LL, and it would require time and effort to determine a mechanism to manage it, but if the support for it were there, it is something that might be considered.
  • If this work was carried out, it might also pave the way for terrain painting – another popular request.
  • As PBR Materials is released, it will become the focus for SL going forward; whilst efforts will be made to ensure “pre-PBR” content continues to look right, whether or not work is put into trying to “pull” legacy materials content into PBR is questionable, simply because of the layered complexity of the underpinning code, the updating / altering of which can result in content breakage.

In Brief

  • As noted in the summary of the last meeting, there are reports of the PBR project viewer generating CPU temperatures some 10% higher and GPU temperatures some 17% higher than the current release viewer.
    • LL are working on updates they hope will see any GPU increases return to the levels of the current release viewer.
    • Issues with CPU temperature are not believed to be related to PBR, but more to general texture rendering, where a couple of errors have crept in and are being actively corrected.
  • LODs and Land Impact:
    • Lower-value LODs on models: some creators attempt to “game” Land Impact by using extremely low-detail meshes as their low-end LOD models, thinking they are “never seen”; unfortunately, these are seen by people running SL on very low-end machines, and thus can account for new users feeling SL “looks rubbish”, due to the official viewer defaulting to a RenderVolumeLODFactor of 1.25 (a toy sketch of the scaling appears at the end of these notes).
    • The RenderVolumeLODFactor setting will be more dynamic by default in the upcoming Performance Floater / Auto FPS viewer, although extremely low LODs on models should still be avoided.
    • More broadly, LL is still considering how to better adjust the Land Impact system and the mesh uploader so that creators are not penalised for creating accurate LODs on their models (one of the goals originally stated for project ARCTan).
  • Linden Lab will be closed over the holiday period from end of business on Friday, December 23rd, 2022 through until start of business on Tuesday, January 3rd, 2023 (except for urgent support cases).
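
As a toy illustration of the RenderVolumeLODFactor point above (the viewer’s actual LOD switching maths has more terms; this captures only the proportionality): swap distances scale with an object’s bounding radius and with the factor, so the official default of 1.25 swaps to the lower LOD models at roughly 60% of the distance a factor of 2.0 would.

    def swap_distance(radius_m, lod_factor, k):
        # k: per-LOD constant (larger k = that LOD holds out to a greater
        # distance); the values below are arbitrary, for illustration only.
        return radius_m * lod_factor * k

    for factor in (1.25, 2.0):
        print(factor, [swap_distance(2.0, factor, k) for k in (4, 8, 16)])
    # 1.25 [10.0, 20.0, 40.0]
    # 2.0  [16.0, 32.0, 64.0]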

Next Meeting

  • Thursday, December 15th, 2022.

2022 week #46: CCUG and TPVD meeting summaries

Vue Sur Mer, September 2022 – blog post

The following notes were taken from:

  • My audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, November 17th, 2022 at 13:00 SLT.
  • My audio recording and chat log from the Third-Party Viewer Developer (TPVD) meeting held on Friday, November 18th, 2022 at 13:00 SLT. Pantera attended the meeting, but was unfortunately held up beforehand, so around the first 10 minutes are absent from her video (embedded at the end of this summary).

Both meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.

This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.

Official Viewers Status

[Video: 0:00-2:00]

Available Viewers

The following reflects the list of current official viewers available through Linden Lab.

  • Release viewer: version 6.6.7.576223 – MFA and TOS hotfix viewer – November 1.
  • Release channel cohorts:
    • Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.8.576431 on Monday, November 14.
    • VS 2022 RC viewer, version 6.6.8.576310, issued November 4 – utilises Visual Studio 2022 in the Windows build tool chain.
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.576331, issued on November 3.
      • This viewer will only function on the following Aditi (beta grid) regions: Materials1, Materials Adult, and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.3.575529, issued on October 12.
    • Performance Floater / Auto-FPS project viewer, version 6.6.5.575378, October 4.
    • Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.

General Viewer Notes

  • There are unlikely to be any major changes to the list above in week #47, as this is a short working week for the Lab due to the Thanksgiving holiday (Thursday / Friday).

GitHub Changeover and Streamlining the Code Contributions Process

GitHub Work

As previously announced, there is an initiative to improve continuous integration for the viewer and to improve the viewer deployment process.

  • For TPVs and developers, the most visible aspect of this is moving the viewer repositories from Bitbucket to GitHub. This includes the viewer code base and the other public code bases currently on Bitbucket (Autobuild, LLCA, etc.).
  • There is still no firm date as to when the actual switch-over to using the new repositories will occur, but the viewer development team is working steadily towards it, and the plan remains to provide TPVs with plenty of advance warning before making a clean cut-over to the new repositories.

Status

  • The switch-over to using GitHub is now slated for “early in the morning” (PST / SLT) on Monday, November 21st, 2022.
  • This means that from the point of switch-over, the Bitbucket repositories will be locked and carry a warning that they are no longer current, and that developers / viewer builders should use the GitHub repositories instead, as directed by the warning.
  • The master viewer branch is already up-to-date on GitHub as of Friday, November 18th, 2022.
  • Notes on the switch-over were posted to developers on the open-source developers list.
  • There is no plan in place to phase out the Bitbucket repositories immediately, but they may well be removed in time.

PBR: Materials and Reflections

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification (Section 3.9) and mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.

Status

  • Please also see previous CCUG meeting summaries for further background on this project.
  • The viewer is available as a project viewer via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1, Materials Adult, and Rumpus Room 1 through 4.
  • The focus is currently on reviewing bugs reported by those testing the viewer, and creators using glTF PBR workflows to create content are encouraged to test out their content on the project viewer to ensure things look as expected when imported into SL.
  • One aspect of SL that does require testing is EEP assets; changes have had to be made to the shaders which mean that commercial EEP settings may not render as expected (things like haze density may be modified, etc.).
    • EEP creators may therefore want to test their settings using the project viewer and file Jira issues – please include “non-PBR viewer” shots of how the settings should look, together with shots taken with the PBR project viewer, for easier comparison / understanding of the issues seen.
  • Issues with alpha blending are being investigated – particularly linear-space alpha blending and how best to carry forward “pre-PBR” alpha blending, which does not use linear space (a short sketch of the difference appears at the end of these status notes).
  • It has been reported that the PBR project viewer generates CPU temperatures some 10% higher and GPU temperatures some 17% higher – this may be the result of the viewer having to work that much harder with PBR in order to maintain frame rates.
  • Reflection probes: concern was again raised over people excessively using probes (e.g. attaching them to all their products / objects they own).
    • Providing a means for users to disable probes attached to objects they purchase has been previously discussed within the project / CCUG meetings.
    • It is hoped the latter (people just randomly setting up probes all over their house / land) can be mitigated through a mix of education and explicit warnings / actions in the viewer.
    • Concern was also raised about the potential for localised reflection probes (essentially invisible prims) to interfere with intended interactions (e.g. a probe enclosing a chair could block people’s ability to sit on the chair). If cases like this are found in testing, a request has been made for bug reports.
    • The PBR viewer does include a Build / Edit option to “ignore invisible prims”, so that when building / editing, an attempt to select an object within the volume of a reflection probe prim will ensure the object is selected, not the probe prim. However, it is acknowledged that this kind of operation needs to be better integrated with other mouse click options.
  • (TPV Meeting video)
    • It was noted that this project also marks the start of the deprecation (and eventual removal) of some real-time console performance reports, such as Fast Timers.
    • It has been reported that some of those testing the PBR viewer are finding that within the texture console, Bias is constantly being reported at 5 (which tends to cause texture thrashing in non-PBR viewers). However, within the PBR viewer a Bias of 5 should indicate that the viewer is swapping – and keeping to – a lower texture resolution, and so should not result in any texture thrashing.
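
On the linear-space alpha blending point: the same 50% blend produces a different result depending on whether it is performed directly on sRGB values (as legacy rendering effectively does) or in linear light (as the PBR path does). A short sketch of the difference:

    def srgb_to_linear(c):   # IEC 61966-2-1 transfer function
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    fg, bg, a = 1.0, 0.0, 0.5              # white over black at 50% alpha
    legacy = fg * a + bg * (1 - a)         # blended on sRGB values: 0.5
    linear = linear_to_srgb(srgb_to_linear(fg) * a + srgb_to_linear(bg) * (1 - a))
    print(legacy, round(linear, 3))        # 0.5 vs ~0.735 - a visibly lighter result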

Screen Space Reflections (SSR)

  • LL does have a “subtle” SSR system working to replace Linden water reflections as currently rendered.
  • There is some further work to be carried out before it is “viewer-ready”, with the focus being on ironing out the bumps and cleaning-up bugs.
  • It was made clear that while this implementation of SSR will be applicable to scenes as a whole, it is not intended to be a replacement for creating mirrors – and so expectations should be set accordingly.

CCUG Meeting Specific Notes

The majority of the meeting was given over to a general discussion, per the cliff-notes below:

  • VR hardware and Second Life:
    • While frame rates are not so much an issue now in broad terms, there is still the need to achieve a consistently high frame rate for VR viewing to be enjoyable – and this is still not a given with SL.
    • However, VR isn’t just about headsets; it’s about the “full” experience in using a range of associated peripherals: controllers, haptic gloves, the ability to have the “full-body experience” VR users would want and expect, such as seeing your hand and arm when going to pick something up, or being able to grip an object in your hands and feel responses from it when using it (e.g. the strike of a blade against another, or of a baseball bat against the ball).
    • The level of support for this more complete sense of immersiveness requires some extensive re-engineering of Second Life (facilitating a full and proper Inverse Kinematics system, as a single example).
    • The Puppetry project may facilitate some work towards this. However, the focus of this project is not on providing any form of in-depth VR support (it is intended to work with a broader range of peripherals, notably webcams, although use of some VR software support via OpenVR is being considered), being intended for use by as broad a cross-section of SL users as might be interested in it.
    • Another problem is determining what people want from VR: is it purely a mechanism for games, or does it need to be capable of “realistic” social interactions? (Some still proclaim games, others demand social capabilities; the truth is probably more a blending of the two, just like life in the physical realm.)
  • New users and their experience:
    • Much was made of the need for a completely new avatar system (LL is actually working on a new all-mesh “starter avatar”, but using the existing skeleton) – but overhauling / replacing the current avatar system raises its own compatibility / market issues.
    • A fair point was raised by LL concerning the use of eviction / ban scripts that specifically ban avatars that are less than X days old and / or avatars that do not have Payment Information On File.
      • These tend to get used at event spaces etc., to try to prevent griefers / trolls using throwaway accounts from accessing the space, rather than using other available moderation tools (which, admittedly tend to require human intervention with the right permissions, which might not always be available).
      • The problem here is that a “genuine” new user will often sign up, go through the new user experience, then try to go to an event – only to immediately find they are denied entry or are ejected without warning or explanation; as a result, they log out.
      • Improved tools to handle griefing and trolling are likely required – but there was no discussion on what form these might take.
  • Audio: a general conversation on the potential of “full” spatial audio, including directional (e.g. so an audio stream at a disco appears to be coming from the speaker system in the venue) and also sound volumes.
    • Nothing is currently planned for audio on the roadmap, but it was noted that the first test of using volumes (reflection probes) is in Aditi testing, and LL will obviously be seeing how well that works.
  • It was made clear that scripts will “NEVER” be allowed to create new assets (so no ability to generate notecards from a script). One of the reasons cited for this is the cost of storing assets if there were an uncontrolled ability to generate them through scripts (particularly given SL’s asset storage is already in the petabyte range).

TPVD Meeting Specific Notes

Atlassian Jira and Bug Reporting

  • Atlassian has announced it will be restructuring how it licenses the Jira bug reporting product from 2024 onwards. Currently, LL can run a public-facing Jira reporting system that effectively allows every Second Life user to create an account on it to view all the public SL bug reports and feature requests.
  • Under the 2024 pricing restructure (which will be pretty much per-account), this will be prohibitively expensive for LL, so they are starting the process of looking for an alternative means to provide some form of user-facing bug reporting mechanism.
    • One option being considered is to continue to use Jira internally (where the number of licensed accounts can be controlled), and use a different mechanism for public bug reporting, with a bridge between the two.
    • Another is to move away from Jira entirely.
  • No decision has been made as yet, but as Firestorm also use Jira, the suggestion has been made that the two develop a joint plan to resolve the situation in terms of future tools.
  • The topic led to a general discussion on possible options, but no conclusions were drawn – and there is sufficient lead-time on the matter for various options to be looked into, allowing for the time required to transition to any alternative that might be selected, and to find the means to keep the wealth of information on the SL Jira available for reference.

General Notes

  • There is a report that Active Voice on at least some viewers is not updating correctly (e.g. following a teleport), and that it is proving easier to enable / disable Voice morphing via the menu than to disconnect / reconnect Voice in order to correct this.
    • The issue here is not so much which is the preferred method to correct the problem, but whether the Active Voice list failing to update is a widespread issue with viewers, and whether it can be reproduced on the official viewer & a bug report raised.
  • Beq Janus has written a post on alpha blending issues (see: Alpha blend issues? Get them sorted – subtitled Making it easier to avoid alpha-clashes with Second Life and OpenSim outfits). Rather than have me decimate the discussion, please refer to the video from 35:56 through to the end for more.

Next Meetings

  • CCUG: Thursday, December 1st, 2022.
  • TPVD: Friday, December 23rd, 2022.

2022 Puppetry project week #45 summary

Puppetry demonstration via Linden Lab – see below. The demo video carries the LL comment, “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from the chat log and audio recording of the Thursday, November 10th Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam, using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board of open issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer and Plug-in Updates

  • The puppetry team is working on updating the viewer and LEAP plug-in, and an update to the project viewer is liable to be released in week #46.
  • This viewer includes:
    • The ability to move the avatar pelvis.
    • The ability to stretch other bones – although this is awaiting testing at the time of writing. The reference frame scale is that of the normal puppetry targets, so the data would have to be scaled correctly; additional work is therefore required to provide a way for the plug-in to get the data necessary to know how to scale individual joint bones (e.g. change their parent-relative positions) – see the sketch after this list.
  • It still won’t be possible to clear puppetry target / config data; this remains on the team’s “to do” list.
  • Aura Linden noted the new LEAP module initialises on-demand, rather than via instantiation (as with puppetry). LL will provide demos of using the new module.
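
A minimal sketch of the scaling issue noted in the list above (assumed maths only, not the plug-in API): scaling a joint in its parent frame also scales its children’s parent-relative offsets, so any puppetry target positions expressed for those joints have to be scaled to match.

    def scale_offset(parent_relative_pos, scale):
        """Scale a child joint's offset within its parent's frame."""
        return tuple(component * scale for component in parent_relative_pos)

    elbow_offset = (0.0, 0.0, -0.26)        # metres; illustrative value
    print(scale_offset(elbow_offset, 1.2))  # 20% longer limb: (0.0, 0.0, -0.312)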

Kinect v2 Support

  • Simon Linden has been working on an experimental plug-in taking inputs from a Kinect v2 device.
  • He describes the code as being “pretty rough” and using only basic geometry, but it allows avatar elbows / arms to be moved around.
  • This work in part utilises the data syntax described in OPEN-366 “Simplify Puppetry Configuration Through LEAP”, the new protocol proposed by Leviathan Linden as per previous meeting notes.
  • The code is not ready to be pushed to a public branch as yet, and doing so is somewhat dependent on feedback from developers / creators.

Avatar Constraints / Interactions

  • OPEN-368 “[Puppetry] [LEAP]: Location Constraints” – LL have indicated there is “much” within this Jira they would like to support “eventually”.
  • The feeling at the Lab is that constraints can “definitely” be improved – although what this may look like has yet to be properly determined. However, the general feeling is that there should be constraint data associated with a given skeleton, so that, for example, a human-centric model is not simply imposed on the SL avatar.
  • A good portion of the meeting was given over to a general discussion of how best to handle puppetry and avatar animations – and the potential need to move away from canned animations and provide a more direct means of avatar animation.
  • Avatar-object interactions are a potentially complex issue (e.g. how can an avatar accurately take and hold an in-world object – say an apple – through puppetry? If the apple is a physical object, does it collide when held? Does it become an attachment? If the latter, how is this registered, and how is the object properly released from the attachment system? Etc.).
    • A suggestion for handling avatars holding objects is to have some form of temp-attach system, or to use a key frame motion (KFM) system to match the object’s position to the avatar’s hand, allowing the avatar to hold the object without directly “owning” it (thus also avoiding permissions system issues).
  • Collisions also raise questions: avatar arms currently do not collide, and so would not under puppetry. So what about cases of simple interactions, such as flicking a light switch? These are not “proper” collisions per se, but are rather event-triggered; how can this be managed if there is no actual collision between the scripted object and an avatar’s arm / hand to trigger the associated event?

In Brief

  • It has been suggested that a version number be included in puppetry-related messaging, so that changed message formats are not read by viewer versions unable to parse them, thus reducing the risk of crashes during development / testing.
  • It has been indicated that puppetry will eventually have LSL support for LEAP, although what form this will take and how the simulator will track things is still TBD, as animations are currently entirely viewer-side and untracked by the simulator.
  • There is concern that the potential of the puppetry project isn’t being fully understood by creators (and others), as it is being seen more as a “VR thing” than an ability to much improve avatar animations and their supporting systems / constraints, including the IK system.
  • How to manage network latency also formed a core discussion, together with making better use of the Havok physics sub-licence to allow the viewer to do a lot more of the work, simply streaming the results through the simulator to other viewers.

Date of Next Meeting

  • Thursday, December 8th, 2022, 13:00 SLT.

2022 Puppetry project week #43 summary

Puppetry demonstration via Linden Lab – see below. The demo video carries the LL comment, “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from the chat log and audio recording of the Thursday, October 27th Puppetry Project meeting, held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day and at the same time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam, using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth tracking through a webcam, which have yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board of open issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Protocol Overhaul

At the previous meeting, Leviathan Linden noted the project team is going to overhaul the Puppetry/LEAP protocol. Since then:

OpenXR Support

Leviathan Linden asked for feedback on what the requested “OpenXR support” means to those requesting it – e.g.: is it to run an OpenXR app and have a VR experience in SL, or is it to run an OpenXR app as a plug-in to provide measurement input to Puppetry?

The general response was a mix of both:

  • To generally provide the means for “proper” hardware support for motion capture, such that puppetry isn’t just a “best guess” response via a webcam.
  • To allow for more accurate interactions between avatars and objects, eventually moving to provide full support for VR headsets and controllers (requiring the ability to interact with scripted devices – operating levers, controls, etc. – which could be correctly interpreted and acted upon by said scripts).

Currently, LL are more inclined to consider OpenXR support as a part of the Puppetry work, whilst regarding it as a potential step towards wider VR support in SL in the future.

Avatar Constraints / Interactions

The above question led to a broader discussion on avatar-to-avatar and avatar-to-object interactions starting with the avatar constraints / collision system.

  • As they stand right now, avatar constraints and collisions within SL have been adequate for the platform, but are lacking (collisions, for example, have no concept of the avatar’s arms / legs, limiting interactions with them and other objects).
  • OPEN-368 “[Puppetry] [LEAP]: Location Constraints” is a feature request outlining the benefits of overhauling the SL avatar constraints system to allow better interactions with objects, etc. This is currently open to those wishing to add further comments and feedback.
  • The question was raised as to how fast / reliable the required communications (including all the required bone interactions) could be made, in order to ensure adequate / accurate response times for actions (e.g. so that when shaking hands, the hands of each avatar arrive at the same point at the same time, and are seen to be shaking in both viewers).
  • Also discussed was determining how “reactions” might best be defined – could it be as “simple” as a pre-set animation?
  • One issue with this – interactions, OPEN-368, etc. – is that direct hooks from Puppetry to LSL had been seen as outside the scope of the project, simply because puppetry and the LEAP API are entirely viewer-side, and LSL is simulator-side. However, the discussion opened a debate on whether some means for this interaction should be provided, with two options being put forward:
    • Broadening the LEAP protocol, essentially using it to make the viewer scriptable with plug-ins that run on their own threads.
    • Providing a specific LSL function that would enable LSL to be able to communicate / interact with the LEAP protocol / JSON (as is the case with the RLV / RLVa APIs used by some third-party viewers).
    • Both of these approaches were seen as potentially “doable”, if beyond the intended scope of the puppetry project.
  • A further issue with interactions and bone tracking (which would be required for accurate avatar-based interactions) is that bone tracking via LSL is at best limited, if not non-existent; this raised the subject of possibly using attachment points as a proxy.
    • An additional problem here is whether or not it is possible to track the location of attachment points in 3D space relative to any animation the avatar is playing (e.g. if an animation causes the avatar to raise their arm, is it possible to check the position of the wrist point?). This is currently something of an unknown, as it would either:
      • Require the simulator to inject a lot of additional calculations for joint and attach positions;
      • Or require a new (optional) protocol where the viewer would just supply its in-world positions at some frame rate – which would require some calculation overhead on the part of the viewer (a hypothetical shape for such a message is sketched after this list);
      • Or – given work is in hand to add the in-world camera position relative to the viewer, and also the avatar’s world orientation and look-at target – provide a straight dump of the animation mixdown together with the skeleton data, enabling the processing to be carried out in a module rather than the viewer.
  • As a result of these discussions, time has been requested to investigate the various options (which will likely include a determination of what, if anything, is to be included in the current project in terms of these additional capabilities).
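
Purely as an illustration of the second option above – an optional viewer-to-simulator report of attachment point positions at some frame rate – such a message might carry data of this shape; every name and field here is hypothetical, not an actual SL message:

    # Hypothetical attachment-point report, sent at an agreed frame rate.
    attachment_report = {
        "agent_id": "00000000-0000-0000-0000-000000000001",   # dummy UUID
        "frame_time": 0.0,                                    # illustrative timestamp
        "attachments": {
            "RIGHT_HAND": {"world_pos": [128.4, 97.2, 23.8]},
            "LEFT_HAND":  {"world_pos": [128.1, 96.8, 23.8]},
        },
    }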

Date of Next Meeting

  • Thursday, November 10th, 2022, 13:00 SLT.