The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, December 1st 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.
This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.
Note: unfortunately, my recording software failed at the 50 minute mark of the meeting, so the end of the meeting and the after-meeting discussion were not recorded, so some discussion points are missing from this summary.
Official Viewers Status
Available Viewers
On Friday, December 2nd, the PBR Materials project viewer updated to version 7.0.0.576966. This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
The following reflect the rest of the current official viewers available through Linden Lab.
Release viewer: version 6.6.7.576223 – MFA and TOS hotfix viewer – November 1 – No change.
Performance Floater / Auto-FPS project viewer, version 6.6.8.576737, November 28.
Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.8.576812 on Monday, November 28.
VS 2022 RC viewer, version 6.6.8.576310, issued November 4 – utilises Visual Studio 2022 in the Windows build tool chain.
Project viewers:
Puppetry project viewer, version 6.6.3.575529, issued on October 12.
Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
General Viewer Notes
The Lab is aiming to get the Maintenance P RC viewer promoted to release status before year-end.
glTF Materials and Reflection Probes
Project Summary
To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
To provide support for reflection probes and cubemap reflections.
The overall goal is to provide as much support for the glTF 2.0 specification as possible.
The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1; Materials Adult and Rumpus Room 1 through 4.
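For context, a PBR material under the core glTF 2.0 specification is a small structure built around the metallic-roughness model. A minimal sketch, expressed here as a Python dict purely for illustration (the material name, texture indices and factor values are all illustrative, not taken from SL):

```python
# Minimal glTF 2.0 PBR material, expressed as a Python dict for
# illustration; field names follow the core specification's
# pbrMetallicRoughness model (all values here are illustrative).
material = {
    "name": "brushed_metal",                      # hypothetical material name
    "pbrMetallicRoughness": {
        "baseColorFactor": [1.0, 1.0, 1.0, 1.0],  # RGBA multiplier
        "baseColorTexture": {"index": 0},         # refers to textures[0]
        "metallicFactor": 1.0,                    # 0 = dielectric, 1 = metal
        "roughnessFactor": 0.4,                   # 0 = mirror, 1 = fully rough
    },
    "normalTexture": {"index": 1},                # tangent-space normal map
    "emissiveFactor": [0.0, 0.0, 0.0],
}
```

This is the structure the project's in-viewer materials editor and uploader work against, which is why individual materials can be pulled out of a glTF file and traded as assets.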
Bugs and regressions continue to be reported via the project viewer on Aditi, and it is now unlikely the project will advance to a release state before early 2023 (end of 2022 always seemed ambitious).
Despite hopes to the contrary, a lot of legacy content is impacted when rendered via the glTF PBR path, largely as a result of using linear alpha blending.
As a result, LL are working on ways to smooth out some of the edge cases without having to introduce a code fork in the viewer between “legacy” alpha handling and PBR alpha handling.
The aim remains to preserve as much of the appearance of legacy content under PBR rendering and reflection probes as possible, without necessarily being slaved to preserving its looks over time. The major exception to this would be situations where PBR rendering reveals seams between layers.
However, if it proves necessary, an “opt out” button will be provided to switch out of linear alpha blending.
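As an illustration of why linear alpha blending changes the look of legacy content: blending a mid-grey over black at 50% alpha gives different results depending on whether the blend is done directly on sRGB values (the legacy approach) or in linear space, as the PBR path does. A minimal, dependency-free sketch using the standard sRGB transfer functions:

```python
def srgb_to_linear(c):
    # Standard sRGB electro-optical transfer function
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend(src, dst, alpha):
    return src * alpha + dst * (1.0 - alpha)

# Mid-grey over black at 50% alpha:
legacy = blend(0.5, 0.0, 0.5)  # blend sRGB values directly -> 0.25
pbr = linear_to_srgb(blend(srgb_to_linear(0.5), srgb_to_linear(0.0), 0.5))
# pbr comes out noticeably brighter (~0.36), which is why legacy
# alpha-blended content can look different under the PBR path.
```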
mikkTSpace tangents affect all objects rendered in the PBR viewer. Unfortunately, due to the way mesh data has been stored prior to the PBR viewer, creators wishing to get exactly the same results in their mesh models as they saw in their tool of choice (e.g. Blender) when building their models will need to re-upload those models. This might result in a slight increase in the object’s Land Impact, if done.
This does mean, however, that all tangents on uploaded meshes will be correctly handled going forward.
Capabilities will change with the PBR viewer: for example, there will be a basic glTF materials editor within the viewer; textures will rotate from a corner, rather than the centre (per the glTF specification extension for texture transforms); and the uploader will allow for the upload of individual materials from a glTF file.
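The corner-versus-centre difference comes from the glTF texture transform extension, which rotates UVs about the texture origin rather than the centre. A small sketch of how the same rotation lands in different places depending on the pivot:

```python
import math

def rotate_uv(u, v, angle, pivot=(0.0, 0.0)):
    """Rotate a UV coordinate about a pivot. The glTF
    KHR_texture_transform extension rotates about the texture origin
    (a corner), whereas legacy SL rotation pivots on the centre."""
    pu, pv = pivot
    du, dv = u - pu, v - pv
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    return (pu + du * cos_a - dv * sin_a,
            pv + du * sin_a + dv * cos_a)

quarter = math.pi / 2
corner = rotate_uv(1.0, 0.0, quarter)              # glTF-style corner pivot -> (0, 1)
centre = rotate_uv(1.0, 0.0, quarter, (0.5, 0.5))  # legacy centre pivot -> (1, 1)
```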
Bakes on Mesh and Materials
Providing materials support to Bakes on Mesh has been a long-standing request which has been thus far resisted by the Lab on the grounds of the impact it would have on the Bake Service – both in terms of code updates and the potential number of servers used.
However, at the CCUG, Runitai and Vir floated the idea of materials on Bakes on Mesh being added – but only for the PBR rendering path. This would “drastically” cut down on the amount of back-end work required to make materials on BOM possible, and would mean that all avatar wearables would be Materials-capable.
This is not something that has currently been road-mapped for implementation at any time by LL, and it would require time and effort to determine a mechanism to manage it, but if the support were there for it, it is something that might be considered.
If this work was carried out, it might also pave the way for terrain painting – another popular request.
As PBR Materials is released, it will become the focus for SL going forward; whilst efforts will be made to ensure “pre-PBR” content continues to look right, whether or not work is put into trying to “pull” legacy materials content into PBR is questionable, simply because of the layered complexity of the underpinning code, the updating / altering of which can result in content breakage.
In Brief
As noted in the summary of the last meeting, there are reports that the PBR project viewer is generating some 10% higher CPU temperatures and 17% higher GPU temperatures.
LL are working on updates they hope will see any GPU increases return to the levels of the current release viewer.
Issues with CPU temperature are not believed to be related to PBR, but more to general texture rendering, where a couple of errors have crept in and are being actively corrected.
LODs and Land Impact:
Lower-value LODs on models: some creators attempt to “game” Land Impact by having extremely low-detail meshes as their low-end LOD models, thinking they are “never seen”; unfortunately, these are seen by people running SL on very low-end machines, and thus can account for new users feeling SL “looks rubbish” due to the official viewer defaulting to a RenderVolumeLODFactor of 1.25.
The RenderVolumeLODFactor setting will be more dynamic by default in the upcoming Performance Floater / Auto FPS viewer, although extremely low LODs on models should still be avoided.
More broadly, LL is still considering how to better adjust the Land Impact system and the mesh uploader so that creators are not penalised for creating accurate LODs on their models (one of the goals originally stated for project ARCTan).
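As a rough illustration of how RenderVolumeLODFactor interacts with LOD swapping: the distance at which the viewer drops to a lower LOD scales with the object’s bounding radius multiplied by the factor, so a low factor means the low-end models are shown much sooner. The thresholds and formula below are assumptions for illustration only; the real viewer calculation is more involved:

```python
def lod_tier(distance, radius, lod_factor=1.25):
    """Illustrative only: pick a LOD tier (0 = High ... 3 = Lowest) for
    an object of a given bounding radius at a given camera distance.
    The point is the proportionality: switch distances scale with
    radius * RenderVolumeLODFactor, so a low factor drops to the
    low-detail models much sooner."""
    switch = radius * lod_factor                   # base switch distance
    thresholds = [switch, switch * 4, switch * 8]  # High->Med->Low cut-offs (assumed)
    for tier, limit in enumerate(thresholds):
        if distance < limit:
            return tier
    return 3  # beyond all thresholds: Lowest LOD
```

With a 5 m-radius object at 100 m, the default factor of 1.25 already shows the Lowest LOD, while a much higher factor would still show a mid tier; hence “gamed” low-end LODs are very visible to users on default settings.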
Linden Lab will be closed over the holiday period from end of business on Friday, December 23rd, 2022 through until start of business on Tuesday, January 3rd, 2023 (except for urgent support cases).
My audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, November 17th, 2022 at 13:00 SLT.
My audio recording and chat log from the Third-Party Viewer Developer (TPVD) meeting held on Friday, November 18th, 2022 at 13:00 SLT. Pantera attended the meeting, but was unfortunately held up beforehand, so around the first 10 minutes are absent from her video (embedded at the end of this summary).
Both meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.
This is a summary of the key topics discussed in the meeting and is not intended to be a full transcript.
The following reflect the list of current official viewers available through Linden Lab.
Release viewer: version 6.6.7.576223 – MFA and TOS hotfix viewer – November 1.
Release channel cohorts:
Maintenance P (Preferences, Position and Paste) RC viewer version 6.6.8.576431 on Monday, November 14.
VS 2022 RC viewer, version 6.6.8.576310, issued November 4 – utilises Visual Studio 2022 in the Windows build tool chain.
Project viewers:
PBR Materials project viewer, version 7.0.0.576331, issued on November 3.
This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
Puppetry project viewer, version 6.6.3.575529, issued on October 12.
Performance Floater / Auto-FPS project viewer, version 6.6.5.575378, October 4.
Love Me Render (LMR) 6 graphics improvements project viewer 6.6.2.573263, July 21.
General Viewer Notes
There are unlikely to be any major changes to the list above in week #47, as this is a short working week for the Lab due to the Thanksgiving holiday (Thursday / Friday).
Github Changeover and Streamlining the Code Contributions Process
Github Work
As previously announced, there is an initiative to improve continuous update integration in the viewer and improve the viewer deployment process.
For TPVs and developers, the most visible aspect of this is moving the viewer repositories from BitBucket to Github. This includes the viewer code base and the other public code bases currently in BitBucket (Autobuild, LLCA, etc.).
There is still no firm date as to when the actual switch-over to the new repositories will occur, but the viewer development team is working steadily towards it, and the plan remains to provide TPVs with plenty of advance warning before making a clean cut-over.
Status
The switch-over to using Github is now slated for “early in the morning” (PST / SLT) on Monday, November 21st, 2022.
This means that from the point of switch-over, the Bitbucket repositories will be locked and carry a warning that they are no longer current, and developers / viewer builders should use the GitHub repositories, as directed to in the warning.
The master viewer branch is already up-to-date on GitHub as of Friday, November 18th, 2022.
Notes on the switch-over were posted to developers on the open-source developers list.
There is no plan in place to phase out the Bitbucket repositories immediately, but they may well be removed in time.
PBR: Materials and Reflections
Project Summary
The viewer is available as a project viewer via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid): Materials1; Materials Adult and Rumpus Room 1 through 4.
The focus is currently on reviewing bugs reported by those testing the viewer, and creators using glTF PBR workflows to create content are encouraged to test out their content on the project viewer to ensure things look as expected when imported into SL.
One aspect of SL that does require testing is EEP assets; changes have had to be made to the shaders which mean that commercial EEP settings may not render as expected (things like haze density may be modified, etc.).
EEP creators may therefore want to test their settings using the project viewer and raise Jira issues – please include “non-PBR viewer” shots of how the settings should look alongside shots taken with the PBR project viewer, for easier comparison / understanding of the issues seen.
Issues with alpha blending are being investigated – particularly linear space alpha blending and how best to carry forward “pre-PBR” alpha blending, which does not use linear space.
It has been reported that the PBR project viewer is generating some 10% higher CPU temperatures and 17% higher GPU temperatures – this may be the result of the viewer having to work that much harder with PBR in order to maintain frame rates.
Reflection probes: concern was again raised over people excessively using probes (e.g. attaching them to all their products / objects they own).
Providing a means for users to disable probes attached to objects they purchase has been previously discussed within the project / CCUG meetings.
It is hoped the latter (people just randomly setting up probes all over their house / land) can be mitigated through a mix of education and explicit warnings / actions in the viewer.
Concern was also raised that localised reflection probes (essentially an invisible prim) could interfere with intended interactions (e.g. a probe enclosing a chair could block people’s ability to sit on the chair). If cases like this are found in testing, a request has been made for bug reports.
The PBR viewer does include a Build / Edit option to “ignore invisible prims”, so that when building / editing, an attempt to select an object within the volume of a reflection probe prim will ensure the object is selected, not the probe prim; however, it is acknowledged that this kind of operation needs to be better integrated with other mouse click options.
It was noted that this project also marks the start of the deprecation (and eventual removal) of some real-time console performance reports, such as Fast Timers.
It has been reported that some of those testing the PBR viewer are finding that, within the texture console, Bias is constantly being reported at 5 (which tends to cause texture thrashing in non-PBR viewers). However, within the PBR viewer a Bias of 5 should indicate that the viewer is swapping to – and keeping – a lower texture resolution, and so should not result in any texture thrashing.
Screen Space Reflections (SSR)
LL does have a “subtle” SSR system working to replace Linden water reflections as currently rendered.
There is some further work to be carried out before it is “viewer-ready”, with the focus being on ironing out the bumps and cleaning-up bugs.
It was made clear that while this implementation of SSR will be applicable to scenes as a whole, it is not intended to be a replacement for creating mirrors – and so expectations should be set accordingly.
CCUG Meeting Specific Notes
The majority of the meeting was given over to a general discussion, per the cliff-notes below:
VR hardware and Second Life:
While frame rates are not so much an issue now in broad terms, there is still the need to get a high frame rate on a consistent basis for VR viewing to be enjoyable – and this is still not a given with SL.
However, VR isn’t just about headsets; it’s about the “full” experience of using a range of associated peripherals: controllers, haptic gloves, the ability to have the “full-body experience” that VR users would want and expect, such as seeing your hand and arm when going to pick something up, or being able to grip an object in your hands and feel responses from it when using it (e.g. the strike of a blade against another, or a baseball bat against the ball, etc.).
The level of support for this more complete sense of immersiveness requires some extensive re-engineering of Second Life (e.g. facilitating a full and proper Inverse Kinematics system as a single example).
The Puppetry project may facilitate some work towards this. However, the focus of this project is not on providing any form of in-depth VR support (it is intended to work with a broader range of peripherals, notably webcams, although use of some VR software support via OpenVR is being considered), being intended for use by as broad a cross-section of SL users as might be interested in it.
Another problem is determining what people want from VR: is it purely a mechanism for games, or does it need to be capable of “realistic” social interactions? (Some do still proclaim games, others demand social capabilities; the truth is probably more a blending of the two, just like life in the physical realm.)
New users and their experience:
Much was made of the need for a completely new avatar system (LL is actually working on a new all-mesh “starter avatar”, but using the existing skeleton) – but overhauling / replacing the current avatar system raises its own compatibility / market issues.
A fair point was raised by LL concerning the use of eviction / ban scripts that specifically ban avatars that are less than X days old and / or avatars that do not have Payment Information On File.
These tend to get used at event spaces etc., to try to prevent griefers / trolls using throwaway accounts from accessing the space, rather than using other available moderation tools (which, admittedly tend to require human intervention with the right permissions, which might not always be available).
The problem here is that a “genuine” new user will often sign up, go through the new user experience, then try to go to an event – only to immediately find they are denied entry or are ejected without warning or explanation; as a result, they log out.
Improved tools to handle griefing and trolling are likely required – but there was no discussion on what form these might take.
Audio: a general conversation on the potential of “full” spatial audio, including directional (e.g. so an audio stream at a disco appears to be coming from the speaker system in the venue) and also sound volumes.
Nothing is currently planned for audio on the roadmap, but it was noted that the first test of using volumes (reflection probes) is in Aditi testing, and LL will obviously be seeing how well that works.
It was made clear that scripts will “NEVER” be allowed to create new assets (so no ability to generate notecards from a script). One of the reasons cited for this is the cost of storing assets if there is an uncontrolled ability to generate them through scripts (particularly given SL’s asset count is already in the petabytes range in terms of storage).
TPVD Meeting Specific Notes
Atlassian Jira and Bug Reporting
Atlassian has announced it will be restructuring how it licenses the Jira bug reporting product from 2024 onwards. Currently, LL can run a public-facing Jira reporting system that effectively allows every Second Life user to create an account on it to view all the public SL bug reports and feature requests.
Under the 2024 pricing restructure (which will be pretty much per-account), this will be prohibitively expensive for LL, so they are starting the process of looking for an alternative means to provide some form of user-facing bug reporting mechanism.
One option being considered is to continue to use Jira internally (where the number of licensed accounts can be controlled), and use a different mechanism for public bug reporting, with a bridge between the two.
Another is to move away from Jira entirely.
No decision has been made as yet, but as Firestorm also use Jira, the suggestion has been made that the two develop a joint plan to resolve the situation in terms of future tools.
The topic led to a general discussion on possible options, but no conclusions were drawn – and there is sufficient lead-time on the matter for various options to be looked into, allowing for the time required to transition to any alternative that might be selected / finding the means to keep the wealth of information on the SL Jira available for reference.
General Notes
There is a report that Active Voice on at least some viewers is not updating correctly (e.g. following a teleport), and it is proving easier to enable / disable Voice morphing via the menu than to disconnect / reconnect Voice in order to correct it.
The issue here is not so much which is the preferred method to correct the problem, but whether the Active Voice list failing to update is a widespread issue with viewers, and whether it can be reproduced on the official viewer & a bug report raised.
Beq Janus has written a post on alpha blending issues (see: Alpha blend issues? Get them sorted – subtitled Making it easier to avoid alpha-clashes with Second Life and OpenSim outfits). Rather than have me decimate the discussion, please refer to the video at 35:56 through video end for more.
Puppetry demonstration via Linden Lab – see below: a demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”
The following notes have been taken from chat logs and audio recording of the Thursday, November 10th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).
Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.
Project Summary
Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
Note that facial expressions and finger movements are not currently enabled.
Most movement is in the 2D plane (e.g. hand movements from side-to-side but not forward / back), due to limitations with things like depth-of-field tracking through a webcam, which has yet to be addressed.
The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
Nothing beyond the project viewer is required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
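For those curious about the plumbing: LEAP plug-ins talk to the viewer by exchanging length-prefixed serialised messages over the plug-in’s stdin / stdout. The sketch below mimics that framing, using JSON purely as a dependency-free stand-in for the real LLSD notation serialisation; the pump name and field names are assumptions, not the actual puppetry protocol:

```python
import json

def leap_frame(payload: dict) -> str:
    """Wrap a payload in a LEAP-style length-prefixed frame.
    Real LEAP serialises LLSD notation over the plug-in's stdout;
    JSON is used here purely as a dependency-free stand-in."""
    body = json.dumps(payload)
    return f"{len(body)}:{body}"

# Hypothetical puppetry update asking the viewer to move the left wrist;
# the pump name and field names are assumptions, not the real protocol.
message = leap_frame({
    "pump": "puppetry",
    "data": {"command": "move", "mWristLeft": {"position": [0.2, 0.1, 0.9]}},
})
```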
Bugs, Feature Requests and Code Submissions
For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
The puppetry team is working on updating the viewer and LEAP plug-in, and an update to the project viewer is liable to be released in week #46.
This viewer includes:
The ability to move the avatar pelvis.
Ability to stretch other bones – although this is awaiting testing at the time of writing. However, the reference frame scale is that of the normal puppetry targets, so you would have to scale the data correctly; therefore additional work is required to provide a way for the plug-in to get the data necessary to know how to scale individual joint bones (e.g. change their parent-relative positions).
It still won’t be possible to clear puppetry target / config data, which remains on the team’s “to do” list.
Aura Linden noted that the new LEAP module initialises on-demand rather than via instantiation (as with puppetry). LL will provide demos of using the new module.
Kinect v2 Support
Simon Linden has been working on an experimental plug-in taking inputs from a Kinect v2 device.
He describes the code as being “pretty rough” and using only basic geometry, but it allows avatar elbows / arms to be moved around.
This work in part utilises the data syntax described in OPEN-366 “Simplify Puppetry Configuration Through LEAP”, the new protocol proposed by Leviathan Linden as per previous meeting notes.
The code is not ready to be pushed to a public branch as yet, and doing so is somewhat dependent on feedback from developers / creators.
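“Basic geometry” here could be as simple as deriving joint angles from the skeleton positions a Kinect reports. A hypothetical sketch of that idea (the joint layout and mapping are assumptions, not Simon Linden’s actual code):

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by points a-b-c, e.g. the
    elbow angle given shoulder, elbow and wrist positions from a
    Kinect-style skeleton. Pure basic vector geometry, much as a
    rough skeleton-to-puppetry mapping might use."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Shoulder above the elbow, wrist straight out: a right-angle bend.
bend = joint_angle((0, 1, 0), (0, 0, 0), (1, 0, 0))
```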
Avatar Constraints / Interactions
OPEN-368 “[Puppetry] [LEAP]: Location Constraints” – LL have indicated there is “much” within this Jira they would like to support “eventually”.
The feeling at the Lab is that constraints can “definitely” be improved – although what this may look like has yet to be properly determined. However, the general feeling is that there should be constraint data associated with a given skeleton, for example, so we’re not just imposing a human-centric model on the SL avatar.
A good portion of the meeting was given over to a general discussion of how best to handle puppetry and avatar animations – and the potential to need to move away from canned animations and provide a more direct means of avatar animation.
Avatar-object interactions are a potentially complex issue (e.g. how can an avatar accurately take and hold an in-world object – say, an apple – through puppetry? If the apple is a physical object, does it collide when held? Does it become an attachment? If the latter, how is this registered, and how is it properly released from the attachment system? etc.).
A suggestion for handling avatars holding objects is to have some form of temp-attach system, or to use a key frame motion (KFM) system to match an object’s position to the avatar’s hand, allowing the avatar to hold the object without directly “owning” it (thus also avoiding permission system issues).
Collisions also raise questions: avatar arms currently do not collide, and so would not under puppetry. So what about cases of simple interactions – flicking a light switch or similar? These are not “proper” collisions per se, but are rather event-triggered; how can this be managed if there is no actual collision between the scripted object and an avatar’s arm / hand to trigger the associated event?
In Brief
It has been suggested that a version number be included in puppetry-related messaging, so that changes to message formats are not read by versions of the viewer unable to parse them, thus reducing the risk of crashes during development / testing.
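The suggestion amounts to a simple guard when decoding: carry a protocol version in each message and have viewers skip anything newer than they understand. A minimal sketch (the constant and field names are assumptions, not the actual protocol):

```python
PUPPETRY_PROTOCOL_VERSION = 2  # hypothetical current version

def decode_update(msg: dict):
    """Drop puppetry messages written in a newer protocol than this
    viewer understands, rather than mis-parsing them and risking a
    crash during development / testing."""
    if msg.get("version", 0) > PUPPETRY_PROTOCOL_VERSION:
        return None  # silently ignore newer-format messages
    return msg.get("data")
```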
It has been indicated that puppetry will eventually have LSL support for LEAP, although what form this will take and how the simulator will track things is still TBD, as currently animations are entirely viewer-side and untracked by the simulator.
There is concern that the potential of the puppetry project isn’t being fully understood by creators (and others), as it is being seen more as a “VR thing” than an ability to much improve avatar animations and their supporting systems / constraints, including the IK system.
How to manage network latency also formed a core discussion, together with making better use of the Havok physics sub-licence to allow the viewer to do a lot more of the work, and simply stream the results through the simulator to other viewers.
The following notes have been taken from chat logs and audio recording of the Thursday, October 27th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).
Bugs, Feature Requests and Code Submissions
For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
All of this work will include the ability to move the pelvis and the new Puppetry LEAP API, but will not include the ability to clear a Puppetry joint setting.
OpenXR Support
Leviathan Linden asked for feedback on what the requested “OpenXR support” means to those requesting it – e.g. is it to run an OpenXR app and have a VR experience in SL, or to run an OpenXR app as a plug-in to provide measurement input to Puppetry?
The general response was a mix of both:
To generally provide the means for “proper” hardware support for motion capture such that puppetry isn’t just a “best guess” response via a webcam
To allow for more accurate interactions between avatars and objects, eventually moving to provide full support for VR headsets and controllers (requiring the ability to interact with scripted devices, operating levers, controls, etc., which could be correctly interpreted and acted upon by said scripts).
Currently, LL are more willing to consider OpenXR support as a part of the Puppetry work whilst regarding it as a potential step towards wider VR support in SL in the future.
Avatar Constraints / Interactions
The above question led to a broader discussion on avatar-to-avatar and avatar-to-object interactions starting with the avatar constraints / collision system.
As they are right now, avatar constraints and collisions within SL have been adequate for the platform, but lacking (collisions, for example, have no concept of the avatar’s arms / legs, limiting interactions with them and other objects).
OPEN-368 “[Puppetry] [LEAP]: Location Constraints” is a feature request outlining the benefits of overhauling the SL avatar constraints system to allow better interactions with objects, etc. This is currently open to those wishing to add further comments and feedback.
The question was raised as to how “fast” / reliable the required communications (including all the required bone interactions) could be made in order to ensure adequate / accurate response times with actions (e.g. so when shaking hands, the hands of each avatar arrive at the same point at the same time, and are seen as shaking in both viewers).
Also discussed was determining how “reactions” might best be defined – could it be as “simple” as a pre-set animation?
One issue with this – interactions, OPEN-368, etc. – is that direct hooks from Puppetry to LSL had been seen as outside the scope of the project, simply because puppetry and the LEAP API are entirely viewer-side, while LSL is simulator-side. However, the discussion opened a debate on whether some means for this interaction should be provided, with two options being put forward:
Broadening the LEAP protocol, essentially using it to make the viewer scriptable with plug-ins that run on their own threads.
Providing a specific LSL function that would enable LSL to be able to communicate / interact with the LEAP protocol / JSON (as is the case with the RLV / RLVa APIs used by some third-party viewers).
Both of these approaches were seen as potentially “doable”, if beyond the intended scope of the puppetry project.
A further issue with interactions and bone tracking (which would be required for accurate avatar-based interactions) is that bone tracking via LSL is at best limited, at worst non-existent; this raised the subject of possibly using attachment points as a proxy.
An additional problem here is whether or not it is possible to track the location of the attachment points in 3D space relative to any animation the avatar is playing (e.g. if an animation causes the avatar to raise their arm, is it possible to check the position of the wrist point)? This is currently something of an unknown, as it would either:
Require the simulator to inject a lot of additional calculations for joint and attach positions;
Or require a new (optional) protocol where the viewer would just supply its in-world positions at some frame rate – which would require some calculation overhead on the part of the viewer;
Or – given work is in hand to add the in-world camera position relative to the viewer, and also the avatar's world orientation and look-at target – provide a straight dump of the animation mixdown together with the skeleton data, enabling the processing to be carried out in a module rather than the viewer.
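The calculation at the heart of the options above – working out where an attachment point ends up once animations are applied – is a standard forward-kinematics problem. The sketch below illustrates the idea with entirely hypothetical joint data and names; it is not LL code, and the real avatar skeleton uses full 3D rotations rather than the single-axis rotation used here for brevity.

```python
import math

# Minimal forward-kinematics sketch (hypothetical data, not LL code):
# given parent-relative joint offsets and rotations, accumulate the
# transforms to find where a leaf point (e.g. the wrist) ends up in
# avatar-local space.

def rot_z(angle):
    """Rotation about the Z axis, as a 3x3 row-major matrix."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_vec(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def joint_world_position(chain):
    """chain: list of (parent-relative offset, z-rotation) pairs from
    root to leaf. Returns the leaf position after accumulating each
    joint's transform."""
    pos = [0.0, 0.0, 0.0]
    rot = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # accumulated rotation
    for offset, angle in chain:
        step = mat_vec(rot, offset)
        pos = [p + s for p, s in zip(pos, step)]
        rz = rot_z(angle)  # compose: rot = rot @ rot_z(angle)
        rot = [[sum(rot[r][k] * rz[k][c] for k in range(3))
                for c in range(3)] for r in range(3)]
    return pos

# Shoulder -> elbow -> wrist, with the shoulder rotated 90° (arm raised):
arm = [([0.0, 0.0, 1.4], math.pi / 2),   # root to shoulder, then raise
       ([0.3, 0.0, 0.0], 0.0),           # upper arm to elbow
       ([0.25, 0.0, 0.0], 0.0)]          # forearm to wrist
print(joint_world_position(arm))
```

Either the simulator or the viewer would need to run this kind of accumulation for every tracked joint, which is where the per-frame calculation overhead mentioned above comes from.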
As a result of these discussions, time has been requested to investigate the various options (which will likely include a determination of what, if anything, is to be included in the current project in terms of these additional capabilities).
The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, October 20th 2022 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar.
Work is now entering a “public alpha” mode which will commence with the release of a formal project viewer, in order to put the viewer before a wider group of creators and obtain their input / feedback in addition to that gained during the “closed alpha” with those on the Discord channel.
This project viewer could be available in the next week(ish), subject to QA testing.
It is hoped this "alpha testing" will show that minimal changes to behaviour / functionality are required, but it will allow LL time to gather broader stats on performance (particularly on hardware specs they have not been able to test), and allow for additional tuning and optimisation.
LL are aware there are some performance regressions within the viewer (together with some issues with AMD drivers) that need to be addressed before it can proceed beyond project viewer status, but these are not seen as blockers to this wider testing, as they can be noted within their viewer release notes.
It is also noted that permissions will be "a little bit broken" to start with, and some edge-case rendering may appear broken – but issues like this will be documented in the project viewer release notes.
Once these issues have been ironed out, it is anticipated that there will be no FPS loss when comparing the PBR Materials viewer to the current release viewer.
Following this "open alpha", the viewer will move to "beta mode" (presumably as it reaches RC status), when the focus will be on bug fixes rather than adding / changing features; as such, the project viewer phase is seen as crucial in obtaining feedback encompassing things like file formats, functionality, and concerns over changes to the rendering pipe.
Reflection Probes
LL is concerned that reflection probes may prove to be an Achilles' heel for the new capabilities, simply because the technique for using them, as adopted by the industry as a whole, is somewhat counter-intuitive.
The core issue is getting people's heads around the idea that the "shiny thing" is not itself the probe; rather, the probe must be correctly set, including its influence volume.
Typical documentation on setting up reflection probes, such as that provided by Unreal and Unity (both of which share the same concepts around the influence volume as the SL implementation, although the latter has fewer parameters), doesn't always make for easy reading.
The influence volume is a particular issue for SL due to the use of the automatic probes which exist within a region, and which are unlikely to always be in the right place, so may need to be countered by manually-placed probes.
One significant concern arising from this is that content will be sold with its own reflection probe – which is precisely the wrong thing to do, as it could lead to dozens of probes in a single room, each with its own influence volume eating into viewer resources, when a single probe for the room as a whole would achieve the same results.
A way around this would be to offer the means to disable the reflection probe as supplied within individual objects sold by creators.
However, this then raises the question of what happens if said content is sold No Mod – blocking any ability to disable the associated reflection probe. Would it be sufficient to let market forces determine the success of such items?
No definitive solution for this has been determined – other than the need for full and proper education for creators (with documentation), both of which assume creators will a) want to be educated; b) RTFM. The meeting drew to a close whilst this was still under discussion, so the topic will likely be picked up at the next meeting.
Forward Rendering + Deprecating OpenGL 2.0
As has been previously stated by LL (and covered in these CCUG summaries), a key change the PBR work will bring to the viewer is the removal of the forward rendering pipe – otherwise known as "disabling ALM".
To help with this, LL have implemented additional optimisation within the PBR viewer designed to help low-end systems perform better with ALM (Advanced Lighting Model) always on, and more optimisations are to be added to the viewer.
However, one of the areas of feedback the Lab would like to obtain during the "alpha testing" is how well low-end hardware – which they may not have been able to test themselves – functions when running the PBR viewer.
Obviously, removal of forward rendering will mean the final end to invisiprims, which have for the last 10+ years required ALM to be turned off in order to mask other elements (e.g. system body parts, Linden Water, etc.).
It is acknowledged that forward rendering currently offers better anti-aliasing than ALM, but a means to improve this under ALM is being looked at.
The PBR Materials viewer will effectively see support for OpenGL 2 finally deprecated in the viewer, the minimum requirement being OpenGL 3 (OpenGL 4 being required for reflection probes).
This is thought to impact around 0.5% of all Windows / Linux users (stats are still collected for Linux), and many of those might be bots utilising older clients, with the majority of users – even those on elderly hardware – trending towards OpenGL 4 (which has been available for around a decade).
Given basic GPU cards such as the Nvidia GTS 250 (introduced in 2009) and many laptops with integrated graphics can support at least OpenGL 3, it is hoped that even those still running OpenGL 2 will be able to update to OpenGL 3 to avoid issues.
If you are concerned about your hardware supporting OpenGL 3, go to Help → About in your viewer, and you can check the OpenGL version you are running from there – most people will likely find they are running a version of OpenGL 4. Those on Mac OS systems can also use this option to check.
The testing period will be long enough for LL to determine whether they need to address the lack of OpenGL 2 support in future iterations of the viewer.
Alpha Modes
With the PBR / Materials viewer, alphas are (as previously reported) moving to linear colour space, which may cause problems for some pre-PBR content.
This has not been an issue for those creators working within the closed Discord channel group using pre-release viewers (particularly given some TPVs already use linear colour space), but LL expects fuller feedback on the move once the project viewer is available on a much broader basis.
If the move doesn't work for pre-PBR content with alphas, the result might be that legacy content and PBR content will not alpha sort against each other, forcing an artificial sorting (e.g. sort rigged mesh alphas first, then sort non-rigged alphas over them).
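For those unfamiliar with why the colour space matters here: sRGB and linear values differ non-linearly, so blends (such as alpha compositing) give visibly different results in the two spaces. The sketch below shows the standard sRGB transfer functions (per IEC 61966-2-1); it illustrates the general concept only, and says nothing about the specifics of SL's rendering pipeline.

```python
# Standard sRGB <-> linear transfer functions (per IEC 61966-2-1).
# Illustrates what "moving to linear colour space" means in general;
# SL's exact pipeline details are not specified here.

def srgb_to_linear(c):
    """Convert a single sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transform: linear light back to sRGB encoding."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A mid-grey of 0.5 in sRGB is only ~0.214 in linear light, which is
# why blending legacy (sRGB-authored) alpha content in linear space
# can shift its appearance.
print(srgb_to_linear(0.5))   # ~0.214
```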
The general-purpose import library under discussion was described as "cool but bloated", and "great when you want broad spectrum asset import and don't really care about doing it super well for any particular format, not so great when you want to support one thing really well."
As such, LL would rather focus on supporting just glTF mesh; as part of this, as the viewer progresses and glTF mesh import is added, support for importing COLLADA files will likely be discontinued in the official viewer, and creators will be pointed to a COLLADA-to-glTF converter.
Terrain textures: LL is still looking at options for a future project to improve terrain textures.
One user-generated suggestion that has been accepted for consideration is BUG-232769.
The Graphics Team are also looking at materials paint maps for terrain.
If this approach were to be adopted, it might comprise a set of four materials per parcel, which could be painted onto the terrain "at some fidelity" (e.g. around 1/4 m or so), over the existing elevation-based texturing.
This would allow a parcel owner to, for example, "paint" something like a cobblestone path through a part of their parcel, or add high-quality grass for a lawn, etc.
In particular, it could enable creators to provide “terrain ready” materials packs others can purchase and use in their parcels.
Note that currently, these are ideas. There is no actual project for revamping terrain textures in progress at the moment.
It was reiterated that there is an internal move to update the Second Life minimum system requirements to reflect the versions of required APIs which must be run on client hardware rather than specifying specific hardware (CPU, GPU, memory, etc.) as is currently the case.
The hoary old complaint of aviators having to fly through parcels with different EEP settings was again raised as an “issue” (while not perfect, applying a preferred EEP setting to your avatar – Fixed Sky or Day Cycle – still seems the easiest option to overcome this rather than working to re-jig the viewer as a whole).
A reminder: Local KVP / "Linkset Data" should be deployed to the simhost RC channels in week #43. I'll have a post out on this during that week.
When testing this capability, it is important to note that data committed to a linkset will not survive the move of the linkset to a simulator channel that does not have the necessary support for the capability; ergo, all testing must take place within the defined RC channels following the initial deployment.
There will be a special article on this capability in these pages once it has rolled to RC.
Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment: "We have some basic things working with a webcam and Second Life, but there's more to do before it's as animated as we want."
The following notes have been taken from chat logs and my audio recording of the Thursday, October 13th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).
Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.
Project Summary
Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
Note that facial expressions and finger movements are not currently enabled.
Most movement is in the 2D plane (e.g. hand movements from side-to-side, but not forward / back), due to limitations with things like depth-of-field tracking through a webcam, which have yet to be addressed.
The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
Nothing special beyond the project viewer is required to "see" Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
Bugs, Feature Requests and Code Submissions
For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
New Viewer Version – 6.6.3.575529 Dated October 12th
This viewer uses a different, more efficient data format for sending updates up to the region, and from the region to viewers.
The new and old formats and viewers are not compatible; someone on the new project viewer will be unable to see puppetry rendered for someone using the older viewer version, and vice-versa.
It is hoped that severe breakages between viewer versions like this will be avoided going forward, but this change was deemed necessary.
This viewer also includes a crash (deadlock) fix, and puppetry animations should now fade in / out when starting or explicitly stopping (animations may stop abruptly should the LEAP plug-in crash, the data stream be lost, etc.).
Those self-compiling viewers with the puppetry code should ensure they are pulling the updated code from the 6.6.3.575529 (or later, as new versions appear) repositories.
Protocol Overhaul
Leviathan Linden noted the project team is going to overhaul the Puppetry / LEAP protocol.
The intent is to replace all the current LEAP commands ("move", "set_this", "set_that", etc.) with just two commands: "set" and "get".
On the “set” side:
It will be possible to set avatar joint transforms, or specify IK targets, and also set various configuration settings as necessary.
These set commands will be “incremental” in nature (so that changes can be made to reach the final state), and once set, they stay at the defined value until modified, cleared, or the plug-in “goes away”.
On the “get” side:
get_skeleton and any other get_foo commands (if used) will be replaced with {get: [skeleton, foo, …]}.
A message will be generated and sent back in response to the get request, but the form of the message is still TBD.
Meanwhile, the viewer will only do IK for your own avatar, and will transmit the full parent-relative joint transforms of all puppeted joints through the server to other viewers, and LL will make it possible for a plug-in to just supply full parent-relative joint transforms if desired (e.g. no IK, just play the data).
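To make the two-command idea concrete, the sketch below shows what such messages might look like. The "get" form matches the {get: [skeleton, foo, …]} shape given in the meeting; everything else – the field names, the joint name, the "set" payload layout – is purely hypothetical, since the message forms are explicitly still TBD.

```python
import json

# Hypothetical sketches of the proposed two-command protocol. Only the
# overall "set" / "get" split comes from the meeting; all field names
# and payload layouts below are illustrative assumptions.

# A "set" might carry incremental joint transforms and/or IK targets,
# which persist until modified, cleared, or the plug-in goes away:
set_msg = {
    "set": {
        "joints": {"mWristLeft": {"rot": [0.0, 0.0, 0.1, 1.0]}},
        "ik_targets": {"left_hand": {"pos": [0.3, 0.2, 1.1]}},
    }
}

# A "get" lists everything being queried in one message, replacing the
# individual get_skeleton / get_foo commands:
get_msg = {"get": ["skeleton", "camera"]}

print(json.dumps(set_msg))
print(json.dumps(get_msg))
```

The appeal of this shape is that new capabilities become new keys inside "set" / "get" payloads rather than new top-level commands, which is what makes the protocol easier to extend.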
This overhaul will also provide:
A way to move the Pelvis. This will include both a pre-IK transform (which is just setting the Pelvis transform) and also a post-IK transform, in case the avatar is to be moved after setting all the joints.
A "terse" format for the LEAP / Puppetry protocol to simplify some "set" commands and reduce the data going over the LEAP data channel. It will be possible to mix these "terse" commands with long-form explicit commands.
Leviathan plans to break all of this work down into a set of Jira issues and place them on the kanban board for ease of viewing.
The overall aim of this overhaul is to make the protocol more easily extendible in the future.
To the above, Simon Linden added:
The data stream is radically different than what we started with. Essentially your viewer will do the work for your avatar: send[ing] all data needed for your puppetry animations [so] the people seeing you just have to use those positions – no IK or significant processing. That should help out in the long run with crowds
Example Script
Simon Linden has produced a simple example script that has been pushed to the Leap repository:
It reads a JSON file and sends that puppetry data to the viewer.
Using it, it is possible to edit some values, save the JSON text file, and see the avatar's bones move accordingly.
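The read-and-send loop at the heart of such a script can be sketched as below. This is not Simon's actual script: the JSON layout, message fields, and framing here are all assumptions made for illustration (LEAP plug-ins frame messages with a length prefix, though the real serialisation is LLSD notation rather than the plain JSON used here).

```python
import json
import sys

# Illustrative sketch only (not the actual LL example script): read
# puppetry data from a JSON file and emit it as framed messages, one
# per joint. The "command" / "joint" / "data" field names are
# assumptions, as is the JSON-based framing.

def frame(payload):
    """Length-prefix a message: '<length>:<body>'."""
    body = json.dumps(payload)
    return f"{len(body)}:{body}"

def send_puppetry(path, out=sys.stdout):
    """Load a JSON file of joint transforms and write one framed
    message per joint to the output stream."""
    with open(path) as f:
        data = json.load(f)   # e.g. {"mWristLeft": {"rot": [...]}}
    for joint, transform in data.items():
        out.write(frame({"command": "set", "joint": joint,
                         "data": transform}) + "\n")
```

Editing a value in the JSON file and re-running the loop is then all it takes to see a bone move, which is the workflow described above.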
In Brief
BUG-232764 “[PUPPETRY] [LEAP] Puppetry should be able to ‘Get’ and ‘Set’ avatar camera angle” has been raised to go with the protocol overhaul, and while it has yet to be formally accepted, has been viewed as a good idea by the Puppetry team.
Puppetry does not as yet support physics feedback or collisions, and work for it to do so is not on the short list of "things to do next".
There is currently an issue of "near-clipping" when in a first-person (e.g. Mouselook) view and using puppetry (so, for example, holding a hand up in front of your avatar's face in Mouselook results in the hand being clipped and not fully rendering). This is believed to be an artefact of the viewer still rendering the head (even though unseen when in first-person view), and this interfering with rendering near-point objects like hands. The solution for this is still TBD.