2023 week 13: SL CCUG meeting summary

Celestial Glade, February 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, March 30th, 2023 at 13:00 SLT.  These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments, ask questions or respond to others, but note that you will need Voice enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.
Additional note: unfortunately, physical world matters meant I missed the initial part of the meeting, and as it is held in voice, there is little by way of chat transcript to reflect initial discussions prior to my arrival.

Official Viewer Status

  • On March 30th, the Maintenance R viewer, version 6.6.10.579060, was promoted to de facto release status.
  • On March 31st, the Maintenance S RC viewer updated to version 6.6.11.579153, bringing it to parity with the above release viewer.
The rest of the current official viewers remain as:
  • Release channel cohorts:
    • Performance Floater / Auto FPS RC viewer updated to version 6.6.10.578172, February 21, 2023.
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.578921, March 23 – This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry, they only have the materials array, and there is only one material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.
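The "materials scene with no nodes / geometry" described above can be illustrated with a minimal glTF document. This is a sketch based on the core glTF 2.0 specification; the material name and values are purely illustrative, not an LL sample asset.

```python
import json

# A minimal glTF 2.0 document of the kind described above: no scenes,
# nodes, or geometry; just a materials array containing one PBR material.
# Structure follows the core glTF 2.0 spec; values are examples only.
material_asset = {
    "asset": {"version": "2.0"},
    "materials": [
        {
            "name": "ExampleMaterial",
            "pbrMetallicRoughness": {
                "baseColorFactor": [1.0, 1.0, 1.0, 1.0],
                "metallicFactor": 0.0,
                "roughnessFactor": 0.8,
            },
        }
    ],
}

print(json.dumps(material_asset, indent=2))
```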

Status

  • The viewer is still being progressed, and will likely update to keep in line with the current release viewer.
    • There is still a bug with glow which needs to be addressed, but this is not seen as a significant issue to fix.
  • Information in support of PBR materials and reflection probes is in development. This includes new wiki pages and – at least for reflection probes, if not both – new tutorial videos.
    • A very much Work In Progress version of the wiki information can be found in: PBR Materials.
  • The methodology for modifying PBR materials via script is to have the materials contained within glTF assets which can be stored in inventory (containing an LLSD header and glTF JSON, with the texture / material UUID stored in the image URI field), with LSL APIs used to modify the parameters within the glTF file.
    • Manual modification of the parameters can be done via the viewer UI in a similar manner to manipulating textures, etc., currently offered by the viewer.
    • The LSL APIs will work in a similar manner to functions such as llSetPrimitiveParams, etc. The only thing that cannot be changed is the actual image data.
  • For testing, it was noted that “local materials” should allow materials to be tested directly from a user’s computer in a similar manner to local textures (and only visible in that user’s world view), with dynamic updating of the materials in the session as they are locally modified.
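As a rough illustration of the script-modification model described above (parameters edited inside the glTF JSON, with the texture UUID held in the image URI field), the following Python sketch parses an asset, changes one value, and re-serialises it. The function name, asset layout and UUID are hypothetical; this is not LL's actual format or API.

```python
import json

# Hypothetical sketch: update a parameter inside a glTF materials asset
# the way an LSL API might, by editing the JSON and re-serialising it.
# The texture UUID is stored in the image "uri" field, per the notes above.
def set_base_color_texture(gltf_json: str, texture_uuid: str) -> str:
    doc = json.loads(gltf_json)
    images = doc.setdefault("images", [{}])
    images[0]["uri"] = texture_uuid  # UUID in place of a file URI
    return json.dumps(doc)

asset = '{"asset": {"version": "2.0"}, "materials": [{"name": "m"}]}'
updated = set_base_color_texture(asset, "8e7a4f3c-1b2d-4e5f-9a0b-123456789abc")
```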

In Brief

  • Requests have been made for a more visual coding capability in SL to help those who are not programmers / scripters – something akin to Blueprints in Unreal Engine. While tools like Blueprints are acknowledged as being useful for putting snippets of code together, LL sees a full visual coding system as potentially too cumbersome for the results it would garner.
  • A very general discussion on games in SL.

Next Meeting

  • Thursday, April 6th, 2023.

2023 SL Puppetry project week #12 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, March 23rd, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General Project Description as Originally Conceived

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • No other special needs beyond the project viewer are required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Additional Work Not Originally In-Scope

  • Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
  • Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
  • Enhanced LSL integration for animation control.
  • Adoption of better animation standards – possibly glTF.
  • Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer Progress

  • An updated version of the project viewer is still “close” – it did not make it past QA, but it is hoped it will be available soon.
  • Leviathan Linden is working on the IK system in order to make it robust enough to handle the kind of work Rider Linden is doing. He is not yet at the point of having anything ready for delivery into a viewer; the aim is to possibly have something ready for the viewer update after the one still waiting to be cleared for project release.

Server-Side Work

  • Following the meeting, the current Puppetry test regions on Aditi were due to be updated with a simulator version which merges the server-side Puppetry code with the latest release of the simulator code.
  • Rider Linden is continuing to work on llSetAttachmentPoint, intended to allow avatars to “pick up” objects using puppetry. At the time of the meeting he was completing work on ANIM_POS/ROT_TARGET, which is intended to keep the attachment point directed at an object in-world (as opposed to a fixed location or a location relative to the avatar).
    • This uses a command to tell the viewer to carry out the necessary IK work to move an attachment point as close to a location as it can get, using the target object’s UUID / location as reported by the simulator.
    • The idea is that this functionality will work not only with hands / arms but also with other appendages (e.g. wings).
    • In theory, this should also allow avatars to touch attachment points on other avatars (e.g. holding hands); however, exactly how this works within the framework of the Permissions system – in terms of others accepting / refusing any direct interaction through something like a dialogue request as we see today – has yet to be worked out.
  • This led to a broader discussion on attaching objects to avatars, the core of which is summarised below.
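The "move an attachment point as close to a location as it can get" behaviour described above reduces, in the simplest case, to clamping the goal position onto the sphere of points the limb can reach. A minimal sketch follows; the names and the single-sphere reach model are assumptions for illustration, not LL's IK implementation.

```python
import math

# Clamp a world-space target onto the reachable sphere around a limb root:
# if the target is within reach, use it as-is; otherwise pull it back along
# the line from the root, so the limb gets as close as it can.
def clamp_to_reach(root, target, reach):
    dx, dy, dz = (t - r for r, t in zip(root, target))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= reach or dist == 0.0:
        return target  # reachable as-is
    scale = reach / dist
    return (root[0] + dx * scale,
            root[1] + dy * scale,
            root[2] + dz * scale)
```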

Object Parenting vs. Temp Attachments

  • The idea of being able to use Puppetry to reach out and grasp a (suitably scripted) object in-world – for example, an apple, a bottle or similar – raised questions on how the process will work.
    • Currently, “temp” attachments can be made to avatars (e.g. via an Experience), but this still requires the object being temporarily transferred to the avatar’s inventory (where it does not show up) and from there attached to the relevant attach point (e.g. a hand).
    • This is somewhat slow and cumbersome – particularly if you want to do something with the object (e.g. if it is a ball, throw it), as the object needs to be picked up, held, follow the throwing motion of the avatar’s arm, detach at the point of release, resume its status as a physical object in-world, have direction and velocity applied, and then move in the direction of the throw.
    • The suggestion was made that to simplify things, a concept of avatar-to-object parenting needs to be introduced to SL – so when the ball is picked up, it immediately becomes a child of that avatar – no need for the passing to inventory, attaching from there, detaching to inventory / deletion, etc., as seen with temp attachments.
  • Developing a hierarchy scheme for SL is already being mused through the Content Creation User Group, so it was suggested this could help present a road to object parenting with avatars. However, as it will take some time for any hierarchy system to be developed and implemented – and given it falls outside of the Puppetry project – it might mean that existing mechanisms have to be accepted, even if they place some limitations on what might be achieved until such time as a hierarchy system can be introduced.
  • As an alternative, it was suggested that the physics engine might be used to attach a degree of object parenting to an avatar:
    • The physics engine allows actions to be created, which are updated every sub-step; so in the case of a physical object, it should be possible to write an action that says, “Follow the hand of the avatar picking you up”.
    • Then, as long as the physics engine knows the position of the “holding” hand, the object could move with it; while there would be a small degree of physics lag, as long as the viewer knows to render the object at the avatar’s hand, rather than where the physics updates say the object is, this should not be visually noticeable.
    • This approach would not require an explicit hierarchy system, but it would require the viewer to send updates on the avatar’s hand position to the simulator’s physics engine – which is available.
    • The idea is modelled on the old physics action that perhaps most famously allowed a beach ball to be grabbed and pushed / lifted up onto a table as part of the orientation process new users once went through when first entering SL, and which can still be used today to push suitably scripted objects around.
    • If possible, this approach would also need some form of constraints and permissions (e.g. you don’t really want to make your entire building capable of being grabbed and shunted around willy-nilly).
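The "follow the hand" physics action sketched above amounts to nudging the held object a fraction of the way toward the hand on every physics sub-step, so it trails the hand with only a small lag. A toy illustration follows; the names and the stiffness model are hypothetical, not the actual physics engine's action API.

```python
# Per-sub-step "follow the hand" action: each sub-step moves the object a
# fixed fraction of the remaining distance toward the hand position.
def follow_hand_substep(object_pos, hand_pos, stiffness=0.5):
    return tuple(o + (h - o) * stiffness
                 for o, h in zip(object_pos, hand_pos))

pos = (0.0, 0.0, 0.0)
hand = (1.0, 0.0, 0.0)
for _ in range(8):  # eight physics sub-steps
    pos = follow_hand_substep(pos, hand)
# pos has converged close to the hand; the viewer would simply render the
# object at the hand to hide the residual physics lag.
```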

General Notes

  • There was a general conversation on Permissions within avatar-to-avatar interactions and where they need to fall.
    • As noted above, there will need to be some form of explicit granting of permission for Puppetry-generated hugs, handshakes, high-fives, etc.
    • However, concern was raised about permissions being needed for basic one-way interactions – such as pointing at someone. The concern here being that the LookAt debug target has become so conflated with ideas of “privacy” tools (“Stop looking at me, perv! You don’t have permission!”) that TPVs have surfaced the means to disable LookAts being sent by the viewer, so that people do not get shouted at if their LookAt cross-hairs happen to rest on another avatar. The worry was that if a similar kind of visual indicator is used for Puppetry, it might provoke a similar reaction.
    • The short answer was: no, it should not be an issue, as avatar locations can already be obtained through LSL without the need for a visual indicator being generated by the viewer.

Date of Next Meeting

  • Thursday, April 13th, 2023, 13:00 SLT.

2023 week #11: SL TPV Developer meeting summary

The Great Library of Eruanna, January 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the TPV Developer (TPVD) meeting held on Friday, March 17th, 2023 at 13:00 SLT. These meetings are chaired by Vir Linden, and their dates and times can be obtained from the SL Public Calendar; also note that the following is a summary of the key topics discussed in the meeting and is not intended to be a full transcript of all points raised.

Official Viewers Status

  • Release viewer: Maintenance Q(uality) viewer, version 6.6.9.577968, Thursday, February 2.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.578792, March 15 – This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

General Viewer Notes

  • The Performance Floater / Auto FPS viewer is still being worked on in the hope that performance can be further improved on lower-end systems.
  • Work is also being carried out to have the viewer work with a broader cross-section of translation tools.

Inventory Enhancement Project

  • The work to provide thumbnail images of folders and items in Inventory is progressing on the viewer-side, but deployment will be dependent on both viewer availability (project / RC to release) and assorted back-end service and simulator updates to handle the new data.
  • Once the thumbnail preview work has been completed, it is possible the Lab will look to further enhancements to inventory management. One future enhancement under consideration is support for folders to be included in the Contents inventory of individual objects.

In Brief

  • See also my CCUG meeting summary, as this meeting crosses topics with that.
  • Estate Level Scripted Agent Controls (aka “Ban the Bots”): per my SUG meeting notes, there is a new simulator release due to be deployed which will allow estate / region holders and their managers to limit access to their regions by scripted agents (bots).
    • This work will initially be console-based in the viewer until UI updates can be made and a viewer update with them deployed.
    • However, a bug has been found in the simulator code, which is currently being worked upon. The hope is this will be fixed without any delay to the code being rolled-out.
  • It was re-iterated that the next major new graphics project following glTF materials is likely to be support for Vulkan / MoltenVK as the graphics API of preference (Windows / Mac). There is no time frame on when this work will commence, though.
  • As I reported in November 2022, Atlassian has announced it will be restructuring how it licenses the Jira bug reporting product from 2024 onwards.
    • No decision has yet been taken on the direction LL will take as this change is made, but there are ongoing internal discussions on options.
    • However, at this point it appears as if whatever route LL decides to take, they will need to review how public issues are raised and passed to them, etc.

Next Meeting

  • Friday, April 14th, 2023.

2023 week 11: SL CCUG meeting summary

Gothbrooke Forest, January 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, March 16th, 2023 at 13:00 SLT.  These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments, ask questions or respond to others, but note that you will need Voice enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, glTF materials assets are materials scenes that don’t have any nodes / geometry, they only have the materials array, and there is only one material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.

Status

  • The PBR Materials project viewer updated to version 7.0.0.578792 on March 15th, 2023. Note that this viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Texture handling / management:
    • As a result of data gathered by the Lab revealing that a lot of users only have around 1 GB of texture memory, Dave P (Runitai Linden) has been making another pass through texture handling to make loading faster and memory use more efficient.
    • VRAM management has been improved to more selectively release texture memory on systems which might otherwise “run low” on available VRAM.
    • The hope is these changes will reduce texture thrashing issues (texture blurring, clearing, blurring, clearing) in the future for those so affected.
  • Geenz Linden continues to work on the Mac side of the PBR work; Comic Linden is finalising UV treatment; and Bed Linden is working on the one remaining server-side bug the team is aware of, as well as on atmospherics and issues with rendering them in linear space.
  • Brad Linden is working on a series of bugs in PBR materials handling, where editing via LSL or manually sees the updates (changes) dropped rather than applied in various edge cases and situations.
    • The simulator-side fixes for these issues are in place; fixes within the viewer are awaiting inclusion in an upcoming viewer update.

In Brief

  • glTF format for geometry (mesh), animations, etc.: this is something the Lab does want to do, but it will take the form of follow-on project(s) from the current glTF PBR materials work.
    • Supporting glTF geometry imports is seen as a major project, as it will likely require handling of arbitrary hierarchies – not something SL currently handles – although it is acknowledged that, once done, it will offer a lot of benefits.
  • There was a general discussion on terrain improvements. This is something LL had been considering, but content creators attending the CCUG meeting favoured the PBR work and graphics updates, so the terrain updates have had to be put back onto the road map. Where they would slot in is not clear, as the desire from creators is to see the glTF work continue with geometry import support, etc., as noted above.
  • Another major graphic project waiting in the wings is the introduction of support for the Vulkan graphics API / MoltenVK (for Mac). This would likely take priority over any significant terrain work.

Next Meeting

  • Thursday, March 30th, 2023.

2023 SL Puppetry project week #10 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and audio recording of the Thursday, March 9th, 2023 Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

General Project Description as Originally Conceived

LL’s renewed interest in puppetry was primarily instigated by Philip joining LL as official advisor, and so it really was about streaming mocap. That is what Philip was interested in and why we started looking at it again. However since Puppetry’s announcement what I’ve been hearing from many SL Residents is: what they really want from “puppetry” is more physicality of the avatar in-world: picking up objects, holding hands, higher fidelity collisions. 
As a result, that is what I’ve been contemplating: how to improve the control and physicality of the avatar. Can that be the new improved direction of the Puppetry project? How to do it?

Leviathan Linden

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • No other special needs beyond the project viewer are required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).

Additional Work Not Originally In-Scope

  • Direct avatar / object / avatar-avatar interactions (“picking up” an apple; high-fives, etc.).
  • Animations streaming: allowing one viewer to run animations and have them sent via the simulator to all receiving viewers without any further processing of the animations by those viewers.
  • Enhanced LSL integration for animation control.
  • Adoption of better animation standards – possibly glTF.
  • Given the project is incorporating a lot of additional ideas, it is likely to evolve into a rolling development, with immediate targets for development / implementation decided as they are agreed upon, to be followed by future enhancements. As such, much of what goes into the meetings at present is general discussion and recommendations for consideration, rather than confirmed lines of development.

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public facing Kanban board with public issues.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Viewer Progress

  • An updated version of the project viewer is due to be made available once it has cleared LL’s QA process. This includes:
    • Using the binary protocol for the LEAP module communication, with new logic which causes LEAP modules to only be loaded by the viewer when they are used.
    • The AgentIO LEAP module adds the ability to adjust the look at target, viewer camera and agent orientation.
    • Support for sending the joint position of your avatar to the server, which is then available in LSL.
      • The code reports the post-animation location for attachment points, allowing the server to know where things like hands and wings are; this in turn allows LSL to query where that attachment point is in space and how it is rotated.
  • HOWEVER, the animation streaming code (see previous Puppetry meeting notes) will not be in the next viewer update.

Server-Side Work

  • The simulator code now has llGetAttachmentPointAnim() support, which should be recognised by the upcoming viewer update.
  • The Aditi puppetry regions are to be merged with the updated code so this can be tested.
  • While there has been some work completed on animation imports since the last meeting, there was nothing significant for LL to report on progress at this meeting.

General Notes

  • There is additional work going on to try to improve the IK system, with the aim of having the basics working better than is currently the case – better stability, etc. This work may appear in the viewer update after the one currently being prepared to go public.
  • Performance:
    • To prevent puppetry generating too much messaging traffic (UDP) between the viewer and simulator, a throttle is being worked on so that when the simulator is under a heavy load from multiple viewers running puppetry code, it can tell them all to tone down the volume of messages.
    • There will also be some switches and logic put into place that can be used when needed, helping to protect regions in case the load gets overwhelming.
    • A further suggestion was to ensure the simulator does not broadcast puppetry messages for avatars which are seated and not using the code (such as an audience at a performance), to further reduce the volume of messaging; this is viewed as a potentially good avenue of work to consider.
    • There is also a threshold in place – if an attachment point does not move beyond it, it is not considered as moved, which will hopefully also reduce the amount of messaging the simulator has to handle.
  • LSL Integration:
    • See: OPEN-375: “LSL Functions for reading avatar animation positions”.
    • This work is now paused. Rider Linden developed a proof of concept, but found that in order to better manipulate parameters within the constraints, a configuration file should be used. He is therefore refactoring the code to do this before proceeding further.
    • The configuration file will be called avatar_constraints.llsd and it will live alongside avatar_lad.xml in the character directory.
  • Questions were again raised on whether Puppetry is for VR / will enable the viewer to run VR.
    • It was again pointed out that while Puppetry lays more foundational work which could be leveraged for use with VR headsets, that is not the aim of the Puppetry project.
    • Providing VR headset support is a much broader issue, which would require the involvement of other teams from LL – Product, the Graphics Team, the viewer developers, etc.
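The movement threshold described under Performance above can be sketched as a simple distance test: an attachment-point update is only reported when the point has moved further than some tolerance since the last sent position. The threshold value and units here are invented for illustration.

```python
# Suppress puppetry updates for attachment points that have barely moved:
# only report a new position when it differs from the last sent position
# by more than a small threshold (the value here is illustrative).
def should_send_update(last_sent, current, threshold=0.01):
    dist_sq = sum((c - l) ** 2 for l, c in zip(last_sent, current))
    return dist_sq > threshold ** 2
```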

Date of Next Meeting

  • Thursday, March 23rd, 2023, 13:00 SLT.

2023 week 9: SL CCUG meeting summary – PBR

Cloud Edge, January 2023 – blog post
The following notes were taken from my audio recording and chat log transcript of the Content Creation User Group (CCUG) meeting held on Thursday, March 2nd, 2023 at 13:00 SLT.  These meetings are for discussion of work related to content creation in Second Life, including current work, upcoming work, and requests or comments from the community, together with viewer development work. They are chaired by Vir Linden, and dates and times can be obtained from the SL Public Calendar. Notes:
  • These meetings are conducted in mixed voice and text chat. Participants can use either to make comments, ask questions or respond to others, but note that you will need Voice enabled to hear responses and comments from the Linden reps and others using it. If you have issues with hearing or following the voice discussions, please inform the Lindens at the meeting.
  • The following is a summary of the key topics discussed in the meeting, and is not intended to be a full transcript of all points raised.

Official Viewers Summary

The PBR Materials project viewer updated to version 7.0.0.578526 on March 3rd, 2023. Note that this viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult and Rumpus Room 1 through 4.

Available Viewers

General Viewer Notes

  • The Maintenance R and the Performance Improvements / Auto-FPS RC viewers are both now apparently in line for promotion to de facto release status, although both may go through further RC updates prior to being promoted.

glTF Materials and Reflection Probes

Project Summary

  • To provide support for PBR materials using the core glTF 2.0 specification Section 3.9 and using mikkTSpace tangents, including the ability to have PBR Materials assets which can be applied to surfaces and also traded / sold.
  • To provide support for reflection probes and cubemap reflections.
  • The overall goal is to provide as much support for the glTF 2.0 specification as possible.
  • In the near-term, a glTF materials asset is a glTF scene with no nodes / geometry: it contains only the materials array, with a single material in that array.
    • It is currently too early to state how this might change when glTF support is expanded to include entire objects.
  • The project viewer is available via the Alternate Viewers page, but will only work on the following regions on Aditi (the Beta grid):  Materials1; Materials Adult and Rumpus Room 1 through 4.
  • Please also see previous CCUG meeting summaries for further background on this project.
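A near-term glTF materials asset as described above – no nodes or geometry, just a one-entry materials array – would look something like the following. This is a hypothetical illustration following the glTF 2.0 JSON schema; the material name and factor values are invented for the example and do not reflect any actual SL asset:

```json
{
  "asset": { "version": "2.0" },
  "materials": [
    {
      "name": "BrushedMetal",
      "pbrMetallicRoughness": {
        "baseColorFactor": [0.8, 0.8, 0.85, 1.0],
        "metallicFactor": 1.0,
        "roughnessFactor": 0.35
      }
    }
  ]
}
```

Because the file carries no scene graph, it can only describe surface appearance – which is exactly what makes it tradeable as a standalone material.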

Status

  • Work continues on viewer-side bug fixes.
  • Tone mapping: work is progressing on implementing the Krzysztof Narkowicz variant of ACES tone mapping, which should – depending on the monitor being used / viewer preferences set – produce better graphical results. As the result can vary by monitor / eye, this will include both an exposure slider and an option to disable tone mapping entirely.
  • Geenz Linden is working on the Mac side of the PBR work; Comic Linden is finalising UV treatment; Bed Linden is working on the one remaining server-side bug the team is aware of; and Dave P (Runitai Linden) is working on atmospherics and issues with rendering them in linear space.
  • Linear space alpha blending: there are still issues with this, particularly at either end of the scale (high colours / high transparency and low colours / low transparency). This is being worked on, but may end up with a debug setting allowing those who need to disable linear space alpha blending to do so, with a warning that this is not how scenes are intended to be viewed.
A scene imported by Nagachief Darkstone and WindowsCE to demonstrate reflection probes (note the reflections on the knight’s armour – these are not generated by attached environment lights but by a reflection probe within the building structure). Image courtesy of Rye Cogtail.
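The Narkowicz variant of ACES tone mapping mentioned under Status is a published curve-fit approximation that rolls off bright HDR values smoothly rather than clipping them. A minimal sketch, with a hypothetical exposure parameter standing in for the exposure slider mentioned above (the viewer's actual shader implementation may differ):

```python
def aces_narkowicz(x):
    """Krzysztof Narkowicz's curve-fit approximation of ACES filmic tone
    mapping: maps a linear HDR luminance value into the displayable [0, 1] range."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    y = (x * (a * x + b)) / (x * (c * x + d) + e)
    return max(0.0, min(1.0, y))

def tone_map(x, exposure=1.0):
    # Hypothetical exposure control, applied before the curve, standing in
    # for the exposure slider described above.
    return aces_narkowicz(x * exposure)

# A mid-grey value and a bright HDR value: the curve compresses the bright
# input toward (but never past) 1.0 instead of hard-clipping it.
print(tone_map(0.5), tone_map(4.0))
```

Disabling tone mapping would simply mean skipping the curve and clamping the raw linear value, which is why the result can look very different per monitor and per viewer preference.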

In Brief

  • It now looks as if the move away from the OpenGL API will be to Vulkan for Windows (/Linux?) and MoltenVK for Mac.
  • LL is interested in implementing something similar to the Firestorm Local Mesh capability by Beq Janus and Vaalith Jinn (see here and here for more), possibly as a result of a code contribution.
  • Land Impact:
    • Some creators are using the Animesh checkbox on upload to try to get around large mesh objects having heavy Land Impact values. LL gave notice at the meeting that this is regarded as an exploit, and it will be patched – so those doing so should cease, in order to avoid people facing unplanned object returns when their parcels start reporting they are over capacity.
    • In terms of Land Impact overall, it was acknowledged that while the formula has been updated to allow for mesh, etc., it still has some shortfalls; however, redressing this would require work which also involves bandwidth and server memory, and is not currently on the cards.
    • It is hoped that the move to support glTF mesh imports will offer a means to address LOD issues and Land Impact, as it will bring with it a fundamental shift in the data model.
  • Cull distance volumes: one way to reduce the render load on a system is to have cull distance volumes. The PBR reflection probes are being seen by LL as a means to test data gathering which can eventually be used in cull distance volumes (e.g. so you can set up a volume inside a room and have it so that the viewer does not start rendering anything within that room until a camera is within X metres of the room).
    • This could potentially make Land Impact more dynamic in terms of content streaming costs, based on the use of cull volumes / camera position.
    • It could also be used to assist in privacy matters (e.g. “don’t render what’s in this room unless people are in this room”).
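The "don't render until the camera is within X metres" idea can be sketched as a distance test against a bounding volume. This is purely illustrative – SL has no cull-volume API, and the class name, axis-aligned box shape, and activation distance below are all assumptions for demonstration:

```python
import math

class CullVolume:
    """Hypothetical cull volume: contents render only when the camera is
    within a given distance of the volume's bounding box."""
    def __init__(self, box_min, box_max, activation_distance):
        self.box_min = box_min
        self.box_max = box_max
        self.activation_distance = activation_distance  # the "X metres"

    def distance_to(self, camera):
        # Distance from the camera to the nearest point on the box (0 if inside),
        # found by clamping each camera coordinate to the box's extents.
        nearest = tuple(min(max(c, lo), hi)
                        for c, lo, hi in zip(camera, self.box_min, self.box_max))
        return math.dist(camera, nearest)

    def should_render_contents(self, camera):
        return self.distance_to(camera) <= self.activation_distance

# A 10 x 10 x 4 m room whose contents render within 5 m of its walls.
room = CullVolume((0, 0, 0), (10, 10, 4), activation_distance=5.0)
print(room.should_render_contents((20, 5, 2)))  # 10 m away -> False
print(room.should_render_contents((12, 5, 2)))  # 2 m away -> True
```

The same test also illustrates the privacy angle: a viewer consulting such a volume would simply never fetch or draw the room's contents for distant cameras.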

Next Meeting

  • Thursday, March 16th, 2023.