The following notes are taken from my audio recording of the Content Creation User Group (CCUG) meeting held on Thursday, December 19th 2019 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are available on the Content Creation User Group wiki page.
The majority of this meeting was a generic conversation of ideas such as moving Second Life to support PBR, what might be done to improve Pathfinding, etc., none of which are on the road map for Second Life at present; as such, these notes focus on the current projects that are in progress at the Lab.
A new Maintenance viewer, code named Xanté, was released on Thursday, December 19th. Version 18.104.22.1683748 contains around 30 fixes for reported issues and bugs. All other viewers remain as per my Current Viewer Release List.
With regards to viewers:
- The Lab’s focus has been on transitioning their Bitbucket viewer build repositories from Mercurial to Git – see my week #50 TPVD meeting notes for more.
- As well as the current pipeline of viewers, work is also in hand to ensure the viewer is ready to manage Name Changes when that capability is deployed in early 2020.
Environment Enhancement Project
A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.
Due to performance issues, the initial implementation of EEP will now likely not include certain atmospherics such as crepuscular rays (“God rays”).
- Bug fixing continues, notably around alpha rendering issues.
- The hope is that of the remaining issues, some may be related, and so solving one will help to solve others of a similar nature.
Project ARCTan

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc.).
- Vir is working on getting things to a state where he can do some practical testing over the holiday period to ensure the relevant data is being collected. This is dependent on whether he has the time to confirm the internal version of the viewer is logging everything it needs to be logging.
- The work is still very much focused on the data collection aspect, rather than doing anything with the data that is gathered.
- The kind of data being gathered includes: the graphics and geometric properties of the objects in a scene, the rendering settings in use, the poly counts for the different LODs within a model, the graphics properties in use (materials, texture + texture size, etc.), plus the time required to generate a frame successfully given the work required to render the scene.
- Once the data has been gathered, the idea is to run the viewer on multiple hardware configurations (GPU, CPU, etc.), and gather data on the impact of changes to those various properties.
- The aim is to get a more accurate feel for how performance is impacted, and how significantly changes affect it (e.g. what's the impact of enabling Full Bright compared to enabling materials? Which is genuinely better: properly optimised mesh, plain faces with materials, or a combination of low-resolution mesh + materials?).
- As well as allowing the complexity calculations for avatar attachments and in-world objects to be better refined, the data gathered might, further down the line in the project, enable LL to make plausible forecasts of what might be seen by way of performance improvements in relation to suggested constraints being put on objects as a part of the creation process.
- Textures are still proving a problem in terms of measuring impact (e.g. is it more a total threshold limit being hit, rather than the number of textures used within an individual object?).
- Another limiting aspect is the number of different bottlenecks users can experience quite outside of the Lab's control (e.g. their network connection, what else is going on across that connection at the same time, etc.), and bottlenecks within individual systems that can vary.
- One attempt to improve things that has been made in Firestorm is to cache the matrix calculations for worn mesh when the bones to which the mesh is rigged haven't moved between frames. For a mesh with 8 faces, this can save up to 7 sets of calculations the viewer does not actually need to make. This may be contributed to LL for evaluation.
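The caching idea described above can be sketched roughly as follows. This is a hypothetical illustration, not actual Firestorm code: the names (`PaletteCache`, `renderFrame`, the bone-revision counter) are invented for the example, and it assumes all 8 faces of a worn mesh are rigged to the same skeleton, so the skinning matrices only need recomputing when the bones actually move:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of caching a rigged mesh's skinning-matrix
// palette, keyed on a counter that increments whenever the skeleton's
// bones move.
struct PaletteCache {
    uint64_t boneRevision = 0;  // skeleton state the palette was built for
    bool valid = false;
    int computeCount = 0;       // instrumentation for this example

    // Returns true if the palette had to be (re)computed.
    bool ensure(uint64_t currentRevision) {
        if (valid && boneRevision == currentRevision) {
            return false;  // cache hit: bones unchanged, reuse matrices
        }
        // ... the expensive per-joint matrix math would run here ...
        boneRevision = currentRevision;
        valid = true;
        ++computeCount;
        return true;
    }
};

// Render one frame of an 8-face mesh whose faces share one skeleton;
// returns how many palette computations the frame actually performed.
int renderFrame(PaletteCache& cache, uint64_t boneRevision) {
    int computations = 0;
    for (int face = 0; face < 8; ++face) {
        if (cache.ensure(boneRevision)) {
            ++computations;
        }
    }
    return computations;
}
```

In this sketch, a frame after the bones move costs one palette computation instead of eight, and a frame where nothing has moved costs none, which is the kind of redundant work the Firestorm change aims to avoid.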
** The next Content Creation User Group Meeting should be on Thursday, January 9th, 2020, but check the wiki page for confirmation **