2020 Content Creation User Group week #14 summary

Garrigua, February 2020 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, April 2nd, 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc., are available on the Content Creation User Group wiki page.

A large part of the meeting concerned options for handling complex avatars that fall outside of what is currently being done through ARCTan, including esoteric discussions on when things like avatar impostoring should occur in the download / rendering cycle, etc. Discussions also touched on the sale of Sansar (see elsewhere in this blog) and SL’s uptick in user numbers as a result of the current SARS-CoV-2 pandemic.

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at the region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.

Resources

Current Status

  • Progress is currently caught on a couple of rendering bugs related to Linden Water and how the water / things under water are rendered by EEP.
  • The plan is still to have EEP promoted before any other viewer project is promoted to release status.

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

As of January 2020 ARCTan has effectively been split:

  • Immediate viewer-side changes, primarily focused on revising the Avatar Rendering Cost (ARC) calculations and providing additional viewer UI so that people have better visibility of, and control over, avatar complexity. This work can essentially be broken down as:
    • Collect data.
    • Update ARC function.
    • Design and provide a tool within the viewer UI (i.e. not a pop-up) that presents ARC information in a usable manner and lets users make decisions about rendering / performance.
  • Work on providing in-world object rendering costs (LOD models, etc.) which might affect Land Impact will be handled as a later tranche of project work, after the avatar work.
  • The belief is that “good” avatar ARC values can likely be used as a computational base for these rendering calculations.

Current Status

  • Internal testing is awaiting a Bake Service update to address the issue Vir identified that was hampering data gathering.
  • In the interim, Vir has been looking at the tools available for manipulating viewer performance (e.g. avatar impostors, the Jelly Dolls tools, blocking, etc.). He’s specifically been looking at “peculiarities” in how the various options work, and raising internal questions on possibly re-examining aspects of their behaviour.
  • One point with impostors / Jelly Dolls – and one that was raised as a concern prior to that project being deployed – is that even when these settings are in use, the rendering data for all attachments on an impostored or jelly dolled avatar is still downloaded to the viewer, which is not optimal.
    • Removing attachment data could improve performance, but would also make jelly dolled avatars in particular look even more rudimentary.
  • A known bug in the Jelly Doll code means that setting an avatar to never render causes it to load more slowly than simply lowering the complexity threshold so that it doesn’t render.
  • There have been suggestions for trying to limit access to regions (particularly events) based on avatar complexity.
    • Right now, this would be difficult, as the simulator does not have authoritative information on avatar complexity – it’s calculated in the viewer, which in turn is based on data the simulator doesn’t even load (see the script sketch following this list).
    • This means there would have to be a significant refactoring of code before the simulator could be more proactive around avatar complexity. Given the cloud uplift work, this is not something the Lab wishes to tackle at this point in time.
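
By way of illustration only (this is my own sketch, not anything proposed by the Lab), scripts can already read the viewer-reported complexity of a nearby avatar via llGetObjectDetails and the OBJECT_RENDER_WEIGHT flag. The catch, and the reason such a value cannot be treated as authoritative, is that it is simply whatever nearby viewers have reported back (0 if nothing has been reported yet). The 150,000 limit below is an arbitrary, hypothetical figure.

// Illustrative sketch only – not a Lab mechanism. OBJECT_RENDER_WEIGHT
// returns the complexity value viewers have reported for an avatar
// (0 if none has been reported yet, -1 for non-avatar objects), so this
// check is advisory at best.
integer MAX_COMPLEXITY = 150000;   // hypothetical event limit

default
{
    state_entry()
    {
        // Scan for avatars within 20m every 10 seconds (sensors return at most 16 results).
        llSensorRepeat("", NULL_KEY, AGENT, 20.0, PI, 10.0);
    }

    sensor(integer num_detected)
    {
        integer i;
        for (i = 0; i < num_detected; ++i)
        {
            key av = llDetectedKey(i);
            integer weight = llList2Integer(
                llGetObjectDetails(av, [OBJECT_RENDER_WEIGHT]), 0);
            if (weight > MAX_COMPLEXITY)
            {
                llRegionSayTo(av, 0, "Your reported avatar complexity ("
                    + (string)weight + ") exceeds this event's limit of "
                    + (string)MAX_COMPLEXITY + ".");
            }
        }
    }
}

Anything more robust than this kind of advisory check would require the server-side refactoring described above.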

General Discussion

  • Arbitrary skeletons: The question was raised as to whether SL could allow entirely custom / arbitrary skeletons.
    • This again would be a complex project, one that was rejected during the Bento project due to the risk of considerable scope creep.
    • There is already a wide range of humanoid mesh avatars available, each operating within its own (mutually incompatible) ecosystem of clothing and accessories, which can cause confusion for users. Adding completely arbitrary skeleton rigs to this could make things even more complicated and confusing.
  • The major reason there is little work being put into developing new LSL capabilities is that the majority of the LSL development resources are deeply involved in – wait for it – cloud uplift work.

Next Meeting

Due to the Lab’s monthly All Hands meeting, the next CCUG meeting will take place on Thursday, April 16th, 2020.
