2019 Content Creation User Group week #41 summary

Cherishville, October 2019 – blog post

The following notes are taken from my audio recording of the Content Creation User Group (CCUG) meeting, held on Thursday, October 10th 2019 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are available on the Content Creation User Group wiki page.

Graphics Team

There are two new Lindens now on the rendering team – Euclid Linden, who has been with the Lab for around a month at the time of writing, and Ptolemy Linden, who has been a Linden for the last couple of weeks, again at the time of writing. Both will be working on various rendering projects which will include the Love Me Render viewer updates and also projects like the Environment Enhancement Project (EEP) – which is considered a priority in order to move that project towards release.

Euclid Linden goes full-on shark-man, while Ptolemy goes a little more conservative with a starter avatar

Viewers

No further updates thus far in the week. The Vinsanto Maintenance RC viewer (version 6.3.2.530962 at the time of writing) looks to be in “good shape” for promotion, but currently requires a little more time in its release cohort.

This leaves the official viewer pipelines at the time of the meeting as follows:

  • Current Release version 6.3.1.530559, formerly the Umeshu Maintenance RC viewer, dated September 5 – No Change.
  • Release channel cohorts:
    • Vinsanto Maintenance RC viewer, version 6.3.2.530962 (see above).
  • Project viewers:
    • Legacy Profiles viewer, version 6.3.2.530836, September 17. Covers the re-integration of Viewer Profiles.
    • Project Muscadine (Animesh follow-on) project viewer, version 6.4.0.530473, September 11.
    • 360 Snapshot project viewer, version 6.2.4.529111, July 16.
  • Linux Spur viewer, version 5.0.9.329906, dated November 17, 2017 and promoted to release status 29 November 2017 – offered pending a Linux version of the Alex Ivy viewer code.
  • Obsolete platform viewer, version 3.7.28.300847, May 8, 2015 – provided for users on Windows XP and OS X versions below 10.7.

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

Current Status

  • Work is progressing on building a predictive model based on the data LL has been gathering on mesh complexity, frame times, etc.
  • This model will be tested across a wider range of client hardware types and different ranges of settings.
  • The data thus far confirms that geometric complexity plays a large part in performance reduction, but also that there are a lot of other variables in play: rigged meshes have a very different performance impact to static meshes; some graphics properties can make a “big difference” in frame time; etc.
  • Details on the impact of textures have yet to be folded into the project.
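
As a purely illustrative sketch of the kind of predictive modelling described above – not LL’s actual model, whose features and gathered data have not been published – the following fits a simple least-squares model of per-object frame cost from a couple of invented features (triangle count and whether the mesh is rigged):

```python
import numpy as np

# Hypothetical per-object samples: [triangle_count, is_rigged (0/1)].
# Both the features and the frame costs are made up purely to
# illustrate the fitting step; they are not LL's gathered data.
features = np.array([
    [2_000,  0],
    [15_000, 0],
    [40_000, 0],
    [15_000, 1],
    [40_000, 1],
    [80_000, 1],
], dtype=float)
frame_cost_ms = np.array([0.05, 0.22, 0.61, 0.48, 1.30, 2.70])

# Add an intercept column and solve the least-squares problem.
X = np.hstack([features, np.ones((len(features), 1))])
coeffs, *_ = np.linalg.lstsq(X, frame_cost_ms, rcond=None)

def predicted_cost_ms(triangles: int, rigged: bool) -> float:
    """Predict a per-frame render cost (ms) for an object from the fit."""
    return float(coeffs @ np.array([triangles, float(rigged), 1.0]))

print(predicted_cost_ms(25_000, rigged=True))
```

A real model would obviously need far more variables (the notes above mention graphics properties and, eventually, textures), and would then be validated across the wider range of client hardware and settings mentioned.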

Project Muscadine

Project Summary

Currently: offering the means to change an Animesh object’s size parameters via LSL.

Current Status

Still largely on hold while the focus remains on ARCTan.

Other Items in Brief

  • Mesh Uploader: a couple of points were brought up concerning the mesh uploader:
    • At the time mesh was introduced, materials were not supported; therefore, the uploader includes code to discard tangent space (which can be used by normal maps). This means normals must be calculated in real time, causing both performance problems and inconsistencies between how normals appear in Second Life and how they appear in the 3D software used to create them (a rough sketch of this recalculation follows this list). It’s been suggested this issue should be the subject of a Jira.
    • Allowing for the work on ARCTan, some see the uploader as unfairly punishing content on the grounds of size and LI.
      • It was pointed out that a very large mesh that can be complex to render gets hit with a high LI and high upload cost, but a very small object – which may still have tens of thousands of triangles – is not penalised to the same degree, even though it might be as costly to render.
      • The alternative suggested was to have costs based on LOD boundaries & changes rather than a simple size / LI basis, the idea being that the cost is more reflective of what is seen and rendered by the viewer, which is seen as “levelling” the playing field (if a small object has a really high LOD tri count, then it would incur higher costs, in theory making creators more conservative in how they construct their models).
      • It was pointed out that in some respects complexity / LODs are already being gamed (e.g. by having one high LOD model then setting the medium and low LOD levels to use the same low poly version of the model for both and avoid costs for a proper mid-level LOD model), and such an approach as suggested might further encourage similar gaming.
      • Vir’s view is that the issue is not really that tied to the uploader per se, but is more in the realm of overall cost calculations (although LOD models obviously impact upload costs). As such, ARCTan is really the first step in trying to deal with these kinds of issues, and may help alleviate some of the perceived imbalance seen with upload costs.
  • Materials and Bakes on Mesh: a request was again put forward for LL to provide materials support for Bakes on Mesh. This is not an easy capability to supply, because:
    • System layers for clothing do not have a means to support any materials properties.
    • The Bake Service has no mechanism for identifying and handling materials properties to ensure they are correctly composited.
    • Thus, in order to support materials, both the system wearables and the Bake Service would require a large-scale overhaul which, given all that is going on right now (e.g. the work to transition services to AWS), the Lab is unwilling to take on.
  • A request was made to allow 2K textures to be displayed by Second Life under “controlled conditions”, the idea being that a single 2K texture could eliminate the need for multiple smaller textures (a rough memory comparison follows this list). The two main problems here are:
    • There is already a propensity for people to use high-res textures across all surfaces, whether required or not on the grounds “higher must be visually better”, so allowing even higher resolution textures to be displayed could exacerbate this.
    • Given there is no real gate keeping on how textures are used in-world once uploaded, how would any “controlled conditions” on the use of certain textures actually be implemented (both technically and from a user understanding perspective)?
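
On the tangent space / normals point above, the following is a minimal sketch (not the viewer’s actual code) of what recomputing smooth per-vertex normals from raw triangle geometry involves – essentially the work that has to happen at render time when that data is discarded at upload:

```python
import numpy as np

def smooth_vertex_normals(vertices: np.ndarray, triangles: np.ndarray) -> np.ndarray:
    """Recompute per-vertex normals from geometry alone.

    vertices:  (V, 3) float array of positions.
    triangles: (T, 3) int array of vertex indices.
    """
    # Unnormalised face normals; length is proportional to triangle area,
    # so larger faces contribute more to the smoothed result.
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    face_normals = np.cross(v1 - v0, v2 - v0)

    normals = np.zeros_like(vertices)
    for corner in range(3):
        np.add.at(normals, triangles[:, corner], face_normals)

    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-12)

# A single quad made of two triangles, lying in the XY plane.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(smooth_vertex_normals(verts, tris))  # every normal should be ~(0, 0, 1)
```

Different weighting and smoothing choices give slightly different results, which is exactly why normals recomputed by the viewer may not match what the creator saw in their 3D software.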
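
And on the 2K texture request, the memory arithmetic is worth spelling out. Assuming uncompressed 32-bit RGBA with a full mip chain (which adds roughly a third on top of the base level), one 2048×2048 texture occupies about the same GPU memory as the four 1024×1024 textures it might replace – the win is fewer separate textures to fetch and switch between, not a smaller footprint – whereas simply adding 2K maps on top of existing habits makes things strictly worse:

```python
def texture_memory_mb(width: int, height: int, bytes_per_pixel: int = 4,
                      with_mips: bool = True) -> float:
    """Approximate GPU memory for one uncompressed texture, in MB."""
    size = width * height * bytes_per_pixel
    if with_mips:
        size *= 4 / 3  # full mip chain: 1 + 1/4 + 1/16 + ... ≈ 4/3
    return size / 1_000_000

one_2k  = texture_memory_mb(2048, 2048)
four_1k = 4 * texture_memory_mb(1024, 1024)
print(f"one 2048x2048: {one_2k:.1f} MB; four 1024x1024: {four_1k:.1f} MB")
# Both come out at roughly 22.4 MB: replacing four 1K maps with one 2K map
# is roughly memory-neutral, while adding a 2K map where a 512 would do is not.
```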

7 thoughts on “2019 Content Creation User Group week #41 summary”

  1. Do you remember the freebie SL16B outfits? They used 1024-textures for everything, even the texture for the small chains on one pair of boots.

    I am struggling to find documentation of the texture display system. There are disorganised fragments all over the place, most over a decade old. This is well before Mesh and LOD. This describes a part of the system, but is vague about some of the details: http://wiki.secondlife.com/wiki/Image_System There’s some SL-specific jargon that doesn’t seem to get used elsewhere.

    There are references elsewhere to the mip maps being generated by the Viewer, which would mean that a 1024-texture would be downloaded and cached, but the 1024 texture might never be used to render an image.

    There is other material about choosing useful texture sizes, but those SL16B outfits lead me to wonder if anyone has ever bothered to read it.

    I suppose it’s to be expected that programmers will talk about technical solutions, but with SL dependent on user-created content Linden Lab really need to up their game on documentation. I rather pity these new Lindens, who will have to figure out how things work, all the while being shouted at by pixel-greedy ignorant users.


  2. While I’d love to have 2k textures, I do think something has to be done to discourage overuse of 1k textures first. I have seen some truly ridiculous examples, including a small decor item that used a grand total of 16 1k maps, including normal and spec maps.

    Any penalties for using excessively large textures should possibly be tied to the size of the object, though — there’s a difference between using a 1k texture for the pavement of a plaza and using it on a pair of earrings.

    While I’m sure education might help, it’ll probably take an actual penalty to convince some that large textures aren’t necessarily worth it.

    While on the subject of penalties, let’s not forget that rezzed objects are penalized but attached ones are not… pet peeve 🙂


    1. “Any penalties for using excessively large textures should possibly be tied to the size of the object, though — there’s a difference between using a 1k texture for the pavement of a plaza and using it on a pair of earrings.”

      How do you engineer the simulator to manage this (viewer-side management would be unreliable, as it could potentially be circumvented)? How is it to respond to incorrect usage? Pump LI? Automatically force a downsizing of the texture (itself not a bad idea)? How do you set the size constraints (some will be obvious – as with ear-rings to pavements; but others could well be less so)? What checks need to be added? How will this affect viewer-side workflow? What additional layers of understanding need to be communicated to users? That said, I agree that penalties tend to drive lessons home a lot more than expecting people to read best practices (which is not to offer any excuse for a lack of cohesive documentation).

      “While on the subject of penalties, let’s not forget that rezzed objects are penalized but attached ones are not… pet peeve”

      ARCTan should be looking at that, as the Lab is well aware of the issue.


      1. Sorry about the slow response — no idea how I missed this.

        I would probably consider both an LI penalty plus automatic downsizing of the texture. Maybe it could even be linked to viewing distance or object size in the viewport — similar to LODs. This would of course need some fleshing out and experimentation, but I’m convinced that a system along these lines would be very good for performance.

        As for best practices, I suspect most creators already know that using a dozen or so 1024s on a single object is… suboptimal 🙂


        1. No worries on the “slow” response – just glad you’re still poking at this blog 🙂 .

          The idea of downsizing based on MIP uploads has been loosely discussed at various CCUG meetings, as has the idea of making all the MIP files from a texture upload available, to try to encourage people to look at the different sizes and consider which is best for use on a specific surface / face they are texturing.

          In terms of a LOD-style linking of textures to sample size, this has actually just been done in Sansar on the avatar side (admittedly, avatars are a different case to in-world objects in SL). It uses a combination of a set per-avatar texture memory cap (initially 100 MB) and various other parameters (e.g. distance from your camera) to determine which texture sizes are used. I *believe* it works in a number of ways: a) if an avatar exceeds the cap, its textures are downsampled; b) if an avatar is further away from you, it can be downsampled; c) if there are a lot of avatars within a scene and pushing resources, the cap can be adjusted across all of them. The system has only just been introduced there, and so is still being tweaked and adjusted, but given avatars are already one of the biggest impacts on viewer-side performance, *if* something similar to this could be introduced to SL, it might help (but that is likely a big “if”).

          As to knowing the sub-optimal nature of always using 1024 textures in all cases, you’d certainly hope most creators would understand – but time and again, in-world objects show this not to be the case (and again, it can be even worse with avatar accessories), simply because of the blanket application of the belief “bigger / higher is always better”. Nor is this restricted purely to creators – users uploading their own textures for a personal build or for retexturing goods they have purchased can fall into the same trap.


  3. I “poke” at your blog all the time — it’s the one cannot-miss blog on SL and Sansar!

    I haven’t paid in-depth attention to Sansar in a while, but it’s *really* interesting that they’re taking this approach. While you’re probably right that it’s a big if, I would love to see it ported to SL. The improvements might even allow us to dare to dream about 2k maps down the line… 🙂

    I will forgive the casual user for falling into the 1024 trap, but I think empirical evidence shows that the only way creators can be persuaded is through penalties. For rezzed objects, a sizable LI hit might work; attached objects could increase the render complexity, thus increasing the chances that avatars are rendered as jelly dolls.

    No one *likes* limitations and penalties, but it may be the best approach to resolving these issues. There is no doubt that platform performance would improve, potentially by quite a bit.


    1. Thank you, re: blog!

      Yeah, the Sansar avatar approach is interesting – building on the basic idea of discards from SL. Waiting to see how it all works out once things settle down.

      You’re right on the penalties aspect – again, users / creators at the CCUG frequently raise that exact point. As per the notes, Vir is working on trying to factor in texture costs in ARC calculations for avatars and the rendering cost of in-world objects – so, ARCTan, as it surfaces and is refined, could be a very interesting exercise and learning curve!

