2020 Content Creation User Group week #18 summary

The Getaway – Nutmeg, March 2020 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, April 30th 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc., are available on the Content Creation User Group wiki page.

Unfortunately, my recording software crashed about two-thirds of the way through the meeting (and I was AFK, so didn’t spot it), so I missed recording around the last 15-20 minutes of the discussions.

Jelly Dolls / Avatar Rendering

  • As noted in my week #16 CCUG summary, Vir has been looking at the jelly doll rendering code, which is not well optimised (e.g. it still draws rigged attachments) and handles some operations inconsistently (e.g. setting an avatar to never render is actually more expensive to process than simply leaving it jelly dolled).
  • One of the things Vir has been experimenting with is displaying Jelly Dolls as monochrome system avatars sans rigged mesh and attachments.
  • An issue with this approach is that non-human avatars use animations to reposition bones and joints, which can result in the system avatar looking very weird, even in monochrome. Vir has therefore been focused on finding a way to pause the animations when a non-human avatar is jelly dolled, and just run something like one or two of the default animations from the system locomotion graph.

In brief

BUG-228564 – Feature Request: New object property “Intangible”

This is possibly a duplicate request (those listed on the Jira are for different functionality, so not true duplicates), requesting an option to make certain in-world objects “invisible” to the viewer’s ray casting, so that they do not react to mouse clicks, but the objects beyond them can.

Such a capability would be useful where semi-transparent objects used to imitate sun beams, fog, rain, etc., otherwise block the ability to click on the objects (e.g. seats) they surround or sit in front of. However, such a change would require both viewer-side and back-end changes, so even if the Jira isn’t a duplicate of an existing request and is something LL accepts, it is unlikely to be worked on until after the cloud uplift work has been completed, simply because it will require the introduction of a new object property on the simulator side / back end.
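To illustrate the idea behind the request, here is a minimal sketch in Python (the viewer itself is C++, and this is not its actual picking code): a mouse-click ray cast that skips any object carrying a hypothetical “intangible” flag, so the click falls through to the object behind. The class, field names, and scene are all invented for illustration.

```python
# Illustrative sketch of BUG-228564's requested behaviour: an "intangible"
# flag that makes an object transparent to the viewer's pick ray.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    distance: float        # distance along the pick ray, in metres
    intangible: bool = False

def pick(ray_hits):
    """Return the nearest ray hit that is NOT flagged intangible."""
    solid = [o for o in ray_hits if not o.intangible]
    return min(solid, key=lambda o: o.distance, default=None)

# A semi-transparent "sun beam" prim in front of a seat: with the flag
# set, the click falls through to the seat behind it.
hits = [
    SceneObject("sun-beam prim", 2.0, intangible=True),
    SceneObject("seat", 5.0),
]
print(pick(hits).name)  # seat
```

In practice the flag would need to be a real object property stored simulator-side (hence the dependency on back-end changes noted above), not just a viewer-side filter.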

Education / Awareness

Much of the meeting was a general discussion on how to better inform / educate creators and users on the benefits of optimised content, and exactly what can impact things like perceived SL performance.

The crux of this discussion came down to providing better documentation / information that both creators and users could be pointed to (e.g. more detailed information on mesh creation, including topics such as LOD generation, tri counts, use of maps, etc., for the former; clearly-worded instructions on, and the benefits of, using in-viewer tools like ARC to improve performance for the latter).

  • It was pointed out that LL have limited resources for the production of comprehensive best practices, and that perhaps the best sources for these might be creators themselves.
  • As the SL wiki is currently closed to general editing, those who have a specific desire to edit wiki pages / build articles can request access by sending an e-mail outlining who they are and why they want access to: letmein-at-lindenlab.com.

2020 Content Creation User Group week #16 summary

Otter Lake, February 2020 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, April 16th 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc., are available on the Content Creation User Group wiki page.

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.
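To make the asset model more concrete, the sketch below (Python, purely illustrative) shows the general shape of a Day cycle referencing Sky and Water settings, with skies keyed by altitude band. The field names and lookup helper are invented; the actual EEP assets are LLSD structures defined by the viewer and simulator.

```python
# Illustrative model of EEP's Day / Sky / Water asset relationship:
# a Day cycle is a timeline of keyframes referencing Sky and Water
# settings, with separate sky tracks per altitude band.
day_cycle = {
    "name": "My Day",
    "length_hours": 24 * 7,          # cycles may span up to 7 days
    "water_track": [(0.0, "Calm Water")],   # (fraction of cycle, Water asset)
    "sky_tracks": {                  # skies can differ by altitude band
        "ground": [(0.0, "Dawn Sky"), (0.25, "Midday Sky"), (0.75, "Night Sky")],
        "1000m":  [(0.0, "Space Sky")],
    },
}

def sky_at(day, track, frac):
    """Return the last sky keyframe at or before `frac` (0.0-1.0 of the cycle)."""
    frames = [f for f in day["sky_tracks"][track] if f[0] <= frac]
    return frames[-1][1] if frames else None

print(sky_at(day_cycle, "ground", 0.5))  # Midday Sky
```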

Resources

Current Status

  • The (possibly last) RC version of the viewer – version 6.4.0.540188 – was issued on Wednesday, April 15th.
  • If all goes well with this RC viewer, then EEP will likely be promoted at week #17 (commencing Monday, April 20th).
  • Once EEP has been promoted, the flow of other RC viewers being promoted should increase in the coming weeks, allowing for the Lab’s preference to keep to one promotion every two weeks.

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

As of January 2020 ARCTan has effectively been split:

  • Immediate viewer-side changes, primarily focused on revising the Avatar Rendering Cost (ARC) calculations and providing additional viewer UI to give people better visibility into, and control over, complexity. This work can essentially be broken down as:
    • Collect data.
    • Update ARC function.
    • Design and provide a tool within the viewer UI (i.e. not a pop-up) that presents ARC information in a usable manner and lets users make decisions about rendering / performance.
  • Work on providing in-world object rendering costs (LOD models, etc.) which might affect Land Impact will be handled as a later tranche of project work, after the avatar work.
  • The belief is that “good” avatar ARC values can likely be used as a computational base for these rendering calculations.

Current Status

  • Vir is looking at the avatar visibility controls – jelly dolls (which are not optimised for avatars with a lot of attachments) and imposters.
    • He’s been particularly looking at getting better performance from jelly dolls (e.g. avoiding any drawing of rigged attachments for jelly dolled avatars, reducing the memory required to handle them).
    • There are places in the code where jelly dolled avatars are handled inconsistently (e.g. setting an avatar to never render should see it treated the same as a jelly doll, but this is not the case – it can actually be more expensive to render, as shadows are still turned on for it, etc.).
    • Improvements arising from this work could be issued within a maintenance RC viewer, rather than awaiting a specific ARCTan viewer to fix them.
  • Another thing Vir has looked at briefly, with a view to possibly examining it in more detail in the future, is the time taken to compute a mesh preview when right-clicking an avatar, which can impact the time it takes for the corresponding menu to be displayed. How big an effort it might be to improve this is unclear, but it “would be nice” to see it improved.

More on Jelly Dolls

  • One of the things Vir has been experimenting with vis-à-vis jelly dolls is displaying them as monochrome system avatars, so the system avatar mesh is used and any rigged mesh it is wearing is ignored (as per the notes above). A disadvantage here is that non-human avatar forms that are jelly dolled then look “a little weird”.
    • This could be avoided by ignoring all scripted transforms contained in any mesh the avatar is wearing, as these most directly deform the avatar, and so ignoring them would prevent the monochrome system avatar “looking weird”.
    • The question was asked if having non-human avatars appear in a humanoid shape if jelly dolled would be a problem, with the opinion broadly being that would be up to the person using the jelly doll option.
  • Some alternatives to jelly dolling discussed at the meeting included:
    • Simply render jelly dolled avatars as elliptical capsules.
    • Follow Firestorm’s lead and provide an option to only render avatars on a user’s Friend list (all others are ignored and not rendered).
    • Offer improved lower LOD options for avatars that could be automatically swapped (or used when jelly dolled).
  • One of the issues with jelly dolls is whether or not the capability is widely used – people tend to complain more about seeing mono-coloured avatars in their view than worrying about having their performance hit by fully rendering all the avatars around them; it’s not clear if alternative options would change this.
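The options above all reduce to a per-avatar decision about how to draw. Here is a hedged sketch in Python (not the viewer’s actual C++ logic) of that decision, folding in the never-render inconsistency and the Firestorm-style friends-only option discussed above. The mode names, thresholds, and parameters are all illustrative.

```python
# Illustrative per-avatar render decision: never-render should be at
# least as cheap as a jelly doll, and a friends-only option simply
# skips non-friends entirely.
def render_mode(complexity, max_complexity, *, never_render=False,
                friends_only=False, is_friend=False):
    if never_render:
        return "invisible"   # should cost no more than a jelly doll to handle
    if friends_only and not is_friend:
        return "invisible"   # Firestorm-style "render friends only"
    if complexity > max_complexity:
        return "jellydoll"   # mono-coloured stand-in; rigged mesh skipped
    return "full"

# An avatar over the user's complexity ceiling is jelly dolled:
print(render_mode(350_000, 250_000))                        # jellydoll
print(render_mode(100_000, 250_000, friends_only=True))     # invisible
```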

In brief

  • Mesh Uploader project viewer:
    • Not available as yet, but getting close to a project viewer release.
    • Incorporates Beq Janus’ contributions, as seen in Firestorm.
    • Also adds additional information about joint offsets and provides better logging.
  • Next meeting: Thursday, April 23rd.

2020 Content Creation User Group week #14 summary

Garrigua, February 2020 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, April 2nd 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc., are available on the Content Creation User Group wiki page.

A large part of the meeting concerned options for handling complex avatars that fall outside of what is currently being done through ARCTan, including esoteric discussions on when things like impostering should occur in the download / rendering cycle, etc. Discussions also touched on the sale of Sansar (see elsewhere in this blog) and SL’s uptick in user numbers as a result of the current SARS-CoV-2 pandemic.

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.

Resources

Current Status

  • Is caught on a couple of rendering bugs related to Linden Water and how the water / things under water are rendered by EEP.
  • The plan is still to have EEP promoted before any other viewer project is promoted to release status.

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

As of January 2020 ARCTan has effectively been split:

  • Immediate viewer-side changes, primarily focused on revising the Avatar Rendering Cost (ARC) calculations and providing additional viewer UI to give people better visibility into, and control over, complexity. This work can essentially be broken down as:
    • Collect data.
    • Update ARC function.
    • Design and provide a tool within the viewer UI (i.e. not a pop-up) that presents ARC information in a usable manner and lets users make decisions about rendering / performance.
  • Work on providing in-world object rendering costs (LOD models, etc.) which might affect Land Impact will be handled as a later tranche of project work, after the avatar work.
  • The belief is that “good” avatar ARC values can likely be used as a computational base for these rendering calculations.

Current Status

  • Internal testing is awaiting a Bake Service update related to the issue Vir identified that was causing problems in gathering data.
  • In the interim, Vir has been looking at the tools available for manipulating viewer performance (e.g. imposters, the Jelly Dolls tools, blocking, etc.). He’s specifically been looking at “peculiarities” in how the various options work and raising internal questions on possibly re-examining aspects of how they work.
  • One point with imposters / Jelly Dolls – raised as a concern prior to that project being deployed – is that rendering data for all attachments on an impostered or jelly dolled avatar is still downloaded to the viewer, which is not optimal.
    • Removing attachment data could improve performance, but would also make jelly dolled avatars in particular look even more rudimentary.
  • A bug in the Jelly Doll code means setting an avatar to never render causes it to load more slowly than just lowering the complexity threshold so it doesn’t render. This is viewed as a known bug.
  • There have been suggestions for trying to limit access to regions (particularly events) based on avatar complexity.
    • Right now, this would be difficult, as the simulator does not have authoritative information on avatar complexity – it’s calculated in the viewer, which in turn is based on data the simulator doesn’t even load.
    • This means there would have to be a significant refactoring of code before the simulator could be more proactive around avatar complexity. Given the cloud uplift work, this is not something the Lab wishes to tackle at this point in time.

General Discussion

  • Arbitrary skeletons: The question was raised on SL allowing entirely custom / arbitrary skeletons.
    • This again would be a complex project, one that was rejected during the Bento project due to the risk of considerable scope creep.
    • There is already a volume of available humanoid mesh avatars, each operating with their own (mutually incompatible) ecosystems of clothing and accessories that can already cause confusion for users. Adding completely arbitrary skeleton rigs to this could make things even more complicated and confusing.
  • The major reason there is little work being put into developing new LSL capabilities is because the majority of the LSL development resources are deeply involved in – wait for it – cloud uplift work.

Next Meeting

Due to the Lab’s monthly All Hands meeting, the next CCUG meeting will take place on Thursday, April 16th, 2020.

2020 Content Creation User Group week #13 summary

Lakeside, February 2020 – blog post

The following notes were taken from my audio recording and chat log of the Content Creation User Group (CCUG) meeting held on Thursday, March 26th 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are available on the Content Creation User Group wiki page.

SL Viewers

Following the promotion of the Premium RC viewer in week #12, the following viewers were merged up to that code base on March 25th:

At the time this report was written, the rest of the SL viewer pipelines remain as:

  • Current Release version 6.3.8.538264, dated March 12, promoted March 18th. Formerly the Premium RC viewer.
  • Release channel cohorts:
    • EEP RC viewer updated to version 6.4.0.538823, March 20.
    • Zirbenz Maintenance RC viewer, version 6.3.9.538719, issued March 19.
  • Project viewers:
    • Copy / Paste viewer, version 6.3.5.533365, December 9, 2019.
    • Project Muscadine (Animesh follow-on) project viewer, version 6.4.0.532999, November 22, 2019.
    • Legacy Profiles viewer, version 6.3.2.530836, September 17, 2019. Covers the re-integration of Viewer Profiles.
    • 360 Snapshot project viewer, version 6.2.4.529111, July 16, 2019.

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.

Resources

Current Status

  • Is now “really close” to being ready for release, with all of the graphics team working hard to eliminate the last of the issues that have been seen as blockers to moving the project to formal release status.
  • There may only be two remaining blockers that need to be cleared.

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

As of January 2020 ARCTan has effectively been split:

  • Immediate viewer-side changes, primarily focused on revising the Avatar Rendering Cost (ARC) calculations and providing additional viewer UI to give people better visibility into, and control over, complexity. This work can essentially be broken down as:
    • Collect data.
    • Update ARC function.
    • Design and provide a tool within the viewer UI (i.e. not a pop-up) that presents ARC information in a usable manner and lets users make decisions about rendering / performance.
  • Work on providing in-world object rendering costs (LOD models, etc.) which might affect Land Impact will be handled as a later tranche of project work, after the avatar work.
  • The belief is that “good” avatar ARC values can likely be used as a computational base for these rendering calculations.

Current Status

  • Vir is still trying to resolve the appearance / Bake Service issue that has been causing problems with ARCTan testing, and for which he thought he might have a fix. This has yet to be QA tested. Should it pass, internal testing can resume.

Project Muscadine

Project Summary

Currently: offering the means to change an Animesh object’s size parameters via LSL.

Current Status

  • Still technically on hold, but Vir has been looking at what will be required to bring the work done so far back up to date. This work, when it can be tackled, will include:
    • Merging the project viewer up to the current release viewer / EEP.
    • Updating the server code with all of the updates made to the simulator code, which is described as a “fairly major” piece of work.

General Discussion

  • LL is continuing to see a rise in Second Life use as a result of SARS-CoV-2, and the majority of the services are handling things well.
  • There is a report that larger Animesh objects do not LOD (level of detail) swap gracefully if the viewer cache has been heavily used (e.g. as a result of going to an event), even if the Animesh has been previously cached. The only ways to clear the issue appear to be re-logging or clearing cache.
    • This is not a known issue or something LL have seen, and a Jira has been requested on the problem.
  • There is an issue with the LL viewer getting confused between RC viewers when updating to a more recent RC update. This is a known issue and is being investigated.
  • There was a discussion over animation priorities and expanding the current range of priorities (with one suggestion they should go as high as 15!).
    • An advantage of a greater range is that it would in theory allow for more granular control of animation types (e.g. 0-1 for default system animations; 2 for general AO animations (standing, walking, running, flying); 3-4 for common AO animations (e.g. sitting); 5 for “speciality / custom” AOs; 6 for “must run in all cases”).
    • The flip side to this is the issue of creators just opting for the higher-end settings “because they are there”.
  • The ability to dynamically set animations via LSL was also re-mentioned and discussed.
  • Vir noted that were LL to look at implementing the dynamic application of animations, they might also look at priorities and priority ranges.
  • A further request was made for a “standalone” alpha channel for materials (separate to the one pre-baked into the diffuse texture channel). This is something that has been requested in the past (e.g. see BUG-224928), but it is not under current consideration.
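The priority discussion is easier to follow with the resolution rule made explicit: for each joint, the playing animation with the highest priority wins. The Python sketch below is purely conceptual (not LSL, and not the viewer’s animation system), and the 0-6 tiering it shows is the meeting’s suggestion rather than SL’s current 0-4 scheme.

```python
# Conceptual per-joint animation priority resolution: the highest-priority
# playing animation that touches a joint controls that joint.
def winning_animation(playing, joint):
    """playing: list of (name, priority, set-of-joints-it-animates)."""
    candidates = [(prio, name) for name, prio, joints in playing if joint in joints]
    return max(candidates)[1] if candidates else None

playing = [
    ("system-walk", 0, {"hip", "knee", "arm"}),   # default system animation
    ("ao-walk",     2, {"hip", "knee", "arm"}),   # general AO tier
    ("hold-cup",    5, {"arm"}),                  # "speciality / custom" tier
]
print(winning_animation(playing, "knee"))  # ao-walk
print(winning_animation(playing, "arm"))   # hold-cup
```

This also illustrates the flip side raised at the meeting: if creators simply author everything at the top tier, the extra granularity buys nothing.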

2020 Content Creation User Group week #9 summary

The Cold Rose, January 2020 – blog post

The following notes were taken from my audio recording of the Content Creation User Group (CCUG) meeting held on Thursday, February 27th 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are available on the Content Creation User Group wiki page.

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.

Resources

Current Status

  • Final review of issues is due on Friday, February 28th. If the project passes this review, EEP will be cleared for promotion to release status.
  • There is a viewer build that the Lab has internally that is liable to be the release version; it’s not clear if this viewer will go to RC prior to promotion or be issued as the de facto release viewer.
  • It has again been noted that EEP will not give a precise one-to-one rendering of absolutely every environment (sky, lighting, etc.) in SL when compared to Windlight, as EEP uses a completely different and updated set of shaders, but it is hoped that most will be “very close”.
  • Once EEP has reached release status, it is anticipated that there will be a “fairly rapid” cycle of viewer promotions to clear the remaining RC viewers in the pipelines (i.e. one new promotion every other week).

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

As of January 2020 ARCTan has effectively been split:

  • Immediate viewer-side changes, primarily focused on revising the Avatar Rendering Cost (ARC) calculations and providing additional viewer UI to give people better visibility into, and control over, complexity. This work can essentially be broken down as:
    • Collect data.
    • Update ARC function.
    • Design and provide a tool within the viewer UI (i.e. not a pop-up) that presents ARC information in a usable manner and lets users make decisions about rendering / performance.
  • Work on providing in-world object rendering costs (LOD models, etc.) which might affect Land Impact will be handled as a later tranche of project work, after the avatar work.
  • The belief is that “good” avatar ARC values can likely be used as a computational base for these rendering calculations.

Current Status

  • Vir believes he has a fix for the appearance / Bake Service issue that has been causing problems with ARCTan testing. This has yet to be QA tested. Should it pass, internal testing can resume.
  • UI tools: one of the issues with the current ARC capability is how the information is presented and how it is interpreted. The question was therefore asked (by Vir) about possible ARC-related tools that could be incorporated into the viewer.
    • There are tools already in the viewer (Max Complexity Setting, Always Render Friends, etc.), although how well these are used is open to debate.
    • A concern with adding further tools is that they could further confuse users (“more options and sliders!”) or just be ignored.
    • Automated / semi-automated means of adjusting complexity settings were favoured by some at the meeting.
    • Full automation could be difficult to implement due to the broad variance in hardware used to access SL, the complexity of existing content (avatar heads, bodies, etc.), plus people’s personal preferences.
    • A mechanism for adjusting / bypassing an automated process could be provided, but that defeats the point of automating, as people will just opt to bypass the process and ramp up settings.
    • An alternative might be to make the current tools more intuitive / easier to access and also more granular, then gradually move towards greater automation (with overrides) as people gain more familiarity with the whole issue of optimised content and performance.
    • A suggestion from the Lab was to have some form of “temporary” thresholds: for example, teleporting into a busy region could switch the viewer to some form of frame-rate threshold / asset load prioritisation that helps maintain a reasonable frame rate whilst also prioritising CPU cycles to speed up the initial loading period, then switch back up when done. The complication with this approach is that not everyone has the same bottlenecks, so a threshold setting that works well for some might not show any benefit for others.
  • Bound up with this is the question of educating users as to:
    • What tools are available and how they work (e.g. a capability one of those at the meeting was espousing as something that would be “nice” to see in the viewer, has in fact been a part of it for almost five years).
    • What actually is impacting their experience with SL (it is so easy to blame “the servers” and “LL” when actually many of the problems are in fact viewer-side and could be better managed by a user than might otherwise be the case).
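The “temporary threshold” idea discussed above can be sketched very simply: scale the user’s complexity ceiling down while the frame rate is below a target, then restore it once performance recovers. This Python sketch is purely illustrative (the viewer is C++), and every number in it is invented for the example.

```python
# Illustrative adaptive complexity ceiling: lower the threshold in
# proportion to the fps shortfall, clamped to a minimum floor, and
# restore the user's own setting once the target frame rate is met.
def adjusted_threshold(user_threshold, fps, target_fps=20, floor=80_000):
    if fps >= target_fps:
        return user_threshold          # performance is fine; honour user choice
    scale = max(fps / target_fps, 0.1)
    return max(int(user_threshold * scale), floor)

print(adjusted_threshold(300_000, 30))  # 300000 (no change)
print(adjusted_threshold(300_000, 10))  # 150000
print(adjusted_threshold(300_000, 1))   # 80000 (clamped at the floor)
```

As noted in the discussion, any such automation would probably need an override, and the real difficulty is that frame rate is not everyone’s bottleneck in the first place.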

2020 Content Creation User Group week #8 summary

Catena et Cavea, January 2020 – blog post

The following notes were taken from my audio recording of the Content Creation User Group (CCUG) meeting held on Thursday, February 20th 2020 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are available on the Content Creation User Group wiki page.

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for up to 7 days per cycle and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.

Resources

Current Status

  • Work is continuing to clear the remaining rendering bugs, which are being described as “resilient”.
  • The hope is EEP could be ready to move forward by the end of the month.
  • There is a backlog of potential fixes / enhancements for EEP (e.g. further rendering improvements, improving the brightness of stars, etc.). Some of these will form future EEP enhancements; others may be dealt with as part of other work, such as on-going rendering system improvements, rather than being held for a future EEP-specific project.

ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

As of January 2020 ARCTan has effectively been split:

  • Immediate viewer-side changes, primarily focused on revising the Avatar Rendering Cost (ARC) calculations and providing additional viewer UI to give people better visibility into, and control over, complexity. This work can essentially be broken down as:
    • Collect data.
    • Update ARC function.
    • Design and provide a tool within the viewer UI (i.e. not a pop-up) that presents ARC information in a usable manner and lets users make decisions about rendering / performance.
  • Work on providing in-world object rendering costs (LOD models, etc.) which might affect Land Impact will be handled as a later tranche of project work, after the avatar work.
  • The belief is that “good” avatar ARC values can likely be used as a computational base for these rendering calculations.

Current Status

  • Vir is still working on the Bake Service issue I’ve noted in my last two CCUG updates. However, he believes he now has a fix, and this is currently going through internal testing.
  • One thing that ARCTan testing has shown is the degree of variability in frame rates in terms of how long each frame takes to process. Part of this might be due to multiple operations running in the same thread when they should perhaps be separated into their own threads, particularly in terms of avatar loading.

Project Muscadine

Project Summary

  • Currently: offering the means to change an Animesh object’s size parameters via LSL.

Current Status

  • Still on hold, but the Aditi simhost that did have the back-end code has also been re-purposed for other project work, so the back-end support for Muscadine is currently unavailable.

In Brief

  • Viewer caching project: this has been a long-term project, which has recently re-started (and which is usually a subject for discussion at the TPVD meetings).
    • There is code related to the VFS caching (referenced in the message seen at viewer start-up) and the in-memory processes that sit on top of it that has not been updated in a long while, and which can give rise to stability issues.
    • The Lab now plans to work on this code “extensively” over the next few months.
  • There are claims that use of Animesh impacts simulator performance. As Animesh is predominantly a viewer-side capability, it is hard to see how it could impact simulator performance; it is possible that those experiencing issues could be conflating viewer and simulator performance.
  • Poser project: a contribution from the Black Dragon viewer, this is a project that is currently on hold.
    • The idea is to allow local (i.e. viewer side) joint-by-joint poses by entering different values for each of the required positions and rotations for a joint.
    • The fact that the tool is viewer-side with the results unseen by other users has been seen by the Lab as the project’s core limitation.
    • The Lab’s view is that the easiest way to share the results would be to place them in a single-frame animation that puts the avatar into the required pose and which can be seen by other viewers; this would likely be the approach taken when / if the project is resumed.
    • This work has nothing to do with the puppeteering project from 2011.
  • A further project awaiting resumption is the move to HTTP 2, which will hopefully improve things like asset data fetching, offer improved stability in data handling and improve scene loading.
  • Tidbit: the mesh uploader for Second Life apparently took around 10 people over 2 years to develop / get to work (and still has a UI element that might be incomprehensible to some). As such, there is some concern at the Lab that attempts to extend SL to support other modelling formats (e.g. FBX) could result in something equally or more confusing – although this is not to suggest LL is resolutely against supporting other file formats for use with SL.