Burn2 2019 opens its gates at 17:00 SLT on Friday, October 11th, and will run through until midnight on Sunday, October 20th, 2019, culminating in the burning of the Man and the Temple on the closing weekend of the event.
The theme for this year is Metamorphoses, and is described thus:
Many cultures of the world have mythologies about transformations, or as the Greeks and Romans called them, metamorphoses.
In today’s world maybe humans are not turned into animals and frogs don’t become princes when kissed. But transformations can happen for a person or collectively for a group and these metamorphoses can have a powerful effect on our lives.
We invite you to immerse yourself in transformative experiences, explore the ideas and thoughts and dreams of creatives around you at Burn2.
A week of activities has been planned for the event, including music, dancing, lamplighters processions and – of course – the burning of The Man (Saturday, October 19th at 12:00 noon SLT) and The Temple (Sunday, October 20th at 12:00 noon SLT). You can keep track of all BURN2 activities through the event schedule on Google Calendar.
To ease your explorations of the regions, don’t forget you can pick up a ride at the Department of Mutant Vehicles, the car rezzing point close to the welcome area, and there are balloon rides to be had. The festival volunteers can point you in the right direction and / or give you note cards listing the camps and points of interest.
As with previous years, participants have fully entered into the spirit of the BURN2 theme, and the regions offer a tremendous carnival atmosphere.
Burn2 is an extension of the Burning Man festival and community into the world of Second Life. It is an officially sanctioned Burning Man regional event – the only virtual world event among more than 100 real-world Regional groups, and the only Regional event allowed to burn the Man.
The Burn2 Team operates events year-round, culminating in an annual major festival of community, art and fire in the fall – a virtual echo of Burning Man itself.
The following notes are taken from my audio recording of the Content Creation User Group (CCUG) meeting, held on Thursday, October 10th 2019 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are available on the Content Creation User Group wiki page.
There are two new Lindens on the rendering team: Euclid Linden, who has been with the Lab for around a month at the time of writing, and Ptolemy Linden, who has been a Linden for the last couple of weeks. Both will be working on various rendering projects, including the Love Me Render viewer updates and the Environment Enhancement Project (EEP) – the latter considered a priority in order to move it towards release.
No further updates thus far in the week. The Vinsanto Maintenance RC viewer (version 6.3.2.530962 at the time of writing) looks to be in “good shape” for promotion, but currently requires a little more time in its release cohort.
This leaves the official viewer pipelines at the time of the meeting as follows:
Current Release version 6.3.1.530559, formerly the Umeshu Maintenance RC viewer, dated September 5 – no change.
Linux Spur viewer, version 5.0.9.329906, dated November 17, 2017 and promoted to release status 29 November 2017 – offered pending a Linux version of the Alex Ivy viewer code.
Obsolete platform viewer, version 3.7.28.300847, May 8, 2015 – provided for users on Windows XP and OS X versions below 10.7.
ARCTan
An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to correct some inherent negative incentives working against the creation of optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated ones (e.g. older graphics systems, network capabilities, etc.).
Work is progressing on building a predictive model based on the data LL has been gathering on mesh complexity, frame times, etc.; a rough sketch of what such a model might look like is given below.
This model will be tested across a wider range of client hardware types and different ranges of settings.
The data thus far confirms that geometric complexity plays a large part in performance reduction, but also that there are a lot of other variables in play: rigged meshes have a very different performance impact to static meshes; some graphics properties can make a “big difference” in frame time; and so on.
Details on the impact of textures have yet to be folded into the project.
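The Lab has not shared the form of this predictive model, so purely by way of illustration, something as simple as a least-squares fit over per-object features gives a feel for the approach being described. The feature set, sample numbers and model form below are all assumptions of mine, not LL’s data:

```python
import numpy as np

# Hypothetical per-object samples: [triangle count, is_rigged (0/1), material faces],
# with the render time (ms) each object contributed to a frame. All of these
# numbers are invented for illustration; LL's real data set and model form
# have not been published.
X = np.array([
    [12_000, 0, 2],
    [48_000, 0, 4],
    [35_000, 1, 3],
    [90_000, 1, 6],
    [ 5_000, 0, 1],
], dtype=float)
y = np.array([0.08, 0.31, 0.55, 1.40, 0.03])  # measured ms per object

# Add an intercept column and fit by ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_cost(triangles, is_rigged, faces):
    """Predict per-object render time (ms) from the fitted coefficients."""
    return coef @ np.array([triangles, is_rigged, faces, 1.0])

print(f"predicted cost: {predicted_cost(20_000, 1, 2):.2f} ms")
```

In practice any such model would be fitted across the wider range of hardware and settings mentioned above, with textures as an additional input once that data has been folded in.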
Animesh follow-on: currently looking at offering the means to change an Animesh’s size parameters via LSL.
Still largely on hold while the focus is on ARCTan.
Other Items in Brief
Mesh Uploader: a couple of points were raised concerning the mesh uploader:
At the time mesh was introduced, materials were not supported; therefore, the uploader contains code to discard tangent space data (which is used by normal maps). This means the tangent space must be recalculated in real time, causing both performance problems and inconsistencies between how normal maps appear in Second Life and how they appear in the 3D software used to create them (a sketch of the recalculation involved is given after this list). It has been suggested this issue should be the subject of a Jira.
Even allowing for the work on ARCTan, some see the uploader as unfairly punishing content on the grounds of size and LI.
It was pointed out that a very large mesh that is complex to render gets hit with a high LI and a high upload cost, while a very small object – which may still have tens of thousands of triangles – is not penalised to the same degree, even though it might be just as costly to render.
The alternative suggested was to base costs on LOD boundaries and changes, rather than on a simple size / LI basis, the idea being that the cost would be more reflective of what is actually seen and rendered by the viewer. This is seen as “levelling” the playing field: if a small object has a really high LOD triangle count, it would incur higher costs, in theory making creators more conservative in how they construct their models (a speculative sketch of this kind of weighting is also given after this list).
It was pointed out that in some respects complexity / LODs are already being gamed (e.g. by creating one high LOD model, then setting the medium and low LOD levels to use the same low-poly version of the model, so avoiding the cost of a proper mid-level LOD model), and the suggested approach might further encourage similar gaming.
Vir’s view is that the issue is not really tied to the uploader per se, but lies more in the realm of overall cost calculations (although LOD models obviously impact upload costs). As such, ARCTan is really the first step in trying to deal with these kinds of issues, and may help alleviate some of the perceived imbalance in upload costs.
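On the tangent space point above: when authored tangents have been discarded, a renderer has to fall back on deriving them from vertex positions and UVs. The sketch below is the textbook per-triangle derivation, not LL’s actual code; the inconsistencies creators see arise because different implementations (and different 3D packages) can disagree on details such as handedness and smoothing:

```python
import numpy as np

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    """Derive a tangent vector for one triangle from positions and UVs.

    This is the standard real-time fallback when authored tangent data has
    been discarded at upload; renderers and 3D packages can differ on
    handedness and smoothing, hence the visual mismatches described above.
    """
    e1, e2 = p1 - p0, p2 - p0            # position edges
    duv1, duv2 = uv1 - uv0, uv2 - uv0    # UV edges
    det = duv1[0] * duv2[1] - duv2[0] * duv1[1]
    if abs(det) < 1e-12:                 # degenerate UV mapping
        return np.array([1.0, 0.0, 0.0])
    tangent = (e1 * duv2[1] - e2 * duv1[1]) / det
    return tangent / np.linalg.norm(tangent)

# Example triangle lying in the XY plane:
p  = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
uv = [np.array(v, float) for v in [(0, 0), (1, 0), (0, 1)]]
print(triangle_tangent(*p, *uv))   # -> [1. 0. 0.]
```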
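And on the suggested LOD-based costing: in outline, it would weight each LOD’s triangle count by the share of the view range over which that LOD is actually drawn, rather than keying the charge primarily off object size. The switch distances and weighting below are invented purely for illustration, and are not LL’s formulas:

```python
# Speculative sketch of a LOD-weighted render cost. Each LOD's triangle
# count is weighted by the fraction of the view range over which that LOD
# is the one displayed; all constants here are invented for illustration.

def lod_weighted_cost(tri_counts, switch_distances, max_range=256.0):
    """tri_counts: triangles at [high, medium, low, lowest] LOD.
    switch_distances: distance (m) at which each LOD hands over to the next.
    """
    cost, start = 0.0, 0.0
    for tris, end in zip(tri_counts, switch_distances + [max_range]):
        span = max(0.0, min(end, max_range) - start)  # metres this LOD is shown
        cost += tris * (span / max_range)             # weight by visible range
        start = end
    return cost

# A small object with an extremely dense high LOD is no longer "free"...
print(lod_weighted_cost([40_000, 2_000, 500, 60], [8.0, 24.0, 96.0]))  # ~1553
# ...compared with a well-behaved model using the same LOD distances:
print(lod_weighted_cost([4_000, 1_500, 400, 60], [8.0, 24.0, 96.0]))   # ~369
```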
Materials and Bakes on Mesh: a request was again put forward for LL to provide materials support for Bakes on Mesh. This is not an easy capability to supply, because:
System layers for clothing do not have a means to support any materials properties.
The Bake Service has no mechanism for identifying and handling materials properties to ensure they are correctly composited.
Thus, in order to support materials, both the system wearables and the Bake Service would require a large-scale overhaul – something the Lab is unwilling to take on, given all that is going on right now (e.g. the work to transition services to AWS provisioning).
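For context on why this is so difficult: the compositing the Bake Service performs for diffuse data amounts to alpha-blending a stack of RGBA layers down into a single baked texture, and there is simply no parallel stack for normal or specular data to travel through. A toy version of that flattening, as I understand it (assumed behaviour, not LL’s code):

```python
import numpy as np

def composite(layers):
    """Flatten a stack of RGBA layers (bottom first) into one RGB bake.

    This mirrors the diffuse-only job of the Bake Service: each system
    layer is alpha-blended over the result beneath it. There is no
    equivalent pipeline for normal or specular maps, which is why materials
    support would mean new plumbing end to end.
    """
    out = np.zeros(layers[0].shape[:2] + (3,), dtype=float)
    for layer in layers:
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)   # standard "over" operator
    return out

# Two toy 2x2 layers: an opaque skin tone under a half-transparent tattoo.
skin   = np.full((2, 2, 4), [0.8, 0.6, 0.5, 1.0])
tattoo = np.full((2, 2, 4), [0.1, 0.1, 0.3, 0.5])
bake = composite([skin, tattoo])
print(bake[0, 0])   # the blended colour written into the single baked texture
```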
2K textures: a request was made to allow 2K textures to be displayed by Second Life under “controlled conditions”, the idea being that a single 2K texture could eliminate the need for multiple smaller textures. The two main problems here are:
There is already a propensity for people to use high-res textures across all surfaces, whether required or not, on the grounds that “higher must be visually better”, so allowing even higher resolution textures to be displayed could exacerbate this (see the memory arithmetic below).
Given there is no real gatekeeping on how textures are used in-world once uploaded, how would any “controlled conditions” on the use of such textures actually be implemented, both technically and from a user-understanding perspective?
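To put the resource side of the first point into numbers: an uncompressed 32-bit texture with a full mipmap chain occupies roughly width × height × 4 × 4⁄3 bytes of video memory, so each doubling of resolution quadruples the footprint. A quick back-of-the-envelope check (standard graphics arithmetic, not an LL figure):

```python
def vram_megabytes(size, bytes_per_pixel=4, mipmaps=True):
    """Approximate VRAM for a square, uncompressed texture of side `size`.

    A full mipmap chain adds roughly one third on top of the base level
    (1 + 1/4 + 1/16 + ... = 4/3).
    """
    base = size * size * bytes_per_pixel
    return (base * 4 / 3 if mipmaps else base) / (1024 ** 2)

for side in (512, 1024, 2048):
    print(f"{side} x {side}: {vram_megabytes(side):.1f} MB")
# 512: 1.3 MB, 1024: 5.3 MB, 2048: 21.3 MB - so a single 2K texture only
# "saves" memory if it genuinely replaces four or more 1K textures.
```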