Puddlechurch – blog post
The majority of the following notes are taken from the Content Creation User Group (CCUG) meeting, held on Thursday, April 16th 2019 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc, are usually available on the Content Creation User Group wiki page.
All other SL viewers in the pipelines remain unchanged:
- Release channel cohorts:
- Project viewers:
- Linux Spur viewer, version 22.214.171.1249906, dated November 17, 2017 and promoted to release status 29 November – offered pending a Linux version of the Alex Ivy viewer code.
- Obsolete platform viewer, version 126.96.36.1990847, May 8, 2015 – provided for users on Windows XP and OS X versions below 10.7.
Environment Enhancement Project
A set of environmental enhancements allowing the environment (sky, sun, moon, clouds, water settings) to be set at region or parcel level, with support for day cycles of up to 7 days in length and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.
Due to performance issues, the initial implementation of EEP will not include certain atmospherics such as crepuscular rays (“God rays”).
The bug stomping continues.
Vir is now looking at adding shape support (or similar) to Animesh, which Vir sees as possibly being approached in a couple of ways:
- To make Animesh objects behave as much as possible like avatars. This might be done by issuing a command to load a given shape into an Animesh, or just have a similar appearance resolution to avatars, which would allow associations with body parts for any attachments contained within the Animesh’s contents.
- Advantage: either route offers the closest compatibility to the way in which avatars work, making it easy to port stuff over from using with avatars to using with Animesh (e.g. Animesh NPCs).
- This is a much more complex project to implement as it requires substantial changes to the Bake Service, which can be a performance bottleneck. So a concern is that adding Animesh support to the Bake Service could have a further adverse impact on its general performance.
- Alternatively, applying a body shape could be done via the simulator (avoiding the Bake Service), but this again adds complexity to the amount of asset information fetching the simulator already has to do.
- An alternative approach would be to offer a more granular control, using LSL to set the values usually set by shape sliders.
- Advantages: it can reduce the complexity by allowing a subset of slider changes to be replicated via LSL (e.g. face, hands, etc.), rather than trying to have the entire slider system replicated.
- Disadvantages: this doesn’t give the same level of compatibility with the way avatars work, and if all the sliders were required, it would mean considerable additional work in LSL calls to cover the 130+ sliders.
Which approach should be taken is down to whatever the most common use-case for customising Animesh might be (a likely topic for discussion). Currently, either approach will require additional server / viewer messaging, so Vir is looking at that.
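As a purely hypothetical sketch of the second, more granular approach: the function llSetAnimeshShapeParam() below does not exist – its name, its (parameter name, value) signature, and the parameter names passed to it are all invented here for illustration, since no such LSL capability has been designed yet.

```lsl
// HYPOTHETICAL sketch of the granular, per-slider LSL approach.
// llSetAnimeshShapeParam() is NOT a real LSL function; its name,
// signature and the parameter names are invented for illustration.
// Shape slider values in the viewer run from 0 to 100.
default
{
    touch_start(integer total_number)
    {
        // Replicate a small subset of face sliders on this Animesh object,
        // rather than the entire 130+ slider system:
        llSetAnimeshShapeParam("nose_length", 40);   // hypothetical
        llSetAnimeshShapeParam("jaw_width", 65);     // hypothetical
        llSetAnimeshShapeParam("eye_spacing", 50);   // hypothetical
    }
}
```

A subset-only capability of this kind is what would keep the scripting burden manageable, at the cost of the avatar-compatibility offered by the first approach.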
There are also questions over what else might be preferable to add to Animesh (e.g. extending Bakes on Mesh to support Animesh, adding attachments support, etc.), and over the relative priorities people place on the various options in deciding the order in which things might be tackled: would applying shapes be sufficient? Should it be shapes and then another requirement, or is there another requirement that should take priority over shape support?
Attachments are an issue in themselves; as Animesh doesn’t have an associated agent, there are no attachment tables for it to use, making basic attachment to a specified point difficult. Also, avatar attachments are effectively individual linksets applied to a common root – the avatar.
However, as an Animesh object is a single linkset, adding an attachment to one is more akin to “merging” the attachment’s linkset into that of the Animesh, making them one continuous linkset. This clearly adds complications; for example, how do you identify all the parts of the attachment in order to remove them when detaching, and how do you ensure they detach as a single object, rather than a coalesced group of unlinked items?
One potential solution might be to have a means by which individual prims within the Animesh linkset can be flagged with an associated joint within the skeleton, thus allowing attachments to be made to that joint, and somehow “faking” the fact that the attachment linkset is not part of the Animesh linkset.
Exactly how this would work in practice still has to be properly determined, together with a mechanism for handling local position and the attachment’s position and rotation offsets. It is further unclear at present whether this approach might require support from an additional viewer UI element, or could be controlled entirely through LSL.
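As a hypothetical illustration of the joint-flagging idea: the PRIM_BIND_JOINT prim parameter below does not exist and is invented here purely to sketch what flagging a prim with a skeleton joint might look like (the joint name mWristRight is a real SL skeleton joint).

```lsl
// HYPOTHETICAL: PRIM_BIND_JOINT is NOT a real LSL prim parameter;
// it is invented to illustrate flagging a child prim of an Animesh
// linkset with an associated skeleton joint, per the idea above.
default
{
    state_entry()
    {
        // Flag link 3 as "attached" to the right wrist joint, with
        // hypothetical position and rotation offsets from that joint:
        llSetLinkPrimitiveParamsFast(3, [
            PRIM_BIND_JOINT, "mWristRight",   // hypothetical flag; real joint name
            <0.02, 0.0, 0.01>,                // hypothetical position offset
            ZERO_ROTATION                     // hypothetical rotation offset
        ]);
    }
}
```

Something along these lines would also need a matching way to enumerate all flagged prims, so the “attachment” could later be detached as one object rather than as coalesced unlinked items.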
Bakes On Mesh
Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves viewer and server-side changes, including updating the baking service to support 1024×1024 textures, but does not include normal or specular map support, as these are not part of the existing Bake Service, nor are they recognised as system wearables. Adding materials support may be considered in the future.
Anchor Linden is dealing with issues related to handling alpha layers in the new baking channels – with some of them not getting correctly baked, which may require fixes in the baking process. BUG-226599 is also being looked at; although a feature request, it might actually be the result of an underpinning bug.
Following the April 11th CCUG, Cathy Foil carried out further tests on applying materials to a Bakes on Mesh surface. This involves using a script to take the UUID for one of the new universal bake channels (e.g. BAKED_AUX1) and point it at a normal map (shown in the placeholder normal map image “BAKED AUX1 IMG”, right), then wearing a universal wearable that uses the same bake channel. This results in the normal map being applied to the desired face, as shown in the image of the normal map in the Edit floater (arrowed on the right, above). This approach also appeared to allow a layering of normals on a face. However, the method is not currently seen as a recommended approach to materials with BoM, and probably won’t be treated as a supported technique.
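A minimal sketch of the scripted part of this experiment, assuming the BoM viewer/LSL constants: the face’s normal map slot is pointed at the AUX1 bake channel UUID via the standard PRIM_NORMAL parameter, so that whatever a worn universal wearable bakes into that channel appears as the face’s normal map. As noted above, this is an unsupported trick, not a recommended technique.

```lsl
// Point face 0's normal map slot at the BAKED_AUX1 channel.
// IMG_USE_BAKED_AUX1 is the Bakes on Mesh UUID constant for that
// channel; a worn universal wearable using the same channel then
// supplies the image that lands in the normal map slot.
default
{
    state_entry()
    {
        llSetLinkPrimitiveParamsFast(LINK_THIS, [
            PRIM_NORMAL, 0,          // face 0 of this prim
            IMG_USE_BAKED_AUX1,      // bake channel UUID constant
            <1.0, 1.0, 0.0>,         // repeats
            ZERO_VECTOR,             // offsets
            0.0                      // rotation
        ]);
    }
}
```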