SL project updates 21/2: NEW project: applying baked textures on mesh avatars

During the Content Creation User Group meeting held on Thursday, May 25th, Vir Linden announced that Linden Lab is now formally investigating applying baked textures to mesh avatars in Second Life, a project that has been on the request list since at least the Bento project.

In short, if it can be implemented, it would mean that textures such as skins and make-up layers could be applied to a mesh avatar in much the same way as system layer clothing can currently be applied to system avatars. In theory, this would reduce the complexity of mesh avatars by reducing the number of “onion layers” they currently require in order to simulate the capabilities of the baking system, which in turn should ease the rendering load mesh avatars place on CPUs and GPUs, hopefully improving people’s broader Second Life experience.

HOWEVER, the project is only at its earliest stages, and it will be a while before there is anything visible to show for it. The following is a summary of the project’s current status:

  • The first aspect of the work will be to update the existing baking service.
    • This currently operates at a maximum texture resolution of 512×512.
    • For mesh purposes, this needs to be increased to 1024×1024 (which can already be used directly on avatar meshes via textures and / or applier systems).
    • As the baking service hasn’t been touched in some time, updating it may take a while, and any progress on the rest of the project is dependent upon it being completed.
    • Once the baking service has been updated, then the actual work of extending it to support mesh avatars should be fairly straightforward.
  • The exact specifications for how the bakes will work have yet to be defined, so there are no feature / capability details at present.
  • The capability will not support the use of materials, as the baking service as a whole has no notion of materials at present; it only produces a composite of diffuse textures, and there would be a considerable amount of additional work required to make it “materials aware”, marking it as (perhaps) a separate project.
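For reference, this is how applier systems currently get a 1024×1024 texture onto a mesh body: a script sets the texture directly on a face via llSetLinkPrimitiveParamsFast. A minimal sketch of that existing approach follows; the face number and texture UUID are placeholder assumptions, not values from any real product:

```lsl
// Minimal applier-style sketch: apply a (1024x1024) skin texture
// directly to one face of a mesh body. The face number and texture
// UUID below are placeholders, not values from any real product.
integer SKIN_FACE = 0;
key gSkinTexture = "00000000-0000-0000-0000-000000000000";

default
{
    touch_start(integer n)
    {
        // Set the texture on one face of this link, leaving
        // repeats / offsets / rotation at their defaults.
        llSetLinkPrimitiveParamsFast(LINK_THIS,
            [PRIM_TEXTURE, SKIN_FACE, gSkinTexture,
             <1.0, 1.0, 0.0>, <0.0, 0.0, 0.0>, 0.0]);
    }
}
```

Bakes on mesh would sit alongside this: rather than a script pushing individual textures to faces, the baking service would deliver a single composited texture.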

It is important to note that this capability is not necessarily intended to replace applier systems; rather it is to add flexibility to using texture bakes with mesh, and potentially reduce the complexity of mesh avatars.

Further updates on this work will come via the Content Creation User Group (CCUG) meetings, and I’ll report on them through my usual CCUG meeting updates.

The following is an audio extract from the May 25th CCUG, at which Vir announced the project.

Note: there was a broader discussion on the avatar baking service, and this will be covered in my upcoming report on the CCUG itself.

SL project updates week 20/2: Content Creation User Group w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, May 18th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core points of the meeting. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting. Instead, I have tried to place a number of related comments by Vir on specific topics into single audio extracts with their associated notes, in the hope of making those topics easier to follow, and without changing the context of the comments themselves. If you would prefer to listen to the discussion and comments in the order the meeting unfolded, I have embedded a video recorded at the meeting by Medhue Simoni. My thanks to him for making it available.

Supplemental Animations

While this is now an adopted project, the focus has been on animated objects, and so there is no significant progress on this work at present.

Applying Baked Textures to Mesh Avatars

No movement on this.

Animated Objects

Vir has spent most of the week since the last meeting working on animated objects, developing prototypes and proof-of-concept code to see how objects might be animated using the avatar skeleton. He describes the results thus far as encouraging, whilst also pointing out it is still early days with the work, so it is far too early to determine what the final architecture will be.

The viewer already has a notion of an avatar without a human operator, which is notably seen when uploading an avatar mesh or animation. This notional avatar isn’t rendered graphically, but is oriented using transforms so that an object can use it as a source of joint motions. This is not necessarily how things will work with any finished product, but it is enough to demonstrate what might be possible.

Currently, Vir is working with single object rigged meshes, and would be happy to receive similar models, preferably with associated animation, if people have anything they believe would be useful for helping with these tests.

It is hoped that “being animated” will be an additional property which does not require a new mesh upload option, so that any rigged mesh for which you have Edit permissions can be set to use the property, allowing it to be driven by its own animations. Currently:

  • This will likely mean the object will no longer be attachable to an avatar
  • It has yet to be determined if this property will be a new prim type or an additional field added to an existing object, etc
  • It will not require any changes to the current mesh uploader; the property to convert a mesh to an animated object can be set post upload.

A suggestion was made that the animated mesh should use its own skeleton when independently rezzed in-world, but a sub-set of a controlling avatar’s skeleton if it is attached. This would allow things like animated horses to be rezzed in-world and then sat on for riding or pets to be “picked up” and carried,  as is currently the case with some scripted animals already.

The testing carried out thus far hasn’t looked at animated attachments, although Vir appreciates the potential in having them. However, there are concerns over potential additional performance impacts, and the risk of bone conflicts (what happens if your avatar is already using one or more bones for something, and those same bones are used by an animated attachment).

While not ruling the potential out, Vir’s tests so far haven’t encompassed animated attachments to determine what issues might arise. There are also other factors involved in avatar control which need to be looked at with animated objects: hover height, offsets, position, etc., all of which might affect how an animated object might be seen / behave.

Scripting / LSL Commands

The current work has not so far looked at LSL commands or command sets for the new capability. However, the intent remains that scripts for controlling an animated object will be held within the inventory for that object, and able to call animations for the object also contained within the object’s inventory, so things are not straying too far from what can already be done via scripted control of in-world objects.
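Since no commands have been defined, the following is a purely speculative sketch of how such a script might look. llStartObjectAnimation is an invented name (modelled on the existing llStartAnimation), and the animation name is a placeholder:

```lsl
// Purely hypothetical sketch -- no LSL commands for animated
// objects have been defined yet. "llStartObjectAnimation" is an
// assumed name modelled on the existing llStartAnimation, and
// "walk_cycle" is assumed to be an animation held in this
// object's own inventory, alongside this script.
default
{
    state_entry()
    {
        llSetTimerEvent(5.0);
    }

    timer()
    {
        // Hypothetical call: play an animation from the object's
        // inventory on the object's own skeleton.
        llStartObjectAnimation("walk_cycle");
    }
}
```

The key point is that both script and animation live in the same object’s inventory, mirroring how scripted control of in-world objects already works.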

Performance Impact

Similarly, it is hard at this point to know what the likely performance hit might be. Bento has shown that adding more bones to the avatar skeleton doesn’t create a notable performance hit, so providing a skeleton for in-world objects shouldn’t cause any greater impact than a basic avatar. However, associating a rigged mesh object with that skeleton, then animating the joints, etc., will have an impact, particularly if a lot of animated objects are used in any given place.

This is something that will be looked at in greater detail once there is a project viewer available for testing alongside any server-side updates, although the Lab doesn’t intend to make it easy for a region to be spammed with multiple versions of an animated object, and this may in part be linked to the Land Impact associated with such objects.

Attachment Points on Animated Objects and Linksets with Animated Objects

While attachment points are also joints within the skeleton being used by an animated object, and so can be animated, they would not actually support having other objects attached to them, as the animated object doesn’t have links to other objects in the way an avatar does.

An animated object could be a linkset of rigged meshes which are identified as a single object, with all of the rigged meshes referencing the same skeleton. Things might be more difficult if static mesh objects form a part of the object, as it is not clear how the positioning of these would be controlled, and more testing is required along these lines.

Body Shapes and Animation Scaling

Requests were made to allow animated objects to have body shapes (which would allow slider support, etc.), and  / or animation scaling.

Because of the changes that would be involved in both, coupled with the potential for conflicts in the case of animation scaling, Vir does not see either as being part of this work – as previously noted, assigning a body shape to an animated object would impact a number of other back-end systems (such as the baking service), adding significant overheads to the project.

As such, the Lab would rather keep the work focused, building on something that could be rolled-out relatively quickly, and then iterated upon. However, one option that might be considered is having some kind of root node scale, based on the scale of the animated object that would size the skeleton to the scale of the object, rather than vice versa, possibly by altering how the mPelvis bone is managed for such objects.

[56:37-1:02:30] The final part of the meeting delved into the relative efficiency of mesh and sculpts, and matrix maths on CPUs / GPUs, and the complexities of rendering animated objects, together with a reminder that object rendering costs are currently being re-examined.

Other Items

In-World Mesh Editing?

[41:00-55:55] Maxwell Graf raises the idea of having a simple in-world mesh editor / enhancements to the editing tools which would allow creators to adjust individual faces, edges or points in an object, presenting a reason for mesh creators to spend more time in-world, and which might allow non-mesh builders more flexibility in what they can do as well.

The current toolset  – mesh uploader and editing tools – would not support such a move. There are also a number of potential gotchas on a technical level which would need to be understood and dealt with, and in order for the Lab to consider such a project, any proposal would have to consider the smallest subset of capabilities available in dedicated mesh creation / editing tools like Blender and Maya that would be useful to have in-world, so that it might be possible to define the overall scope of the work required in terms of resources, etc., and what the overall return might be on the effort taken.

Based on the conversation, Max is going to try to put together a feature request / proposal, even if only for the purposes of future discussion.

 

SL project updates 19/2: NEW projects – supplemental animations and animated objects


The following notes are taken from the Content Creation User Group meeting, held on Thursday, May 11th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Audio extracts are provided within the text – although please note, these are not necessarily presented in the chronological order in which they were discussed in the meeting. Rather, I have tried to place a number of related comments by Vir on specific topics together – project scope, constraints, etc. – where in the meeting they may have been discussed / reiterated at different times. Medhue Simoni recorded the meeting, and his video is embedded at the end of this report for those wishing to follow the discussion chronologically. My thanks to him for the recording.

The meeting held two major announcements: supplemental animations and animated objects, both of which are being loosely referred to under the umbrella of “animation extensions”.

Supplemental Animations

This is an idea to overcome issues of animation states keyed by the server-side llSetAnimationOverride() conflicting with one another. This problem has been particularly noticeable since the arrival of Bento, and a typical example is that an animation to flap Bento wings, if played to have natural wing movement while walking, results in a conflict with the walk animation, causing the avatar to slide along the ground.

  • Supplemental animations will allow additional animations to run alongside the basic llSetAnimationOverride() locomotion graph, requiring updates to the server-side animation code, rather than any viewer updates.
  • The changes will allow for more than one supplemental animation to run at the same time – so you could have wings flapping while walking and a tail swinging – providing the animations are restricted to using discrete sets of bones and do not clash (e.g. the wing flapping doesn’t call on a bone used in tail wagging or walking). If there is an overlap, the normal animation priorities would then determine which animation is played.
  • While the syntax still has to be worked out, it will likely be a call to add a set of supplemental animations associated with a specific state (e.g. walking) on attaching a relevant object (such as wings), and a call to remove the animation set when the item is subsequently detached.
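To illustrate the conflict being addressed, here is a minimal sketch of the existing mechanism: a wings attachment claiming the “Walking” state via llSetAnimationOverride, which displaces whatever walk override the avatar’s AO had already set. The animation name is a placeholder:

```lsl
// The existing single-override mechanism: a wings attachment
// overriding the "Walking" state. Because only one animation can
// hold a state, doing this from a wings script stomps on any walk
// override the avatar's AO has set -- the conflict supplemental
// animations are intended to remove. "wing_flap_walk" is a
// placeholder animation name.
default
{
    attach(key id)
    {
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_OVERRIDE_ANIMATIONS);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_OVERRIDE_ANIMATIONS)
            llSetAnimationOverride("Walking", "wing_flap_walk");
    }
}
```

Under the supplemental animations proposal, a call of this general shape would add to the locomotion graph rather than replace the state’s animation, though the actual syntax is still to be defined.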

Animated Rigged Objects

The Lab is starting work on adding the ability to animate rigged objects to Second Life – something which has been the focus of ongoing discussions within the Content Creation User Group for the past several months.

General Overview, Initial Limitations – *NOT* NPCs

  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life.
  • It is about providing a means to use the Bento skeleton with rigged mesh to provide things like independently moveable pets / creatures, and animated scenery features, via scripted animation.
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in an object’s inventory)
  • It’s currently not clear if this will involve a new type of mesh object, or whether it will need a new flag added to an existing rigged object type in order for the object to be given its own skeleton. But either way, it will be in the same family of rigged mesh objects which is currently available.

  • While these objects may use the avatar skeleton, they are not avatars.
  • They will not:
    • Have any concept of body shape or avatar physics associated with them.
    • Use a Current Outfit Folder for wearables.
    • Utilise any system wearables (body shape, system layers, physics objects, etc.).
    • Be influenced by the shape sliders, or have any gender setting (as this is determined by the shape / shape sliders).
  • They will only be related to avatars in that they have associated animations driving them.

  • Given this is not an attempt to implement fully avatar-like NPCs in Second Life, use of the term “NPC” with them is not encouraged.
  • At the moment, the preferred term is simply “animated objects”.

Performance Concerns

  • There are liable to be two areas of impact for this capability: in-world land impact, directly affecting the simulator, and a rendering impact on the viewer.
  • Right now, the Lab has no idea how great either might be, but they do mean that what can be supported could be limited (hence a reason for not jumping directly to providing “full” NPC capabilities).  However, it will be something that will be monitored as the project moves forward.

General Q&A

This news prompted a range of questions, which Vir sought to address:

  • Would this mean custom avatar skeletons?  – No, it would use the existing (Bento) skeleton, attaching it to an animated rigged object. However, joint positions and offsets will be supported, allowing the skeleton to be modified to meet different uses.

  • Will this allow the use of Animation Overriders on objects?  – No; objects would at this stage not have their own locomotion graph like an avatar does, and therefore would not have any notion of walking or flying, etc. All animations would have to be scripted.

  • Does this mean limits associated with the current avatar skeleton – such as the limit of placing a bone no further than 5 metres from the avatar’s centre via an animation – will still apply? Yes, any limits baked into animation will remain, including the 5 metre offset limitation. The idea is that existing meshes and existing animations should be able to leverage this capability.
  • Could animated objects be attached to an avatar?  – This is not what is currently being looked at, although it is not ruled out; rather, the emphasis at the moment is on getting things animated independently of avatars. There is also a concern over the potential additional impact animated attachments may have on an avatar.

  • What happens if a script tries to drive the rigged mesh, rather than the avatar skeleton? – Normally, the scripts driving an avatar are in the attachments to that avatar, so “crossing the beams” is not something the Lab would recommend.
  • Is the Lab using this to help fix Pathfinding? – Not really. Pathfinding has its own set of issues and these are unlikely to be tackled as part of this project.
  • Can the skeleton for an animated object be assigned via script from an inventory object? – This might cause permissions issues.
  • How will a script know which object to animate? – The basic thinking is that the script would be inside the object it is animating (as is currently the case for placing scripts in an object), and so has permissions to animate that object. Using a single script to animate multiple independent objects would be more complicated and require some kind of object ID.
  • Could several rigged objects (rigged the same) be linked and have the same animation played? – Yes; the difference would be the object would be animated with respect to its internal skeleton rather than an actual avatar skeleton.
  • Would it be possible to sit on animated objects? – Possibly; although there might be issues, things might look odd. The Lab hasn’t investigated far enough to determine potential gotchas for this, but the hope is animated objects could work for vehicles.

  • Could animation scaling be used to adjust the size of an animated object? – It might make more sense to add some kind of “global scale” which would allow a skeleton to accommodate itself to the size of its object (rather than the object’s size being defined by the skeleton).

  • Will this allow animated objects to have wearables and attachments? – Not at this stage (although mesh clothing could in theory be part of the linkset making up an animated object). This is a very focused project at this point: playing animations in-world on rigged objects.

Other Points

  • A suggested name for the animated objects project is “Project Alive” – this might actually be adopted!
  • There are no plans for a blog post announcing the project. However, a mechanism will be provided for people to stay involved and comment on the work, possibly via a forum thread, as was the case with Bento. This might at some point utilise polls to focus down on people’s preferences.
  • The in-world forum for discussing this work will be the Content Creation User Group.
  • Between 44:24 and 51:10 there is a discussion of adding a prim (cube) as the root of the skeleton, allowing it to inherit physics and the abilities associated with a prim; morphing physics; plus using IK (inverse kinematics) with rigged object skeletons, etc. Pros and cons of these ideas are discussed – largely in chat. In short: the Lab is still considering how physics might be handled, although they are unlikely to opt for animated or morphing physics, while IK would also need to be looked at.
  • At present, there are no clear time frames as to how long these projects – supplemental animations and animated objects – will take, or when they will be implemented, simply because they are in their early phases. However, given the supplemental animations are restricted to server-side changes and do not require associated viewer updates, they might arrive sooner than animated objects.

Applying Baked Textures to Mesh Avatars

This remains under consideration, with Vir noting animated rigged objects could add a level of complexity to it, were it to be formally adopted as a project.

 

SL project updates 17/2: server, viewer Content Creation UG


Server Deployments

As always, please refer to the server deployment thread for the latest information  / updates.

  • There was no deployment to, or restart of, the Main (SLS) channel on Tuesday, April 25th
  • Depending on the outcome of late QA testing, the three RC channels may be updated on Wednesday, April 26th as follows:

DRTSIM-343: Allow Public Access Region / Parcel Settings Changes

This is the update to region / parcel access that will mean that if a region is explicitly set to Allow Public Access, parcel holders on the region will no longer be able to override the setting at the parcel level (see my update here). It had been deployed to the three RC channels a couple of weeks ago, but was then withdrawn. This may now be reappearing on an RC in week #18 (commencing Monday, May 1st, 2017), with Rider Linden noting:

There were a number of suggestions about additions to the project. I just finished getting the code in that will send a notification to the parcel owner if their access settings are changed out from under them. I’ve also fixed it so that the previous settings are stored in the simstate and restored if the override is reverted.

SL Viewer

The AssetHTTP project viewer, which shifts remaining asset types to delivery over HTTP via the Content Delivery Network(s) leveraged by the Lab, was updated to version 5.0.5.325825 on Thursday, April 27th. This is primarily a bug-fix release, aimed at reducing the high crash rate exhibited by the previous version.

Content Creation User Group Meeting

The following notes are taken from the Content Creation User Group meeting, held on Thursday April 27th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

The meeting was more of a general Q&A session, live streamed / recorded by Medhue Simoni, and that video is embedded at the end of this update, my thanks to him for providing it. Timestamps in the text below will take readers directly to the relevant point in the video (in a separate tab) where topics are discussed. Note that there was a lot of discussion via text, with topics overlapping. The notes here, together with the time stamps and audio extracts from my own recording refer to the key topics where Vir Linden provided input / feedback.

Attachments Using non-Hand Bones Following Hand Movements

[5:31] Medhue Simoni has been trying to work out a way to have attachments used by the hand (such as a gun, or a nunchuk system, for example) rigged to non-hand bones in the skeleton correctly track with the hands. This would, for example, allow someone to perform a set of nunchuk exercises without the weapons massively separating from the hands, or allow a gun to be drawn, fired, twirled on a trigger finger, etc., and then be returned to the holster in a fluid, hand-following movement.

The problem here is that as the non-hand bones which might be used for this aren’t actually connected to the hands, they have no way of knowing where the hand might be placed. However, Medhue believes that, given time, he might be able to solve the problem.

“Layering” Meshes using Alpha Textures

[9:20] Some content creators have been taking advantage of placing two meshes in the same location and using an alpha texture as an overlay, thereby forcing one mesh to always be on top.  This can add a certain level of realism to objects such as plants without the need for additional textures / baking.

However, how meshes with alphas are “sorted” at present appears to be more a factor of how the rendering pipeline is working at present, rather than being an intentional feature, therefore using layered meshes and alphas in this way is not recommended, as it cannot be guaranteed that a future update to the rendering system won’t change the behaviour.

Advanced Lighting Model and Lower-End Systems

[20:38] A question was asked as to whether Advanced Lighting Model (ALM) could “be made to work on lower spec computers”, so that more people have the opportunity to see materials in use.

ALM has tended to be a controversial subject, as it is often blamed for causing significant performance hits. However, on medium-to-high end systems, this is perhaps a case of people confusing enabling ALM with enabling ALM together with enabling shadow rendering (which does cause a performance hit); enabling ALM by itself shouldn’t result in any significant hit.

Lower specification systems and older GPU systems, however, are different. Some are not capable of handling ALM, regardless of whether shadows are disabled, and a performance hit is noticed simply by turning it on. This, coupled with a number of other factors, means that trying to adjust ALM to be of use to lower specification systems isn’t really something that the Lab could realistically engineer.

Note that the above discussion continues for a large part of the meeting, mostly in text chat.

JIRA Feature Acceptance & Action

[26:38] Feature requests submitted via the JIRA can go one of several ways. Requests proposing something the Lab believes cannot reasonably be done, or which cannot be done at all, or which are thought to offer only a small return for the amount of effort invested, tend to get rejected.

Where a request is accepted by the Lab and pulled into their internal JIRA, this doesn’t automatically mean it will be implemented; it simply means the Lab is considering the idea. It comes down to matters of overall benefit, resource requirements and availability, etc., as to whether it is actually implemented.

Supplemental Animations and Animated Mesh

[41:32] Both are still under consideration at the Lab, but there is no news on actual projects being on the horizon. Concerns about performance with animated meshes have been raised internally, should people fill their region / space with lots of animated meshes (NPCs, trees with branches swaying in the wind, animals, etc.).

Next Meeting

The next Content Creation User Group meeting will be on Thursday, May 11th.

SL project updates 16/2: Content Creation User Group

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday April 20th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

The meeting was live streamed / recorded by Medhue Simoni, and that video is embedded at the end of this update, my thanks to him for providing it. Timestamps in the text below will take readers directly to the relevant point in the video (in a separate tab) where topics are discussed.

Supplemental Animations

This is an idea to overcome issues of animation states keyed by the server-side llSetAnimationOverride() conflicting with one another. This problem has been particularly noticeable since the arrival of Bento, and a typical example is that an animation to flap Bento wings, if played to have natural wing movement while walking, results in a conflict with the walk animation, causing the avatar to slide along the ground.

Again, this is only an issue with the server-side animation triggers; older scripted animation overriders, such as ZHAO, which use the scripted llStartAnimation function, are unaffected.
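As a point of comparison, a heavily simplified ZHAO-style approach might look like the sketch below: the script polls the avatar’s movement state with llGetAnimation and plays / stops its own animations with llStartAnimation / llStopAnimation, never touching the server-side override system. The animation name is a placeholder, and a real AO handles many more states:

```lsl
// Heavily simplified ZHAO-style sketch: poll the avatar's movement
// state and substitute a replacement animation via llStartAnimation,
// without touching the server-side override system. "my_walk" is a
// placeholder; a real AO also covers standing, flying, etc.
string gLast = "";

default
{
    attach(key id)
    {
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRIGGER_ANIMATION)
            llSetTimerEvent(0.25);
    }

    timer()
    {
        // llGetAnimation returns movement state names such as "Walking".
        string now = llGetAnimation(llGetOwner());
        if (now != gLast)
        {
            if (gLast == "Walking") llStopAnimation("my_walk");
            if (now == "Walking")   llStartAnimation("my_walk");
            gLast = now;
        }
    }
}
```

Because this approach never registers a server-side override, it cannot clash with llSetAnimationOverride() in the way described above.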

[03:41] This is still under discussion at the Lab, and if anyone has specific views / ideas, they are invited to attend the CCUG meetings and discuss them.

Scripted Avatar Skeleton Reset

[05:45] A question was raised (via chat) on whether it would be possible to reset the avatar skeleton via script (rather than relying on the local Reset Skeleton option in the viewer), as bone offsets defined within some meshes are not resetting when the mesh is removed (see BUG-11310).

Vir indicated that having a means to do this via script may well be useful, but is unsure of where it might slot into the work priorities were it to be adopted, particularly as it would require a new server-side message for passing the update notification to all surrounding viewers.

Skeleton Reset on Region Entry

[13:52] The above spawned a question on having the viewer automatically perform a Reset Skeleton function on entering a region, to try to combat problems of avatars appearing deformed.

There are some cases where this might be useful. If you run an animation just once to trigger a deliberate avatar deformation (e.g. from human to quadruped), that animation is not persistent, so if you enter another region, there is a chance other people will see you as deformed. However, trying to fix such situations through an automated skeleton reset is seen as problematic. For example, an automatic skeleton reset by the viewer on entering a region could conflict with the receipt of animation updates intended to deform an avatar in your view so it looks correct, resulting in the avatar being unintentionally deformed (again, as in the case of an avatar which is supposed to be a quadruped appearing as a deformed biped, because the viewer has ignored the necessary appearance information as a result of the automated reset).

Projects Under Consideration

The last several Content Creation User Group meetings have included discussions on two potential projects for the Lab. Neither has as yet been adopted, but both are under consideration, and Vir offered a recap of each.

Applying Baked Textures to Mesh Avatars

[20:57] This is essentially taking the ability of the default avatar baking service to manage compositing and baking system layers (skins, shirts, pants, jackets, tattoos, etc), onto a default avatar, and extending it to be used on mesh avatars, potentially reducing the complexity of the latter (which have to be built up in “onion skin” layers in order to be able to handle things like tattoos, skins, make-up options, etc.), and thus making them more render-efficient.

As mentioned in previous Content Creation updates in this blog, there are some issues with doing this: the baking service would likely need to be tweaked to handle 1024×1024 textures (it currently only supports up to 512×512); it does not handle materials (although this is not seen as a major issue, as it is felt among those at the CCUG meetings that the primary use of the baking service would be for compositing skins + make-up / tattoos into a single texture for mesh application); etc.

Animated Objects / NPCs

[25:17] Animation support for in-world objects could be used for a wide variety of things, from non-player characters (NPCs), which do not require a link to an active viewer / client, through to animating things like tree branches swaying in a breeze, etc.

This work has various levels of complexity associated with it, and were it to be adopted, might actually result in a series of related projects. NPCs in particular would be an extensive project, as it would need to encapsulate a means to define what NPCs are wearing / have attached, how to make NPCs customisable through something like the avatar shape sliders, etc.

As such, were the Lab to decide to pursue animated objects, decisions would need to be made on overall scope for the project, what might be initially tackled, what might be seen as a follow-on project, etc.

How To Encourage the Lab to Adopt Projects

[28:03] Supplemental animations, bakes on meshes and animated objects have been the three major topics of discussion within the Lab in terms of possible major projects. Until they actually get on to the Lab’s roadmap (*if* they get that far), it is not easy for specific time frames, etc., to be discussed at meetings.

However, the best way to actively encourage the Lab to continue looking into such potential projects is to attend meetings like the CCUG and discuss them. Although the Lab has used polls in the past, they aren’t seen as particularly useful, whereas direct discussion through meetings like the CCUG tends to bring a consistent set of interests / ideas / suggestions to the surface.

Continue reading “SL project updates 16/2: Content Creation User Group”

SL project updates 2017 15/2: server updates / content creation (w/audio)


Server Deployments – Recap

As always, please refer to the server deployment thread for the latest information.

  • There was no Main (SLS) channel deployment or re-start on Tuesday, April 11th.
  • On Wednesday, April 12th, the three RC channels all received a new server maintenance package which includes:
    • Several internal fixes and two new internal logging modes
    • Another adjustment to fix issues with off-line IM and Group Notice delivery reliability
    • A fix for an issue where large numbers of objects could be returned after a rolling restart.

Upcoming Server Updates

Note: DRTSIM is the Aditi (beta grid) channel reference.

DRTSIM-323: New Simulator Build

This update was delayed in its deployment to a release candidate channel on Agni (the Main grid), with Mazidox Linden indicating it will now be moving in week #16 (commencing Monday, April 17th). Region holders wishing to test their applications / services etc., on the new simulator build should contact Concierge about having their region moved to the appropriate RC channel(s) when known.

DRTSIM-332: Updated Avatar Capacity / Access for Regions

See either Improved Region Capacity and Access from the Lab or Lab announces improved region capacity and access in Second Life in this blog for details. This update is currently on a micro-channel (McRib) on Agni, with Mazidox noting the following:

After careful consideration, we’re making the new limits a default, rather than requiring intervention from Estate Managers (of course it is still possible to set limits lower using the region console or the current Maintenance viewer at http://wiki.secondlife.com/wiki/Linden_Lab_Official:Alternate_Viewers ). As a result there’s a new version on Aditi and we’ll have an updated version for McRib soon.

DRTSIM-343: Allow Public Access Region / Parcel Settings Changes

This is the update to region / parcel access which will mean that if a region is explicitly set to Allow Public Access, parcel holders on the region will no longer be able to override the setting at the parcel level (see my update here). The update had been deployed to the three RC channels, where it was awaiting a viewer-side update featuring revised / improved land controls. However, Mazidox notes the code:

Was removed from RC after residents raised concerns about losing their previous state of “Allow Public Access”. To alleviate concerns of accidental change by landlord wiping out existing access settings on all parcels, we’re saving parcel setting until the region restarts. This means that while it won’t be saved permanently, it will be restored if the landlord removes their Ban Lines override. DRTSIM-343 will likely be back this week in testing, but may not have a corresponding viewer change yet.

DRTSIM-347: Fix for Incorrect Object Returns

This fix is designed to prevent a bug wherein objects might be returned from a region when it is restarted. It is about to undergo testing on Aditi prior to onwards deployment to Agni.

Content Creation User Group

The following notes are taken from the Content Creation User Group meeting, held on Thursday, April 13th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page. The audio extracts are from my own recording, and a video of the entire meeting, livestreamed by Medhue Simoni, is embedded at the end of this update.

HTTP Asset Fetching

Vir re-iterated that the HTTP asset viewer, which utilises HTTP and the Content Delivery Network(s) leveraged by the Lab to deliver all Second Life asset types to users (rather than just mesh and texture assets and avatar baking data, as is currently the case with release versions of the viewer), is now at Release Candidate status. The viewer has gone through two rapid-fire updates, with the latest version (at the time of writing) being 5.0.4.325368. I have an overview of the viewer, which includes an explanation of CDN use, for those who may not be familiar with the way things work.

Applying Baked Textures to Mesh Avatars

[Video: 2:04-18:35] The Lab is digging further into this idea, which if implemented would in theory allow at least some texture layers to be baked onto avatar meshes in a manner akin to that used with system layers and the default avatar. This could greatly reduce the complexity of mesh avatar bodies by removing the need for them to be “onion skinned” with multiple layers, although there are some issues which would need to be addressed.

One is that the baking service cannot currently handle materials (normal and specular maps); it only handles diffuse texture maps. In addition, the Lab has no idea how widespread the use of materials might be among clothing makers specifically producing applier systems for mesh bodies (which can support the use of materials). As such, they do not have a clear idea as to whether support for materials through the baking service would be required; something that could make any project related to this idea far more complicated to develop.

This led to an extensive discussion on what would be required from a system which could bake directly onto mesh layers, how widely materials might be used, and exactly how such a system would be used. The broad consensus of the discussion was that in terms of baking down layers, most creators would likely prefer to see things like skins and skin variations, tattoos, lingerie, freckles and eyebrows baked down to a single layer, none of which tend to require significant materials support.

The discussion also touched upon the opportunity to provide better alpha masking / alpha blending, rather than chopping mesh bodies up into sections for masking or having alpha blending on different layers conflict with one another, etc. Overall, it was felt that providing some means to bake down some layers and reduce the complexity inherent in mesh avatars would be preferable to waiting on the Lab to undertake a more widespread overhaul of the baking service to provide “full” materials support through it. Vir is going to take the points raised back to the Lab for further consideration.
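The masking-versus-blending distinction raised in the discussion can be sketched very simply. Masking applies a per-pixel cutoff (a pixel is either fully shown or fully discarded, so draw order never matters), while blending mixes fractional alpha values (which is why overlapping blended “onion skin” layers can fight over draw order). A minimal illustrative sketch in plain Python, with invented function names:

```python
def alpha_mask(pixel, cutoff=0.5):
    """Masking: alpha is tested against a cutoff, so each pixel is either
    fully visible or fully discarded -- order-independent, no sorting needed."""
    r, g, b, a = pixel
    return (r, g, b, 1.0) if a >= cutoff else None  # None = pixel discarded

def alpha_blend(dst, src):
    """Blending: fractional alpha mixes src into dst; with several blended
    layers, the result depends on the order they are drawn in."""
    sa = src[3]
    r, g, b = (s * sa + d * (1.0 - sa) for s, d in zip(src[:3], dst[:3]))
    return (r, g, b, 1.0)

# Masking: a 30%-alpha pixel vanishes outright; an 80%-alpha pixel is kept solid.
hidden = alpha_mask((1.0, 1.0, 1.0, 0.3))   # discarded (None)
shown  = alpha_mask((1.0, 1.0, 1.0, 0.8))   # kept, fully opaque

# Blending: 50%-alpha white over black gives mid-grey.
grey = alpha_blend((0.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0, 0.5))
```

Baking layers down to a single texture sidesteps the blending-order problem entirely, since only one composited surface remains to be drawn.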

Animating Objects

Animating mesh objects is another project under consideration. This would be especially useful for things like non-player characters (NPCs) and the like. There are several ways this might be approached, but as Vir explained in reply to a question, none of them would involve animating the Collada file. He also pointed out that animated objects are still only at the discussion stage, so whether any project – were it to be taken on – would include the ability for animated objects / NPCs to attach static objects (e.g. a hat or similar) hasn’t been addressed.

Rapid Round-Up

Animating Objects (/ NPCs): This is also still under consideration, but yet to be adopted as a project. However, were it to go ahead, it would likely not involve animating the Collada files, as suggested during the meeting, but would leverage SL’s existing animation capabilities. It’s also too early in the process to say whether or not animated objects would support static objects being attached to them (e.g. an NPC being able to wear different hats).

Supplemental animations: The idea is to allow “supplemental” animations to run alongside the animation states keyed by llSetAnimationOverride(), effectively allowing them to play together, rather than conflicting with one another as is the case at the moment. Suggested some time ago, it is still being considered, but no work has been carried out as yet.
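Conceptually, the supplemental animations idea is about merging rather than replacing: today one animation per state wins outright, whereas a supplemental animation would play alongside it as long as the two drive different parts of the skeleton. The following is purely a conceptual model in Python (the joint names and merge rule are invented for illustration; this is not how the viewer or llSetAnimationOverride() actually work internally):

```python
def merge_animations(primary, supplementals):
    """primary / supplementals: dicts mapping joint name -> animation channel.
    Supplemental channels apply only to joints the primary leaves untouched,
    so the two can play together instead of one replacing the other."""
    pose = dict(primary)
    for anim in supplementals:
        for joint, channel in anim.items():
            pose.setdefault(joint, channel)  # primary always wins per joint
    return pose

# Hypothetical example: a walk override plus a tail-swish supplemental.
walk = {"hip": "walk-cycle", "legs": "stride"}
tail_swish = {"tail": "swish"}
pose = merge_animations(walk, [tail_swish])
# hip / legs come from the walk; the tail channel comes from the supplemental
```

Under the current system the equivalent of this merge does not happen, which is why, for example, a tail animation and a walk override conflict instead of playing together.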

Rendering Cost Calculations: The Lab is running a background project to look at the cost of rendering a wide range of Second Life features across a range of different client systems running the viewer. The tests for this work have now been defined and are about to be put into use. Once sufficient data has been gathered, the Lab will use it to determine what might need to be done to improve the accuracy of the avatar rendering calculations. Vir further defined the project’s status:

Re-using Linden Water Maps: A question was asked about re-using Linden Water data (Linden water being a combination of animated diffuse and normal maps) on other surfaces. The obvious use here would be to enable mirrors. Quite aside from the feared performance hit this could cause, it seems likely that the water system stands as its own unique implementation which would not easily lend itself to other uses.

Bouncing Bewb Improvements: In creating a set of animations for a client, Medhue Simoni noticed an issue with breast attachments and rigged mesh breasts. Essentially, as breast attachments go to the chest attachment point, they don’t necessarily follow the movement of the breasts when physics are used (which utilise the volume bone). His suggested solution would be to allow breast (/ nipple) attachments to be attached to the volume bone instead, allowing them to naturally follow breast movement.

Other Items

Aditi Inventory Server Update

The Lab has upgraded the hardware for one of the Aditi inventory servers. Almost all users logging in to Aditi should be using it. Those who do are asked to file a bug report if they notice anything strange related to inventory: lag, failure to load, textures looking incorrect, etc.