SL project updates week 22/1: server, viewer

Meadow Rose III, Tyme; Inara Pey, May 2017, on Flickr (blog post)

Server Deployments

As always, please refer to the server deployment thread for the latest news.

  • On Tuesday, May 23rd, the Main (SLS) channel was updated with a server maintenance package (#17.05.22.326523), containing a fix for BUG-100704, “[Server] If Anyone Can visit is selected after Allow Group was set only group members can enter”, related to the parcel overrides update.
  • On Wednesday, May 31st, the RC channels should be updated as follows:
    • BlueSteel and LeTigre should each receive the same server maintenance package (#17.05.26.326655), comprising “Tweaks to help with capability loss”.
    • Magnum should receive a server maintenance package (#17.05.26.326659) for the simulator operating system update, which does not contain any functionality changes.

OS Update Notes

Alongside the Server Deployment notes for Magnum, Linden Lab also state they are working on a fix for an issue raised with the 17.05.23.326524 deployment from last week (BUG-100737, “Shoutcast receivers unable to relay on RC Magnum”). This has been diagnosed, and they are working on a solution which will require a simple update to affected scripts.

SL Viewer

  • Current Release version 5.0.5.326444, released on May 18th, promoted May 23rd – formerly the Maintenance RC viewer – overview
  • RC viewers:
    • Project AssetHttp project viewer updated to version 5.0.6.326593 on May 26th – This viewer moves fetching of several types of assets to HTTP / CDN – overview
    • Voice RC viewer, version 5.0.5.325998, re-released on Friday, May 5th
  • Project viewers:
    • Project Alex Ivy 64-bit viewer, version 5.1.0.505089, updated on May 11th
    • 360-degree snapshot viewer updated to version 4.1.3.321712 on November 23rd, 2016 – ability to take 360-degree panoramic images
  • Obsolete platform viewer version 3.7.28.300847 dated May 8th, 2015 – provided for users on Windows XP and OS X versions below 10.7.

SL project updates week 21/3: Content Creation UG w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, May 25th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core points of the meeting. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting, but are ordered by subject matter.

A video recorded at the meeting by Medhue Simoni is embedded at the end of this update, my thanks to him for making it available. However, do note that it cuts out mid-way through the meeting. Timestamps in the text below refer to this recording.

Applying Baked Textures to Mesh Avatars

[1:54] This was announced as a new project – see my separate update for details.

The meeting saw additional questions asked about the baking service, which are summarised below.

Will the Baking Service Support Animated Objects?

  • Not initially. Baked textures are only relevant to your Current Outfit Folder (COF), affecting your appearance only. Animated objects will not have any notion of a COF (as they do not have an associated inventory structure as avatars do), so whose textures would an animated object show?
  • Also, even if you could assign your own COF-defined appearance to an animated object, it would only be valid until you change your own appearance, which would discard the bake used by the object, probably leaving it blank.
  • One solution might be allowing arbitrary textures to be sent to the baking service (see below). Another would be to allow animated objects to have their own notion of a COF contained within the object itself, which the baking service could somehow reference.
    • WERE this kind of work to be adopted, this would be Vir’s preferred approach. However, it is not currently a part of either the animated objects project or baking textures on meshes.

Baking Arbitrary Textures

Would it be possible to have a LSL function to request baking arbitrary textures?

  • Not as a part of applying baked textures to mesh, although it might be considered in the future.
  • However, the baking service could offer considerable flexibility of use were it to be extended, simply because of the way it defines the body area (head, upper body, lower body).
  • A problem is that, as noted above, baked textures are held only so long as your current avatar appearance defined via your COF is relevant, after which they are discarded. For the system to be useful with arbitrary textures, the resultant composite textures would need more rigorous storage, perhaps as a new asset class or retained in some form of “temporary” texture store – either of which would have to be defined and allowed for.
  • Thus, the problem is the amount of work involved in extending the baking service and (potentially) the asset handling required to support it.

HTTP Asset Viewer

[4:22] The HTTP Asset viewer was updated to version 5.0.6.326593 on Friday, May 26th. This update primarily brings the viewer to parity with the recently promoted release viewer, and so comprises the revised region / parcel access controls and the updates to Trash emptying behaviour.

Supplemental Animations

[6:53] As well as working on animated meshes, Vir is now also working on the LSL side of supplemental animations, alongside the LSL changes needed for animated objects. The work is designed to overcome issues of animation states keyed by the server-side llSetAnimationOverride() conflicting with one another.
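For context on where such conflicts come from: llSetAnimationOverride() keys a single animation to a single movement state, so the last script to set a given state simply wins. A minimal sketch of the existing mechanism (the animation name is purely illustrative):

```lsl
// Minimal sketch of the current server-side animation override mechanism.
// "my_walk" is an illustrative animation assumed to be in this attachment's inventory.
default
{
    attach(key id)
    {
        if (id != NULL_KEY)
            llRequestPermissions(id, PERMISSION_OVERRIDE_ANIMATIONS);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_OVERRIDE_ANIMATIONS)
            llSetAnimationOverride("Walking", "my_walk");
        // If another attachment later calls llSetAnimationOverride("Walking", ...),
        // this override is silently replaced - the kind of conflict the
        // supplemental animations work is intended to address.
    }
}
```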

Animated Objects

Current Project Status

Vir has basic prototyping working in a “hacked up” version of the viewer. He’s now working on the shared experience – how an animated object is seen by multiple viewers.

There are still no details on what limits beyond land impact may be applied to animated objects (e.g. the number of animated objects – not avatars – permitted per region type, etc.), as there is not at this point any solid data on potential performance impact to help indicate the kind of limits which might be required.

Number of Allowed Animation Motions

[8:52] Currently, SL supports a total of 64 animation motions playing at one time per agent (hence walks, arm swings, wing flaps, tail swishes, etc., all of which can happen at the same time). It’s not been tested to see how much of an actual load running multiple animations places on a system. The limit might have to be changed as a result of animated objects – or it might not; it’ll come down to testing.

Other Items of Discussion

Avatar Scaling

[12:24-video end] There is a lengthy discussion on avatar scaling.

  • Essentially, the size slider works within a certain range; go beyond this, and distortions of body parts (e.g. facial features) can start to occur, as some sliders stop working properly.
    • Obviously, it is possible to scale avatars using animations, but again, doing so also doesn’t play nicely with the sliders.
  • This problem is particularly impactful with Tiny and Petite avatars (although it also affects really large avatars). One workaround is to upload a mesh without joint positions for the affected bones, but this causes breakages in the mesh. Thus, having a slider which could handle the avatar’s scale over a broader range might be beneficial. However:
    • Changing the definition of the current scale slider to work over a broader range isn’t an option, due to the risk of existing content breakage.
    • Adding a new “global scale” slider to the system might be possible. However, while it is relatively simple at the viewer end of things, SL is already close to its limit of 255 sliders, and any additional global slider would require significant changes to the back-end.
  • A further problem is that motion is not affected by scale, but is keyed to the current avatar size range. So, additional work would be required on the locomotion system to ensure the distance covered by an avatar’s stride is consistent with its size, adding further complexity to any changes.
  • In addition, the ability to scale avatars would require animations using rotations only, as any use of translations could result in the locomotion issues noted above (e.g. a really small avatar would appear to zip along at 100s of miles an hour), and rotation-only animations are somewhat limiting.

BUG-20027: Allow joint-offset-relative translations in animations

Created during the Bento project, this feature request was originally closed as something the Lab could not implement. It has now been re-opened as people wanted to add further feedback to it. So, if you have an interest – please go and comment on the JIRA.

Cost of Animating via Bones vs. Using Flexis

The Lab views animating via flexis as being very inefficient, but have no numbers for a direct comparison to the cost of animating bones.

Improving IK Support

General requests have been made for SL to better support Inverse Kinematics (IK) to add greater flexibility of joint / extremity positioning. Vir has requested that if someone could start a feature request JIRA, open for comments, on what might be sought, it would be helpful.

Next Meeting

The next CCUG meeting will be Thursday, June 8th, 2017.

SL project updates 21/2: NEW project: applying baked textures on mesh avatars

During the Content Creation User Group meeting held on Thursday, May 25th, Vir Linden announced that Linden Lab is now formally investigating applying baked textures to mesh avatars in Second Life, a project that has been on the request list since at least the Bento project.

In short, if it can be implemented, it would mean that textures such as skins and make-up layers could be applied to a mesh avatar in much the same way as system layer clothing can currently be applied to system avatars, thus in theory reducing the complexity of mesh avatars by reducing the number of “onion layers” they currently require in order to simulate the capabilities of the baking system.  This in turn should ease the rendering load mesh avatars place on CPUs and GPUs, thus hopefully improving people’s broader Second Life experience.

HOWEVER, the project is only at its earliest stages, and it will be a while before there is anything visible to see with regards to it. The following is a summary of the project’s current status:

  • The first aspect of the work will be to update the existing baking service.
    • This currently operates at a maximum texture resolution of 512×512.
    • For mesh purposes, this needs to be increased to 1024×1024 (which can already be used directly on avatar meshes via textures and / or applier systems).
    • As the baking service hasn’t been touched in some time, updating it may take a while, and any progress on the rest of the project is dependent upon it being completed.
    • Once the baking service has been updated, then the actual work of extending it to support mesh avatars should be fairly straightforward.
  • The exact specifications for how the bakes will work have yet to be defined, so there are no feature / capability details at present.
  • The capability will not support the use of materials, as the baking service as a whole has no notion of materials at present; it only produces a composite of diffuse textures, and there would be a considerable amount of additional work required to make it “materials aware”, marking it as (perhaps) a separate project.

It is important to note that this capability is not necessarily intended to replace applier systems; rather it is to add flexibility to using texture bakes with mesh, and potentially reduce the complexity of mesh avatars.

Further updates on this work will come via the Content Creation User Group (CCUG) meetings, and I’ll report on them through my usual CCUG meeting updates.

The following is an audio extract from the May 25th CCUG, at which Vir announced the project.

Note: there was a broader discussion on the avatar baking service, and this will be covered in my upcoming report on the CCUG itself.

SL project updates week 21/1: server, viewer

Costa Blanco, Costa Blanco; Inara Pey, May 2017, on Flickr (blog post)

Server Deployments

As always, please refer to the server deployment thread for the latest updates and news.

  • On Tuesday, May 23rd, the Main (SLS) channel was updated with the server maintenance package previously deployed to LeTigre in week #20, containing the updated server-side parcel access override settings (more below)
  • On Wednesday, May 24th, the RC channels should be updated with a new server maintenance package containing the parcel access override controls, together with a fix for BUG-100704, “[Server] If Anyone Can visit is selected after Allow Group was set only group members can enter”.

SL Viewer

The Maintenance viewer, version 5.0.5.326444, dated May 18th, 2017 was promoted to release status on Tuesday, May 23rd.

This viewer is notable for its inclusion of some improvements to Trash purging behaviour, and support for the parcel access overrides which as of this week are fully deployed across the grid.

I have an overview of this viewer, which examines both the Trash purging changes and the parcel access overrides in detail, as well as touching on the other updates included in the release.

Outside of this, the current pipeline remains as:

  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself):
    • Voice RC viewer, version 5.0.5.325998, dated May 5th
    • Project AssetHttp project viewer,  version 5.0.5.325940, dated May 4th – This viewer moves fetching of several types of assets to HTTP / CDN – overview
  • Project viewers:
    • Project Alex Ivy 64-bit viewer, version 5.1.0.505089, dated May 11th
    • 360-degree snapshot viewer, version 4.1.3.321712, dated November 23, 2016 – ability to take 360-degree panoramic images.
  • Obsolete platform viewer version 3.7.28.300847 dated May 8th, 2015 – provided for users on Windows XP and OS X versions below 10.7.

Expect the two RC viewers to be updated to bring them to parity with the new release viewer.

Infrastructure Updates

As noted in my week #20 updates (notably the TPV Developer meeting notes), the Lab is working on a range of infrastructure updates, together with updates to things like the teleport re-try throttle (which can place a considerable load on the “receiving” region’s simulator). It is hoped that these updates will a) help SL progress into the future, and b) potentially offer further increases in the number of avatars regions can support.

Other Items

Changes to Mesh Upload Access

Linden Lab recently changed the requirements for being able to upload mesh content to Second Life. For more information, see my separate update.

Instancing Regions?

Note: this is not an actual project with the Lab, nor is it necessarily on the roadmap for SL development. What follows is purely in the realm of speculative discussion.

During the Simulator User Group Meeting on Tuesday, May 23rd, it was asked if instancing regions (notably private standalone regions) might some day be possible. That is, if there is a stand-alone region hosting a very popular event, a copy of the entire region might be spawned, which would then allow people into it to deal with the demand.

While there are a wide range of infrastructure, permission system and other issues associated with it (how is the instance to be paid for? What about No Copy items – can they be reproduced in an instance of a region, where technically they aren’t a duplicate? How is the required hardware managed, and what happens when demand for additional server space isn’t high? etc.), the idea wasn’t ruled completely out-of-bounds. Right now it remains purely on the “someday maybe” wish list.

SL project updates 20/3: TPV Developer meeting

Nitroglobus Roof Gallery: Black and White Women – blog post

The majority of the notes in this update are taken from the TPV Developer meeting held on Friday, May 19th, 2017. The video of that meeting is embedded at the end of this update, my thanks as always to North for recording and providing it. Timestamps in the text below will open the video in a separate window at the relevant point for those wishing to listen to the discussions.

Server Deployments Re-cap

  • There was no Main (SLS) channel deployment or restart on Tuesday, May 16th.
  • On Wednesday, May 17th, the three RC channels were updated as follows:

SL Viewer

[1:00] The Voice RC viewer has an elevated crash rate, and the Lab has not yet determined why.

The Maintenance RC viewer updated to version 5.0.5.326444 on Thursday, May 18th. This viewer currently has a lower crash rate than the other RC viewers (although it has not been out that long), so might be a candidate for promotion. I have an overview of this viewer for those interested.

64-bit Viewer

[2:23] The last major functional addition for the 64-bit Alex Ivy viewer is currently with the Lab’s QA. If all goes well, a further project viewer update should arrive in week #21 (commencing Monday, 22nd May).

This introduces a new executable to the viewer – SL Launcher – which runs an update check at start-up. If there is a new version of the viewer available, the Launcher manages the download and installation – including ensuring Windows users get the right version for their operating system (32-, or 64-bit).  If there is no new version to install, or once the viewer installation has completed, the Launcher will launch the viewer as a child process, and will shut down when the viewer exits at the end of a session.

The plan is to move the crash data capture package to the Launcher in the future, which will give full end-to-end monitoring of the viewer in the event of a crash.

360 Snapshot Viewer

[6:07] The work on the 360 snapshot viewer is once again progressing. A new library has been added, which provides the appropriate metadata so that websites supporting 360-degree viewing can correctly recognise such images taken by the viewer on upload, eliminating the need to process them separately via the web service currently supplied by the Lab.

This work is currently being tested, and should find its way into a project viewer update some time in the next two weeks or so, with a release candidate hopefully not too far behind that.

Region Crossing Hand-off / Caps Router Issues

[7:43] Fantasy Faire experienced very high levels of region crossing hand off problems with avatars trying to move between the various regions. A similar issue has surfaced at the just-opened Home and Garden Expo.

While the issue isn’t new, the Lab found one cause is the Caps Router running out of connections due to the number of avatars it is attempting to serve. New monitoring has been put in place which will determine how many connections the Caps Router is using, and when it is approaching its limits. The data gathered will be used to help better determine how many connections are needed, allowing the Lab to adjust the number supported.

This work is going to be carried out incrementally, starting with an initial RC deployment in week #21 containing conservative adjustments in the hope of avoiding creating additional bottlenecks in changing things too radically at one time. However, the hope is that the changes will in time result in two improvements:

  • It could result in an increase in the number of avatars a region can comfortably support
  • As this is an issue at the SERVER level (not the simulator), the changes should help prevent people on regions with few avatars experiencing issues as a result of those regions being hosted on the same server as one (or more) regions with a lot of avatars on them.

As a result of understanding the problem, the Lab was aware the issue was impacting the Home and Garden Expo even before it had been reported.

Unsuccessful Teleports Impacting Region Performance

[14:20] During investigations into the region issues at Fantasy Faire, the Lab noted that a simulator running a busy region has to carry out a lot of work to determine whether or not someone can teleport into it, which can degrade overall simulator performance.

To combat this, the Lab is going to change the teleport re-try throttle following a failed TP. As viewer-initiated teleports are already somewhat throttled, the change should not affect them. However, it will likely mean that the very rapid retry TP HUDs (aka “TP hammers”) will break or degrade in their performance unless adjusted.

The hope is that by reducing the load placed on a simulator as it tries to deal with too rapid a succession of TP requests which cannot be granted as the region is full, overall performance will be improved and those already in the region will enjoy a better experience.

This change should be appearing in a server RC update soon.

Additional comments on teleport failures:

  • A queuing system will not be added, as this is deemed to be too difficult to implement and manage.
  • There is no relationship between the size of an avatar’s inventory and the frequency with which that avatar may experience teleport failures. However, the number of items attached to an avatar, the scripts they are running, etc., can have an impact.
  • The Lab can monitor teleport failures in real-time.

Automatic Additional Logging after Region Crashes

[29:27] It was asked if additional logging could be automatically enabled following a region crash. This is not something that will be done; Oz’s belief is that doing so would result in an additional load on the simulator during recovery, and so would not be a good idea.

Avatar and Object Rendering Cost Investigations

[31:00] The Lab is continuing work in reviewing the rendering cost calculations for in-world objects and avatars, work I first reported on in September 2016. However, the numbers aren’t at a point where any adjustments can be made to the calculations.

Fun Fact

Oz Linden marked his seventh anniversary at the Lab this week – so a belated happy rezday to him! Some of us can likely remember his 2010 appearance at the SLCC, when Esbee Linden introduced him to the audience in Boston 🙂 .

Oz at one of the viewer / open-source panels at SLCC 2010, with Esbee Linden just visible to the right


SL project updates week 20/2: Content Creation User Group w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, May 18th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core points of the meeting. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting. Instead, I have tried to place a number of related comments by Vir on specific topics into single audio extracts with their associated notes, in the hope of making those topics easier to follow, and without changing the context of the comments themselves. If you would prefer to listen to the discussion and comments in the order the meeting unfolded, I have embedded a video recorded at the meeting by Medhue Simoni. My thanks to him for making it available.

Supplemental Animations

While this is now an adopted project, the focus has been on animated objects, and so there is no significant progress on this work at present.

Applying Baked Textures to Mesh Avatars

No movement on this.

Animated Objects

Vir has spent most of the week since the last meeting working on animated objects, developing prototypes and proof-of-concept work to see how objects might be animated using the avatar skeleton. He describes the results thus far as encouraging, whilst also pointing out it is still early days with the work, so it is far too early to determine what the final architecture will be.

The viewer already has a notion of an avatar without a human operator, which is notably seen when uploading an avatar mesh or animation. This notional avatar isn’t rendered graphically, but is oriented using transforms so that an object can use it as a source of joint motions. This is not necessarily how things will work with any finished product, but it is enough to demonstrate what might be possible.

Currently, Vir is working with single object rigged meshes, and would be happy to receive similar models, preferably with associated animation, if people have anything they believe would be useful for helping with these tests.

It is hoped that “being animated” will be an additional property which does not require a new mesh upload option, so that any rigged mesh for which you have Edit permissions can be set to use the property, allowing it to be driven by its own animations. Currently:

  • This will likely mean the object will no longer be attachable to an avatar
  • It has yet to be determined if this property will be a new prim type or an additional field added to an existing object, etc
  • It will not require any changes to the current mesh uploader; the property to convert a mesh to an animated object can be set post upload.

A suggestion was made that the animated mesh should use its own skeleton when independently rezzed in-world, but a sub-set of a controlling avatar’s skeleton if it is attached. This would allow things like animated horses to be rezzed in-world and then sat on for riding, or pets to be “picked up” and carried, as is currently the case with some scripted animals already.

The testing carried out thus far hasn’t looked at animated attachments, although Vir appreciates the potential in having them. However, there are concerns over potential additional performance impacts and the risk of bone conflicts (what happens if your avatar is already using one or more bones for something, and these same bones are used by an animated attachment).

While not ruling the potential out, Vir’s tests so far haven’t encompassed animated attachments to determine what issues might arise. There are also other factors involved in avatar control which need to be looked at with animated objects: hover height, offsets, position, etc., all of which might affect how an animated object might be seen / behave.

Scripting / LSL Commands

The current work has not so far looked at LSL commands or command sets for the new capability. However, the intent remains that scripts for controlling an animated object will be held within the inventory for that object, and be able to call animations for the object also contained within the object’s inventory, so things are not straying too far from what can already be done via scripted control of in-world objects.
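Purely as an illustration of that intent – and to be clear, no such function has been defined or announced – a script held in an animated object’s inventory might look something like the following, assuming a hypothetical llStartObjectAnimation() call that plays an inventory animation against the object’s skeleton:

```lsl
// SPECULATIVE sketch only: llStartObjectAnimation() is a hypothetical function,
// not part of LSL at the time of writing.
// Assumes an animation named "wag_tail" in this object's own inventory.
default
{
    touch_start(integer total_number)
    {
        // Would play the named animation on the object's own skeleton,
        // mirroring how llStartAnimation() works for seated avatars.
        llStartObjectAnimation("wag_tail");
    }
}
```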

Performance Impact

Similarly, it is hard at this point to know what the likely performance hit might be. Bento has shown that adding more bones to the avatar skeleton doesn’t create a notable performance hit, so providing a skeleton for in-world objects shouldn’t cause any greater impact than a basic avatar. However, associating a rigged mesh object with that skeleton, then animating the joints, etc., will have an impact, particularly if a lot of animated objects are used in any given place.

This is something that will be looked at in greater detail once there is a project viewer available for testing alongside any server-side updates, although the Lab doesn’t intend to make it easy for a region to be spammed with multiple versions of an animated object, and this may in part be linked to the Land Impact associated with such objects.

Attachment Points on Animated Objects and Linksets with Animated Objects

While attachment points are also joints within the skeleton being used by an animated object, and so can be animated, they would not actually support having other objects attached to them, as the animated object doesn’t have links to other objects in the way an avatar does.

An animated object could be a linkset of rigged meshes identified as a single object, with all of the rigged meshes referencing the same skeleton. Things might be more difficult if static mesh objects form a part of the object, as it is not clear how the positioning of these would be controlled, and more testing is required along these lines.

Body Shapes and Animation Scaling

Requests were made to allow animated objects to have body shapes (which would allow slider support, etc.) and / or animation scaling.

Because of the changes that would be involved in both, coupled with the potential for conflicts in the case of animation scaling, Vir does not see either as being part of this work – as previously noted, assigning a body shape to an animated object would impact a number of other back-end systems (such as the baking service), adding significant overheads to the project.

As such, the Lab would rather keep the work focused, building on something that could be rolled-out relatively quickly, and then iterated upon. However, one option that might be considered is having some kind of root node scale, based on the scale of the animated object that would size the skeleton to the scale of the object, rather than vice versa, possibly by altering how the mPelvis bone is managed for such objects.

[56:37-1:02:30] The final part of the meeting delved into the relative efficiency of mesh and sculpts, and matrix maths on CPUs / GPUs, and the complexities of rendering animated objects, together with a reminder that object rendering costs are currently being re-examined.

Other Items

In-World Mesh Editing?

[41:00-55:55] Maxwell Graf raises the idea of having a simple in-world mesh editor / enhancements to the editing tools which would allow creators to adjust individual faces, edges or points in an object, presenting a reason for mesh creators to spend more time in-world, and which might allow non-mesh builders more flexibility in what they can do as well.

The current toolset – mesh uploader and editing tools – would not support such a move. There are also a number of potential gotchas on a technical level which would need to be understood and dealt with. In order for the Lab to consider such a project, any proposal would have to identify the smallest subset of capabilities available in dedicated mesh creation / editing tools like Blender and Maya that would be useful to have in-world, so that it might be possible to define the overall scope of the work required in terms of resources, etc., and what the overall return might be on the effort taken.

Based on the conversation, Max is going to try to put together a feature request / proposal, even if only for the purposes of future discussion.