SL project updates week 35/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, August 31st, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. These notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, so the provided time stamps may appear to be out of sequence in places. All time stamps are provided as links which will open the video in a separate browser tab, allowing the discussion to be heard in full.

Animated Mesh (Animesh)

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • Animated objects can be any rigged / skinned mesh which is correctly flagged as an animated object (so it has a skeleton associated with it), and which contains in its own inventory (the Contents tab of the Build floater) the animations and controlling scripts required for it to animate itself.
  • The capability will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.
  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life.
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents).

LSL Animation Functions

[1:18-3:04] The LSL functions needed for animating mesh objects are being tested. These comprise:

  • llStartObjectAnimation – starts playing a named animation held in the object’s inventory
  • llStopObjectAnimation – stops playing a named animation
  • llGetObjectAnimationNames – returns a list of the animations currently playing in the object.

The first two commands are fairly straightforward in terms of use. The llGetObjectAnimationNames function can be used to obtain a list of all the animations currently playing in an animated object, or to check whether a particular animation in an animated object is currently playing (the animations and scripts being stored in the object’s Contents tab inventory).

The documentation for these commands is still in progress, so the information on the wiki pages is subject to update. Also, keep in mind these commands will not work until a public animesh project viewer is available.
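By way of illustration, the following is a minimal sketch of how these functions might be used together, based on their documented signatures. It is untested (no project viewer is yet available), and the animation name "wag" is purely hypothetical:

    // Minimal sketch: toggle a hypothetical "wag" animation held in the
    // object's Contents. Only works on objects flagged as animated.
    default
    {
        state_entry()
        {
            // Start the named animation from this object's inventory
            llStartObjectAnimation("wag");
        }

        touch_start(integer total_number)
        {
            // llGetObjectAnimationNames returns the animations now playing;
            // use it to check state and toggle accordingly
            if (llListFindList(llGetObjectAnimationNames(), ["wag"]) != -1)
                llStopObjectAnimation("wag");
            else
                llStartObjectAnimation("wag");
        }
    }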

[10:12-11:16] These commands could, in theory, be used with a scripted HUD to provide some degree of control over animated objects by their owner. An example HUD may or may not be provided in the test content the Lab may supply (see below).
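As a sketch of how such a HUD might communicate with an animated object (the channel number and command names here are hypothetical, and nothing of the sort has been confirmed by the Lab), the object could carry a listener script like the one below, with the HUD sending matching chat commands, e.g. llSay(-571234, "play wag"):

    // Listener sketch for the animated object (hypothetical design).
    integer CONTROL_CHANNEL = -571234; // hypothetical channel shared with the HUD

    default
    {
        state_entry()
        {
            llListen(CONTROL_CHANNEL, "", NULL_KEY, "");
        }

        listen(integer channel, string name, key id, string message)
        {
            // Ignore anything not sent by the owner (or an object they wear)
            if (llGetOwnerKey(id) != llGetOwner()) return;

            // Expect commands of the form "play <anim>" or "stop <anim>"
            list parts = llParseString2List(message, [" "], []);
            string cmd  = llList2String(parts, 0);
            string anim = llList2String(parts, 1);

            if (cmd == "play") llStartObjectAnimation(anim);
            else if (cmd == "stop") llStopObjectAnimation(anim);
        }
    }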

Test Content

[3:23-4:43] Test content for animesh has been requested at past meetings, the idea being to have models available in the Library which people can use to familiarise themselves with creating animated mesh objects and using the associated scripting controls. The Moles are now looking at adding some content to the Library in preparation for animesh becoming more widely available.

[41:26-45:12] A more general discussion on test content, promoting animesh when it becomes available, and making people aware of it.

General Performance

Alexa Linden has been converting existing mesh content into animesh objects. She’s been using the mesh werewolf starter avatar, originally released in 2014 and already available in the Library, for this work, and has produced a short video of the result, presented as an animated GIF below.

Alexa’s test use of the Lab’s mesh werewolf avatar as an animated mesh object

Again, note that these are not actual avatars connected to the simulator via individual viewers; they are purely in-world objects being animated by scripts they contain driving a set of animations also contained within them.

[8:00-9:04] More work is also required on the general controls / limits on animated mesh objects: how many are going to be allowed in a scene, the number of animated attachments an avatar can wear, calculating their rendering impact, etc. These will not be final when a public viewer appears, but will be used as a baseline which can then be tweaked one way or another once more intensive testing gets started.

[22:05-24:29] In her initial tests with dancing werewolves, Alexa managed to get over 300 dancing together, each using 6 dance animations and a control script. She didn’t notice much in the way of a frame rate impact whilst also moving her avatar around. However, she did notice some update issues with the Interest List (which controls how things are rendered in the viewer as you move your avatar / camera) when zooming in / out of the scene.

The test was by no means definitive. For example, it used multiple copies of the same basic mesh model and animations, which may have boosted performance somewhat compared with 300 different mesh models running in a scene, each with its own unique animations. She also carried out her tests on a region that doesn’t have a lot of other content on it.

[25:24-26:36] Comparing Alexa’s tests with avatar capacity / performance (e.g. 40 avatars in a region) isn’t easy, as avatars can be a lot more individually complex. There are also various aspects of managing avatars which don’t necessarily apply to animated objects. For example, animesh items should really only have a limited number of updates associated with them, whereas avatars tend to have a lot more going on (interactions, messaging, etc.), all of which increases the volume of traffic the simulator and viewers must handle.
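As an aside, a control script of the general sort used in these tests might do little more than cycle through the animations in the object’s Contents on a timer. The following is a hypothetical sketch, not Alexa’s actual script (which hasn’t been published):

    // Hypothetical dance-cycler: plays each animation in the object's
    // Contents in turn, switching every 30 seconds.
    integer index;
    string  current;

    default
    {
        state_entry()
        {
            current = llGetInventoryName(INVENTORY_ANIMATION, 0);
            if (current != "") llStartObjectAnimation(current);
            llSetTimerEvent(30.0);
        }

        timer()
        {
            integer count = llGetInventoryNumber(INVENTORY_ANIMATION);
            if (count < 2) return; // nothing to cycle through

            llStopObjectAnimation(current);
            index = (index + 1) % count;
            current = llGetInventoryName(INVENTORY_ANIMATION, index);
            llStartObjectAnimation(current);
        }
    }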

Project Viewer

[6:47-7:58] Still no date for when a project viewer will appear, other than the Lab will release it “as soon as we can”.

Right now the biggest piece of work within the viewer is defining how the skeleton gets positioned relative to the object with which it is associated. This currently varies depending on the model being used, and can result in things “jumping” as they start to animate.

This problem also has an impact when trying to use an animated object as an attachment (e.g. when holding an animated pet), with the result that the object can appear to “float” around the avatar, rather than being obviously attached to it, and does not rotate / move appropriately as the attachment point moves relative to the avatar.

[11:55-12:30] Vir doesn’t believe this is a huge problem to solve; it just requires work on the transform matrix, and it shouldn’t add a significant delay to a project viewer appearing.

[21:05-21:45] However, should fixing it prove more complicated than anticipated, it may have to be taken into account in terms of lead times, particularly as the ability to wear / carry animated pets is seen as one of the more popular use cases for animesh.

Finally, internal testing of the viewer by the Lab has resulted in some suggestions being made which may also be incorporated into the viewer prior to a public project version appearing, in order to ensure that what does appear offers a solid foundation on which to build, and gives creators an opportunity to give feedback.

Tracking Complexities

[15:03-15:56] As animated objects will be manipulating avatar skeleton bones whether they are attached to an avatar or operating as in-world objects, more tracking of such movements will be required than is currently the case, to ensure they are correctly represented by viewers able to see them.

Size Limitations

[16:00-18:12] Animated objects will likely be limited in terms of both their physical size and their poly / tri count. Limits for both have yet to be determined; however, high poly count objects will likely in part be limited by the impact they have on performance. The physical size of animated objects, unless special limits are set, will likely be defined by the same rules as currently exist for the avatar skeleton.

[24:31-25:22] The Interest List / animation issues Alexa encountered (e.g. objects some distance from the camera position moving much faster than they should, then slowing to a “normal” speed when zoomed in on) are not unique to animated objects. These are issues the Lab is liable to look at in more detail, but they are not seen as something which will necessarily delay progress on animesh, simply because the issue has been around for some time (it can be seen when zoomed out from a group of dancing avatars, for example).

In Brief

[28:12-29:22] Animesh and Region Crossings: The question is asked whether an animated mesh object scripted to follow an avatar would be able to follow the avatar across a physical region boundary. Short answer: yes.
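As a sketch of the kind of follower being asked about (hypothetical; no code was shown at the meeting), a physical animesh pet could chase its owner using llMoveToTarget, crossing region boundaries the way any physical object does:

    // Hypothetical follower: a physical object that chases its owner.
    // Physical objects can cross region boundaries, so the pet can
    // follow the avatar from region to region.
    default
    {
        state_entry()
        {
            llSetStatus(STATUS_PHYSICS, TRUE);
            // Scan for the owner every second, out to 20 m
            llSensorRepeat("", llGetOwner(), AGENT, 20.0, PI, 1.0);
        }

        sensor(integer num_detected)
        {
            // Head for a point roughly one metre behind the owner
            vector target = llDetectedPos(0) + <-1.0, 0.0, 0.0> * llDetectedRot(0);
            llMoveToTarget(target, 0.5);
        }

        no_sensor()
        {
            llStopMoveToTarget();
        }
    }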

[31:08-33:27] Animesh LODs: Animesh essentially uses the same rules for level of detail (LOD) swapping as ordinary rigged mesh on avatars, although some of the baseline criteria may be different as a result of the differences between avatars and animated mesh.

[33:38-34:51] Animesh physics: The Lab hasn’t done a lot of work with animesh and physics. However, animesh objects / linksets should support physics, although the animations driving them are not physical.

[34:53-36:06] Avatars riding animesh objects: Vir has encountered some problems in trying to have an avatar ride an animesh animal (e.g. a horse). If the animal uses an animation that moves its entire body, it doesn’t actually move the sit point for the avatar, which remains fixed to a point in space with the animesh animal seemingly randomly moving around under it, rather than the rider moving in response to the animal’s movements.

[37:53-38:48] Animesh with Pathfinding: Vir hasn’t tested this, but believes the two should work together.

[38:50-39:46] Animesh and AI: The Lab has no plans to provide any form of AI capabilities for animesh NPCs (the latter being seen as a follow-on to the initial animesh project, when avatar shapes and wearables are added to the animesh capabilities). Rather, they see this as being something scripters might add themselves.

[54:25-56:32] Personal hopes: Vir and Alexa respond to a question on what they’d personally like to see with animesh use.

Bakes on Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures. This may lead to a reduction in the complexity of mesh avatar bodies and heads.

Current Status

[26:40-27:54 and 49:05-52:17] This work is still at the point of updating the baking service to support 1024×1024 texture bakes (rather than the current 512×512). A version of the updated baking service is currently undergoing internal testing at the Lab.

The overall performance / load on the baking service in handling many thousands of high-resolution baking requests also needs to be assessed (will extra server hardware be required to support the load?). Tests also need to be carried out on the viewer side to determine whether the viewer can handle the higher-resolution bakes as-is, or will also require an update.

All of this work needs to be completed before the system can be updated to apply texture bakes to mesh surfaces – which is actually seen as the “easy” part of the work.

Other Items

Changing Animation Speeds and Additional Movement Options

[9:54-10:10 and 56:34-58:49] The ability to adjust animation speeds (e.g. walking, running) to better suit avatars of different types / sizes has been a frequent request. This isn’t likely to be added to the current animated mesh work, but it is in the pipeline of things the Lab may look at down the road. Similarly, adding more movement options – crawling prone on the ground, tiptoeing, etc., – to the locomotion graph is also a possibility, but is not on the current roadmap.

Accepting a JIRA

[58:54-59:58] Vir offered a reminder that having a JIRA flagged as “accepted” doesn’t necessarily mean it will be implemented in the form suggested. Rather, it points to the fact that the Lab is sufficiently interested in the idea to import it and track it internally as it might be something which fits with an upcoming project, or be something the Lab might want to tackle in some form in the future.

Date of Next Meeting

Due to the Lab’s monthly internal meeting, the next Content Creation User Group meeting will be at 13:00 SLT on Thursday, September 14th.
