SL project updates 37/2: Content Creation User Group

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, September 14th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. However, due to power issues at his end, the first few minutes are missing from the recording. Time stamps to that recording are included below, and clicking on any of them will launch the video in a separate browser tab at the assigned point. However, as these notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, time stamps may appear to be out of sequence in relation to the recording.

Animesh (Animated Mesh)

“I like the name ‘animated objects’ because I think it’s unambiguous, but it takes a long time to type!” – Vir Linden joking about the name “Animesh”.

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • Animated objects can be any rigged / skinned mesh which is correctly flagged as an animated object (so it has a skeleton associated with it), and contains the necessary animations and controlling scripts in its own inventory (Contents tab of the Build floater) required for it to animate itself.
  • The capability will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.
  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life.
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents).
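
By way of illustration, here is a minimal sketch of the kind of script such an object might carry. This is a sketch only, assuming the new object animation calls behave as described in these notes – the commands were still in testing at the time of writing, and the animation name used here is hypothetical.

```lsl
// Minimal sketch of a self-animating Animesh object. Assumes the new
// llStartObjectAnimation call works as described in these notes;
// "dance_1" is a hypothetical animation stored in the object's own
// Contents alongside this script.
default
{
    state_entry()
    {
        // Begin playing an animation held in the object's inventory
        llStartObjectAnimation("dance_1");
    }

    on_rez(integer start_param)
    {
        // Restart the animation cleanly whenever the object is rezzed
        llResetScript();
    }
}
```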

Project Viewer

[pre-video start and 46:06-46:40] Work on the viewer is focused on cleaning up how the viewer handles animation when dealing with Animesh objects, as there are some elements which simply aren’t used. The transform matrix has been adjusted, so that Animesh attachments now look as if they are attached to the avatar, rather than potentially floating somewhere reasonably close to it (so an Animesh object attached to a hand will now appear attached to the hand and move with the hand, for example). Further work is required, but things are now behaving a lot better; there’s still no ETA on the appearance of the project viewer, however.

Animesh Constraints

Some basic constraints on attaching Animesh objects to an avatar or in-world, and on the overall allowable complexity of Animesh objects, have yet to be defined and incorporated into the viewer. These are necessary to prevent Animesh objects from negatively impacting performance to an undue degree. The initial constraints set within the project viewer will be subject to refinement in testing.

[32:21-33:06] In terms of avatar attachments, there is already a server-side limit on how many attachments can currently be worn by an avatar (38), so the Lab could look at the type of attachments being used, and limit Animesh in terms of an allowed number within this global attachment limit (e.g. 2 of the 38 might be Animesh).

Load Testing

Alexa provided a couple of GIFs demonstrating Animesh. The first showed her army of dancing bears – around 100 – all happily dancing on a region without causing an appreciable load.

Alexa’s army of dancing bears. Note that these are not actual avatars connected to the simulator via individual viewers; they are purely in-world objects being animated by scripts they contain driving a set of animations also contained within them.

[13:39-16:54] However, whether populated regions (in terms of avatars and objects) could handle such a load is open to question. Also, these bears are all the same optimised mesh, and are probably not placing the kind of load on the simulator and viewer as would be the case with multiple and different kinds of mesh with varying levels of complexity. To help determine reasonable limits, the Lab will be establishing some test regions once the project viewer is available, to allow for more comprehensive testing with assorted Animesh models, which will be used to further refine the Animesh constraints noted above.

[18:11-18:40] As a part of her own testing, Alexa also intends to use the mesh starter avatars produced by the Lab, mixing them together in a scene using different animations, etc., to see how things behave.

Animesh and Pathfinding

[10:35-11:14] A couple of previous meetings have raised the idea of Animesh working with Pathfinding (the means by which scripted objects – people, animals, monsters, etc. – can be set to move around a region / parcel and react to avatars, etc.). Dan Linden is looking into how Animesh and Pathfinding might work together, and he and Alexa shared a GIF image of some of the work, with some of Alexa’s dancing bears skating around their own pathfinding routes, which provides a quick demonstration that the two can likely be used together.

Dancing Bears following pathfinding Navmesh routes

Alexa has also been experimenting with Animesh and ice-skating, taking the view that ice-skating creatures and animals could prove popular in winter scenes (skating penguins and the like are actually quite common in “wintered” regions).
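
For those curious how the two systems might be combined, the following is a hedged sketch only: it assumes the existing pathfinding LSL calls (llCreateCharacter, llWanderWithin) can be used alongside the proposed object animation command, as the demonstration suggests, and the animation name is hypothetical.

```lsl
// Hedged sketch: a wandering Animesh "skater" along the lines of
// Alexa's demo. Assumes the existing pathfinding API and the proposed
// llStartObjectAnimation call can be combined (this was still being
// tested); "skate" is a hypothetical looping animation in the
// object's Contents.
default
{
    state_entry()
    {
        // Turn the object into a pathfinding character...
        llCreateCharacter([CHARACTER_DESIRED_SPEED, 2.0]);
        // ...have it wander within 10m of its starting position...
        llWanderWithin(llGetPos(), <10.0, 10.0, 10.0>, []);
        // ...and play the looping animation while it moves
        llStartObjectAnimation("skate");
    }
}
```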

Animesh Attachments: Fluidity and Clothing

How smoothly attached Animesh objects work is liable to be dependent upon the animations themselves, and whether or not they move the object’s pelvis bone. As with all things, some experimentation and fine-tuning is likely to be required by creators in order to optimise the motions of their Animesh attachments.

Some people have been looking at Animesh as a means to get clothing to move more naturally with an avatar. However, as Vir pointed out in the meeting, utilising additional skeletons in clothes may not be the most efficient way to achieve this, when it should be possible – with a little care – to use some of the existing bones in the avatar skeleton to achieve the same results (e.g. skinning the cloth of a gown or skirt or cape to the wing bones, etc.).

This would allow the clothing to move far more seamlessly and in sync with body movements than could be achieved with Animesh attachments, which would not have any direct means of syncing with an avatar’s movement.

Root Positioning

[1:39-3:03] Vir has been working on aligning the root joint of a skeleton associated with an Animesh object to the position of the object in-world. Sometimes this gives good results, sometimes it doesn’t, resulting in objects jumping around when animations are played, or rising into the air or sinking into the ground, etc., as the server thinks they are somewhere other than the visual position shown in the viewer. Because of the mixed results, he’s considering alternative approaches, such as using the object’s bounding box, and will be exploring options before pushing out any project viewer. One of the balances he’s trying to achieve is presenting a nice visual result without over-complicating what has to be done in order to achieve that result.

Changing the Orientation of the Skeleton via LSL / LSL Commands

[34:12-34:45] Will there be a scripted function to change the default orientation of a skeleton to match an Animesh object? Conceivably; however, Vir is hoping to develop a reasonable default behaviour when attaching a skeleton which will allow simple editing of the object to achieve the desired result, if required. Should this prove not to work sufficiently well, additional LSL support will likely be looked at.

[4:54] A question was asked about the list animations command (llGetObjectAnimationNames). This is one of three new LSL commands being introduced to support Animesh – please refer to my week #35 CCUG update for details on these.
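
As a simple illustration of how the command might be used once available – a sketch only, as the documentation was still in progress at the time of writing and the behaviour may change:

```lsl
// Sketch: report which animations an Animesh object is currently
// playing, using the llGetObjectAnimationNames call discussed above.
default
{
    touch_start(integer total_number)
    {
        list playing = llGetObjectAnimationNames();
        llOwnerSay("Currently playing: " + llDumpList2String(playing, ", "));
    }
}
```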


SL project updates week 35/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, August 31st, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. These notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, so the provided time stamps may appear to be out of sequence in places. All time stamps are provided as links which will open the video in a separate browser tab, allowing the discussion to be heard in full.

Animated Mesh (Animesh)

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • Animated objects can be any rigged / skinned mesh which is correctly flagged as an animated object (so it has a skeleton associated with it), and contains the necessary animations and controlling scripts in its own inventory (Contents tab of the Build floater) required for it to animate itself.
  • The capability will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.
  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life.
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents).

LSL Animation Functions

[1:18-3:04] The LSL functions needed for animating mesh objects are being tested. These comprise:

  • llStartObjectAnimation – start playing a named animation held in the object’s inventory
  • llStopObjectAnimation – stop playing a named animation
  • llGetObjectAnimationNames – return a list of the animations the object is currently playing

The first two commands are fairly straightforward in terms of use. The llGetObjectAnimationNames function returns the list of animations currently playing, which can be used, for example, to stop all animations currently playing in an animated object, or to check whether a particular animation in an animated object is currently playing (the animations and scripts being stored in the object’s Contents tab inventory).

The documentation for these commands is still in progress, so the information on the wiki pages is subject to update. Also, keep in mind these commands will not work until a public animesh project viewer is available.

[10:12-11:16] These commands could, in theory, be used with a scripted HUD to provide some degree of control over animated objects by their owner. An example HUD may or may not be provided in the test content the Lab may supply (see below).
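
A hedged sketch of what such owner control might look like, assuming the new commands work as described; the channel number and the command vocabulary here are purely illustrative.

```lsl
// Sketch: owner control of an Animesh object. A HUD (or any script the
// owner wears) says "play <name>" or "stop <name>" on a private
// channel; this script, sitting in the Animesh object's Contents,
// responds. Channel number and animation names are hypothetical.
integer CONTROL_CHANNEL = -77123;

default
{
    state_entry()
    {
        llListen(CONTROL_CHANNEL, "", NULL_KEY, "");
    }

    listen(integer channel, string name, key id, string message)
    {
        // Only accept commands from objects belonging to our owner
        if (llGetOwnerKey(id) != llGetOwner()) return;

        list parts  = llParseString2List(message, [" "], []);
        string cmd  = llList2String(parts, 0);
        string anim = llList2String(parts, 1);

        if (cmd == "play")      llStartObjectAnimation(anim);
        else if (cmd == "stop") llStopObjectAnimation(anim);
    }
}
```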

Test Content

[3:23-4:43] Test content for animesh has been a request at past meetings, the idea being to have models available in the Library which people can use to familiarise themselves with creating animated mesh objects and using the associated scripting controls. The Moles are now looking at adding some content to the Library in preparation for animesh becoming more widely available.

[41:26-45:12] A more general discussion on promoting animesh (when available), test content, and making people aware of animesh.

General Performance

Alexa Linden has been converting existing mesh content into animesh objects. For this work she’s been using the mesh werewolf starter avatar originally released in 2014, which is already available in the Library, and produced a short video of the result, presented as an animated GIF, below.

Alexa’s test use of the Lab’s mesh werewolf avatar as an animated mesh object

Again, note that these are not actual avatars connected to the simulator via individual viewers; they are purely in-world objects being animated by scripts they contain driving a set of animations also contained within them.

[8:00-9:04] More work is also required on the general controls / limits on animated mesh objects: how many are going to be allowed in a scene, the number of animated attachments an avatar can wear, calculating their rendering impact, etc. These will not be final when a public viewer appears, but will be used as a baseline which can then be tweaked one way or another once more intensive testing gets started.

[22:05-24:29] In her initial tests with dancing werewolves, Alexa managed to get over 300 dancing together, each using 6 dance animations and a control script. She didn’t notice much in the way of a frame rate impact whilst also moving her avatar around. However, she did notice some update issues with the Interest List (which controls how things are rendered in the viewer as you move your avatar / camera) when zooming in / out of the scene.

The test was by no means definitive. For example, it used multiple copies of the same basic mesh model and animations, and this may have boosted performance somewhat compared with what might have been the case with 300 different mesh models running in a scene, each with its own unique animations. She also carried out her tests on a region that doesn’t have a lot of other content on it.
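
For a sense of what “a control script and six dance animations” might amount to, here is a hedged reconstruction – the animation names and timing are hypothetical, and the object animation calls are the new ones described above, still in testing at the time.

```lsl
// Hedged reconstruction of a simple dance-cycling control script of
// the sort used in Alexa's test. Cycles through six animations held
// in the object's Contents; names and interval are hypothetical.
list dances = ["dance_1", "dance_2", "dance_3",
               "dance_4", "dance_5", "dance_6"];
integer current = 0;

default
{
    state_entry()
    {
        llStartObjectAnimation(llList2String(dances, current));
        llSetTimerEvent(30.0);  // switch dance every 30 seconds
    }

    timer()
    {
        llStopObjectAnimation(llList2String(dances, current));
        current = (current + 1) % llGetListLength(dances);
        llStartObjectAnimation(llList2String(dances, current));
    }
}
```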

[25:24-26:36] Comparing Alexa’s tests with avatar capacity / performance (e.g. 40 avatars in a region) isn’t easy, as avatars can be a lot more individually complex. There are also various aspects of managing avatars which don’t necessarily apply to animated objects. For example, animesh items should really only have a limited number of updates associated with them, whereas avatars tend to have a lot more going on (interactions, messaging, etc.), all of which increases the volume of traffic the simulator and viewers must handle.

Project Viewer

[6:47-7:58] Still no date for when a project viewer will appear, other than the Lab will release it “as soon as we can”.

Right now the biggest piece of work within the viewer is defining how the skeleton gets positioned relative to the object with which it is associated. This currently varies depending on the model being used, and can result in things “jumping” as they start to animate.

This problem also has an impact when trying to use an animated object as an attachment (e.g. when holding an animated pet), with the result that the object can be “floating” around the avatar, rather than obviously attached to it, and does not rotate / move appropriately as the attachment point moves relative to the avatar.

[11:55-12:30] Vir doesn’t believe this is a huge problem to solve; it just requires work on the transform matrix, and it shouldn’t add a significant delay to the appearance of any project viewer.

[21:05-21:45] However, should fixing it prove to be more complicated than anticipated, it may have to be taken into account in terms of lead times, particularly as the ability to wear / carry animated pets is seen as one of the more popular use-cases for animesh.

Finally, internal testing of the viewer by the Lab has resulted in some suggestions being made which may also be incorporated into the viewer prior to a public project version appearing, in order to ensure that what does appear offers a solid foundation on which to build, and gives creators an opportunity to give feedback.

Tracking Complexities

[15:03-15:56] As animated objects will be manipulating avatar skeleton bones whether they are attached to an avatar or operating as in-world objects, more tracking of such movements will be required than is currently the case, to ensure they are correctly represented by viewers able to see them.

Size Limitations

[16:00-18:12] Animated objects will likely be limited in terms of both their physical size and their poly / tri count. Limits for both have yet to be determined; however, high poly count objects will likely in part be limited by the impact they have on performance. The physical size of animated objects, unless special limits are set, will likely be defined by the same rules as currently exist for the avatar skeleton.

[24:31-25:22] The Interest List / animation issues Alexa encountered (e.g. objects some distance from the camera position moving much faster than they should, then slowing down to a “normal” speed when zoomed in) are not unique to animated objects. These are issues which the Lab is liable to look at in more detail, but they are not seen as something which will necessarily delay progress on animesh, simply because the issue has been around for some time (it can be seen when zoomed out from a group of dancing avatars, for example).


SL project updates week 32/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, August 10th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. These notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, so the provided time stamps may appear to be out of sequence in places. All time stamps are provided as links which will open the video in a separate browser tab, allowing the discussion to be heard in full.

Note: Due to Vir’s time on vacation, the next official CCUG meeting will be on Thursday, August 31st. Details will be posted on the wiki page.

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
    • Be adjustable using the avatar shape sliders
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents)
  • It will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.

Recent Progress

[06:05] Alexa Linden is leading the product side of the animated objects project, and is working on build documentation for the viewer, test plans and related information, etc., for LL’s internal use.

[30:32-31:45] Will it be possible to attach a rigged mesh (e.g. clothing) onto an animated object? If it is rigged mesh, it doesn’t actually need to be an attachment; it can simply be specified as a part of the linkset comprising the animated object, and animated against the same skeleton. However, static attachments will not initially be supported with animated objects.

[33:28-34:56] Animesh and sliders: it’s unlikely that slider support will be implemented for animated objects in the sense that you right-click on an animesh, edit its shape and then adjust the sliders as with an avatar. What would be more likely is to allow body shapes which already contain all the slider settings to be taken and applied to animated objects to give them a desired shape.

This work will likely follow on from the current project and the work with the baking service, as it would require baking service support to work correctly, just as body shapes for avatars are currently supported through the baking service.

[36:12-36:58 and 37:44-38:10] There is no time frame on when the viewer will appear, but the Lab wants to build on the Bento experience: get a test viewer out, gain feedback and suggestions, and then improve on it. This doesn’t mean everything people would like to see associated with animated mesh will reach the viewer – or at least not in a single release – but the idea is very much one of collaborative effort to develop the capability. Internal testing of the viewer has revealed a couple more things which need to be tackled before it’s made more generally available (and, of course, test regions need to be established on Aditi).

[57:16-58:05] The performance impact of animated objects won’t really be understood until more widespread testing begins with a public project viewer. There will be some limitations placed on animesh intended to help reduce any negative impact (e.g. render cost, land impact, maximum number allowed in a region, etc.), but these are all still TBD at this point in time.

Rendering Cost Calculations

[07:29-08:25] Related to the above (but not confined to animesh), and as has been previously noted in several of my SL project updates, the Lab is re-visiting how the rendering cost calculations are handled within Second Life, and Vir has most recently been involved in this work. The aim is to make the calculations a lot more reliable and accurate when establishing the render cost of objects, and thus possibly encourage people to make more efficient content. This work will involve both internal testing by the Lab and “external” testing involving users.

Project EEP (Environment Enhancement Project)

Project Summary

To enhance windlight environment settings and capabilities, including: making environment settings an inventory asset (so they can be sold / bought / swapped); the ability to define the environment (sky, sun, moon, clouds) at the parcel level; LSL scripted support for experience environments / per agent; extended day settings (e.g. having a 24-hour day for a region and 7-day cycles) and extended environmental parameters (possibly including Godrays, distance fog, etc).

[12:44-13:50] Rider has been busy with other projects since the work was first announced, but will hopefully provide updates when the work resumes.

[44:15-46:32] Further summary of the work by Rider.

Bakes On Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures. This may lead to a reduction in the complexity of mesh avatar bodies and heads.

Recent Progress

[22:33-23:12] Work is progressing. The updates to the baking service to support 1024×1024 textures are currently in internal testing by the Lab, using at least one of the development grids. It’s in a “pre-Aditi” (Beta grid) state, but will hopefully be moving forward soon.

Other Items

Note that some of the following are the subject of more extensive commentary in local chat.

[11:46-12:40] Adjustable walk / run speeds (see feature request BUG-7006, for example): nothing happening on this “immediately”. The JIRA has been pulled in by the Lab as it may tie-in with some work being considered for animation playback. However, things are unlikely to be looked at until the next round of animation updates, which will include supplemental animations. The specs for this work have yet to be fully determined.

Alexa Linden: Product Manager for Animated Objects (Animesh)

[16:29-17:00] Increasing script memory limits: not currently on the roadmap.

[18:00-19:35 and 21:36-22:15] Development kits for the default mesh avatars: in short, nothing planned on the Lab’s part at present. There are, of course, various models and samples available through the Bento wiki pages which might be useful as teaching tools.

[23:14-29:56] Adding further bones to the avatar skeleton for clothing, etc. / custom skeletons: adding further bones to the avatar skeleton is unlikely. As it is, the additional Bento bones – if carefully used – can be re-purposed for a wide variety of uses beyond their default names, including in clothing, etc., although custom animations will be required as well. However, this can – within limits – allow creators to build semi-customised skeletons.

A particular consideration with custom skeletons is the issue of compatibility between different objects wanting different skeletons: it makes it much harder to ensure different avatar parts work together smoothly (e.g. a pair of wings from one avatar working with the quadruped body of another).

[25:33-26:02] Near-term roadmap: the current near-term roadmap for content creation features is: animated objects (animesh), then bakes on mesh, then a follow-on to allow bakes on mesh to be used on animesh objects together with some additional features, in order to enable more NPC-like character creation.

[49:44-50:34] Dynamic mirrors (see STORM-2055): these continue to be periodically raised at meetings; the Lab remains disinclined to implement anything, on the grounds of performance impact, particularly as dynamic reflective surfaces would in all likelihood be used indiscriminately by many.

[50:45-51:41 and 53:24-54:54] Terrain texture resolution and adding materials to terrain: SL terrain textures suffer from having a relatively large pixel size / low pixel density, resulting in terrain looking blurred. This can be exacerbated when default terrain is mixed with mesh terrain, where the latter can use the same textures and benefit from the use of materials. Currently, there is nothing on the SL roadmap for making changes to SL terrain textures. The pixel size / density issue is seen as a non-trivial fix, given the impact it would have on terrain as a whole and how it may affect those using custom textures on their land.

[59:10-1:01:20] Lab-provided building learning centres: the question was raised about the Lab providing more in-world locations where people could learn about building in SL (“building islands”). There are already a good number of user-provided areas in SL; however, the idea here is to provide more of a self-teach facility (think the Ivory Tower of Prims) rather than one which relies on classroom-based teaching, and which includes best practices, access to test models, etc. Alexa said she’d run the idea past the LDPW team.

SL project updates week 30/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, July 27th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. These notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, so the provided time stamps may appear to be out of sequence in places. All time stamps are provided as links which will open the video in a separate browser tab, allowing the discussion to be heard in full.

Note: Due to the monthly internal meeting at LL and Vir’s time on vacation, there will only be two CCUG meetings in August: Thursday, August 10th and Thursday, August 31st. Details will be posted on the wiki page.

Bakes On Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures. This may lead to a reduction in the complexity of mesh avatar bodies and heads.

Recent Progress

[1:31-2:01] The project is reaching a point where internal testing at the Lab can begin, allowing the impact of the texture size increase to be assessed. If this proves successful, the work will start the march towards more general visibility (e.g. availability of a project viewer, probable Aditi testing, etc.).

Animated Objects

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
    • Be adjustable using the avatar shape sliders
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents)
  • It will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.

Recent Progress

[2:04-2:20] The focus has remained on getting wire frames and right-click selections to work correctly (e.g. when you right-click on a mesh, the object stops moving, the correct menu is displayed, and the mesh is shown as a selected wire frame). Testing region crossings with animated objects has also started.

[2:21-3:26 and 4:19-4:47] Sitting avatars on animated objects: this has been part of a wider discussion on attaching avatars to animated objects and vice-versa. Vir’s view is that there is no restriction on avatars sitting on animated objects; however, the catch is that the sit point isn’t going to be animated – if it is, then as there is no relationship between the animated object’s skeletal locomotion and the avatar’s locomotion, the two will get out of sync. One suggestion for dealing with this is BUG-100864, “A means of visually rigging a sitter to an animesh skeleton bone”, and Vir indicated that the Lab is thinking along those lines, but it’s unlikely to be in the initial project viewer, when that appears.

[3:33-3:57] Animated objects as avatar attachments: unless there is an unforeseen issue, Vir is hopeful this will be possible. However, it is still awaiting work.

[5:19-5:57] Physics for animated objects: should work the same way as for non-animated objects, although this has yet to be tested.

[7:10-8:43] Scaling animated objects: this will not initially be possible using the avatar shape sliders, as animated objects will not initially have any notion of the avatar shape. However, it will likely be possible as a result of follow-on updates to the initial work.

What Vir is hoping to achieve is a method for reliably scaling an object’s skeleton based on the object’s own scale. That is: you could have three different sizes of an object (baby bear, mama bear and papa bear, say), and the skeleton will scale to whichever model is applied to it, rather than having the mesh default to the size of the skeleton, which is currently defined by the joint positions.

[8:48-12:33] Shapes and Skeletons: the reason for no body shape support at present is that it would make animated mesh a much more extensive project, requiring the objects to have a Current Outfit Folder, which in turn requires them to have a dedicated, avatar-style inventory, make use of the baking service, and so on. All of this makes for a far more complicated, drawn-out project, where the Lab would prefer to develop capabilities incrementally, starting with the provision of the avatar skeleton and then building from there.

This is seen as preferable to trying to incorporate everything people want to see – or believe is required – at a first pass, drawing out development over a much longer period and risking the development of a feature set the wider creative community in SL doesn’t want. Developing incrementally means features can be built upon and the project as a whole iterated, with deliverables presented in much shorter time frames.

[13:49-15:10] Land impact: this remains a concern for several reasons (too high, and it could stymie the use of animated objects; too low, and it could thump the performance of the viewer / simulator). Right now, the Lab has no clear idea of how LI will be calculated for animated objects, and Vir re-states that any LI values provided in the project viewer will be placeholders, to be refined as testing and creator feedback give more information on what the calculations should be.

[16:45-18:04] LI and scaling: concern was raised that if an animated object does not affect the bounding box, but could be scaled via animation, it could lead to the LI being gamed. Vir pointed out that scaling via animation is something of a hack; for scaling with animated mesh, he is referring to the scale of the skeleton being determined by the scale of the model (effectively a “static” scale), rather than anything dynamic. As such, the bounding box should reflect the object size and thus correctly influence LI calculations.

[20:14-24:41] Rigging to attachment points: rigging to attachment points is seen by some as the ideal means of animating attachments (e.g. twirling a gun in your hand), due to the accuracy involved. Vir’s view is that the problems people have encountered in uploading items rigged to attachment points are more of a bug with the LL viewer, and so the behaviour will be allowed (with the possible exception of the newer Bento bone attachment points); there is, however, the concern that the ability might lead to attachment points being used as additional free-floating bones.

[37:50-39:36] Animated objects in games: could they be used for interactive elements in games, such as walls which bend when walked into, items which interact with one another / players, or traps which physically react to an avatar (something wrapping around an avatar, for example)? Short answer: yes and no. Yes, in that various interactions are possible: trees swaying in the breeze, animals and other creatures roaming and responding to avatars. No, in that animated objects will not initially have their own physics, ruling out things like wrapping around an avatar or being used as some kind of clothing.

[39:39-41:58] Will there be limits placed on the number of animated objects in a region? There will be limits, but what they will be cannot really be determined until testing can be performed and the Lab can get better metrics on the likely performance impact animated objects have. This could also feed into how the limits are set (e.g. through the LI applied to animated objects, or limiting the number of animated objects which can be attached to an avatar or which can follow an avatar, etc.), all of which might be used individually or in some combination depending on the objects in question. The viewer-side impact could also be limited by having attached animated objects count towards the avatar’s rendering cost.

[42:00-42:21] Impostors are also likely to be extended to apply to animated objects, although work hasn’t started on this as yet.

Other Items

[0:44-1:21] Bento wiki information: it was mentioned in a previous meeting that some of the Bento wiki content was broken – links weren’t working and expected downloads weren’t available. This should now all be fixed.

[25:04-36:15] Development kits for the default mesh avatars: in short, nothing planned on the Lab’s part at present, although due note was taken that there could be potential for such kits, and for the provision of better starter content to help new users understand what might be possible with content creation in SL.

The idea behind the initial question was to help give those new to mesh content creation the means to better understand what can / cannot be done with mesh in-world, and to get to grips with some of the basics of mesh development and modelling. This quickly expanded into discussions on “good” and “bad” content, and on broadening the availability of content guides / best practices, through to more formalised attempts at educating those coming into mesh content creation. An argument against this is that it could lead to misunderstandings and the creation of poor content in SL, with the suggestion that more extensive best practice guidelines would be better.

[43:20-49:29] Why can’t animators replace default facial expressions in the same way they can replace walk animations? Because AOs affecting walks, sits, etc., all interact with the server-side locomotion graph, which has a notion of, and manages, these things. Facial expressions, etc., are not recognised by the locomotion graph, but are enacted viewer-side, with the results effectively “passed through” the simulator (which is aware an animation – smile, frown, whatever – is being played, just not what the animation is actually doing). There is the potential to change this by extending the animation system, but outside of supplemental animations, there is no current commitment to doing this.

The discussion extends into the system avatar morph capability and sliders / limitations, running through until 53:11.

[54:13-1:03:00] Adjustable walk / run speeds: the ability to adjust / scale walk and run speeds to be in accordance with the size of an avatar, etc., has been a common request (see: feature requests BUG-7006 and SVC-7824 for example). Vir points out that currently, the speeds are set simulator-side and that adjusting them of any on-the-fly changes could be problematic as it involves an array of simulator and viewer changes. As such, scripted capabilities which adjust the viewer-side animation speeds might be an easier solution (Tapple Gao already supplies an AO for avatar creators which allows for some degree of speed control in their products, but something that is more generally usable is seen as ideal).