SL project updates week 29/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, July 20th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. These notes present the meeting in terms of topics discussed, rather than a chronological breakdown of the meeting, so the provided time stamps may appear to be out of sequence in places. All time stamps are provided as links which will open the video in a separate browser tab, allowing the discussion to be heard in full.

Animated Mesh

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents)
  • It will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.
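As a rough illustration of how scripted animation of such objects might look, the sketch below uses hypothetical command names (llStartObjectAnimation / llStopObjectAnimation) – the actual LSL functions had not been announced at the time of the meeting, so these names and signatures are assumptions for illustration only:

```lsl
// Hypothetical sketch: playing an animation held in an animated
// object's own contents, triggered by touch and stopped on a timer.
// llStartObjectAnimation / llStopObjectAnimation are illustrative
// names only - not confirmed LSL API at the time of writing.
default
{
    touch_start(integer total_number)
    {
        // Play an animation stored in this object's inventory
        llStartObjectAnimation("wag_tail");
        llSetTimerEvent(5.0);
    }

    timer()
    {
        // Stop the animation after five seconds
        llStopObjectAnimation("wag_tail");
        llSetTimerEvent(0.0);
    }
}
```

This mirrors how scripted attachments on avatars already call animations from their own contents, which is the model the project summary describes.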

Current Status

[3:33-4:29] Work is focused on getting right-click selections and wire frames to work correctly (e.g. when you right-click on a mesh, the object stops moving, the correct menu is displayed, and the mesh is shown as a selected wire frame). All of this seems to be working well, and while Vir is still not in a position to give a time frame estimate of when a project viewer will appear, his feeling is that there are no major obstacles sitting in the way.

[13:24-16:54] Performance impact assessment: The work isn’t sufficiently advanced to carry out any kind of assessment on the impact multiple animated objects may have on simulator and viewer performance. Animated objects should not have as significant a cost as avatars tend to have, but it could get expensive with multiple rigged vertices being animated in a single location.

Vir’s view is that the relevant test will likely be how many joints are actively animated and rigged to, rather than just how many bones are in the scene, given that idle bones aren’t really going to impact anything. Until stress tests can be held and the figures refined, any cost / impact values assigned in a project viewer will be place holders, subject to change.

[43:50-58:00] Attaching animated objects to avatars / rigging prims or non-rigged mesh to skeleton bones: A discussion encompassing BUG-134018 and attaching animated objects to avatars. Vir sees the problem with the latter as being primarily a performance question / costing issue (large numbers of objects with potentially no land impact which could influence performance).

Attaching avatars to animated objects (outside of sitting on them, as is done with vehicles, etc., currently) is seen as more complicated because the animated object is being controlled by a viewer-side skeleton about which the simulator has no notion, and so ideas of attachment points become vague, and questions open up on how things are tracked by the simulator, given there is no notion of agents associated with animated objects. This discussion also encompasses issues of attaching avatars to bones within animated objects, raising questions of parenting, animation synchronisation, etc.

Bakes on Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures. This may lead to a reduction in the complexity of mesh avatar bodies and heads.

Current Status

[4:32-4:39] Anchor Linden has been working on other projects recently, but the hope is he’ll be back working on bakes on mesh soon.

[5:40-6:19] Bakes on mesh should allow alphas / transparent textures to be used as they are with the system avatar at present: if an alpha / transparent channel is in a bake, it will place “holes” in the mesh just as they can be used to blank parts of the system avatar. However, once the bakes on mesh project has progressed far enough, this will require testing to confirm.

[58:27-58:48] There is no estimate on when a project viewer, etc., for bakes on mesh might appear.

Other Items

Increasing the Build / Mesh Import Size

[6:24-12:18] It was asked if the current upper size limit on imported objects could be increased so that very large items such as region-sized landscape / terrain models could be imported without having to break them into smaller segments.

There are currently no plans to increase the size limit on prims / single mesh imports. It’s also unclear how massive objects might be affected by land impact. Land impact calculations include things like the overall area of the object, the amount of screen space it might occupy based on its dimensions, etc.; so having one large piece of terrain could have a far larger land impact than using a number of smaller pieces to achieve the same result. Additional concerns include the increased risk of encroachment issues, etc., for very large objects.

JIRA feature requests outlining why an increase might be useful and the kind of use cases it could meet are invited.

Maya .ANIM Exporter

[17:39-18:28] Aura Linden continues to work on the .ANIM exporter for Maya (which she has been developing in her own time), and which she plans to make available as an open-source offering. There are also the pre-Bento (translations on mPelvis only) and post-Bento (translations on all bones) .BVH exporters available on the Bento testing page of the wiki.

In Brief

  • [18:59-19:42] Various requests were put forward to extend the mesh object physics types which can be specified at upload (e.g. cube, basic Havok presets, etc.). Vir requested a JIRA be raised so it could be noted and, as / if / when time allows, pain points could be looked at and perhaps improvements / changes to the uploader made.
    • [20:40-22:54] Discussion about a specific issue in uploading a cat model using the official viewer (which crashes) and Firestorm (which manages the upload).
  • [25:13-29:26] Discussion (primarily text) on dynamic mirrors STORM-2055 and water as a reflective surface.
  • [31:35-34:18] Discussion (primarily text) about BUG-134006 “Viewer code is not aligned to server code when calculating physics shape for thin objects”. This has been accepted by the Lab, but as it is seen as a conflict between the viewer and the server, no decision has been made on whether it should be a server-side or viewer fix. Firestorm have adopted a viewer-side fix, which will appear in their next release. The root of the problem appears to be changes made in the physics costings as a result of mesh being introduced. This is followed by a further conversation on the physics uploader, custom pivot points, and issues through to 42:14.

SL project updates week 28/2: Content Creation UG

Content Creation User Group Meeting, Hippotropolis Camp Fire Circle

The following notes are taken from the Content Creation User Group meeting, held on Thursday, July 13th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Audio extracts are provided where relevant. Note that this article is organised (as far as possible) by topic, and does not necessarily reflect the chronological order in which items were discussed. Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article. Time stamps in the text refer to that recording, and will open the video at the relevant point in a separate browser tab for ease of reference.

Note that the region crashed at around 56 minutes into the meeting.

Animated Objects

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation.

  • At this point in time, this is not about adding fully functional, avatar-like non-player characters (NPCs) to Second Life
  • Animated objects will not (initially):
    • Have an avatar shape associated with them
    • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
    • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
    • Use the avatar baking service
  • The project may be extended in the future.
  • It will involve both back-end and viewer-side changes, likely to encompass new LSL commands to trigger and stop animations (held in the object’s contents)
  • It will most likely include a new flag added to an existing rigged object type in order for the object to be given its own skeleton.

Recent Progress

  • [1:50] The issue with current viewers crashing when used on a region where the server-side support for animated mesh has been enabled (due to the viewer receiving unrecognised messages from the simulator) appears to have been resolved.
  • Vir is working on ways to get a good match between the object position in-world and the skeleton position, and this will likely be the subject of a future meeting discussion.
  • [6:30] As animated objects are set using a flag, any that are modifiable will be switchable as animated or not by users.
  • [17:20] There currently isn’t any documentation on animated objects, but hopefully by the time the project viewer appears, information will be made available on what animated objects can do, how they can be used, etc., together with some test files.
  • [29:16] There is still no date on when a project viewer is liable to appear.
  • [52:58] A reminder that the current project isn’t intended to solve for all potential uses of animated mesh; things like “full” NPCs utilising their own avatar-like inventory for bakes, etc., support of attachment points for non-rigged objects, etc, are seen as more ambitious, and thus follow-ons.

Attaching Avatars and Animated Objects To One Another

[9:34 and 1:02:12] This follows on from the last meeting. No decision has been made on whether or not it will be implemented as an initial part of the animated mesh project, as it involves several complications (such as defining some kind of skeletal hierarchy to differentiate between avatar skeleton and animated object skeleton). Vir’s thinking is perhaps to push to get a project viewer out without any such capability, and use that as a means to generate feedback on what people would like to see and how they think it might work.

Bakes on Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures. This may lead to a reduction in the complexity of mesh avatar bodies and heads.

Recent Progress

[2:59] Anchor Linden is continuing to make progress on this work. It seems the update to support 1024×1024 textures is now complete (or pretty much so), as Vir indicated that the next step is performance testing the baking service in handling 1024×1024 bakes.

[5:53] The updated system should allow textures of different sizes (512×512 and 1024×1024) to be mixed and correctly composited, but testing will be required to confirm this.

[7:30] The mechanics of how bakes are to be applied to meshes are still being worked out. Vir’s thinking is that it will be another editable property which can be used with meshes to determine which face(s) on the mesh use the baked textures.

Supplemental Animations

Project Summary

To provide server-side support for running multiple animations without them clashing (e.g. wings flapping without interfering with walking).

Status

No progress thus far. Once the animated objects project viewer is available, supplemental animations will likely get some attention.

Other Items

Limits

[11:23-21:25] Removing the 5m bone translation limit: A question was asked about removing the 5m bone translation limit from its origin to allow for really, really large avatars. The belief behind the question was that this is a recent change which now prevents translations greater than 5m, where previously they could be encoded and uploaded.

This led to a lengthy discussion which encapsulated the idea that the limit appears to be deeply embedded in the animation system and has always been there; disputes on whether or not it was possible using .ANIM files rather than .BVH; suggestions that an earlier version of the .ANIM format (v.0 – although long dead) may have allowed it, etc.

Vir’s stated belief is that the limit is too deeply embedded in the animation system to have ever been different, although he acknowledged that as the Lab has tightened a range of limits to ensure they are properly observed by the system, it is possible that the limit is now being more rigorously enforced. Either way, the limit is unlikely to change. It was also pointed out that rotation, joint position, and scaling the mPelvis bone up are all means by which larger avatar models could be produced.

The core voice comments from Vir are below; please refer to the video for the full conversation – which was also partially text-based.

[22:38 and 31:12] Increasing the animation limit: this has previously come up for discussion. Currently, SL should support up to 64 concurrent animation motions playing at one time per agent (e.g. walks, arm swings, wing flaps, tail swishes, etc.). However, how many can be reliably played at any one time without encountering problems might be lower. A request to increase the limit was made on the basis of then allowing LSL-based description of joint positions which could be translated into animations, to allow improved scripted locomotion of NPCs.

[+/-35:45 onwards] Pivot point support and improved IK support: There is a largely text-based discussion on pivot point support for mesh and improved Inverse Kinematics (IK) support / targeting to allow for better interactions between avatars, and between avatars and objects (e.g. with IK, being able to have avatars hold hands, “grip” a ladder and climb it, etc.). Pivot points would allow improved rotations (doors swinging at the hinge, for example) – although it was pointed out that animated mesh could achieve the same goals.

This conversation touches on having custom / arbitrary skeletons in Second Life – although as Medhue Simoni has pointed out, the Bento skeleton already allows for a fairly wide range of “custom” quadruped and biped skeleton forms to be created. As it is, arbitrary skeletons are not something the Lab will be tackling in the near future.

SL project updates week 27/2: Content Creation UG

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, July 6th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Audio extracts are provided where relevant. Note that this article is organised (as far as possible) by topic, and does not necessarily reflect the chronological order in which items were discussed. Medhue was a little late to the meeting, and so missed the first 15 minutes. However, his video is embedded at the end of this article, and time stamps to it, where applicable, are provided and will open the video at that point in a separate browser tab for ease of reference.

New Starter Avatars

The Lab issued new starter avatars on Wednesday, July 5th. Six out of the eight classic avatars utilised Bento extensions for rideable horses or wings. See my write-up on them for more.

Animated Objects

General Update

Work is continuing on trying to get linksets to work correctly. This is focused on ensuring there is sufficient back-end code to correctly handle multiple animated requests from different elements within an animated object.

Some general questions related to animated mesh were asked at various points in the meeting; these are addressed below.

  • Will animated objects use the Bento skeleton – yes.
  • [5:07] Will animated mesh allow the return of mesh UUID flipping (removed due to the ability being badly abused) – very unlikely.
  • [6:12] Where will animations for animated objects be stored? Within the object (or elements of the object) itself, and called via the object’s own scripts – much as scripted attachments on avatars are handled.
  • [7:15] Will animated objects use an AO? Not in the sense of an avatar AO, as animated objects will not make use of the basic system animations / locomotion graph. There was some debate over the effectiveness of using the AO system, although it was pointed out it could make it easier when having pets following you, running when you run, etc. One suggestion was that pathfinding might be adopted to act as a pseudo-AO.
  • [29:02] There is still no date on when an animated objects project viewer will be available.

Attaching Avatars and Animated Objects To One Another

There is obviously considerable interest in enabling avatars and animated objects to attach to one another. For example, being able to walk up to a free-roaming horse and then mount it and ride it, or having a pet running around on the ground you could “pick up” and have it sit on your shoulder, move between your shoulders, look around, lie around your neck, etc.

Achieving this raises numerous issues – how should two skeletal objects attach to one another, how are the independent animation sets handled, how are they kept in sync, how is the hierarchy managed (which is the parent, which is the child), etc.

Some options have been suggested for allowing avatars to attach to animated objects – such as by having a specific “sit bone” which could be targeted and then used as an anchor point to help maintain some semblance of synchronisation between the animated object and the avatar’s own animations. Feature request BUG-100864 offers a similar suggestion, utilising a scripted approach. Vir has suggested that this feature request perhaps be used as the basis for further discussion, and welcomes JIRAs on alternative approaches.

“First Pass” at Animated Objects

[09:59] Vir reminded people that the current work is only a first pass at animated objects, designed to provide basic, usable functionality. Providing more NPC-like capabilities: animated objects with locomotion graphs and using the system animations; attaching animated objects to avatars / avatars to animated objects; giving animated objects the notion of an inventory and wearables, etc., are all seen as potential follow-up projects building on the initial capability, rather than attempting to do everything at once.

Caching / Pre-loading Animations

Sounds and animations can suffer a noticeable delay on a first-time play if they have to be fetched directly at the time they’re needed. For sounds, this can be avoided by using LSL to pre-cache them (e.g. using llPreloadSound) so they are ready for the viewer to play when needed, but there is no similar capability for animations.

A feature request (BUG-7854) was put forward at the end of December 2015, but has not moved beyond Triage. Vir’s view is that pre-loading animations in a manner similar to sounds makes sense, should be relatively straightforward, and could help with syncing animations in general. However, whether or not it might / could be done within the animated objects project is TBD.
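For reference, the existing sound pre-caching approach looks like the minimal sketch below; llPreloadSound and llPlaySound are real LSL functions, while the sound name is an illustrative placeholder. No equivalent call exists for animations, which is the gap BUG-7854 addresses:

```lsl
// Pre-caching a sound so its first play doesn't stall on a fetch.
default
{
    state_entry()
    {
        // Ask nearby viewers to fetch and cache the sound now,
        // rather than at the moment it is first played. The sound
        // clip "door_creak" is assumed to be in the object's inventory.
        llPreloadSound("door_creak");
    }

    touch_start(integer n)
    {
        // Plays with minimal delay, as viewers should have it cached
        llPlaySound("door_creak", 1.0);
    }
}
```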

Other Items

Sample Code and Code Libraries

[11:39-27:45] Medhue Simoni opens a discussion on code availability – noting that Pathfinding had suites of example code which appear to have vanished, and suggesting that if the Lab provided more complex examples of how new capabilities can be used, and made these available to everyone, it could help creators leverage such capabilities more effectively.

From this came ideas of open-sourcing more of the Lab’s own code for experiences (like Linden Realms), the potential for abuse this could present (people developing cheats for games), the complexities (or otherwise) of LSL coding, the fact that often when the Lab develops something, they’re not aware of exactly what directions creators will take it, and so masses of example code might be of limited value, etc., – although code demonstrating how to do specific things would undoubtedly be of benefit.

Vir points out that the Lab’s resources for coding are finite, and that a more recognised open-source repository from which documented code and examples could be stored, referenced, and obtained might be in order – there are already libraries and resources on the SL wiki, but these aren’t necessarily easy to navigate. There is also the LSL wiki – although this may be in need of updating – as well as resources on a number of forums.

[25:47] Within this conversation, the question was asked whether the 64Kb limit on scripts could be raised. The short answer – as Vir doesn’t deal directly with the scripting side of things – is: unknown.

[29:56-end] This conversation then spins out into the technical limitations of Second Life (CPU core usage, etc.) when compared to other platforms, as seen by one creator. Some of the broader comments in voice and text seem predicated on misunderstandings (e.g. the Lab is investing in newer hardware where possible, but is hamstrung by the need to ensure backwards compatibility with existing content, which sometimes limits just what can be done; the idea that the new starter avatars are no-mod – they’re fully mod, etc.). It also touches on the basic need for education on content creation (e.g. responsible texture sizing and use), before spinning out into general concerns about overall security for content in SL.

SL project updates week 26/2: Content Creation UG

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, June 29th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting via his YouTube channel, and I’ve embedded the video at the end of this article. Time stamps in the text below reference this video. Note, however, that items are presented by topic, not necessarily in chronological order. Audio extracts are also provided, but please note these may have been compressed to just the core points of discussion / statements (while avoiding any loss of context).

Rigging To Attachment Points

[1:11-8:45] There has been some discussion around this for the last couple of meetings. In essence, rigging to attachment points was used by some content creators in the past to overcome a shortage of bones. With Bento, it was decided that rigging to attachment points should not be supported in new uploads, but would still be allowed for older meshes using it, to avoid content breakage. However, it now turns out that there is a conflict between the simulator (which allows rigging to attachment points) and the viewer (which shouldn’t allow mesh rigged to attachment points to be uploaded – although some TPVs still do, by accident or design).

Vir is still looking into this to determine how best to handle things going forward. However, it has been pointed out that there is legacy content which cannot be easily updated if uploads of meshes rigged to attachment points are blocked, and that clothing cannot be made for mesh bodies using rigged attachment points. His current feeling is therefore that the simulator behaviour will likely not be changed, and that the viewer – based on a JIRA he has raised – will be updated to be consistent with the simulator’s rules, although he made a request that new avatars not be made with meshes rigged to attachment points.

Note: the discussion on the video includes references to Firestorm (version 5.0.1 onwards) no longer accepting uploads for mesh rigged to attachment points due to an accidental breakage (the fix didn’t make the cut for the 5.0.7 release).

Animated Objects

Attachment Points on Animated Objects

[10:29-14:21] Animated objects will have attachment points as they use the avatar skeleton. However, the following should be kept in mind:

  • In relation to rigging to attachment points (above) – this should work for animated objects (so this could allow existing avatars rigged to attachment points and volume bones to be converted to animated objects, for example)
  • The Lab is undecided on including attachment points at this point in time in order to allow items to be attached to animated objects (or animated objects to one another). They are simply there as a part of the avatar skeleton.

General Status

[39:59-41:30] The animated objects (aka “Animesh”) project is progressing. There is still no ETA on a project viewer. Vir is still working on getting the avatar skeleton to work with linksets of multiple meshes making up an object. Most of this is working, although the graphics pipeline still gets upset in places when changing objects from animated to static or vice versa at the wrong time.

Still to be done are evaluating the land impact of animated objects, and deciding whether to implement support for attachment points now or in the future.

Given that objects already have a land impact, the current thinking is that when converted to animated objects, they will likely incur an additional LI overhead – although what this will be can only be determined in time. Hence, for the project viewer, once available, it may be an arbitrary figure, subject to adjustment.

Bakes on Mesh

[17:28-18:10] Anchor Linden is making “good progress” on updating the Baking Service to support increased texture resolutions (512×512 to 1024×1024). Once this work is completed, the next step is to run performance testing on the baking service to assess how well it can support the increased resolution, and whether any additional hardware, etc., might be required in support of the increased loads.

Other Items

“Crazy Bone Scaling Animation”

[9:00-10:05] During the week #25 meeting, a bone scaling animation was demonstrated which could rescale an avatar to huge proportions, as if it were being “blown up” / inflated. Vir looked at this and believes it is the result of storing animations in a way that’s “not normalised” and which is not being handled correctly for scaling. So while useful in the way it currently performs, the technique isn’t useful for accurately rescaling the avatar skeleton.

Hires Proxy Mesh Rigging

[16:33-16:49] This came out of the last meeting, and Beq Janus is working on a design outline for it, covering how it could be supported in-world and protect mesh body creators’ intellectual property at the same time. She plans to offer the document via Google Docs, and those wishing to read it and provide feedback should e-mail her at beq.janus-at-phoenixviewer.com for access.

Mesh Uploader and LOD Options

[20:35-43:00] A suggestion was put forward to change the Level of Detail (LOD) buttons on the mesh uploader from the current “Generate” default to “Load from File” in an attempt to encourage creators to make their own, efficient, LOD files, rather than relying on the auto generation process – which is not always as efficient as custom LOD files.

Feedback was that changing the buttons would not help, but could encourage people simply to generate a single high LOD file and use that (a problem already evident when custom LOD files are used). An alternative suggestion was to remove the ability to adjust the LOD auto-generation process (so no spinners on the uploader) – so unless creators supply their own LOD files, they have to accept whatever the uploader generates for each level.

Suggested mesh uploader change that sparked a discussion

The core of the discussion in voice is below, but please refer to the video to hear it in full.

This led to a lengthy (primarily text) discussion about how to encourage creators to use their own sensible and custom LODs, which is interspersed with other topics. Some of the ideas offered by users at the meeting were:

  • Making custom LOD uploads cheaper than generating them through the uploader
  • Offering similar incentives to encourage creators to reduce their high-end poly counts and not fudge their low-end LODs
  • Improving the preview option in the uploader to better represent LOD sampling
  • Adding a field on the marketplace similar to the Land Impact one but for Display Weight on worn meshes (on the basis that a high display weight can be indicative of poor LOD usage), and in theory encourage creators to be more efficient in their use / provision of LOD files
  • Have a render meta mode like physics, that shows the quality of the LODs as a colour map (e.g. look at the volumetric relationship between the LODs on the basis that a good LOD should hold volume)
  • Instructional videos from Torley – although Medhue Simoni has a 3-part series on LODs: Part 1, Part 2, Part 3.

In Brief

  • [14:52-15:36] The link on the SL wiki Rigging Fitted Mesh page to download the avatar skeleton is currently broken.
  • [19:04-20:22] Inverse Kinematics via LSL function with global position – this has been suggested a number of times. While noting it would be useful (it might, for example, enable an animation to make it appear as if an avatar is opening a door when standing before it), Vir stated it has not received in-depth thought at the Lab in terms of being implemented or how it would work, given the server currently doesn’t know where the joints in an avatar are, so it introduces a level of complexity as to how things would be handled.
  • As most people know, initially accessing Aditi is a case of submitting a support ticket. Inventory is now merged between Agni (the Main grid) and Aditi around 24 hours after initially logging in to the latter (a merge process is run every day for all accounts which have been logged into since the last run). However, it now appears that changing your SL password can break your Aditi access, requiring a further support ticket.
  • [43:09-end] Discussion on copybotting, policies, banning, etc., which threads through to the end of the meeting, and split between Voice and chat.