SL project updates week 27/2: Content Creation UG

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, July 6th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Audio extracts are provided where relevant. Note that this article is organised (as far as possible) by topic, and does not necessarily reflect the chronological order in which items were discussed. Medhue was a little late to the meeting, and so missed the first 15 minutes. However, his video is embedded at the end of this article, and time stamps to it, where applicable, are provided; these will open the video at that point in a separate browser tab for ease of reference.

New Starter Avatars

The Lab issued new starter avatars on Wednesday, July 5th. Six out of the eight classic avatars utilised Bento extensions for rideable horses or wings. See my write-up on them for more.

Animated Objects

General Update

Work is continuing on trying to get linksets to work correctly. This is focused on ensuring there is sufficient back-end code to correctly handle multiple animation requests from different elements within an animated object.

Some general questions related to animated mesh were asked at various points in the meeting; these are addressed below.

  • Will animated objects use the Bento skeleton? – yes.
  • [5:07] Will animated mesh allow the return of mesh UUID flipping (removed due to the ability being badly abused)? – very unlikely.
  • [6:12] Where will animations for animated objects be stored? Within the object (or elements of the object) itself, and called via the object’s own scripts – just as scripted attachments on avatars are handled today (see the sketch after this list).
  • [7:15] Will animated objects use an AO? Not in the sense of an avatar AO, as animated objects will not make use of the basic system animations / locomotion graph. There was some debate over the effectiveness of using the AO system, although it was pointed out it could make it easier when having pets following you, running when you run, etc. One suggestion was that pathfinding might be adopted to act as a pseudo-AO.
  • [29:02] There is still no date for when an animated objects project viewer will be available.
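
As referenced above, here is a minimal sketch of the existing scripted-attachment pattern: the animation asset sits in the object’s inventory, and the object’s own script triggers it. The animation name “hold_pose” is hypothetical – any animation in the prim’s inventory would do.

    default
    {
        attach(key id)
        {
            if (id != NULL_KEY) // just attached to an avatar
            {
                llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
            }
        }

        run_time_permissions(integer perm)
        {
            if (perm & PERMISSION_TRIGGER_ANIMATION)
            {
                llStartAnimation("hold_pose"); // animation stored in this object
            }
        }
    }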

Attaching Avatars and Animated Objects To One Another

There is obviously considerable interest in enabling avatars and animated objects to attach to one another. For example, being able to walk up to a free-roaming horse, then mount and ride it; or having a pet running around on the ground which you could “pick up” and have sit on your shoulder, move between your shoulders, look around, lie around your neck, etc.

Achieving this raises numerous issues – how should two skeletal objects attach to one another, how are the independent animation sets handled, how are they kept in sync, how is the hierarchy managed (which is the parent, which is the child), etc.

Some options have been suggested for allowing avatars to attach to animated objects – such as by having a specific “sit bone” which could be targeted and then used as an anchor point to help maintain some semblance of synchronisation between the animated object and the avatar’s own animations. Feature request BUG-100864 offers a similar suggestion, utilising a scripted approach. Vir has suggested that this feature request perhaps be used as the basis for further discussion, and welcomes JIRAs on alternative approaches.

“First Pass” at Animated Objects

[09:59] Vir reminded people that the current work is only a first pass at animated objects, designed to provide basic, usable functionality. More NPC-like capabilities – animated objects with locomotion graphs using the system animations; attaching animated objects to avatars / avatars to animated objects; giving animated objects the notion of an inventory and wearables, etc. – are all seen as potential follow-up projects building on the initial capability, rather than attempting to do everything at once.

Caching  / Pre-loading Animations

Sounds and animations can suffer a noticeable delay on a first-time play if they have to be fetched directly at the time they’re needed. For sounds, this can be avoided by using LSL to pre-cache them (e.g. using llPreloadSound) so they are ready for the viewer to play when needed, but there is no similar capability for animations.
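
By way of illustration, a minimal sketch of the sound pre-caching pattern, assuming a hypothetical sound clip named “gun_fire” in the prim’s inventory:

    default
    {
        state_entry()
        {
            llPreloadSound("gun_fire"); // viewers in range fetch the clip now
        }

        touch_start(integer total_number)
        {
            llPlaySound("gun_fire", 1.0); // plays without a first-use fetch delay
        }
    }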

A feature request (BUG-7854) was put forward at the end of December 2015, but has not moved beyond Triage. Vir’s view is that pre-loading animations in a manner similar to sounds makes sense, should be relatively straightforward, and could help with syncing animations in general. However, whether or not it might / could be done within the animated objects project is TBD.

Other Items

Sample Code and Code Libraries

[11:39-27:45] Medhue Simoni opens a discussion on code availability – noting that Pathfinding had suites of example code which appear to have vanished, and suggesting that if the Lab did more to provide complex examples of how new capabilities can be used, and made them available to everyone, this could help creators leverage such capabilities more effectively.

From this came ideas of open-sourcing more of the Lab’s own code for experiences (like Linden Realms); the potential for abuse this could present (people developing cheats for games); the complexities (or otherwise) of LSL coding; and the fact that, when the Lab develops something, it is often not aware of exactly what directions creators will take it, so masses of example code might be of limited value – although code demonstrating how to do specific things would undoubtedly be of benefit.

Vir points out that the Lab’s resources for coding are finite, and that a more recognised open-source repository from which to store, reference and obtain documented code and examples might be in order – there are already libraries and resources on the SL wiki, but these aren’t necessarily easy to navigate. There is also the LSL wiki – although this may be in need of updating – as well as resources on a number of forums.

[25:47] Within this conversation, the question was asked if the 64Kb memory limit on scripts could be raised. The short answer – as Vir doesn’t deal directly with the scripting side of things – is: unknown.
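
For reference, a script can already inspect its own memory ceiling and current headroom via existing LSL calls – a minimal sketch:

    default
    {
        state_entry()
        {
            // Mono scripts are currently capped at 64Kb (65536 bytes)
            llOwnerSay("Memory limit: " + (string)llGetMemoryLimit() + " bytes");
            llOwnerSay("Free memory: " + (string)llGetFreeMemory() + " bytes");
        }
    }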

[29:56-end] This conversation then spins out into the technical limitations of Second Life (CPU core usage, etc.) when compared to other platforms, as seen by one creator. Some of the broader comments in voice and text seem predicated on misunderstandings (e.g. the Lab is investing in newer hardware where possible, but is hamstrung by the need to ensure backwards compatibility with existing content, which sometimes limits just what can be done; the idea that the new starter avatars are No Mod – they’re fully mod, etc.). It also touches on the basic need for education on content creation (e.g. responsible texture sizing and use), before spinning out into general concerns over the overall security of content in SL.

SL project updates week 26/2: Content Creation UG

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, June 29th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting via his YouTube channel, and I’ve embedded the video at the end of this article. Time stamps in the text below reference this video. Note, however, that items are presented by topic, not necessarily in chronological order. Audio extracts are also provided, but please note these may have been trimmed to just the core points of discussion / statements (while avoiding any loss of context).

Rigging To Attachment Points

[1:11-8:45] There has been some discussion around this for the last couple of meetings. In essence, rigging to attachment points was used by some content creators in the past to overcome a shortage of bones. With Bento, it was decided that rigging to attachment points should not be supported in new uploads, but would still be allowed for older meshes using it, to avoid content breakage. However, it now turns out that there is a conflict between the simulator (which allows rigging to attachment points) and the viewer (which shouldn’t allow mesh rigged to attachment points to be uploaded – although some TPVs still do, by accident or design).

Vir is still looking into this to determine how best to handle things going forward. However, it has been pointed out that there is legacy content which cannot easily be updated if uploads of meshes rigged to attachment points are blocked, and that clothing cannot be made for mesh bodies which use rigged attachment points. His current feeling is therefore that the simulator behaviour will likely not be changed, and that the viewer – based on a JIRA he has raised – will be updated to be consistent with the simulator’s rules, although he made a request that new avatars not be made with meshes rigged to attachment points.

Note: the discussion on the video includes references to Firestorm (version 5.0.1 onwards) no longer accepting uploads for mesh rigged to attachment points due to an accidental breakage (the fix didn’t make the cut for the 5.0.7 release).

Animated Objects

Attachment Points on Animated Objects

[10:29-14:21] Animated objects will have attachment points as they use the avatar skeleton. However, the following should be kept in mind:

  • In relation to rigging to attachment points (above) – this should work for animated objects (so this could allow existing avatars rigged to attachment points and volume bones to be converted to animated objects, for example)
  • The Lab has not yet decided whether these attachment points will be usable for attaching items to animated objects (or animated objects to one another). For the time being, they are simply there as a part of the avatar skeleton.

General Status

[39:59-41:30] The animated objects (aka “Animesh”) project is progressing. There is still no ETA on a project viewer. Vir is still working on getting the avatar skeleton to work with linksets of multiple meshes making up an object. Most of this is working, although the graphics pipeline still gets upset in places if objects are changed from animated to static, or vice versa, at the wrong time.

Still to be done are evaluating the land impact of animated objects, and deciding whether or not to implement support for attachment points now or in the future.

Given that objects already have a land impact, the current thinking is that when converted to animated objects, they will likely incur an additional LI overhead – although what this will be can only be determined in time. Hence, for the project viewer, once available, it may be an arbitrary figure, subject to adjustment.

Bakes on Mesh

[17:28-18:10] Anchor Linden is making “good progress” on updating the Baking Service to support increased texture resolutions (512×512 to 1024×1024). Once this work is completed, the next step is to run performance testing on the baking service to assess how well it can support the increased resolution, and whether any additional hardware, etc., might be required in support of the increased loads.

Other Items

“Crazy Bone Scaling Animation”

[9:00-10:05] During the week #25 meeting, a bone scaling animation was demonstrated which could rescale an avatar to huge proportions, as if it were being “blown up” / inflated. Vir looked at this and believes it is the result of storing animations in a way that’s “not normalised”, and which is not being handled correctly for scaling. So while useful in the way it currently performs, the technique isn’t suitable for accurately rescaling the avatar skeleton.

Hires Proxy Mesh Rigging

[16:33-16:49] This came out of the last meeting, and Beq Janus is working on a design outline for it, covering how it could be supported in-world while protecting mesh body creators’ intellectual property at the same time. She plans to offer the document via Google Docs, and those wishing to read it and provide feedback should e-mail her at beq.janus-at-phoenixviewer.com for access.

Mesh Uploader and LOD Options

[20:35-43:00] A suggestion was put forward to change the Level of Detail (LOD) buttons on the mesh uploader from the current “Generate” default to “Load from File” in an attempt to encourage creators to make their own, efficient, LOD files, rather than relying on the auto generation process – which is not always as efficient as custom LOD files.

Feedback was that changing the buttons would not help, but could encourage people simply to generate a single high LOD file and use that (a problem already evident when custom LOD files are used). An alternative suggestion was to remove the ability to adjust the LOD auto-generation process (so no spinners on the uploader) – so unless creators supply their own LOD files, they have to accept whatever the uploader generates for each level.

Suggested mesh uploader change that sparked a discussion

The core of the discussion in voice is below, but please refer to the video to hear it in full.

This led to a lengthy (primarily text) discussion about how to encourage creators to use their own sensible and custom LODs, which is interspersed with other topics. Some of the ideas offered by users at the meeting were:

  • Making custom LOD uploads cheaper than generating them through the uploader
  • Offering similar incentives to encourage creators to reduce their high-end poly counts and not fudge their low-end LODs
  • Improving the preview option in the uploader to better represent LOD sampling
  • Adding a field on the Marketplace similar to the Land Impact one, but for Display Weight on worn meshes (on the basis that a high display weight can be indicative of poor LOD usage), which could in theory encourage creators to be more efficient in their use / provision of LOD files
  • Having a render metadata mode, like the physics view, that shows the quality of the LODs as a colour map (e.g. looking at the volumetric relationship between the LODs, on the basis that a good LOD should hold volume)
  • Instructional videos from Torley – although Medhue Simoni already has a 3-part series on LODs: Part 1, Part 2, Part 3.

In Brief

  • [14:52-15:36] The link on the SL wiki Rigging Fitted Mesh page to download the avatar skeleton is currently broken.
  • [19:04-20:22] Inverse Kinematics via an LSL function with global position – this has been suggested a number of times. While noting it would be useful (it might, for example, enable an animation to make it appear as if an avatar is opening a door when standing before it), Vir stated it has not received in-depth thought at the Lab in terms of whether or how it would be implemented; given the server currently doesn’t know where the joints in an avatar are, it introduces a level of complexity as to how things would be handled.
  • As most people know, initially accessing Aditi is a case of submitting a support ticket. Inventory is now merged between Agni (the Main grid) and Aditi around 24 hours after initially logging in to the latter (a merge process is run every day for all accounts which have been logged into since the last run). However, it now appears that changing your SL password can break your Aditi access, requiring a further support ticket.
  • [43:09-end] Discussion on copybotting, policies, banning, etc., which threads through to the end of the meeting, and split between Voice and chat.

SL project updates week 25/2: Content Creation UG w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The majority of the following notes are taken from the Content Creation User Group meeting, held on Thursday, June 22nd, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core projects LL has in hand. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting, but are ordered by subject matter.

Server Deployments Week 25 – Recap

As always, please refer to the server deployment thread for the latest updates.

  • On Tuesday, June 20th, the Main (SLS) channel was updated with a new server maintenance package (#17.06.12.327066), containing fixes to help with the caps (capabilities) router (see here for details).
  • On Wednesday, June 21st, the RC channels were updated as follows:
    • BlueSteel and LeTigre should receive the same server maintenance package (#17.06.19.327206) containing internal fixes
    • Magnum should receive a server maintenance package (#17.06.19.327192) intended to fix BUG-100830 (“HTTP_CUSTOM_HEADER no longer works on RC 17.06.13.327111”) and BUG-100831 (“Lelutka Simone bento head spits a script error when attached on 17.06.13.327111 regions (Magnum & Cake)”).

Animated Objects

Vir has been trying to get animated objects using the avatar skeleton to scale in a reasonable way, to ensure linksets correctly reference the same skeleton, and to ensure things are handled correctly when they are attached or detached. He’d also be interested in hearing from makers of the “current generation” of pets on how they work – how do they maintain ground contact, how do they follow along, how is the physics managed – so that he can look into trying to make animated mesh objects operate in a compatible manner.

So, if you are a pet maker and can help Vir in this, please either attend the Content Creation User Group meetings, or contact him directly.

Attaching Animated Objects to Avatars and Avatars to Animated Objects

One of the popular aspects of pets today is the ability to attach them to an avatar (so you can carry them, have them sitting on your shoulder, etc.), and this is seen as a potentially important aspect of animated mesh. However, attempting to do so does present issues, as it would mean linking two avatar skeletons in some manner, something that is not currently possible. While there are some potential ways this could be done, it could add considerable overhead to the existing project, and also brings potential challenges with it – such as ensuring an attached skeleton is correctly oriented, determining the potential performance hit, etc.

Similarly, BUG-100864 suggests a means of going the other way – linking an avatar to an animated object – such as being able to walk up to a free-roaming horse on a region and being able to mount it and ride it, for example. However, this also raises some of the same concerns.

While not ruling either out, Vir is focused on bringing forward a relatively basic means of animating mesh objects using the avatar skeleton, one which can offer a range of potential uses while conceivably allowing existing mesh creations (such as pets) to easily be converted to use it. As such, he sees it as a foundation project, which can then be expanded to incorporate other capabilities in the future, rather than trying to pack everything into a single project which could run the risk of very long development times or becoming overly complicated in all it is trying to achieve right from the start.

Baked Textures on Mesh

Work is still focused on the baking service infrastructure updates required to support baking textures on mesh avatars. These are quite extensive, involving changes to the underpinning tools, the servers (including updating Linux), and so on.

Rigging To Attachment Points

There has been some confusion of late as to whether rigging to attachment points is allowed or not. From the Lab’s perspective, it has not been allowed for uploads since the introduction of Bento, but should still work for legacy items. However, what appears to be a server-side glitch in the last couple of weeks seems to have exacerbated the confusion.

Vir’s recommended rule-of-thumb is for TPVs to test against the Lab’s official viewer and ensure behaviours match; otherwise confusion could occur down the road once the current glitches have been corrected. To help with matters, he’s going to refresh his mind on what limitations are enforced server-side, and hopefully bring a list of them to the next meeting to help TPVs ensure they are following the requirements, in order to avoid future problems.

Other Items

Mesh Body Dev Kits / Clothing Making / “Standardised” Mesh Avatar

This topic took up the core part of the meeting, and as such, the following is an attempt to précis the core points into a readable summary.

At the moment, all mesh bodies in Second Life are unique to their creator, utilising their own core shapes and skin weightings, which have a considerable amount of IP bound up in them. Because there is no “standardised” mesh model available in Second Life, body creators need to provide developer kits to mesh clothing and attachment makers which include this core information – skin weights (in Blend or Maya or DAE or OBJ files) for rigging clothing, and the shapes – which potentially makes it very easy for someone to create their own avatar bodies.

To try to reduce this risk, mesh body makers tend to have licence agreements clothing makers are required to accept, and sometimes limit who may or may not be deemed eligible to obtain such a kit. This has caused some friction / frustration in the clothing making community.

One suggestion put forward to help reduce fears on the part of mesh avatar creators, and to allow clothing makers to more readily support avatar body brands, was that avatar makers should perhaps consider offering only the body shape to clothing makers – and then offer a fee-based rigging service. This would remove the need for avatar makers to give out their skin weight files, offer them a revenue stream, and allow clothing makers to more equitably create clothing for the more popular mesh bodies.

While there are no projects on the roadmap aimed at the SL avatar system, two other ideas were put forward which, Vir agreed, could be worth consideration down the road:

  • One is a suggestion that LL look to emulate the ability in Maya and Blender to copy skin weights from an avatar model to an item of mesh clothing by running an algorithm to match the weighting from the avatar to the nearest vertices in the clothing. This would allow the clothing to fit almost any mesh body “automatically”, removing the need for clothing makers to specifically weight their clothing to each of the mesh bodies they wish to support.
  • The development of a new “SL mesh avatar” designed to operate alongside the existing system avatar (so no content breakage for those preferring to continue using the current system avatar). If this avatar had a sufficient density of vertices, it offers two potential uses:
    • Mesh body makers could use its weightings with their custom shapes to produce individually unique mesh bodies which all have a “standardised” set of skin weights, reducing the amount of work involved in creating them (or they could continue to use their own custom skin weights if they wished)
    • It could offer clothing makers a single source of skin weights for clothing, simplifying clothing making, which – if combined with the vertices matching algorithm mentioned above – would help ensure the clothing “fits” custom weighted mesh bodies.

The vertices matching algorithm might be the more difficult of the two ideas to implement – were either to be considered. However, the development of a mesh avatar that could exist alongside the system avatar could have a lot of merit, and help “standardise” the more technical aspects of mesh avatars without impacting their individual shape / look.

Further, as mesh objects can support multiple UV sets, it would be possible for such an avatar to use the legacy UV map used to define the texture spaces on the three parts of the system avatar (thus allowing it to use existing skins, etc.), or it could support more “advanced” UV maps (so skin creators could finally design skins with two arms, rather than having the one arm “mirrored” on the avatar, as is currently the case).

Why isn’t Scaling Bones by Animations Allowed?

Scaling bones using animations has never been supported in SL, although Vir isn’t clear on why (and pseudo bone scaling via animations has been possible through attachment point scaling or animating the point positions). However, one of the things that makes designing avatars harder is having multiple ways to manipulate an aspect of a bone, because of the potential for conflicts. An example of this is bone translations, which can be affected by both animations and the shape sliders, and so can cause issues.

However, during the Bento project, the advantages of allowing translations through animations were such that the Lab opted to permit it, even allowing for the potential for issues. As scaling bones through animations could bring a similar level of complexity to avatar design (bones can obviously be scaled via the sliders), this could be the reason it hasn’t been supported. Currently, this is unlikely to change, if for no other reason than it would require a change to the animation format, which currently has no means of interpreting bone scaling.

SL project updates week 24/2: Content Creation UG w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, June 15th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core points of the meeting. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting, but are ordered by subject matter.

Animated Objects

Vir is continuing to work on this project, which has been given the informal name of “animesh” – which, as was pointed out in the meeting by some, sounds a lot like “any mesh”, although it seems to have some support among attendees, who have been doing their best to propagate the term ahead of the Lab settling on a project name.

Viewer Status

There is no ETA for a project viewer, as the current test viewer still has a habit of crashing other viewers in the same region by sending them unrecognised messages. This needs to be fixed before a viewer supporting animated meshes goes into circulation, even as a project viewer.

Scaling Animated Objects

There has been some discussion around editing animated objects in order to adjust their scale, with the associated skeleton being automatically adjusted to match the desired size of the object. In testing the idea, Vir has found it a lot harder to do than expected due to how things are coded in the viewer. Essentially, there is no overall way to scale the skeleton; every individual bone in the skeleton has to be scaled.

However, there does appear to be one viable means of achieving the scaling up / down of an animated object, and Vir is going to take a look to see if it can be made to work in a semi-predictable way.

Suggestions on how to handle this have included adding a root prim to animated objects, using a script to apply scale, or using the object’s bounding box (the physics bounding box isn’t seen as suitable, as some animated objects may not have physics associated with them). While the latter might be a little more fiddly to use, it is the option Vir seems to prefer, although, as he notes, he still needs to do more testing. If the approach doesn’t work, use of LSL commands might be looked at as an alternative.
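
As a point of reference for the scripted option, LSL can already uniformly rescale an entire linkset via llScaleByFactor; whether (and how) such a rescale would also drive the associated skeleton is exactly what remains to be decided. A minimal sketch:

    default
    {
        touch_start(integer total_number)
        {
            // Attempt to double the size of the whole linkset
            if (!llScaleByFactor(2.0))
            {
                llOwnerSay("Rescale failed (size limits or physics constraints).");
            }
        }
    }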

Baked Textures on Meshes

Anchor Linden is working on the project. At the moment the focus is on baking service infrastructure updates to support the increased baking requirements (including support of 1024×1024 textures, which is seen as the “easy” part). There is no ETA for this work at present, but the rough workflow is:

  • Update the baking service
  • Carry out performance testing – increasing the number of avatar bakes for a large number of avatars is going to increase the cost of the baking process, so the Lab needs to be sure any requirements for additional baking servers are understood
  • Issue an updated viewer which supports rendering the new bakes, and has a compatible “local baking” (used to define your initial look for transfer to the baking service) which is fully consistent with the baking service.

Once these are in place, then work can commence on how to flag mesh faces as being surfaces on which the baked textures are to be applied. This will include a mechanism for hiding the existing (default) avatar body without using an alpha layer.

Updating the baking service to support bakes on meshes will not involve adding materials support to the baking service, although that may be considered as a future project. The focus here is purely on extending the baking service to support using the baked textures already available on mesh avatar bodies.

Alpha Masking Mesh Bodies

The question was raised as to whether use of the baking service would allow clothing creators to use alpha layers to hide body elements and stop them showing through mesh clothing worn by an avatar (as tends to be done with the system avatar and mesh clothing today), rather than, or alongside, the current mechanism whereby a mesh body (Maitreya, Slink, TMP, etc.) is split into numerous parts with multiple faces, which can have individual alphas applied to hide them.

Vir believes the baking service should be able to provide suitable body masking, given it can already do so for the system avatar, where an alpha baked into an appearance can be used to hide all or parts of an existing system avatar when seen by others.

Cathy Foil also suggested a means to “turn off” the default body parts on the system avatar (head, upper body, lower body), or the use of a second alpha channel. The first option is useful, but constrained – you can’t turn off hands or feet, for example, as they are defined within the upper / lower body parts. A second alpha channel offers greater flexibility, but adds to the complexity of implementation.

Overall, masking through the baking service – given there have been tests by body creators in the past to see how alphas within bakes work on mesh bodies – is seen as the more direct answer. It will obviously require people to go through a learning curve in understanding the application of bakes to meshes, any UI changes, etc. The project viewer – once available – is seen as a means of starting this learning process, as well as a means of determining what has been missed / what may additionally be required to make the capability useful.

Mixing Bento Hand Animations and Non-Bento Hand Morphs

BUG-100819, “Default hands spread wide during bento hand animations, making it impossible for Bento and non-Bento owners to play together” came up for discussion at the meeting.

In brief: the default system avatar uses a set of morphs to allow the hand to form a series of basic shapes – a relaxed hand pose, a fist pose, a spread fingers (default) pose, etc. – which can be triggered by an animation utilising an identifier. Bento animations, however, directly manipulate the 30-odd bones in the hand to produce hand and finger poses. As the system avatar cannot use these bones, the Bento animations are effectively ignored when run on a system avatar.

However, the underpinning system hand morphs can still be used by the system avatar providing the required morph is identified within the animation itself. When this is done, the animation will play for Bento avatars, or be ignored by system avatars in favour of the defined morph. But if no morph value is specified within the animation, the system avatar hand will adopt the default splayed fingers morph – which appears to be what is happening in the JIRA, possibly combined with an animation priority clash.

Medhue Simoni recently produced a live stream walk-through of mixing Bento animations and default hand morphs, and provided the link to that session at the meeting, which I’ve embedded below.

It has been suggested that the splayed fingers issue could be avoided by changing the system so that if a null value is specified in an animation (as opposed to leaving the field blank), the system avatar will adopt the relaxed hand morph. While Vir has agreed to look into this, adding such a null value will not automatically resolve the problem for animations which do not have any morph value defined – the system avatar will continue to use the splayed fingers morph.

Another suggestion is to have the exporter in the tool used to create the animation (e.g. Avastar) display a reminder that hand animations should have a morph value defined. This would make more sense, as it would be within the application where the animator can easily add a value if they had forgotten to do so.

General Discussions

  • Re-purposing Bento bones for pets – yes, this can be done, providing the re-purposed bones are not being used for anything else (e.g. if a pet attached to your avatar skeleton uses facial bones and you have a Bento head using the same bones, wearing both at the same time will result in conflicts).
  • Animated objects will overcome this by allowing completely independent pets, but it’s not clear at this point if these could be attached to an avatar, as that would mean combining two independent skeletons.
  • A request was made to increase the largest allowed size for prim creation (currently 64m x 64m). This is unlikely to happen.

Bento Bones and Weapons

Bento bones can be used with weapons, again providing they do not clash with other mesh using the same bones. In this, the wing bones would seem to be a good choice, given the groin, tail and rear leg bones can have a wide variety of uses, and may be more prone to clashes.

One problem with weapons is getting them to align with the hands. As Medhue pointed out in the meeting, he has discovered that getting rigged weapons to stay aligned to the hands when the avatar’s shape is changed is next to impossible. Instead, he recommends not rigging the weapon, then using the hand attachment point and animating that instead. This both allows the weapon to be animated and ensures the weapon remains closely matched to the hand no matter how the avatar is resized.
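
A minimal sketch of this approach, assuming an unrigged weapon worn on the right hand attachment point, with a hypothetical “sword_swing” animation in its inventory. Because the animation drives the attachment point rather than rigged bones, the weapon tracks the hand regardless of the avatar’s shape:

    default
    {
        attach(key id)
        {
            if (id != NULL_KEY)
            {
                llRequestPermissions(id, PERMISSION_TRIGGER_ANIMATION);
                llListen(0, "", id, "swing"); // listen for the wearer's chat command
            }
        }

        listen(integer channel, string name, key id, string message)
        {
            if (llGetPermissions() & PERMISSION_TRIGGER_ANIMATION)
            {
                llStartAnimation("sword_swing"); // animates the hand attachment point
            }
        }
    }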

SL project updates, 23/2: Content Creation Meeting

The Content Creation User Group meeting, at the Hippotropolis Camp Fire (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, June 8th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

A video recorded at the meeting by Medhue Simoni is embedded at the end of this update; my thanks to him for making it available. Timestamps in the text below refer to this recording. The meeting was disrupted by three region crashes, and this is reflected in the stream recording.

Asset HTTP Viewer

[2:50] The Asset HTTP RC viewer (version 5.0.6.326593 at the time of writing) has an update with LL’s QA. As noted in my last TPV Developer meeting update, this includes the new viewer update management code. It is now expected to appear in the release channel and an RC update in week #24 (week commencing Monday, June 12th).

Animated Objects

[3:18] Vir is continuing to work on the animated objects project, and now has an internal version of the viewer that hooks up to a correctly configured simulator. It is still some way from being ready to be offered as a project viewer, however.

Skeleton Positioning

[4:09] One issue to be considered with animated objects using the avatar skeleton is where the skeleton is supposed to be positioned. Avatars are placed by the simulator providing information on where the agent is; the bones are then positioned, things like hover height are applied, and whatever rigged objects are being worn are positioned relative to the skeleton’s position. With an animated object, the reverse is true: the object has a defined location, and some means needs to be found for the system to position the bones accordingly; it’s not currently clear how this should be done.

Vir has tried experimenting using the mPelvis bone, and aligning that with the object’s position, with mixed results. So, should the Lab simply pick a convention and have people build their animated objects accordingly, or should a smarter, more adaptive solution be sought?

Collisions

[10:50] Collisions (being struck by avatars, other objects): collision detection isn’t currently carried out in SL for skinned objects. However, Vir is considering calculating collisions based on the collision volumes of the skeleton, although this has yet to be investigated.

Setting a Prim as Object Root

[11:19] Cathy Foil has suggested using a prim as the root for an animated object, with the skeleton positioned relative to that prim. This has the advantage of potentially allowing the skeleton, as a child linkset of the root, to have physics; further, the prim could be set statically at a fixed location in a region, and the skeleton / object animated to roam independently, or it could be scripted to move (and even use Pathfinding), with the animated skeleton / object carried along with it. Thus, it could offer a flexible approach to the problem.
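
To illustrate the scripted movement option, a hedged sketch of a root prim set up as a wandering Pathfinding character using existing LSL calls; any linked, animated skeleton would simply be carried along with it:

    default
    {
        state_entry()
        {
            // Turn the linkset into a pathfinding character, then wander
            // within a 10m x 10m area around the current position
            llCreateCharacter([CHARACTER_DESIRED_SPEED, 1.5]);
            llWanderWithin(llGetPos(), <10.0, 10.0, 2.0>, []);
        }

        path_update(integer type, list reserved)
        {
            if (type == PU_FAILURE_UNREACHABLE)
            {
                llDeleteCharacter(); // give up cleanly if the region blocks movement
            }
        }
    }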

[14:34] One of the things Vir is aiming for is for creators to be able to take existing skinned mesh content and turn it into animated objects, without the need for the model to be re-worked / re-uploaded.

Multiple Rigged Meshes in an Animated Object

[17:38] With his current work, Vir believes it should be possible to have multiple rigged / skinned mesh objects animated by a single skeleton (e.g. so an avatar body can be split into the notional lower body, upper body, head). This could have some interesting uses providing the meshes don’t try to use the same bones.

Frame Rates

[20:05] Vir has had a number of animated objects running at the same time, and he has not seen a significant impact on frame rates. However, the caveat here is the relative rendering complexity of animated objects and how that affects client-side processing. The current hope is that the impact of any given animated object will equate to that of a similarly rigged and complex avatar, so the potential for performance impact is there; it’s just too early in the project to make any definitive statements.

Editing Size

[20:45] At the moment, the size of an object is governed by the size of the skeleton; it could be more flexible if the size of the object could be set / edited, with this then determining the size of the skeleton. This might, for example, be done by sizing the skeleton to the object’s bounding box (which adjusts as the object is resized). However, it’s again too early in the project to offer a definitive way this might be done.

[23:12] Cathy points out that were animated objects to have a root prim, their sizing could be tied to the size of that prim. So, for example, doubling the size of a root prim would double the size of the object.

Applying Baked Textures to Mesh Avatars

[33:41-35:45] A short explanation of this project for those unfamiliar with it. In brief, it is a means to apply composited texture bakes (skin, tattoo, clothing layers, etc.) to mesh bodies using the SL baking service, with the aim of potentially reducing the complexity of avatar bodies. This work is being carried out alongside animated meshes, but is not dependent upon that project (or vice-versa).

[29:06] Updates to the baking service to support baking textures on mesh avatars have now started. This is currently infrastructure work – updating the baking service to a newer version of Linux, etc.

After this, the first step in getting the service to work with mesh bodies will be updating it to support 1024×1024 textures and producing a corresponding viewer update. Once the latter is available for testing, then the Lab will be ready to look at the feature set for supporting bakes on mesh.

Materials Support and the Baking Service

[30:30] There may be a misunderstanding circulating that the baking service will “disable” materials on meshes. This is not the case.

The baking service has never supported materials processing, and the work to enable texture baking on meshes will not include extending the baking service to handle materials – this would be a huge undertaking. However, it will not prevent materials from being used via other means (application directly to the mesh, etc.), or any other way in which materials are used in-world.

What the baking service produces is a composited diffuse (texture) map. This may be less than is currently possible when using applier systems (which should continue to work alongside bakes on mesh). [40:34] It will also be possible to manually apply normal and specular maps to an avatar mesh using the bakes.
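
For illustration, manually applying a normal map to a face of a mesh is already possible through the standard LSL materials parameters – a minimal sketch, with a placeholder texture UUID standing in for a real normal map:

    string NORMAL_MAP = "00000000-0000-0000-0000-000000000000"; // placeholder UUID

    default
    {
        state_entry()
        {
            llSetPrimitiveParams([
                PRIM_NORMAL, 0, NORMAL_MAP, // apply to face 0
                <1.0, 1.0, 0.0>,            // repeats per face
                <0.0, 0.0, 0.0>,            // offsets
                0.0                         // rotation in radians
            ]);
        }
    }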

Baked Texture Delivery to a Mesh / Persistence

[31:53 and 38:47] Once a bake has been completed, it would be delivered to the mesh by means of flagging the face to which it is to be applied. This flag will remain persistent, so when the avatar appearance is updated, the texture will be re-applied to the face, until the face is flagged as requiring a different baked texture.

Arbitrary Use of Bakes

[36:24] As noted in my last Content Creation UG update, there has been some discussion of a more arbitrary use of baked textures, applying them to other objects, but this is not the focus of the current work. However, these ideas might be considered in the future.

Anchor Linden

[41:58] Anchor Linden is a new name at the Lab, and is currently working with Vir, focusing on the texture baking project.

Supplemental Animations

[41:38] The supplemental animations work, designed to overcome issues of animation states keyed by the server-side llSetAnimationOverride() conflicting with one another, is still on the cards; there has just been no further movement on it as yet.

General Discussion

[44:22-end] General discussion: mesh uploads, proper management of LODs, etc.

SL project updates week 21/3: Content Creation UG w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, May 25th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core points of the meeting. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting, but are ordered by subject matter.

A video recorded at the meeting by Medhue Simoni is embedded at the end of this update; my thanks to him for making it available. However, do note that this cuts out mid-way through the meeting. Timestamps in the text below refer to this recording.

Applying Baked Textures to Mesh Avatars

[1:54] This was announced as a new project – see my separate update for details.

The meeting saw additional questions asked about the baking service, which are summarised below.

Will the Baking Service Support Animated Objects?

  • Not initially. Baked textures are only relevant to your Current Outfit Folder (COF), affecting your appearance only. Animated objects will not have any notion of a COF (as they do not have an associated inventory structure as avatars do), so whose textures would an animated object show?
  • Also, even if you could assign your own COF-defined appearance to an animated object, it would only be valid until you change your own appearance, which would discard the bake used by the object, probably leaving it blank.
  • One solution might be allowing arbitrary textures to be sent to the baking service (see below). Another would be to allow animated objects to have their own notion of a COF, contained within the object itself, which the baking service could somehow reference.
    • Were this kind of work to be adopted, this would be Vir’s preferred approach. However, it is not currently a part of either the animated objects project or baking textures on meshes.

Baking Arbitrary Textures

Would it be possible to have a LSL function to request baking arbitrary textures?

  • Not as a part of applying baked textures to mesh, although it might be considered in the future.
  • However, the baking service could offer considerable flexibility of use were it to be extended, simply because of the way it defines the body area (head, upper body, lower body).
  • A problem is that, as noted above, baked textures are held only so long as your current avatar appearance defined via your COF is relevant, after which they are discarded. For the system to be useful with arbitrary textures, the resultant composite textures would need more rigorous storage, perhaps as a new asset class or retained in some form of “temporary” texture store – either of which would have to be defined and allowed for.
  • Thus, the problem is the amount of work involved in extending the baking service and (potentially) the asset handling required to support it.

HTTP Asset Viewer

[4:22] The HTTP Asset viewer was updated to version 5.0.6.326593 on Friday, May 26th. This update primarily brings the viewer to parity with the recently promoted release viewer, and so comprises the revised region / parcel access controls and the updates to Trash emptying behaviour.

Supplemental Animations

[6:53] As well as working on animated meshes, Vir is now also working on the LSL side of supplemental animations, alongside the LSL changes needed for animated objects. The work is designed to overcome issues of animation states keyed by the server-side llSetAnimationOverride() conflicting with one another.
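
For context, a minimal sketch of the existing llSetAnimationOverride() mechanism; “my_walk” is a hypothetical animation in the object’s inventory, and the fact that only one override per animation state can be active at a time is the root of the conflicts the supplemental animations work is intended to address:

    default
    {
        attach(key id)
        {
            if (id != NULL_KEY)
            {
                llRequestPermissions(id, PERMISSION_OVERRIDE_ANIMATIONS);
            }
        }

        run_time_permissions(integer perm)
        {
            if (perm & PERMISSION_OVERRIDE_ANIMATIONS)
            {
                llSetAnimationOverride("Walking", "my_walk"); // replaces the default walk
            }
        }
    }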

Animated Objects

Current Project Status

Vir has got basic prototyping working in a “hacked up” single version of the viewer. He’s now working on the shared experience – how is an animated object seen by multiple viewers.

There are still no details on what limits beyond land impact may be applied to animated objects (e.g. the number of animated objects – not avatars – permitted per region type, etc.), as there is not at this point any solid data on potential performance impact to help indicate the kind of limits which might be required.

Number of Allowed Animation Motions

[8:52] Currently, SL supports a total of 64 animation motions playing at one time per agent (hence walks, arm swings, wing flaps, tail swishes, etc., all of which can happen at the same time). It’s not been tested to see how much of an actual load running multiple animations places on a system. The limit might have to be changed as a result of animated objects – or it might not; it’ll come down to testing.

Other Items of Discussion

Avatar Scaling

[12:24-video end] There is a lengthy discussion on avatar scaling.

  • Essentially, the size slider works within a certain range; go beyond this, and distortions of body parts (e.g. facial features) can start to occur, as some sliders stop working properly.
    • Obviously, it is possible to scale avatars using animations, but again, doing so also doesn’t play nicely with the sliders.
  • This problem is particularly impactful with Tiny and Petite avatars (although it also affects really large avatars). One workaround is to upload a mesh without joint positions for the affected bones, but this causes breakages in the mesh. Thus, having a slider which could handle the avatar’s scale over a broader range might be beneficial. However:
    • Changing the definition of the current scale slider to work over a broader range isn’t an option, due to the risk of existing content breakage.
    • Adding a new “global scale” slider to the system might be possible. However, while this is relatively simple at the viewer end of things, SL is already close to its limit of 255 sliders, and any additional global slider would require significant changes to the back-end.
  • A further problem is that motion is not affected by scale, but is keyed to the current avatar size range. So, additional work would be required on the locomotion system to ensure the distance covered by an avatar’s stride is consistent with its size, adding further complexity to any changes.
  • Also, the ability to scale avatars would require animations using rotations only, as any use of translations could result in the locomotion issues noted above (e.g. a really small avatar would appear to zip along at 100s of miles an hour), and rotation-only animations are somewhat limiting.

BUG-20027: Allow joint-offset-relative translations in animations

Created during the Bento project, this feature request was originally closed as something the Lab could not implement. It has now been re-opened as people wanted to add further feedback to it. So, if you have an interest – please go and comment on the JIRA.

Cost of Animating via Bones vs. Using Flexis

The Lab views animating via flexis as being very inefficient, but has no numbers for a direct comparison with the cost of animating bones.

Improving IK Support

General requests have been made for SL to better support Inverse Kinematics (IK) to add greater flexibility of joint / extremity positioning. Vir has requested that if someone could start a feature request JIRA, open for comments, on what might be sought, it would be helpful.

Next Meeting

The next CCUG meeting will be Thursday, June 8th, 2017.