SL project updates 2017 10/2: Content Creation User Group w/audio

The Content Creation User Group meeting, Thursdays, 13:00 SLT

The following notes are taken from the Content Creation User Group meeting, held on Thursday, March 9th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

HTTP Fetching

As I’ve noted in several recent SL project updates, the Lab is shifting the fetching of landmarks, gestures, animations, shapes, sounds and wearables (system layer clothing) away from UDP delivery through the simulators to HTTP delivery via the CDN(s).

The simulator side of the code is already in place on Aditi and awaiting further testing (see here for more). Vir is heading up the viewer-side changes required to support this work, which is now getting “pretty close” to being available in a public viewer (most likely a project viewer). I’ll continue to update on this work through my various SL project update reports.

Rendering Costs

Vir has also been looking at the viewer-side rendering costs of various avatar models to improve the overall rendering cost calculations. This is more a data gathering exercise at the moment, but it is hoped it will lead to improved calculations when determining the overall rendering complexity of models, and will likely mean that, for example, the cost of rendering rigged meshes will at some point be updated.

This isn’t directly related to the potential of animating objects (e.g. for non-player characters). While the Lab is apparently still pondering on this as a possible project, it would mean back-end changes to calculate the land impact of avatar models used as NPCs, rather than alterations to the viewer-side rendering cost.

Animation Transitions

There have been, and continue to be, a number of issues with animation playback, some of which appear to be related to llSetAnimationOverride(), one of the server-side functions for controlling your animation state. Some of these were reported early on in Bento, which exacerbated matters in certain cases (e.g. quadrupeds crossing their forepaws).
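
By way of context, llSetAnimationOverride() replaces the default animation for a named state (such as “Walking” or “Landing”) with an animation held in the prim’s inventory. The following is a minimal sketch of typical usage, not code from the meeting; the animation names (“my_walk”, “my_land”) are hypothetical placeholders.

    // Minimal AO sketch (LSL): replace the default walk and landing animations.
    // "my_walk" and "my_land" are placeholder animations assumed to be in this
    // prim's inventory; the wearer must grant PERMISSION_OVERRIDE_ANIMATIONS.
    default
    {
        state_entry()
        {
            llRequestPermissions(llGetOwner(), PERMISSION_OVERRIDE_ANIMATIONS);
        }

        run_time_permissions(integer perms)
        {
            if (perms & PERMISSION_OVERRIDE_ANIMATIONS)
            {
                llSetAnimationOverride("Walking", "my_walk");
                llSetAnimationOverride("Landing", "my_land");
            }
        }
    }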

Issues can also occur with jump animation states (pre-jump, jump and landing), as has been reported in BUG-7488. During the meeting, Troy Linden and Medhue Simoni pointed to problems: for Troy, it was with respect to an avatar “sticking” in the landing animation, rather than returning to the expected standing animation; Medhue reported issues with playback in general, and with whether the transitions will actually play correctly.

It is not clear if these issues are all a part of the same problem, but feedback from the meeting is being relayed back to those at the Lab poking at things.

Information and Tools

People are still having problems finding Bento information on the SL wiki – such as the skeleton files. This is partially due to the files being on the Bento Testing page. It’s also not easy for new creators to find information on suitable tools (e.g. Avastar, Mayastar, etc.).

One suggested solution (allowing for the wiki currently being locked from general edits) is to have a general SL tools page where the various tools, etc., can be listed with links to their respective websites. This could include free tools (GIMP, Blender, Wings 3D, etc.), tools which are not specific to SL but can be used with it (e.g. Maya, ZBrush, etc.), and then add-ons like Avastar and Mayastar.

Such an approach, coupled with a clean-up of the Bento information, might be suited to being included in an overhaul of the wiki Good Building Practices pages the Lab is working on as and when resources are available. Troy has made a note to take these ideas back to the Lab.

Other Items

Transparency Rendering Cost

There was some discussion on whether the rendering cost of a rigged mesh should remain high if it is set to transparent. Some felt the cost should be lower, and Vir noted that the system avatar has a special UUID for an invisible texture which can reduce its rendering cost. However, rigged meshes may not be subject to this check, which may also depend on how the mesh is made transparent (i.e. via a texture or via the transparency setting). He also noted that rendering a rigged mesh as transparent could actually add cost compared to rendering it as opaque.

There was some discussion on whether simply having the mesh in memory, whether or not it is rendered, could add to its complexity. Vir indicated that as he’s not precisely sure how things are handled, he’d have a look at the code.

Calling Animation UUIDs via Script without the Animation Residing in Inventory

A question was asked whether it would be possible to have a script call an animation via the animation’s UUID without the animation being physically in the parent object’s inventory. The short answer to this is “no”.

While animations can be pulled from objects with modify permissions and used elsewhere, many items with animations (chairs, beds, etc.) tend to have their animations set to No Copy, limiting the ability to freely re-use them. If animations could be freely called via script using their UUID, this protection would be eliminated, as anyone with the UUID could use the animation as often as they wished, regardless of whether or not a version of the animation resides in their inventory.
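
As an illustration of the current model (a hedged sketch, not from the meeting): a script triggers an animation on an avatar by the name of an animation held in the same prim’s inventory, rather than by UUID. The animation name below (“my_dance”) is a hypothetical placeholder.

    // Sketch (LSL): the animation must exist in this prim's inventory before it
    // can be started on the avatar who grants permission; passing a bare UUID of
    // an animation you do not hold is not supported, per the meeting discussion.
    default
    {
        touch_start(integer num)
        {
            llRequestPermissions(llDetectedKey(0), PERMISSION_TRIGGER_ANIMATION);
        }

        run_time_permissions(integer perms)
        {
            if (perms & PERMISSION_TRIGGER_ANIMATION)
            {
                llStartAnimation("my_dance");
            }
        }
    }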

This conversation edged into the issue of people being able to pull Copy permissioned inventory from a No Modify object by opening it; however, that is something of a separate situation, which was not discussed further.

Avastar Status

AvaStar is now at release candidate 4, with RC 5 on its way, which may be the final RC before a release.

.ANIM Exporter for Maya

Aura Linden is re-working the code on her open-source Maya .ANIM exporter. She was originally working on it in Maya’s MEL scripting, which would make it compatible with all versions of Maya.

However, after encountering some problems, she is now coding it in Python. This means the exporter will initially only work with versions of Maya supporting Python (version 8.5 onwards). Once this work has been finished, Aura hopes to be able to go back and complete the exporter in MEL for older versions of Maya.

Mayastar Update

Cathy Foil will have an update for Mayastar available shortly. When the .ANIM exporter is available (above), it will be folded into Mayastar, although it is not exclusive to Mayastar.

Splitting the Avatar Shape into Different Elements

This was suggested some time ago as a possible Bento follow-up as a means of making it easier for users to mix and match heads and bodies by allowing different underpinning avatar shapes for both, which could be worn simultaneously. This was seen as particularly useful for users who are uncertain about customising their form using the sliders, or where creators provide a No Modify shape with their head or body product, limiting the user’s ability to modify one or the other. No definitive proposal has been put together on how this might be achieved.

Supplemental Animations

This was also the subject of early Content Creation meetings with Vir as a possible Bento follow-on project. The idea is to allow “supplemental” animations to run alongside the animation states keyed by llSetAnimationOverride(), effectively allowing them to play together, rather than conflicting with one another as is the case at the moment. This is still being considered, but no work has been carried out as yet.

Next Meeting

As Vir is out of the office in week #11, the next Content Creation meeting will be on Thursday, March 23rd, 2017 at 13:00 SLT.

SL project updates 2017 8/2: Content Creation User Group w/audio

The gathering: people gather for the CCUG, including a Bento ridable dragon, a work-in-progress by Teager (l) and a Bento wearable dragon, also a WIP by Thornleaf (r)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, February 23rd, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • HTTP asset fetching
  • Animating objects
  • Applying Baked Textures to Mesh Avatars

HTTP Fetching

As previously noted, the Lab is working on moving landmarks, gestures, animations, sounds and wearables (system layer clothing) from UDP delivery via the simulator to HTTP delivery via the CDN(s). This work is now progressing to the stage where initial testing is liable to be starting soon. It’s not clear if this is internal testing within the Lab, or whether it will involve wider (Aditi) testing as well. As things progress, expect the viewer-side changes to appear in a project viewer and then progress through the normal route of testing / update to RC and onwards towards release.

Potential Project: Animated Objects

As noted in my last Content Creation UG meeting notes, the Lab is taking a speculative look at using the current avatar skeleton to animate in-world objects, to provide a means for users to more easily create animated objects (e.g. non-player characters (NPCs), plants and trees responding to a breeze, mesh animals which do not rely on performance-hitting alpha swapping, etc.) – see feature request BUG-11368 for some of the ideas put forward which helped prompt the Lab’s interest.

It is important to note that this is still a speculative look at the potential; there is no confirmed project coming off the back of it, and the Lab is currently seeking feedback on how people might use the capability, were it to be implemented. No in-depth consideration has been given to how such a capability would be supported on the back end, or what changes would be required to the viewer.

One of the many issues that would need to be worked through is just the simple matter of how an object might be animated to achieve something like walking, running or flying. These require the simulator to make certain assumptions when handling an avatar which are not a part of object handling. There’s also the question of how the skeleton would be applied to an object.

Having animated objects does give rise to concerns over potential resource / performance impacts. For example, someone with a dozen animated pets running around them as animated objects could potentially have the same resource / performance overheads as thirteen actual avatars in a region.

One possible offset to this (although obviously, the two aren’t directly equatable) is that mesh animals / objects which currently use a lot of alpha flipping to achieve different “states” of “animation” (such as the squirrel which can jump from the ground and swing on a nut holder and jump back down again, or the peek-a-boo baby bears, etc., all of which are popular in gardens and public regions) could be made a lot more efficient were they to be animated, as the performance-hitting alpha swapping could be abandoned.

It was suggested that rather than having the full skeleton available for animated objects, it might be possible to use a sub-set of bones, or even the pre-Bento skeleton. Agreeing that this might be done, Vir pointed out that using the full skeleton would perhaps offer the most flexible approach, and also allow the re-use of existing content, particularly given that things like custom skeletons (also mooted) would be too big a project to undertake.

A closer look at Teager’s WIP Bento ridable dragon with Teager aboard, which has yet to be textured

Applying Baked Textures to Mesh Avatars

Interest is increasing in this potential project, which would allow baked textures – skins and wearable clothing layers – to be applied directly to mesh avatars via the baking service. This has also yet to be officially adopted by the Lab as a project, but there is considerable interest internally in the idea.

As I’ve previously reported, there is considerable interest in this idea, as it could greatly reduce the complexity of mesh avatar bodies by removing the need for them to be “onion skinned” with multiple layers. However, as I noted in that report, a sticking point is that currently, the baking service is limited to a maximum texture resolution of 512×512, whereas mesh bodies and parts (heads, feet, hands) can use 1024×1024.

There is concern that if the baking service isn’t updated to also support 1024×1024 textures, it would not be used, as skins and wearables using it would appear to be of lower quality than can be achieved when using applier systems on mesh bodies. Vir expressed doubt as to whether the detail within 1024×1024 textures is really being seen unless people are zoomed right in on other avatars, which, for most of the time we’re going about our SL lives and doing things, isn’t the case.
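
To put the resolution difference in perspective (my own back-of-envelope figures, not from the meeting, and assuming an uncompressed 32-bit RGBA texture once decoded from JPEG2000 for rendering):

    512 × 512 × 4 bytes per pixel  ≈ 1 MB per decoded layer
    1024 × 1024 × 4 bytes per pixel ≈ 4 MB per decoded layer

In other words, each step up in resolution roughly quadruples the memory and compositing work per bake, which is presumably part of the overhead the Lab would want to understand before committing to 1024×1024 support.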

Troy Linden wears a Bento octopus “backpack”

This led to a lengthy mixed text / voice discussion on texture resolution and on extending the baking service to support mesh avatars (were it to go ahead), which essentially came down to two elements:

  • The technical aspects of whether or not we actually get to see the greater detail in 1024×1024 textures most of the time we’re in-world, and of re-working the baking service to support 1024×1024 across all wearable layers, from skin up through to jacket.
  • The sociological aspect of whether or not people would actually use the baking service route with mesh avatars, if the texture resolution were left at 512×512, because of the perceived loss of detail involved.

Various compromises were put forward to try to work around the additional impact of updating the baking service to support 1024×1024 textures. One of these was that body creators might provide two versions of their products if they wish: one utilising appliers and 1024×1024 textures as is the case now, and the other supporting the baking service and system layers at 512×512, then leave it to users to decide what they want to use / buy. Another was a suggestion that baking service support could be initially rolled out at 512×512 and then updated to 1024×1024 support if there was a demand.

None of the alternative suggestions were ideal (in the two above, for example, creators are left having to support two product ranges, which could discourage them; while the idea of leaving the baking service at 512×512 falls into the sociological aspect of non-use mentioned previously). Currently, Vir appears to be leaning more towards updating the baking service to 1024×1024 were the project to be adopted, but the overheads in doing so still need to be investigated and understood.

Other Items

.ANIM Exporter for Maya

Cathy Foil indicated that Aura Linden has almost finished working on the .ANIM exporter she’s been developing for Maya. The hope is that the work will be completed in the next week or so. She also indicated that, in keeping with Medhue Simoni’s advice from a few weeks ago (see .BVH Animations and Animation Playback), she was able to overcome some of the issues being experienced with fine-tuning .BVH animation playback, although there are still problems.

The .ANIM exporter will be available for anyone using Maya, and is not something dependent upon Mayastar.

Avastar 2.0 in RC

The upcoming fully Bento compliant version of Avastar is now available as a release candidate.

IK Constraints

Tapple Gao has been looking at IK (Inverse Kinematics) constraints within Second Life. These aren’t widely used within existing animations – although up to about eight constraints can be defined – largely because the documentation doesn’t appear to be too clear. Tapple hopes to improve this through investigation and then updating the SL wiki.

Next Meeting

The next Content Creation meeting will be in two weeks, on Thursday, March 9th, at 13:00 SLT.

SL project updates 2017-7/2: Content Creation User Group w/audio + HTTP assets

The Content Creation User Group has re-formed out of the Bento User Group, and is held at the Hippotropolis Camp Fire Circle. Imp costumes entirely optional 😀 .

The following notes are taken from the Content Creation User Group meeting, held on Thursday, February 16th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • HTTP asset fetching
  • Potential project: animated objects

HTTP Asset Fetching

In 2013 / 2014, the Lab made a huge change to how avatar appearance information and texture and mesh assets are delivered to users, shifting them away from UDP (User Datagram Protocol) delivery through the simulators to HTTP delivery via Content Delivery Networks (CDNs) – see my past reports on the HTTP updates and CDN work.

As was indicated at several recent TPV Developer meetings (see here for an example), the Lab has been looking to move more asset types to delivery over the CDN, and this work has now started, with a focus on animations and sounds. This should see improvements in both the speed and reliability of asset delivery, which should be particularly beneficial to animations.

The work is in the early stages, and progress will be tracked through my SL project updates.

Potential Project: Animated Objects

A topic of common conversation at various user group meetings is that of animated objects – e.g. objects which can be animated but which are not necessarily part of the base avatar mesh, and / or things like non-player characters (NPCs).

Decent NPC a possible future project? Lab wants feedback on use-cases for animation objects

While it is still very speculative, the Lab is considering how this might be done and what sort of applications people would use such a capability for. One idea has already been extensively documented – “created agents”, which are avatars which do not necessarily have a connection to a viewer in order to operate – see feature request BUG-11368.

The main aim would be to use the same base avatar skeleton for this work, as well as keeping it compatible with existing rigged objects, rather than introducing something like custom skeletons (seen as a much bigger project). A lot would also depend upon things like performance impact (if the simulator is operating a certain volume of NPCs or ridable objects, for example, then these could impact resources which might otherwise be used by avatars, etc.).

One potential way of achieving desired results would be to animate rigged meshes using the avatar skeleton, but without necessarily having the actual avatar base mesh underpinning it. For example, when we use a mesh body for our avatars, we use the base avatar, but hide it with an alpha mask, with the avatar skeleton animating the worn mesh. With an animated object utilising the skeleton, there is no real need to have the underpinning base avatar, as it would in theory never be seen.

One issue is that many mesh models comprise multiple parts, so some means would be required to control them; without the base avatar, this could be lost, together with the ability to attach static objects to something like an NPC. Hence the idea put forward in BUG-11368; the “created agent” would effectively be a special object class, providing the means for multiple animated meshes to operate in concert.

It is unlikely that the bone limit for a given object would be raised to accommodate animated objects, as this is pretty much a limit imposed by people’s graphics cards. During testing, the Lab found that if too many joints are defined for a single object, some graphics cards are unable to render the object correctly. This impact has actually already been seen with some Bento content (FIRE-20763).

Other aspects which would have to be considered are things like Land Impact. Avatars don’t have a land impact, but that may have to change in the case of animated, avatar-like objects – again, see the performance concerns above. There are also some concerns over possible griefing vectors.

Performance-wise, a potential benefit would be that animated objects would not require alpha swapping, which carries a fairly hefty performance hit – but this could be countered to a degree (depending on where you are and how animated objects are used) by the volume of animated objects around you.

Right now the idea is still being discussed internally at the Lab – there is no defined project. However, if you have views on things, attending the Content Creation meetings would be a good place to get them heard.

Other Items

Applying Baked Textures to Mesh Avatars

Still under consideration is a project to allow baked textures to be applied directly to mesh avatars (see here for more). This has yet to be formally adopted by the Lab as a project.

Modelling for Efficient Rendering

The subject of efficiency and LODs was the focus of an extended conversation. As I reported in my last Content Creation UG meeting report, Medhue Simoni has been producing a series on the use of Level of Detail (LOD) to help with generating rendering efficient models in Second Life. All three parts of the series are now available on his YouTube channel, and he and I will be discussing them in this blog in the very near future.

SL project updates 2017-4/2: Content Creation User Group w/audio

The Content Creation User Group has re-formed out of the Bento User Group, and is held at the Hippotropolis Campfire Circle. Imp costumes entirely optional 😀 .

The following notes are taken from the Content Creation User Group meeting, held on Thursday, January 26th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • Potential follow-on projects
  • Modelling for efficient rendering
  • Animations
  • Outfits

Follow-up projects

There has been no further progression on the potential follow-on projects from Bento (see my week #2 Content Creation Group meeting report for details of follow-ups).

  • In terms of the baked texture on mesh idea, the Lab has asked for specific examples of content where “onion” meshes are used – and it has been reiterated this covers most of the common mesh bodies and heads.
  • There is still some confusion as to what may be causing the animation conflict issue. While it may be that changes will be made to the animation system in the future, as per the suggestions Vir put forward previously (see the link above), the solution for now is to try to address the issue at a scripting level to prevent conflicts.

Modelling for Efficient Rendering

Medhue Simoni has been producing a series on the use of Level of Detail (LOD) to help with generating rendering efficient models in Second Life. Part 1 is embedded below, and Part 2 can be found here. A third part will be available soon, and hopefully, he and I will be producing a companion article in this blog once that part is available on-line.

Efficient modelling for Second Life has long been a problem within the platform, and something exacerbated by the arrival of full mesh support. Given this:

  • The Lab is considering revising the rendering cost calculations “at some point” to encourage people to consider more efficient models (e.g. making more use of normal maps to add detail to models where appropriate, rather than relying on always producing an extremely high poly count model).
  • It has been suggested that some basic indicators of what might be considered “reasonable” numbers – poly counts, proportionate scaling of LODs, etc. – for models could be produced, together with videos (by content creators with a solid understanding of the subject and Second Life) on the efficient use of normal and specular maps.
Using a normal map to enhance the detail on a low-polygon model. The image on the left shows a model of some 4 million triangles. The centre image shows a model with just 500 triangles. The image on the right shows the 500-triangle model with a normal map taken from the model on the left applied to it. Credit: Wikipedia

The discussion broadened to cover awareness among content creators as to what actually works and where falsehoods / misunderstandings lie. A cited example in the meeting was that of mesh clothing makers avoiding the use of normal maps because they want their clothing to look the “same” to everyone, even though doing so can severely impact the user experience for those on lower-end systems, and thus discourage users from buying their goods.

.BVH Animations and Animation Playback

Animations can be uploaded to Second Life in one of two formats, .ANIM and .BVH. The latter are optimised as a part of the upload process, and this is proving to be a particular problem for Maya users when animating facial expressions. These require finer bone movements in the animation, which the optimisation process is filtering out, requiring Maya users to use exaggerated movements. Blender users can avoid the issue by using .ANIM, which does not pass through any form of optimisation / filtering.

While it is recognised that the thresholds used by the .BVH optimisation / filtering process may not be the best for very small bone movements, there are currently no plans to alter / refine the .BVH uploader. Nor is it really feasible to adjust the thresholds for hand and face bones, as this could have an adverse effect where these bones are re-purposed for other uses (as Bento is intended to allow).

There are two possible workarounds which may help with these issues for anyone using the .BVH format:

  • Swap to using .ANIM files, which do not go through any similar optimisation process. Unfortunately, this isn’t an option for Maya users, as there is currently no .ANIM exporter for Maya, although Aura Linden is working on one in her own time, and is hoping to get time in about three weeks to sit down and finish it
  • Alter the frame rate of the animation itself – so rather than creating it at 30 fps, try 15 or 10 fps, depending on the animation.

There was also some confusion over the maximum file size for animations; as per my 2016 week #25 report, this was increased from 120 Kb to 250 Kb in June 2016. The wiki page on SL limits has now been correctly updated to reflect this. It’s also worth noting, as an aside, that animations will be moving to delivery via the CDN network in the future.

Outfits

There was an extended conversation around Outfits and the Outfits folder. While much of this revolved around the Visual Outfits Browser option for Outfits, a couple of significant items were discussed.

The first was on the subject of saving gestures with outfits. As noted in my Bento update #26 and Bento update #27, gestures cannot be automatically saved with an outfit, but can be manually added as links / copies. However, Medhue Simoni has found a catch: should the outfit ever be re-saved, the gestures are removed. Expect a JIRA soon.

A common request is for the Outfit system to once again allow nested folders – the ability was removed with viewer 2.1 (see VWR-19774) – and while it was at the time noted as a possible “priority” item for consideration by the Lab, the topic has only recently come up again in internal conversations as a result of feature request BUG-41826. However, the amount of work involved to make it happen makes it unclear if the request will be accepted.

One request for Outfits which is unlikely to be acted upon is to have links to other folders in addition to objects. This is seen as even more complex than allowing nested folders within the Outfits hierarchy.

Next Meeting

The Next Content Creation User Group meeting will be on Thursday, February 16th, 2017.

SL project updates 2017-3/2: texture uploads, Content Creation UG

An Uncertain Destiny, Mystic; Inara Pey, January 2017, on Flickr – blog post

Server Deployments – Recap

  • The Main (SLS) channel was restarted on Tuesday, 17th January, but there was no associated code deployment
  • A new server maintenance package was deployed to the RC channels on Wednesday, January 18th, comprising a partial fix for (non-public) BUG-3286, “Can’t move object” fail notifications (fixes for regions/objects with longer names are pending) + internal server and logging enhancements

SL Viewer

No further updates to the current viewer pipeline list.

Texture Uploads and First Time Rendering

The Lab has been making some hardware (/communication?) changes to the texture upload / delivery mechanism (e.g. both the handling of texture uploads from the viewer to the asset system, and then from the asset system back out to the viewer via the CDN). When deployed to the main grid, these should see improvements in the uploading of new textures and their appearance on in-world objects, whether uploaded individually or in bulk (e.g. hopefully little / no grey panels in new texture upload previews when viewing them from inventory, and few / no grey object faces when uploading a texture and then immediately applying it to an object face).

Content Creation User Group

Summary of General Points

  • No further movement on the potential “Bento follow-up” project ideas (see my Content Creation UG notes for week #2).
  • The next Avastar release is in advanced testing, with a potential release around late February / March, but will include devkit support and a large number of bug fixes.
  • Appearance sliders:
    • A question was asked whether the slider system could be updated to allow for easier scaling for smaller avatars utilising Bento (as not all Bento bones are linked to sliders). Vir noted this is unlikely, as it would require a change to the base slider scaling which could break existing avatars.
    • However, Vir noted that suggestions to offer new sliders for sizing things like wings and tails have been discussed at the Lab, but nothing has as yet been decided.

Pain Points / Blockers to Bento Content

A general question thrown out by Simon Linden was whether people are experiencing particular “pain points” in producing Bento content: what they might be waiting for tools-wise, or in any other way, etc. This quickly spilled over into a much broader discussion on various tools and suggested tutorials. However, core points raised were:

  • Available time, learning to use external tools such as Blender,
  • Waiting on Avastar, plus tutorials, both generic and on using specific tools (e.g. Avastar and Mayastar) – which will hopefully come in time
  • Mention was made of making people more aware of SL-specific areas such as level of detail (LOD), managing physics, LI calculation rules, etc.

It was also noted that having some form of sample content (e.g. wings, hands) which creators could use as an example / baseline for their own creations would be helpful, together with a broader selection of documentation / tutorials / videos.

Simon pointed out that in terms of Lab-developed tutorials and documentation, there are only limited resources. Many third parties also produce tutorials (Medhue Simoni, for example, is waiting for the new Avastar to reach release before working on his video tutorials for it).

Others have also put together documentation, but are finding it hard to get that documentation seen above the broad range of misinformation on content creation which is in circulation. Vir has suggested that meaningful documentation and tutorials could be linked to through the SL wiki.

In terms of the wiki, there are a range of resources available for content creation / Bento:

 

SL project updates 2017-2/2: Content Creation User Group with audio

The Content Creation User Group has re-formed out of the Bento User Group, and is held at the Hippotropolis Campfire Circle. Imp costumes entirely optional 😀 .

The following notes are taken from the Content Creation User Group meeting, held on Thursday, January 12th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • Bento request from Troy Linden.
  • Supplemental animations that run alongside the main animation (e.g., flapping wings while walking).
  • Possible future project – applying baked textures to mesh avatars.

Bento Request From Troy Linden

Troy Linden is preparing a presentation on Project Bento for an upcoming Second Life meeting within Linden Lab in which he plans to review the project, the interactions with content creators, the benefits this brought to the project, etc. In particular, he would like to demonstrate Bento content people are making and impress on LL’s executives how the project has been received, and how things might be followed-up.

To help with this, he is requesting that anyone with glamour shots of Bento avatars, videos of avatars and Bento items, etc., contact him via IM to discuss and / or send him what they have (troy-at-lindenlab.com).

Supplemental Animations

Introduced in 2013, llSetAnimationOverride() is one of a series of animation commands keyed directly into the server’s animation states, allowing for faster, smoother animation state changes than with AO systems using the older llStartAnimation() approach. However, llSetAnimationOverride() only allows one animation to be played at a time for any given state, and this can lead to conflicts when trying to run custom animations as well (see BUG-41048). An example of this is trying to use llSetAnimationOverride() to walk whilst using an animation to flap wings (below), which causes the walk, set by llSetAnimationOverride(), to freeze in favour of the wing flapping, as it is also seen as a locomotion animation.
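
As a hedged illustration of that scenario (the animation names are placeholders, and the behaviour described is as reported at the meeting rather than anything I have tested), the conflict arises when two scripts such as the following run on the same avatar:

    // Script A (LSL, e.g. an AO HUD): replaces the default walk via the
    // server-side override.
    default
    {
        state_entry()
        {
            llRequestPermissions(llGetOwner(), PERMISSION_OVERRIDE_ANIMATIONS);
        }
        run_time_permissions(integer perms)
        {
            if (perms & PERMISSION_OVERRIDE_ANIMATIONS)
            {
                llSetAnimationOverride("Walking", "custom_walk"); // placeholder animation
            }
        }
    }

    // Script B (LSL, e.g. worn wings): plays a flap animation on top of whatever
    // else is running. Per the meeting, because the flap is also treated as a
    // locomotion animation, it can freeze the custom walk rather than blend with it.
    default
    {
        state_entry()
        {
            llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
        }
        run_time_permissions(integer perms)
        {
            if (perms & PERMISSION_TRIGGER_ANIMATION)
            {
                llStartAnimation("wing_flap"); // placeholder animation
            }
        }
    }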

Vir has identified two possible courses of action to deal with this. The first would be to extend llSetAnimationOverride() to allow “supplemental” animations to run alongside the animation states keyed by llSetAnimationOverride(), effectively allowing them to play together. The other would be to provide a means for people to define their own custom animation states (with associated animations) which the simulator would be able to recognise and handle alongside the existing animations states, rather than having the associated animation conflict with the default animation states.

No decision has been made on which route to take, and Vir is putting together a proposal on approaches, which he’ll put forward at a future meeting.

Applying Baked Textures to Mesh Avatars

This would allow the skin and clothing layers (skin, tattoo, undershirt, shirt, etc. – “wearables”) to be directly applied to mesh avatars. In theory, this could be done, and could make it easier to do things like match skins between, say, a mesh body and a non-mesh head without having to use applier systems. It could in theory even reduce the complexity of mesh avatars, which currently have to be made up of multiple layers (the so-called “onion meshes”).

 

A further benefit would be for non-human avatars as well. Provided the same UV map is used across all elements of an avatar, it could allow creators to offer different pelts / skins for their animal / creature avatars and, if they make their UV maps available to other creators, allow them to produce things like additional skins.

However, there are problems in proceeding this way. The baking service is capped at a limit of 512×512 texture resolution, which would mean a loss of detail in trying to “stretch” such textures over a mesh avatar, and could result in the ability being ignored in favour of the current “onion mesh” and applier approach. It might also mean that wearable layers would be used in non-standard ways (e.g. using a “skirt” layer to apply a skin), which could lead to user confusion (“why am I using a skirt to wear a skin?”) – although this could be overcome by adding further wearable types specific for use with avatar meshes to the system.

An alternative would be to increase the texture resolution for the baking service to 1024×1024. While not entirely ruled out, this does carry with it a set of unknowns as well: what would be the back-end resource hit, could it lead to an uptick in texture thrashing issues in the viewer, etc.

Baking Textures on In-World Mesh and Prim Surfaces

Part of the above discussion overlapped with the idea of allowing textures to be baked on arbitrary meshes (thus allowing for compositing, etc).

Vir noted that this would be a far more complex project due to the nature of the baking service, and thus would likely not be considered as a part of making changes to how system wearables might be applied to mesh avatars. However, he is interested in seeing feature requests on how this might be done and the benefits it would bring to SL, and a related JIRA – BUG-7486 – is in the process of being re-opened for comments along these lines.

Other Items

The latest version of Avastar in support of Bento is still undergoing testing. Those using it report it is behaving well, so hopefully a release won’t be too far off.