Tag Archives: Content Creation

SL project updates 8/2: Content Creation User Group w/audio

The gathering: people gather for the CCUG, including a Bento ridable dragon, a work-in-progress by Teager (l) and a Bento wearable dragon, also a WIP by Thornleaf (r)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, February 23rd, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • HTTP asset fetching
  • Animating objects
  • Applying Baked Textures to Mesh Avatars

HTTP Fetching

As previously noted, the Lab is working on moving landmarks, gestures, animations, sounds and wearables (system layer clothing) from UDP delivery via the simulator to HTTP delivery via the CDN(s). This work is now progressing to the stage where initial testing should be starting soon. It’s not clear if this will be internal testing within the Lab, or whether it will also involve wider testing on Aditi. As things progress, expect the viewer-side changes to appear in a project viewer and then follow the normal route of testing and updates through RC and onwards towards release.

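For illustration only – and with the caveat that the URL scheme and asset IDs shown are invented, not the actual Second Life CDN endpoints – the appeal of HTTP delivery is that it lets a client lean on standard web machinery (timeouts, retries, caching) that UDP delivery through the simulator cannot offer. A minimal Python sketch of that pattern:

```python
# Illustrative sketch only: the base URL and asset ID format here are invented,
# not the actual Second Life CDN endpoints. The point is simply that HTTP
# delivery allows standard timeouts, retries and caching.
import time
import urllib.error
import urllib.request

def fetch_asset(asset_id, base_url="https://asset-cdn.example.com"):
    """Fetch an asset (sound, animation, landmark, etc.) over HTTP with retries."""
    url = f"{base_url}/{asset_id}"
    for attempt in range(3):
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.read()   # raw asset bytes
        except urllib.error.URLError:
            time.sleep(2 ** attempt)     # simple exponential back-off before retrying
    raise RuntimeError(f"failed to fetch asset {asset_id}")
```
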
Potential Project: Animated Objects

As noted in my last Content Creation UG meeting notes, the Lab is taking a speculative look at using the current avatar skeleton to animate in-world objects, providing a means for users to more easily create animated objects (e.g. non-player characters (NPCs), plants and trees responding to a breeze, mesh animals which do not rely on performance-hitting alpha swapping, etc.) – see feature request BUG-11368 for some of the ideas put forward which helped prompt the Lab’s interest.

It is important to note that this is still a speculative look at the potential; there is no confirmed project coming off the back of it, and the Lab is currently seeking feedback on how people might use the capability, were it to be implemented. No in-depth consideration has been given to how such a capability would be supported on the back end, or what changes would be required to the viewer.

One of the many issues that would need to be worked through is just the simple matter of how an object might be animated to achieve something like walking, running or flying. These require the simulator to make certain assumptions when handling an avatar which are not a part of object handling. There’s also the question of how the skeleton would be applied to an object.

Having animated objects does give rise to concerns over potential resource / performance impacts. For example, someone with a dozen animated pets running around them as animated objects could potentially create the same resource / performance overheads as thirteen actual avatars in a region.

One possible offset to this (although obviously, the two aren’t directly comparable) is that mesh animals / objects which currently use a lot of alpha flipping to achieve different “states” of “animation” (such as the squirrel which can jump from the ground, swing on a nut holder and jump back down again, or the peek-a-boo baby bears, etc., all of which are popular in gardens and public regions) could be made a lot more efficient were they to be animated, as the resource- / performance-hitting alpha swapping could be abandoned.

It was suggested that rather than having the full skeleton available for animated objects, it might be possible to use a sub-set of bones, or even the pre-Bento skeleton. Agreeing that this might be done, Vir pointed out that using the full skeleton would perhaps offer the most flexible approach, and also allow the re-use of existing content, particularly given that things like custom skeletons (also mooted) would be too big a project to undertake.

A closer look at Teager’s WIP Bento ridable dragon with Teager aboard, which has yet to be textured

Applying Baked Textures to Mesh Avatars

Interest is increasing in this potential project, which would allow baked textures – skins and wearable clothing layers – to be applied directly to mesh avatars via the baking service. This also has yet to be officially adopted by the Lab as a project, but there is considerable interest internally in the idea.

As I’ve previously reported, there is considerable interest in this idea, as it could greatly reduce the complexity of mesh avatar bodies by removing the need for them to be “onion skinned” with multiple layers. However, as I noted in that report, a sticking point is that currently, the baking service is limited to a maximum texture resolution of 512×512, whereas mesh bodies and parts (heads, feet, hands) can use 1024×1024.

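To put some rough numbers on the difference (my own back-of-the-envelope arithmetic, not Lab figures): doubling the resolution of a bake quadruples its pixel count, and therefore its memory and bandwidth cost per layer.

```python
# Back-of-the-envelope arithmetic (not Lab figures): the memory cost of an
# uncompressed 32-bit RGBA texture at the two resolutions under discussion.
def texture_bytes(side, bytes_per_pixel=4):
    return side * side * bytes_per_pixel

baked_512  = texture_bytes(512)    # 1,048,576 bytes  (~1 MB)
baked_1024 = texture_bytes(1024)   # 4,194,304 bytes  (~4 MB)

print(baked_1024 / baked_512)      # 4.0 -> each 1024x1024 bake costs four times as much
```
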
There is concern that if the baking service isn’t updated to also support 1024×1024 textures, it would not be used, as skins and wearables applied through it would appear to be of lower quality than can be achieved when using applier systems on mesh bodies. Vir expressed doubt as to whether the detail within 1024×1024 textures is really being seen unless people are zoomed right in on other avatars, which, for most of the time we’re going about our activities in SL, isn’t the case.

Troy Linden wears a Bento octopus “backpack”

This led to a lengthy mixed text / voice discussion on texture resolution and on extending the baking service to support mesh avatars (were it to go ahead), which essentially came down to two elements:

  • The technical aspects: whether or not we actually get to see the greater detail of 1024×1024 textures most of the time we’re in-world, and the work involved in updating the baking service to support 1024×1024 across all wearable layers, from skin up through to jacket.
  • The sociological aspect: whether or not people would actually use the baking service route with mesh avatars if the texture resolution were left at 512×512, because of the perceived loss of detail involved.

Various compromises were put forward to try to work around the additional impact of updating the baking service to support 1024×1024 textures. One of these was that body creators might provide two versions of their products if they wish: one utilising appliers and 1024×1024 textures as is the case now, and the other supporting the baking service and system layers at 512×512, then leave it to users to decide what they want to use / buy. Another was a suggestion that baking service support could be initially rolled out at 512×512 and then updated to 1024×1024 support if there was a demand.

None of the alternative suggestions were ideal (in the two above, for example, creators are left having to support two product ranges, which could discourage them; while the idea of leaving the baking service at 512×512 falls into the sociological aspect of non-use mentioned previously). Currently, Vir appears to be leaning more towards updating the baking service to 1024×1024 were the project to be adopted, but the overheads in doing so still need to be investigated and understood.

Other Items

.ANIM Exporter for Maya

Cathy Foil indicated that Aura Linden has almost finished working on the .ANIM exporter she’s been developing for Maya. The hope is that the work will be completed in the next week or so. She also indicated that, in keeping with Medhue Simoni’s advice from a few weeks ago (see .BVH Animations and Animation Playback), she was able to overcome some of the issues being experienced with fine-tuning .BVH animation playback, although there are still problems.

The .ANIM exporter will be available for anyone using Maya, and is not something dependent upon Mayastar.

Avastar 2.0 in RC

The upcoming fully Bento compliant version of Avastar is now available as a release candidate.

IK Constraints

Tapple Gao has been looking at IK (Inverse Kinematics) constraints within Second Life. These aren’t widely used within existing animations – although up to about eight constraints can be defined – largely because the documentation doesn’t appear to be too clear. Tapple hopes to improve this through investigation and then updating the SL wiki.

Next Meeting

The next Content Creation meeting will be in two weeks, on Thursday, March 9th, at 13:00 SLT.


SL project updates 2017-7/2: Content Creation User Group w/audio + HTTP assets

The Content Creation User Group has re-formed out of the Bento User Group, and is held at the Hippotropolis Campfire Circle. Imp costumes entirely optional 😀.

The following notes are taken from the Content Creation User Group meeting, held on Thursday, February 16th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • HTTP asset fetching
  • Potential project: animated objects

HTTP Asset Fetching

In 2013 / 2014, the Lab made a huge change to how avatar appearance information and texture and mesh assets are delivered to users, shifting them away from UDP (User Datagram Protocol) delivery through the simulators to HTTP via Content Delivery Networks (CDNs) – see my past reports on the HTTP updates and CDN work.

As was indicated at several TPV Developer meetings recently (see here for an example), the Lab has been looking to move more asset types to delivery over the CDN, and this work has now started, with a focus on animations and sounds. This should see improvements in both the speed and reliability of asset delivery, which should be particularly beneficial to animations.

The work is in the early stages, and progress will be tracked through my SL project updates.

Potential Project: Animated Objects

A topic of common conversation at various user group meetings is that of animated objects – e.g. objects which can be animated but which are not necessarily part of the base avatar mesh, and / or things like non-player characters (NPCs).

Decent NPCs a possible future project? The Lab wants feedback on use-cases for animated objects

While it is still very speculative, the Lab is considering how this might be done and what sort of applications people would use such a capability for. One idea has already been extensively documented – “created agents”, which are avatars which do not necessarily have a connection to a viewer in order to operate – see feature request BUG-11368.

The main aim would be to use the same base avatar skeleton for this work, as well as keeping it compatible with existing rigged objects, rather than introducing something like custom skeletons (seen as a much bigger project). A lot would also depend upon things like performance impact (if the simulator is operating a certain volume of NPCs or ridable objects, for example, then these could impact resources which might otherwise be used by avatars, etc.).

One potential way of achieving desired results would be to animate rigged meshes using the avatar skeleton, but without necessarily having the actual avatar base mesh underpinning it. For example, when we use a mesh body for our avatars, we use the base avatar, but hide it with an alpha mask, with the avatar skeleton animating the worn mesh. With an animated object utilising the skeleton, there is no real need to have the underpinning base avatar, as it would in theory never be seen.

One issue is that many mesh models comprise multiple parts, so some means would be required to control them; this could be lost without the base avatar, together with the ability to attach static objects to something like an NPC. Hence the idea put forward in BUG-11368: the “created agent” would effectively be a special object class, providing the means for multiple animated meshes to operate in concert.

It is unlikely that the bone limit for a given object would be raised to accommodate animated objects, as this is pretty much a limit imposed by people’s graphics cards. During testing, the Lab found that if too many joints are defined for a single object, some graphics cards are unable to render the object correctly. This impact has actually already been seen with some Bento content (FIRE-20763).

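To see why such a limit exists at all, here is a deliberately simplified sketch (my own assumed model of GPU skinning, not the Lab’s figures): each joint a rigged mesh uses typically adds a 4×4 float matrix to the skinning data the graphics card must hold for that mesh, so the per-mesh cost grows with every joint defined.

```python
# Simplified, assumed model of GPU skinning cost - not Linden Lab's figures.
# Each joint in a rigged mesh typically contributes one 4x4 matrix of 32-bit
# floats to the skinning palette a graphics card holds for that mesh, which is
# one reason older cards struggle when too many joints are defined per object.
BYTES_PER_JOINT_MATRIX = 4 * 4 * 4   # 16 floats x 4 bytes each

for joints in (32, 64, 128, 256):    # arbitrary example joint counts
    palette = joints * BYTES_PER_JOINT_MATRIX
    print(f"{joints:>3} joints -> {palette:>5} bytes of skinning data per mesh")
```
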
Other aspects which would have to be considered are things like Land Impact. Avatars don’t have a land impact, but that may have to change in the case of animated, avatar-like objects – again, see the performance concerns above. There are also some concerns over possible griefing vectors.

Performance-wise, a potential benefit would be that animated objects would not require alpha swapping, which carries a fairly hefty performance hit – although this could be countered to a degree (depending on where you are and how animated objects are used) by the sheer volume of animated objects around you.

Right now the idea is still being discussed internally at the Lab – there is no defined project. However, if you have views on things, attending the Content Creation meetings would be a good place to get them heard.

Other Items

Applying Baked Textures to Mesh Avatars

Still under consideration is a project to allow baked textures to be applied directly to mesh avatars (see here for more); it has yet to be formally adopted by the Lab as a project.

Modelling for Efficient Rendering

The subject of efficiency and LODs was the focus of an extended conversation. As I reported in my last Content Creation UG meeting report, Medhue Simoni has been producing a series on the use of Level of Detail (LOD) to help with generating rendering-efficient models in Second Life. All three parts of the series are now available on his YouTube channel, and he and I will be discussing them in this blog in the very near future.

SL project updates 2017-4/2: Content Creation User Group w/audio

The Content Creation User Group has re-formed out of the Bento User Group, and is held at the Hippotropolis Campfire Circle. Imp costumes entirely optional 😀.

The following notes are taken from the Content Creation User Group meeting, held on Thursday, January 26th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • Potential follow-on projects
  • Modelling for efficient rendering
  • Animations
  • Outfits

Follow-up projects

There has been no further progression on the potential follow-on projects from Bento (see my week #2 Content Creation Group meeting report for details of follow-ups).

  • In terms of the baked texture on mesh idea, the Lab has asked for specific examples of content where “onion” meshes are used – and it has been reiterated that this includes most of the common mesh bodies and heads
  • There is still some confusion as to what may be causing the animation conflict issue. While it may be that changes will be made to the animation system in the future, as per the suggestions Vir put forward previously (see the link above), the solution for now is to try to address the issue at a scripting level to prevent conflicts.

Modelling for Efficient Rendering

Medhue Simoni has been producing a series on the use of Level of Detail (LOD) to help with generating rendering-efficient models in Second Life. Part 1 is embedded below, and Part 2 can be found here. A third part will be available soon, and hopefully, he and I will be producing a companion article in this blog once that part is available on-line.

Efficient modelling for Second Life has long been a problem within the platform, and something exacerbated by the arrival of full mesh support. Given this:

  • The Lab is considering revising the rendering cost calculations “at some point” to encourage people to consider more efficient models (e.g. making more use of normal maps to add detail to models where appropriate, rather than relying on always producing an extremely high poly count model).
  • It has been suggested that some basic indicators of what might be considered “reasonable” for models – poly counts, proportionate scaling of LODs, etc. – could be produced, together with videos (by content creators with a solid understanding of the subject and Second Life) on the efficient use of normal and specular maps.

Using a normal map to enhance the detail on a low-polygon model. The image on the left shows a model of some 4 million triangles. The centre image shows a model with just 500 triangles. The image on the right shows the 500-triangle model with a normal map taken from the model on the left applied to it. Credit: Wikipedia

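As a rough sense of scale (the triangle counts come from the caption above; the per-vertex byte figure and the unshared-vertex simplification are my own assumptions), the low-poly model plus a normal map is orders of magnitude cheaper than the high-poly original:

```python
# Rough sense of scale only. Triangle counts are taken from the caption above;
# the ~32 bytes per vertex (position, normal, UV) and the "no shared vertices"
# simplification are assumptions for illustration.
def geometry_bytes(triangles, bytes_per_vertex=32):
    return triangles * 3 * bytes_per_vertex   # 3 vertices per triangle, no sharing

high_poly  = geometry_bytes(4_000_000)        # ~384 MB of raw vertex data
low_poly   = geometry_bytes(500)              # ~48 KB
normal_map = 1024 * 1024 * 4                  # one 1024x1024 RGBA normal map, ~4 MB

print(high_poly / (low_poly + normal_map))    # the high-poly model is roughly 90x heavier
```
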
The discussion broadened to cover awareness among content creators as to what actually works and where falsehoods / misunderstandings lie. An example cited in the meeting was that of mesh clothing makers avoiding the use of normal maps because they want their clothing to look the “same” to everyone, even though doing so can severely impact the user experience for those on lower-end systems, and thus discourage users from buying their goods.

.BVH Animations and Animation Playback

Animations can be uploaded to Second Life in one of two formats, .ANIM and .BVH. The latter are optimised as a part of the upload process, and this is proving to be a particular problem for Maya users when animating facial expressions. These require finer bone movements in the animation, which the optimisation process is filtering out, requiring Maya users to use exaggerated movements. Blender users can avoid the issue by using .ANIM, which does not pass through any form of optimisation / filtering.

While it is recognised that the thresholds used by the .BVH optimisation / filtering process may not be the best for very small bone movements, there are currently no plans to alter / refine the .BVH uploader. Nor is it really feasible to adjust the thresholds for hand and face bones, as this could have an adverse effect where these bones are re-purposed for other uses (as Bento is intended to allow).

There are two possible workarounds which may help with these issues for anyone using the .BVH format:

  • Swap to using .ANIM files, which do not go through any similar optimisation process. Unfortunately, this isn’t an option for Maya users, as there is currently no .ANIM exporter for Maya, although Aura Linden is working on one in her own time, and is hoping to get time in about three weeks to sit down and finish it
  • Alter the frame rate of the animation itself – so rather than creating it at 30 fps, try 15 or 10 fps, depending on the animation (a quick illustration of why this can help follows this list).

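A quick illustration of why lowering the frame rate can help (the three-degree figure is an arbitrary example; the actual .BVH filter thresholds are not documented here): spreading the same movement over fewer frames makes each per-frame change larger, so it is less likely to be discarded as noise by the upload optimisation.

```python
# Arbitrary example values - the actual .BVH optimisation thresholds are not
# documented here. The same small bone rotation spread over one second gives a
# larger per-frame change at lower frame rates, so it is less likely to be
# filtered out on upload.
total_rotation_degrees = 3.0

for fps in (30, 15, 10):
    per_frame = total_rotation_degrees / fps
    print(f"{fps:>2} fps -> {per_frame:.2f} degrees of movement per frame")
```
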
There was also some confusion over the maximum file size for animations; as per my 2016 week #25 report, this was increased from 120 Kb to 250 Kb in June 2016. The wiki page on SL limits has now been correctly updated to reflect this. It’s also worth noting, as an aside, that animations will be moving to delivery via the CDN network in the future.

Outfits

There was an extended conversation around Outfits and the Outfits folder. While much of this revolved around the Visual Outfits Browser option for Outfits, a couple of significant items were discussed.

The first was on the subject of saving gestures with outfits. As noted in my Bento update #26 and Bento update #27, gestures cannot be automatically saved with an outfit, but can be manually added as links / copies. However, Medhue Simoni has found a catch: should the outfit ever be re-saved, the gestures are removed. Expect a JIRA soon.

A common request is for the Outfit system to once again allow nested folders; the ability was removed with viewer 2.1 (see VWR-19774), and while it was at the time noted as a possible “priority” item for consideration by the Lab, the topic has only recently come up again in internal conversations as a result of feature request BUG-41826. However, the amount of work involved in making it happen makes it unclear whether the request will be accepted.

One request for Outfits which is unlikely to be acted upon is the ability to include links to other folders in addition to objects. This is seen as even more complex than allowing nested folders within the Outfits hierarchy.

Next Meeting

The next Content Creation User Group meeting will be on Thursday, February 16th, 2017.

SL project updates 2017-3/2: texture uploads, Content Creation UG

An Uncertain Destiny, Mystic; Inara Pey, January 2017, on Flickr (blog post)

Server Deployments – Recap

  • The Main (SLS) channel was restarted on Tuesday, 17th January, but there was no associated code deployment
  • A new server maintenance package was deployed to the RC channels on Wednesday, January 18th, comprising a partial fix for (non-public) BUG-3286, “Can’t move object” fail notifications (fixes for regions/objects with longer names are pending) + internal server and logging enhancements

SL Viewer

No further updates to the current viewer pipeline list.

Texture Uploads and First Time Rendering

The Lab has been making some hardware (and possibly communication) changes to the texture upload / delivery mechanism (i.e. both the handling of texture uploads from the viewer to the asset system, and delivery from the asset system back out to the viewer via the CDN). When deployed to the main grid, these should see improvements in the uploading of new textures and their appearance on in-world objects, whether uploaded individually or in bulk (e.g. hopefully little / no grey panels in new texture upload previews when viewing them from inventory, and few / no grey object faces when uploading a texture and then immediately applying it to an object face).

Content Creation User Group

Summary of General Points

  • No further movement on the potential “Bento follow-up” project ideas (see my Content Creation UG notes for week #2).
  • The next Avastar release is in advanced testing, with a potential release around late February / March, and will include devkit support and a large number of bug fixes.
  • Appearance sliders:
    • A question was asked whether the slider system could be updated to allow for easier scaling for smaller avatars utilising Bento (as not all Bento bones are linked to sliders). Vir noted this is unlikely, as it would require a change to the base slider scaling which could break existing avatars.
    • However, Vir noted that suggestions to offer new sliders for sizing things like wings and tails have been discussed at the Lab, but nothing has as yet been decided.

Pain Points / Blockers to Bento Content

A general question thrown out by Simon Linden was whether people are experiencing particular “pain points” in producing Bento content: what they might be waiting for, tools-wise or in any other way, etc. This quickly spilled out into a much broader discussion on various tools and suggested tutorials. However, the core points raised were:

  • Available time, and learning to use external tools such as Blender
  • Waiting on Avastar, plus tutorials, both generic and on using specific tools (e.g. Avastar and Mayastar) – which will hopefully come in time
  • Mention was made of making people more aware of SL-specific areas such as level of detail (LOD), managing physics, LI calculation rules, etc.

It was also noted that having some form of sample content (e.g. wings, hands) which creators could use as an example / baseline for their own creations might be helpful, together with a broader selection of documentation / tutorials / videos.

Simon pointed out that in terms of Lab-developed tutorials and documentation, there are only limited resources. Many third parties also produce tutorials (Medhue Simoni, for example, is waiting for the new Avastar to reach release before working on his video tutorials for it).

Others have also put together documentation, but are finding it hard to get that documentation seen and heard above the broad range of misinformation on content creation which is in circulation. Vir has suggested that meaningful documentation and tutorials could be linked to through the SL wiki.

In terms of the wiki, there are a range of resources available for content creation / Bento: