LL announce a pause in the current SL AI character designer project

via Linden Lab

On Thursday, July 31st, Linden Lab provided an update on the AI Character Generation project, indicating it is to be paused / closed at the start of September.

The project was initially launched in December 2024 and powered by Convai, a platform for developers and creators providing an intuitive approach to designing characters with multimodal perception abilities in both virtual and real-world environments (see: Linden Lab leverage Convai for AI character generation in Second Life). However, it was shortly thereafter suspended as a result of community feedback, before being re-launched to a wider audience of potential users at the end of February 2025.

The Character Designer was launched as an experimental feature to explore the potential of AI-powered characters in Second Life. Built in collaboration with our AI technology partner Convai, this tool enabled residents to create interactive, virtual characters with conversational capabilities.
From elaborate roleplay scenarios to immersive visitor greeters, your projects and feedback have been invaluable. This pause gives us time to carefully evaluate everything we’ve learned and determine how best to evolve this technology in a way that aligns with the broader future of Second Life.
This is not the end of AI in Second Life; rather, it is a thoughtful pause as we refine our strategy and continue exploring new opportunities for innovation.

– Linden Lab blog post

The “pause” is set to come into effect from Monday, September 1st, 2025, with the Lab further noting that as of that date:

  • It will no longer be possible to create, deploy, or run AI Characters using the Character Designer interface.
  • Characters created through the Designer will no longer function or appear in-world.
  • Previously created characters and their memory will not be retained post-pause.
  • Any alt accounts created specifically for testing the Character Generator will remain valid Second Life accounts, and can be logged into just like any other alt account.

Community support for the project will continue through the following channels:

  • A dedicated forum thread for on-going Q&A and feedback.
  • Second Life Discord for real-time responses from staff and developers.
  • Support Portal for any account-specific issues.

In addition, those who have used the Character Generator are encouraged to record their work during the wind-down period and share video through the forum thread or suitable platforms.

The sunsetting of this project does not preclude further projects and experiments in the use of AI technologies, with the blog post also stating:

 This is not the end of AI features in Second Life—we’re using this moment to regroup and plan for future development … We are actively and cautiously experimenting with other AI technologies to enhance Second Life’s creative potential, performance, and immersion. The insights from this project are already helping to inform future efforts.

For further information, please refer to the official blog post, which includes a short-term FAQ.

Linden Lab leverage Convai for AI character generation in Second Life

via Linden Lab

Update, December 21st: this alpha experiment has been suspended for the time being – read more here.

On December 18th, 2024, Linden Lab announced the Alpha release of a new Character Designer, a toolset for the creation and management of AI-backed NPCs (non-player characters) for in-world use.

Access to the new Character Designer is, at the time of writing, limited to Premium Plus account holders only – as was originally the case with access to the Second Life Mobile app – so expect it to be expanded over time in a similar manner.

The Character Designer provides a solid foundation for immersive roleplay, offering a range of features that let you shape and refine the personalities and behaviours of your inworld characters. Current capabilities include:
  • Early-Stage Roleplay Support – Characters respond intelligently through IM, adapting and evolving as you interact.
  • Custom Personalities and Backstories – Define unique histories, preferences, and communication styles to bring your characters to life.
  • Integration with Existing SL Systems – Connect through a dedicated alt account, set arrival points, and fine-tune behaviour filters for a seamless inworld experience.
For a comprehensive overview of all options and detailed instructions, please see the Getting Started with the AI Character Designer guide.

– from the Second Life blog post on the Character Designer

Character Designer is being developed in collaboration with Convai, a platform for developers and creators providing an intuitive approach to designing characters with multimodal perception abilities in both virtual and real-world environments.

Precisely how much of the Convai interface and workflow has been incorporated into the Second Life Character Designer flow is unclear to me (primarily because I do not have a Premium Plus account, and so am currently ineligible for the alpha trial). However, aspects of Convai’s ability to build character backstories and personalities appear to have been utilised, and will potentially be built upon, with knowledge and situational awareness capabilities recorded among a list of “planned enhancements” for the toolset.

With multiple experiments already in progress within Second Life in the use of AI alongside in-world NPCs, the development of the Character Generator and the noted work on upcoming NPC functionality specifically to complement its use could be of interest to many developing NPCs for a range of capabilities: immersive role-play environments, shopping assistants, tour guides, simulations and training, education, and more.

Soon, you will be able to place multiple dynamic NPCs that welcome visitors, provide assistance, and enrich the overall atmosphere of your regions. By combining rule-based behaviour tools with Convai’s Narrative Design system, you can craft evolving narratives that adapt to your visitors’ actions and choices.

– from the Second Life blog post on the Character Designer

Convai promotional video. Note that not all capabilities shown may be applicable to the current SL Character Designer

Further information on the new capability can be found within the official blog post announcement, and the links below. I hope to be able to report more on the capability as I gain access to it with other Premium subscribers in due course.

Related Links

SL project updates 16/3: TPV Developer meeting, servers

Butterfly Conservatory – blog post

The majority of the following notes are taken from the following sources:

  • The TPV Developer meeting of Friday, April 21st. A video of the meeting is embedded (my thanks to North, as always), and time stamp references appearing in the text relate to that video.
  • The Server Beta User Group meeting of Thursday, April 20th.

Server Deployments – Recap

As always, please refer to the server deployment thread for the latest information.

  • On Tuesday, April 18th the Main (SLS) channel received the server maintenance package previously deployed to the RC channels in week #15.
  • On Wednesday, April 19th, the RC channels received the improved region capacity and access capabilities previously on the McRib micro-channel, which has been reabsorbed into the RCs.

Simulator OS Update

The simulator build using a new version of Linux is now on the Main (SLS) grid, but currently restricted to the Cake micro-channel. Region holders with a defined interest in testing their products, scripted objects, etc., on the build should contact Concierge Support to request an opt-in to the channel.

SL Viewer

Release Viewer

[02:15] The release viewer was updated on Wednesday, 19th April with the promotion of version 5.0.4.325124, formerly the Maintenance Release viewer.

This viewer includes a number of important updates, including:

  • Avatar Complexity Rendering Updates, including avatar rendering exceptions
  • Grid Status Display Toolbar Button
  • Improvements to the snapshot floater and inventory offer messages
  • Block list improvements

See my overview of this viewer (from when it was an RC release) for more.

HTTP Asset Viewer

[02:38] The AssetHTTP release candidate updated to version 5.0.5.325600 on Thursday, April 20th. This viewer moves fetching of several types of assets to HTTP. This update was primarily to merge the HTTP code with the new release viewer, but also includes additional logging code in an effort to try to determine why the previous version had an elevated crash rate.

Voice Viewer

[02:56] This RC viewer was withdrawn due to a high crash rate. An updated version may soon be available, but is dependent upon another bug being fixed, described as a “really loud, horrible screech in your headphones” if you teleport when someone is talking.

Once available, this is seen as an important update for TPVs to pick up, not only for the new voice updates but also because it fixes a number of bugs in certificate handling.

Project Alex Ivy 64-bit Viewer

[03:45] This viewer is awaiting a further update, which will hopefully appear in week #17 (commencing Monday, April 24th), pending the outcome of QA testing. The update will include 64-bit Havok on the Mac (it is already in the Windows version).

E-mail Verification

On Wednesday, April 19th the Lab posted about Making Email From Second Life (More) Reliable, and the need for users to verify their e-mail addresses with the Lab (detailed instructions on which can be found here).

[05:17] There will be an updated viewer supporting this (for IMs-to-e-mail, etc.), available “soon”.

Improved Estate / Parcel Access Controls

[09:08] A viewer supporting the server-side changes to the Public Access settings at region / parcel level (in short, parcel owners will not be able to set their parcels to restricted access if the region is explicitly set to Public Access at the Estate level), will be out shortly. Commenting on this, Grumpity Linden said:

Hopefully, we’ll have the first iteration out next week [week #17], and we might have to have some back-and-forth to make sure that the behaviour makes sense.

It is hoped this viewer will also resolve issues such as BUG-4994, which results in a parcel being set to Group access (and gaining ban lines) if both the Public and Group access options are checked.

Improved Estate Ban List Management

[12:18] In February it was indicated that the current capabilities for managing estate ban lists are far from ideal. The ban list is confined to a small area of the World > Region Details > Estate tab, which is currently shared with three other lists. It is also non-searchable, making locating individuals for removal from very large lists time-intensive and difficult.

The Lab is working to provide a larger space for managing estate ban lists, with Grumpity Linden noting:

What I hope is not going to be unreasonably complicated to do, is to actually give you additional information, like when the user was banned and by whom. But we still need to see whether that is hard to do. It’s not going to be a part of the other UI changes for estate controls, because we want to get those out quickly; [but] I expect both of these things will require a couple of iterations.

Other Items

Second Life Statistics Stuck

[10:04] There is an issue with the statistics for the number of concurrent users on-line, Linden dollar exchange rate and number of daily sign-ups, which have remained frozen at the same values since April 12th, 2017 (the number of users on-line statistic, often referenced on the log-in splash screen by a number of viewers, spent several days stuck at 44,647, for example) – see BUG-100468. The Lab is looking at the issue, but states it might take “a little while” to determine the problem.

Media Volume Issues

This breaks down into two areas:

  • Groups involved in the Community Gateway programme report that many incoming new users complain about the default volume at which the official viewer auto-plays streaming media on logging-in. An informal request has been made for the Lab to adjust the volume level downwards, but no work has been done on this (and no JIRA formally requesting the change has been raised)
  • [17:30] BUG-40937: “Shared media a great distance away (different region even) sometimes plays at maximum volume when entering a region or moving camera slightly” – the Lab has promised to discuss this, but is making no promises as to what might be done by way of resolution.

Fun Fact – Maintenance Viewer Internal Names

[19:37] Grumpity and Oz Linden revealed that they use internal project names to identify the various Maintenance viewers which are either under development or in flight. For some time now, these viewers have been named for assorted alcoholic drinks!

“For a while we stuck with drinks beginning with ‘s’,” Grumpity said, “but we had to expand.”

“They started getting pretty obscure!” Oz added.

SL project updates 2017 10/2: Content Creation User Group w/audio

The gathering: people gather for the CCUG, including a ridable dragon, a work-in-progress by Teager (l) and a wearable dragon, also a WIP by Thornleaf (r)
The Content Creation User Group meeting, Thursdays, 13:00 SLT

The following notes are taken from the Content Creation User Group meeting, held on Thursday, March 9th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

HTTP Fetching

As I’ve noted in several recent SL project updates, the Lab is shifting the fetching of landmarks, gestures, animations, shapes, sounds and wearables (system layer clothing) away from UDP through the simulators and to HTTP via the CDN(s).

The simulator side of the code is already in place on Aditi and awaiting further testing (see here for more). Vir is heading up the viewer-side changes required to support this work, which is now getting “pretty close” to being available in a public viewer (most likely a project viewer). I’ll continue to update on this work through my various SL project update reports.

Rendering Costs

Vir has also been looking at the viewer-side rendering costs of various avatar models to improve the overall rendering cost calculations. This is more a data gathering exercise at the moment, but it is hoped it will lead to improved calculations when determining the overall rendering complexity of models, and will likely mean that, for example, the cost of rendering rigged meshes will at some point be updated.

This isn’t directly related to the potential of animating objects (e.g. for non-player characters). While the Lab is apparently still pondering on this as a possible project, it would mean back-end changes to calculate the land impact of avatar models used as NPCs, rather than alterations to the viewer-side rendering cost.

Animation Transitions

There have been, and continue to be, a number of issues with animation playback, some of which appear to be related to llSetAnimationOverride, one of the server-side functions for controlling your animation state. Some of these were reported early on in Bento, which exacerbated some of them (e.g. quadrupeds crossing their forepaws).

Issues can also occur with jump animation states (pre-jump, jump and landing), as has been reported in BUG-7488. For example, during the meeting, Troy Linden and Medhue Simoni pointed to problems: for Troy, it was with respect to an avatar “sticking” in the landing animation, rather than returning to the expected standing animation; Medhue reported issues in general playback, and whether the transitions will actually play correctly.
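For context, llSetAnimationOverride() is the mechanism these reports relate to: it replaces the default animation played for a given movement state. A minimal sketch of overriding the three jump-cycle states mentioned above (the animation names are hypothetical placeholders, and must exist in the attachment’s inventory):

```lsl
default
{
    attach(key id)
    {
        // request permission when the attachment is worn
        if (id != NULL_KEY)
        {
            llRequestPermissions(id, PERMISSION_OVERRIDE_ANIMATIONS);
        }
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_OVERRIDE_ANIMATIONS)
        {
            // the jump cycle comprises three distinct states; a "stick"
            // in any transition leaves the avatar in the wrong animation
            llSetAnimationOverride("PreJumping", "my_prejump");
            llSetAnimationOverride("Jumping",    "my_jump");
            llSetAnimationOverride("Landing",    "my_land");
        }
    }
}
```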

It is not clear if these issues are all part of the same problem, but feedback from the meeting is being relayed back to those at the Lab poking at things.

Information and Tools

People are still having problems finding Bento information on the SL wiki – such as the skeleton files. This is partially due to the files being on the Bento Testing page. It’s also not easy for new creators to find information on suitable tools (e.g. Avastar, MayaStar, etc.).

One suggested solution (allowing for the wiki currently being locked from general edits) is to have a general SL tools page where the various tools can be listed with links to their respective websites. This could include free tools (GIMP, Blender, Wings 3D, etc.), tools which are not specific to SL but can be used with it (e.g. Maya, ZBrush, etc.), and then add-ons like Avastar and MayaStar.

Such an approach, coupled with a clean-up of the Bento information, might be suited to being included in an overhaul of the wiki Good Building Practices pages the Lab is working on as and when resources are available. Troy has made a note to take these ideas back to the Lab.

Other Items

Transparency Rendering Cost

There was some discussion on whether the rendering cost of a rigged mesh should remain high if it is set to transparent. Some felt the cost should be lower, and Vir noted that the system avatar has a special UUID for an invisible texture which can reduce the rendering cost of the system avatar. However, rigged meshes may not be subject to this check, which may also depend on how the mesh is made transparent (i.e. via a texture or via the transparency setting). He also noted that rendering as transparent could add cost over rendering a rigged mesh as opaque.

There was some discussion on whether simply having the mesh in memory, whether or not it is rendered, could add to its complexity. Vir indicated that as he’s not precisely sure how things are handled, he’d have a look at the code.

Calling Animation UUIDs via Script without the Animation Residing in Inventory

A question was asked whether it would be possible to have a script call an animation via the animation’s UUID without the animation being physically in the parent object’s inventory. The short answer to this is “no”.

While animations can be pulled from objects with modify permissions and used elsewhere, many items with animations (chairs, beds, etc.), tend to have animations in them set to No Copy, limiting the ability to freely re-use them. If animations could be freely called via script using their UUID, this protection would be eliminated, as anyone with the UUID could use the animation as often as they wished, regardless of whether or not a version of the animation resides in their inventory.
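In other words, the inventory requirement is enforced at the API level: llStartAnimation() takes the name of an animation held in the object’s Contents, not an arbitrary UUID. A minimal sketch (the animation name is a hypothetical placeholder):

```lsl
default
{
    touch_start(integer num_detected)
    {
        // ask the toucher for permission to animate their avatar
        llRequestPermissions(llDetectedKey(0), PERMISSION_TRIGGER_ANIMATION);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TRIGGER_ANIMATION)
        {
            // "my_dance" must be present in this object's Contents tab;
            // passing a raw UUID here would simply fail to find an item
            llStartAnimation("my_dance");
        }
    }
}
```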

This conversation edged into the issue of people being able to pull Copy permissioned inventory from a No Modify object by opening it; however, that is something of a separate situation, which was not discussed further.

Avastar Status

AvaStar is now at release candidate 4, with RC 5 on its way, which may be the final RC before a release.

.ANIM Exporter for Maya

Aura Linden is re-working the code on her open-source Maya .ANIM exporter. She was originally working on it in Maya’s MEL scripting, which would make it compatible with all versions of Maya.

However, after encountering some problems, she is now coding it in Python. This means the exporter will initially only work with versions of Maya supporting Python (version 8.5 onwards). Once this work has been finished, Aura hopes to be able to go back and complete the exporter in MEL for older versions of Maya.

Mayastar Update

Cathy Foil indicated that an update for Mayastar will appear shortly. When the .ANIM exporter is available (above), it will be folded into Mayastar, although it is not exclusively for Mayastar.

Splitting the Avatar Shape into Different Elements

This was suggested some time ago as a possible Bento follow-up as a means of making it easier for users to mix and match heads and bodies by allowing different underpinning avatar shapes for both, which could be worn simultaneously. This was seen as particularly useful for users who are uncertain about customising their form using the sliders, or where creators provide a No Modify shape with their head or body product, limiting the user’s ability to modify one or the other. No definitive proposal has been put together on how this might be achieved.

Supplemental Animations

This was also the subject of early Content Creation meetings with Vir as a possible Bento follow-on project. The idea is to allow “supplemental” animations to run alongside the animation states keyed by llSetAnimationOverride(), effectively allowing them to play together, rather than conflicting with one another as is the case at the moment. This is still being considered, but no work has been carried out as yet.

Next Meeting

As Vir is out of the office in week #11, the next Content Creation meeting will be on Thursday, March 23rd, 2017 at 13:00 SLT.

SL project updates 2017 8/2: Content Creation User Group w/audio

The gathering: people gather for the CCUG, including a Bento ridable dragon, a work-in-progress by Teager (l) and a Bento wearable dragon, also a WIP by Thornleaf (r)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, February 23rd, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • HTTP asset fetching
  • Animating objects
  • Applying Baked Textures to Mesh Avatars

HTTP Fetching

As previously noted, the Lab is working on moving landmarks, gestures, animations, sounds and wearables (system layer clothing) from UDP delivery via the simulator to HTTP delivery via the CDN(s). This work is now progressing to the stage where initial testing is liable to be starting soon. It’s not clear if this is internal testing within the Lab, or whether it will involve wider (Aditi) testing as well. As things progress, expect the viewer-side changes to appear in a project viewer and then progress through the normal route of testing / update to RC and onwards towards release.

Potential Project: Animated Objects

As noted in my last Content Creation UG meeting notes, the Lab is taking a speculative look at using the current avatar skeleton to animate in-world objects to provide a means for users to more easily create animated objects (e.g. non-player characters (NPCs), plants and trees responding to a breeze, providing mesh animals which do not rely on performance-hitting alpha swapping, etc.) – see feature request BUG-11368 for some of the ideas put forward which helped prompt the Lab’s interest.

It is important to note that this is still a speculative look at the potential; there is no confirmed project coming off the back of it, and the Lab is currently seeking feedback on how people might use the capability, were it to be implemented. No in-depth consideration has been given to how such a capability would be supported on the back end, or what changes would be required to the viewer.

One of the many issues that would need to be worked through is just the simple matter of how an object might be animated to achieve something like walking, running or flying. These require the simulator to make certain assumptions when handling an avatar which are not a part of object handling. There’s also the question of how the skeleton would be applied to an object.

Having animated objects does give rise to concerns over potential resource / performance impacts. For example, someone having a dozen animated pets running around them as animated objects could potentially have the same resource / performance overheads as thirteen actual avatars in a region.

One possible offset to this (although obviously, the two aren’t equivalent) is that mesh animals / objects which currently use a lot of alpha flipping to achieve different “states” of “animation” (such as the squirrel which can jump from the ground and swing on a nut holder and jump back down again, or the peek-a-boo baby bears, etc., all of which are popular in gardens and public regions) could be made a lot more efficient were they to be animated, as the resource / performance-hitting alpha swapping could be abandoned.

It was suggested that rather than having the full skeleton available for animated objects, it might be possible to use a sub-set of bones, or even the pre-Bento skeleton. Agreeing that this might be done, Vir pointed out that using the full skeleton would perhaps offer the most flexible approach, and also allow the re-use of existing content, particularly given that things like custom skeletons (also mooted) would be too big a project to undertake.

A closer look at Teager’s WIP Bento ridable dragon with Teager aboard, which has yet to be textured

Applying Baked Textures to Mesh Avatars

Interest is increasing in this potential project, which would allow baked textures – skins and wearable clothing layers – to be applied directly to mesh avatars via the baking service. This also has yet to be officially adopted by the Lab as a project, but there is considerable interest internally in the idea.

As I’ve previously reported, there is considerable interest in this idea, as it could greatly reduce the complexity of mesh avatar bodies by removing the need for them to be “onion skinned” with multiple layers. However, as I noted in that report, a sticking point is that currently, the baking service is limited to a maximum texture resolution of 512×512, whereas mesh bodies and parts (heads, feet, hands) can use 1024×1024.

There is concern that if the baking service isn’t updated to also support 1024×1024 textures, it would not be used, as skins and wearables using it would appear to be of lower resolution quality than can be achieved when using applier systems on mesh bodies. Vir expressed doubt as to whether the detail within 1024×1024 textures is really being seen unless people are zoomed right into other avatars, which for most of the time we’re going about our SL lives and doing things, isn’t the case.

Troy Linden wears a Bento octopus “backpack”

This led to a lengthy mixed text / voice discussion on texture resolution and extending the baking service to support mesh avatars (were it to go ahead), which essentially came down to two elements:

  • The technical aspects of whether or not we actually get to see the greater detail in 1024×1024 textures most of the time we’re in-world, and of re-working the baking service to support 1024×1024 across all wearable layers from skin up through to jacket.
  • The sociological aspect of whether or not people would actually use the baking service route with mesh avatars, if the texture resolution were left at 512×512, because of the perceived loss of detail involved.

Various compromises were put forward to try to work around the additional impact of updating the baking service to support 1024×1024 textures. One of these was that body creators might provide two versions of their products if they wish: one utilising appliers and 1024×1024 textures as is the case now, and the other supporting the baking service and system layers at 512×512, then leave it to users to decide what they want to use / buy. Another was a suggestion that baking service support could be initially rolled out at 512×512 and then updated to 1024×1024 support if there was a demand.

None of the alternative suggestions were ideal (in the two above, for example, creators are left having to support two product ranges, which could discourage them; while the idea of leaving the baking service at 512×512 falls into the sociological aspect of non-use mentioned previously). Currently, Vir appears to be leaning more towards updating the baking service to 1024×1024 were the project to be adopted, but the overheads in doing so still need to be investigated and understood.

Other Items

.ANIM Exporter for Maya

Cathy Foil indicated that Aura Linden has almost finished working on the .ANIM exporter she’s been developing for Maya. The hope is that the work will be completed in the next week or so. She also indicated that, in keeping with Medhue Simoni’s advice from a few weeks ago (see .BVH Animations and Animation Playback), she was able to overcome some of the issues being experienced with fine-tuning .BVH animation playback, although there are still problems.

The .ANIM exporter will be available for anyone using Maya, and is not something dependent upon Mayastar.

Avastar 2.0 in RC

The upcoming fully Bento compliant version of Avastar is now available as a release candidate.

IK Constraints

Tapple Gao has been looking at IK (Inverse Kinematics) constraints within Second Life. These aren’t widely used within existing animations – although up to about eight constraints can be defined – largely because the documentation doesn’t appear to be too clear. Tapple hopes to improve this through investigation and then updating the SL wiki.

Next Meeting

The next Content Creation meeting will be in two weeks, on Thursday, March 9th, at 13:00 SLT.

SL project updates 2017-7/2: Content Creation User Group w/audio + HTTP assets

The Content Creation User Group has re-formed out of the Bento User Group, and is held at the Hippotropolis Camp Fire Circle. Imp costumes entirely optional 😀 .

The following notes are taken from the Content Creation User Group meeting, held on Thursday, February 16th, 2017 at 1:00pm SLT at the Hippotropolis Campfire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

Core Topics

  • HTTP asset fetching
  • Potential project: animated objects

HTTP Asset Fetching

In 2013 / 2014, the Lab made a huge change to how avatar appearance information and texture and mesh assets are delivered to users, shifting them away from UDP (User Datagram Protocol) delivery through the simulators, to HTTP via Content Delivery Networks (CDNs) – see my past reports on the HTTP updates and CDN work.

As was indicated at several TPV Developer meetings recently (see here for an example), the Lab has been looking to move more asset types to delivery over the CDN, and this work has now started, with a focus on animations and sounds. This should see improvements in both the speed and reliability of asset delivery, which should be particularly beneficial to animations.

The work is in the early stages, and progress will be tracked through my SL project updates.

Potential Project: Animated Objects

A topic of common conversation at various user group meetings is that of animated objects – e.g. objects which can be animated but which are not necessarily part of the base avatar mesh, and / or things like non-player characters (NPCs).

Decent NPC a possible future project? Lab wants feedback on use-cases for animation objects

While it is still very speculative, the Lab is considering how this might be done and what sort of applications people would use such a capability for. One idea has already been extensively documented – “created agents”, which are avatars which do not necessarily have a connection to a viewer in order to operate – see feature request BUG-11368.

The main aim would be to use the same base avatar skeleton for this work, as well as it being compatible with existing rigged objects, rather than introducing something like custom skeletons (seen as a much bigger project). A lot would also depend upon things like performance impact (if the simulator is operating a certain volume of NPCs or ridable objects, for example, then these could impact on resources which might otherwise be used by avatars, etc.).

One potential way of achieving desired results would be to animate rigged meshes using the avatar skeleton, but without necessarily having the actual avatar base mesh underpinning it. For example, when we use a mesh body for our avatars, we use the base avatar, but hide it with an alpha mask, with the avatar skeleton animating the worn mesh. With an animated object utilising the skeleton, there is no real need to have the underpinning base avatar, as it would in theory never be seen.

One issue is that many mesh models comprise multiple parts, so some means would be required to control them; this, together with the ability to attach static objects to something like an NPC, could be lost without the base avatar. Hence the idea put forward in BUG-11368: the “created agent” would effectively be a special object class, providing the means for multiple animated meshes to operate in concert.

It is unlikely that the bone limit for a given object would be raised to accommodate animated objects, as this is pretty much a limit imposed by people’s graphics cards. During testing, the Lab found that if too many joints are defined for a single object, some graphics cards are unable to render the object correctly. This impact has actually already been seen with some Bento content (FIRE-20763).

Other aspects which would have to be considered are things like Land Impact. Avatars don’t have a land impact, but that may have to change in the case of animated, avatar-like objects – again, see the performance concerns above. There are also some concerns over possible griefing vectors.

Performance-wise, a potential benefit would be that animated objects would not require alpha swapping, which carries a fairly hefty performance hit – but this could be countered to a degree (depending on where you are and how animated objects are used) by the volume of animated objects around you.

Right now the idea is still being discussed internally at the Lab – there is no defined project. However, if you have views on things, attending the Content Creation meetings would be a good place to get them heard.

Other Items

Applying Baked Textures to Mesh Avatars

A project to allow baked textures to be applied directly to mesh avatars remains under consideration (see here for more), but has yet to be formally adopted by the Lab as a project.

Modelling for Efficient Rendering

The subject of efficiency and LODs was the focus of an extended conversation. As I reported in my last Content Creation UG meeting report, Medhue Simoni has been producing a series on the use of Level of Detail (LOD) to help with generating rendering efficient models in Second Life. All three parts of the series are now available on his YouTube channel, and he and I will be discussing them in this blog in the very near future.