360 Capture viewer now de facto SL release viewer

via Linden Lab

On Wednesday, December 15th, Linden Lab issued the Cache + 360º viewer as the de facto official viewer release, marking the last viewer promotion for 2021.

As the second part of its name suggests, this viewer is designed to capture and produce 360º panoramic still images of the location / environment around your avatar / camera position (if freecamming) in a format that makes them suitable for viewing through platforms supporting 360º panoramic images (including Flickr). It does this by simultaneously taking six images around the current camera position – one each at the four cardinal points, plus one directly overhead, and one looking directly down – all of which are then “stitched” into an equirectangular projection image.
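As a rough illustration of what this “stitching” involves (a hypothetical sketch, not the viewer’s actual code): each pixel of the equirectangular output corresponds to a direction on a sphere, and each direction falls on exactly one of the six captured cube faces.

```python
import math

# The six captures correspond to the faces of a cube around the camera.
def pixel_to_direction(px, py, width, height):
    """Map an equirectangular output pixel to a unit view direction."""
    lon = (px / width) * 2.0 * math.pi - math.pi   # -pi (left) .. +pi (right)
    lat = math.pi / 2.0 - (py / height) * math.pi  # +pi/2 (top) .. -pi/2 (bottom)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

def direction_to_face(x, y, z):
    """Pick which of the six captured images a direction samples from:
    the face whose axis dominates the direction vector."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:
        return "front" if z > 0 else "back"
    if ax >= ay:
        return "right" if x > 0 else "left"
    return "up" if y > 0 else "down"
```

A real stitcher would also compute face-local UV coordinates and filter between samples; the face names above are purely illustrative.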

The first iteration of 360º photo capability appeared in the official viewer in October 2016, and came with a certain amount of complexity. Later iterations of the viewer improved on this, but the viewer continued to be hit by conflicts with the Interest List, and these and other issues forced work on the capability to be pushed into the background.

However, work resumed earlier this year, and as I reported at the time, an updated project viewer was issued in September 2021 (see Lab Issues Updated Project 360 Capture Viewer). This release represents the latest iteration of that version, whilst also being combined with the former Simplified Cache RC viewer, of which more below.

The 360º capture capability is utilised via a dedicated floater, which can be accessed via the World menu and / or a dedicated toolbar button and / or a link in the standard snapshot floater and / or by pressing CTRL-ALT-SHIFT-S.

Accessing the 360º snapshot floater (this can also be done via CTRL-ALT-SHIFT-S or by expanding the standard snapshot floater to show the full preview and clicking the Take 360º snapshot link in the lower right corner of the preview panel)

Actually taking an image comprises a few simple steps:

  1. Position your camera.
  2. Select the image quality – for finished images you’ll need to set High or Maximum quality using the radio buttons.
  3. Click the Create 360º Image button to generate a preview in the floater’s preview panel.
  4. Click on the preview image and drag it around to ensure what you’re seeing is what you want / that things like textures have actually rendered correctly.
  5. Check the Hide All Avatars option, if required – this will cause the process to exclude all avatars present from the captured image (it will not alter their in-world rendering).
  6. When you are satisfied with the preview, click Save As… to save it to your hard drive, renaming it if / as required.

Just remember that if you change the image quality, you must also click the Create 360º Image button to update the preview AND the image capture process to the new quality before clicking Save As… again.

Note that the metadata required to have the image display correctly in Flickr and FB / Meta (and others) is included in the image – so if you save it to disk and upload it, it should render correctly, as per the image below.
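For reference, the kind of metadata such platforms look for is the Photo Sphere (“GPano”) XMP block embedded in the JPEG. A minimal sketch of such a packet follows – the viewer’s actual embedding may differ:

```python
# Photo Sphere / GPano XMP namespace used by 360º-aware platforms.
GPANO_NS = "http://ns.google.com/photos/1.0/panorama/"

def gpano_xmp(width, height):
    """Build a minimal Photo Sphere (GPano) XMP packet declaring an
    equirectangular projection of the given pixel dimensions."""
    return (
        '<x:xmpmeta xmlns:x="adobe:ns:meta/">'
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">'
        f'<rdf:Description rdf:about="" xmlns:GPano="{GPANO_NS}" '
        'GPano:ProjectionType="equirectangular" '
        f'GPano:FullPanoWidthPixels="{width}" '
        f'GPano:FullPanoHeightPixels="{height}"/>'
        "</rdf:RDF></x:xmpmeta>"
    )
```

When this block is present in an uploaded equirectangular image, platforms such as Flickr switch to their interactive 360º viewer automatically.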

An “unwrapped” Maximum quality 360º image captured using the Project 360 Capture viewer, showing the 6 captured images “stitched” together (click on this image to see it in 360º format in Flickr)
This viewer also includes updated code for the viewer’s cache. This code is an update to the Simple Cache viewer originally issued in March 2021, but which had to be rolled back after it was found to have a number of significant bugs, such as BUG-230337 “Simplified cache viewer is ignoring cache path” and BUG-230295 “Cannot upload images on the Simplified Cache Viewer”.

In particular, the code replaces the VFS cache system used to save local copies of frequently used assets – such as meshes, sounds and animations – with a simplified cache, and should make the loading / reuse of these assets smoother.

Given the level of interest that has been shown in the 360º Capture viewer, this code will hopefully find its way into TPVs in relatively short order, holiday period allowing. In the meantime, the viewer can be obtained through the official viewer download page.

Lab issues updated Project 360 Capture viewer

via Linden Lab

On Friday, September 3rd, Linden Lab issued the latest update to the 360º Snapshot viewer – now called the Project 360 Capture viewer. It represents the most significant update to this viewer we’ve seen, and it comes after a significant pause in its development.

As the name of the viewer suggests, it is designed to take 360º panoramic images of the environment around the camera. It does this by simultaneously taking six images around the current camera position – one each at the four cardinal points, plus one directly overhead, and one looking directly down. These are then “stitched” into an equirectangular projection image (e.g. one that can be projected as a sphere), which can then be viewed through a suitable medium – such as Flickr (other mediums are available!).

An “unwrapped” Maximum quality 360º image captured using the Project 360 Capture viewer, showing the 6 captured images “stitched” together (click on this image to see it in 360º format in Flickr)

The first iteration of this viewer appeared almost five years ago, in October 2016, and came with a certain amount of complexity – including the need to install a local environment for previewing captured images. Later iterations of the viewer improved on this, but the viewer continued to be hit by conflicts with the Interest List.

In simple terms, the Interest List lightens the load – objects, textures, updates to active objects, etc. – the viewer has to process when rendering. It does this by ignoring things that are not in the camera’s direct field of view. This is why, for example, when you turn your camera to face a new direction, it can take time for objects and their textures to render. However, for a 360º image, everything needs to be properly rendered in the viewer – whether in the current field of view or not. Overcoming this problem has proven difficult – and it (admittedly with other factors also coming into play) caused work on the viewer to be halted for an extended period.

This version of the viewer overcomes most of these issues, and makes the creation of 360º snapshots straightforward through the use of a new 360 Snapshot floater that is independent of the “standard” snapshot floater, and the use of some additional back-end code to overcome the Interest List. This new floater can be accessed from within the Project 360 Capture viewer in one of four ways:

  • Via World → 360 Snapshot.
  • By pressing CTRL-ALT-SHIFT-S.
  • By enabling the 360° snapshot toolbar button in one of the toolbar areas.
  • By expanding the standard snapshot floater to show the full preview and clicking the Take 360 Snapshot link in the lower right corner of the preview panel.
Accessing the 360 snapshot floater (this can also be done via CTRL-ALT-SHIFT-S or by expanding the standard snapshot floater to show the full preview and clicking the Take 360 snapshot link in the lower right corner of the preview panel)

The floater itself comprises several elements:

  • The image Quality radio buttons and selection button (labelled Create 360 Image).
    • The quality buttons appear to utilise the viewer’s screen buffer to render the different image types, so Preview appears to use the 128 vertical buffer, while Medium and High use the 512 and 1024 buffers respectively, and Maximum the 2048 buffer (i.e. the full 4096×2048 resolution).
    • When you have selected your preferred quality, click the Create 360 button to generate a preview.
    • If you alter the image quality at any time, you’ll need to click on the Create 360 button again to update the preview / take a fresh image at the new image quality.
  • A checkbox to disable avatar rendering during the image capture process.
  • The preview panel. This will show a rotating image from the current camera position until refreshed, and this image can be manually rotated / panned up and down by clicking on it and dragging the mouse around.
  • The Save As… button that actually saves the image to your hard drive.
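Based on the buffer observations above, the quality settings appear to map to output sizes as sketched below – treat the exact numbers as observed behaviour, not documented values; the equirectangular output is always twice as wide as it is tall:

```python
# Observed screen-buffer heights per quality setting (assumptions based
# on the observations in the text, not documented viewer values).
QUALITY_BUFFER = {"Preview": 128, "Medium": 512, "High": 1024, "Maximum": 2048}

def equirect_size(quality):
    """Return (width, height) of the equirectangular output: a 360º
    panorama's width is twice its height."""
    height = QUALITY_BUFFER[quality]
    return 2 * height, height
```

So Maximum quality corresponds to the full 4096×2048 resolution mentioned above, while Preview yields only 256×128 – consistent with how blurred it appears.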

To take an image:

  1. Position your camera.
  2. Select the image quality – for finished images you’ll need to set High or Maximum quality using the radio buttons.
  3. Click the Create 360 Image button to generate a preview in the floater’s preview panel.
  4. Click on the preview image and drag it around to ensure what you’re seeing is what you want / that things like textures have actually rendered correctly.
  5. Check the Hide All Avatars option, if required – this will cause the process to exclude all avatars present from the captured image (it will not alter their in-world rendering).
  6. When you are satisfied with the preview, click Save As… to save it to your hard drive, renaming it if / as required.

Remember, if you change the image quality, you must also click the Create 360 Image button to update the preview AND the image capture process to the new quality before clicking Save As… again.

Once captured – again as noted – images can be uploaded to a suitable display platform such as Flickr – the images contain the necessary metadata that should automatically trigger the 360-degree viewing process (just click-drag on an image in Flickr to manually pan around / up / down).

An “unwrapped” Maximum quality 360º image captured using the Project 360 Capture viewer, showing the 6 captured images “stitched” together (click on this image to see it in 360º format in Flickr)

General Observations

  • An easy-to-use iteration of the 360º snapshot viewer that brings good quality and ease-of-use to the process.
  • The ability to avoid rendering avatars not only prevents issues of rendering / motion blurring when taking a 360º image, it enables the easy capture of landscape images. It also, obviously, allows for the capture of posed avatars if required.
  • There are still some issues in rendering out-of-view (relative to the camera’s visible field of view) items and textures at High and (particularly) Maximum quality – note the blurring of the vessel name in the first 360º image above.
  • The lowest quality Preview option is simply too blurred to be of real value – perhaps using 256 rather than 128 might improve this (if only slightly)?

Lab issues Performance Floater viewer for feedback

via Linden Lab

Among their stated goals, Linden Lab is working to improve the user experience with Second Life through a number of projects. One of these is making the viewer UI and viewer controls more accessible, and as a part of this work, the Performance Floater viewer was issued as a project viewer with the intention of gaining feedback from users on the changes / options it incorporates and how they are presented.

As the name implies, the Performance Viewer is focused on bringing together various options and controls that can help improve viewer performance, and presenting them through a single new floater called (in contrast to the viewer’s name) the Improve Graphics Speed floater.

This new floater can be accessed in one of two ways: via World menu → Improve Graphics Speed or by enabling the Graphics Speed button within a toolbar area. It comprises four button options, together with the viewer’s current FPS provided in large, friendly numbers at the top of the floater.

The Performance / Graphics Speed floater and accessing it

The four buttons open dedicated panels within the floater, each focused on a specific group of settings:

  • Graphics Settings: pulling together the most commonly-used Graphics options from Preferences → Graphics and the Advanced Graphics Preferences floater:
    • Quality and Speed.
    • Draw Distance.
    • Toggles for enabling / disabling atmospheric shaders and the Advanced Lighting Model, together with a drop-down for setting Shadows.
    • A toggle for disabling / enabling water transparency and a drop-down for setting the quality of water reflections.
    • A “Photography” option that most of us will recognise as being the RenderVolumeLODFactor debug setting.
    • A button to open the Advanced Graphics Preferences floater, should further adjustments be required.
  • Avatars Nearby: a set of options related to rendering the avatars around you or in general:
    • The Maximum Complexity slider (from Advanced Graphics Preferences), with the value rounded down to the nearest whole thousand.
    • The option to Always Render Friends (from Preferences → Graphics).
    • A new checkbox for de-rendering all avatars in a scene (toggles Advanced → Rendering Types → Avatars (CTRL-ALT-SHIFT-4) off / on).
    • The radio buttons for showing / hiding avatar name tags (from Preferences → General).
    • A list of nearby avatars, with indicators giving their rendering complexity, colour-coded to denote friends.
      • Running the mouse over a name in the list will highlight it, and offer an Information icon to open their profile.
      • Right-clicking on a name will bring up options to render them fully or as a “jellydoll” (neutral grey avatar) or remove them from your Exceptions list (exceptions being those set to never / always render, no matter what your Maximum complexity setting).
    • A button directly under the avatar list to open your Exceptions list, where you can again right-click on names and alter their render behaviour.
  • Your Avatar Complexity: a list of attachments worn on your avatar, with guidance on their relative rendering cost, and the option to right-click on any of them and remove them.
  • Your Active HUDs: a list of worn HUDs, again with a relative rendering complexity indicator and the option to highlight and remove any of the listed HUDs.
The Graphics Settings and Avatars Nearby panels in the Performance / Graphics Speed floater

Prior to the release of this viewer, concerns were expressed at a number of the Content Creation User Group meetings that doing so before the re-working of the Avatar Complexity values through the ARCTan project would lessen the impact of those changes when they are eventually deployed. I’m not sure such arguments hold that much weight, simply because a) a lot of people have already made up their minds about avatar complexity and Maximum complexity, so are unlikely to be swayed by any change in how the values are calculated; and b) those who already take note of avatar complexity will continue to use the options for managing it, no matter how the values are calculated.

That said, playing with the viewer did raise a number of niggles / ideas with me:

  • Consistency of terminology: we’re all used to terms like “Quality and Speed” and “Draw Distance” in relation to graphics settings – so seeing these arbitrarily renamed to (the contextually meaningless) “Shortcuts” and “Visibility Distance” is a little disconcerting. I hope that labelling overall – whichever terms are used going forward – is made consistent across the viewer.
  • That said, the use of numbers to denote quality settings rather than the “mid” to “ultra” labels, is an improvement, and I certainly hope that it is fed back into Preferences → Graphics.
  • The use of the word “hide” in reference to the Maximum Complexity slider is misleading – avatars are not “hidden” when using this slider, but are still rendered, if only as simplified grey humanoid shapes.
  • It would be useful to have Max No. of Non-Imposter avatars added to the Avatars Nearby panel, as for some this is a preferred method of reducing avatar impact on their system over seeing “jellydolled” (or “greyed” as it should perhaps now be) avatars.
  • The Avatars Nearby panel could perhaps also benefit from some additional explanatory text, such as a more rounded note on Maximum Complexity to help encourage people to use it.

As noted, this is the first iteration of the viewer and floater in order for it to gain some exposure and generate feedback from those interested in trying it. Those who do want to offer feedback on it should do so via the Jira as either bug reports or feature requests, depending on the nature of the feedback.


The Project UI viewer: a look at the new user Guidebook

via Linden Lab

In May, the Lab issued the Project UI RC viewer, part of the work to overhaul the new user experience and provide greater context and support for incoming users when getting to grips with Second Life and – in this case – the viewer.

At the time it was issued, I provided an overview of the viewer based on my own walk-through of it as it was at that time, and notes supplied by Alexa Linden (see: Lab issues Project UI viewer aimed at new users).

Since then, the Project UI viewer has progressed through the RC process, and was promoted to de facto release status in week #25. Along the way, it saw some revisions and additions, including a Guidebook to help new users find their way around the viewer. And it is that Guidebook I’m taking a look at here.

Before getting to it, however, a quick recap on the changes within the viewer previously covered:

  • A new menu option called Avatar, and streamlined / revised right-click avatar context menus.
  • Improvements to the Inventory panel.
  • An updated Places floater.

All of these are looked at in the blog post linked to above.

New User Guidebook

The Guidebook appears to be a case of taking an idea first seen in the Basic version of Viewer 2.0 a decade ago, and greatly enhancing it.

In 2011, the idea was to provide new users with a simple guide to basic actions such as walking and chatting through a pop-up How To guide accessed via a toolbar button. The problem was that the idea was never really followed through: the How To guide was brief to the point of being ignored, and never fully leveraged.

The new Guidebook takes the same initial approach as the old How To, using a button within the toolbar to open a dedicated panel, samples of which are shown below.

The pages of the new Guidebook relating to avatar / camera movement – click for full size

However, it is at this point that all similarities with the How To approach end, as the Guidebook dives a lot deeper into basic needs – walking, communicating, interacting with objects, an overview of avatar customisation and using avatar attachments, finding where to go in SL and where to meet people. It also offers pointers to various viewer menu options and to how things like right-click context menus work.

On first being opened, the Guidebook will display the first of the pages dealing with avatar movement, with each page including “next” and/or “back” buttons. Pages display information clearly and concisely, and good use is made of illustrations.

The Guidebook menu

All of the topics covered by the Guidebook can be accessed directly at any time via the three-bar Menu icon in the top-right of the panel, then clicking on the desired topic. This index also includes an option to teleport to a Welcome Back Island – a duplicate of the new Welcome Islands incoming users may arrive at, giving those already in SL the opportunity to hop back to an environment where they can gain a refresher. In addition, some sections within the Guidebook also reference locations within the Welcome Islands that help new users gain familiarity with Second Life and the viewer controls.

Obviously, not everything can be covered in a single guide like this, and people will doubtless have their own views on what “should” be included. However, what is provided should give incoming users a reasonable grounding in finding their way around the viewer. It’s also worth remembering that these updates may not be all that’s coming by way of viewer UI updates and/or simplification.

A further aspect of the new user experience is that the Welcome Islands will use an Experience which, in turn, uses web page links; it is therefore possible there are yet-to-be-revealed elements, accessed as new users explore / travel through the new Welcome Islands, that may give further context to the viewer. As such, any final judgement on what is available in the viewer as released might be premature. Given this, I’ll likely / hopefully be returning to these updates to the viewer as and when the new user experience comes on-stream.

In the meantime, the Project UI is available as the default official viewer download, and the updates it contains will, as usual, be a core part of all future viewer updates and releases from the Lab.

Lab issues Project UI viewer aimed at new users

via Linden Lab

As has been indicated in various discussions and statements from the Lab – such as the Above the Book sessions with Grumpity, Brett and Patch Linden at this year’s VWBPE event – one element of Second Life the Lab is focused on is the new user experience.

This work involves various projects, including the on-boarding process and changes to the viewer to help new users get to grips with things, and on Monday, May 3rd, Alexa Linden announced the release of the Project UI viewer, which includes a range of updates specifically aimed at new users.

According to Alexa’s forum post, the new viewer includes three core areas of update:

  • A new menu option called Avatar, and streamlined / revised right-click avatar context menus.
  • Improvements to the Inventory panel.
  • An updated Places floater.

However, there’s actually more to this viewer than the forum post reveals, so here’s a run-down of some of the documented changes and some of those omitted from the forum post – but which could actually be of greater interest to established users.

The Avatar Menu and Right-Click Avatar Context Menus

This is perhaps the most significant update in the viewer. To quote from Alexa’s post:

Making SL easier for newcomers to learn can improve the chances that they will become long-term Residents. Growing the Resident community benefits everyone — more people to meet, more participation in events, and more commerce. The changes described below are the first batch of what we hope will be an ongoing series of usability improvements.
Avatar menus
With this release we introduce the Avatar top-level menu which brings together all avatar tools in one place. One of SL’s most important features is now more visible to newcomers. You’ll notice the avatar right-click menu has been streamlined as well.
Have you ever struggled to select an avatar attachment?  It’s inside your avatar, it’s transparent, or it’s a mesh attachment that you just can’t grab. You can now touch, edit or remove an attachment using right-click from all Avatar windows and Inventory.

The Avatar menu and the revised right-click context menus are shown below:

The new Avatar menu sits between the Me and Communicate menus, bringing together all of the frequently used avatar tools (l). Centre: the revised avatar right-click context menus seen when touching your avatar (top) or an attachment (bottom), and how they compare to the current versions of the menus (r)

Inventory and Places Updates

I’ve not a lot to say on the Inventory floater updates, so will leave that to Alexa’s forum post. The changes to Places and how landmarks are handled, again as specified in the blog post, are also straightforward, although there are a few additional points to note:

  • The new panel also sees the gear button moved to the top of the panel, and provides a new set of fairly self-explanatory options:
    • Teleport.
    • View.
    • Show on Map.
    • Copy SLurl.
  • The original Expand and Collapse options from the gear button have been moved to a separate drop-down menu button, with the delete option moved to its own Trash button.
The Project UI viewer’s updated Places panel (l) and the release version

Other Menu Updates

The new Avatar menu means there have been revisions to the Me and Communicate menus as well, with avatar-related options – such as Choose an Avatar (renamed Complete Avatars) – moving from Me to Avatar.

The revised Me and Communicate menus (with the blue bands) compared to the current release viewer – click for full size, if required

As well as these, there are other small tweaks – the World menu now has a My Linden Home … option. Clicking this will open the in-viewer browser and take the user to the Linden Homes page:

  • Premium members with a Linden Home will see the page relating to their home.
  • Premium members who do not have a Linden Home and Basic Members will see the Linden Home selection page (and Basic members will go forward to the Premium sign-up page).

Note also that using this menu option (as with others in the viewer that use the built-in browser to access Second Life web pages) may trigger single sign-on, and require you to log in to the SL web properties.

EEP Updates

One of the biggest complaints with the Environment Enhancement Project (EEP) has been the use of trackball options to position the Sun and Moon, with many voicing their preference for “a slider like Windlight”. To address this, the Project UI viewer implements two sliders for positioning the Sun and two for the Moon across all of the EEP settings floaters. These are:

  • Azimuth  – which might be thought of as the east / west position of the Sun or Moon (technically, azimuth is more than this, but it’ll do for these notes).
  • Elevation – the position of the Sun or Moon over (or under) the horizon, relative to azimuth.

These sliders are tied to the Sun / Moon movement using the trackball systems, allowing both to be used as preferred.
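Conceptually, the two sliders form a spherical coordinate pair. A hypothetical sketch of how azimuth / elevation could resolve to a Sun or Moon direction vector follows – the axis conventions here are assumptions for illustration, not the viewer’s actual code:

```python
import math

def sun_direction(azimuth_deg, elevation_deg):
    """Convert azimuth / elevation (in degrees) into a unit direction
    vector: azimuth sweeps around the horizon, while elevation lifts
    the Sun or Moon above (positive) or below (negative) it."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = math.cos(el) * math.cos(az)
    y = math.cos(el) * math.sin(az)
    z = math.sin(el)
    return x, y, z
```

With this convention, an elevation of 90º places the body directly overhead regardless of azimuth, which matches how the two sliders interact.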

The Sun & Moon tabs on the Fixed Sky and Day Cycle floaters now include Azimuth and Elevation sliders for positioning the Sun / Moon, and similar sliders can be found on Personal Lighting

Rapid-Fire Feedback

Overall, this is a reasonable set of changes; they do enough to streamline things in places without being a potential source of confusion for established users; the changes are for the most part logical – although I do have a couple of reservations.

On the plus side, bringing together the majority of avatar tools into a single menu makes a lot of sense. But I do wonder if having menus called “Me” and “Avatar” side-by-side might not be a little confusing for new users (e.g. “Huh? Wassa difference? Why two menus for my avatar?”). The use of the “Avatar” menu name is liable to cause a small amount of consternation with Firestorm, as that viewer already uses it in place of “Me”, but c’est la vie.

I was also surprised to see that the Linden homes page has yet to be updated for Basic members – it still features photos and a video of the old 512 sq m Linden Homes. Given the newer Homes are more attractive (and have now been with us for a while), and the aim of this viewer is to help make engagement with SL more attractive to new users, linking to information that is pretty much out-of-date and doesn’t actually reflect the more common Premium offering seems a little disjointed.

Elsewhere, I like the ability to touch / select attachments – particularly worn mesh – made more accessible. Catznip introduced such a capability a few years ago, and I can’t help but wonder if seeing it now in the official viewer might be the result of a code contribution from that viewer.

It’s also good to see the Lab respond to requests with EEP, and hopefully the new sliders will help those who find the trackballs a little confusing – although I don’t doubt the labelling might cause a little confusion (“why not east and north?”).

I understand the updates to the learning / social islands will be coming along in summer – although I’ve no idea if these will see further tweaks to the viewer as well. In the meantime, it’ll be interesting to see how this Project UI viewer develops over the coming months.


Bakes on Mesh – a basic primer

Updated with an overview of “Bakes on Mesh appliers” for mesh bodies and heads yet to be updated to support BoM.

Monday, August 26th, 2019 saw the formal release of Bakes on Mesh (BoM) for Second Life, and with it, an attempt to make system wearables (skins, tattoo and clothing layers) usable on modern mesh avatar bodies, utilising the avatar Bake Service and without the need for a dedicated applier system.

While Bakes on Mesh has been in development for a number of years, and much of it is known to many users, this article has been written to provide something of an introduction / overview of BoM, covering things like system wearables, the Bake Service, the changes that have been made, where to find information on using BoM, and what it may mean for Second Life users in the future, depending upon how well the capability is received by creators.

Some Basics

System Wearables and the Bake Service

System wearables as they appear in inventory

Without going too deeply into specifics for those unfamiliar with them, system wearables are a special kind of inventory asset (some of which are shown on the right) that can be directly worn / added to the system avatar to produce a “dressed” look.

These wearables come in a number of “layers” – skin (which must always be worn on the system avatar), tattoo, undershirt, shirt, and jacket.

The naming of the layers isn’t that important – a creator could assign a bra or a shirt or a pair of pants to any one of the tattoo, undershirt, shirt and jacket layers, depending on how flexible they want their clothing to be. What is important is that they always follow a hierarchy: skin is always at the bottom and so “covered” by the other layers, which are in turn “covered” by the next (so undershirt wearables always apply “over” tattoo wearables, shirt wearables “over” undershirt wearables, etc.), with the avatar able to wear up to 62 wearables in any combination of layers at one time.

This might sound very complex, but for those familiar with the system, it is very easy to grasp; however, what is important is what comes next. When an avatar’s look is complete, the information about all these wearables is sent to the simulator and then to a back-end set of servers called the Bake Service over a series of channels called the “bake channels”, which define where the layers appear on the avatar. These channels are:

  • BAKE_HEAD, which defines all the wearable elements that have been applied to the head (e.g. skin, and tattoo layers used for make-up)
  • BAKE_UPPER, which defines all the wearable elements – skin plus any tattoo, undershirt, shirt and / or jacket layer(s) that have been applied to the avatar body above the waist and below the neck (with the left arm mirrored from the right).
  • BAKE_LOWER, which defines all the wearable elements – skin plus any tattoo, undershirt, shirt and / or jacket layer(s) that have been applied to the avatar body from the waist to the feet (with the left leg mirrored from the right).
  • BAKE_EYES and BAKE_HAIR (both pretty self-explanatory).
  • BAKE_SKIRT, which defines skirt / dress style wearables.

The Bake Service then composites (bakes) the layers received on each of these bake channels into a single texture, and sends the results out to every viewer able to “see” the avatar. So, for example, facial / head skin and any make-up tattoo(s) received via the BAKE_HEAD channel are baked to become a texture seen on the avatar’s head, while the layers received over the BAKE_UPPER channel are baked into a texture seen on the avatar’s upper body, and so on, ensuring the avatar consistently appears to everyone dressed as the user intended, while also removing the need for individual viewers to manage the complex layering and rendering of all the individual wearable layers on other people’s avatars.
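Conceptually, this per-channel bake is an ordered alpha-composite of the wearable layers. A simplified, hypothetical per-pixel sketch follows – the Bake Service’s actual implementation is server-side and far more involved:

```python
# Wearable hierarchy, bottom to top, per the layering rules above.
LAYER_ORDER = ["skin", "tattoo", "undershirt", "shirt", "jacket"]

def composite_channel(layers):
    """Flatten one bake channel: 'layers' maps a layer name to an
    ((r, g, b), alpha) pair for a single pixel. Layers are blended
    bottom-up, so 'jacket' covers 'shirt', which covers 'undershirt',
    and so on down to 'skin'."""
    r = g = b = 0.0
    for name in LAYER_ORDER:
        if name not in layers:
            continue
        (lr, lg, lb), a = layers[name]
        # Standard "over" alpha blend: new layer over the result so far.
        r = lr * a + r * (1.0 - a)
        g = lg * a + g * (1.0 - a)
        b = lb * a + b * (1.0 - a)
    return r, g, b
```

The same ordered blend is what makes a half-transparent tattoo show the skin through it, while an opaque jacket layer hides everything beneath.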

Mesh Bodies and Complexity

Since their introduction, mesh bodies have not been able to leverage this approach. Instead, they require a dedicated “applier” mechanism to achieve the same ends, together with the use of an alpha layer to hide the system avatar.

Further, to enable clothing items to be layered – so you can have an applied shirt / blouse appearing to be “under” a jacket, for example – mesh bodies have had to be constructed in a complex manner, with several layers closely packed together (colloquially called “onion layers”) that effectively mimic the system wearable layers. This makes such avatars a lot more complex than they might otherwise be, resulting in their relatively high rendering costs.

Enter Bakes on Mesh

So, Bakes on Mesh has been developed to allow system wearables to be applied directly from inventory to worn mesh faces (e.g. avatar bodies and wearables) that have been correctly flagged by the creator to support Bakes on Mesh. Through Bakes on Mesh, Linden Lab hopes:

  • Users can avoid the need to use appliers, instead adding wearables to their mesh avatar directly from inventory.
  • Creators will be able to simplify avatar mesh bodies and heads by removing the need for some of the “onion” layers. This should – if done – reduce the rendering complexity for bodies and heads, thus hopefully improving people’s SL experience (as avatars won’t be quite so resource intensive or require quite so much “assembly time” when encountered on logging-in or after teleporting somewhere).


A couple of caveats apply:

  • As with all new features, use of Bakes on Mesh will only be apparent to those actually using viewers running the Bakes on Mesh code; anyone not on such a viewer will likely see something of a mess. And, as with all new features, it will take time for the Bakes on Mesh code to be implemented by all TPVs.
  • Bakes on Mesh does not mean users “have” to go back to using system wearables, nor does it mean that applier systems can no longer be used. It is simply a means of making system wearables work with mesh bodies and heads, hopefully with the benefits given above. Those who wish to can continue to use applier-based clothing as they always have.

Bakes on Mesh adds new options for applying suitable textures to the baking channels for application on a mesh body by the Bake Service

An introduction to using BoM can be found in the Bakes on Mesh Knowledge Base article. This includes information on trying BoM using a test mesh body – the best way to do this is to use Aditi, the beta grid. I’m not going to go into specifics here, simply because there are multiple resources available to assist users and creators – some of which are noted at the end of this article – and I want to keep this as a more general, easy-to-understand primer.

When considering Bakes on Mesh it is important to remember it is not necessarily intended as an outright replacement for appliers and current mesh bodies from the get-go. Rather, it is initially an alternative – although if the popularity / take-up among creators and users is sufficient, then over time it could obviously become the system of choice over appliers and more complex mesh bodies. However, existing mesh bodies / heads and applier systems will continue to work as they always have.

Key Points of Bakes on Mesh

This list is not exhaustive, but is intended to give a feel for Bakes on Mesh and its use:

  • System skin layers, tattoo layers, clothing layers and alpha layers all work – the mesh just needs to be flagged by its creator as supporting Bakes on Mesh and correctly set-up for alpha layers to work as intended.
    • As an alternative, there are assorted “BoM appliers” designed to work with mesh bodies / heads that have not (yet) been updated with Bakes on Mesh support – see below for more.
  • You do not need a full body alpha to wear a Bakes on Mesh flagged mesh. If the flag is present when you wear the mesh, the body section it is flagged for disappears. So, if you wear a lower body “Bakes on Mesh ready” avatar part, the entire lower body of the system avatar will disappear.
  • The Bake Service has been updated to support 1024×1024 resolution textures, so it offers the same texture resolution for wearables as offered through applier systems (prior to Bakes on Mesh the maximum resolution for wearables was 512×512).
    • Obviously, the wearable must be made at this resolution in order to utilise it; a 512×512 wearable will not magically appear to be 1024×1024 resolution when applied.
  • In order to be fully effective, mesh bodies using BoM and BoM wearables should match the system avatar UV map as closely as possible.
    • Fortunately, most of the current range of avatar bodies sold under brands such as Maitreya, Slink etc., do tend to stay close to the system avatar UV map. So any new BoM-specific versions / updates should continue to do so.
  • Alpha layer support means that mesh bodies should no longer need to be split into multiple pieces for individual alpha-masking to prevent a body clipping through clothes. Alpha requirements are back in the hands of the clothing creator, and should be made alongside the clothing, so that – providing the body is correctly set-up – they should just “work” when used. In addition, clothing makers may no longer need to include auto-alpha scripts.
  • Changing mesh body parts should be easier, providing both bodies are flagged to use Bakes on Mesh. The body takes whatever is worn on the system body – skin and make-up instantly appear on each change of head, for example.
  • Skin makers will be able to offer more options by including tattoos with their skins, allowing for a variety of make-up options, whilst the previous limitation on tattoo use (one per zone) no longer applies.
  • Applier support will still be required for the following: nails; eyelashes; standalone ears, hands, feet, lips, bust implants, etc.; lip gloss; materials finishes (see Some Possible Points of Contention, below); neck blenders, anything not intended to look “painted” on.

New with Bakes On Mesh

To provide full “wearables” support, Bakes on Mesh introduces some new elements that will be of key import to creators:

  • The introduction of 5 new bake channels – LEFT_ARM_BAKED, LEFT_LEG_BAKED, AUX1_BAKED, AUX2_BAKED, AUX3_BAKED:
    • These can only be used with Bakes on Mesh, and are not available to the system avatar.
    • LEFT_ARM_BAKED and LEFT_LEG_BAKED are intended to help with making mesh avatars where the left and right limbs have different textures (and so can be asymmetric, as can currently be achieved with applier systems).
    • The AUX channels are general purpose, and could be used for body regions not possessed by system avatars (such as wings) or for other purposes.
    • This means BoM has 11 possible channels for wearables to use for textures, and for the baking service to produce.
    • However, the new channels listed above do not have alpha support like the other channels, and so cannot have “holes” cutting through the mesh face they are worn against.
  • BOM also adds a new wearable type called Universal.
    • While specifically added to allow the wearing of items that use the new channels described above, the Universal wearable has slots corresponding to all 11 of the bake channels, offering extensive flexibility of use. In layering order, universal wearables go between the tattoo and body layers.
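
The full set of channels described above can be summarised as a simple catalogue. This is an illustrative sketch based on the points in this article (the `system_avatar` / `alpha` flags are my own shorthand, not an official API):

```python
# Illustrative catalogue of the 11 bake channels described in this article.
# The flag names here are informal shorthand, not Second Life API identifiers.
BAKE_CHANNELS = {
    # The original six channels, shared with the system avatar (alpha supported).
    "BAKE_HEAD":      {"system_avatar": True,  "alpha": True},
    "BAKE_UPPER":     {"system_avatar": True,  "alpha": True},
    "BAKE_LOWER":     {"system_avatar": True,  "alpha": True},
    "BAKE_EYES":      {"system_avatar": True,  "alpha": True},
    "BAKE_HAIR":      {"system_avatar": True,  "alpha": True},
    "BAKE_SKIRT":     {"system_avatar": True,  "alpha": True},
    # The five new BoM-only channels (no alpha support, so no "holes").
    "LEFT_ARM_BAKED": {"system_avatar": False, "alpha": False},
    "LEFT_LEG_BAKED": {"system_avatar": False, "alpha": False},
    "AUX1_BAKED":     {"system_avatar": False, "alpha": False},
    "AUX2_BAKED":     {"system_avatar": False, "alpha": False},
    "AUX3_BAKED":     {"system_avatar": False, "alpha": False},
}

# The Universal wearable has a slot for every one of these channels.
bom_only = [name for name, props in BAKE_CHANNELS.items()
            if not props["system_avatar"]]
```

Laid out this way, the split is easy to see: six channels shared with the system avatar, five available only via Bakes on Mesh.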

Note that for others to see your avatar correctly when you are using Bakes on Mesh, they must also be using a Bakes on Mesh viewer. If they are not, they will see your avatar covered in default red, blue and yellow marker textures showing through your mesh parts.

Left: Bakes on Mesh when seen via a non Bakes on Mesh viewer: the system avatar will show through the mesh body parts, covered by default system-supplied BoM marker textures. Centre and Right: using a “Bakes on Mesh applier” on a non-BoM body and head and via a Bakes on Mesh capable viewer. The centre image shows the “before” avatar state: the system layer skin and clothing are worn, but do not conform to the mesh body and head (which must be worn without their corresponding alphas for this type of applier to work), and so “poke through” (highlighted in places by the red circles). The right image shows how things look after the “Bakes on Mesh” appliers have been used – the system layer clothing and skin now mostly conform to the mesh head / body, with the exception of fingers and toes (highlighted), which will generally require an additional “glove” or “mask” fix.

“Bakes on Mesh Appliers”

During the testing of Bakes on Mesh, at least two experimental applier systems were produced to allow BoM to be tested on non-BoM flagged bodies and heads. For example, Omega produced an experimental BoM applier system, with instructions here.

Since then, and given that several mesh body and head creators have yet to produce BoM-flagged updates to their bodies / heads, several more such “BoM appliers” have been produced. Some are available for free, some are provided by the mesh head / body creator, and others are available at a nominal cost and may be for specific purposes (e.g. the Bakes on Mesh skin applier (Omega) by Conor Shostakovich at L$125).

These essentially work by allowing you to dress your system avatar with the required system wearables, then wear your mesh body / head without their alpha masks, and then use the applier to apply the system layers to the mesh body / head in a similar manner to “traditional” appliers – but again, as a single composite layer when baked.

  • How effective these systems are can be variable.
  • Due to differences in the way skin textures / UV maps work and the way mesh bodies tend to be put together, such appliers may not work particularly well around feet and hands.

Note: links to products do not constitute endorsement. Always check the Marketplace for products and reviews.

Such appliers are intended as an interim “fix” for using Bakes on Mesh until such time as the major head and body creators provide full Bakes on Mesh support.

Some Possible Points of Contention

However, there are what might be regarded by some as “negatives” around Bakes on Mesh, a couple of the more prominent ones being:

  • The Bake Service – and thus Bakes on Mesh – does not support materials (normal and specular maps). How much this impacts people’s acceptance of BoM is open to debate. However, when needed, materials can still be added manually (if the mesh / mesh face in question is editable) or via a suitable applier.
  • Appliers are convenient, as they are an all-in-one solution requiring only one or two items in inventory – the outfit applier HUD and possibly an intermediary relay tool like Omega.
    • With Bakes on Mesh, wearables are all individual inventory assets, which could lead to inventory growth, some of which might be quite extensive as a result of creators providing multiple options / layers (although in fairness, some applier systems can be like this – I have seen a Hugo’s Design outfit with no fewer than 40 individual items, both system layer clothing and multiple applier options).
    • Some of the inventory “bloat” BoM might cause can potentially be managed via the use of the viewer’s Outfits capability (although this obviously also adds to bloat with inventory links) or via a new form of applier system that utilises system wearables created at 1024×1024 resolution.

How much these may impinge on consumers’ willingness to adopt BoM remains to be seen.

Closing Remarks

Like all new capabilities, Bakes on Mesh will take time to gain understanding and traction. Also like all new features, it has its outright fans, and those who have – even before really getting to work with it in earnest – decided it is bad / wrong / pointless / a step back, etc.

I’m personally sitting in the middle. If it does what is claimed on the tin, and if it gains traction among mesh body and head creators (and several have been working on BoM for the 12+ months it has been in development) and clothing creators, then it could do its own little bit towards better “optimisation” (quotes used intentionally, as there is still a lot more that can be done in terms of optimisation across SL), and make things a little better for everyone.

But it will take time for Bakes on Mesh to mature in terms of general use – creators need to update their heads / bodies (although Slink is apparently ahead of the curve, and their new bodies are said to work with existing appliers; other creators may also be providing products / updates, I’ve just not encountered any as yet). Those making system wearables are going to need time to update to 1024×1024 resolution where preferred (if they haven’t already), and so on. And, most obviously, it will take a little time for the Bakes on Mesh code to percolate out to all TPVs.

In the meantime, some links to useful resources.