Made in SL: education and CNDG in Second Life

The CNDG FutureWork Institute, a 2-region setting within CNDG’s spread of some 42 regions for education, training and showcasing

On Thursday, August 29th, 2019, the Lab launched the first segment of the new Made in SL series of videos. Carrying the banner name Learning In SL, it would appear to be the first of a series (likely interspersed with segments covering other subject matter, as indicated by the original Made In SL series announcement) looking at the use of Second Life for educational / learning / training opportunities. Specifically for this piece, the work of the international and very successful Chant Newall Development Group, LLC (CNDG) is previewed.

CNDG is a Virtual Learning Environment (VLE) developer. We specialise in creating tailored, user-friendly VLEs, offering a fully supported service on all major virtual reality platforms.

We provide our clients with networked environments where instruction, learning activities, assignments, and synchronous and asynchronous exercises are available 24 hours a day, 365 days a year.

– From the CNDG website

Students on CNDG’s Environmental Studies course (run with Florida State University) take to the water in SL, simulating studies in climate change and ocean acidification. Credit: CNDG

CNDG is deserving of being surfaced in this series, as their track record is impressive – but perhaps largely unknown to Second Life users. The organisation operates an impressive 42 regions within Second Life, split between what might be considered “core” regions, sets of “demo” and “live” regions, and a series of specialised study regions – including sea / undersea settings. Not all of these are open to the public, being focused on servicing clients and students.

The organisation was founded in 2006, and has grown into one of the most respected providers of VLEs for clients – universities and other educational organisations – working in partnership with Pearson, the largest education company and book publisher in the world. This success also includes working with a number of commercial clients, including the likes of the US Department of Veterans Affairs, Honeywell Corporation and Pfizer, the pharmaceutical conglomerate, to provide various specialised environments and facilities in Second Life.

We are not interested in building completely automated, run-on-their-own, no-contact systems: we build environments that help educators communicate their expertise and their knowledge to students in a direct, impactful way … We have the technology needed to create more opportunities for all students at all levels and all over to enter into relationships with mentors and teachers as needed. Virtual Learning Environments which are live and networked give us the ability to break down those barriers, and bring people together across boundaries.

– CNDG CEO and founder, William Prensky

Scotty’s Castle, a recreation of the idiosyncratic villa in Death Valley, was, at the time, both CNDG’s first project and one of the most elaborate and realistic buildings in Second Life. With the help of Linden Lab, it brought CNDG to the attention of their first commercial client, America’s Public Broadcasting Service. Credit: CNDG LLC

Within SL, CNDG has developed and provided courses in biology, chemistry, economics and environmental science, working particularly with Florida State University and the University of Central Florida, which have seen in excess of 2,000 students participate in activities – with around 25,000 students having participated in programmes developed by CNDG as a whole over the past 12 years.

A key part of the courses and units supplied is that students can access the in-world environments through the CNDG gateway. This, like SL Community Gateways, provides sign-up, avatar selection and log-in at the main CNDG campus, where tutorial-style guides familiarise them with the viewer and their initial assignments. For clients – universities, colleges, and so on – CNDG can provide tailored courses based on a client’s own materials, while Pearson can provide supporting printed material for CNDG’s broader courses (including access codes to sign-in to the CNDG virtual environments), which can be made available to students through the likes of university bookshops.

Within the video itself – running to just under 2.5 minutes – we are introduced to CNDG and its work, touching on some of the successes and partnerships that have arisen from 12+ years of supplying networked educational solutions within Second Life. It’s a fascinating glimpse and well worth taking the time to watch – hence embedding it below for ease of reference.

Given the sheer breadth of educational uses SL is put to, I certainly hope that Learning in SL will – as seems to be implied by the title itself, as noted at the top of this piece – continue to be a theme within Made in SL as the series continues to evolve.

The Stolen Child in Second Life

The Itakos Project, The Stolen Child – CybeleMoon

CybeleMoon (Hana Hoobinoo) is renowned for her fabulous mixed-media art. It carries within it a richness of tone, a mixing and balance of light and shade, a depth of symbolism and – most poignantly – a wonderful framing of narrative that makes any exhibition of her work in Second Life utterly unmissable.

All of this richness, depth and framing is on display in full force at The Itakos Project, curated by Akin Alonzo, where Cybele presents The Stolen Child, a series of 15 images presented within a glade-like setting caught in the enfolding arms of ancient ruins, which has been specially built for the exhibit by Akin. Reached via the teleport door in the main foyer of the gallery, this setting is not merely a backdrop for Cybele’s art, it is part of the overall theme of the exhibition, designed through its form and lighting to increase the feeling of immersion in the story the exhibition presents.

The Itakos Project, The Stolen Child – CybeleMoon

This story is not offered as a linear tale; rather, there is a central strand of theme running through both setting and images. This strand leads us through Cybele’s images, linking them indirectly and without necessary order (although one is somewhat suggested by the circular placement of the pieces) as they form windows, if you will, into the underlying proposition of the exhibition; a proposition Cybele explains thus:

Fairies are not benevolent creatures at all, attracted by the strength and vitality of mankind, they kidnap children and especially newborns, or seduce (for the purpose of kidnapping) beautiful girls and boys.

She continues by noting the myth of the fairy lies rooted in a need, in times past, to rationalise the death of a child, be it at birth or within a short span of months or years thereafter: that the fairies had stolen the child away from an otherwise sad destiny. Within this weaving of fable, there was also menace: children with autism, depression, or other mental health issues were at times considered to have lost their souls as a result of eating fairy food.

The Itakos Project, The Stolen Child – CybeleMoon

Thus through Cybele’s art we are presented with a series of poignant scenes set within the framework of the dome of a night’s sky – the time when fairies might be abroad more than during the hours of daylight – and within a symbolic ring of ancient walls and arches. The latter carries with it an echo of the fairy ring of mushrooms that act as doorways to the fairy realms, or the idea of the faery castle hidden from mortal eyes by the form of a hill, and into which abducted children might be taken should they not take care.

That central strand running through the images – and the exhibition as a whole – takes the form of The Stolen Child, written in 1886 by William Butler Yeats, who was himself captivated by the faerie lore of Irish mythology. Through the words of his poem, we witness the bewitching song of the faerie folk, calling to children, tempting them away…

Come away, O human child!
To the waters and the wild
With a faery, hand in hand,
For the world’s more full of weeping than you can understand.

The Itakos Project, The Stolen Child – CybeleMoon

Cybele takes lines and words contained within the poem as titles for each of her pieces. Thus, each image forms that window I mentioned above, a glimpse into a scene, one that is often double-edged. On the one hand, it may seem innocent and rich in joy or tranquillity: young folk running through a meadow; a view across rolling hills at twilight while sheep graze; the innocence of blowing into a dandelion. On the other, the titles of the pieces hint at the darker element of fae intent: the stealing away of children, of leaving mothers bereft, to deny the young that chance to see sheep grazing at twilight or know the comforts of home and hearth, their young lives having been swept away with the promise of dances by moonlight in places forbidden by their ever-anxious parents.

To further accompany the exhibition, Cybele also provides a short story, together with additional images, that can be found on her (always enchanting) website. Also presented with the story and images is an audio recording of the marvellous Loreena McKennitt, who put the words of The Stolen Child to music. I’ll leave you with a video of the song from one of Ms. McKennitt’s live performances, and the note that this is a truly engaging and evocative exhibition; rich in narrative and atmosphere, and absolutely not to be missed.

SLurl Details

  • The Itakos Project (ATL, rated Moderate) – remember to take the teleport door in the gallery’s foyer to reach the exhibition!

Sansar Product Meetings week #35: feedback and Q&A

Courtesy of Sansar on YouTube

The following notes were taken from my audio recording of the August 29th (week #35) Sansar Product Meeting, which took the form of a general Q&A / feedback session. As always, key points are summaries, but please also refer to the official video.

The Nexus Release

The Nexus Release had been anticipated as being deployed this week, but bug fixing derailed the plans. It is now hoped it will make an appearance in week #36 (commencing Monday, September 2nd).

Fees / Commission Rates

In October 2018, Linden Lab announced that Sansar’s availability would be extended to the Steam platform at the end of the year, being made available under that platform’s Early Access programme (which duly occurred in December). As part of the October 2018 announcement, it was indicated that credit processing fees for Sansar Dollars would increase to S$250 to the US dollar, with early access creators receiving a legacy conversion rate of S$143 to US$1 until December 31st, 2019.

On Wednesday, August 28th, 2019, Linden Lab announced the conversion rate for all early access creators who process credit from Sansar will remain at S$143 to US$1 through December 31st, 2020 – a further year’s extension.
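To put the difference between the two rates in perspective, here is a minimal arithmetic sketch. The function name and the example balance are my own illustrations, not anything from Linden Lab; only the two rates (S$250 and S$143 to US$1) come from the announcements.

```python
def usd_from_sansar_dollars(amount_sd: float, rate_sd_per_usd: float) -> float:
    """Convert a Sansar Dollar balance to US dollars at the given rate."""
    return amount_sd / rate_sd_per_usd

credit = 10_000  # hypothetical S$ balance a creator processes for credit

standard = usd_from_sansar_dollars(credit, 250)  # standard rate: S$250 to US$1
legacy = usd_from_sansar_dollars(credit, 143)    # legacy rate: S$143 to US$1

print(f"Standard rate: ${standard:.2f}")  # $40.00
print(f"Legacy rate:   ${legacy:.2f}")    # $69.93
```

In other words, the legacy rate is worth roughly 75% more to a creator per Sansar Dollar processed, which is why the extension matters.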


Nexus, Prime Portal and Codex

  • People will still be able to access their Home Space once the Nexus Release has been deployed

    Will people always have to log-in to the Nexus, once deployed? What about issues of trolling / harassment? Won’t this increase load times – a scene plus all the avatars?

    • Yes, all users (including Lindens) will by default be logged-in to the Nexus (or an instance thereof).
    • The comfort zone controls, muting / blocking abilities will all be available to users.
    • People will still be able to access their Home Space if they wish / need to, and will be able to cancel the Nexus load to go directly to the Home Space at log-in, if they wish.
    • Avatars are also cached, so allowing for cache limits, if there are avatars previously encountered at the Nexus, this could help mitigate additional load time at log-in / a return visit.
    • Load times and interactions within the Nexus will be monitored closely as well, in case adjustments have to be made.
  • Is there a risk that by making people go to the Nexus whenever they want to discover potential destinations they have not yet visited, people will be less inclined to explore?
    • The hope is that no, it won’t, but it will be closely monitored when first deployed.
  • As avatars can impact an experience when first loading in, could the Nexus suffer with lots of avatars constantly logging-in to it?
    • This is a concern and again will be monitored.
    • It is likely that the Nexus will initially have a lower avatar limit prior to spawning a further instance to try to mitigate this (although it is acknowledged this could cause issues with meeting friends in the same instance of the Nexus).
    • Avatar loading has also been moved to a separate thread to try to minimise performance impacts.
  • Do creators have to go through the Nexus to get to their scenes and work on them? If logging-in to Sansar, yes.
    • However, once logged-in to Sansar, all other routes to edit a scene are available and no further visit to the Nexus is required.
  • Will the Codex make visited experiences visible as a single list, or will it have categories, or will something like the top 10 popular experiences out of those visited be listed first?
    • The Codex will just be a searchable list of places a user has visited.
    • The Prime Portal at the Nexus, however, will be more Atlas-like as it will contain all public experiences (aka “worlds”) and have categories, etc., as well as being searchable.
    • In respect of both, the Lab recognises that they will have to look at ways to improve the discoverability of both existing and new content.
  • Will people be able to set their own home location – such as one of their own experiences, rather than having to go to the Nexus or their Home Space? This idea has been noted.

Avatar 2.0 and Avatar Related

  • The reference files for the male avatar are the same as the avatar base shape, so yes, the male avatar does look skinny; the explanation for this is that LL are trying to make both female and male avatars similar in shape to allow easy sharing of clothing and accessories between the two (so a shirt for a male avatar should fit a female avatar, for example).
  • The similarity is also to help with body deformation (when that becomes available) – so that the base shape for both avatars has a “common ground” for deformation.
  • The base shape is, apparently, intentionally more stylised than realistic. It has particularly been noted by many, including blogger Ryan Schultz, that the female avatar’s proportions are off, which has prompted discussion on the Sansar Discord channel.
    • However, it should be noted that this has long been the case with SL avatars, but deformation, etc., means those who want a more realistic shape can have it, while those who want more out-of-proportion shapes (e.g. for an alien or something) can have those too.
  • The avatar 2.0 head / face will have (as previously reported) a series of presets to help users define a particular / preferred look.
    • In addition, accessories (e.g. earrings, nose rings, etc.) should move as facial deformation / different presets are used.
    • There will also be translation / rotation tools to allow accessories to be moved around and positioned.
    • Auto-skinning will be applied so that accessories should automatically follow a part of the face that’s in motion (so a lip ring should follow the lip movements, for example).
  • Will the bounding box for the avatar be increased in size? This is something the Lab is looking at, but no firm commitment to change as yet.
  • Will avatar 2.0 be scalable at release?
    • Uniform scaling, as with avatar 1.0, yes.
    • Non-uniform scaling, no, not until full body deformation is released.
    • Head scaling will be possible separately to the body.
  • Avatar texture caching: there is an ongoing project to improve how avatar textures are handled and cached / logged. The work should be surfacing in one or two near-future releases.
    • An initial element of this will be a cap on per-avatar textures within an experience. Avatars entering the experience with their texture load below the cap will be seen as intended, those over the cap will have textures downsampled to bring them under the limit.
    • This might be extended to be handled on a client basis – so those with more powerful systems can crank the limit up, for example.
  • Will avatar 2.0 have a user-adjustable height offset to prevent feet / shoes appearing to sink into the ground? Not currently planned. If there are significant issues, test cases should be forwarded to the Lab to see if adjustments to the IK system are required.

General Q&A

  • The usual five:
    • High heels for avatars: not on the roadmap at present.
    • 3D mouse support: not on the roadmap, will more likely be a general project to support game pads, joysticks, etc., if done.
    • Valve Index support: something the Lab wants, but no time frame; when the Lab does get to work on it, it is estimated to be around 2-3 weeks of work.
    • Wiki: the more user-submitted guides that are made to the forum public documentation area, the more weight is given to the case for the Lab creating and offering / managing a wiki.
    • Persistence: the Sansar devs would love documented requirements on where and how persistence might be needed / used, particularly as persistence is under consideration at the moment.
  • Collaborative building in Sansar plus the ability to build experiences for others:
    • Collaborative building within a single space is still very much on the cards. It is still a question of when the Lab can get to it, and also how to manage the permissions (who can do what within the scene).
    • Building for others is seen as something the Lab also want to make possible – with creators able to build scenes or entire experiences and making them available through the Sansar Store to others – but again, no time frame on when this will happen.
  • Moderation within an experience (e.g. providing others with the ability to ban people from a scene, to administer / manage it on behalf of the owner, etc.). Also on the roadmap, but no overall time-frame.
  • Vehicles:
    • Something the Lab would like to do, but there are complexities: how is control of a vehicle handled? How to provide a good level of responsiveness to vehicle handling through the scripting system, etc.
    • Initially, what is being proposed is more “remote control” style vehicles, operating at lower speeds – presumably rideable, rather than just mini RC cars and things.
  • Event related:
    • Will the ticketing system be exposed to creators, and will creators be able to use it to charge for entry to events?
      • No plans to expose the ticketing system or allow creators to charge for events (both of which appear to be a change from past statements).
      • Major reason cited is complexity of transactional management and tracking between users.
      • Suggestions for potential ticketed events (paid or unpaid) should be proposed to Linden Lab.
    • Can event creators have access to the number of people who register interest in an event (e.g. to help determine if dates / times need to be revised in future similar events if 50 register interest and only 6 show up)? Yes, this has been considered by the Lab, along with other events improvements; it’s just a case of when and how things will be tweaked.
    • Can there be special LL-derived physical world prizes (e.g. “Sansar Early Access” t-shirts, etc.) that people can win, helping encourage engagement and involvement in events within Sansar – and help promote the platform? Will be taken into consideration.
  • Why is there an option to set a re-sale price for clothing when we cannot actually allow others to re-sell it? Because the capability hasn’t been released, and requires the on-going work in integrating the permissions system with the avatar system – but the ability to allow re-sale is coming.

The PrimPossible 1 LI Bento Piano

The PrimPossible Bento Mo-Cap 1LI Piano, shown in the built-in white finish option

Ample Clarity, the owner of the PrimPossible brand, made his mark producing 1-prim household items, initially using sculpties (not good for rendering, etc., but nevertheless impressive for their time for those pushed for LI) and more recently for doing much of the same with mesh. He’s well aware of my fondness for the piano, and so recently sent me a beta version of his new 1 LI mesh Bento baby grand piano featuring a selection of motion captured animations, and I decided I’d take it for a quick spin.

I cannot speak to the packaging of the piano, as it was delivered to me unboxed. However, in terms of shape and styling, it follows the expected form for a grand, and rezzes with the lid open and music stand raised. The former will tend to close when an avatar sits on the stool, but typing “open lid” (no quotes required when typing) in open chat will set it open once more.

Given this is pretty much a single mesh, there are some elements that can catch the eye a little: the curves of the housing rim perhaps aren’t as smooth as seen on other piano models; the detailing of the soundboard / plate / strings is a little basic compared to other piano models I’ve tried (but also better than others). Certainly, the keys are nicely raised, and the texturing of the ivory gives them something of a look of having been used, rather than appearing utterly pristine – a touch I appreciate in my SL pianos.

The PrimPossible Bento Mo-Cap 1LI Piano, shown in the built-in white finish

Sitting at the piano will open the main menu, the top level of which provides access to the piano’s impressively broad range of animations. Which animations are available depends on whether you have the Adult or PG variant. For the PG variant, which I have, the animations are broken down into the following categories / sub-menus:

  • Bento: general single (male or female avatar) and couples sitting animations that make use of Bento animations. These can place avatars on or around the piano in a variety of animated poses.
  • Non-Bento: similar to the above in terms of general sits / cuddle, but also with non-Bento piano playing animations (female, male and duets), and a selection of “friends” animations that again place avatars in poses for chatting, etc., around or on the piano.
  • Bento piano: a set of four playing styles created for Bento hands and finger movements.
  • Bento Mo-Cap: a set of single and duet playing styles for Bento hands, created using motion capture software.

The Bento piano animations offer sufficient range for playing most of the pieces of music included in the piano, with Piano Boss adding a little athletic fun to the start of any playing for those so inclined! The Mo-Cap options (two single pianist options for “standard” and “tall” avatars, plus three duet pairs) are, like the Bento animations, fluid, and offer perhaps a more natural placement of hands whilst playing (as they have been motion captured).

Bento hand animations

A total of 24 pieces of music are supplied, the majority of them classical and public domain (Ernest Gold’s theme from Exodus would have entered public domain in 2017, had it not been for the 1978 change to US copyright laws….). Accessible through the Extras > Music Menu option, these are a familiar and popular selection – Bach, Beethoven, Brahms, Grieg, Mendelssohn, Satie, etc., – with a touch of Gershwin.

The music menu includes a Start / Stop option (so you can play the piano sans music, if you have music playing over the stream, etc.), plus options for selecting / playing / looping pieces, and for adjusting the piano’s internal playback volume. I confess that some of the pieces seemed to suffer in places from recording levels perhaps being set too high, with – to my ears at least – a noticeable distortion.

When playing music, it is also possible to alter the playing animation to better match the piece selected, if desired, and the duets options offer a nice sense of shared moments, although having a couple of additional pieces obviously suited to duet play might be nice. For those who enjoy their piano to play by itself, this is possible: simply use the music menu to select the music and play mode and then click Play (if the piano isn’t already playing). You can do this either whilst seated at the piano or with a touch to bring up the menu when standing.

Also included in the menu are options to set permissions on who can use the piano (owner, group or anyone), plus texturing options, the means to adjust the level of shine, the ability to set the piano to phantom (and avoid bouncing into the air when standing up!), and the ability to adjust your sitting position. The latter brings out one of the little niggles I have with all pianos that have both the instrument and the stool as a single item: as the stool is “fixed” relative to the piano, I can never quite get my avatar to what I feel is the optimal position for playing.

A final thing to note about this piano is the LI. The single LI count of the piano applies to when it is not in use; as soon as an avatar sits at the piano, the LI count will increase to 3. This is necessary due to the nature of SL and sit targets: the PrimPossible piano requires an additional (and invisible) “shell” to be rezzed with it in order for avatars to be correctly sit targeted. This shell is automatically deleted when the piano is not in use, returning it to the advertised 1 LI. So, if you opt for this piano, do keep this in mind should you note the LI count changing – it’s not an issue / error.

Under the lid, the detailing is perhaps a little limited compared to some other piano makes, but at least as good as others – and remember, this is a single LI mesh object

The PrimPossible Bento piano is available in four versions and price points:

  • No Copy, Mod or Transfer PG at L$800 or Adult at L$950.
  • Copy, No Mod / Transfer PG at L$2,000 or Adult at L$2,400.

These prices are also listed as being “introductory beta”, and I understand that further animations and Mo-Caps will be added over the coming months. Even so, when comparing the L$2,000 price tag for the Copy version to something like the Culprit Sonata Bento Baby Grand (supplied Copy, No Mod / Transfer, and which I reviewed in March 2019), that’s a hefty difference should you be in need of a Copy version of a piano. Were I to give a very quick, high-level contrast between the PrimPossible and the Culprit, it would be:

  • PrimPossible: lower rendering and server costs (4576 and 1.0 respectively), lower LI (1 or 3), but fewer music options (24) and playing styles (for the present). Includes non-playing animations.
  • Culprit: higher LI (11) with a higher level of detail (particularly the soundboard / plate / strings / hammers  / dampeners), more music options (56) and playing styles. Higher rendering / server costs (8561 and 10.7 respectively).

The PrimPossible Bento Mo-Cap 1LI Piano

As it is, the Culprit wins out for me for general home display / use. I find the playing styles more varied (and some more reflective of piano playing techniques) – although it’ll be interesting to see what else is added to the PrimPossible model as the beta progresses. As someone who loves the grand piano, I also appreciate the amount of work put into the Culprit’s “innards”, and I’m not sure I like seeing one clambered all over / sat on, so the additional sitting animations in the PrimPossible model, while potentially fun, hold no real appeal here.

For those who might be pushed for LI, and given more is to come with the PrimPossible piano, it is certainly worth a look and consideration, given the range of prices and the additional animations. As it is, the PrimPossible has been added to my Linden Home houseboat (where it will admittedly be more decorative than functional), where it looks quite at home.


2019 SL User Groups 35/2: Content Creation summary

(fae forest), July 2019 – blog post

The following notes are taken from the Content Creation User Group (CCUG) meeting, held on Thursday, August 29th, 2019 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, meeting SLurl, etc., are usually available on the Content Creation User Group wiki page.

Bakes on Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies and heads. This involves viewer and server-side changes, including updating the Bake Service to support 1024×1024 textures, but does not include normal or specular map support, as these are not part of the existing Bake Service, nor are they recognised as system wearables. Adding materials support may be considered in the future.


Current Status

  • BoM is now live.
  • There are some local edit issues – some already noted in the viewer release notes – that can produce odd results when using the appearance editor, which correct themselves on exiting the appearance editor and when baked via the Baking Service.
Cathy Foil has noted this glitch that can occur with Baked_Lower wearables on BoM: when editing locally in the appearance editor, the leg does not appear to be masked correctly by the wearable (which is also incomplete at the waist). Once baked, however, the wearable appears correctly applied and in full (r). Credit: Cathy Foil.
  • Some of these local edit issues may not be specific to Bakes on Mesh, but may be more noticeable as a result of BoM, and the Lab is looking to resolve them.

Animesh Follow-On – Project Muscadine

Project Summary

Currently: offer the means to change an Animesh object’s size parameters via LSL.

Current Status

  • The simulator support on Aditi (the beta grid) – DRTSIM-421 (region Bakes on Mesh) has been updated, but there are no feature changes within it.
  • The project viewer has been merged with Bakes on Mesh (as the release viewer) and is passing through the Lab’s QA, and so should be appearing soon.


ARCTan

Project Summary

An attempt to re-evaluate object and avatar rendering costs to make them more reflective of the actual impact of rendering both. The overall aim is to try to correct some inherent negative incentives for creating optimised content (e.g. with regards to generating LOD models with mesh), and to update the calculations to reflect current resource constraints, rather than basing them on outdated constraints (e.g. graphics systems, network capabilities, etc).

Current Status

  • Work has now resumed.
  • First part of this is to continue data-gathering and look at re-aligning some figures based on the changes made for Animesh.
  • Thus far, the project primarily comprises enhanced logging to assist the Lab in data collection, allowing information on the overall cost of a specific avatar or in-world object to be gathered.
  • Once enough data has been gathered across a broad enough spectrum of content to give the Lab confidence they have a good understanding of things, work can start on adjusting the cost calculations for rendering, etc.
  • It’s important to note that any user-viewable changes as a result of ARCTan are still some way off, and the Lab will be staging things to let users know what is happening when, and what it is likely to impact.
  • There was a lot of general conversation towards the end of the meeting on what people hope ARCTan will do (e.g. forcing creators to make proper use of avatar LODs, etc.).

Environment Enhancement Project

Project Summary

A set of environmental enhancements (e.g. the sky, sun, moon, clouds, and water settings) that can be set at region or parcel level, with support for day cycles of up to 7 days and sky environments set by altitude. It uses a new set of inventory assets (Sky, Water, Day), and includes the ability to use custom Sun, Moon and cloud textures. The assets can be stored in inventory and traded through the Marketplace / exchanged with others, and can additionally be used in experiences.

Due to performance issues, the initial implementation of EEP will now likely not include certain atmospherics such as crepuscular rays (“God rays”).


Current Status

  • Work continues on rendering bug fixes.
  • The number of remaining issues is “trending downwards”.

Misc. Items

  • The upcoming Voice RC (or project) viewer mentioned at the last TPVD meeting still has a couple of issues preventing it from surfacing in the Alternative Viewers page.
  • The Umeshu Maintenance viewer (merged with Bakes on Mesh) could be promoted to de facto release status as early as week #36 (commencing Monday, September 2nd), in a break from the Lab’s preferred 2-week gap between release promotions.
  • Date of next meeting: Thursday, September 12th, 2019.

Lab Gab episode 1 – a summary

Strawberry and Xiola Linden with “Gabby” the cat (name TBC) on the Lab Gab set

The first episode of Lab Gab streamed on Wednesday, August 28th, hosted by Xiola and Strawberry Linden, and drew a comfortable audience. I intentionally stayed out of the chat but saw a few familiar names there.

Running to 35 minutes, the programme was not deeply revelatory with regards to Second Life news – that will likely be for future segments of the show. However, it did offer a comfortable, if slightly frustrating, start – I say frustrating because the broadcast gremlins raised their little heads to try to keep Strawberry from being heard over the stream.

The first few minutes – up to around the 9 minute mark – of the show covered Xiola and Berry’s roles at the Lab, and folded in notes on the Lab’s various social media presences and also the Destination Guide for finding places to visit.

Bakes on Mesh (BoM) then got a plug, being the latest update from Linden Lab. Berry attempted to clear up confusion as to what it is (simply put: a means to apply system avatar wearables to suitably prepared mesh bodies and heads in a manner somewhat akin to how they used to be used with the system avatar, albeit with the option of supporting higher resolution textures than those available for the system avatar).

The official blog post on the release is available here, and I attempted to cobble together a basic primer on the subject (although I suspect that in part might also veer a little too close to the technical).

In talking Bakes on Mesh, Berry underlined that BoM will require mesh body and head creators to provide updates to their offerings that are correctly flagged to use Bakes on Mesh natively. She also pointed to the experimental Omega Bakes on Mesh applier system that I also referenced in my primer article. This is available from the Omega in-world store. However, I was interested to note that it did not work for Berry – and I actually found it less than satisfactory when testing. Overall, the results seem variable, with some having absolutely no issues with it, and others (like Berry and I) encountering problems – hence, again, why it is called “experimental”. For those who wish to try it out, step-by-step instructions, courtesy of Theresa Tennyson (who is not associated with Omega, so please don’t crowd her with questions if you do have issues!), are available here.

Additionally, Berry pointed to an alternative HUD (L$125) which apparently works just fine, although I’ve yet to try it myself.

One of the key points with Bakes on Mesh is that it should enable body / head creators to make their products less complex, simply because they do not need to include so many onion skin layers – hence the real advantage with BoM lies in updates to existing bodies and heads, which the various creators will hopefully make available in the coming weeks (Slink has already updated). Thus, even for those who don’t use applier systems for clothing that much, Bakes on Mesh is important, as adoption of updated bodies / heads has the potential to help reduce general rendering load for everyone.

Around the 14:40 mark, Xiola indirectly replied to some speculation on my part (raised when writing about Lab Gab ahead of the show), when I wondered:

I also admit to being curious as to whether the show might at some point down the road – depending on its longevity – also occasionally “hop over the fence” into Sansar or even perhaps take some “behind the scenes” (desires for things like privacy allowing among staff) looks at the Lab itself. “Lab Gab” seems to be too broad a title to remain purely about Second Life (although there is a lot to explore on that subject alone), even allowing for it being intentioned as a “catchy” name for the show.

By way of “reply” (I’m not sure whether or not Xiola’s comment was driven by my speculation), Xiola noted:

I know the name of our new show here is “Lab Gab” – we just really liked how that sounded … but currently, short-term, our plans are to definitely focus on Second Life, although obviously we work for Linden Lab and Linden Lab also has Sansar … but the focus of this show is, initially and short-term, Second Life and the Second Life Community.

After some general chit-chat around giveaways, the show turned to a mini Q&A session from around the 19:15 mark, some of the questions from which are summarised below:

  • Linden Lab is currently working on a communications / companion app for iOS.
  • When are last names coming back?
    • Still being worked on; there are a lot of variables involved in terms of back-end systems and complexity.
    • Again, those interested can catch the last formal update I have (including comments from Oz Linden) in the First and last names section of my coverage of Oz’s appearance at SL16B with April Linden in June 2019. This also addresses a number of questions on the topic.
  • Linden Homes:
    • There is now a weekly roll-out programme (Mondays, Wednesdays, Fridays) when homes are made available through the Linden Homes web page.
    • New types and styles of Linden Homes are still in the works, but no release dates.
    • Best way to stay up-to-date is to keep an eye on the official blogs and on the Linden Homes update thread on the forums.
  • Will Lab Gab include interviews with Lab staff? – Yes.
  • Upcoming major updates:
    • Bakes on Mesh is now out, per above.
    • The Environment Enhancement project (EEP) is progressing towards release – but no definite time frame other than Soon™ as bugs are being stomped on.
    • Not directly mentioned in the show are the new Animesh enhancements work (Project Muscadine) and, back on the horizon, the restarting of ARCTan – to name but two of the more user-facing projects; there is also a lot of under-the-hood work going on.
    • Details on projects like this can be found in my (generally) weekly Content Creation User Group meeting summaries and also my other SL tech summaries.

An interesting start to the series, nicely relaxed, and a segment where the voice issues didn’t spoil things too much. Some nice teasers were dropped on future shows and direction which suggest Lab Gab will be a good option for tuning into every couple of weeks. In the meantime, you can catch the entire first show below.