SL project updates week 25/2: Content Creation UG w/audio

The Content Creation User Group meeting, at the Hippotropolis Camp Fire Circle (stock)

The majority of the following notes are taken from the Content Creation User Group meeting, held on Thursday, June 22nd, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Audio extracts are provided within the text, covering the core projects LL has in hand. Please note, however, that comments are not necessarily presented in the chronological order in which they were discussed in the meeting, but are ordered by subject matter.

Server Deployments Week 25 – Recap

As always, please refer to the server deployment thread for the latest updates.

  • On Tuesday, June 20th, the Main (SLS) channel was updated with a new server maintenance package (#, containing fixes to help with the caps (capabilities) router (see here for details).
  • On Wednesday, June 21st, the RC channels were updated as follows:
    • BlueSteel and LeTigre should receive the same server maintenance package (# containing internal fixes
    • Magnum should receive a server maintenance package (# intended to fix BUG-100830 (“HTTP_CUSTOM_HEADER no longer works on RC”) and BUG-100831 (“Lelutka Simone bento head spits a script error when attached on regions (Magnum & Cake)”).

Animated Objects

Vir has been working to ensure that animated objects using the avatar skeleton scale in a reasonable way, that linksets correctly reference the same skeleton, and that things are handled correctly when they are attached or detached. He’d also be interested in hearing from makers of the “current generation” of pets on how they work – how do they maintain ground contact, how they follow along, how the physics is getting managed – so that he can look into trying to make animated mesh objects operate in a compatible manner.

So, if you are a pet maker and can help Vir in this, please either attend the Content Creation User Group meetings, or contact him directly.

Attaching Animated Objects to Avatars and Avatars to Animated Objects

One of the popular aspects of pets today is the ability to attach them to an avatar (so you can carry them, have them sitting on your shoulder, etc.), and this is seen as a potentially important aspect of animated mesh. However, attempting to do so does present issues, as it would mean linking two avatar skeletons in some manner, something that is not currently possible. While there are some potential ways this could be done, it could add considerable overhead to the existing project, and also brings potential challenges with it – such as ensuring an attached skeleton is correctly oriented, determining the potential performance hit, etc.

Similarly, BUG-100864 suggests a means of going the other way – linking an avatar to an animated object – such as being able to walk up to a free-roaming horse on a region and being able to mount it and ride it, for example. However, this also raises some of the same concerns.

While not ruling either out, Vir is focused on bringing forward a relatively basic means of animating mesh objects using the avatar skeleton, one which can offer a series of potential uses while conceivably allowing existing mesh creations (such as pets) to easily be converted to use it. As such, he sees it as a foundation project, which can then be expanded to incorporate other capabilities in the future, rather than trying to pack everything into a single project which could run the risk of very long development times or becoming overly complicated with all it is trying to achieve right from the start.

Baked Textures on Mesh

Work is still focused on the baking service infrastructure updates required to support baking textures on mesh avatars. These are quite extensive, involving changes to the underpinning tools, the servers (including updating Linux), and so on.

Rigging To Attachment Points

There has been some confusion of late as to whether rigging to attachment points is allowed or not. From the Lab’s perspective, it has not been allowed for uploads since the introduction of Bento, but should still work for legacy items. However, what appears to be a server-side glitch over the last couple of weeks seems to have exacerbated the confusion.

Vir’s recommended rule-of-thumb is for TPVs to test against the Lab’s official viewer and ensure behaviours match, otherwise confusion could occur down the road once the current glitches have been corrected. To help with this, he’s going to refresh his mind on what limitations are enforced server-side, and hopefully bring a list of them to the next meeting to help TPVs ensure they are following the requirements in order to avoid future problems.

Other Items

Mesh Body Dev Kits / Clothing Making / “Standardised” Mesh Avatar

This topic took up the core part of the meeting, and as such, the following is an attempt to précis the core points into a readable summary.

At the moment, all mesh bodies in Second Life are unique to their creator, utilising their own core shapes and skin weightings, which have a considerable amount of IP bound up in them. Because there is no “standardised” mesh model available in Second Life, body creators need to provide developer kits to mesh clothing and attachment makers, which include this core information – the skin weights (in Blend, Maya, DAE or OBJ files) for rigging clothing, and the shapes – which potentially makes it very easy for someone to create their own avatar bodies.

To try to reduce this risk, mesh body makers tend to have license agreements clothing makers are required to accept, and sometimes limit who may or may not be deemed eligible to obtain such a kit. This has caused some friction / frustration in the clothing making community.

One suggestion put forward to help reduce fears on the part of mesh avatar creators and allow clothing makers to more readily support avatar body brands was that avatar makers should perhaps consider offering only the body shape to clothing makers – and then offer a fee-based rigging service. This would remove the need for avatar makers to give out their skin weight files, offer them a revenue stream, and allow clothing makers to more equitably create clothing for the more popular mesh bodies.

While there are no projects on the roadmap aimed at the SL avatar system, two other ideas were put forward which Vir agreed could be worth consideration down the road:

  • One is a suggestion that LL look to emulate the ability in Maya and Blender to copy skin weights from an avatar model to an item of mesh clothing by running an algorithm to match the weighting from the avatar to the nearest vertices in the clothing. This would allow the clothing to fit almost any mesh body “automatically”, removing the need for clothing makers to specifically weight their clothing to each of the mesh bodies they wish to support.
  • The development of a new “SL mesh avatar” designed to operate alongside the existing system avatar (so no content breakage for those preferring to continue using the current system avatar). If this avatar had a sufficient density of vertices, it offers two potential uses:
    • Mesh body makers could use its weightings with their custom shapes to produce individually unique mesh bodies, but which all have a “standardised” set of skin weights, reducing the amount of work involved in creating them (or they could continue to use their own custom skin weights if they wished).
    • It could offer clothing makers a single source of skin weights for clothing, simplifying clothing making, which – if combined with the vertices matching algorithm mentioned above – would help ensure the clothing “fits” custom weighted mesh bodies.

The vertex-matching algorithm idea might be the more difficult of these two to implement – were either to be considered. However, the development of a mesh avatar that could exist alongside the system avatar could have a lot of merit and help “standardise” the more technical aspects of mesh avatars without impacting their individual shape / look.
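To make the weight-transfer idea concrete, here is a minimal Python sketch of nearest-vertex weight copying. The data structures and function name here are purely illustrative (this is not any actual Second Life, Maya, or Blender API), and production tools would use a spatial index plus distance-weighted interpolation across several nearby vertices rather than copying from a single nearest one:

```python
import math

def transfer_weights(body_verts, body_weights, clothing_verts):
    """Copy each clothing vertex's skin weights from the nearest body vertex.

    body_verts     -- list of (x, y, z) positions on the avatar body mesh
    body_weights   -- per-vertex dicts mapping bone name -> weight
    clothing_verts -- list of (x, y, z) positions on the clothing mesh
    """
    result = []
    for cv in clothing_verts:
        # Brute-force nearest-neighbour search; real implementations
        # would use a k-d tree or similar for large meshes.
        nearest = min(range(len(body_verts)),
                      key=lambda i: math.dist(cv, body_verts[i]))
        # Copy the weighting of the closest body vertex.
        result.append(dict(body_weights[nearest]))
    return result

# Toy example: two body vertices weighted to different (hypothetical) bones.
body = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
weights = [{"mPelvis": 1.0}, {"mChest": 1.0}]
clothing = [(0.1, 0.0, 0.05), (0.0, 0.1, 0.95)]

print(transfer_weights(body, weights, clothing))
# → [{'mPelvis': 1.0}, {'mChest': 1.0}]
```

Each clothing vertex simply inherits the weighting of whichever body vertex sits closest to it, which is why a sufficiently dense avatar mesh matters: the denser the body, the closer the nearest vertex, and the better the clothing deforms with the skeleton.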

Further, as mesh objects can support multiple UV sets, it would be possible for such an avatar to use the legacy UV map used to define the texture spaces on the three parts of the system avatar (thus allowing it to use existing skins, etc.), or it could support more “advanced” UV maps (so skin creators could finally design skins with two arms, rather than having the one arm “mirrored” on the avatar, as is currently the case).

Why isn’t Scaling Bones by Animations Allowed?

Scaling bones using animations has never been supported in SL, although Vir isn’t clear on why (pseudo bone scaling via animations has been possible through attachment point scaling or animating the point positions). However, one of the things that makes designing avatars harder is having multiple ways to manipulate any aspect of a bone, because of the potential for conflicts. An example of this is bone translations, which can be affected by both animations and the shape sliders, and so can cause issues.

However, during the Bento project, the advantages of allowing translations through animations were such that the Lab opted to permit it, even allowing for the potential for issues. As scaling bones through animations could bring a similar level of complexity to avatar design (bones can obviously be scaled via the sliders), this could be the reason it hasn’t been supported. Currently, this is unlikely to change, if for no other reason than that it would require a change to the animation format, which currently has no means to interpret bone scaling.


Singularity’s look at Sansar and Second Life

Writing in Singularity Hub, the on-line publication of Singularity University, Aaron Frank, principal faculty at the university lecturing on augmented and virtual reality, offers an interesting piece that covers both Second Life and Sansar.

New Virtual World Sansar is Ready to Pick Up Where Second Life Left Off, which appeared on Friday, June 23rd, may have another slightly misleading title (see Sansar: thoughts around Kotaku’s hands-on). It starts with a look back to May 2006, when the story of Anshe Chung’s rise to millionaire status marked her appearance on the cover of Bloomberg Businessweek. The event marked the start of SL’s broader rise in the consciousness of the media (and the general public), and Mr. Frank quickly runs through what followed, culminating in the so-called “failure” (i.e. writing-off by the media) of Second Life – before pointing out that for a “failed” venture, it is still here and still generating an economic throughput sufficient for some users (land owners in particular) to draw down a collective US $60 million in income from the platform in 2016.

Aaron Frank

True enough, there is nothing new here for those of us familiar with Second Life and the Lab’s popular talking-points for the platform: SL’s 2016 “GDP” of half a billion dollars, the use of the platform by Texas A&M University, and so on, all doubtless familiar to many SL users. But as well-trod as these points might be, it’s still good to see another writer willing to look openly at the platform without feeling the need to rub against the “seedier” (as others might have it) side of Second Life.

After his look at Second Life, Mr. Frank takes a dive into Sansar – carefully noting it is the Lab’s new venture while avoiding any reference to it being in any way a “replacement” – because, as we’ve established elsewhere, it isn’t.

Here we’re again treated to a run through familiar territory: the description of spaces visited, the nod towards emerging mechanics on the platform (bouncing and throwing basketballs), the fidelity of the rendering, the spatial sound, etc. Before moving to equally familiar statements about the core differentiator between Sansar and SL (other than the former’s “built for VR” aspect) – the underpinning revenue generation model, before touching on the familiar analogies between Sansar and WordPress.

But within the familiar there are a couple of points worth noting, and which may have been missed along the way, despite being mentioned elsewhere. The first is the re-affirmation that Sansar spaces could be as big as four kilometres on a side – the equivalent of 16 SL regions by 16 SL regions. While this has previously been intimated, it still seems to be something that is missed in some quarters, so seeing it referenced again here is no bad thing.

The SingularityHub article reminds us that spaces in Sansar could cover an area equivalent to 256 regions in SL: four kilometres on a side

The other element is the confirmation that scenes can be interconnected. This is something that has again been stated by the Lab in the past, but is also something that may have been missed in SL circles – a certain amount of the negativity towards Sansar has been the idea that spaces within it are all “standalone”.

Obviously, “interconnected” does not mean Sansar spaces are in any way contiguous with one another as Mainland and places like Blake Sea in Second Life are. However, it does suggest the ability to at least hop from one Sansar experience to another in a similar manner to teleporting in Second Life. In this, it’s interesting to note that Ebbe Altberg himself first referred to teleporting between Sansar spaces in an interview a year ago, and that Cecilia D’Anastasio referred to teleporting between Sansar scenes in her piece for Kotaku (linked to earlier in this article). Of course, this could mean going via the Sansar Atlas, which we’ve already seen – but “teleporting” does seem to suggest a more direct route than leaping via a directory of spaces.

Also noted in the article is something I’ve touched on before – that “creator” in Sansar has a wider meaning than we’re accustomed to seeing in Second Life. In the latter, “creator” is pretty much focused on those who design and make the goods we use to dress our avatars and furnish our land; it is not a term closely linked with those who obtain land in SL and design environments using the goods they have purchased from creators. Within Sansar, the term clearly applies to both in equal measure, which also offers a broader scope for the idea of “democratising content creation” (after all, a region, even if designed using goods purchased from others, is as much a part of SL’s content as the goods themselves).

The Sansar Marketplace. Credit: Linden Lab, via SingularityHub

Towards the end of the article, there is a discussion on the cultural changes technology has brought about, with Mr. Frank noting that, “Society has become native to virtual living.”

And we have; the creative freedoms we have today to socialise across geographic boundaries, to share our thoughts through blogging, our images via photo sharing, our lives through video – and to combine all three – really didn’t exist on the scale we see today when Second Life started out. But that doesn’t mean that the world at large is ready to leap into Sansar (or any similar platform), be it with or without VR hardware, simply to carry on / do more of the things people are already doing through other means. As such, Sansar could – in terms of the general populace and acceptance / use – face as big a mountain to climb as Second Life did.

But then, if Sansar lays claim to enough of those market verticals where it appears to have clear potential, and can leverage revenue from them, it need not actually go “mass market” in the manner once envisioned for Second Life in order to be a success; it could do very nicely as a lead player in a variety of market niches.

Caravanserai: a Silk Road Celebration of the Arts in Second Life

Caravanserai 1: a Silk Road Celebration of the Arts

The desert sands lead on, but look ahead –
a palace of bright tents and green date palms
where camel backs can crumple knee bones down.
An oasis waits beneath the desert moon.

East meets west in a special celebration on Saturday, June 24th, 2017. Commencing at 07:00 SLT, Caravanserai 1 sees Dr Chris Mooney Singh (Singh Albatros) of The Writers Centre, Singapore and Scott Grant (Kaylee West) of Monash University in Melbourne, Australia come together to create an event celebrating the connective cultural thread that is the legendary Silk Road.

In a desert oasis setting, they have brought together artists from around the grid for storytelling, drama, song, and machinima, to be followed by a panel discussion on the value of virtual arts in education.

Caravanserai’s intent is to celebrate the sharing of different cultures by transporting guests to an earlier time when the Silk Road contributed hugely to artistic and cultural understanding and world culture. Travellers going in both directions would seek shelter in an oasis or a caravanserai: an inn with a central courtyard for wayfarers in the desert regions of Asia or North Africa. These places of rest supported the flow of commerce, information, and people across the network of trade routes covering Asia, North Africa, and South-east Europe, especially along the Silk Road. In that pre-television, pre-internet time, it is easy to imagine people from many different traditions gathered around a fire with refreshments, sharing songs and tales of their lands and travels.

Caravanserai 1: a Silk Road Celebration of the Arts


This free performance is open to all Second Life residents, is part of this year’s SL MOOC, and benefits Feed A Smile.

For the event, much of which is presented in Voice, Singh narrates the programme, featuring his own adaptation of The Elephant and the Six Blind Men, original songs, special musical guest gypsy violinist Navtali Torok, and James Elroy Flecker’s 1913 verse drama The Golden Road to Samarkand. He is also joined by actors Pip Albatros, Corwyn Allen, and Mavromichali Szondi.

There will be a screening of a machinima based on the Edwin Thumboo poem Ulysses by the Merlion, and there will be time for questions and answers as part of the panel discussion portion of the programme. The event will close out with a final song before the caravan comes full circle, and guests disembark for their native lands once more.


SL MOOC is a month-long cavalcade of education which focuses on active learning, reflection, sharing, and collaboration. The aim of the courses and workshops offered is for the participants to learn through meaningful connections and social interactions, building on the strength of virtual worlds as learning tools.

About Feed A Smile

Feed A Smile is a programme run by Live and Learn Kenya (LLK) to provide nutritious warm lunches for over 400 children every day, paid for entirely from donations to the project. It is part of a broader programme managed by LLK, which includes finding sponsors to finance the education of children in Kenya and helping to provide them with everything they need: uniforms, shoes, text books, school supplies, etc., and the building of the Nakuru school, Kenya.

In addition, the organisation also provides medical and dental care for children, including check-ups and vaccinations. 100% of the donations received by LLK are transferred directly to Kenya to care for children, provide education, medicine, food, shelter and foster care. Nothing is lost to salaries, fees or administrative costs at LLK.

SLurl Details

  • Caravanserai 1: A Silk Road Celebration of the Arts (Monash University, rated: General)

SL14B Meet the Lindens: Landon Linden

Landon Linden with Saffia Widdershins

Meet the Lindens is a series of conversations / Q&A sessions with staff from Linden Lab, held as a part of the SL Birthday celebrations in-world. They provide opportunities for Second Life users to get to know something about the staff at the Lab: who they are, what they do, what drew them to Second Life and the company, what they find interesting / inspirational about the platform, and so on.

Tuesday, June 20th saw Landon Linden sit down with Saffia Widdershins, and this article hopefully presents some “selected highlights” of the chat, complete with audio extracts from my recording of the event. The official video of the event will be added at the end of this article, once available from Linden Lab.

About Landon Linden

Landon Linden joined Linden Lab in August 2008, and is currently VP of Operations and Platform Engineering, based in the Lab’s Virginia offices. He has led the transition of live operations and the production platform to support the company’s new products. With a BSc in chemistry, he worked as a research chemist before moving into the IT sector. Since then, he has worked in telecommunications, launching numerous products.

For SL users, he’s possibly most recognised as the man responsible for re-opening the technology blogs the Lab publishes after major issues / outages occur. These had dried up after FJ Linden departed the Lab in 2011, and Landon revived them in 2014; April Linden has since taken over core responsibility for these posts.

Landon loves building large-scale systems, and says his passion for virtual worlds is fuelled by his interests in sociology and economics. As he notes, the nature of the work his teams undertake – running the services, architecting them, improving them, migrating them where appropriate, etc. – is such that most of it goes sight unseen by users, unless there is a problem.

The Discussion

The initial part of the discussion looks at Landon’s background and his interest in sociology and economics – he notes that by working with the Linden Dollar and the Lab’s transactional services he’s learned a lot about economics – and touches on the Lab’s own studies with users.

On this latter point, Landon makes it clear that the Lab does not conduct direct social experiments on users, but obviously does monitor the use of services and capabilities such as the user on-boarding process, games like PaleoQuest, etc., to see how they are being used, where points of weakness lie which might be improved, what kind of metrics are being generated, and so on.

In terms of general SL trends, he makes the point of noting that – and contrary to claims otherwise – the Lab has seen a “considerable strengthening” of the Second Life economy over the last six months, probably sponsored in part by the arrival of Bento, which the Lab is obviously pleased to see.

This moves into a broader chat about the evolution of things like mesh and breedables and how they helped grow Second Life, and the way in which the Lab cannot always anticipate how new features will be used – but does try to adapt to how users take them on and start using them.

Using Amazon Services

One of Landon’s responsibilities has been to oversee and drive the evolution and enhancement of these supporting services and the infrastructure which supports them and Second Life. Most recently, this has included moving various services into the Amazon cloud.

The Lab has been a long-time user of Amazon services, and this current work not only involves moving services to Amazon, but also moving them to a container model, making them easier to test and deploy, whilst leveraging the flexibility offered by cloud-based services. The benefits include reducing the complexity of having to manage a dedicated data centre environment to run the services, and the complexities of having to manage capacity, plan ahead for growth, and handle the purchase, delivery, installation and testing of new hardware in order to meet specific demands (as the cloud provider can “simply” turn on additional servers and facilities as they are required, and add them to the current billing).

Right now, the intention is not to reduce costs per se in making the move – Landon rather describes the Lab as trying to break even – but is rather geared to leveraging AWS (and ECS?) and thus doing more, infrastructure-wise, with the money the Lab has coming in.

Lab Working Environment

While he is based in Virginia, Landon spends a good deal of time at the Lab’s head office in San Francisco, and notes that while the Lab operates a number of offices – Virginia, Boston, Seattle, San Francisco – a lot of people actually work from home, and the Lab has a relaxed approach to office-based work requirements: if it is possible to work from home and be more productive in doing so, there is no problem with doing this.

From his personal perspective, Landon views the Lab as the best place he has ever worked, describing his colleagues as “an amazing group of intelligent, passionate people”. Like others at the Lab have said, it is also a place where he tends to learn something every day, whether about technology, how SL is being used by the residents or about people.

This topic touches on the Lab’s history, going back to the late 1990s and attempts to build a VR / haptics system (aka “The Rig“).

General Q&A

What is being done to Improve Platform Stability and Performance?

Landon: We’re always working on these problems. One of the things that is frustrating for residents  – and it’s frustrating for me too – is that lag and crashing out seems to be like a perennial problem. And it is, but the reason it’s a problem is that it’s never, ever just one thing. It’s a near-infinite number of issues and problems, and we’re always working on trying to smooth those things out and reduce them, but it’s always ongoing work. And we’re always trying to balance being able to do new features versus performance improvements and stabilisation work, and I think we strike a pretty good balance there …

… This is going to come dangerously close to sounding like I’m blaming the residents for some of this stuff – and I’m not. But I think … it’s a very creative and expressive place, Second Life, and we really like people to be able to express themselves in whatever way possible – and within the confines of the law, at least! But that also means that the complexity of whatever it is that you’re doing, whether it’s in your region or in your parcel or on your avatar, can impact the people around you. And so we’re trying to strike this balance of how can you express yourself without negatively impacting the people around you. And I think [Jelly Dolls] were a pretty good solution. And it also had the added benefit of feedback to the users, “Hey! Your avatar looks great, but maybe you should tone it down a bit.”

Why Can We Have An Unlimited Inventory But Only 60 Groups?

Landon: Inventory is relatively cheap, you’re talking about a very small amount of storage when you have something sitting in inventory, and probably more importantly in the context of this question is the inventory doesn’t necessarily have to interact with other pieces of inventory. So you can pretty much just add anything in your inventory without bound, and UI problems notwithstanding, it doesn’t really have any negative impact on your experience and it certainly doesn’t impact anyone else.

When you’re talking about groups, you have this exponential impact on performance with the number of people who you’re adding into the group [particularly all the Group data which needs to follow you around SL so you can receive group notices, remain part of a group chat, etc.] … I think that’s the kind-of short and long of it. [Groups] have an impact on you and the people around you.

What is the Number One Cause of Lag, and Will Improved Server Hardware Improve SL?

Landon: We’re always beefing up the hardware we’re using, and I can tell you the hardware is not a big factor at all in terms of lag. And this is going to be a really unsatisfying answer, but I can tell you that in my experience the single greatest contributor to lag is the network between you and wherever the server is. So if you are physically far away from the server, you’re going to have a much more laggy experience. Most of our equipment – I dare say all of our equipment – is in North America, and the west coast of North America at that. So if you’re in South America, you’re going to have more lag than someone sitting in Seattle, Washington. Likewise people who are in Europe and Africa are going to have a more laggy experience than people in North America.

… This is where I’m really going to get into trouble, because I don’t want to come out here and make a bunch of promises, because the things that I’m talking about are going to take probably years to do. But one of the things I absolutely have in the back of my mind is that once we get Second Life fully functioning on cloud services there is the possibility – and I will stress “possibility” – but there is the possibility we can co-locate regions more easily in other parts of the world, in South America or in Europe or in East Asia or Australia. And that would make the experiences for the people who are in those regions a lot better. The flip side to that is, if I’m moving the simulation closer to you and further away from somebody else, you’re making the lag worse for someone else.

… We did some analysis several years ago, regarding this. And what we saw was not a lot of geographic affinity for regions. One of the amazing things about Second Life is that people from all over the world come together and talk and get to know one another and chat and experience Second Life together, and there’s not a lot of geographic affinity. There are a few notable exceptions to that, and I think language is one of those things; I think one of the exceptions is people who speak Portuguese, and they tend to almost exclusively come from Brazil. So we can say that if you have a region that caters to, or is attractive to, Portuguese speakers, we would probably want to co-locate that region in Brazil.

This is just really stuff that we’re thinking about; there’s no hard plan to do any of this. I think we’ve got a lot of work to do before we can even consider doing something like that, but I’ve absolutely got that in the back of my mind.

Would LL Ever Consider Adding Any of the Reliable Language Translation Tools Back into the Viewer?

Landon: For what it’s worth, I’ve actually looked into some of that. I mean … there’s just some amazing tools that are becoming available now using AI machine learning, and I’m really interested in doing some things along those lines. That said, no promises, no commitments; I don’t control the product direction, so I’m looking at it just out of more-or-less professional curiosity and not something I’m actually planning on implementing.

But I think, to try to answer your question as best I can, I think it’s getting easier and easier to put translation and text-to-speech and speech-to-text services into your products, and I would hope that we get back to doing some of that – but no promises and no commitments, and I don’t control it anyway … I don’t make that call.