On Monday, May 13th, 2013, Troy and Nyx Linden appeared on a segment of Designing Worlds to discuss Server-side Baking / Appearance (SSB/A), alongside Brooke and Oz Linden, who were there to discuss Materials Processing.
Troy Linden is a Senior Producer at Linden Lab, who has been working on high-level server-side baking, and Nyx Linden is a Senior Software Engineer at the Lab, who has been working with the technical aspects of SSB/A and has been very much the public face of the project. Together, they answered a series of questions on the project put to them on behalf of users (the questions having been requested in advance of the show being recorded) by the Designing Worlds hosts, Saffia Widdershins and Elrik Merlin.
The following is a summary of the questions asked and answers given.
Saffia Widdershins (SW): Let’s start with the basics: what is baking, and how is it being handled now?
Troy Linden (TL): Baking is a process where we take all the information that determines how your avatar looks, and we combine it to deliver a finished avatar. Currently, your computer – the individual’s computer – handles all of the processing involved in determining your avatar’s appearance, and it sends the result back to our servers. So it’s a pretty involved process, and it takes a fair amount of time to do all that.
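Conceptually, “baking” means compositing the avatar’s layered textures (skin, tattoo, clothing layers and so on) into one finished texture. The sketch below illustrates that general idea in Python; the layer names and the simple “over” alpha blend are illustrative assumptions, not Linden Lab’s actual pipeline:

```python
# Illustrative sketch of texture "baking": compositing RGBA layers
# (skin, then clothing) into one final texture using the standard
# "over" alpha-blend. Pixels are (r, g, b, a) tuples, channels 0-255.
# This is a generic example, not Second Life's real baking code.

def blend_pixel(top, bottom):
    """Composite one pixel of `top` over `bottom` ('over' operator)."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    alpha = ta / 255.0
    rgb = tuple(round(t * alpha + b * (1 - alpha))
                for t, b in zip((tr, tg, tb), (br, bg, bb)))
    return rgb + (round(ta + ba * (1 - alpha)),)

def bake(layers):
    """Flatten a bottom-to-top list of equally sized layers into one texture."""
    baked = layers[0]
    for layer in layers[1:]:
        baked = [blend_pixel(t, b) for t, b in zip(layer, baked)]
    return baked

# One-pixel "textures": an opaque skin tone under a half-transparent shirt.
skin  = [(200, 150, 120, 255)]
shirt = [(30, 60, 200, 128)]
print(bake([skin, shirt])[0])
```

Under SSB/A this compositing work moves from each user’s machine to a Linden Lab service, and the viewer just downloads the finished result.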
SW: So how is that going to be changed in the future … and will it simplify it?
TL: Server-side baking is our new system. It’s where we actually stand up a new service that will handle all of the baking process on our end. And what this actually does is it takes the load away from your computer, the individual user’s computer, and the results are a faster, more consistent experience during the whole baking process in Second Life.
Elrik Merlin (EM): Just to be clear about this … in the new system, what will be handled by the server and what will be handled by the viewer, exactly?
TL: The new viewer will send the server all the avatar data, and the server will do all the calculations required. Your viewer will then download the results [of the baking process] over a much faster HTTP connection.
EM: So that’s the basics of how it works, so to speak; how would you summarise the benefits to users?
TL: Well, simply put, it’s a much faster, more reliable avatar rendering experience. So hopefully you’ll see fewer avatars stuck in their clouded state, and fewer stuck untextured. They’ll actually appear the way the user intended much more quickly.
SW: So it will be an end to that problem where you half-rez but, (laughs) your make-up is blurred so you look as though you’ve been having a really heavy night!
TL: (Laughing) That’s the plan. We’re actually seeing some great results so far, so we’re very excited.
SW: Are there likely to be any downsides? There will be less impact on peoples’ machines, is that what you’re saying, or could there be more?
Nyx Linden (NL): The one downside to the new system is, because it is such a big change from how we have done things in the past, everyone is going to have to update their viewer. It will be a mandatory update. Users who don’t update will start to see even more avatars fail to load. Fortunately, we have released the viewer that people need to download, and users of any actively maintained third-party viewer should be able to download an update presently as well. As long as users do update, they won’t see any downsides.
EM: This is obviously nearing completion and we’re nearing implementation. Can you tell us a little about where the project is, what its current status is, and what the time scales for introduction are going to be?
NL: Absolutely! So, we’re in a multi-stage release; at this point we have our first viewer out the door. So the next stage is that we’re going to be standing up the service that is going to be doing all the work for rezzing your avatar. Over time we will slowly roll out the new system across the grid. That’s going to take some time, and we’re going to be following up through our blogs and forums to let people know how that process is going, but we want to take our time with that process, to make sure that everything is working as well as we think it is.
SW: So there’s actually going to be a period when both mechanisms are in use, so for example, one set of servers could be running the new system and another set running the old system.
SW: So how long do you think it’ll be before the switch-over is completed, or am I asking a “how long is a piece of string” question?
NL: We’re not sure. We’re going to start the roll-out process very slowly, and we’re going to be looking very carefully at our load numbers and making sure that the system we’re standing up will be able to handle everyone’s baking needs. So we’re going to be monitoring it as we are rolling it out to more and more servers, and then we’ll scale it up as quickly or as slowly as we need to, to make sure that there are no hiccups or problems.
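The staged, load-gated rollout Nyx describes can be pictured as enabling regions in batches only while projected load stays inside capacity. This is purely a conceptual sketch; the batch size, thresholds, and region names are invented for illustration and have nothing to do with Linden Lab’s actual deployment tooling:

```python
# Conceptual sketch of a gradual, monitored rollout: enable the new
# baking service on more regions only while the projected load stays
# within measured capacity. All numbers here are invented examples.

def roll_out(regions, capacity, load_per_region, batch=2):
    """Enable regions in batches, pausing if projected load would exceed capacity."""
    enabled = []
    for i in range(0, len(regions), batch):
        projected = (len(enabled) + batch) * load_per_region
        if projected > capacity:
            break  # hold the rollout here until capacity is scaled up
        enabled.extend(regions[i:i + batch])
    return enabled

regions = ["R1", "R2", "R3", "R4", "R5", "R6"]
print(roll_out(regions, capacity=50, load_per_region=10))
```

The point of the pattern is the pause condition: rollout speed is driven by observed headroom rather than a fixed schedule, which matches the “as quickly or as slowly as we need to” approach described above.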
EM: If you have both systems operating, what are going to be the results? Say you move from one region running one system to a region running the other or you look across from one region running one system into a region running the other? What will you actually experience?
NL: That’s actually a case we’ve been looking very closely at, and we’ve been doing a lot of testing around that. Fortunately the viewer we’ve released and the third-party viewers that have accepted our changes should be able to handle that just fine. The viewer will be able to load avatars using either the old appearance system or the new appearance system. The only thing users might see when they are transitioning from the old system to the new system is that their avatar might reload, but that’s a process which should happen automatically and should be very quick [the avatar will very briefly turn grey before textures reappear].
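The dual-pipeline behaviour described here amounts to a per-region capability check: the viewer asks which appearance protocol the region supports and loads avatars accordingly. The sketch below is only an illustration of that idea; the capability flag and function names are invented, not the actual viewer code:

```python
# Conceptual sketch: a viewer that supports both the old client-side
# baking protocol and the new server-side baking protocol, choosing
# per region. The "server_side_baking" flag is an invented example.

def load_avatar(region_caps, avatar_id):
    """Pick the appearance pipeline based on what the region advertises."""
    if region_caps.get("server_side_baking"):
        # New path: fetch the pre-baked composite textures over HTTP.
        return f"fetched baked textures for {avatar_id} via HTTP"
    # Old path: composite the textures locally and upload the result.
    return f"baked {avatar_id} locally and uploaded textures"

print(load_avatar({"server_side_baking": True}, "avatar-123"))
print(load_avatar({}, "avatar-123"))
```

Because the selection happens per region, an avatar crossing from an old-system region to a new-system region simply re-runs the check, which is why the only visible effect is a brief reload.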
EM: So presumably you’ve got some fairly hefty testing and in-world scenarios that you’re using to check that the whole thing scales correctly from smaller experiments to use in a real live avatar environment. Right now, for example, if you have 60 avatars in a region watching a concert, some of them around you will be grey, others will have parts missing like hair or boots or whatever it is, especially if there’s a lot of textures involved. Is that something that you’re looking at?
NL: Absolutely. We’re definitely looking at that, and we’re also making sure that the service we’re standing up will be able to handle the load from avatars all across the grid, not just avatars localised in one region. And what we’re finding is that the process of resolving an avatar is not only quicker, but more reliable; so you should see fewer avatars in a clouded or grey state, even if you’re all clustered in the same region.
SW: Some people have mentioned concerns that, with the problems we see from time to time with the asset servers, putting more onto the server side could cause more problems on the grid. An example I was given is that at the moment, if your avatar isn’t rezzing, you can force a rebake: you can use the Advanced menu and just rebake textures, you can sometimes switch groups to kick a rebake, and there’s one I also find completely weird but which always works for me, which is that you switch the bald head that you’re wearing under your hair and you leap into visibility. Will there be a way to force a rebake if the Lab has a bad cache?
NL: Absolutely. In our viewer updates we have rewired the rebake avatar option in the menu to work with either the new or old system, whichever you’re using at the time.
EM: So if server-side baking affects avatars primarily, what’s going to be the effect on mesh avatars, things like animal avatars and petites, and what will happen with mesh-based clothing?
NL: The system we’re putting in place should not have a significant impact on any attachments that your avatar is using, including mesh-based attachments and mesh-based avatars. However, the system should allow you to load your base avatar more quickly, which will allow your attachments to start loading sooner.
SW: Now because clothing is becoming increasingly mesh-based rather than layer-based, do you think that SSB will continue to be useful as time goes by?
NL: I think it will, because as you appear in-world, the viewer will have to load your base-level avatar even if you’re not actively using it. Speeding up that process will mean that even mesh-based avatars will start the loading process sooner.
EM: We know that you’ve been working very closely with the makers of third-party viewers … what’s going to be the impact on third-party viewers? Presumably they’ll simply break if not updated, but how do you see this process going forward, and is everybody on-board with it?
NL: Absolutely. We’ve been working with all of our active third-party developers for months now, and they have had the code that they need to integrate and ship, and I believe that at this point nearly all of them already have updates out for people to update their viewer. The avatar loading system will break for everyone, even people using our viewers, if they don’t update. So we are going to start messaging pretty seriously to let people know that they need to update their viewer.
SW: I understand that this should help mobile clients like Lumiya, and those users with limited data mobile [cell] plans. So what lies ahead? Are we looking at some point in the future to fully cloud-based rendering where all the heavy tasks of drawing a scene are handled remotely, which I guess could pave the way for ‘phone or tablet-based real-time clients?
TL: Unfortunately, we haven’t announced any plans with regards to this, so we’re not in a position to answer right now.
SW: This has been very illuminating – I don’t just mean that last answer, but the whole discussion! (laughter)
TL: We’re very excited about this. This is something we’ve been working on here at the Lab for a while, and it’s finally coming to a head, we’re seeing the results that we were expecting and then some, so we’re happy!