Server Deployments Week 10
A full set of server deployments this week.
On Tuesday March 5th, the Second Life Server (SLS) channel received the server maintenance project that was deployed to all three RC channels in week 9. This update only contains a fix to a single crash mode.
On Wednesday March 6th, the three Release Candidate channels should receive the following code deployments:
- BlueSteel and LeTigre: a new server maintenance project, which fixes a fairly common crash mode, together with Baker Linden’s large (as in file size) object rezzing project aimed at improving simulator performance (see below)
- Magnum: a new server maintenance project, which includes a mix of bug fixes and stability improvements. Specific fixes mentioned in the release notes are:
- BUG-1612: region owners and estate managers finding they are unable to teleport back to their region after disabling direct teleports to the region
- SVC-8019: region visibility delays following region restarts. This may help with the problem of diagonally adjacent regions failing to render
- VWR-786: if a friend does not have the ‘See my online status’ permission, they will now see a “User is not online ..” message following an IM or inventory offer.
Large Object Rezzing Project
Baker Linden has been looking to improve how objects with large file sizes are handled by the simulator software when being rezzed. He describes the work thus, “What I’ve been working on is hopefully significantly decreasing lag spikes when rezzing large, complex objects [such as those with lots of scripts]. Large does not necessarily imply size, but size of the files being read. When an object is rezzing, we have to parse the object / mesh files and create our in-world objects with that data.”
Until now, the reading and parsing of any files related to objects which require rezzing has been done on the main thread. When several such objects require rezzing at the same time, the simulator stalls. Baker has been moving the reading / parsing operation to a background thread in the expectation that this will prevent the simulator from being choked.
The key point about this work is that it is specifically aimed at preventing the simulator processes from choking and a region stalling when there are a number of large object files being read / parsed, not at actually “speeding up” the physical rezzing process. As such, it is unlikely that objects will appear any faster in people’s in-world view as a result of this work. However, what it does mean is that the simulator code will be better able to handle rezzing multiple “large file” objects without the attendant region lagging which can occur as a result of the simulator being unable to process messages from viewers and other simulators, etc.
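To illustrate the general technique being described, here is a minimal sketch in Python of moving file parsing off the main loop and onto a background thread. All names here (`RezQueue`, `parse_object_file`) are hypothetical stand-ins for illustration; the actual simulator is not written this way and its internals are not public.

```python
import queue
import threading

def parse_object_file(data):
    """Hypothetical stand-in for the expensive object/mesh file parsing."""
    return {"size": len(data), "parsed": True}

class RezQueue:
    """Offloads object-file parsing to a background thread so the main
    simulation loop is never blocked by large reads / parses."""

    def __init__(self):
        self._pending = queue.Queue()
        self._ready = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, object_data):
        # Called from the main thread: cheap, never blocks on parsing.
        self._pending.put(object_data)

    def _run(self):
        # Background thread: does the slow work one object at a time.
        while True:
            data = self._pending.get()
            if data is None:  # shutdown sentinel
                break
            self._ready.put(parse_object_file(data))

    def poll_ready(self):
        # Main loop drains already-parsed objects each tick, without waiting.
        done = []
        while True:
            try:
                done.append(self._ready.get_nowait())
            except queue.Empty:
                return done

    def shutdown(self):
        self._pending.put(None)
        self._worker.join()
```

The point, as in Baker's description, is not that parsing gets faster, but that the main loop only ever pays the cost of a quick queue check rather than stalling on the parse itself.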
Materials Processing
In my last update on this work, I reported that the Lab believed they had one more issue to resolve with the materials processing project, after which the way should be clear for a project viewer to be made publicly available. At the time, it wasn’t clear exactly what the problem might be. However, on Monday March 4th, I was able to ask Oz about the problem, and it appears that it is with the project viewer itself.
“We’ve got a viewer, but it’s so crashy, and the crashes are mostly in material property editing, that I don’t want to distribute it yet…. I’m concerned that doing so would result in a lot of broken content lying around,” Oz informed me.

I asked Oz if the crash problems were related to physically applying maps to objects and / or object faces. He confirmed that this is indeed the case – and that the latest (non-public) version of the project viewer can crash if even the parameters for maps applied to an object / object face are modified. However, he went on to say, “Hopefully we’ll get the worst of the crashes dealt with soon, and then we can start giving it to a wider audience. We’ve already solved a bunch of them, but it’s not quite ready for even open alpha testing.”
So, for those who commented on the lack of any update following my last SL project update from week 9, I’m afraid the situation still appears to be one of, “Hurry up and wait.”
Server-side Baking Z-offset “fix”
An update to add the ability to adjust an avatar’s height was unexpectedly added to the SSB Sunshine project viewer at the end of week 9, a function deemed to be necessary given that the new SSB server-side code effectively stops the popular “z-offset” capability found in most TPVs from working (see my report here).
While this allows avatar height to be adjusted, it has issues when it comes to other uses to which the z-offset capability can be put, as outlined in SUN-38. Commenting on feedback on the solution as presented while chairing the Content Creation User Group meeting on Monday March 4th, Nyx Linden stated, “Indeed, the feature as it was presented to me initially was for micro/macro avatars, and avatars wearing shoes that threw off our height calculation, so the wearable parameter should be useful for those cases. We’ve been discussing the animations case and its impact, but I don’t have an update right now.”

Whether this means we’ll see further development of a broader solution, or even some collaboration with users in seeking one, remains to be seen.
Aditi Issues

The remaining issues with Aditi, wherein an inventory hardware issue was affecting some 40% of those with active Aditi accounts and preventing them from logging in (myself included, using my main account), appear to have been cleared. The issue with getting Aditi’s inventory to sync with your main grid inventory has also hopefully been improved, as per my week 9 update; however, Nyx Linden indicated at the Content Creation meeting on Monday March 4th that there is more work to be done in this area.
Examining Avatar Rendering Improvements
Simon Linden is working on an experimental project designed to improve the user experience in crowded locations. Or as he expressed it at the Simulator User Group meeting on Tuesday March 5th, “I’ve been working on some combined viewer and simulator code to try to prevent viewers from falling apart when working in crowded areas such as clubs.”
In expanding upon the idea, he went on to state:
Right now, it basically tries to draw everyone, and if that’s too much, backs off using imposters … That first part, trying to draw everyone, is a problem and a lot of people don’t get past that as they crash or get horrible performance. Even with imposters, the viewer has to render the AV, getting shape and texture data.
This idea … again, only experimental … basically says you’ll probably want to first see those nearest, your friends and maybe anyone you’re in an IM chat with. If performance is OK, draw more. It needs a bunch of work to see how it feels and what the issues will be, but if you’re on a cheap laptop that can only handle 10 avatars, you should still be able to go to a club with 50 people around.
Simon went on to cover the complexities of avatar rendering and the costs involved. Currently, the work is entirely viewer-side, which means the viewer has no means of calculating the rendering cost of any given avatar until it has gathered all the data and attempted to render it, which immediately places a load on the computer. Even generating an avatar imposter requires the viewer to draw the avatar once.
The idea Simon is trying to formulate is to have some method by which the rendering cost information could be shared between the region and the viewer, and then to enhance the viewer to better utilise the capabilities of the machine on which it is running, while simultaneously avoiding the complications of having “hard” limits (as is often the case with scripts and script monitoring). As such, he describes his ideal solution as:
Imagine you drop into a crowded area, and before you draw anything you could find out how expensive each AV is to draw. The most basic idea is just having an upper limit … if it’s too much, don’t draw it. However, if you’re into photography, working with a model on an empty region, you’ll want to draw everyone at high res … All this has to tie into your own settings and computer abilities. If you have a hot machine, you want to draw everything with all the graphics extras.
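The approach Simon describes can be sketched in a few lines of Python: given a per-avatar rendering cost (which is exactly the information the viewer currently lacks until it has already tried to draw the avatar), prioritise friends and IM partners, then everyone else nearest-first, and render fully only while a cost budget lasts. Everything here (the `Avatar` fields, the cost units, the budget) is a hypothetical illustration, not the actual viewer design.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    name: str
    distance: float      # metres from the camera
    render_cost: float   # hypothetical per-avatar cost supplied by the region
    is_friend: bool = False
    in_im: bool = False

def select_avatars_to_render(avatars, budget):
    """Pick which avatars get a full render under a cost budget.

    Priority follows the idea in the quote: friends and IM partners
    first, then everyone else nearest-first.  Anything over budget is
    returned in the imposter list rather than being skipped outright.
    """
    def priority(av):
        # False sorts before True, so friends / IM partners jump the queue.
        return (not (av.is_friend or av.in_im), av.distance)

    full, imposters = [], []
    spent = 0.0
    for av in sorted(avatars, key=priority):
        if spent + av.render_cost <= budget:
            full.append(av.name)
            spent += av.render_cost
        else:
            imposters.append(av.name)
    return full, imposters
```

A machine with a generous budget simply renders everyone in full, which matches the “hot machine draws everything” end of the spectrum; a cheap laptop gets a small budget and sees imposters for the rest.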
It is far from clear where Simon’s work will lead – as he repeatedly stated, it is currently experimental and may take a good while before it leads to anything substantive – assuming it gets that far.
Materials: Well, it’s crashy, but the important thing is there are materials, and it’s almost ready. It’s just a waiting game now. Bugs are interesting; sometimes a single bug is solved, and it takes out more than one bug at once. Other times, it can be interesting…
are we there yet? are we there yet? are we there yet? lol
“Quiet in the back, or so help me, I’m turning the car around and we’re going home right now!” 🙂
The thing is, all the technology that makes up “Materials” has been around in CGI for a long time, some of it even before SL. But it has been high-end rendering stuff measured in seconds per frame, not the high-speed games rendering. So I’m not hugely surprised that it is taking longer to sort out than was expected.
And, well, does anyone at Linden Lab have the knowledge to coordinate things, to see past the current piece of shiny and think about how the data structures affect the future options? There’s a trace of that in the problem with height offsets and SSB. Who would have expected a change in how baked textures are made to affect the avatar position? I get the feeling that this very common TPV feature uses some trick the Lindens didn’t know about. Are they taking open source seriously; are they listening? (That is not quite the same thing as using the code.)
This Materials viewer and its bugs: does it need a different pair of eyes on the code, and how do they find somebody who knows what they are doing?
It could simply be the state of the code and a lack of comments / structure, coupled with the loss of those who originally coded the server-side over the years.
When initially discussing his further work on HTTP back at the start of the year, Monty Linden indicated that he’d like to see the UDP code removed from both server and viewer once the new HTTP services were seen to be running smoothly. However, in stating this, he went on to comment that on the server-side of things, “The UDP assumption is baked into a lot of failover paths and hidden mechanisms, it’s all over the place,” which makes him nervous about removing things “blindly”.
To my admittedly untutored eyes, his comments suggest the server code is something of a plate of spaghetti at the best of times. So for me, the question is how many eyes are in fact available at LL to go over the code in detail, and what do they have by way of documentation to assist them in at least getting some idea of where issues may arise when making deep changes to the code (and how much pressure are they under to get things to a point where they are ready to go out the door, particularly given the philosophy that new things are going to break anyway)?
On the viewer side, I suspect much the same as you – TPVs know the code far more intimately than LL – and in some cases have been working with it far longer than people at LL – and as such, perhaps Linden Lab should be listening a lot more when TPV devs do speak up with clear concerns which can be pinpointed to the potential for “breakage” or other issues. As you say, listening is a far different activity to “having” to use any code presented to them. That said, and somewhat ironically, most of the viewer-side of the code for materials is actually being worked on by TPV developers seconded into the project – so again, who can actually tell, other than those at the sharp end of things?
Inara, you wrote, in reply to Wolf:
“On the viewer side, I suspect much the same as you – TPVs know the code far more intimately than LL – and in some cases have been working with it far longer than people at LL – and as such, perhaps Linden Lab should be listening a lot more when TPV devs do speak up with clear concerns which can be pinpointed to the potential for “breakage” or other issues. As you say, listening is a far different activity to “having” to use any code presented to them.”
This is precisely the point I have been making in the server Blog for the last week, being shouted down by others who (I presumed) know more than I. It is long past the time for Linden Lab to be playing the haughty “It’s our ball” game with the TPV devs.
I don’t think it is a case of LL being “haughty” per se.
A recent comment was passed on the opensource-dev mailing list which perhaps best illustrates the issue:
What I’ve come to understand, and I accept as I can see the logic behind it, is that open source is not the same as open decision making, nor is it the same as allowing others to make the decisions. It simply means the code is available under a permissive license for others – including us – to review, comment on, modify, and compile for ourselves. Let me re-iterate: open source is NOT community maintenance. LL has never implied or pretended that they’ve set up a community maintenance program – and I think they’d have major problems if they tried.
At times, it does seem as if “open-source” is taken to mean everyone has equal say in what goes into the viewer and how things should be done – and an awful lot of people do get very upset (and often quite rude) when X, Y, or Z doesn’t make it into the core viewer code from LL. At the end of the day, and whether we appreciate it or not, SL is LL’s product, and one that (perhaps precariously) balances between “open source” (the viewer) and “closed source” (the proprietary server code), and so at times there may well be very valid reasons why LL say “no” to X, Y or Z.
The flip side is, of course, that “not having to accept all code contributions and / or suggestions” doesn’t mean that LL should stick their fingers in their ears and carry on regardless when very positive points of concern are being raised. They’ve actually got a heck of a lot better at not sticking their fingers in their ears, but it unfortunately does still happen.
The problem here, perhaps, is giving LL further encouragement not to do so; and, dare I say, people cutting back on rude responses (and the personal insults that can be directed at some LL staff) could be a starting-point (and I hasten to add that I don’t in any way direct this comment towards you; it’s a general observation on what I’ve seen in the forums, in meetings, in blog comments and elsewhere).
I never felt the need to get rude. It is in a huge sense counterproductive. I do think highly negative feedback, in a word, ‘cauterizes’ the process entirely. It’s hard to take someone’s feelings and ideas into consideration while they curse and toss rocks (metaphorically speaking). I also understand that it sometimes feels as if no one is listening, while I am sure in most cases not being heard isn’t the issue. I think the barrier is in language, as in ‘What are we allowed to say?’, or ‘We can’t share that right now.’, or other, like, scenarios.
Agreed on both points. The art is finding the middle road, for both parties.