A Shining announcement: major improvements coming to SL

Yesterday Linden Lab announced a major series of new initiatives aimed at improving the overall SL experience. The announcement came via a Tools and Technology blog post, which covers the initiatives in detail. They focus on four main areas of activity: one is directly related to hardware and infrastructure, while the remaining three are focused on the platform itself and are grouped under the Shining project banner.

The hardware / infrastructure element of the work is described thus:

This year, Linden Lab is making the single largest capital investment in new server hardware upgrades in the history of the company. This new hardware will give residents better performance and more reliability. Additionally, we are converting from three co-locations to two co-locations. This will significantly reduce our inter-co-location latency and further enhance simulator performance.

The Shining project is something that is already known to many SL users – especially those who attend some of the User Group meetings. It is perhaps most famously associated with the Lab’s work on the Viewer rendering code, removing outdated functions and calls no longer supported in modern graphics systems (most notably Nvidia) and improving graphics handling overall. Shining has also been responsible for other incremental improvements to issues around streaming objects and avatars.

Under the new initiative, Shining is split into three core performance projects.

Bake fail: a familiar problem for many

Project Sunshine: One of the biggest complaints from users in SL relates to avatar rezzing. This can appear slow, and usually manifests in avatars remaining grey for periods of time, or in skin and system clothes remaining blurry (see right). At its worst, it can result in a user changing their avatar’s outfit while others see the avatar still dressed in the previous outfit, or naked. Collectively, these issues are known as “bake fail”, and they are the result of the Viewer having to do all the compositing of avatar textures locally, then sending the results to the SL servers, which in turn send the information back to the simulator the avatar is in, where it can be accessed by other Viewers in the same simulator.

Under Project Sunshine, to précis the blog post, much of this work is moved server-side, using a new, dedicated server, the Texture Compositing Server, which is separate from the simulator servers. This effectively allows all the “heavy” communications and calculation work relating to avatar textures to be performed within LL’s servers and across their own internal network, removing the reliance upon the Viewer and on Viewer / server communications, which are outside of LL’s control.
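To make the difference concrete, here is a toy sketch (all names invented, not LL’s actual code or API) contrasting the two models: in the old one the Viewer bakes locally and the result crosses the open internet twice; in the Sunshine-style model the Viewer only sends an outfit description, and the heavy baked data stays inside LL’s network.

```python
def composite(layers):
    """Stand-in for texture baking: merge layers bottom-to-top,
    with later layers (clothing) drawn over earlier ones (skin)."""
    baked = {}
    for layer in layers:
        baked.update(layer)
    return baked

def bake_client_side(layers):
    """Old model: the Viewer bakes locally, uploads the result, and the
    simulator relays it to other Viewers -- two hops outside LL's control."""
    baked = composite(layers)  # heavy work done on the user's machine
    hops = ["viewer -> sim (upload of baked texture)",
            "sim -> other viewers"]
    return baked, hops

def bake_server_side(layers):
    """Sunshine-style model: the Viewer sends only the outfit description;
    a dedicated compositing server bakes it inside LL's own network."""
    baked = composite(layers)  # same work, now done server-side
    hops = ["viewer -> sim (outfit description only)"]
    return baked, hops

skin = {"body": "tan"}
shirt = {"torso": "red shirt"}
baked, hops = bake_server_side([skin, shirt])
print(baked)  # {'body': 'tan', 'torso': 'red shirt'}
print(hops)   # ['viewer -> sim (outfit description only)']
```

The point of the sketch is simply that the compositing work itself is unchanged; what changes is where it runs and how much data has to travel over links LL does not control.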

Object Caching & Interest Lists: This is intended to directly address another common request from users: improving how the Viewer handles local object caching. This effectively means that once the Viewer has information relating to a specific region, and provided that information is still valid (i.e. there have been no changes to objects the Viewer already has cached), it will no longer need to re-obtain it from the server; only “new” or “changed” data needs to be streamed to the Viewer. On entering a previously visited region, the Viewer should therefore be able to start rendering the scene immediately (rather than requesting a download from the server), while simultaneously requesting any “updates” from the server through a comparison of UUID information and timestamps.
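A minimal sketch of that cache-validation logic (names and data shapes are my own invention, not the actual protocol): keep cached objects keyed by UUID with a timestamp, render from cache at once, and ask the server only for entries that are new or have changed.

```python
def updates_needed(cached, server_index):
    """cached / server_index map object UUID -> last-modified timestamp.
    Returns the UUIDs the Viewer must (re)download from the server."""
    stale = []
    for uuid, stamp in server_index.items():
        # fetch anything never seen before, or whose timestamp has moved on
        if uuid not in cached or cached[uuid] < stamp:
            stale.append(uuid)
    return stale

# Viewer's cache from a previous visit vs. the region's current index
cache = {"tree-01": 100, "house-07": 250}
index = {"tree-01": 100, "house-07": 300, "bench-02": 50}
print(updates_needed(cache, index))  # ['house-07', 'bench-02']
```

Everything not in that list ("tree-01" here) renders straight from the local cache, which is where the hoped-for speed-up on revisiting a region would come from.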

HTTP Library: The final aspect of Shining’s three-phase approach is to improve the underpinning HTTP messaging that is crucial to simulator / simulator and simulator / Viewer communications (and thus key to the other elements of Shining), through the implementation of “modern best practices in messaging, connection management, and error recovery”.

Overall, Shining will be tackling some of the major causes of Viewer-side lag and user frustration in dealing with avatar bake fail and the complexity and wastefulness of scene rendering that is encountered when moving around SL.

No definitive time frames for the improvements have been forthcoming with the announcement – and this is understandable; there’s a lot to be done, and matters are complex enough that LL will want to proceed with minimal disruption to the grid and to users. Doubtless, more information will be made available as it becomes known, through the LL forums and (perhaps more particularly) via the relevant User Groups.

Metareality discusses the “RedPoly” approach to mesh deformation


Today’s Metareality podcast covers, as usual, a lot of topics, including Cloud Party and, more particularly, the possible alternative approach to mesh deformation proposed (or possibly re-proposed, given LL apparently looked at the same idea last year) by RedPoly, which I covered in an earlier report this week.

The panel for this week’s show comprised Kimberly Winnington (Gianna Borgnine in-world) and Karl Stiefvater (Qarl Fizz in-world), who were joined by Cyclic Gearz and Geenz Spad.

While you can hear the broadcast in full over at Metareality, here’s a transcript of the discussion around the alternative means of mesh deformation.

[02:47] Gianna Borgnine (GB): So what is this new deformer, and how is it different? … From what I understand it works on bone definitions, is that right?

Geenz Spad

[03:07] Geenz Spad (GS): Well, basically yes, it uses several unused bones in the avatar skeleton … I’m guessing they were used at some point to calculate the bounding box of the avatar on the server for collisions or similar. So, that’s mostly what it seems to be right now.

[03:30] Qarl Fizz (QF): I can probably add some more, but I should also specify that this is complete speculation because I haven’t had a chance to dig in … It seems like, yes, for the purposes of physics and maybe other stuff, at one point the Lindens had this approximation system put in so that when you dial your avatar sliders around, they have a basic gist of what your avatar looks like. And someone came up with the idea of using this information to do the deformation instead of the actual morphs themselves.

[04:10] GB: So, Cyclic, maybe you could answer this best: what about this is so appealing to the content creators?

[04:15] Cyclic Gearz (CG): Well, from my perspective, well, I make furniture mostly, but I still know a lot about design and stuff …  And all my designer-friends who make clothes … and part of the most difficult and annoying process is having to make five separate sizes currently, because at the moment that’s the best option for attracting the most customers – having more sizes that fit more bodies – if they have a deformer that works as is, and they do the work outside of Second Life, it reduces the workflow, it reduces the time to make new things; [it] means that they can get more stuff out and therefore more customers are happy.

[04:55] GB: So my guess is, I mean I talked to a few different people and got a few different opinions, and it was interesting to see the different sides and probably the only person I talked to … who wasn’t as thrilled about it, other than some of the developers I talked to, was Maxwell Graf, who is always looking to get rid of extra sizes, so I thought he would be excited! But for him, one of the big things was that it still felt like so much extra work because now he’s back to weight painting, which is something he was trying to get away from with Qarl’s deformer … But the thing that, as a person who does not make mesh fashions … Right now at least, you’re sort-of weight painting, but you’re painting blind, because you have to upload it to see the effects of what you did. Is that right?

Avastar in use

[06:06] CG: Sometimes; it depends on how you make your mesh. For instance, with Blender you can get a plugin which you can pay for, called Avastar, by Gaia Clary. That is a really good way of seeing what your weight painting does and what effect it has. You can also get a free blend file for Blender which is called The Avatar Workbench, also from Gaia Clary, where it has got all the bones and stuff and you can see what it’s supposed to look like. But you do sort-of have to guess … if you’re not versed in mesh or anything like that, and weight painting at all, it can be quite daunting. So from that perspective, not having to weight paint would be better for newer creators, because they’d be able to build something in Blender or a different commercial program and not have to weight paint, because that is really horrible stuff! But … I do think people need to learn these skills, because the skills you learn for making 3D in Second Life can be applied in real life for big jobs … you could go into the games industry making models and stuff; but if you can’t weight paint, you’re out of luck!

[07:22] GS: Personally, I used to be an artist before I was a programmer, and 3D animation was something I was always very interested in, and I definitely know the pains of having to go through and paint a variety of different vertex weights for different bones and things like that. And one thing that seemed interesting to me about the new approach to a deformer that works across all viewers that support mesh is that … you have 20-something bones you currently have to rig if you really want something that really looks good and really deforms well on most avatars with regards to just an avatar moving around; now you have all these additional bones you have to worry about. That really seems to be the biggest drawback here. Granted, there are ways to mitigate this, and as I was saying on Monday, someone should find a better workflow for this if it’s really going to be a viable solution.

[08:19] GB: Which made you really unpopular…!

[08:23] GS:  (Wryly) yes, because I’m a terrible person for suggesting something rational here, I guess!

[08:49] QF: So, I don’t know actually how this works, so maybe you can help me, Geenz. So, what I said is true, right? These are like pseudo joints that the visual params modify to kinda …

[09:09] GS: … Kind-of get an idea of how big the collision capsule server-side should be – that’s what I’m guessing, you know? I could be wrong.

[09:12] QF: But you can’t visualise these in Blender at all, can you?

[09:18] GS: You pretty much have to manually add them currently.

[09:20] QF: So there’s no good way to … like Cyclic was saying, painting weights is hard, but you’re painting weights for … totally blind, right?

[09:33] GS: The worst part here is … there’s no guarantee that these will actually stick around in future versions of Second Life. I mean for all we know, after RedPoly outing it, Linden Lab may remove it in X number of months, or they may keep it just because they’re afraid people have begun making content – and we know Linden Lab’s policy on content breakage – so it’s either they’re going to break it now, or they’re not going to break it because people are going to make content with it. Danger of content breakage, here we go!

[10:10] GB: Well, Linden Lab is going to have to weigh-in at some point, because as it stands right now, it doesn’t deform around breasts or saddlebags or anything, so they would have to add that in order to make it work right, right?

[10:23] GS: And on top of that, from what I can tell, the skeleton that’s being used is mostly just a rough approximation of the avatar itself in terms of its shape. That’s all you’re really going to need if you’re going to calculate a bounding box or a bounding capsule or something like that.
