Project Shining: what it means for the viewer

On the 29th June, Linden Lab announced Project Shining, aimed at improving avatar and object streaming speeds. At the TPV/Developer meeting on Friday 13th July, the project was discussed in terms of how the various elements within it will affect Second Life viewers.

The following is a summary of that discussion, based on the recording of the meeting, and focused primarily on the viewer changes / updates that will be most directly seen / felt by the majority of users.

HTTP Library

Commencing at 22:30 into the recording.

The aim of this project is to improve the underpinning HTTP messaging that is crucial to simulator-to-simulator and simulator-to-viewer communications. Monty Linden is leading this project.

Key points:

  • LL will release a project viewer containing a new “wrapper” implemented around how data is handled, together with a new texture fetch library (see the time frame comments at the end of this article)
  • Provided there are no major problems with the project viewer, the initial code release will move to a release version of the viewer
  • This will be followed by changes to group services and a “more ubiquitous” use of the library in the viewer – which is where Oz’s warning to TPV developers comes into play, as some services and their behaviours will start to change to improve throughput and reliability – and may even help improve the SL experience for those on older routers.

As a side note, some of this work has involved router testing aimed at determining what router hardware is compatible with Second Life. While it is hoped that work around the HTTP libraries will improve the SL experience for some using older router hardware as noted above, the tests have revealed that certain types of older router – Linksys WRT and Belkin G series routers were specifically named – are not compatible with running Second Life.

Avatar Baking

Bake fail: a familiar problem for many

Commencing at 32:38 into the recording.

The aim of this work (Project Sunshine) is to improve issues around avatar baking and to eliminate bake fail issues. It will primarily focus on moving the emphasis for the baking process from the viewer to a new Texture Compositing server. The viewer will retain some elements involved in avatar baking – the actual baking of the avatar shape (i.e. shape values and IDs) will still take place on the viewer side, for example.

Precisely how this new service will work on the server-side of things is yet to be fully determined by Linden Lab. However, work is progressing on the viewer side of the equation, with the current key points as follows:

  • The Current Outfit folder will be used to drive the new baking service
  • TPVs not currently supporting the Current Outfit folder will have to implement it, otherwise they will effectively fail on avatar baking
  • The basic process will be that when it is time to send a rebake request (e.g. after a user has finished editing their appearance) the viewer must send a new message to the baking service which effectively says, “Look at the contents of my Current Outfit folder and give me back a new appearance based on that”
  • Viewers in general will have to support this new message that is sent to the service, and change how they perform the fetching of avatar textures; for the technically inclined, this will be HTTP without UDP fallback.
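To make the flow above concrete, here is a minimal sketch in Python. The function and field names (`build_rebake_request`, `cof_version`, the shape of the response) are purely illustrative assumptions – Linden Lab has not yet published the actual API – but the shape of the exchange matches the description above: the viewer asks the baking service to composite a new appearance from the Current Outfit folder, then fetches the resulting textures over HTTP.

```python
# Hypothetical sketch only: the real message format and endpoint names
# have not been published by Linden Lab.

def build_rebake_request(agent_id: str, cof_version: int) -> dict:
    """Build the message that effectively says: 'look at the contents
    of my Current Outfit folder and give me back a new appearance
    based on that'."""
    return {
        "agent_id": agent_id,
        # A version stamp on the Current Outfit folder would guard
        # against baking from a stale outfit (an assumption here).
        "cof_version": cof_version,
    }

def handle_bake_response(response: dict) -> list:
    """Extract the baked texture IDs from the (hypothetical) response;
    the viewer would then fetch these over HTTP, with no UDP fallback."""
    return [tex["id"] for tex in response.get("textures", [])]
```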

Currently, the plan is for LL to integrate the new way of doing avatar baking into their viewer code, which will then be available for TPVs to integrate – although none of the Linden Lab 1.x code will be updated to support the new process, so this will effectively break their own Viewer 1.23.5, which is still in use within SL.

The viewer code will support both the “current” method of avatar baking (within the viewer itself) and the new baking service (using the Texture Compositing server) until the new service is fully rolled out across the grid. This means that if a user is in a region that does not make use of the new baking service, avatar baking will continue to be handled using the viewer-side mechanism we currently have. However, if the user is on a region that utilises the new baking service, avatar baking will be handled through that. The viewer will be able to recognise whether it is connected to a region supporting the “new” method through the region capabilities.
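In outline, the region-capability check described above might look something like the sketch below. The capability name used here (“AppearanceService”) is an assumption, as the real name has not been announced:

```python
# Illustrative only: the actual capability name for the new baking
# service has not been announced, so "AppearanceService" is a stand-in.

def uses_server_side_baking(region_caps: dict) -> bool:
    """During the transition, the viewer picks a baking path based on
    whether the region advertises the new service in its capabilities."""
    return "AppearanceService" in region_caps

def bake_method(region_caps: dict) -> str:
    """Return which baking mechanism the viewer should use for the
    region it is currently connected to."""
    return "server" if uses_server_side_baking(region_caps) else "viewer"
```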

In order to ensure as smooth a transition to the new baking process as possible, LL are proposing a relatively long lead-in to the new service, making the code available well ahead of the new service being enabled, allowing TPVs to integrate it into experimental builds. The server-side changes will initially be implemented on a number of beta grid regions for testing with viewers there, before being released onto the main grid in a controlled manner and scaled up from there.

What Does This Mean for Users?

If all goes according to plan, and provided that you keep up-to-date with releases of your preferred viewer, this actually shouldn’t mean very much in real terms. There are, however, a number of things to be aware of:

  • If you use a viewer that is not updated to use the new code (i.e. the official viewer 1.23.5, or a viewer that is not updated to use the Current Outfit folder and / or to support the new bake request message / HTTP texture fetch mechanism), OR you continue to use an old version of a viewer rather than updating, there will come a time when your avatar – and those around you – will not bake correctly
  • There are two issues that may occur during the transitional period when both the “current” and the “new” baking methods are in use:
    • When teleporting or crossing between regions that use the different methodologies, users will experience their avatar rebaking, as the viewer will effectively be using two sets of data for the bake process
    • If there are two adjacent regions, one of which uses the current avatar bake process while the other uses the “new” baking service, viewers in one region will not be able to correctly resolve the textures of avatars in the other region
  • It is hoped that the transitional period where both methods of avatar baking are active will only last for about two weeks.

Object Caching and Interest Lists

Commencing at 57:25 into the recording.

When you enter a region at the moment, your viewer receives a huge amount of information on what requires updating, much of it relating to things you can’t even see from your position in the region. The data is received in no particular order, with the familiar result that things appear to rez in your view in a totally random order – quite often with the thing you actually want to see being one of the last to rez due to the mechanics of Sod’s Law. What’s more, if you have previously visited the region, the chances are that much of the information being sent to your viewer is already cached.

Object caching and interest list changes: easing the pain of random rezzing

The focus of this project is to optimise the data being sent to the viewer, the information already cached by the viewer, and the manner in which both are used, so that things rez faster and in a more orderly manner than is currently the case.
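As a toy illustration of the idea (not LL’s actual scheme, which is still being designed): an interest list ranks object updates by how relevant they are to the viewer – here simply distance from the camera – and skips anything already held in the local cache, so the things nearest you rez first.

```python
# Toy illustration of an interest list; LL's real prioritisation
# scheme is still in flux and will be more sophisticated than this.
import math

def prioritise_updates(objects, camera_pos, cached_ids):
    """Return the object updates actually worth streaming, nearest to
    the camera first, skipping objects already in the local cache."""
    needed = [o for o in objects if o["id"] not in cached_ids]
    # Closest objects first, so they rez in a predictable order.
    return sorted(needed, key=lambda o: math.dist(o["pos"], camera_pos))
```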

At this point in time, this work is in a greater state of flux than the HTTP library and avatar bake projects. It is more a process of optimisation, both on the server side and within the viewer itself, than the introduction of new functionality within the viewer per se. There are no general time frames for this work at present, but there will be updates once it becomes clearer how the optimisation is going to be addressed.

Time Frames

The precise timeframes for implementing these changes have yet to be properly defined. However, Oz Linden hopes that there will be at least a two month period between Linden Lab making the code for each of these project elements available for integration by TPV developers into their viewers and the point at which the Lab states the code must be in use.

At the moment it is likely that the HTTP library element of the project will be rolled out first, although this is unlikely to be within the next two months, for the reason given above. Project Sunshine, dealing with avatar baking, will then follow, although how soon after has yet to be determined; as described earlier in this article, this will be a very controlled roll-out. It is possible that the object caching / interest lists part of the project may not be rolled out for another six months. However, timeframes are still in discussion within LL, so any of this may well change.

Expect updates on all three of these project elements as and when more information is supplied by Linden Lab.

28 thoughts on “Project Shining: what it means for the viewer”

  1. Do you know of any viewers that might be immediately affected by these things? I particularly use Singularity, and it’s kept up really well so far. I don’t really know enough to say if it would or wouldn’t :C, but it’s definitely more advanced than the Linden 1.23.x viewer.


    1. Singularity already uses the Current Outfit Folder mechanism to store appearance (including the multi-wearable support that is about to be released), so adjusting to the new baking system should be a trivial change. 1.23.5 will probably be permanently broken when this rolls out (probably not going to happen that soon, my guess is 6-12 months from now). Some viewers that don’t use COF, like Cool VL Viewer, will have to implement it, so they’ll have a bit more work ahead of them, but I’m sure it’ll get done.


    2. As Latif points out in his comment, and as I cover in the article, no viewers will be immediately affected by any of these changes, due to the time-scales involved. Changes are potentially at least two months away – and those will initially be the HTTP library changes, with things like the avatar baking service even further down the road. How far is anyone’s guess at this point in time. However, the lead-time is such that any TPV that does not support the Current Outfit folder will have time to make the changes necessary to support it, and to support the other changes required to use the new baking service.

      If in doubt, contact the dev(s) responsible for the viewer.


  2. This is all very interesting and encouraging, especially the “Object Caching and Interest Lists” as the current random method is pretty insane.

    One thing that does concern me, though, is taking the rendering server-side. The current method of doing rendering client-side means LL offload the processing to the client, and those of us with serious CPU and GPU hardware can process the rendering pretty quickly. By pushing it server-side, we’re all reduced to the same common denominator plus LL have to make a hardware investment to take on the extra load. Also, will there be some mechanism for the client to ask the rendering to be done to a certain level of detail, so that people running Ultra get back a higher quality texture than people running Low?

    I guess it depends how they implement it, though. I guess if they massively parallelise it in a GPU farm then maybe they can do it efficiently, but you never know with LL.


    1. Rendering isn’t going server-side. What will change is where the object data needed for rendering is drawn from (local cache or streamed from the server), based on an initial exchange between the viewer and server, and the priority given to the object data, based on the viewer’s object priorities. The actual rendering, however, will take place client-side.


  3. I am puzzled.

    SL incompatible with some routers?

    I thought packets were packets. Is this about something such as IPv4 NAT? Because that could be a very widespread problem. It is sometimes done at ISP level, for instance.

    I know the Internet doesn’t match the formal theoretical layer-structures such as the OSI 7 layer model, but what are the Lindens doing?


    1. I’m puzzled as well. I do have a Linksys WAG, which is a model introduced around the same time as the WRT (the WAG includes an ADSL modem), and although I run OpenWAG on it, the router is so old that even the nice open source developers are not very active in developing it.

      Why exactly “old routers” will fail to connect to Second Life is beyond me, especially because we’re talking mostly about changes done at the HTTP layer. What insane tricks are LL up to that will “break” old routers? I guess @Wolf Baginski is right and this involves some kind of clever NAT-related routing which might be a relatively new addition to the NAT protocol and, as such, not available in older routers. But… as said, I’m puzzled but also very curious!

      My concern is not really the need to buy a new router (well, that’s also an issue, of course… buying a new router was not in my plans, to be honest) but how well LL is able to test a vast amount of routers world-wide, because there is an insane amount of variety out there. Imagine that whatever they’re doing is not even supported at the ISP level. This could lead to problems like “sorry, you have to change ISP to be able to log in to SL”.

      I hope that as part of the long roll-out process LL provides some kind of tool to allow people to check if their networking setup is “compatible” with SL…


      1. Monty’s tests appear to have been in connection with the wi-fi side of the routers.

        The comments are made at 24 minutes into the recording, with Monty commenting broadly on the testing and specifically mentioning the Linksys WRT and Belkin G series, prompting Oz to comment: “So the reports of Second Life ‘killing’ people’s wi-fi are, in fact, legit, in some sense,” a comment that prompted some TPV developers to agree – so one assumes the wi-fi issues have been widely reported / seen as a cause of problems for some people using SL.


        1. The Wi-Fi problem is partly down to crowded bandwidth. I’m lucky, nobody close by who uses wi-fi, though I can pick up a couple of other routers from the front of the house. I was able to identify the channels they were set to, and put my router on a different channel. So, no interference problem from that. Then there is RFI in general. Available wi-fi speed can be unpredictable.

          Avoid wi-fi if you can. It’s not an automatic killer for SL, but that’s the way to bet.

          ISPs, and wi-fi salesmen, talk about the peak bandwidth. There is no guarantee that you will get it. Sometimes, I think the Lindens are suckers for sales brochures. I have this bridge in London I should sell them….


      2. “Insane trick” = opening a lot of parallel connections. That’s all. Trying to fix it for the old routers: trying to reuse the same connection for many downloads, and opening only a few in parallel so crappy home routers can cope.


      3. This is a bit of information I’d like to know as well. When I upgraded my account to 100 Mbps my ISP replaced the previous Cisco router with an Aethra SV6044 and I have had problems with SL since then. It would be good to have a tool to test the router, so as to have some evidence that the router is at fault and ask the ISP to change it.


    2. Incompatible is perhaps the wrong word to use here. Some routers do not cope well when you open a large number of parallel HTTP requests. SL clients are known to open 30 or even more parallel connections at once, and some of the routers with their tiny little processors just keel over.
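A minimal sketch of the mitigation described above: cap the number of texture fetches in flight so a small home router is not swamped by 30-plus simultaneous connections. `fetch_texture()` here is a stand-in for the real HTTP GET, and the cap of 8 is an arbitrary illustrative value, not a figure from Linden Lab.

```python
# Sketch only: cap concurrent fetches so older routers can cope.
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_FETCHES = 8  # illustrative cap, well below the 30+ some viewers open

def fetch_texture(texture_id: str) -> str:
    # Placeholder for the real HTTP GET of a texture.
    return f"texture-data:{texture_id}"

def fetch_all(texture_ids):
    # The executor's worker cap acts as the connection limit: no more
    # than MAX_PARALLEL_FETCHES requests are ever in flight at once.
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL_FETCHES) as pool:
        return list(pool.map(fetch_texture, texture_ids))
```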


      1. That, and the Wi-Fi, does make sense. Wi-fi bandwidth is something of a can of worms, with other networks on the same frequency, and RF interference in general.
        Firestorm has a debug setting ImagePipelineUseHTTPFetchMaxRequests and warns that if it is set above 32 you will get crashes.
        With the protocol overheads and all, never mind the router, this looks as though SL will need insane amounts of bandwidth. There are a lot of reasons to use HTTP, but this looks more like stupid bandwidth consumption.
        I have wondered on occasion if anyone at Linden Lab understands HTTP. Now I am wondering again.


  4. As far as I’ve noticed with the viewer I use, which already has some of the Shining code integrated, Return object to last position in world after taking it to inventory is no more!
    But sim crossings as well as fps are much improved (I don’t know if that’s due to changes by LL or by the TPV developer)


    1. The Shining project has been a wrapper for a range of viewer improvements over the last few months, so yes, some are already out there. This is a further redefinition of the project into three additional areas.

      I’m not sure on the status of region crossings – there is an on-going project to improve these, but I’m not up-to-date on the overall status.


      1. They’re trying to improve the actual code in the server. “Threaded region crossing” seems to be deployed, and that enables other changes which should make a difference. The last time they had a big improvement they re-arranged the network proximity of the servers to better reflect the grid-proximity, but every time they do a roll-out the mapping of regions to servers changes.

        My guess: the new code being used for roll-outs tries to preserve the mapping. So it’s going to be slower, but the quality of the sim-crossing holds up better.

        That is a guess, but it makes sense to me as part of a set of improvements. It can’t be the only thing the new code does.


  5. Pingback: Bits and Pieces
  6. Thanks for that update in non-techy speak Inara – these are important improvements and it is nice to be able to understand them and their implications.


    1. It has to be non-techie, as I’m totally non-techie! 🙂

      I’m always worried about misunderstanding something as a result, which is often why these pieces take a while to appear – I’m off trying to read-up on bits and pieces elsewhere to at least try and ensure I get reasonable context when people start talking in terms of “curl”, etc.


    1. There are already other problems predicted: the Pathfinder code and the Havok 7 licensing do seem to force a split.


      1. And it is entirely possible LL and the Havok company might come to regret forcing the split…. say, if OpenSim takes the opportunity to redesign themselves without having to maintain viewer compatibility with how SL does things with viewers… and then proceed to leave SL in their dust in certain functionality. Say, if they add a mechanism that lets one look through a portal and actually SEE, visually, the destination of that portal in real time, and then walk right through, exactly the same as if you were at a sim boundary, except taking you to ANY arbitrary place across the grid, or even over the hypergrid to another grid altogether.


    2. The changes mentioned in this article could be implemented in OpenSim with very little hassle. The upcoming Pathfinder/Havok thing is somewhat of a different story, unless of course someone finds an open-source way to replicate the Havok code. I’m sure devs are looking into it as we speak. A code split is basically an operating plan (in lieu of that replacement option being found) by most TPV devs, so that the development pace of OpenSim support doesn’t suffer in the meantime.


Comments are closed.