Second Life asset fetching: HTTP, CDN and project viewer

Update, April 1st: Vir Linden’s comments on this viewer, offered at the Content Creation User Group meeting, are appended to the end of this article in an audio file.

Some of my recent SL project updates have mentioned that the Lab is working to move the remaining asset fetching away from UDP through the simulator and over to HTTP (avatar baking information, mesh and texture data have been delivered to users via HTTP for the last several years).

This work involves changes to both the simulator and the viewer, both of which have been subject to testing on Aditi, the beta grid, for the last few weeks.

However, on Thursday, March 30th, the Lab effectively marked the start of testing on Agni, the main grid, with the release of the AssetHttp project viewer, version 5.0.4.324828.

With this viewer, the remaining asset classes used in Second Life – landmarks, wearables (system layer clothing and body parts), sounds and animations – are now delivered to users the same way as textures, mesh and avatar baking information: via HTTP over a Content Delivery Network (CDN), rather than through the simulator. This should generally make loading such content both faster and more reliable.

Hang On! What’s this CDN Thing?

If you’ve followed the HTTP / CDN project, you can skip this part 🙂.

To keep things extremely brief and simple: a Content Delivery Network is a globally distributed network of servers which can be used to store SL asset information. This means that when you need an asset – say a sound or animation – rather than having to go via UDP to the simulator, then to LL’s asset service, back to the simulator and finally back to you (again via UDP), the asset is fetched over HTTP from whichever CDN node is closest to you. This should make things faster and smoother, particularly if you are a non-US-based user.
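Purely as illustration, here is a minimal sketch in Python of what such an HTTP fetch looks like from the client side. The endpoint URL and the query-parameter naming are assumptions invented for this example, not the viewer's actual API; in practice the viewer is handed a capability URL by the simulator, and the CDN sits in front of the Lab's asset services.

```python
import uuid
import requests

# Hypothetical asset-fetch endpoint; the URL and parameter scheme below are
# assumptions for illustration only, not the real viewer capability.
ASSET_CAP_URL = "https://asset-cdn.example.com/viewerasset"


def fetch_asset(asset_id: uuid.UUID, asset_type: str) -> bytes:
    """Fetch a single asset (e.g. a sound or animation) over HTTP.

    The nearest CDN edge node answers from its cache when it can; otherwise
    it pulls the asset from the origin asset service before responding.
    """
    resp = requests.get(
        ASSET_CAP_URL,
        params={f"{asset_type}_id": str(asset_id)},  # e.g. sound_id=<uuid>
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content
```

The same request pattern would apply to landmarks, wearables and animations; only the asset type changes.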

A generic CDN diagram: instead of the users in India, Spain, Peru and on the US West coast having to fetch data from a central server (marked in red), they can obtain the data more quickly and reliably from local caches held by CDN nodes located much closer to them.

There are some caveats around this – one being, for example, that if you request asset information not already stored on the nearest CDN node, it still has to be fetched from the Lab’s services before it can be delivered to you, after which it can be cached by your viewer.
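To make that caveat concrete, the overall flow looks roughly like the sketch below: check the viewer's own cache first, go out over HTTP on a miss (with the CDN falling back to the Lab's origin services if it does not hold the asset), then keep a local copy. Again, the endpoint, directory name and file layout here are purely hypothetical and only illustrate the idea.

```python
import os
import uuid
import requests

# Hypothetical endpoint and on-disk viewer cache; names and layout are
# assumptions made only to illustrate the local-cache / CDN / origin flow.
ASSET_CAP_URL = "https://asset-cdn.example.com/viewerasset"
CACHE_DIR = os.path.expanduser("~/.sl_asset_cache")


def get_asset(asset_id: uuid.UUID, asset_type: str) -> bytes:
    """Return asset bytes, preferring the local viewer cache over the network."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    cache_path = os.path.join(CACHE_DIR, f"{asset_id}.{asset_type}")

    # 1. Local cache hit: no network round trip at all.
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return f.read()

    # 2. Cache miss: fetch over HTTP. The CDN edge node answers from its own
    #    cache if it holds the asset; if not, it first has to pull the asset
    #    from the Lab's origin services - the slower path noted above.
    resp = requests.get(
        ASSET_CAP_URL,
        params={f"{asset_type}_id": str(asset_id)},
        timeout=30,
    )
    resp.raise_for_status()

    # 3. Store a local copy so later requests never leave the machine.
    with open(cache_path, "wb") as f:
        f.write(resp.content)
    return resp.content
```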

As noted above, the Lab started using CDN providers when they introduced the avatar baking service (called server-side baking) in 2013, and extended the use to the delivery of mesh and texture assets as part of a massive overhaul of Second Life’s communications and asset handling protocols spearheaded by Monty Linden (see my HTTP updates). Moving the remaining asset types to HTTP / CDN delivery effectively completes that work.

OK, So, What’s Next?

Right now, this is only a project viewer, and the Lab are looking to have people try it out and test the fetching and loading of landmarks, wearables (system layer clothing and body parts), sounds and animations, so they can examine performance, locate potential issues, and so on.

However, the code will be progressing through project status to release candidate and ultimately to release status over the next few weeks / months (depending on whether any significant issues show up). Once this happens, TPVs will be given a period of time to integrate the code as well, after which all support for UDP asset fetching will be removed from both the viewer code and the simulators.

A rough time frame for this latter work is around late summer 2017. When it happens, it will mean that anyone using a viewer that does not have the updated HTTP code for asset handling will not be able to obtain any new or updated asset data from the Second Life service.

3 thoughts on “Second Life asset fetching: HTTP, CDN and project viewer”

  1. While I am hoping this works – some things covered by the change have been erratic lately – there’s still going to be a lot of UDP traffic. The sticking point is going to be the third-party viewers. Some I expect to be pretty prompt; others may be running on too rigid a timetable, which could slow the changes; some are very slow to update.

    I have my own doubts whether anyone properly understands the workings of the cache, and I am wondering just what the practical difference is between Avatar Baking and System Layer Clothing. I suppose it will help with what you see when you change clothes, but nobody else will see any effect.

    Large numbers of people in the sim seem to be the big problem, not their visibility.


    1. These are changes that have been discussed with TPVs for the last few weeks / months through the TPV Developer meeting. As such, those who attend are well aware of the changes, and those who don’t are generally tracking viewer changes through the repositories directly, so they too are generally ready to go in short order. There shouldn’t, therefore, be any serious bottleneck delaying the deployment beyond the Lab’s anticipated time line.

      Avatar baking does make a difference, but I think it is negated by the use of mesh, so many don’t actually get to see it in action. Caching-wise, to my inexpert eyes the problem seems to be not so much how things are delivered to the viewer, but the amount of work the viewer has to do in trying to sort everything out and put it all together. As a mesh body user, for example, I’ve noticed something of an increase in the time it takes for the viewer to assemble my mesh body, which seems to compete with the viewer’s attempts to render the in-world scene.

      And yes, large numbers of avatars are an issue, as they are handled differently to everything else in the region, and as they tend to be in motion a lot, they obviously generate a lot more in the way of update traffic the viewer has to deal with, further impacting its performance. Draw distance is another potential local hit – how many people tend to roam with a “permanent” DD of 256 metres or more? Perhaps not so bad on an isolated region, but in a region directly connected to others, it again places a lot of extra work on the viewer; and even on a single region, a high-value DD can be detrimental to local performance when wandering around indoors. Recommendations here tend not to help. Only recently I visited an installation on a region surrounded by others, where the recommendation was “Set draw distance to at least 500m” (my emphasis) – leaving people’s viewers trying to deal with data coming from not just the regions surrounding the one with the installation, but potentially chunks of the regions beyond them as well.


  2. I actually was thinking of where all this work was heading, myself, the first time this last group of possible changes came up… and came up with perhaps one idea of how all this actually could come together and be used to its fullest: what if LL is planning to move SL to a whole new grid system… this time… to their own design of something like OpenSim’s Distributed Scene Graph? Perhaps even eventually making an entirely new viewer to go with it, one that would be modular, making its upkeep much easier?

