On Wednesday, October 29th, the Lab promoted the HTTP pipelining viewer to the de facto release viewer, a move that came just after the grid-wide deployment of CDN support on Tuesday, October 28th. While the two are complementary rather than reliant upon one another, both should help improve the majority of users’ Second Life experience to some degree.
The HTTP pipelining viewer is the latest phase of more than two years of work on Second Life by Monty Linden, work which has involved both the viewer and the servers and back-end services which support SL.
The work, originally a part of Project Shining, which was itself heralded as complete in June 2014, initially focused on texture handling between the servers and the viewer. Since then, Monty has gone on to tackle a number of aspects of improving the use of HTTP in Second Life, such as making connections more robust and reliable, improving throughput to the viewer via HTTP, and so on.
The HTTP pipelining viewer, as the name suggests, leverages HTTP pipelining, a technique in which multiple HTTP requests are sent on a single TCP connection without waiting for the corresponding responses, which significantly improves the download of data (currently avatar baking information, texture data, and mesh data) to the viewer. The upshot of this is that the impact of a user’s physical location on scene loading is reduced, improving their overall experience.
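The benefit described above can be sketched with a simple back-of-the-envelope latency model. This is purely illustrative: the function names and the round-trip and transfer figures below are hypothetical assumptions, not measurements from Second Life.

```python
# Toy latency model for HTTP pipelining (illustrative only; all
# figures below are assumed, not measured from Second Life).

def fetch_time_sequential(n_requests, rtt, transfer):
    # One request per round trip: each request must wait for the
    # previous response before it can be sent.
    return n_requests * (rtt + transfer)

def fetch_time_pipelined(n_requests, rtt, transfer):
    # Pipelined: all requests go out back-to-back on one TCP
    # connection, so only a single round trip is paid up front;
    # responses then stream back in order.
    return rtt + n_requests * transfer

# Hypothetical scene load: 50 texture/mesh fetches, 120 ms round trip
# (a distant, non-US user), 10 ms server/transfer time per asset.
seq = fetch_time_sequential(50, 0.120, 0.010)
pipe = fetch_time_pipelined(50, 0.120, 0.010)
print(f"sequential: {seq:.2f}s, pipelined: {pipe:.2f}s")
# → sequential: 6.50s, pipelined: 0.62s
```

Note how the user's distance (the round-trip time) is paid once rather than once per request, which is why pipelining particularly helps users physically far from the servers.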
As well as this, the HTTP viewer includes significant improvements to inventory folder and item fetches, which can markedly decrease the time taken for inventory to load, particularly if a user’s local inventory files have been flushed as a part of a cache clearing (or similar) exercise.
These inventory updates alone are liable to be appreciated by users as the viewer-side HTTP code gains wider adoption by TPVs. Tests have shown that for a decently structured inventory of around 100,000 items (e.g. one that uses a folder hierarchy, rather than everything dumped into just a handful of top-level folders), a “clean” load time of 16-18 minutes can be reduced to around 3 minutes.
Earlier in October 2014, Monty blogged on his work, showing how both the CDN and the HTTP pipelining viewer, coupled with his earlier HTTP improvements, have benefited texture and mesh fetching in SL. If you’ve not read that blog post, I recommend that you do.
As well as working on HTTP, Monty has also been engaged in rebuilding and cleaning-up many of the third-party libraries used in building the viewer. This work should not only improve the viewer build process by ensuring such third-party libraries are used consistently within it, it may also help pave the way toward the Lab producing 64-bit versions of their viewer in the future.
Alongside the HTTP pipelining viewer there is the Lab’s CDN work – the benefits of which can be felt when using any viewer with SL, but which are likely to be further enhanced as a result of the HTTP pipelining viewer code.
As I’ve reported in covering the CDN work, Linden Lab have engaged the services of Highwinds, a Content Delivery Network specialist which supplies CDN services to a broad range of businesses, including a number of games companies such as Valve (Steam), Funcom, Meteor Entertainment, GameFly and Virgin Gaming. Highwinds operate 25 centres around the world over their own network infrastructure, which they call “RollingThunder”; this peers with more than 1,600 provider networks and over 14,000 ASNs worldwide, and uses the anycast network addressing and routing methodology.
The CDN has been supporting avatar baking for some time, but with this deployment it supports both texture and mesh fetching for regions. This means that when you enter a region, the texture and mesh data your viewer needs to render the scene no longer has to come via the sim host on which the region is running; instead, it can be delivered to you directly from the CDN node that is (generally) closest to you. Overall, this should mean the data can be requested and obtained a lot faster than going via the Lab’s servers, particularly if you are a non-US based user.
There are a couple of caveats to this, although they are fairly situational. The first is that if you are entering a region which has never previously been cached by your “local” CDN node, then that very first scene rendering for the region might actually take a little longer, as the CDN service has to obtain the data from LL. However, once the CDN node has the data, this situation no longer applies. Also, there may be some rare instances where things are a little slower as a result of the CDN (e.g. if you happen to reside closer to the Lab’s data centre than to your local CDN node).
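The cold-cache caveat described above can be illustrated with a toy edge-cache sketch. Everything here is a hypothetical simplification (the class, asset names, and millisecond costs are all assumptions, not Highwinds’ or the Lab’s actual behaviour or figures): the first request for an asset at a given node is a miss and must go back to the origin, after which the node serves it locally.

```python
# Toy CDN edge-cache sketch (all names and timings are hypothetical).

ORIGIN_COST_MS = 180   # assumed round trip to the origin (Lab's) servers
EDGE_COST_MS = 40      # assumed round trip to the user's nearest CDN node

class EdgeNode:
    """A single CDN node with a simple in-memory cache."""

    def __init__(self):
        self.cache = {}

    def fetch(self, asset_id):
        # Cache hit: serve directly from the edge node.
        if asset_id in self.cache:
            return self.cache[asset_id], EDGE_COST_MS
        # Cache miss: simulate pulling the asset from the origin,
        # then store it so later requests are served locally.
        data = f"origin-data-for-{asset_id}"
        self.cache[asset_id] = data
        return data, EDGE_COST_MS + ORIGIN_COST_MS

node = EdgeNode()
_, first_ms = node.fetch("texture-123")    # first visitor pays the miss
_, second_ms = node.fetch("texture-123")   # everyone after gets the hit
print(first_ms, second_ms)  # → 220 40
```

This is why only the very first render of a never-before-cached region is slower: subsequent visitors served by the same node skip the origin round trip entirely.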
A further benefit to having asset data delivered to users via a CDN rather than through the Lab’s simulator hosts is that it lifts a considerable amount of data handling from the latter, particularly their associated Apache services, allowing them to get on with handling other critical activities.
So far, the CDN has more than met the Lab’s expectations, and aside from a slight (and quickly rectified) hiccup at the start of deployment, it has revealed no significant downside. Right now the Lab is still monitoring how the service performs, but thought is already being given to moving other asset data to use it in the future. In addition, the Lab plans to spend time and effort assessing just what the effect of this change has been on their operations from a number of perspectives, and seeing what other improvements might arise from it as a result.
In the meantime, and to return to the HTTP pipelining work, TPVs are being encouraged to adopt the HTTP code as soon as they can manage within their release cycles – so if you are a TPV user, hopefully you won’t have too long to wait before the HTTP pipelining improvements are yours to enjoy, alongside the benefits you can hopefully already experience via the CDN.