
In show #46 of The Drax Files Radio Hour, which I’ve reviewed here, Draxtor pays a visit to the Lab’s head office in Battery Street, San Francisco. While there, he interviews a number of Linden staffers – including Monty Linden.
Monty is the man behind the Herculean effort to expand and improve the Lab's use of HTTP in support of delivering SL to users, work which most recently resulted in the arrival of the HTTP Pipeline viewer (the code for which is currently being updated).
He’s also been bringing us much of the news about the content delivery network (CDN) project, through his blog posts; as such, he’s perhaps the perfect person to provide further insight into the ins and outs of the Lab’s use of both the CDN and HTTP in non-technical terms.
While most of us have a broad understanding of the CDN (which is now in use across the entire grid), Monty provides some great insights and explanations, so I thought it worthwhile to pull his conversation with Drax out of the podcast and devote a blog post to it.
Monty Linden talks CDN and HTTP with Draxtor Despres on the Drax Files Radio Hour
Monty starts out by providing a nice, non-technical summary of the CDN (which, as I've previously noted, is a third-party service operated by Highwinds). In paraphrase: the aim is to get essential data about the content in any region as close as possible to SL users by replicating it at as many different locations around the world as possible; then, by assorted network trickery, to ensure that data can be delivered to users' viewers from the location closest to them, rather than having to come all the way from the Lab's servers. All of which should result in much better SL performance.
“Performance” in this case isn't just a matter of how fast data can be downloaded to the viewer when it is needed. As Monty explains, in the past, simulation data, asset management data, and a lot of other essential information all ran through the simulator host servers. That adds up to a lot of information the simulator host had to deliver to every user connected to a region.
The CDN means that a lot of that data is now pivoted away from the simulator host, as it is supplied by the CDN's servers instead. This frees up capacity on the simulator host for handling other tasks (region crossings, for example), leading to additional performance improvements across the grid.

An important point to grasp with the CDN is that it is used for what the Lab refers to as “hot” data: the data required to render the world around you and other users. “Cold” data, such as the contents of your inventory, isn't handled by the CDN; there's no need, given it is inside your inventory and not visible to you or anyone else. (Objects you rez and leave visible on your parcel or region for anyone to see will, however, have “hot” data – texture data, for example – associated with them, which will gradually be replicated to the CDN as people see it.)
The way the system works is that when you log in to or teleport into a region, the viewer makes an initial request for information on the region from the simulator itself. This is referred to as the scene description information, and it allows the viewer to know what's in the region and start basic rendering.
This information also allows the viewer to request the actual detailed data on the textures and meshes in the region, and it is this data which is now obtained directly from the CDN. If the information isn’t already stored by the CDN server, it makes a request for the information from the Lab’s asset servers, and it becomes “hot” data stored by the CDN. Thus, what is actually stored on the CDN servers is defined entirely by users as they travel around the grid.
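For the more technically minded, the “pull-through” behaviour described above – serve from the edge if the data is already “hot”, otherwise fetch it once from the Lab's asset servers and keep it – can be sketched in a few lines. This is purely illustrative; the class and function names below are my own invention, not Highwinds' or the Lab's actual implementation.

```python
# Minimal sketch of pull-through CDN caching, as described above.
# EdgeCache and fetch counts are illustrative assumptions only.

class EdgeCache:
    """A CDN edge node: serves "hot" assets from its local store,
    pulling from the origin (the Lab's asset servers) on a miss."""

    def __init__(self, origin):
        self.origin = origin   # callable: asset_id -> asset data
        self.store = {}        # the "hot" data held at this edge

    def get(self, asset_id):
        if asset_id not in self.store:
            # Cache miss: fetch from the origin once; the asset is
            # then "hot" and served locally to every later viewer.
            self.store[asset_id] = self.origin(asset_id)
        return self.store[asset_id]


# Example: only the first request for a texture reaches the origin.
origin_calls = []

def origin(asset_id):
    origin_calls.append(asset_id)
    return f"<data for {asset_id}>"

edge = EdgeCache(origin)
edge.get("texture-123")   # miss: pulled from the asset servers
edge.get("texture-123")   # hit: served from the edge node
```

This also illustrates the point about users defining what the CDN holds: an asset only becomes “hot” at an edge once somebody's viewer actually asks for it there.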

The HTTP work itself is entirely separate from the CDN work (the latter was introduced by the Lab's systems engineering group, while Monty, as noted in my HTTP updates, has been working on HTTP for almost two-and-a-half years now). However, they are complementary; the HTTP work was initially aimed both at making communications between the viewer and the simulator hosts a lot more reliable, and at pivoting some of the data delivery between simulator and viewer away from the more rate-limited UDP protocol.
As Monty admits in the second half of the interview, there have been some teething problems, particularly when using the CDN alongside his own HTTP updates in the viewer. This is being worked on, and some recent updates to the viewer code have just made it into a release candidate viewer. In discussing these, Monty is confident they will yield positive benefits, noting that in tests with users in the UK, the results were so good that, “were I to take those users and put them in our data centre in Phoenix and let them plug into the rack where their simulator host was running, the numbers would not be better.”
So fingers crossed on this as the code sees wider use!
In terms of future improvements / updates: as Monty notes, the CDN is a major milestone, something many in the Lab have wanted to implement for a long while, so the aim for the moment is making sure that everyone is getting the fullest possible benefit from it. In the future, as Oz Linden has indicated in various User Group meetings, it is likely that further asset-related data will be moved across to the CDN where it makes sense for the Lab to do so.
This is a great conversation, and if use of the CDN has been confusing you at all, I thoroughly recommend it; Monty does a superb job of explaining things in clear, non-technical terms.