Monty Linden discusses CDN and HTTP

In show #46 of The Drax Files Radio Hour, which I’ve reviewed here, Draxtor pays a visit to the Lab’s head office in Battery Street, San Francisco. While there, he interviews a number of Linden staffers – including Monty Linden.

Monty is the man behind the Herculean efforts in expanding and improving the Lab’s use of HTTP in support of delivering SL to users, work which most recently resulted in the arrival of the HTTP Pipeline viewer (the code for which is currently being updated).

He’s also been bringing us much of the news about the content delivery network (CDN) project, through his blog posts; as such, he’s perhaps the perfect person to provide further insight into the ins and outs of the Lab’s use of both the CDN and HTTP in non-technical terms.

While most of us have a broad understanding of the CDN (which is now in use across the entire grid), Monty provides some great insights and explanations, so much so that I thought it worthwhile pulling his conversation with Drax out of the podcast and devoting a blog post to it.


Monty Linden talks CDN and HTTP with Draxtor Despres on the Drax Files Radio Hour

Monty starts out by providing a nice, non-technical summary of the CDN (which, as I’ve previously noted, is a third-party service operated by Highwinds). In paraphrase, the idea is to get essential data about the content in any region as close as possible to SL users by replicating it at as many different locations around the world as possible; then, by assorted network trickery, to ensure that data can be delivered to users’ viewers from the location closest to them, rather than having to come all the way from the Lab’s servers. All of which should result in much better SL performance.
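
To make the “closest copy wins” idea a little more concrete, here’s a minimal, purely illustrative Python sketch of my own (nothing to do with the Lab’s or Highwinds’ actual systems; the location names and latency figures are invented) of why serving an asset from a nearby edge location beats fetching everything from a single distant origin:

```python
# Hypothetical edge locations and example round-trip times (in milliseconds)
# as they might be measured from a user in, say, London. All figures invented.
EDGE_LATENCY_MS = {
    "origin-phoenix": 140,   # the Lab's own data centre (the origin)
    "cdn-london": 12,
    "cdn-frankfurt": 25,
    "cdn-new-york": 80,
}

def pick_closest(latencies):
    """Return the location with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(pick_closest(EDGE_LATENCY_MS))   # -> "cdn-london"
```

In practice the “network trickery” happens on the CDN’s side (DNS and routing), not by the viewer comparing latencies, but the effect is the same: the copy nearest to you is the one that answers.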

“Performance” in this case isn’t just a matter of how fast data can be downloaded to the viewer when it is needed. As Monty explains, in the past, simulation data, asset management data, and a lot of other essential information ran through the simulator host servers. All of that adds up to a lot of information the simulator host had to deliver to every user connected to a region.

The CDN means that a lot of that data is now pivoted away from the simulator host and supplied instead by the CDN’s servers. This frees up capacity on the simulator host for handling other tasks (an example being region crossings), leading to additional performance improvements across the grid.

Highwinds, the CDN provider Linden Lab initially selected for this project, has 25 data centres around the world and a dedicated network through which essential asset data (at present avatar bakes, textures and meshes) can be delivered to SL users, which should improve the speed and reliability of asset downloads to the viewer

An important point to grasp with the CDN is that it is used for what the Lab refers to as “hot” data; that is, the data required to render the world around you and other users. “Cold” data, such as the contents of your inventory, isn’t handled by the CDN. There’s no need, given it is inside your inventory and not visible to you or anyone else (although objects you rez and leave visible on your parcel or region for anyone to see will have “hot” data (e.g. texture data) associated with them, which will gradually be replicated to the CDN as people see them).

The way the system works is that when you log in or teleport to a region, the viewer makes an initial request for information on the region from the simulator itself. This is referred to as the scene description information, which allows the viewer to know what’s in the region and start basic rendering.

This information also allows the viewer to request the actual detailed data on the textures and meshes in the region, and it is this data which is now obtained directly from the CDN. If the data isn’t already stored by the CDN server, the CDN requests it from the Lab’s asset servers, and it then becomes “hot” data held by the CDN. Thus, what is actually stored on the CDN servers is defined entirely by users as they travel around the grid.
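
To illustrate that on-demand behaviour, here’s a toy Python sketch of my own (an assumption-laden mock-up, not the Lab’s or Highwinds’ implementation) of an edge cache that only ever holds what users have actually asked for: the first request for a texture or mesh falls through to the Lab’s asset servers, while every subsequent request is answered from the edge copy.

```python
# Invented asset IDs standing in for the Lab's asset servers (the "origin").
ORIGIN_ASSETS = {
    "texture/abc123": b"...texture bytes...",
    "mesh/def456": b"...mesh bytes...",
}

class EdgeCache:
    """Toy CDN edge node: holds only the "hot" data users have requested."""

    def __init__(self):
        self.cache = {}

    def get(self, asset_id):
        if asset_id not in self.cache:
            # Cache miss: fetch from the origin and keep a copy at the edge.
            print(f"miss: {asset_id} -> fetching from the asset servers")
            self.cache[asset_id] = ORIGIN_ASSETS[asset_id]
        else:
            print(f"hit: {asset_id} -> served from the edge")
        return self.cache[asset_id]

edge = EdgeCache()
edge.get("texture/abc123")   # first viewer to look: miss, origin is asked
edge.get("texture/abc123")   # everyone after that: hit, origin never touched
```

In this picture, “cold” data such as inventory contents is simply something nobody ever asks the edge for, so it never gets cached there.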

The CDN is used to deliver “hot” texture and mesh data – the data relating to in-world objects – to the viewer on request

The HTTP work itself is entirely separate from the CDN work (the latter was introduced by the Lab’s systems engineering group, while Monty, as noted in my HTTP updates, has been working on HTTP for almost two-and-a-half years now). However, the two are complementary; the HTTP work was initially aimed both at making communications between the viewer and the simulator hosts a lot more reliable, and at pivoting some of the data delivery between simulator and viewer away from the more rate-limited UDP protocol.
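
To give a feel for why well-managed HTTP connections matter, here’s a small, purely illustrative Python sketch using the requests library (the URLs are invented, and this is obviously not viewer code). Reusing one persistent connection for many asset fetches avoids paying the connection set-up cost over and over; the pipelining work in the viewer goes a step further, keeping several requests in flight on a connection at once.

```python
import requests

# Invented URLs standing in for per-asset texture/mesh requests.
ASSET_URLS = [f"https://assets.example.com/texture/{i}" for i in range(20)]

def fetch_naive(urls):
    """A new connection per request: TCP/TLS set-up is paid for every asset."""
    return [requests.get(url).content for url in urls]

def fetch_with_reuse(urls):
    """One persistent (keep-alive) connection, reused for every request."""
    with requests.Session() as session:
        return [session.get(url).content for url in urls]
```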

As Monty admits in the second half of the interview, there have been some teething problems, particularly when using the CDN alongside his own HTTP updates in the viewer. This is being worked on, and some recent updates to the viewer code have just made it into a release candidate viewer. In discussing these, Monty is confident they will yield positive benefits, noting that in tests with users in the UK, the results were so good, “were I to take those users and put them in our data centre in Phoenix and let them plug into the rack where their simulator host was running, the numbers would not be better.”

So fingers crossed on this as the code sees wider use!

In terms of future improvements / updates, as Monty notes, the CDN is a major milestone, something many in the Lab have wanted to implement for a long while, so the aim for the moment is making sure that everyone is getting the fullest possible benefit from it. In the future, as Oz Linden has indicated in various User Group meetings, it is likely that further asset-related data will be moved across to the CDN where it makes sense for the Lab to do this.

This is a great conversation, and if use of the CDN has been confusing you at all, I thoroughly recommend it; Monty does a superb job of explaining things in clear, non-technical terms.

6 thoughts on “Monty Linden discusses CDN and HTTP”

  1. In Sweden, we have noticed a definite improvement in that we no longer have the “America wakes up” problem. Performance in Second Life used to deteriorate when the US woke up and improve when the United States went to bed.

  2. I noticed an issue when sailing this weekend that I don’t know is related, but in some regions, after a sim crossing, the boat and avatar were derendered, and even after a few attempts (left Ctrl-Alt-R), it was only when crossing into another region that I was able to see myself and the boat rendered again.
    Also, on the Blake Sea, several sculpts (the old LL trees) never changed from the bulb state (the one shown when something starts to rez), even after stopping and waiting for a long time.
    Besides, using the draw distance slider is a must for any explorer: when you arrive at any place you need to lower it to 32m and then increase it up to where you can safely travel and see what lies ahead.
    So I do start to believe deeply that the official LL viewer (and I don’t use that viewer) needs a draw distance slide bar like most of the TPVs already have.

  3. And this is why Linden Lab made some changes in the ToS about content. Without that change, the CDN would be problematic.

    1. Not really. The ToS has always secured the rights the Lab has needed in order to distribute user-generated content across the Internet to users’ viewers. The CDN is simply a further part of that distribution process.

  4. The big question, of course, is, at the end of the day, does it work?

    Amazingly, it does. Yesterday I was in one of the more texture-intensive areas I know: Neufreistadt, a region built in 2004, when content creators were all amateurs and happy to paste 1024×1024 alpha textures on every little prim face. Of course some buildings were redesigned, but it’s still a very, very laggy place. And, on top of that, I attended an event with 40 avatars, all dressed up for a formal ball, all of them dancing.

    The performance? About 30 FPS. It would drop slightly when zooming in on some participants and with very rapid camera movements — going down to, say, 15-19 FPS — and then quickly recover.

    You might say it’s all thanks to a new computer. Sure… but this (GMT) morning I went to the same place, which is now empty (and so is most of the SL grid). What was the performance on the same spot? Also 30 FPS! It seems that this is pretty much the limit that my graphics card is able to render at my current settings (which are not completely maxed out, but almost). The conclusion? The number of avatars and what they’re doing now has much less impact on overall performance than before, and this is an utterly astonishing improvement!

    One might wonder if everybody had the same experience. The answer is ‘no’. The majority of people were using Firestorm. They complained about lag all the time. The Firestorm team is deliberately waiting for LL’s improvements to ‘settle down’ before they consider adding them to the viewer. They have stated that their team is focusing on stability, not performance. And they might be right on the stability issue: I crash rather frequently (perhaps 3-4 times per hour on the LL viewer). But it could also be my side of the connection, of course.

    For the past month or so, as LL has been continuously improving their viewer, I’ve abandoned Firestorm (except, of course, for OpenSimulator, which I use a lot) and have been consistently using LL’s viewer instead. The performance is simply incredible. Oh yes, the interface continues to drive me insane. So many years using TPVs, especially Firestorm, where things are much more logically laid out, and much simpler to use, have spoilt me — I miss Firestorm’s interface thoroughly. It might not be perfect (that’s impossible considering it builds upon LL’s code), but it’s light-years ahead of what the Lab comes up with. So this is a tough dilemma. If you prefer stability and a rational, logical UI which is far easier to use, stick to Firestorm and other TPVs. If you want sheer performance (good for machinima) and are willing to accept a few crashes now and then, and have enough patience to go through Linden Lab’s infuriatingly illogical UI, then go for the Lab’s viewers; they leave Firestorm in the dust begging for more performance.

    I wish that LL would just assimilate the Firestorm team. After all, Lindens participate and contribute code to Firestorm, as we can see on FS’s JIRA. They already work so closely together that maybe it’s time to merge both projects. That, of course, would mean LL agreeing to continue support for OpenSimulator… which I hardly believe they will ever do again. And it would also mean getting rid of whoever at the Lab believes they know how to design user interfaces.

    1. Slight update. The experience I described in that last comment was with the standard viewer. Today I downloaded the Release Candidate. Guess what, I got a 20% improvement in FPS — and no crashes for almost an hour, at an event with some two dozen people. Tomorrow I’ll be at an even laggier place, and at an hour when the grid is busier, so I’m curious about the differences.

      Whatever magic you’re doing, Linden Lab, keep it coming 🙂

      @zzpearlbottom there most certainly is a draw distance bar on the LL viewer. It’s just under Me > Preferences > Graphics > Draw Distance 🙂 Aye, I know what you mean, it’s hardly in a convenient spot…
