LL confirms Second Life regions now all on AWS

Logos ©, ™ and ® Linden Lab and Amazon, Inc.

On Thursday, November 19th, after several months of very hard work to manage the process in as orderly and non-disruptive a manner as possible, the last remaining regions on Agni (the Second Life main grid) were successfully transitioned to running on Amazon Web Services (AWS), placing the entire grid “in the cloud”.

The announcement came first via Twitter, from April Linden, the Lab’s Systems Engineering Manager, Operations:

April Linden’s announcement

The Lab actually started transitioning regions several weeks ago, without fanfare, first moving a number of regions only accessible to Linden personnel, then carefully widening things to include selected public regions on the Mainland and – subject to the estate owners initially keeping quiet as well – private regions that experience assorted loads.

These initial transitions were more about testing certain aspects of simulator operations than about marking the outright start of any region migration process; the Lab wanted to gather data on simulator / region performance on AWS and investigate how simulators with a wide mix of avatar / content loads behaved.

However, these initial moves quickly gave April and her team, the QA team under Mazidox Linden, and the simulator development team the confidence to broaden the “uplift” process, extending things first to the simulator release candidate deployment channels (RC channels) and then, in the last couple of weeks, to the bulk of the regions sitting on the SLS “Main” channel.

While there have been hiccups along the way – most notably with teleport problems and group chat / IM failures, together with some performance degradation in other areas – on the whole, the transition of the grid has been remarkably smooth and problem-free.

However, this does not mean all of the work is over: as LL themselves would be quick to point out, there are still a number of back-end systems to transition to AWS, and after that there will inevitably be a period of “bedding in” to get everything running smoothly before work can start on “fine tuning” the various services. (There are also some regions still running in the Lab’s co-location facility in Arizona to help people with workarounds for specific issues, but these are perhaps just a handful, including a couple of public regions – Debug1 and Debug2.)

Soft Linden on the AWS transition

Nevertheless, this is a huge achievement, and marks a significant milestone in what has thus far been around a 3-year project to get all of Second Life safely transitioned to AWS. So congratulations to all of those at the Lab who have been working very hard to make this happen without causing widespread upset or issues.

11 thoughts on “LL confirms Second Life regions now all on AWS”

  1. Glad to see all sims are now migrated to AWS. Well done to the team doing that.

    Can somebody in the know please tell me why Oregon was chosen as a location to host the SL sims? And was a study done to determine whether any particular hosting location would be beneficial to the wider SL community, or privilege any particular group?

    Also, when the first wave of sims was migrated, we noted that there was no throttling of scripts being run, and most AWS sims showed 100% or near-100% Scripts Run stats. Now I see that the Scripts Run percentage is down to roughly where it was on the old LL servers. What is the thought process on this?

    1. No specific reason has been given. But presumably, the choice was made in consideration of various internal factors at Linden Lab, possibly combined with a recommendation from Amazon themselves (and possibly even a prior working relationship with the Oregon centre vis-à-vis Sansar (prior to its sale) – so personnel at the Oregon centre have familiarity with this type of product hosting?)

      As has been indicated by Ebbe and Oz, the Oregon selection may only be the first tranche of cloud-based operations; as LL gain familiarity with running the SL services via AWS, it’s been indicated that things might become a little more distributed in nature (including the future potential for simhosts to be geographically diverse, depending on their primary audiences – although that is likely to be a good way down the road from where things are now / will be for the foreseeable future).

  2. “Can somebody in the know please tell me why Oregon was chosen as a location to host the SL sims?”

    Do you mean that the AWS cloud servers (which, as we all know, don’t actually live in the air, but somewhere on the ground) are situated in Oregon, and that, when you move stuff to AWS, you get to pick which state your AWS hosting is actually in? So, in effect, the data has all moved from Arizona to Oregon?

  3. Yes, the data has moved from Arizona to Oregon – at least for now. The concept of “the cloud” is something of a misnomer.

    You can choose any Amazon AWS location when you set up your service with them. Usually the choice is based on the geographic location of your clientele, for faster serving of data, unless perhaps some special deal has been done with Amazon (see the sketch at the end of this comment).

    Inara, there is no logic in deciding on a specific location based on personnel, as most Amazon AWS sites are administered remotely anyway. The personnel on the ground at one site or another are responsible for air conditioning and maintenance, not server configuration and administration.

    Someone made the decision to choose Oregon. I’d like to know the reason. Was it price (i.e. cheaper), or was there some other reason?
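
    To illustrate the earlier point about picking a location, here is a minimal sketch using boto3, the AWS SDK for Python. The AMI ID is a placeholder, and none of this reflects LL’s actual setup; it simply shows that the region is an explicit choice made when a service is provisioned:

    import boto3

    # The region is chosen when the client is created: "us-west-2" is
    # Oregon; "eu-west-1" (Ireland) or "ap-southeast-2" (Sydney) could
    # equally be picked to sit nearer a given clientele.
    ec2 = boto3.client("ec2", region_name="us-west-2")

    # Launch a single server in that region (placeholder AMI ID).
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])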

    1. Personnel – hence the caveat I placed on the comment in the form of a question mark.

      At the end of the day, the move has been made, and there will doubtless have been reasons for it. I personally don’t see why knowing the reason is so intrinsic to your SL experience at this point in time (although I doubt pricing played a significant role, for the same reason you reject my comment on personnel / exposure – pricing tends to be centrally managed by Amazon). You can try asking LL directly – but I doubt you’ll gain the level of response you seem to want.

  4. I would like to know what the difference is between AWS and the old LL servers, because in-world I don’t see any improvements; on the contrary, I am facing lag like never before, and frozen screens during combat sessions…
    So which improvements are we talking about? Teleport “average time”? How about people still not receiving a teleport request, even when it is sent by 3-4 different users? Chat lag has always been our daily nightmare, but it has worsened over the past 2 months, with people unable to see Group Chat at all for days, even weeks.
    All I see is degraded quality overall, and for a user like me, with over 10 years active in SL, it is disappointing: I can’t talk with my friends, I can’t join combat… it is sad…

    1. Significant improvements were not expected immediately following the initial completion of the simulator move, although there have been measured improvements in some areas (such as simulator-side script processing) – along with some teething issues. It is hoped that, in time, the move will yield more appreciable improvements (alongside other work being undertaken with SL).

      The core difference between the AWS servers and those operated by the Lab is that the latter are generationally much older than the hardware Amazon can furnish through AWS. And because the hardware is furnished by AWS, a huge burden is removed from the Lab: there is no longer any need to identify new hardware, test it, make the capital expenditure to obtain it, go through a long process of commissioning it and transitioning services over to it, and then run it for multiple years so it effectively “earns its cost” through operation / depreciation. All of that effort now sits squarely with Amazon, leaving Linden Lab free to select the hardware they want for their various services (including simhost servers), agree the costs for using it, and say to Amazon, “Make it so!” – focusing instead on the adjustments to their own code and services needed to make use of the new hardware, without the capital expenditure and headaches of commissioning / implementation.

      There are also numerous other benefits which may not be immediately user-facing. For example, Amazon provide a range of diagnostic and remote monitoring tools that LL are already starting to leverage alongside their own, and which should help in getting to the root cause of significant issues much more effectively – a rough sketch of the kind of data such tools expose follows at the end of this comment.

      In terms of the issues you mention – teleport failures and Group chat problems – these have certainly had a negative impact on the transition, although as Oz Linden pointed out in an official blog post a while back, their root cause may not be the transition itself; rather, the move may have exposed pre-existing issues which had previously been somewhat masked by the way in which the Lab’s physical environment at their co-location facility had been put together. The good news is that LL are very aware of them, and the engineering teams are actively working to resolve them.
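
      By way of a purely hypothetical example of what such tooling looks like, the sketch below pulls average CPU utilisation for a single (placeholder) instance from Amazon CloudWatch, one of the standard AWS monitoring services, via the boto3 Python SDK. Nothing here reflects LL’s actual tooling or metrics:

      from datetime import datetime, timedelta

      import boto3

      cw = boto3.client("cloudwatch", region_name="us-west-2")

      # Average CPU utilisation for one instance over the last hour,
      # in 5-minute buckets.
      stats = cw.get_metric_statistics(
          Namespace="AWS/EC2",
          MetricName="CPUUtilization",
          Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
          StartTime=datetime.utcnow() - timedelta(hours=1),
          EndTime=datetime.utcnow(),
          Period=300,
          Statistics=["Average"],
      )
      for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
          print(point["Timestamp"], f"{point['Average']:.1f}%")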

  5. The location of the SL servers is important to most who live outside of the USA. The ping time between you and the SL servers indicates the delay in serving data – essentially, how far you are from them. You can test it yourself by looking at the sim stats. A ping time of under 200ms is acceptable. Right now, from France to Oregon, I’m getting around 250ms, so not the best.

    I just wonder if there was a study to see what AWS location would result in the best service for all users.

    When I set up an AWS server three years ago, we looked at the ping times from the US, Europe and Australia, and we determined that the northern Virginia site was the best AWS location from which to serve everybody.
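
    For anyone wanting to run that sort of comparison themselves, here is a minimal sketch in Python. It times a TCP handshake as a rough stand-in for ICMP ping (which requires elevated privileges), using Amazon’s public regional API endpoints as convenient targets – actual simulator hosts would differ:

    import socket
    import time

    def tcp_rtt_ms(host, port=443, samples=5):
        """Average time, in ms, to complete a TCP handshake with host:port."""
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=3):
                pass  # connected; close immediately
            times.append((time.perf_counter() - start) * 1000)
        return sum(times) / len(times)

    # Compare a few candidate regions by their public EC2 API endpoints.
    for region in ("us-west-2", "us-east-1", "eu-west-1", "ap-southeast-2"):
        host = f"ec2.{region}.amazonaws.com"
        print(f"{region}: {tcp_rtt_ms(host):.0f} ms")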

    1. Yes, I understand ping times. My own ping time to Oregon is little different to that to the co-lo in Phoenix (an average of 150-170ms for the latter, and 160-175ms for Oregon), and, like you, I am on the European side of the Atlantic (in my case, the UK). So ping is at best an indicative measure, and somewhat dependent on routing (particularly at either end of the trans-Atlantic pipe, in your case and mine). As such, I prefer to look at a broader group of measures – ping time, FPS, loading time at log-in, etc. – which, at this point in time at least, all seem pretty much unchanged for me.

      Was a study carried out? I’ve no idea – but LL were talking about this as far back as 2016/17, a fair while before any work commenced (the idea of transitioning SL was raised directly as a result of the work carried out in running Sansar), and AWS were apparently very much involved in those discussions. But again, you’ll have to ask LL directly for specifics – if they are willing to share them. All I can say is that, as far as I’m aware, the specifics of the decision to locate to Oregon haven’t been publicly expressed.
