2020 SUG meeting week #43: further uplift update


The following notes were taken from the October 20th Simulator User Group meeting.

Cloud Uplift Update

There are no formal simulator deployments for week #43. This is to make way for selected regions (both RC and Main channel) to be migrated to AWS services (the cloud). This means those regions that are transitioned will be restarted.

For details of the initial announcement on the uplift, please refer to Lab expanding number of regions on AWS.

Speaking at the Simulator User Group meeting on October 20th, Mazidox Linden provided the following update:

This morning we started our medium-size (at least by the size of a world) movement of regions to cloud based simhosts. Specifically we now have over 300 simulators running in the cloud. We’re looking at how quickly we can move more, as well.
For this round we went with a mixture of extremely high impact regions and extremely low impact “filler” regions, to stress various parts of our systems like the region allocation process and how we pack regions into a host.
In future rounds it’s likely we’ll be focusing on specific Release Candidate channels.
Basically: My *goal* (I’m not saying it’s what we’re doing, but I’m saying it’s what I’m trying to do) is to make this the worst it gets, and it only gets better from here.

– Mazidox Linden

Bullet Notes on Uplift

  • As of the October 20th SUG meeting, some 300 regions have been transitioned to the cloud, representing a mix of regions in terms of use.
  • Warnings have been given about possible performance issues, as a number of back-end systems have yet to be uplifted (and presumably need the simulators in place before they can be), which may have some impact.
  • The hope is that some of these additional services will be transitioned when they can, rather than being “left until last”.
    • One of those earmarked for transition that has been causing some angst is the set of servers managing the KVP database for experiences. There have already been reports of degraded experience performance on AWS-hosted regions, so the hope is to get the KVP database moved, with “quite a bit of effort” going into it.
  • Those who have had regions uplifted and feel they are suffering adversely from the move can request a roll-back to being hosted in the Lab’s co-lo. However:
    • As all regions will be running on AWS services by year’s end, LL would rather region holders bear with any issues they may encounter whilst things are in a state of flux, and if they are encountering specific issues, to work with the Lab to try and identify and rectify matters.
    • Those who feel they need a roll-back to the lab’s co-lo facility should file a support ticket, providing a clear indication of why they would like their region(s) rolled back, and the issues they are encountering.
During the transition (as now) either the uplifted or data centre regions will have some penalty … we expect that will mostly disappear by the time we’re done

– Oz Linden on possible simulator (and other) performance impacts
during region migration

  • The Lab’s aim is to continue to make the transitioning of services from their co-lo to AWS services as invisible to users as possible (that is, you shouldn’t really be able to tell the difference between a service now running on AWS and when it was running via the Lab’s co-lo).

SL Viewer

There have been no updates to the current list of official viewers, leaving the pipelines as follows:

  • Current release viewer version, formerly the Mesh Uploader RC released in October and promoted on October 14 – No Change.
  • Release channel cohorts:
    • Cachaça Maintenance RC viewer, version, issued October 1.
  • Project viewers:
    • Project Jelly project viewer (Jellydoll updates), version, October 1.
    • Custom Key Mappings project viewer, version, June 30.
    • Copy / Paste viewer, version, December 9, 2019.
    • Project Muscadine (Animesh follow-on) project viewer, version, November 22, 2019.
    • Legacy Profiles viewer, version, September 17, 2019. Covers the re-integration of Viewer Profiles.
    • 360 Snapshot project viewer, version, July 16, 2019.

In Brief

  • Group chat issues are being worked on, but the work is pending the uplift.
  • The long-promised increase in sound loop support from 10 seconds to 30 seconds is … pending the uplift work – although it is rumoured (and subject to confirmation) that it may be a Premium Plus benefit.

2 thoughts on “2020 SUG meeting week #43: further uplift update”

  1. I find it kind of amusing to hear terms like “Uplift” and “The Cloud”. As an IT professional the more common term is “Migrate”. Also, “Cloud Storage” means storage that is not on your PC, that you can access from anywhere. SL was always on “the cloud”, albeit served from the LL data center at Level3.

    I was interested to see how the sim border crossings would be between the Amazon AWS servers. We have seen crossings between AWS and LL servers be a little bumpy, but up to now it’s not common to find AWS sims next to each other. I found two in the Blake Sea, and report this:

    Some sims of Blake Sea have now been migrated to Amazon AWS servers. Not all, and not many side by side. Blake Sea – Cattewater and Blake Sea – Haggerty have both been moved to Amazon. Here are a couple of Gyazos to show sim crossing from S to N and then N to S: https://gyazo.com/a0100741b82f3de0528de3bcec857c10 and https://gyazo.com/dc3e43a2b2416b753478930054ceb9a5 Pretty smooth in my opinion.

    I just hope that the choice of hosting the Amazon AWS servers in Oregon (AWS us-west-2 Region) is the best idea. When I was testing AWS for an international deployment a few years ago, we found the us-east-1 Region in North Virginia to be better for US and European clients over a USA West choice.


    1. I generally refer to the process as the “migration” or “transitioning” – but Uplift is the project name the Lab uses for the process, so that tends to get used in headings, etc., as that’s what people recognise 🙂 .

      Thanks for the notes on current Blake Sea regions now running on AWS. Extensive AWS / AWS testing was carried out on Aditi (which helped with improvements to the region crossing code as a whole), and the majority of people testing AWS / LL co-lo on Agni have also tended to report good results.

