Updated, September 25th: As indicated to me by Grumpity Linden, the issue which caused the Wolfpack and Maintenance RCs to be withdrawn (as noted in this article) has now been resolved, and the two updated versions of these viewers are once again available. As a result, the links to their release notes and download options have been restored.
The majority of the notes in this update are taken from the TPV Developer meeting held on Friday, September 22nd 2017. The video of that meeting is embedded at the end of this update, my thanks as always to North for recording and providing it. Timestamps in the text below will open the video in a separate window at the relevant point for those wishing to listen to the discussions.
Server Deployments Week #38 – Recap
- There was no deployment / restart on the Main (SLS) channel on Tuesday, September 19th, leaving that channel running on 17#17.09.01.508236.
- On Wednesday, September 20th, the RC channels were updated as follows:
- BlueSteel and LeTigre received a new server maintenance package, 17#17.09.14.508549, comprising improvements to address some problems that could degrade simulator performance in rare cases.
- Magnum received a new server maintenance package, 17#17.09.14.508533, containing a fix for BUG-100505 “llGetEnv (“agent_limit”) is returning an empty string in Magnum, LeTigre and Blue Steel regions.”
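For context on the BUG-100505 fix: llGetEnv is the LSL function scripts use to query simulator environment values, and on the affected RC channels the "agent_limit" query was coming back empty rather than as a number. A minimal sketch of the kind of call involved (the script below is my own illustration, not taken from the bug report):

```lsl
default
{
    state_entry()
    {
        // Ask the simulator for the region's maximum avatar capacity.
        // llGetEnv returns its result as a string, e.g. "40".
        // On the affected RC regions, this was returning "" instead.
        string limit = llGetEnv("agent_limit");
        llOwnerSay("Region agent limit: " + limit);
    }
}
```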
Alex Ivy 64-Bit
[0:54 and 6:00] The Alex Ivy 64-bit viewer is due an update, possibly in the early part of week #39 (commencing Monday, September 25th). This may not have all the fixes required for the viewer to get promoted to de facto release status. Before this happens, the Lab wants to tackle the problem with pipeline stalls in this viewer, and are working on an experimental branch of the viewer to try to resolve the issue. This branch will be made available as a test viewer to those who have reported the issue and can reliably repro it. Depending on the outcome of this testing, a decision will be made on folding it into the RC branch for the viewer.
The wiki instructions for the viewer should now be updated to the 64-bit build requirements, and Oz indicates that a new 64-bit Havok library should follow the release of the viewer.
Voice Viewer Update
[1:33, 2:13, and 34:02-37:14] A new Voice SDK will be arriving for the Voice RC viewer in the near future, which includes a fix for some long-standing problems. There are still some problems to be fixed, so it is unlikely this viewer will be promoted until the new SDK has spent time in RC and the remaining major issues have been resolved.
This viewer already fixes the high number of failures to connect to the Voice service when logging-in; however, there is an issue whereby if the Voice process is manually killed, it will not restart (as it used to), and so Voice won’t work. The Lab would like to fix this so the process does restart, but this is not seen as a critical issue to be resolved before the viewer is promoted.
The new SDK does not alter the Voice protocols, but is not compatible with previous versions, requiring the supporting updates in the viewer to work. This means the new SDK cannot work with older viewer versions, and older SDKs cannot be used with viewers incorporating the code updates to support this new SDK.
Maintenance and Wolfpack RCs
[2:04 and 4:15] The meeting references updates to the Maintenance RC viewer (to 184.108.40.2069115) and the Wolfpack RC (to 220.127.116.11128).
While both updates were available at the time of the meeting and shortly thereafter, the Alternate Viewers wiki page now references the previous RC releases for both (18.104.22.1689065 and 22.214.171.1248990 respectively). It is not clear whether this is an error with the wiki page, or if the updated RCs have been withdrawn (both still appear on the viewer release notes list). Resolved.
360 Snapshot Viewer
[5:14] There may be a new update to the 360 snapshot viewer in the next week to two weeks. Work has also started on providing better support for using 360-degree images in Second Life Place Pages (see here and here for more on Place Pages).
Keeping the above in mind, the current viewer pipeline comprises:
- Current Release version 126.96.36.1998060, dated August 9th, promoted August 23rd – formerly the Maintenance RC
- Release channel cohorts:
- Maintenance RC viewer, version 188.8.131.529065, dated September 18th.
- Wolfpack RC viewer, version 184.108.40.2068990, dated September 12th – this viewer is functionally identical to the release viewer, but includes additional back-end logging “to help catch some squirrelly issues”
- Alex Ivy 64-bit viewer, version 220.127.116.118209, dated September 5th
- Voice RC viewer, version 18.104.22.1688552, dated September 1st
- Project viewers:
- Obsolete platform viewer version 22.214.171.1240847, dated May 8, 2015 – provided for users on Windows XP and OS X versions below 10.7.
New Viewer Splash / Log-in Screen
[7:12-7:42] As noted in my week #36 TPV meeting notes, Phronimos Linden is updating the viewer splash screen, giving it a different look and feel, including how information is displayed (such as making grid status info more prominent) and updates to some of the widgets providing information on the screen. This work is now with the QA team, and information on the updates will be available for TPVs soon.
Windows Viewer Installation Warning
[7:47-8:48] The Lab’s code-signing key used to verify the viewer with Windows (notably Windows 10) has expired. The Lab have a new key but, in the interim, users installing the Windows version of the official viewer may find Windows SmartScreen reports the viewer as unverified.
Server Version Updates and Move to the Cloud
[12:13-12:53] A number of server version updates are advancing. These don’t always have user-visible changes, but they are nevertheless important to Second Life. Among other things, they are part of the preparatory work for moving SL capabilities to the cloud (see my week #36 TPV notes for more on this).
[15:08-19:06] There is no timeline for moving things to the cloud, simply because the Lab does not know at this point how long it will take. There are some significant changes which must be made to both the way things are built and the way they are run, and there need to be assorted updates to various components that go into building and running SL services.
Some SL services are already being tested in the cloud, and some are performing well – such as the process for determining if a user requires a viewer update. Others have been tested and revealed problems which must be addressed if they are to be run from the cloud – or should be addressed even when not running in the cloud.
It is unlikely the Lab will provide specifics on which services have been moved to / are being tested in the cloud, and which are still based within their data centre, until things reach a point where simulators are running in the cloud; where many SL services run makes absolutely no difference to the user experience, so long as they are running. Moving and testing simulators in the cloud is likely to be one of the last things to be tackled, simply because of the complexities involved.
The first goal is to get everything working pretty much “as is” from the cloud. Only after this has been done will work start on exploring and exploiting the benefits of having everything in the cloud.
[19:06-21:49] This could include giving land owners – to use Oz’s words – the option of having their regions hosted in specific geographical locations. So, for example, the various communities located in South America could have their regions all hosted in South America, potentially improving response times between viewer and server. However, whether this will in fact be possible is dependent on the Lab reaching that point at which they can start leveraging the benefits of the cloud.
Obviously there are trade-offs in this kind of shift, should it occur; relocating a simulator to better serve a community may not improve things for others accessing the region on that simulator. However, in potentially supplying the option, the Lab is providing land owners with a choice of what they would like to do.
[21:56-22:38] If nothing else, this work should be a demonstration that the Lab really is continuing to invest in Second Life and its future. Were they seriously thinking of letting it go (i.e. in favour of Sansar), then none of this work – and the associated expenditure – would be taking place.
Environment Enhancement Project (EEP)
(See also my week #38 CCUG update.)
[29:26-31:16] There is a “fair amount” of back-end work that is being worked through, and the work is approaching the point of internal testing within the Lab. Once this has reached a suitable point, the server-side / simulator changes will be deployed (e.g. to Aditi) for wider testing, alongside a project viewer to handle the client-side application of the capabilities.
Recent Grid Issues
[37:51-39:20] As most are aware, there have been some recent grid issues. While not the cause of these issues, a contributing factor to their duration has been some low-level code within the viewer which handles log-in retries far too aggressively. When this happens en masse (such as when there is a grid issue), it results in the log-in servers being swamped, adding to the woes for people trying to log-in.
A recent Maintenance update to the SL viewer addresses this issue (see my week #30 TPV update), and the request for TPVs to pick these code changes up was reiterated at the meeting. In addition, the log-in servers have themselves been made more robust when facing large numbers of repeated log-in attempts.
Estate Tool Ban List Improvements
[9:32-11:22] The Lab has resumed work on the region ban lists (layout / usability, etc), and the updates should be appearing soon™. The specifics of what is being done will hopefully be available for the next TPV Developer meeting.
Premium Member Benefits
[13:13-14:53] There is apparently at least one Premium member benefit that will be appearing real soon™ which the Lab believe people will like, and some further ideas are being considered. Oz declined to comment on what any of these might be, citing it being more fun to find out when they are announced. He also indicated that appropriate and considered suggestions / ideas for benefits (e.g. not things that persist after a Premium subscription has been cancelled) are also welcome.
Group Notice Failures
[28:00-28:55] Still no work on group notices (on-line and off-line) sometimes not getting through for some people. It’s not on the “now / next” roadmap of things the Lab is / will be looking at. The focus of server-side work is on dealing with instability issues which can cause crashes / offer exploits to griefers.
Asset HTTP Messaging and Asset HTTP Issues
[41:14] As noted in my week #36 TPV meeting update, the recent Asset HTTP updates are leading to the texture pipeline getting out of sync, and people experiencing texture load stalls. A JIRA for this has been filed (BUG-139123), and a possible fix has been submitted to the Lab by Sovereign Engineer.
[43:16] The Lab is also working on the texture caches in an attempt to make them faster and more effective.