Server Deployments, Week 35 Recap
As always, please refer to the server deployment thread on the forums for the latest updates and information.
- On Tuesday August 26th, the Main channel was updated with the server maintenance release previously deployed to all three RC channels in week 34, which contains a single crash fix.
- On Thursday August 28th, after a delay from the planned deployment, the three RC channels were all updated with the same server maintenance package. This contains further crash mode fixes, together with fixes for SVC-2262 – “Incorrect height value in postcard which sent from above 256m” (a postcard being a snapshot sent to e-mail) and BUG-6466 – “Numbers expressed in scientific notation and include a plus sign in the exponent are not parsed as JSON numbers by LSL”. The latter was thought to have been fixed a while ago, but that fix resulted in BUG-6657 – “Valid JSON numbers like 0e0 no longer valid after 14.06.26.291532”, prompting it to be rolled back.
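For context on the BUG-6466 / BUG-6657 pair: the JSON grammar allows a number to carry an optional exponent with an optional sign, so values such as 0e0 and 1e+5 are both valid JSON numbers, and a conforming parser must accept both forms. A minimal sketch (in Python rather than LSL, purely to illustrate what spec-compliant number parsing looks like, using Python's standard json module):

```python
import json

# Per the JSON grammar (RFC 8259), a number may include an exponent
# with an optional sign. All three of these are valid JSON numbers,
# including the forms at issue in BUG-6466 ("1e+5") and BUG-6657 ("0e0").
for text in ("0e0", "1e+5", "2.5e-3"):
    value = json.loads(text)  # a compliant parser accepts every one
    print(f"{text!r} -> {value!r}")
```

A fix that rejects plus-signed exponents, or one that accidentally rejects 0e0, would both be deviations from the grammar, which is why the earlier LSL fix had to be rolled back.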
The crash mode fix deployed on the RC channels in week 34 and the Main channel in week 35 is apparently related to Skill Gaming regions (SEC-1458). Essentially, if you sent somebody a teleport offer from a Skill Gaming region, you could, depending on the circumstances, crash the region. Which sort-of sounds like a skill game in its own right …
The RC rolls were postponed 24 hours as a result of problems with the simulator deployment tool, rather than with the RC code itself. The postponement allowed the deployment team to locate and fix the problem.
CDN and Map Tile Images
During the Server Beta meeting on Thursday August 28th, Maestro revealed that as well as looking to move mesh and texture fetching to a CDN, the Lab is considering using the same service to deliver map tile images for the world map. To test the idea – while acknowledging it doesn’t amount to much of a load test, as the beta grid is much smaller than the main grid – map tiles are now being delivered via the CDN to any viewers connected to Aditi.
Hopefully, this should result in faster and more reliable map tile loading than when relying on the Amazon S3 servers.
Commenting on the current state of play with Aditi, Maestro said, “Aditi doesn’t have too many regions up, so there’s no exciting load test to do with a regular viewer, but for what it’s worth, I do see much lower latency when fetching map tiles from the CDN than from Amazon S3 (from my home, about 15ms vs 30-42ms, round trip).”
It’s not clear when this might be available on Agni, “I think it’s a matter of the ops team setting up an Agni-pointing CDN,” Maestro said.
The following notes are drawn from the TPV Developer meeting held on Friday August 29th, and shown in the video above. Time stamps, where relevant, have been included for ease of reference to the video. Note that items are listed according to subject matter, rather than chronologically, so time stamps may appear out-of-sequence. My thanks as always to North for the recording.
There are no major changes to the various LL viewers; the Experience Keys project viewer has yet to be updated, and the Oculus Rift viewer will be getting updated as the Lab continues to work with their Oculus DK2 sets.
The Snowstorm RC is liable to remain in RC for a while yet, although it appears to be doing well in terms of crash rates. The Experimental log-in viewer is still available and being tested.
[01:48] As reported following the last TPV Developer’s meeting, the Lab is updating the tool chain used to compile the Windows and Mac versions of the viewer, together with implementing a new viewer autobuild process. Referring to the latter at the TPV Developer meeting on Friday August 29th, Oz described the work as going “really well” and pointed to the fact that there are now two versions of the autobuild process that are, “in theory”, available for testing, “one that builds 64-bit, and one that does not.” These are liable to be merged in the near future.
Some TPV devs have reported a few problems with the process, which may be library or environment related, but these still need to be investigated further to determine what the problems are and where they actually reside.
[03:50] There is some further work to be done on the new process, notably in making the handling of licenses and copyrights a lot stricter, but otherwise it is viewed as being very close to being ready to go. Oz hopes to have the latest bug fixes and updates merged into the process in week 36, and once that is done, the new autobuild process will be used in compiling the Lab’s viewers going forward as a part of the overall tool chain update.
[18:52] As a part of the tool chain update, the Lab will be shifting to using a newer version of Kakadu (KDU). Firestorm is already using KDU 7.4 without any issues being encountered.
[04:39] Alongside the tool chain updates, the Lab plans to release some additional wiki pages detailing working with the new tools – such as compiling with Visual Studio 2013, etc.
[06:30] As noted above, autobuild will have a 64-bit build capability within it, although Oz indicated that whether or not the Lab will use that capability is open to question. Theoretically, it could be argued that the Linux viewer could be made 64-bit only; however, Linux isn’t a part of the current tool chain update. Within the Mac environment, there are still some Mac mini systems which can only run in 32-bit, so Mac builds would be unlikely to go to 64-bit even if the Lab wanted to move in that direction.
[10:28] With 64-bit builds, Monty Linden pointed to a further potential issue (via chat) in that some media plugins may be 32-bit only.
Compiling in Linux
[05:12] The Lab has been compiling the Linux version of their viewer on a later version of the GNU Compiler Collection (GCC), etc. The Lab is currently hoping to make the Debian packages – with the required source code and details on how the packages were built – available to TPVs. Some further auditing needs to be done before this happens, but Oz doesn’t believe there is anything which should prevent it from being the case.
[16:20] The Lab is continuing to work on Group chat, with Oz noting, “The systems we needed to update for our next round of experiments were getting updated for a different purpose by a different project. So we’re sort-of waiting around to make sure that that went smoothly before we do our next deploy. We try not to re-deploy the same systems too frequently. So the next round of experiments is hanging fire, waiting around for that set of updates to get the green light.”
The Lab is also watching the situation relating to group chat server failures – where the chat server hosting group chats with a specific letter at the start of their ID (e.g. beginning with “b” or “d”, etc.) stops responding. They believe they understand why it is happening (although reports in the forums seem to have decreased), but there is no time scale yet on when a fix will be available. In the meantime, there is a support process in place for getting a server restarted, so those experiencing the problem are encouraged to contact LL’s support team and explain that group chat messages are failing; this should result in the server being restarted.