HTTP and Group Services updates

There are a number of projects underway at the moment to improve various aspects of Second Life performance. Some of these have been reported on as a part of the Shining Project, while others are being dealt with elsewhere and are reported on through the likes of the SL Scripting User Group and the fortnightly TPV/Developer meetings.

The following is by way of a brief update on the ongoing HTTP Library and Group Management projects with information taken from the most recent TPV/Developer meeting (recording link).

HTTP Library

The focus of this aspect of the Shining Project is to improve the underpinning HTTP messaging that is crucial to simulator-to-simulator and simulator-to-viewer communications; the work is under the management of Monty Linden.

Discussion on progress with the project commences at 36:36 into the recording.

The project code (textures only) is with the Linden Lab QA team and is expected to be in the 3.4.1 viewer once it has been released by QA. In the meantime, the HTTP project viewer was updated at the end of July. Many people are noticing improvements in viewer performance that go beyond initial texture loading, although there have been reports of other aspects of the viewer which use HTTP apparently being “slower” to use. This latter issue is most likely a false impression, with Monty commenting at the August 24th meeting that, “Most parts shouldn’t be affected. It’s competitive, when you’re doing both texture downloading and some of that work … but other things aren’t being cheated if you’re not downloading textures at the time.”

An issue has been noted on older MacBook Pro systems (late 2007 into 2008 dual-core models, although the span of affected hardware isn’t clear) using nVidia drivers, wherein the speed-up with cached data seen on other systems isn’t occurring. Monty is still investigating this. Overall, however, feedback on this project has been positive.

Group Management Functions

Large group loading: a familiar problem

Baker Linden has been working to resolve this problem, and his plan is also to go the HTTP route, which will require changes on both the server and the viewer sides of the equation. His comments on progress start at 42:53 into the TPV/Developer meeting recording.

The server-side code for an initial implementation of the solution has been passed to LL’s QA and is expected to be rolled to selected regions on the Beta (Aditi) grid soon.

In terms of the viewer, the plan is to develop a Project Viewer, which will be made available in the near future for people to use with the Aditi test regions. How soon this viewer is likely to appear is open to question – the code will initially need to be passed by LL’s QA (who may have received it on August 24th) before the viewer is built. Once in the project viewer repository, the code will also be available for TPVs to produce test viewers of their own.

How long the testing period will last is also open to question and dependent upon feedback / issues arising. However, the plan will be to follow the usual pattern for roll-outs, in that once the code has been tested on Aditi and any necessary updates made, it will be rolled to a main grid RC for more involved testing. This is important, as there is a significant difference in the number and sizes of groups operating on the two grids. For example, the largest group on Aditi numbers some 40,000 members; on the main grid the largest group is about 112K, and there are many more groups with between 40K and 112K members.

One thing that has been made clear is that there will be no attempt at backward compatibility with V1-based viewers on the Lab’s part; the new code will be aimed solely at the V3 code base. However, V1-based viewers will still be able to use the UDP protocols for group management, although the LL servers will limit UDP access to groups with 10K members or fewer, so V1-based TPVs will have some more code backporting on their hands.

There will also initially be some issues around the new HTTP protocol. For example, in the first implementation the data will be uncompressed. This means the member list for a 40K-member group is around 5 MB in size, which can take up to a few minutes to download, depending on someone’s connection speed, so some frustrations are liable to continue. While data compression will eventually be used, it is not planned for the initial implementation.
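To put those figures into context, here is a rough back-of-the-envelope sketch (in Python). The 40,000-member and 5 MB values are those given above; the connection speeds are purely illustrative assumptions:

```python
# Rough estimate of uncompressed group list download times.
# The 40,000-member / 5 MB figures are from the meeting notes; the
# line speeds below are illustrative assumptions, not SL measurements.

GROUP_MEMBERS = 40_000
PAYLOAD_BYTES = 5 * 1024 * 1024               # ~5 MB uncompressed roster

print(f"~{PAYLOAD_BYTES / GROUP_MEMBERS:.0f} bytes per member")  # ~131

for mbps in (0.5, 1, 5, 20):                  # example connection speeds
    seconds = (PAYLOAD_BYTES * 8) / (mbps * 1_000_000)
    print(f"{mbps:>4} Mbps -> ~{seconds:.0f} s")
```

At sub-megabit speeds the transfer does indeed run into minutes, which is presumably where the “few minutes” figure comes from, and why compression remains on the longer-term list.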

The discussion also touched on providing an option to routinely clear down group lists based on members’ last log-in dates – that is, removing anyone who has not logged in for a (group owner specified) number of days. However, LL are not going to implement such a feature, on the grounds that it could lead to mistakes being made and people being accidentally removed from a group.

Time Scale and Implementation

As mentioned above, there is no definitive time scale for this work to be completed. Testing is liable to take several weeks at the very least, so it is unlikely the new group management capabilities will be rolled out on a widespread basis for at least another month, or possibly longer.

However, and as with the upcoming new avatar bake service, once the server code is available on the grid, the switch-over will be transparent. If a viewer has the code to use the new group management HTTP service, it will do so; if it has not been updated, it will continue to use the UDP service (with the aforementioned 10K “cap”) until such time as that capability is “retired” from the grid.
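By way of a rough, viewer-side illustration of that logic, here is a minimal sketch in Python. The “GroupMemberData” capability name is purely a placeholder invented for the sketch; only the 10K UDP cap comes from the meeting:

```python
# Illustrative sketch of the transparent switch-over described above.
# "GroupMemberData" is a placeholder capability name for this sketch;
# the 10,000-member UDP cap is the limit mentioned at the meeting.

UDP_MEMBER_CAP = 10_000

def pick_group_member_transport(region_caps: dict, member_count: int) -> str:
    """Return the transport a viewer would use to fetch a group roster."""
    if "GroupMemberData" in region_caps:   # viewer + region both have the new code
        return "http"                      # full roster requested over HTTP
    if member_count <= UDP_MEMBER_CAP:
        return "udp"                       # legacy UDP messages still served
    return "unavailable"                   # too large for UDP, no HTTP support

# A 40K-member group: only a viewer with the new code gets the full list.
print(pick_group_member_transport({}, 40_000))                                        # unavailable
print(pick_group_member_transport({"GroupMemberData": "https://sim.example/cap"}, 40_000))  # http
```

Nothing has to be toggled by the user; the presence or absence of the new code determines which path is taken.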

5 thoughts on “HTTP and Group Services updates”

  1. Compressed or not, it’s unclear to me why the viewer and API can’t implement paging of the data based on the viewport. It seems slightly ludicrous to download a 5 MB group roster when, on the majority of occasions, it probably won’t be scrolled through anyway.

  2. Paging sounds good, indeed. I never liked the fact that I needed to wait for my (frozen) viewer to finish loading the (thousands of) group members to check a notice or check if it’s join-open…

  3. Compression is also good if the person’s CPU is responsible for uncompressing – less strain on LL servers too, and less to download 🙂

    1. Compression is a tricky one. For a start, unless you store the compressed data, the LL Server is going to have to compress every download. But if it is stored compressed, every edit of the list has an overhead.

      On the other hand, compressing anew might not be as great a penalty as sending the data uncompressed. Though what makes sense for a lone computer might not look so good for a server. Are there the spare clock cycles?

      Of course, it used to be that modems had built-in data compression (V.42bis is the label I recall), but that is almost dead now, gone away with dial-up. I can see how some sort of streaming compression of that sort might pay off. I’m not very confident that the Lindens could implement it.
