Update, April 1st: Vir Linden’s comments on this viewer, offered at the Content Creation User Group meeting, are appended to the end of this article in an audio file.
Some of my recent SL project updates have mentioned that the Lab is working to move the remaining asset fetching away from UDP via the simulator and over to HTTP (avatar baking information, mesh and texture data have been delivered to users via HTTP for the last several years).
This work involves changes to both the simulator and the viewer, both of which have been subject to testing on Aditi, the beta grid, for the last few weeks.
However, on Thursday, March 30th, the Lab effectively marked the start of testing on Agni, the main grid, with the release of the AssetHttp project viewer, version 126.96.36.1994828.
With this viewer, the remaining asset classes used in Second Life – landmarks, wearables (system layer clothing and body parts), sounds and animations – are delivered to users the same way as textures, mesh and avatar baking information: via HTTP over a Content Delivery Network (CDN) rather than through the simulator. This should generally make loading of such content both faster and more reliable.
Hang On! What’s this CDN Thing?
If you’ve followed the HTTP / CDN project, you can skip this part 🙂 .
To keep things extremely brief and simple: a Content Delivery Network is a globally distributed network of servers which can be used to store SL asset information. This means that when you need an asset – say a sound or animation – rather than having to go via UDP to the simulator, then to LL’s asset service, back to the simulator and finally back to you (again via UDP), the asset is fetched over HTTP from whichever CDN node is closest to you. This should make things faster and smoother, particularly if you are a non-US based user.
There are some caveats around this – one being, for example, that if you’re requesting asset information not stored on the local CDN node, it still has to be fetched from the Lab’s services for delivery to you, where it can be cached by your viewer.
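That caveat is the behaviour of a pull-through cache. Here is a minimal Python sketch of the idea; the class and names are purely illustrative and bear no relation to the Lab’s actual infrastructure. The first request for an asset misses the edge node and is filled from the origin (standing in for the Lab’s asset service); later requests for the same asset are served directly from the edge.

```python
# Illustrative pull-through edge cache sketch. "EdgeNode" and
# "origin" are invented names, not Linden Lab code.

class EdgeNode:
    def __init__(self, origin):
        self.origin = origin   # callable fetching from the central asset service
        self.cache = {}        # asset_id -> data held at this edge node

    def get(self, asset_id):
        # Cache hit: serve directly from the edge, no trip to the origin.
        if asset_id in self.cache:
            return self.cache[asset_id], "edge"
        # Cache miss: pull the asset from the origin, then keep a copy
        # so the next nearby request is served from the edge.
        data = self.origin(asset_id)
        self.cache[asset_id] = data
        return data, "origin"
```

Thus what sits on any given edge node is driven entirely by what users in that part of the world have recently requested.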
As noted above, the Lab started using CDN providers when they introduced the avatar baking service (called server-side baking) in 2013, and extended the use to the delivery of mesh and texture assets as part of a massive overhaul of Second Life’s communications and asset handling protocols spearheaded by Monty Linden (see my HTTP updates). Moving the remaining asset types to HTTP / CDN delivery effectively completes that work.
OK, So, What’s Next?
Right now, this is only a project viewer, and the Lab are looking to have people try it out and test fetching and loading of landmarks, wearables (system layer clothing and body parts), sounds and animations, so they can examine performance, locate potential issues etc.
However, the code will be progressing through project status to release candidate and ultimately to release status over the next few weeks / months (depending on whether any significant issues show up). Once this happens, TPVs will be given a period of time to integrate the code as well, after which all support for UDP asset fetching will be removed from both the viewer code and the simulators.
A rough time frame for this latter work is around late summer 2017. When it happens, it will mean that anyone using a viewer that does not have the updated HTTP code for asset handling isn’t going to be able to obtain any new or updated asset data from the Second Life service.
The following notes are primarily taken from the TPV Developer (TPVD) meeting held on Friday, November 6th 2015. A video of the meeting is included at the end of this report, and time stamps to it are provided. My thanks as always to North for the video recording and providing it for embedding.
Server Deployments Week 45 – Recap
On Tuesday, November 3rd, the Main (SLS) channel received the server maintenance package previously deployed to BlueSteel and LeTigre, comprising a simulator crash fix.
On Wednesday, November 4th, all three RC channels received a new server maintenance package comprising a fix for group invite throttle notifications, and an internal server code clean-up.
Following the main channel deployment, issues were variously reported with llHTTPRequest returning NULL_KEY even when it should not be throttled (see BUG-10627). The issue was initially noted with skill gaming mechanisms, but has also been encountered by those using Sculpt Studio, and reportedly with breedable systems.
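While the bug is investigated, the usual defensive pattern for scripters is to treat a NULL_KEY return as “throttled” and retry after a growing delay. Below is a hedged sketch of that retry-with-backoff pattern in Python rather than LSL; it is not the simulator’s actual throttle logic, and all names are illustrative.

```python
# Illustrative retry-with-backoff pattern for a request call that
# returns a null handle when throttled. Not actual LSL or LL code.
import time

NULL_KEY = None  # stand-in for LSL's NULL_KEY

def request_with_backoff(send, max_tries=4, base_delay=0.01):
    """send() returns a request handle, or NULL_KEY when throttled."""
    for attempt in range(max_tries):
        handle = send()
        if handle is not NULL_KEY:
            return handle
        # Back off exponentially before retrying a throttled request.
        time.sleep(base_delay * (2 ** attempt))
    return NULL_KEY
```

A backoff like this avoids hammering an already-throttled endpoint, although it obviously cannot help where, as reported in BUG-10627, the throttle is firing when it should not.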
SL Viewer Updates
[0:20] A further Maintenance RC viewer is due to be released in week #46, which includes a range of fixes, including some for the regressions present within the current de facto release viewer (version 188.8.131.525981).
It currently appears that the next viewer promotion to release status will be either this maintenance viewer or the HTTP viewer – however, the promotion is unlikely to occur in week #46, due to the Maintenance RC only just having been released and because the HTTP viewer has a number of issues to be resolved – see below.
Quick Graphics RC Viewer
[0:35] There are still issues with this viewer which, although described as “nothing terrible” by the Lab, are sufficient to prevent its promotion in the immediate future.
HTTP (Project Azumarill) RC Viewer
[1:04] There are a number of issues, both identified by the Lab and reported by users, which are sufficient to block the promotion of this viewer to release status. One of these is an increased number of viewer disconnects on teleporting. The cause of this is proving elusive, as it seems to occur only for some people, with others never encountering issues. Other significant reported issues include:
BUG-10230 – Voice often fails to connect on the Azumarill viewer
BUG-10391 – Avatar bakes often fail on Azumarill.
CEF (Project Valhalla) Project Viewer
[1:19] The Chromium Embedded Framework viewer is performing well on Windows in the Lab’s estimation, and is getting “really close” on the Mac. The hope is that this viewer will progress from project status to a release candidate “pretty soon”.
[16:40] There have been some reports of issues with this viewer, including BUG-10598 (being logged out when teleporting from one landmark to another), which is likely related to the disconnect issues being experienced with the HTTP viewer (the two viewers are closely related).
[17:25] There has been a report of “major” HTML video performance issues (see BUG-10558). However, running 30+ YouTube videos via HTML 5 in a single location is viewed by the Lab as possibly excessive and, as noted on the JIRA, the problem hasn’t been easy to reproduce and the video footage supplied might suggest a problem other than simply with running multiple HTML 5 videos.
Vivox Project Viewer
[1:44] A new project viewer, version 184.108.40.2067189, was released on Friday, November 6th. The Vivox project viewer should correct a number of Voice quality and connection issues on both Windows and the Mac.
The Lab requests that anyone who has experienced Voice issues try this viewer, and if it does not resolve their issues, raise a JIRA, being sure to cite this viewer’s version number in the report. Those failing to indicate they have tried the project viewer are liable to be asked to do so, simply because Vivox has asked the Lab not to forward bug reports to them unless they have been tested against the Vivox package included in this viewer. For a list of fixes, please refer to the release notes in the above link.
Simulator Behaviour Changes
Attachment Point Validation
[30:15] As noted in the last TPVD meeting in October, the Lab are shifting a number of validation checks from the viewer to the simulator. One of these is attachment point validation, which will mean that attachments attempting to attach themselves to an invalid attachment point ID will be attached to the chest by the simulator.
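A minimal Python sketch of the fallback behaviour described; the point IDs and the chest constant here are illustrative stand-ins, not the simulator’s actual tables.

```python
# Illustrative simulator-side fallback: an invalid attachment point ID
# is re-pointed at the chest. IDs here are invented for the sketch.

ATTACH_CHEST = 1
VALID_ATTACHMENT_POINTS = set(range(1, 39))  # illustrative range only

def resolve_attachment_point(point_id):
    """Return a usable attachment point, falling back to the chest."""
    if point_id in VALID_ATTACHMENT_POINTS:
        return point_id
    return ATTACH_CHEST
```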
Mesh Upload Validation Checks
[31:15] A further simulator-side check the Lab has been considering would prevent the upload of animations and meshes weighted to attachment points. The Lab have been discussing this in the light of the feedback received in October (see also FIRE-17144 and BUG-10543), and have decided to investigate options further rather than implementing any immediate changes. However, they do note that there is no guarantee the ability to upload meshes weighted in this way will continue in the future.
[32:05] The Lab has made it clear that what will be implemented in the near future is a simulator validation check to prevent the upload of animation or mesh items weighted to use joints / attachment points which are not a part of the recognised set of avatar joints.
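In essence, such a check amounts to rejecting any upload referencing a joint name outside a whitelist. A hedged Python sketch of the idea, using a tiny illustrative subset of joint names rather than the full SL skeleton:

```python
# Illustrative joint-name validation. The joint set below is a small
# sample for the sketch, not the complete recognised avatar skeleton.

RECOGNISED_JOINTS = {"mPelvis", "mTorso", "mChest", "mNeck", "mHead"}

def invalid_joints(weighted_joints):
    """Return the joint names that would cause an upload to be rejected."""
    return sorted(set(weighted_joints) - RECOGNISED_JOINTS)
```

An upload would pass only when `invalid_joints(...)` comes back empty.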
[22:20] An issue has emerged around the upcoming gateway trial programme which is loosely connected to the avatar updates mentioned above. The new avatars (mesh and “Classic”) are only available via the Lab’s “new” registration API. Currently, the API presented to gateways is the older version, which relies on the default Character Test avatar.
As noted at the TPVD meeting, this could be immediately off-putting to new users entering Second Life through the new trial gateways. However, the Lab appears to be “pretty close” to being able to switch the gateway API to using the newer set-up – and this may be one of the reasons the trial gateway programme hasn’t as yet been officially announced.
Not directly related to news from the Lab, but the next release of the Firestorm viewer should be around Monday, November 16th or Tuesday, November 17th. However, there are still dependencies on this, and these are not hard-set dates.
On Tuesday, October 20th, Linden Lab issued a blog post highlighting upcoming technical improvements to the service, particularly viewer-side updates, which will shortly be making an appearance, or which are already available in the form of release candidate or project viewers.
Regular readers of this blog will hopefully already be aware of most of the improvements mentioned by the Lab’s post, as I’ve covered them through various updates and focus articles in these pages.
The first to get a mention is the Project Valhalla viewer, which introduces Chromium Embedded Framework functionality to Second Life to replace the ageing LLQTwebkit functionality, and which I initially previewed here. Commenting on this viewer, the Lab blog post states:
A new age of modern HTML5 content is upon us, and we’re overhauling the way shared media (aka “media on a prim”) works so that you can enjoy all kinds of modern web content within Second Life. Chromium Embedded Framework (CEF) is coming to replace the ageing QTWebkit framework. What you can now see in a Project Viewer is the latest released version of Chrome – so it will render all modern web technologies – like HTML5, CSS3, WebGL; has the latest security patches; and will be easy to keep updated to a recent version. What does this mean for your Flash content? What about QuickTime? They may still work, but because both can only be viewed if the user has correctly installed a 3rd party plugin, we can’t promise support and you shouldn’t expect that it will work for everyone. Standard HTML5 is the way of the future and this Viewer will enable it for anyone. There are still bugs to squash, and we’re iterating quickly to bring you a smooth in-world media and browsing experience. If you have comments about this feature – please post to the forum thread about this topic – located here.
My own quick-and-dirty demo of using a viewer with the CEF code (in this case, the Black Dragon TPV, as I don’t have a video using the Valhalla viewer) to access WebGL content displayed both on prims in-world, and via the viewer’s built-in browser. The WebGL demonstrations are provided via David Walsh (with thanks to Whirly Fizzle for the link), and this video is intended to offer a simple overview of some of the capabilities, which as mentioned in the notes from the Lab are actually far more extensive.
The major take-away from the CEF introduction at this point is that if you make use of media within your products (e.g. TVs, etc.), or as a part of you in-world presence, now is the time to be looking to convert them to using HTML / WebGL, etc., testing them against the Valhalla project viewer, and reporting any issues / highlighting any concerns. The Lab is more like to address matters / consider changes while the viewer is will at project status, rather than when it reaches a release candidate status.
Also mentioned in the post are the new Notifications viewer and the Quick Graphics viewer, both of which are available as release candidate viewers. The former provides a new front-end for handling incoming notifications (which I previewed here). The latter provides both the new Avatar Complexity functionality (which replaces the old ARC / ADW) and the ability to create, save and quickly re-use sets of your own graphics presets for use in different environments and settings. Again, I’ve previewed both of these here and here.
Additional insight into Avatar Complexity is provided, both through the post and via a link to the Knowledge Base article on it. A request is also given that anyone who has general comments on the capability please post them to the related forum thread (comments posted to this blog may not be read by the Lab).
Mention is also made of the HTTP updates which have been undertaken by Rider Linden (Project Azumarill) and which are also available in a release candidate viewer. This project builds on the foundations laid by Monty Linden with his HTTP work, further enhancing the use of HTTP capabilities to provide more robust communications capabilities with the simulators and back-end services. Finally – at least in terms of available viewers – the updated mesh importer viewer, recently promoted to de facto release status, gets a mention.
Also referred to, although it has yet to appear in a project viewer, is a further inventory related project. This is being led by Aura Linden, and the aim is to improve the overall robustness of inventory handling, the work being carried out in two parts, as I was (again) able to preview recently, courtesy of a Third-Party Viewer Developer meeting:
The first part will see the removal of all of the old UDP inventory messaging paths used by the viewer which have already been replaced by more robust mechanisms, but which have until now remained a part of the viewer’s code – this aspect of the work should be appearing in a project viewer soon
The second part will comprise a refactoring of the viewer inventory files and functions with the overall aim of making the code more readable and easier to maintain.
As the Lab’s blog post indicates, this project further builds on the on-going work we’ve seen during the course of the last few months to improve inventory performance, reduce the number of inventory losses users may suffer, and provide assistance with inventory-related problems which affect things like logging-in to SL.
Finally, the Lab’s blog post also reveals that Flash has now been removed from the secondlife.com website, noting:
Due to the numerous recent security problems affecting Flash, it has been completely removed from our website. A modern way to animate starter avatars in the join carousel and some other exciting news for registration is coming out really soon. Keep an eye out!
This announcement again signifies that while the Shining Project may have ended, and things may have been a little quiet of late, things are still being progressed. As it stands, the notifications updates may well be promoted to the de facto release viewer in week #44 (week commencing Monday, October 26th). In the meantime, the links below will take you to the latest (at the time of writing) versions of the viewers mentioned here and in the Lab’s blog post. If you find any reproducible issues with any of them, please raise a JIRA, and remember to post any general comments you may have on the viewers to the official forum threads mentioned above.
The following notes are primarily taken from the TPV Developer meeting held on Friday, March 27th, a video of which is included towards the end of the article (my thanks as always to North for recording it and providing it for embedding), and from the Server Beta meeting held on Thursday, March 26th. Any time stamps contained within the following text refer to the TPV developer meeting video.
Server Deployments Week 13 – Recap
As always, please refer to the deployment thread in the forums for the latest updates / news.
On Tuesday, March 24th, the Main (SLS) channel received the server maintenance package deployed to the three RCs in week 12, comprising updates which allow the Lab to make various configuration changes without having to necessarily run a rolling restart when they have done so. It contains no actual functional changes to the simulator software.
On Wednesday, March 25th, the three RC channels received the same new server maintenance package, which is focused on inventory loss issues, and provides the Lab with better error detection and logging, improving their ability to look at some of the failure places, together with the removal of unused code. This update does not remove the server-side messaging used in support of RTLP.
SL Viewer Update
Avatar Layers Project Viewer
Vir Linden’s work on a new global limit for system layer clothing was released as a project viewer, version 220.127.116.119805. With this viewer, a user can wear any combination of clothing layers (wearables), up to a maximum of 60, rather than being limited (in general, and as with the official viewer) to a maximum of 5 items per layer type. Note that these changes do not apply to body part wearables (skin, shape, hair, eyes), for which the limit is still one of each, and do not affect attachments, for which the limit is still 38 total.
[07:18] There is already an update in the pipe for this viewer, which should be appearing next week.
Camera Positioning / Handling
[05:12] While there are no specific details as yet, the lab is hoping to put some work into improving camera positioning and handling in the not too distant future, in the hope of removing various glitches and issues.
Build Tools Viewer
[05:54] There have been a few fixes added to this viewer (currently version 18.104.22.1689443), so a further update to the release candidate version is with the Lab’s QA team and should be appearing in week #14 (week commencing Monday, March 30th).
Maintenance Release Viewer
[06:29] Currently at version 22.214.171.1249845, the latest Maintenance release viewer has a range of issues, many of which have hopefully been addressed with a series of fixes, so an update to that viewer is also with the Lab’s QA team. However, given the scope of the updates, it is proving a little harder to pass the QA process.
Experience Tools Viewer
[06:50] The Experience Keys / Tools viewer (currently version 126.96.36.1999338) is being merged-up with the latest release version of the viewer code (version 188.8.131.529635). The updated version should also be appearing (again as an RC) in week #14.
[17:27 – 19:50] There is an interesting discussion on the viewer code for anyone interested in how the viewer has developed over the years, and how much of it dates back some 14 years.
[00:00] There was a pile-on test of the new Viewer-Managed Marketplace capability on Aditi in week #12, and Brooke Linden was at the TPV Developer meeting to provide feedback. The pile-on test did not reveal any significant issues in terms of performance.
However, there is still a viewer / simulator / marketplace communications issue which has to be resolved, which may take another couple of weeks to fix. After that, there are two grid deployments which need to take place: one for the VMM code itself, and one for updates to the Advanced Inventory System (AIS), so it is unlikely VMM will be fully deployed within the next month or two, and the project viewer (currently version 184.108.40.2068865) is unlikely to progress through a release candidate to release status until after the server components have been deployed.
[07:32] Simon Linden has been continuing to work on the group chat code, and all of his current updates should have been deployed to the back-end group chat servers. The broad consensus is that the changes which recently caused issues have been reversed, and that the group chat service as a whole is now running a lot better, both in terms of the early performance improvements Simon made, and with regards to the overall stability of the service and the servers.
[08:24] There is a further round of updates in the planning, but these require a platform upgrade to be carried out for the group chat service first. Therefore, unless the latest set of updates deployed by the Lab start to show issues, the engineering team will be switching focus for the immediate future, and will return to working on group chat once the necessary upgrade work has been completed.
Experience Keys / Tools
[09:20] One of the items the engineering team want to focus on in particular is Experiences, and getting the remaining back-end issues sorted out so that Experiences can be properly deployed.
[09:59] There will be a further round of voice updates which are expected to appear in a project viewer “shortly”. They include (but are not limited to) things like general code clean-up to prevent unnecessary list loading, removal of media messaging in person-to-person calls (which has never worked), fixes for issues related to microphone volume and improvements to the microphone test so that you can now hear yourself when testing your microphone, and improvements for hot swapping microphones / headsets.
[13:58] There is some confusion over whether or not a fix to voice designed to prevent someone’s voice channel being “left behind” when teleporting between regions has actually worked. It had been thought that the fix for this had been deployed in late 2014. However, bug reports are still being filed (see BUG-8543 and STORM-2109), prompting the Lab to re-examine the status of the fix.
[19:54] Voice package updates from Vivox are also expected to be forthcoming in the future as well.
Restore To Last Position (RTLP)
[21:08] There have been around 400 responses to the Firestorm call for feedback on how people use the Restore To Last Position functionality found in some TPVs. As I’ve previously reported, the Lab had been considering deprecating the server-side message RTLP uses as an overall part of on-going work to reduce the amount of inventory loss issues (real or perceived) which can occur.
Firestorm’s call is helping the Lab to better understand how, as faulty as it might be, RTLP does fulfil a range of useful / valid use cases. Commenting on the fact that he has been reading through the feedback, Oz Linden said:
[21:49] Well, I understand that there are user scenarios that need to be addressed and need to be better supported. Whether the existing feature is the way to do that or not, I still consider to be an open question. I do want to take those use cases and work back through that process [of determining how best to serve them].
So the Lab still isn’t going to do anything “quickly” either way on RTLP, and people needn’t worry about RTLP vanishing / breaking “suddenly”.
In the meantime, they are working on other changes intended to address various rezzing failure situations. This work is more server-side focused, although it may be a while before updates appear on the grid as the exact nature of the updates is still being determined.
[23:42] Oz also again thanked everyone who responded to the Lab’s call for feedback on inventory losses in general, defining the feedback as “really, really useful”.
In show #46 of The Drax Files Radio Hour, which I’ve reviewed here, Draxtor pays a visit to the Lab’s head office in Battery Street, San Francisco. While there, he interviews a number of Linden staffers – including Monty Linden.
Monty is the man behind the Herculean efforts in expanding and improving the Lab’s use of HTTP in support of delivering SL to users, and which most recently resulted in the arrival of the HTTP Pipeline viewer (the code for which is currently being updated).
He’s also been bringing us much of the news about the content delivery network (CDN) project, through his blog posts; as such, he’s perhaps the perfect person to provide further insight into the ins and outs of the Lab’s use of both the CDN and HTTP in non-technical terms.
While most of us have a broad understanding of the CDN (which is now in use across the entire grid), Monty provides some great insights and explanations, such that I thought it worthwhile pulling his conversation with Drax out of the podcast and devoting a blog post to it.
Monty starts out by providing a nice, non-technical summary of the CDN (which, as I’ve previously noted, is a third-party service operated by Highwinds). In paraphrase, this is to get essential data about the content in any region as close as possible to SL users by replicating it in as many different locations around the world as possible; then, by assorted network trickery, ensure that data can be delivered to users’ viewers from the location that is closest to them, rather than having to come all the way from the Lab’s servers. All of which should result in much better SL performance.
“Performance” in this case isn’t just a case of how fast data can be downloaded to the viewer when it is needed. As Monty explains, in the past, simulation data, asset management data, and a lot of other essential information ran through the simulator host servers. All of that adds up to a lot of information the simulator host had to deliver to every user connected to a region.
The CDN means that a lot of that data is now pivoted away from the simulator host, as it is now supplied by the CDN’s servers. This frees-up capacity on the simulator host for handling other tasks (an example being that of region crossings), leading to additional performance improvements across the grid.
An important point to grasp with the CDN is that it is used for what the Lab refers to as “hot” data. That is, the data required to render the world around you and other users. “Cold” data, such as the contents of your inventory, isn’t handled by the CDN. There’s no need, given it is inside your inventory and not visible to you or anyone else (although objects you rez and leave visible on your parcel or region for anyone to see will have “hot” data (e.g. texture data) associated with it, which will gradually be replicated to the CDN as people see it).
The way the system works is that when you log-in or teleport to a region, the viewer makes an initial request for information on the region from the simulator itself. This is referred to as the scene description information, which allows the viewer to know what’s in the region and start basic rendering.
This information also allows the viewer to request the actual detailed data on the textures and meshes in the region, and it is this data which is now obtained directly from the CDN. If the information isn’t already stored by the CDN server, it makes a request for the information from the Lab’s asset servers, and it becomes “hot” data stored by the CDN. Thus, what is actually stored on the CDN servers is defined entirely by users as they travel around the grid.
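The two-step flow described above – scene description from the simulator, then detailed asset data over HTTP, cached by the viewer – can be sketched in Python as follows. All names here are illustrative, not the viewer’s actual code.

```python
# Illustrative sketch of region loading: scene description first, then
# per-asset detail fetched from the CDN and cached locally by the viewer.

def load_region(simulator, cdn, viewer_cache):
    # Step 1: ask the simulator what is in the region (scene description),
    # which lets the viewer begin basic rendering.
    scene = simulator.get_scene_description()
    # Step 2: fetch the detailed texture / mesh data for each asset, first
    # from the viewer's own cache, otherwise from the CDN over HTTP.
    assets = {}
    for asset_id in scene["asset_ids"]:
        if asset_id not in viewer_cache:
            viewer_cache[asset_id] = cdn.get(asset_id)  # cache for revisits
        assets[asset_id] = viewer_cache[asset_id]
    return assets
```

On a second visit, everything already in the viewer’s cache is served locally, with no CDN request at all.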
The HTTP work itself is entirely separate from the CDN work (the latter was introduced by the Lab’s systems engineering group, while Monty, as noted in my HTTP updates, has been working on HTTP for almost two-and-a-half years now). However, they are complementary; the HTTP work was initially aimed both at making communications between the viewer and the simulator hosts a lot more reliable, and at pivoting some of the data delivery between simulator and viewer away from the more rate-limited UDP protocol.
As Monty admits in the second half of the interview, there have been some teething problems, particularly when using the CDN alongside his own HTTP updates in the viewer. This is being worked on, and some recent updates to the viewer code have just made it into a release candidate viewer. In discussing these, Monty is confident they will yield positive benefits, noting that in tests with users in the UK, the results were so good, “were I to take those users and put them in our data centre in Phoenix and let them plug into the rack where their simulator host was running, the numbers would not be better.”
So fingers crossed on this as the code sees wider use!
In terms of future improvements / updates, as Monty notes, the CDN is a major milestone, something many in the Lab have wanted to implement for a long while, so the aim for the moment is making sure that everyone is getting the fullest possible benefit from it. In the future, as Oz Linden has indicated in various User Group meetings, it is likely that further asset-related data will be moved across to the CDN where it makes sense for the Lab to do this.
This is a great conversation, and if use of the CDN has been confusing you at all, I thoroughly recommend it; Monty does a superb job of explaining things in clear, non-technical terms.
On top of their feature blog post on recent improvements to SL, on which I also blogged, the Lab has also issued a Tools and Technology update with data on the initial deployment of the CDN.
Entitled CDN Unleashed, the post specifically examines the percentage of simulator servers experiencing high load conditions (and therefore potentially a drop in performance) on the (presumably) BlueSteel RC both before and after deployment of the CDN service to that channel – and the difference even caught the Lab off-guard.
While a drop in load had been expected prior to the deployment, no-one at the Lab had apparently expected it to be so dramatic that high load conditions would almost vanish. Such were the figures that, as the blog post notes, at first those looking at them thought there was something wrong, spending two days investigating and checking and trying to figure out where the error in the data came from – only it wasn’t an error; the loads really had been dramatically reduced.
Elsewhere, the blog post notes:
Second Life was originally designed for nearly all data and Viewer interactions to go through the Simulator server. That is, the Viewer would talk almost exclusively to the specific server hosting the region the Resident was in. This architecture had the advantage of giving a single point of control for any session. It also had the disadvantage of making it difficult to address region resource problems or otherwise scale out busy areas.
Over the years we’ve implemented techniques to get around these problems, but one pain point proved difficult to fix: asset delivery, specifically textures and meshes. Recently we implemented the ability to move texture and mesh traffic off the simulator server onto a Content Delivery Network (CDN), dramatically improving download times for Residents while significantly reducing the load on busy servers.
Download times for textures and meshes have been reduced by more than 50% on average, but outside of North America the improvements are even more dramatic.
Quite how dramatic for those outside North America isn’t clear, quite possibly because the Lab is still gathering data and monitoring things. However, the post does go on to note that in combination with the HTTP pipelining updates now available in the current release viewer (version 220.127.116.115700 at the time of writing), the CDN deployment is leading to as much as an 80% reduction in download times for mesh and texture data. Hence why the Lab is keen to see TPVs adopt the HTTP code as soon as their release cycles permit, so that their users can enjoy the additional boost provided by the code on top of the benefits offered by the CDN.
Again, at the time of writing, the following TPVs already have the HTTP pipelining code updates:
Cool VL version v18.104.22.168 and v22.214.171.124 (legacy version)
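As a toy illustration of why pipelining cuts download times so sharply: overlapping requests pay roughly one round-trip of latency for a whole batch, where serial requests pay one per asset. This Python sketch simulates the effect with threads and an invented 20 ms latency (the real viewer pipelines requests over persistent HTTP connections rather than using threads like this):

```python
# Toy simulation of serial vs overlapped ("pipelined") asset fetching.
# LATENCY is invented for the demonstration.
import time
from concurrent.futures import ThreadPoolExecutor

LATENCY = 0.02  # stand-in for one network round-trip per asset

def fetch(asset_id):
    time.sleep(LATENCY)              # pretend to wait on the network
    return f"data-{asset_id}"

def fetch_serial(asset_ids):
    # One request at a time: total time is roughly len(asset_ids) * LATENCY.
    return [fetch(a) for a in asset_ids]

def fetch_pipelined(asset_ids, workers=8):
    # Overlapping requests: total time is roughly LATENCY plus overhead.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, asset_ids))
```

Both functions return the same data; only the elapsed time differs, which is the whole point of the pipelining work.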
As per the Performance, Performance, Performance blog post, the Lab want to hear back from users on the improvements. Comments can be left on the Performance Improvements forum thread, where Ebbe and Oz have been responding to questions and misconceptions, and Whirly Fizzle has been providing valuable additional information.