The following notes are primarily taken from the TPV Developer (TPVD) meeting held on Friday, August 21st, 2015. A video of the meeting is included at the end of this report, with any time stamps in the following text referring to it. My thanks, as always, to North for recording the meeting and providing it for embedding.
Server Deployments – Recap
There was a single server maintenance package deployed during the week, which was delivered to the BlueSteel RC on Wednesday, August 19th. This was intended to provide fixes for items and folders getting mixed up. However, it was subsequently rolled back on Thursday, August 20th.
Project Quick Graphics
On Friday, August 21st, the long-awaited Avatar Complexity / graphics presets viewer arrived in project viewer form. Version 188.8.131.524433 is being referred to as “Project Quick Graphics”. I provided an initial look at this viewer in pre-release, but I now have an updated overview available.
As noted in that report, the Avatar Complexity default you get is based on the rendering performance of your system. However, this might be adjusted by the Lab during the time the viewer is available at a project status.
[02:00] An update to the Oculus Rift viewer is still anticipated, although this has tended to be pre-empted by other things, and may be again.
There have been no other viewer updates since the promotion of the Maintenance viewer on Tuesday, August 18th, as reported in part 1 of this update.
[23:35] There will, at some point, be an experimental viewer build, which should lead to a project viewer in the future, using FMOD Studio for audio.
[08:00] Rider Linden has been engaged in further HTTP work, specifically aimed at the viewer, with the intent of reducing the paradigms for how HTTP should be used within the viewer from four to a single, consistent approach. He has most recently been engaged in aligning recent HTTP updates made to the viewer with his own work.
[19:36] The Lab is still looking to move more asset types from delivery using UDP via the simulator to delivery using HTTP via the CDN, but this is pending the completion of Rider’s HTTP work. Overall, the view is that there is no reason why any asset that goes to the viewer should not be cached and delivered via the CDN.
[10:48] The Lab is continuing to investigate causes of inventory issues with the intention of reducing them. In particular, they are considering server-side enforcement of how inventory should be organised.
The idea is not to restrict how people organise their inventories, but rather to ensure that things which simply should not happen under normal use – and which have been shown to lead to inventory losses when they do occur – are no longer possible. Examples of this include a user’s inventory gaining more than one Trash folder, or the system allowing folders to be created without an associated system ID, and so on. The most effective way of achieving this is through server-side rules enforcement.
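To illustrate the kind of server-side rules enforcement being described, the sketch below shows how such checks might look. This is purely hypothetical (not Linden Lab code): the folder fields, the “trash” type marker, and the `system_id` name are all invented for the example.

```python
# Hypothetical sketch of server-side inventory rule enforcement.
# Field names ("type", "system_id") are illustrative assumptions,
# not the actual inventory schema.

def validate_new_folder(existing_folders, new_folder):
    """Reject folder creations known to corrupt inventories.

    existing_folders: list of dicts describing the user's current folders.
    new_folder: dict describing the folder the viewer asked to create.
    Returns (ok, reason).
    """
    # Rule 1: singleton system folders (e.g. Trash) may exist only once.
    if new_folder.get("type") == "trash":
        if any(f.get("type") == "trash" for f in existing_folders):
            return False, "inventory already has a Trash folder"

    # Rule 2: every folder must carry an associated system ID.
    if not new_folder.get("system_id"):
        return False, "folder has no associated system ID"

    return True, "ok"


folders = [{"name": "Trash", "type": "trash", "system_id": "f-001"}]
# A second Trash folder is refused; an ordinary folder is accepted.
print(validate_new_folder(folders, {"name": "Trash", "type": "trash", "system_id": "f-002"}))
print(validate_new_folder(folders, {"name": "Objects", "type": "folder", "system_id": "f-003"}))
```

Because the checks run where the inventory is stored, they apply uniformly no matter which viewer made the request.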
While the Lab is not ready to start implementing such changes as yet – they are still investigating, as noted – these changes are part of an overall goal to migrate all inventory operations over to AIS (Advanced Inventory System) and then to deprecate older inventory code – all of which will involve changes to the viewer. This means that as this work progresses, viewers not supporting the AIS v3 code will no longer be able to perform inventory operations.
[16:40] Commenting on the issue of validation of uploads in general, Oz Linden said:
I would like to add validation for more things that get uploaded [but] of course there’s always the backward compatibility problem, people complaining that once upon a time I could upload this, and now I can’t…
However, he went on to say that there is a case for not limiting validation of uploads purely to the viewer, as is currently the case:
There’s nothing wrong with also checking in the viewer, but if it’s not the model we expect to be true of the world, there should be validation on the server because we have a lot of third-party viewers … So we really can’t count on the viewer to get it right, there are too many of them. And if nothing else, some things that can cause crashes that might be deliberately put into viewers … that might cause crashes in other people’s viewers, and that’s not good. So we have to try to protect against that.
The best place to put that protection, if we can do it, is on the server-side. So there are lots of things that, over time, we may add checking of, as they are uploaded, on the server, and we may reject uploaded things that are inappropriate.
How quickly we will be able to do that will probably vary with what the upload type is and what time we have between doing dazzling new features; but if we find something related to some dazzling new feature we can add some checks to, we might do that.
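The server-side approach Oz describes can be sketched in miniature as follows. Everything here is an invented illustration – the asset types, name limit, and error strings are assumptions for the example, not the Lab’s actual upload pipeline.

```python
# Hypothetical illustration of server-side upload validation: the
# server checks the request itself rather than trusting the viewer.
# Asset types and limits below are invented for the example.

MAX_NAME_LEN = 63
ALLOWED_TYPES = {"texture", "mesh", "sound", "animation"}

def validate_upload(asset_type, name, payload):
    """Return a list of validation errors; an empty list means accept."""
    errors = []
    if asset_type not in ALLOWED_TYPES:
        errors.append(f"unknown asset type: {asset_type}")
    if not name or len(name) > MAX_NAME_LEN:
        errors.append("asset name missing or too long")
    if len(payload) == 0:
        errors.append("empty payload")
    return errors


print(validate_upload("mesh", "chair", b"\x00\x01"))  # accepted: no errors
print(validate_upload("flash", "", b""))              # rejected with errors
```

The point of the pattern is that a malformed or malicious request is rejected at the server regardless of which viewer – official or third-party – submitted it.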
Chromium Embedded Framework
[23:55] The work to replace llwebkit with Chromium Embedded Framework for media purposes has been progressing, but is currently on pause for a week or two as the developer is away from the Lab on vacation.
There has been an internal demonstration of it working within the Lab, and the feedback is that HTML 5 works well on prims, etc. However, it may be a while longer before there is anything available for a more public view as the Lab works through the “zillion little things” that tend to need addressing towards the latter stages of a project like this. Currently, a target date for a project viewer could be late September / early October.
The CEF work will “almost certainly not” have Flash or QuickTime support included, although Flash may still work depending upon how people have their systems set up (at least for a time). Ultimately, providers of in-world media systems using Flash or QuickTime will have to update them if they wish them to keep working as CEF is introduced. In addition, the implementation will only support codecs directly supported by CEF.
[20:25] During the meeting, Oz repeated a call he originally made some time ago: if there is a developer wishing to volunteer for a “deep dive” into the viewer’s caching systems, and possibly undertake a re-write of some elements, he’d like to hear from them.
There is a conviction that some of these don’t work as they should, an example being asset data which should already be locally cached nevertheless getting downloaded. Another possible area might be a re-examination of how a single, large static VFS file is used to store data for almost anything that’s not a texture, compared to other means of storing the information; it is thought that the use of a single VFS file can lead to possible contention between it and the local texture cache when the latter gets full.
[27:46] Vivox are apparently still in a final round of bug fixes ahead of issuing the next set of voice updates the Lab will be able to incorporate into the viewer. However, these will be Windows / Mac focused, as Vivox apparently have no plans to update their Linux SDK.
Questions have been asked on the ability for people to create and sell experiences created using the Experience Keys / Tools. Commenting on the question when asked during the TPV Developer meeting, Oz said:
Would it be good to be able to sell experiences? Yes. Would it be something we might at some point consider doing? Yeah, it’s been discussed. [But] it’s not real simple, actually; we’re not working on it. It has been talked about as a possible good thing, but you get into a complicated situation there.
If I transfer – let’s say transfers – my experience from myself to Hope [an attendee at the meeting], then who owns all the objects that are associated with that experience? Who owns all the scripts that have been compiled with that experience? And you end up with this big tree of things that needs to get changed in order for the real responsibility to be changed.
It’s not that it wasn’t a goal; it’s that it’s really complicated. An experience ties together a lot of different things. So like I said, it’s been talked about, but it’s not clear that it is easy to do.
In terms of grid-wide experiences, and whether these will be generally enabled, the decision is currently on hold while the Lab monitors how the current land-based iteration of experiences capabilities is adopted and used, and how the system generally holds up.
There have apparently been several hundred experiences activated so far (i.e. had their keys registered), but not all of them are currently active or have scripts, etc., associated with them, so the Lab is waiting to see how things evolve.