Recent SL viewer activities

It’s been a while since I’ve reviewed any of the official SL viewers from LL, so here’s a quick round-up of recent releases.

New Log-in / Account Creation Prompt

The new account creation prompt, displayed if the viewer does not locate any user settings files on a computer, and which first appeared in the 3.4.1.263582 release (August 16th), now looks to be the default option for all development / project viewers. It is part of both the most recent Mesh Deformer project viewer (3.4.1.264215, August 31st), and the new HTTP Group Services project viewer (3.4.1.264495, September 7th). However, it has yet to filter through to either the Beta or release versions of the Viewer.

Account creation prompt: now standard on all development / project releases of the SL viewer released since August 16th (click to enlarge)

Mesh Deformer Project

August 31st saw a new release of the Mesh Deformer (3.4.1.264215), which includes a revised mesh uploader floater with deformation options for the male and female shapes.

New deformation options

According to Nalates Urriah, the new options invalidate all test items so far provided for the project, and new samples are now required, although no comments to this effect appear to have been made on the JIRA or elsewhere, so the news may have been confined to a user group meeting. Details on how to provide test items can be found in Oz’s forum post on the matter. The JIRA (STORM-1716) for this project is still open for viewing and comment.

Group Services Project Viewer

As noted this week, there is now a Group Services (group management) project viewer available for testing the new HTTP group management service. The server-side of this project has yet to be rolled-out to Aditi, so it cannot be tested as yet. However, Baker Linden, who is developing the service, is apparently updating the JIRA, SVC-4968 (which is still publicly viewable) with the project status, and has indicated he’ll post when the server-side elements are available for testing.

The viewer is available in Windows, Linux and OSX flavours.

HTTP Libraries Viewer

The HTTP Libraries project viewer (3.3.3.262585) appeared on July 27th. This project, which Monty Linden is driving, is currently aimed at improving texture downloading and rezzing as a part of the Shining project.

HTTP Libraries project viewer: improved texture loading and rezzing

Texture loading / rezzing would appear to be significantly faster on this viewer compared with other offerings, although there may also be something of a placebo effect at work: some people have reported that floaters, etc., seem to load more slowly, while others have reported performance improvements outside of the HTTP library changes.

Beta Viewer and Release Viewers

The Beta viewer (3.4.0.264445 at the time of writing) continues to be focused on pathfinding, with fixes and updates going into it on a weekly basis – which is presumably why the pathfinding tools have yet to appear in a release version of the viewer. The removal of JIRA numbers from the release notes also means that tracking previously watched issues is that much harder (even if the JIRA themselves are no longer accessible, having the JIRA numbers visible facilitates easier identification of the specific issues being tracked).

Similarly, the release viewer (3.3.4.264214) appears to be focused on bug fixes and general improvements, with the release notes currently benefiting from the retention of JIRA numbers, making scanning for specific fixes easier.

Performance

I carried out basic performance tests on the viewers listed above using Dimrill Dale as my sample sim, during a period when there were the same number of avatars in the region (5, including myself). Tests were carried out in the same location on the region, looking in the same direction and with the same viewer settings (e.g. Graphics on high, Draw Distance set to 260m, using default time-of-day, with deferred disabled / with deferred enabled and lighting set to Sun/Moon+projectors, etc.). While all such tests are rough-and-ready, these did tend to show that all of the viewers offer the same performance on my default PC (see the sidebar panel on the right of this blog’s home page for system details). My results were:

  • Non-deferred: 18-20fps
  • Deferred with Sun/Moon+ projectors: 8-10fps

Similar figures were also obtained using the current Firestorm and Exodus viewers, although with deferred enabled and Sun/Moon+projectors active, Firestorm was slightly down at an average of 17-18fps, the other viewers being closer to an average of 19-20fps.

Group management project viewer released

As I recently noted, Baker Linden has been working on the large group management / editing issues, developing a new HTTP-based service to replace the current UDP service which has significant issues handling groups with more than 10-11K members. At the TPV/Developer meeting on the 24th August, he indicated that a project viewer would be available in the near future.

On Friday September 7th, he updated the JIRA on the issue with notes that the project viewer is now available for Windows, Linux and OSX.

In commenting on the JIRA (SVC-4968), Baker notes that the server-side code for the new service has yet to be deployed to Aditi, where initial testing will take place, and adds that he’ll be providing an update on the status of the server code once the situation has been clarified. He goes on to add:

There may be some issues during testing. When getting the member list of a large group, other info (group title, group info, etc.) may not properly load. This is an issue with the speed of Aditi’s SQL server and shouldn’t occur once live on Agni. To receive the rest of the data, wait for the member list to appear (this can be upwards of a few minutes), go back to the My Groups panel of the people floater and view the group profile again. The query will be cached this time, and the member list will appear quicker than it did before (depending on your connection speed). The rest of the information should be received this time.

If you find any problems while testing, please send me a message in-world (on Agni).

Large group loading: part of the group management problem

As noted in my previous report, in the first implementation, the data will be uncompressed. This means there will still be some delays in group loading (Baker previously estimated that a 40K member group is around 5 MB in size and could take up to a few minutes to download, depending on someone’s connection speed). Data compression is being looked at for a future release, although as noted in the comments on my last article, some are wondering why paging of group data isn’t being implemented (does the viewer API support it?).

Another point of note is that the new service is not compatible with V1 code, so adoption by V1-based viewers is liable to require some backporting. This is important, as once the new HTTP service is rolled out, the older, more limited UDP service will be capped at groups containing 10,000 members – larger groups will not function.

There is still no definitive time scale for the roll-out of the new service. However, it seems likely that once available on Aditi, the server code will remain there for testing for at least a couple of weeks prior to it being added to a RC channel on the main grid. How long the testing period across both will be is open to question, and a lot will depend on feedback as to how well the new service performs.

HTTP and Group Services updates

There are a number of projects underway at the moment to improve various aspects of Second Life performance. Some of these have been reported on as a part of the Shining Project; others are being dealt with elsewhere and reported on through the likes of the SL Scripting User Group and the fortnightly TPV/Developer meetings.

The following is by way of a brief update on the ongoing HTTP Library and Group Management projects with information taken from the most recent TPV/Developer meeting (recording link).

HTTP Library

The focus of this aspect of the Shining Project is to improve the underpinning HTTP messaging that is crucial to simulator / simulator and simulator / viewer communications, and it is under the management of Monty Linden.

Discussion on progress with the project commences at 36:36 into the recording.

The project code (textures only) is with the Linden Lab QA team and is expected to be in the 3.4.1 viewer once it has been released by QA. In the meantime, the HTTP project viewer was updated at the end of July. Many people are noticing improvements in viewer performance that go beyond initial texture loading, although there have been reports of other aspects of the viewer which use HTTP apparently being “slower” to use. This latter issue is most likely a false impression, with Monty commenting at the August 24th meeting that, “Most parts shouldn’t be affected. It’s competitive, when you’re doing both texture downloading and some of that work … but other things aren’t being cheated if you’re not downloading textures at the time.”

An issue has been noted in older MacBook Pro systems (late 2007 into 2008 dual-core systems, although the span of the problem isn’t clear) using nVidia drivers, wherein the expected speed-up with cached data which can be seen on other systems isn’t occurring. Monty is still investigating this. Overall, however, feedback on this project has been positive.

Group Management Functions

Large group loading: a familiar problem

Baker Linden has been working to resolve this problem, and his plan is also to go the HTTP route, which will require changes on both the server and the viewer sides of the equation. His comments on progress start at 42:53 into the TPV/Developer meeting recording.

The server-side code for an initial implementation of the solution has been passed to LL’s QA and is expected to be rolled to selected regions on the Beta (Aditi) grid soon.

In terms of the viewer, the plan is to develop a Project Viewer, which will be made available in the near future for people to use with the Aditi test regions. How soon this viewer is likely to appear is open to question – the code will initially need to be passed by LL’s QA (who may have received it on the 24th August) prior to the viewer being built. Once in the project viewer repository, the code will also be available for TPVs to produce test viewers of their own.

How long the testing period will last is also open to question and dependent upon feedback / issues arising. However, the plan will be to follow the usual pattern for roll-outs in that once the code has been tested on Aditi and necessary updates made, it will be rolled to a main grid RC for more involved testing. This is important, as there is a significant difference in the number and sizes of groups operating on the two grids. For example, the largest group on Aditi numbers some 40,000 members; on the main grid the largest group is about 112K, and there are many more groups with between 40K and 112K members.

One thing that has been made clear is that there will be no attempt at backward compatibility with V1-based viewers on the Lab’s part; the new code will be aimed solely at the V3 code base. V1-based viewers will still be able to use the UDP protocols for group management, although the LL servers will limit UDP access to groups with 10K members or fewer, so V1-based viewers wishing to support larger groups will have some code backporting on their hands.

There will also initially be some issues around the new HTTP protocol. For example, in the first implementation, the data will be uncompressed. This means that a 40K member group is around 5 MB in size, which can take up to a few minutes to download, depending on someone’s connection speed, so some frustrations are liable to continue. While data compression will eventually be used, it is not planned for the initial implementation.
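As a rough sanity check on those figures – my own back-of-the-envelope arithmetic, not Baker’s – the uncompressed payload works out at roughly 130 bytes per member, and the raw download time scales directly with connection speed:

```python
# Back-of-the-envelope figures for uncompressed group member data.
# The 40K members / ~5 MB figure comes from Baker Linden; the per-member
# size and the download times below are my own rough arithmetic.

GROUP_SIZE = 40_000                # members in the example group
PAYLOAD_BYTES = 5 * 1024 * 1024    # ~5 MB uncompressed

bytes_per_member = PAYLOAD_BYTES / GROUP_SIZE   # ~131 bytes each

def download_seconds(payload_bytes, mbps):
    """Time to fetch the payload on a connection of `mbps` megabits/sec."""
    return (payload_bytes * 8) / (mbps * 1_000_000)

for speed in (0.5, 1, 5):
    print(f"{speed} Mbps: ~{download_seconds(PAYLOAD_BYTES, speed):.0f} s")
```

On a 0.5 Mbps connection the raw transfer alone approaches a minute and a half, before any protocol overhead, which squares with the “up to a few minutes” estimate.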

Discussion also touched on providing group owners with an option to routinely clear-down group lists, removing members based on their last log-in date or on their not having logged in for a (group owner specified) number of days. However, LL are not going to implement such a feature, on the grounds that it could lead to mistakes being made and people being accidentally removed from a group.

Time Scale and Implementation

As mentioned above, there is no definitive time scale for this work to be completed. Testing is liable to take several weeks at the very least, so it is unlikely the new group management capabilities will be rolled-out on a widespread basis for at least another month, or possibly longer.

However, and like the upcoming new avatar bake service, once the server code is available on the grid, the switch-over will be transparent. If a viewer has the code to use the new group management HTTP service, it will do so; if it has not been updated, it will continue to use the UDP service (with the aforementioned 10K “cap”) until such time as that capability is “retired” from the grid.

Materials processing: the what, why and where

On August 16th, Linden Lab announced the forthcoming arrival of material processing in SL in the form of specular and normal maps. At the same time, a video was released demonstrating some of the capabilities. But what does this actually all mean for the everyday user in SL? Here’s what I hope is a lay guide to things, including comments from one of the architects of the new system, Geenz Spad, as to how it came about.

Materials Processing

This is not intended to be a technical discussion on computer graphics mapping in general or on normal or specular maps in particular. Rather, it is intended to provide a broad, non-technical explanation as to how the latter work. 

Materials processing is the combining of various computer graphics “maps” to significantly increase the level of detail that appears on any object or surface within a computer game. Within Second Life, textures (themselves a form of computer graphics map called a diffuse map) are routinely used to add the illusion of surface details to in-world objects and surfaces. The new material processing capability will introduce two further kinds of computer graphics map to SL which can be used in-world with textures to dramatically increase the detail and realism of objects and surfaces. These additional maps are called normal maps and specular maps.

Normal Maps in a Nutshell

Normal maps (sometimes referred to as bump maps, although they are more rightly the most common form of bump map) are a means of faking high levels of detail on an otherwise bland surface by means of simulating the bumps and dips that create the detail. Normal maps can be created in several ways.

For example, when working with 3D models, a common method is to make two models of the same object: one a very complex, highly detailed model with a high polygon count, the other a much lower polygon count model with significantly less detail. An overlay process is then used to generate a normal map of the detailed model’s surface features which can be applied to the less complex model, giving it the same appearance as the highly detailed model, but for just a fraction of the polygon count, reducing the amount of intensive processing required to render it.

Using a normal map to enhance the detail on a low-polygon model. The image on the left shows a model of some 4 million triangles. The centre image shows a model with just 500 triangles. The image on the right shows the 500-triangle model with a normal map taken from the model on the left applied to it (credit: Wikipedia)

Another common way to produce a normal map is to generate it directly from a texture file. Most modern 2D and 3D graphics programs provide the means to do this, either directly or through the use of a plug-in (such as the nVidia normal map filter for Photoshop). When combined with diffuse maps, the normal map creates the impression of surface detail far greater than can be achieved through the use of the texture alone.
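To make the texture-to-normal-map idea concrete, here is a minimal sketch in Python (purely illustrative – real tools such as the nVidia filter are considerably more sophisticated) which derives per-pixel normals from a grayscale height map using central differences:

```python
import math

def normal_map(height, strength=1.0):
    """Convert a 2D grayscale height map (values 0.0-1.0) into an RGB
    normal map. Each pixel's normal is derived from the local slope
    (central differences), then packed into 0-255 RGB channels, where a
    flat surface encodes as the familiar (128, 128, 255) blue."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope in x and y, clamping at the image edges
            dx = (height[y][min(x+1, w-1)] - height[y][max(x-1, 0)]) * strength
            dy = (height[min(y+1, h-1)][x] - height[max(y-1, 0)][x]) * strength
            # Surface normal points "up" out of the image, tilted by the slope
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx*nx + ny*ny + nz*nz)
            nx, ny, nz = nx/length, ny/length, nz/length
            # Pack [-1, 1] components into 0-255 channels
            row.append((round((nx*0.5 + 0.5) * 255),
                        round((ny*0.5 + 0.5) * 255),
                        round((nz*0.5 + 0.5) * 255)))
        out.append(row)
    return out

# A flat height map produces uniform "normal map blue"
flat = [[0.5]*4 for _ in range(4)]
print(normal_map(flat)[1][1])  # (128, 128, 255)
```

The characteristic pale blue of most normal map images is simply that (128, 128, 255) encoding of a flat, straight-up normal.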

Normal map from a texture: left – the original texture (diffuse map) and its normal map shown as a split view; right – the material resultant from applying both maps to surfaces inside a game (credit: Valve Corporation)

Specular Maps

In the real world, every highlight we see in an object is actually the reflection of a light source. Surfaces and surface details reflect light differently to one another, depending on a range of factors (material, lighting source point(s), etc.). Specular maps provide a means of simulating this by allowing individual pixels in an object to have different levels of brightness applied to them, giving the illusion of different levels of light being reflected by different points on the object.

When life gives you lemons: a mesh lemon with (l) a  normal map  applied, and (r) a normal and a specular map together. Note how light is apparently being reflected across the surface of the latter (credit: Mind Test Studios)

Like normal maps, specular maps can be produced in a number of ways, both within 3D graphics modelling programs and in tools like PhotoShop. As shown above, they can be combined with normal maps and textures to add detail and realism to 3D models and flat surfaces.

What Does This Mean for Second Life?

Second Life itself already includes a dynamic example of how normal and specular maps can be used: Linden Water. This is created using an animated normal map to create the wave-like effect for the water, while an animated specular map adds the highlights and reflections. The result is a very realistic simulation of moving water able to catch and reflect sunlight.

Just as the use of normal and specular maps creates a very real illusion of water with Linden Water, the new materials processing capabilities will significantly enhance the look and realism of both mesh and prim content within SL. Mesh content should additionally benefit as it will be possible to produce high levels of detail on models with low polygon counts (as shown in the first image in this article). This will improve rendering performance while also having the potential to lower things like land impact for in-world mesh items.

The only initial limitation as to where and how normal and specular maps can be applied is that they will not be applicable to avatar skins and system layer clothing. Any decision on whether the material processing capability should be extended to include these will depend upon at least two things:

  • Community feedback – whether there is a demand for normal and specular maps to be used with avatar skins
  • Understanding what is happening with the avatar baking process, and determining what is involved in getting the new baking process and material processing to work together.


Pathfinding: starting to reach TPVs

The pathfinding tools are starting to find their way into TPVs well ahead of showing any sign of moving from the SL Beta Viewer to the release version.

The delay in updating the release viewer may be down to several reasons. One of these might be that Linden Lab staff acknowledge the pathfinding documentation is currently undergoing update and rationalisation, and so the capability is still regarded as being “in beta”.

The table below lists the current versions (as of August 19th) of TPVs which have started to embrace pathfinding, and indicates the tools each provides.

(click to enlarge if required)

Note that the Navmesh View / Test option is tied to the new SL Havok sub-licence arrangement; as such, none of the above viewers are able to include it unless / until they sign the sub-licence agreement (and are eligible to do so). However, visualising the navmesh is not essential to setting pathfinding attributes for objects in-world or to optimising regions where pathfinding is being actively used. Other “missing” functionality indicated in the table above will doubtless be addressed by the viewers in future releases.

Links for these viewers, including to their release notes, can be found on my Viewer round-up page.

Related Links

Pathfinding: playing inside buildings

Over the last couple of days, I’ve been experimenting with setting pathfinding characters roaming within buildings. What follows is not intended to be definitive, but more a case of what I’ve found so far. Until there is more up-to-date documentation from LL on setting-up pathfinding, this was very much thumb-in-the-air stuff, and as ever, YMMV. I’m still fiddling with things, and may add a further article later.

Characters with Impact

For the test, I used a basic pathfinding script to animate a cube (which I called “Charlie”). It was simple and was enough for the basic task. One thing to bear in mind with pathfinding characters is that they’ll have a land impact of 15. This is related to the character’s physics weight, and will not change as a result of adding / removing prims (a 1-prim character will still have a land impact of 15 as will a 30-prim character), although other factors (such as streaming cost) may raise the LI.
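For those curious, a character along these lines can be produced with only a couple of the new LSL pathfinding calls. The following is a hypothetical minimal reconstruction, not the actual script I used:

```lsl
// Minimal wandering character, along the lines of the "Charlie" cube
// used for these tests (a hypothetical reconstruction, not the actual
// script). Touch the object to toggle pathfinding on and off.
integer active = FALSE;

default
{
    touch_start(integer n)
    {
        if (!active)
        {
            // Turn the object into a pathfinding character...
            llCreateCharacter([CHARACTER_DESIRED_SPEED, 3.0]);
            // ...and have it wander within 10m of its start point
            llWanderWithin(llGetPos(), <10.0, 10.0, 2.0>, []);
            active = TRUE;
        }
        else
        {
            llDeleteCharacter();  // revert to a normal object
            active = FALSE;
        }
    }
}
```

It is the llCreateCharacter() call which gives the object its fixed land impact of 15, regardless of how many prims it contains.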

People may find the idea of a 15 LI character “harsh” (esp. if the prim count is lower). For my part, I don’t think it is that bad; it still allows for a fair few NPCs in a role-play region without significantly impacting prim counts.

Setting Attributes

Setting-up a building in which pathfinding characters can roam requires setting the correct pathfinding attributes. During my tests, I wasn’t trying for anything sophisticated like setting-up paths; rather, I wanted to see how a simple character (like a pet or animal) would roam and interact with its surroundings.

Pathfinding attributes, as outlined in my pathfinding overview, are set via the Linkset floater. There are a few points that need to be noted prior to doing this:

  • Only one attribute can be set for a linkset, so if your structure includes walls and floors within the same linkset, you cannot set one attribute for the floor, another for the walls, and so on
  • It is possible to set pathfinding attributes against NO MOD builds, as pathfinding attributes are not the same as object permissions. However, there are caveats to this – such as whether or not the build includes things like scripted doors (see the following bullet point)
  • Attributes which affect navmesh calculations (e.g. Walkable, Static Obstacle, Exclusion Volume, Material Volume), should not be set against linksets with scripted moving objects (such as doors). Doing so will prevent the scripts in the objects working as intended
    • If you have a structure which includes things like doors, these must be unlinked first and their attribute left as moveable obstacle
    • Obviously, in the case of NO MOD builds, this potentially limits your choices as to how you enable pathfinding within a building and in ensuring pathfinding is suitably optimised.
Pathfinding attributes for buildings require setting with care to avoid possible “breakages”

To set an object’s pathfinding attribute:

  • Right-click on the linkset for the object and select Show in Linksets
  • Select the required attribute from the drop-down list
  • Apply the attribute to the linkset
  • Use the Rebake Region button which will appear at the bottom of your screen to update the region navmesh.
Setting an object’s pathfinding attributes

Walkable Areas

Broadly speaking, the following options are available when creating walkable areas in a building:

  • Set the entire structure to Walkable. This works reasonably well, however:
    • For modifiable builds, all scripted moveable elements must be configured as linksets separate to the main structure, as noted above
    • This option should not be used for NO MOD buildings with scripted moveable elements integral to the structure
  • If the building’s floor areas are already an independent linkset, set that linkset to Walkable
  • If the building is modifiable, unlink the floor areas and then re-link them as an individual linkset which can be set to Walkable
  • Create your own floor “overlays” from prims, position them over the existing floors and then set their attribute to Walkable (useful in NO MOD builds which include scripted moving elements).

Which of these options you use is down to the building you have and personal choice. I found that setting an entire building to Walkable (after taking care of the door linksets) worked perfectly well for the most part.

Placing a Walkable floor into a build. Left: the floor prim and house; right: as seen in navmesh view with house selected (wireframe) and the floor in green to indicate it is walkable. Note the floor equates to one room of the house

Note you can set the Walkable attribute for the floor prims prior to positioning them, but you’ll have to run a region rebake once you’ve done so. You can “hide” the floor prims by making them transparent.

I should also note that in terms of furnishings, I left anything set with the Movable Phantom attribute alone, and changed anything set to Movable Obstacle to Static Obstacle (this did not “break” any scripts for sitting, etc.).

Optimising

To give better control over characters roaming inside a building you might wish to set additional attributes against individual elements in a building. For example, in setting an entire building to Walkable and with Charlie moving at the default character speed, I found he would periodically “pass through” a wall or window and continue roaming around outside. I stopped this by setting the walls of the building to Static Obstacle. As well as potentially helping with character behaviour, setting additional attributes for linksets and objects helps optimise pathfinding for the entire region in which it is being used.
