The following notes are taken from the Sansar Product Meeting held on Thursday, August 2nd. These Product Meetings are open to anyone to attend and are a mix of voice (primarily) and text chat. Dates and times are currently floating, so check the Sansar Atlas events section each week.
The primary topic of the meeting was Sansar physics, although inevitably other subjects were also covered.
My apologies for the music in the audio extracts. This is from the experience where the meeting was held, and I didn’t disable the experience audio input.
Express Yourself Release Updates
The July Express Yourself Release (see my overview here) had two short-order updates following its deployment. Both were to provide fixes for emerging issues. The first went out on July 19th, and the second on July 30th.
The Express Yourself release included an alteration to network behaviour: physics interactions now occur locally within the client first, giving the user an immediate response. The idea is to provide the kind of immediate feedback that will be essential for dynamic activities such as driving or flying a vehicle, as well as allowing a more immediate response when picking up an object, walking, firing a gun, etc.
However, as the updates still need to pass through the server and then back out to everyone else, this can result in objects appearing to move instantaneously when control is passed to another avatar. In particular, it was discovered the change could adversely affect any movement governed by scripts, which require additional time for server-side processing. This resulted in some content breakage, which in turn caused the updates – notably that of July 30th – to be issued in order to fix things.
It has also resulted in some behavioural changes with scripted interactions. For example, when firing a scripted gun, the action still requires server-side script processing while the initial movement response is client-side, so it is possible to fire the gun while moving and have the projectile appear to spawn separately from the gun and avatar (e.g. behind or slightly to one side). This is to be looked at if the July 30th update hasn't fixed it.
This work is going to be refined over time to make interactions both smoother and more responsive, and is seen as an initial step towards more complex object interactions, such as being able to pick up in-world objects and hold them in the avatar's hands.
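The local-first model described above resembles classic client-side prediction with server reconciliation, as used in many networked games. Below is a minimal sketch of that general technique; all names and the 1-D physics are illustrative assumptions, as Sansar's actual networking internals are not public:

```python
# Sketch of client-side prediction with server reconciliation.
# Illustrative only: names, structure, and the 1-D physics are
# assumptions, not Sansar's actual implementation.

from dataclasses import dataclass


@dataclass
class State:
    position: float  # 1-D position for simplicity
    velocity: float


def simulate(state: State, move_input: float, dt: float) -> State:
    """Advance physics locally so the user sees an immediate response."""
    velocity = state.velocity + move_input * dt
    return State(state.position + velocity * dt, velocity)


class PredictingClient:
    def __init__(self):
        self.state = State(0.0, 0.0)
        self.pending = []   # inputs not yet acknowledged by the server
        self.sequence = 0

    def apply_input(self, move_input: float, dt: float = 0.1):
        # 1. Simulate immediately on the client (no round-trip wait).
        self.state = simulate(self.state, move_input, dt)
        # 2. Remember the input so it can be replayed after reconciliation.
        self.sequence += 1
        self.pending.append((self.sequence, move_input, dt))

    def on_server_update(self, ack_sequence: int, server_state: State):
        # The server is authoritative: snap to its state, then replay
        # any local inputs it has not yet processed.
        self.state = server_state
        self.pending = [p for p in self.pending if p[0] > ack_sequence]
        for _, move_input, dt in self.pending:
            self.state = simulate(self.state, move_input, dt)
```

This also illustrates why objects can appear to jump: other clients only ever see the server's authoritative state, so a correction (or a hand-over of control to another avatar) arrives as an instantaneous move rather than a smooth one.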
Avatar Location Issue
One side effect of this is that avatars in an experience, when seen by others, can appear to be in a different place from where they have placed themselves. At the meeting, for example, some avatars appeared to be in the local group in their own view (and, I think, to some others), but still appeared to be at the experience's spawn point in other people's views. This seemed particularly noticeable with avatars standing still, with movement required to force the server to update everyone's client with an avatar's location. A further confusion from this issue is that, as voice is based on an avatar's position relative to your own, if they appear to be much further away, they cannot be heard, even if in their own view they are standing right next to you.
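The voice problem follows directly from positional audio: the gain applied to a speaker is computed from where the server believes they are, so a stale position yields the wrong volume. A toy illustration – the distances and the inverse-distance curve are assumptions, as Sansar's actual attenuation model is not public:

```python
# Toy positional-audio attenuation. The curve and distances are
# illustrative assumptions, not Sansar's actual voice model.
import math


def voice_gain(listener_pos, speaker_pos, ref_dist=1.0, max_dist=50.0):
    """Full volume inside ref_dist, silent beyond max_dist,
    inverse-distance falloff in between."""
    d = math.dist(listener_pos, speaker_pos)
    if d <= ref_dist:
        return 1.0
    if d >= max_dist:
        return 0.0
    return ref_dist / d


# The speaker believes they are standing right next to the listener...
print(voice_gain((0, 0, 0), (1, 0, 0)))    # full volume
# ...but the server still has them at the distant spawn point.
print(voice_gain((0, 0, 0), (80, 0, 0)))   # silent
```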
Avatar Locomotion Enhancements
Improvements to avatar locomotion are said to be in development at the Lab. This work includes:
- The ability to use animation overriders.
- Additional animation states (e.g. jump).
- Avatar physics driving – allowing avatars to be affected by physics for things like ballistic movement or falling.
It has been suggested this work should include an ability for the avatar IK to be enabled or disabled alongside creator animations, depending on the animation type being used.
As noted above, the work on making physics more client-side active is aimed at enabling better vehicles (using the term generically, not just for road / wheeled vehicles) and their controls in Sansar. This will likely initially take the form of an ability to attach avatars to vehicle objects (a-la Second Life), allowing both to be "driven" via scripted control. This would allow for very simple vehicle types. From there, the Lab's thinking is moving in two directions:
- A scripted approach (client-side?) that would allow for a more flexible means of defining vehicles and their capabilities;
- A "vehicle component" within the platform that could be applied to different vehicle models to enable movement, etc. This would potentially be the easier of the two approaches, but would limit the degree of customisation available for specific vehicle types.
The client scripting idea requires careful consideration: will creators want their scripts run client-side? Could it be a toggle option, so scripts can be expressly flagged to run on the server only? What would be the communications mechanism between scripts on the client and scripts on the server to ensure they remain synchronised? Should client scripts be limited to only certain capabilities, with the server still doing the heavy lifting? And so on. So – look for the ability to attach avatars to vehicles (and vehicles to avatars, and objects to one another) in the future.
Scene Load Times
There has been – from the start with Sansar – much discussion on scene load times. While a lot has been done on the Lab's part to improve things, some experiences still take a long time to load and, depending on the circumstances, may never load for some users. There are really two issues with scene loading:
- Bandwidth – the biggest factor.
- Memory footprint – some experiences can top out with a physical memory footprint of 14.5 GB. For a PC with "just" 16 GB of memory, that represents a struggle. Virtual memory (disk space) can obviously compensate, but can lead to performance degradation.
In hard, practical terms, there is little the Lab can directly do to resolve these issues – a person's bandwidth is whatever their ISP provides, and physical memory is whatever is in the box. However, as noted, there has been a fair amount of work to improve the optimisation of scenes and to improve load times through the way data is handled – notably textures (potentially one of the biggest causes of download problems) and sound files (another big issue) – and more work is coming. Lab CEO Ebbe Altberg recently noted, via the Sansar Discord channel, a number of options being considered:
- Progressive texture loading.
- CDN distribution (for more localised / faster availability of scene objects, materials, and textures, rather than having to call them "long distance" through the cloud).
- Background scene loading.
- Addition of better LOD capabilities for model loading / rendering (if it is far away, only load / render the low-detail model).
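The LOD option in the list above amounts to choosing which model variant to load or render based on distance from the viewer. A minimal sketch – the thresholds and level names are made up for illustration, as the planned system was not detailed at the meeting:

```python
# Minimal distance-based LOD selection. Thresholds and level names are
# invented for illustration; they are not Sansar's actual values.

LOD_LEVELS = [
    (25.0, "high"),          # full-detail mesh within 25 m
    (100.0, "medium"),
    (float("inf"), "low"),   # far away: only load / render low detail
]


def select_lod(distance_m: float) -> str:
    """Return the detail level to load/render for a given distance."""
    for max_dist, level in LOD_LEVELS:
        if distance_m <= max_dist:
            return level
    return "low"


print(select_lod(10.0))   # high
print(select_lod(60.0))   # medium
print(select_lod(500.0))  # low
```

The practical payoff is that a distant object never has to pull its full-detail mesh and textures down the wire at all, which attacks both the bandwidth and the memory-footprint problems described above.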
Further indicators are, I understand, also planned for the Scene Editor, designed to keep experience creators better informed about the load times of objects and elements. Appropriate elements of this information will also be made available in store listings for items, allowing scene builders to make more informed choices about the items they may be considering buying for inclusion in their experiences. There is also some practical work creators can do to ease things across the board: use smaller textures, decimate their mesh models correctly, reuse sounds and textures, etc.
- Aggressive render culling: Sansar can employ some aggressive render culling, resulting in objects appearing clipped or vanishing from a scene unexpectedly. This is most obvious with animated objects using bone animations. This is to be looked at.
- The last few minutes of the meeting were focused on ideas such as a mini-map capability for finding people within an experience; an ability to "go to" (teleport to) a friend; the ability to offer someone in an experience a teleport to your location; etc.