On Monday, September 10th, Linden Lab issued Sansar release 25 (R25), entitled the Shop, Gift, & Spend Release. As the name suggests, the focus is on shopping and gifting Sansar dollars – although there is more to this release than commerce activities.
I provided an overview of some of the new features on August 30th, 2018, based on information provided at a Sansar Product Meeting. This article looks at some of these features in more detail, as well as the other elements in the release. Note that as I do not own a VR headset, these reviews primarily focus on using Sansar in Desktop Mode.
The first noticeable change with the release is with Look Book – to which users will be taken the first time they log in to Sansar following the update. A new background image has been added to the Look Book, replacing the blue screen (as shown in this article’s banner image). The background places your avatar into a living room style space, offering a cosier setting when adjusting your look.
In addition, VR users will no longer have to revert to Desktop mode in order to adjust their avatar in Look Book, but can now do so whilst in VR, including making adjustments to clothing made using Marvelous Designer, as shown in the video below, courtesy of the Sansar team at Linden Lab.
Additional Avatar Updates
Comfort Zone Changes:
The comfort zone now applies in first person desktop mode as well as to VR.
The Comfort Zone is now disabled by default for all new users from this release onwards. However, all pre-existing comfort zone settings will still persist.
Comfort zone options for Friends and non-Friends can be found by scrolling to the bottom of the Settings panel (More Options … > Settings).
Teleport sound: A sound can be heard by everyone when someone teleports nearby.
New dance animations: type /dance3 or /dance4 for new dances.
The R25 update sees two enhancements to the Sansar Store:
The ability to browse the Store from within the client.
A new shopping cart.
Browsing the Store in the Client
Accessed via the new shopping bag icon in the top right icon set of the client (shown below, right), the store functions almost as it does within a web browser.
In desktop mode, once open, it is possible to scroll through item thumbnails, select categories via the drop-down, sort listings via drop-down (both of which are shown open in the image below), while clicking on an item will open the full listing in a pop-up panel (again shown below).
However, as it is not currently possible to make purchases via the client version of the store, clicking on the Buy button will take you to the Sansar Store web listing for the item, where a purchase can be made. The ability to make purchases through the client version of the store will hopefully be part of a future update.
The Sansar Store shopping cart appears in the web version of the Store only at present, and is located in the top right corner of the browser tab. When empty, it is displayed as a plain white cart icon. However, a small running total of items is displayed as items are added, as seen below, top right.
Items are added by viewing them and then clicking the Add To Cart button, which will change to Added To Cart when the item has been added (along with the item count icon in the shopping cart incrementing).
When items are in the cart, clicking it will display a drop-down list (again shown in the image below), allowing individual items to be removed, the entire cart to be emptied, or all items to be purchased and delivered to your inventory (assuming there are sufficient account funds on hand).
When using the shopping cart, note that at present item quantities in the cart cannot be adjusted.
Linden Lab is running another Sansar experience creation competition, this one with a S$5,000 first prize (equal to US $50) and an Oculus Rift headset and touch controllers (approx US $400) up for grabs.
For this competition, entrants are asked to build a “watch room” – a space where people can gather to watch a media stream on one of three topics: anime, sport or pets. The “watch room” can be as complex or as simple as entrants desire, so long as the environment matches the subject matter of the media stream (so if the video is of pets, then pets should feature in the experience design).
Videos themselves should be drawn from Twitch, YouTube or Vimeo, and should be played on a purpose-built media surface, or a suitable media surface obtained through the Sansar Store, some examples of which include:
All of which are available for download and use free-of-charge.
Entries are made by publishing the completed experience in the Sansar Atlas and then sharing a link to it on Twitter, using the hashtag #MySansarLounge and tagging @SansarOfficial.
The closing date for entries is Sunday, September 30th, 2018. The full rules for entry can be found on the Sansar website, with perhaps the most important being that experiences entered into the competition cannot break copyright law or feature intellectual property – including the media stream – that the user does not own the rights to.
The following notes are taken from the Sansar Product Meeting held on Thursday, August 28th. These Product Meetings are open to anyone to attend, and are a mix of voice (primarily) and text chat. Dates and times are currently floating, so check the Sansar Atlas events sections each week.
Attending this meeting were Nyx, Derrick, Aleks, Torley and Ebbe. Unfortunately, I was AFK for part of the meeting, and while I was gone, my client disconnected from the meeting location, so I missed some 25 minutes of discussion.
At the meeting, Cara confirmed that Permissions / Licensing will not be part of the next release (R25), due to some last-minute issues that need to be addressed. However, items slated to appear include (note this is a limited list, due to my being disconnected from the meeting whilst AFK):
The initial release of the Sansar Store within the client.
The Client version of the store will allow items to be browsed. However, for purchasing, the user will be transferred from the client version of the Store to the web version in their browser.
The ability to purchase goods from within the client will be added in a future update.
A shopping cart capability in the Sansar Store on the web.
The gifting of Sansar Dollars to another avatar will be possible with the next release, although it is subject to the 15% commission payment to LL – so a gift of S$100 from one avatar to another will in fact cost the gifting avatar S$115. The 15% commission charge serves a dual purpose:
It is in line with the Lab generating revenue through transactions.
More importantly, it prevents users avoiding paying any commission to the Lab by paying one another directly for goods and services.
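The sender-pays arithmetic above can be sketched in a few lines. This is purely illustrative; the function and constant names are assumptions for the example, not part of any Sansar API:

```python
# Illustrative sketch of the sender-pays commission model described above.
# COMMISSION_PCT and gift_cost are hypothetical names, not Sansar API calls.

COMMISSION_PCT = 15

def gift_cost(amount_sd: int) -> int:
    """Total Sansar dollars debited from the sender: gift plus 15% commission."""
    return amount_sd + (amount_sd * COMMISSION_PCT) // 100

print(gift_cost(100))  # a S$100 gift debits the sender S$115
```

Because the commission is charged on top of the gift, the recipient still receives the full gifted amount; only the sender's balance bears the extra 15%.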
Gifting of goods will be possible in a future release.
The next release will allow custom images to be added to people’s events, rather than having to use the experience image.
A teleport sound will be added, allowing those within an experience to hear when someone has teleported.
Avatar-to-avatar collisions will be turned off. This should hopefully prevent narrow passages, doorways, etc., from being blocked by an avatar standing in / in front of a confined space.
Permissions / Licensing
Although it will not be part of the next release, Nyx provided more insight into the permissions that will be available when the system is deployed. These will comprise:
Permission for content to be resold, including how much money should be earned by the creator, whether the item is directly re-sold or used as a component within another creator’s item.
Included in this is the ability to specify what properties within an object can be further modified by a purchaser. This will allow purchasers to set properties within an object without the creator having to give up the right to earn from any re-sale of the object.
Full permissions on an item – essentially an open-source licence for the object and its contents to be used howsoever a buyer wishes, including claiming it as their own, so that no further income is earned by the original creator.
There is apparently one further permissions category to be added, which may follow the initial deployment of the permissions system. However, Nyx did not go into specifics on this.
CDN Asset Distribution
It is hoped that asset delivery for experiences will move to CDN (Content Delivery Network) distribution in the very near future. This means that rather than having all the data for an experience delivered from Amazon’s services, content assets could be delivered from a “local” CDN cache. It is the approach currently used for asset delivery within Second Life. It is hoped that this move will reduce the load times for those experiences that have had their assets previously cached within a “local” CDN node.
There have been further requests for additional store categories (e.g. “Trees and plants”), with ideas being requested. The Lab fully intends to keep adding to the categories list as it becomes clear what is needed.
It has been requested that sub-categories are better surfaced. For example, being able to mouse over the Avatar category, and have the category list expand to show all of the avatar sub-categories, rather than have to go to the Avatar category list, then click for a drop-down of sub-categories. The Lab had been working on something similar to this, but the work was sidelined; it may be resumed.
Work is proceeding on making notifications visible to VR users, although this will not be in a forthcoming release.
Edit Server Issues
Some issues have arisen with the move to the Edit Server infrastructure. These include scene settings failing to persist, and scenes reporting as being saved when they have not. One issue with settings failing to persist is the sky being set to 0 – so everything is black when entering the scene in Edit mode. The way to fix this is to go to the scene settings and adjust the sky distance. The Lab has been trying to reproduce the issue with scene settings failing to persist, but has so far been unable to do so.
People having specific, repeated issues with scene settings can make a copy of the scene and contact the Lab, who will take the copy and run it on their test environment for further investigation.
Support for Nvidia RTX: not currently being planned.
Improvements are being made to the Chat App, including quality of life improvements for viewing chat, and (hopefully) timestamps against chat items.
Support for custom avatar animations, including the ability to sell them, is planned for the R26 release.
Work is progressing on customisable controller options for VR hand controllers – no release date as yet.
A virtual keyboard for VR users is also being developed, but no details on how it will work are available as yet.
Disabling capabilities in run-time: this has come up on a few occasions. There are concerns that if there is not a simple, direct way to inform users as to what is / is not permitted in an experience (e.g. having teleports allowed in one experience, but disabled in another), they could become confused when hopping between different experiences.
Cancelling an experience load: an option will be coming to allow users to abort loading an experience which is taking – for them – too long.
Consideration is being given to changing the Atlas in terms of how experiences are listed (is concurrency the best approach?). This may include providing categories under which experiences can be listed (e.g. Games, Educational, etc.), to make searching and listing experiences easier.
The following notes are taken from the Sansar Product Meeting held on Thursday, August 23rd. These Product Meetings are open to anyone to attend, and are a mix of voice (primarily) and text chat. Dates and times are currently floating, so check the Sansar Atlas events sections each week.
Attending this meeting were Eliot, the Sansar Community Manager, with Bagman Linden (Linden Lab’s Chief Technology Officer, Jeff Petersen), Birdman, SeanT and Nyx Linden.
The one drawback in this move is that it means that when editing a scene for the first time, there will be a delay in accessing Edit mode while the server is spun-up and loads.
The move paves the way for the introduction of the new licensing / permissions / supply chain system.
It will also in time allow for things like creators being able to work collaboratively within the same scene.
This is indicated as being “pretty far down the line”, and unlikely to appear before 2019.
Licensing / Permissions / Supply Chain System Deployment
It had been intimated in previous meetings that the licensing / permissions / supply chain system could start to be deployed in the September release. However, Nyx Linden was a little more cautious in addressing when the deployment might occur.
The Lab is still working on some bug fixes and wish to ensure the first stage of deployment is smooth and successful.
Due to the way in which things are interlinked, the deployment would be pretty much all of the core supply chain / licensing / permissions system, although further extensions to the capability may be added in the future.
The system will include a “Save to Inventory” option.
This will initially only allow for objects to be “added to” – so an object can have a script or lighting capabilities added to it and then saved back to inventory. It will not initially allow for disparate objects (e.g. walls and floor and roof) to be combined into a single object.
It will however allow creators to use licensed components (e.g. scripts, sounds, lights) from other creators in their own items, and then sell those items, with the component creators also receiving payment.
The ability to combine disparate objects into a single unit will be added over time.
All items sold through the Sansar Store prior to the permissions system deployment will be set to “no resale” to prevent them being wrongly re-used / re-sold.
Items uploaded after the system is deployed will always be available for re-use in other people’s objects.
However, the original creator will have the ability to set whether or not their creations can be re-sold. So, if an item flagged as “not for re-sale” is used as a component in another creator’s object, they will not be able to then place that object for sale.
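As a purely hypothetical model of this rule, an object could be treated as a collection of components, each carrying a re-sale flag set by its original creator, with the composite only sellable if every flag allows re-sale. The names and structure below are illustrative assumptions, not Sansar's actual implementation:

```python
# Hypothetical model of the re-sale rule: an object is sellable only if
# every component's creator has allowed re-sale. Purely illustrative.

def can_be_listed_for_sale(component_resale_flags):
    """True only when every component in the object permits re-sale."""
    return all(component_resale_flags)

# An object built entirely from resellable components may be listed...
print(can_be_listed_for_sale([True, True, True]))   # True
# ...but one "no re-sale" component blocks listing the whole object.
print(can_be_listed_for_sale([True, False, True]))  # False
```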
Clothing and avatar accessories will not be included in the initial permissions system deployment.
The Lab is continuing to look at experience loading and when to “drop” the loading screen and allow people to start moving around within an experience.
One option being considered is to have everything within immediate viewing range of a spawn point cached by the client prior to dropping the loading screen, then having lower resolution textures on the faces of more distant objects or those that are initially “out of sight”, which are then progressively swapped out for higher resolution textures during the first one or two minutes the user is in the experience.
This would allow access to experiences to be somewhat quicker for some, although it would mean that spawning followed by immediate rapid teleporting might result in seeing some of the lower-grade textures prior to them being swapped out.
However, the experience wouldn’t be like SL, where actual objects are still being rendered (resulting in an avatar walking into a wall that has yet to render in their view); all of the physical objects would be visible in Sansar, some might just briefly have lower quality textures.
Sansar cache sizes: Sansar uses a 10Gb “large” cache and a 10Gb “small” cache for smaller files. Both of these will be user-configurable in the future.
Disabling Capabilities in Run-Time
Some types of experience would benefit from having certain run-time capabilities, such as free camera movement or teleporting, disabled (e.g. blocking the ability for someone to avoid traps in a game by teleporting past them, or using the free cam to cheat their way around a maze).
Bagman confirmed the back-end technology, as it stands, doesn’t allow for this, but it is something the Lab is aware of, and something they do want to address and make possible. However, it is not as high on the priority list right now as some other aspects of interactivity and options for creators the Lab wants to add to the platform.
VR Avatar Options
The Lab is working on “switching on” the avatar in VR, including the ability to see your hand / body in first-person VR (one would hope this is also extended to Desktop as well); the ability to use hand gestures (e.g. give a thumbs-up, clench a fist, etc.) through the VR controllers, etc.
Avatar collisions: The R25 update should include the ability to disable the avatar collision capsule, making it possible for other avatars to come as close to you as they wish (and even pass through your avatar).
Finding people within an experience: this has been previously discussed, and the Lab is looking at options (e.g. a “teleport to friends” option, a teleport request option, or the ability to be teleported directly to friends on accessing an experience, etc.).
Voice indicator: another long-standing request – a means to more easily identify who is speaking on Voice – also being looked at.
Object hand-off: the ability to directly pass an object from one avatar to another is also being looked at by the Lab.
VR Look Book: this will be coming “soon” to Sansar, allowing VR users to change outfits and swap avatars without having to come out of VR.
Tactile feedback: a small vibration is being added to VR hand controllers when picking up or dropping objects.
Ability to change the client settings from within VR: this isn’t currently being looked at, but is seen as perhaps needing to be moved up the priorities list.
Text chat in VR: is seen as requiring a more technical solution – a virtual keyboard, etc., – although it is on the UI team’s radar.
Server crash: There are occasions when an experience server can crash, leaving the local “instance” of the scene running on the client. When this happens, the user has no idea the server has crashed – and nor, initially, does the client. As there can be latency and other network delays between server and client, the Sansar client has a very long time-out while waiting for updates (around 90 seconds). During this time, the only indicator that something has happened is that other avatars in the experience will not move or respond to voice / chat. Bagman has indicated the Lab will see if there is anything that can be done to make such crashes clearer to the user when they occur, rather than just waiting on the time-out.
A while back, Linden Lab offered Sansar users a free, basic gallery building space. It’s not overly complex or particularly big; but it remains a nice freebie to have. At the time I thought it could make a neat little studio gallery for showing off SL photography; all it needed was the right artist.
Step forward Wurfi, virtual worlds explorer, blogger and photographer.
Wurfi has – entirely independently of my own thoughts on the idea, which were never passed on to anyone – done just that. Wurfi’s Little Gallery is exactly what it says on the label: a little gallery exhibiting some of Wurfi’s SL photography; eight pieces in all (at the time of writing).
It’s a simple, elegant approach: the gallery sits essentially as a skybox, with a spawn point inside and the four walls adorned with Wurfi’s excellent photographs. It’s fast-loading, fun to visit, and offers a nice reminder of Second Life from within Sansar. It’s also a great little place for those who may not have tried Sansar to try out the client and the basic movement controls without being distracted or confused by Things. Just go, practice walking and admire the photography!
I’m hoping Wurfi expands it with more images from his Flickr stream in the future.
Thursday, August 16th saw the release of the Sansar Script, Snapshot and Share update. After the extensive updates in the July release, this is a more modest update, with a focus on what the Lab refers to as “quality of life” improvements – focusing on user-related capabilities, notably for creators.
This article highlights some of the more visible new features and updates with the release. As always, full details of the updates in the new release are available in the release notes. In addition, these notes also include comments from the August 16th, 2018 Product Meeting, which preceded the release. Boden attended the meeting, together with Aleks and Zaius. Their voices, along with that of Community Manager Eliot, can be heard in the audio extracts included below.
To jump directly to information on the upcoming Edit Server changes click here.
This release follows in the footsteps of the Events capability on the Sansar website, allowing you to add events to your local Sansar calendar, which also has its own tab within the Events panel.
To make use of it:
Within the client, either while displaying the Atlas or within an experience, click on the Events calendar icon in the top right set of icons. This will open the Events panel (note: you can also get to the Events panel via the open Atlas by clicking Featured > View All Events).
The Events panel now comprises just two tabs: All Events and My Calendar (the latter replacing the My Events tab). To add an event to your Sansar calendar, click on the Add To Calendar button.
You can then view all your recorded events (including those in the recent past) in the My Calendar tab. This actually lists:
Events you have created and are hosting, if you have created any.
Upcoming events you’ve added to your calendar (if any).
Those events you’ve recorded / attended in the past.
Listed upcoming / past events include a Remove From Calendar button, allowing your list to be managed.
Events added to your Sansar calendar will also appear on the web version of your calendar and vice-versa (a refresh of either will be required if both are open at the same time when adding / removing events from one or the other).
There is currently no ability to add events from the client to external calendars (Google, Apple, Outlook, Yahoo) as you can via the Sansar web site. This will hopefully be in a future update.
Snapshots to Profiles Update
It is now possible to save snapshots taken with the client to your Sansar web profile via a new button – Share. When you’ve positioned the camera and sized the capture area to your requirements, clicking the Share button will:
Save the image.
Upload it to your profile on the web.
Open a tab in your default web browser and display the snapshot.
In the snapshot web page, it is possible to:
View all of your snapshots.
View all snapshots of the Experience featured in a given picture.
View the latest snapshots uploaded by anyone.
Delete the snapshot you are displaying (your own snapshots only).
Report a snapshot (only available when viewing snapshots uploaded by others).
The options are listed above an image when viewing them in your browser, and are arrowed in the image above. You can also, of course, share the image URL if you wish.
You can view other people’s snapshots directly from their web profile. So, if you click on the name of an experience creator, or on the name of a friend in your Friends list, for example, you can view their snapshots alongside their published experiences and current store listings (if they have any of either of the latter). Clicking on a snapshot will display it in its own page, with the options described above.
Side Notes on Snapshots to Profiles
Snapshots to profiles can currently only be viewed on the web, they cannot be seen when viewing profiles from within the client.
There is no ability to caption a snapshot with a description. This is intentional on the part of the Lab, although it may be reconsidered in the future.
In the future, snapshots will be appended to the web pages for experiences as well, whether uploaded by the experience creator or anyone else (however, the experience creator will be able to moderate which snapshots remain displayed on their experience page).
This is why the ability to include descriptions in uploaded snapshots has been excluded; it is felt that there is too much risk of people leaving inappropriate descriptions with images, giving experience creators a moderation headache.
This option is ready to go, but will be turned on once the necessary moderation tools are in place for experience creators to manage snapshots shared to their experiences.
However, a future update to the capability will include the ability to tag snapshots, making them searchable.
Other snapshot items raised at the Product Meeting:
This update doesn’t change anything else within the snapshot app. However, there have been requests put forward which the Lab is considering:
Adding date and time to snapshots when captured.
Auto-generating sequential file names for snapshots taken in sequence, rather than each one having to be manually named.
Possibly offering a broader range of saved file formats (e.g. TGA, JPG, etc.).
One thing that is being considered is the option to take a series of snapshots and have them “held” during a session, allowing the user to then go through them and select which ones they want to upload to their profile, discarding the rest.
Edit Mode Improvements
Scene Report Generation
It is now possible to export a .CSV (comma-separated values) breakdown of every object in your scene, which may be opened in a spreadsheet or text editor. These reports comprise:
Size estimate for download.
Number of textures.
Number of triangles.
Reports are generated via Scene Toolbar > About This Scene > Generate Report > Set the destination location on your computer > Save.
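Because the report is a plain .CSV file, its per-object figures can also be totalled with a few lines of scripting rather than a spreadsheet. A minimal Python sketch follows; note that the column headers used here are assumptions for illustration and should be adjusted to match a real report:

```python
import csv
import io

# Totalling the per-object figures from a generated scene report.
# The column headers below are assumed for illustration; a real
# Sansar report may name them differently.
sample_report = """Object,Download Size (KB),Textures,Triangles
Tree,512,3,12000
House,2048,8,45000
"""

reader = csv.DictReader(io.StringIO(sample_report))
rows = list(reader)
total_triangles = sum(int(r["Triangles"]) for r in rows)
total_textures = sum(int(r["Textures"]) for r in rows)
print(total_triangles, total_textures)  # 57000 11
```

To process an exported report, replace the `io.StringIO(...)` wrapper with `open("scene_report.csv")`.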
Import Lighting from .FBX
This release allows creators to create point lighting (e.g. colour, intensity, animation) in their preferred editing tool and then import it directly into their scenes as .FBX files. Once the .FBX file is within a scene, the properties for these lights can still be edited.
Additional Edit Mode Enhancements
Locking persistence: objects locked within a scene when editing will now remain locked between Edit mode sessions.
Scene objects panel enhancements: these comprise:
Rename scene objects: the name fields for various scene objects have been removed from the properties panel, with the Rename option moved to the scene objects panel.
New object icons: there are new object icons attached to scene objects to help guide you in distinguishing items.
Toggle selectability per object: the ability to select an object within a scene can now be disabled or enabled. This allows for easier selection of objects which may be layered behind others, etc. (e.g. lighting within an object).
Trigger Volume filter: it is now possible to filter by trigger volumes.
New Simple Scripts
Simple scripts were introduced in the July release with the aim of offering non-scripters the ability to achieve basic functions within their scenes (such as opening / closing doors, etc.) in an easy-to-understand and simple manner. This release builds on them with three further simple scripts:
SimpleDispenser to rez objects.
Currently this does not include any form of parameter to allow spawned objects to decay, but does include the ability to remove the last or all spawned objects.
It includes the ability to cap how many items can be spawned in a given time.
Objects are spawned as they are imported into the script. So a dynamic object imported into the script will spawn as a dynamic object, for example.
SimpleMedia to change the streaming media – the Greenwall VR experience utilises the SimpleMedia script on their media board.
SimpleObjectReset to reset an object’s position.
Additionally, the SimpleCollision script has been revamped to better handle Trigger Volumes.
New Base Script Class: ObjectScript.
In anticipation of rezzable scripts (not yet enabled), this base class only has access to ScenePublic and a maximum of 10 parameters. SceneObjectScript scripts will not run on rezzed content; ObjectScript scripts can run on scene content or rezzable content.
Other Scripting Updates
Parameters limit for scene objects increased from 10 to 20 parameters.
ObjectPrivate.AddInteraction: adds an Interaction to an object dynamically. Used to add Interactions to rezzed objects, or when it isn’t desired that the Interaction prompt be a script parameter.
Improved syntax for [DefaultValue] on vectors, quaternions and colours. These no longer need to be specially formatted strings, simply list 2 to 4 values: [DefaultValue(1.2, 3.4, 5.6)]
SimpleScript base class deprecated. Not to be confused with the new Simple Scripts. Scripts that use this base class will still compile with a warning. Support for new compiles will be disabled in a future release.
It is now possible to browse the Sansar Store using the two new top-level categories of Avatar Looks and Scene Creation, with the sub-categories defined accordingly.
New Edit Server
Due to appear in a point release between the August (R24) and September (R25) updates is the Edit Server release. This moves scene editing from within the Sansar Client (and local) to being server-based. It means that when editing a scene for the first time, there will be a delay in accessing Edit mode and the scene being edited as the Edit Server instance is spun-up.
The reason for this change is to pave the way for a range of new capabilities in Sansar, most notably in relation to the platform’s upcoming licensing / permissions / supply chain system.
Moving the Edit capabilities server-side allows the Lab to incorporate the ability to check the licenses associated with all of the objects within a scene and verify what can / cannot be done with them (e.g. is an object / script modifiable? Can it be incorporated into objects intended for sale? etc.).
The initial benefit of this is that it will allow creators to build complex objects in a scene and then export them as a single object back to inventory (so a car is complete with its wheels, engine, seats, etc., rather than these all being individual objects), allowing the composite object to be sold.
Additionally, this will enable the licensing / permissions / supply chain system of Sansar’s economy, so that duly licensed objects by other creators can be used within an individual’s own creations, which can then be saved to inventory and sold through the Sansar Store. The first elements of the licensing / permissions / supply chain system are due to start deployment in upcoming releases following the switch to using the Edit Server. Beyond this, the move may in the future allow for things like creators being able to work collaboratively within the same scene.