The majority of the following notes were taken from my recording of the Sansar Product Meeting held on Thursday, April 18th, which served to introduce members of the Sansar team, discuss changes to featured experiences, and raise the possibility of Office Hours sessions with the product team. The meeting was followed on April 19th by an announcement concerning the use of PayPal as a supported payment service provider, which is noted at the end of this report.
The meeting took the opportunity to introduce:
Kelly Linden: Kelly has worked at the Lab for 15 years, primarily with Second Life, where he headed the scripting team; he now leads the scripting development team for Sansar.
Skylar Linden: Skylar heads the Sansar Live Events team, and is leading the work on events and the Atlas that is currently in progress.
Signal Linden: has been part of the Sansar team for three years, working in a number of areas, including the web API, and is currently focused on some transform tool changes in the Edit mode, which see the addition of hot keys to allow toggling between operations like move and rotate, etc.
Lankston Linden: a data analyst for Sansar, focusing on typical activities within Sansar – how many people are doing X at a given time, how many are using Y, etc. – helping to determine trends within the platform.
The analytics Lankston is helping develop are intended for use by the Lab, but Landon Linden indicated that analytics for use by Sansar creators are also planned. This should hopefully start to happen towards the end of the second half of 2019.
Harley Linden, who heads the Sansar support team.
Featured experiences (Recommended in the web version of the Atlas) have not been updated in a while. The web version comprises a 3×3 grid, and in the future:
Three of these featured experiences will probably be reserved for the Lab’s content partners.
Three will feature experiences that align with other internal goals the Lab has which “may or may not be obvious”. These might, for example, focus on live music concerts.
The remaining three will be open to Sansar creators, and will be changed on a weekly basis.
The criteria on how the last three are selected still have to be determined, however:
It is likely that a creator featured one week will not be eligible to be featured the following week, even with a different experience.
There is likely to be some form of public “voting / nomination” for potential experiences through the Sansar Discord channel.
It might be that the three slots will be determined by an over-arching theme (e.g. holiday themes during notable holiday weeks, or themes like clubs, games, etc.).
Sansar’s Discord channels have been opened to the public.
A new public channel (or channels) is available for those who might hear about Sansar and who want to check out the community, etc., before opting to install the client and sign up.
This means the content of the existing channels will be available for anyone on the public Sansar Discord channel to read, but only registered Sansar users will be able to post to them.
The Lab is additionally looking to reach out to other communities on Discord and generate interest in Sansar (e.g. the live music community).
In Brief from the Meeting
Linden Lab is looking to expand the number of avatars a single instance of an experience can handle. The limit has been 35, but 50 is being looked at with tests of up to 65.
There is a known issue with voice that can see it suddenly drop / fail within an experience where multiple people are chatting.
With the next release (as previously reported) events and the Atlas should be fully re-integrated. However, it is still TBD whether the event will feature the primary instance of the supporting experience, or will be tied to a dedicated instance of the experience.
Product Office Hours: the idea here is for sessions (possibly on Twitch) in which specific topics could be addressed (e.g. “how to rig an avatar”). Feedback on this idea has been asked for via Discord.
On Friday, April 19th, Linden Lab announced – via Steam – that with immediate effect, they would be supporting PayPal as a payment service provider, allowing users to purchase Sansar Dollars, store items, and tickets to events using PayPal. This is something users have been requesting since at least the time of the public beta.
The following notes were taken from my recording of the Sansar Product Meeting held on Thursday, April 11th, which was intended to cover custom avatars, upcoming changes to the Sansar Discord channel, and moderation capabilities in Sansar.
First Time Avatar Selection for New Users
With the March Jumping and Questing release, new users can select from a range of (user-created) custom avatars via a new carousel (or opt to create their own look using the system avatars via the Customise button).
Stats show that around 50% of new user avatar selections are now custom avatars.
Custom avatars are currently selected on the basis of:
How good the avatar looks in the Lab’s eyes, particularly to gamers.
How appealing the avatar is to all allowed age groups – including whether the avatar is fully clothed (allowing new users to go directly “in-world” to experiences without having to go to the Look Book to customise their look).
Whether the avatar can be easily dressed using Marvelous Designer clothing.
Whether the avatar is offered for free in the Sansar Store.
The selection of custom avatars will continue to be curated and updated.
Avatar Contest: New User Carousel Edition
To encourage creators to make custom avatars suitable for presentation to new users through the carousel, Linden Lab have launched the Avatar Contest: New User Carousel Edition.
Creators are invited to present custom avatars suitable for use on the new user avatar carousel.
Avatars must meet the requirements outlined above, and additionally must have properly rigged hands and mouths.
Creators may submit up to five entries (but can only win one of the prizes).
Five avatars will be selected from the entries, and their designers awarded S$27,500 each.
The emphasis is on humanoid avatars, or human anthropomorphic avatars.
Avatars can be submitted through until 17:00 PDT on Thursday, May 9th, 2019.
If the contest proves popular, it may be run again in the future.
Changes to the Sansar Discord channels are due to come into effect from Monday, April 15th, 2019.
Those who wish to keep their view of Discord as it is will be able to do so. All of the changes / new features will be on an opt-in basis.
A new public channel (or channels) is to be introduced (most likely on Monday, April 15th) alongside the current “user” channels.
This is specifically aimed at those who might hear about Sansar and who want to check the community, etc., before opting to install the client and sign-up.
Existing users are encouraged to join the public channel and participate in discussions there, answer questions, etc.
The existing “user” channels will remain only accessible for interaction to those with a Sansar account (which will continue to be the point of entry to them).
However, the content of the channels will be available for anyone on the public Sansar Discord channel to read.
If the public channel aspect doesn’t work as anticipated, it might be rolled back in the future.
This was more of a general discussion / feedback session, rather than an announcement of new features or changes to the current moderation / blocking capabilities. Key points from the discussion are summarised below:
The current set of moderation tools, with their focus on avatar / avatar blocking, have been developed from the perspective of helping new users deal with unwanted situations. It is acknowledged that a broader toolset is required.
At present, users can block one another’s avatars – useful if someone is being a particular nuisance, but the experience owner isn’t around to intervene, or if they are just being a very specific annoyance for another user. However:
There has been feedback that the current blocking is insufficient, as it doesn’t remove the blocked user’s text comments from local chat.
There is no personal block list, so it is difficult for some to determine whom they might have blocked (possibly accidentally) without the subject of their block actually being present in an experience with them (but being invisible to them).
How to unblock also lacks clarity.
A lack of ability for experience creators to quickly ban troublemakers from causing issues within an experience being enjoyed by others – such as a single-click eject (back to the user’s Home Space?) / ban capability.
Avatar blocking doesn’t work in this situation, as it is purely avatar-to-avatar, so the nuisance can still go on to bother others in an event game, and can also be disruptive as they can continue to interact with elements in the scene.
Having to record an avatar’s name, then go to Create > Build Experiences > My Experiences > Publish > Publishing Options > Ban List and then add a name is (rightly) seen as too long-winded.
This is something the Lab has considered alongside the current moderation tools and are planning to provide. However, it is also something that hasn’t as yet been prioritised.
However, the Lab have looked upon such capabilities as being more event-driven (e.g. large-scale events that require specific moderation / some form of moderator role which would include the required capabilities).
The problem with the Lab’s approach is that potentially, without a broader, more accessible set of moderation capabilities available to them, experience creators already in Sansar are reluctant to hold major events of their own, simply because of the overhead involved in taking action against a troublemaker (or worse, a group of troublemakers) with the current capabilities.
Part of the Lab’s approach to moderation is to provide tools that allow users to be both pro-active and to consider the options at their disposal in accordance with a situation (does someone’s behaviour actually warrant blocking, or is muting sufficient? Should they be banned from an experience – where banning is an option – or should they be reported? Should they be banned and reported? etc.).
As it is, blocking / muting in VR is not that intuitive. The Lab is aware of this and looking to improve things.
The following notes were taken from my recording of the Sansar Product Meetings held on Thursday, April 4th. The first meeting, lasting 30 minutes, focused on Sansar Events and the changes made to the events system. The second meeting, lasting an hour, focused on upcoming features. This update focuses on the reasoning behind the events changes, and the confirmed work coming up for the R32 release and beyond.
The Questing and Jumping Sansar release saw some changes to how events are managed. In short:
Events can no longer be joined by finding an experience; they must be joined via the event calendar, with the event itself a special copy (not an instance) of the experience.
Active events are listed on a new Featured tab – Client Atlas only.
Event creators can change the scene tied to an event, customise the scene like any experience, and delete the experience if it’s no longer needed.
As Linden Lab is looking towards much more in the way of events hosting – specifically with “big partners” – these changes have been made to improve the management of events and the Atlas, and to better support the running of ticketed events. Thus, events are now viewed as much more their own type of experience, rather than being an activity tied to an experience and the Atlas.
The intent is to make events more of a driver of Sansar use.
In particular, it is hoped that ticketed events (with the hint of tiered ticketing, such as “general admission”, “VIP admission”, etc.), will be better supported through the new system.
This has resulted in making events harder to find via the Atlas, as events no longer appear towards the top of the event pages on the basis of the number of people attending. To address this (and other pain points):
With the next (R32) release, events will be folded back into the Atlas, rather than only appearing on the Events panel.
Some of the Atlas UI will be revised in order to make it more user-friendly to established and new users alike when trying to find things.
These revisions should mean that users more clearly see both experiences and events as they browse through the Atlas.
The featured section of the Atlas will be changing, so that both experiences and events will be listed.
More intelligence will be put behind featured events; e.g. events with an active attendance will be pushed into the featured listing in place of any specified as “featured” that have no attendees (perhaps because they have not yet opened).
It will not be possible to bookmark / like the copy of an experience in which an event is being held; however, it will still be possible to add events to your calendar.
In time, it might be that recurring events will be automatically retained in a calendar, rather than just the current date / time.
Concerns have been voiced that as events now take place in a dedicated copy of an experience, users will no longer be able to bookmark the original experience so they might return to it at a later date.
A counterpoint to these concerns is that currently, Sansar has disparate ways to find out what is going on: the Atlas to see what experiences creators are making, the Store to see what goods creators are making, etc., and it might be better to present users with a more holistic view of what creators are doing as a whole (this is kind-of possible via profiles).
However: a) such an approach, if taken, has yet to be clearly thought-through to determine how best to implement it; b) it will not compensate creators who wish to use events to encourage visitors to their actual experience.
Future Developments: R32 and Beyond
R32: Full Body Tracking
Vive Tracker support has been added to allow full body tracking.
The Lab provided a video of a Sansar skeleton moving in time to Landon McDowell’s daughter (wearing the trackers) as she danced (see below).
It is hoped that this will add a new dimension to Sansar for VR users (or at least, Vive system users for now), and that it might in the future be used to record custom emotes (gestures in SL parlance).
This has involved overcoming a lot of technical challenges, and there is still more work to do to refine the system further.
R32: New Movement Scripting API
Allow objects to rotate and turn.
Include a set of supporting Simple Scripts.
Enable frame-perfect animation; movements can be properly queued to be executed at specific times / in a specific order by the engine on the desired frame.
Allow animations to ease in and ease out from one to the next.
Support multiple animation states, with blending between them.
This system should work for non-rigid bodies as well.
A major part of this is to allow the development and movement of non-player characters (NPCs), allowing simple patrol / follow behaviours, etc.
Initially these will be relatively simple: there is no pathfinding, complex interactions, etc.
However, it does include line-of-sight detection (e.g. if an NPC is designed to chase an avatar, it will only do so if it can “see” an avatar, so such NPCs could be avoided by keeping things like walls between you and them – see the official video below).
LL plan to use this system to add some simple NPCs to the questing capabilities.
Longer-term it is hoped to have NPCs supporting looks, expressions, being able to play recorded voice audio and use Speech Graphics, utilise MD clothing, etc.
This should allow dynamic assets to drive animated assets, which in turn should enable things like animated held objects like guns, etc.
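To make the line-of-sight idea concrete, here is a minimal 2-D sketch (purely illustrative – this is not the Sansar scripting API, and Sansar scripts are not written this way) of the kind of check a chase-style NPC behaviour implies: treat walls as line segments, and only “see” the avatar if no wall crosses the sight line between NPC and avatar.

```python
def _ccw(a, b, c):
    """True if points a, b, c are in counter-clockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses segment p3-p4 (general position)."""
    return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4)
            and _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

def has_line_of_sight(npc, avatar, walls):
    """The NPC 'sees' the avatar only if no wall blocks the sight line."""
    return not any(segments_intersect(npc, avatar, w1, w2) for w1, w2 in walls)

# An avatar hiding behind a wall is invisible to the NPC:
wall = ((5.0, -1.0), (5.0, 1.0))
print(has_line_of_sight((0.0, 0.0), (10.0, 0.0), [wall]))  # False: wall in the way
print(has_line_of_sight((0.0, 0.0), (4.0, 0.0), [wall]))   # True: same side of the wall
```

This is why keeping a wall between yourself and a chasing NPC would work: the sight-line test fails, so the chase behaviour never triggers.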
Avatar crouching for desktop will be added as a part of R32. This should reduce the avatar capsule to allow walking through spaces with low ceilings, etc.
New throw capability for Desktop mode which should allow more accurate throwing and placement.
Throwing will likely include direction and strength indicators for Desktop mode users, allowing them to participate in games with VR users.
Placement controls should allow Desktop mode users to more accurately place items they have picked up, rather than simply dropping them (e.g. pieces on a chess board can be accurately placed).
Uniform scaling for avatars: scale them up or down proportionally.
Scaling of individual body parts (e.g. an arm or the head) will not be possible until the “avatar 2.0” is released.
Partial animation import:
Allowing a partial / simplified Sansar skeleton (with around 70 bones) to be downloaded, with animations made directly off of it.
Being able to hook this into Mixamo’s animator tools.
Feedback on the latest updates for using Marvelous Designer is being gathered, and examples are being put together, to be fed back to MD in the hope of getting things to work as expected.
Work is continuing on the questing system, including opening it to user-generated experiences.
Currently, the focus is on getting the system to work for the Lab and ironing out issues and bugs.
For the R32 release, the Lab hope to provide more quests beyond those added in the Questing and Jumping release, with the emphasis on showcasing what is possible, rather than building entire games.
Performance team continuing to work on a LOD system and performance improvements to get more avatars into an experience.
There is also a release with a focus on crash fixes coming soon.
Run-time joints: a feature which will hopefully appear later in 2019 that would allow users to parent things to their avatar (e.g. take a hat in an experience and wear it within that experience).
The following notes were taken from my recording of the Sansar Product Meeting held on Thursday, March 14th. The full official video of the meeting is available here. The topic of the meeting was the upcoming R31 release due at the end of March, and the Sansar avatar. In attendance from the Lab were community manager Galileo, with Landon McDowell, the Lab’s CPO, and Sansar project manager Cara, together with Nibz, Nyx, SeanT, Harley, Lacie, Stretch, EagleCat and Stanley.
Skin, Clothing and Custom Avatars
The next release should see the default Sansar avatar have skin tinting enabled. If I understood correctly, the skin will have six basic swatches of colour, which can then be adjusted to allow users to generate a wide range of skin tones.
It will be possible to dress custom avatars (until now, these have had to be supplied with clothing that cannot be removed or altered or added to).
This option is regarded as a beta release.
It will allow custom avatars to be dressed, use hair and accessories and wear clothing developed in Marvelous Designer (MD).
The adjust clothing option for MD clothing within the Look Book should work with custom avatars, and the Lab is working to make this capability easier to use in Desktop mode.
Dressing custom avatars will probably not work well with items rigged to fit the default avatars, and the Lab would appreciate feedback on this: how to improve the system, what problems are encountered, etc.
Obviously, the closer a custom avatar is to the default avatar, the better things are likely to work.
Future updates to follow this initial release will include:
Adjusting rigged accessories to correctly fit custom avatars.
Attachment points (e.g. select a right hand to add a watch).
Allow MD clothing to be moved, rotated, uniformly scaled, etc.
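As an aside on the skin tinting mentioned above, the “base swatch plus adjustment” approach can be sketched as follows. This is purely my own illustration of the concept: the swatch values and the lightness / saturation model are made up, not Sansar’s actual implementation.

```python
import colorsys

# Hypothetical base swatches (RGB in 0..1); Sansar's real values are unknown.
BASE_SWATCHES = {
    "light": (0.96, 0.80, 0.69),
    "tan":   (0.82, 0.63, 0.48),
    "deep":  (0.45, 0.30, 0.22),
}

def tint(base_rgb, lightness_shift=0.0, saturation_scale=1.0):
    """Derive an adjusted skin tone from a base swatch via HLS space."""
    h, l, s = colorsys.rgb_to_hls(*base_rgb)
    l = min(1.0, max(0.0, l + lightness_shift))
    s = min(1.0, max(0.0, s * saturation_scale))
    return colorsys.hls_to_rgb(h, l, s)

# A darker variant of the "tan" swatch:
darker_tan = tint(BASE_SWATCHES["tan"], lightness_shift=-0.1)
```

The point of such a scheme is that a handful of curated base colours, each adjustable along one or two axes, can cover a wide, plausible range of skin tones without users having to pick raw RGB values.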
Avatar Editor Changes
The Save and “return to world” functions are being separated into their own buttons.
The Save function will allow users to save their changes to the current avatar or as a new avatar look in the Look Book.
A reset button will be added to the adjust clothing capability for MD clothing, to overcome issues of things being “dropped”.
There are a number of cosmetic updates to the avatar editor UI.
New User Flow
The new user on-boarding flow will be extended to include a selection of custom avatars as well as the existing default avatar set.
Users with the Grey avatar will also be able to dress it until they decide on an avatar.
Animation Updates and Improvements
Jumping is almost ready for release.
Different heights of jump can be achieved when using the jump button / key, and jumping will be gravity sensitive (the lower the gravity in an experience, the further / higher the jump).
It is hoped jumping will open up the opportunity for platform style game experiences.
Future updates to this might include:
Scripted control of jumps (e.g. gain a “power up” in a game to jump higher / further).
Adding an animation override for custom jump animations.
A “falling” animation will also be released with jumping (so if you step off a wall or cliff, the avatar will fall, for example).
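The gravity sensitivity mentioned above follows from basic kinematics: for a fixed take-off speed v, the peak height of a jump is h = v² / 2g, so halving an experience’s gravity doubles the jump height. A quick illustrative calculation (the take-off speed is an arbitrary value I have chosen, not an actual Sansar parameter):

```python
def jump_height(takeoff_speed, gravity):
    """Peak height of a ballistic jump: h = v^2 / (2g)."""
    return takeoff_speed ** 2 / (2.0 * gravity)

v = 4.0  # m/s, hypothetical take-off speed
print(jump_height(v, 9.81))   # Earth gravity: ~0.82 m
print(jump_height(v, 4.905))  # half gravity: twice the height, ~1.63 m
```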
Walking / Running
The default walking and running speeds are to be increased.
The animation graphs supporting these may also be opened to scripted adjustment in the future, depending on feedback.
VR Animation Improvements
Work is being done to improve the sensation of being “grounded” (more connected to the virtual ground) when moving around in VR.
Work is also being carried out to better handle arm and hand positioning when in VR (e.g. so removing a headset in VR doesn’t result in the avatar’s hand being weirdly positioned or its arms going through its body).
General R31 Updates
Users will be able to see their bodies when in Desktop first-person view (this won’t include seeing weapons correctly held in the first pass, so carry on shooting yourself in the foot for now in Desktop mode 🙂 ).
Save current location: when you go to Look Book within an experience, you will be returned to the last “safe” location you occupied in the experience, rather than being forced back to a spawn point (“safe” as in not being spawned in mid-air if you were travelling on a moving surface when you entered Look Book, for example).
Teleport to friend will be updated so:
If you are in a different experience on teleporting, you will teleport to them in the scene they are in, and not to the experience spawn point.
If you are in the same experience when using teleport to a friend, the experience load screen will no longer be displayed.
“Safe” locations for teleporting will apply in both cases.
Creators of game-type experiences or similar that require a specific spawn point, and who do not want people randomly popping-up in their experiences will be able to set a flag to override this and divert teleports to their assigned spawn point.
Grabbing objects in VR should no longer display the blue beams, but allow users to naturally reach and take objects.
The object’s outline will still be highlighted.
A grabbed object should stick in the hand a lot better.
Other work in progress for future releases includes:
Uniform scaling of avatars: users will be able to uniformly scale avatars up / down (precise range still to be decided), and will include automatic scaling of clothing and attachments.
Also TBD with this is whether or not the avatar capsule should scale as well, whether or not walk / run speeds should scale, etc.
Scaling will see foot placement and grounding in VR mode correctly adjusted as well.
More parity between Desktop and VR when grabbing / throwing objects.
These updates will include a “throwing beam” for desktop users so they can see the trajectory of an object were they to throw it, and then adjust it.
Work is continuing on the default avatar 2.0 (greater customisation, etc.).
Full body tracking in VR is being investigated (e.g. using the additional trackers for the HTC Vive). This could open Sansar for a lot of performance related activities.
Windows Mixed Reality (WMR) gear, etc., should work better with the next release. However, this is a user-generated fix, and it shouldn’t be taken to mean Linden Lab are supporting WMR, etc.
A frequent Sansar question asked of the Lab is why they opted to build their own platform engine, rather than choosing to use an off-the-shelf engine such as Unity or Unreal. To try to address these questions, as a part of the Sansar Product Meeting on Thursday, March 7th, 2019, Richard Linden (Sansar’s Chief Architect) and Jeff Petersen (aka Bagman Linden), Linden Lab’s Chief Technology Officer, gave a 25-minute overview of the Lab’s thinking on the matter and what they see as the advantages / disadvantages of either building their own engine or using an existing product.
The following is a summary of their discussion, with audio extracts, the full version of which can be found in the official video of the product meeting. Note that in presenting them, I have attempted to group comments made by topic, as such the audio extracts below do not necessarily chronologically follow the meeting video. However, care has been taken to not lose the overall context of the comments made by either Jeff or Richard.
The presentation was followed by a more general Q&A session, which also involved Landon McDowell (Linden Lab’s Chief Product Officer) and Nyx Linden. Some of the questions relevant to the core subject of the platform, engine and roadmap are also included. Please refer to the video for all questions asked at the meeting.
Jeff Petersen – Introduction
The decision to create an engine from the ground up was made some 5 years ago.
There is a degree of envy towards the likes of Unity and Unreal, particularly because of their maturity and cross-platform (operating system) support, plug-in support, etc.
However, the Lab’s aim is to build a platform that can compete with other platforms in the same arena, which includes both Unreal and Unity. Ergo, using one of those platforms as the foundation for something like Sansar made (and makes) little sense, simply because doing so would deny a product the kind of low-level control required to differentiate from those platforms.
In addition, there are directions in which Linden Lab would like to take Sansar and user-generated content (UGC) which perhaps do not sit well with the use of an off-the-shelf engine.
Further, in opting to build their own engine for Sansar, LL has been able to make use of a number of third-party applications and tool sets (perhaps the most notable being Speech Graphics facial animation software and Marvelous Designer), which allow LL to leverage capabilities and integrate them into the platform to provide the necessary flexibility of approach / use. This kind of approach isn’t so easy with platforms such as Unity and Unreal, which are seen as more “full stack” solutions: once you start using them, you are pretty much locked into the technology they support and the integration they want to provide.
A further point of note in the decision to build Sansar’s engine is that Linden Lab obviously has an enormous amount of experience with development systems designed to support and enable user-generated content and user creativity.
In this respect, Richard Linden, Sansar’s Chief Architect, is one of the longest-serving members of the Lab’s staff, having been with the company since before the development of Second Life, in which he played a significant role.
Richard Linden – Constraints
An important thing to remember with Sansar is it is not just a virtual world, it is a platform creation engine intended to allow users to design and implement compelling content they can use to attract an audience.
So again, Sansar is in competition with the Unitys and Unreals that are out there, and thus needs to be differentiated from those platforms. This is done through the constraints / requirements LL apply, their own unique experience in running Second Life and in handling UGC on a scale and of a type normal games do not have to deal with.
In terms of constraints, LL recognised a number of performance related constraints that informed their decision to develop their own engine:
Rendering: UGC comes in many flavours, from the optimised to the unoptimised, from the managed poly count to the outrageous poly count. Sansar has to provide a rendering system that can handle all of this, and ensure that it can deliver experiences to users that offer a smooth experience in VR and do not cripple a user’s computer in doing so.
Physics: again, the physics engine must be robust enough to handle all kinds of UGC, optimised and unoptimised. Here, LL have 15 years’ experience of using the Havok physics engine in Second Life, so it made sense to leverage that experience in Sansar.
Scripting: experiences and (in time) avatars will be liable to have many, many scripts running in them / associated with them, scripts which (again) might not be optimised for streamlined execution, so the platform needs a scripting engine that can scale to the demands being placed upon it as experiences become more complex in their capabilities and avatars evolve (and appear in Sansar in (hopefully) greater numbers over time).
As noted by Bagman, this includes managing malicious (deliberately or otherwise) scripts whilst keeping the scripting environment open enough for creators to be able to do what they want with their scripts. This is something not easily achieved within a “full stack” engine architecture without requiring substantial changes to its scripting system.
Audio and UI capabilities: again providing the necessary flexibility for support of audio content from creators (FMod) and a UI tool (NoesisGUI) that is flexible enough to meet the needs of creators and of consumer users.
Richard and Jeff – Asset Management
A further constraint is the sheer volume of UGC assets.
Second Life has an estimated 24 billion UGC assets associated with it.
Linden Lab hope that in time, Sansar will be at least an order of magnitude bigger than this.
To avoid issues of having to reprocess data associated with assets, SL and Sansar are founded on the idea of the immutability of assets. Linden Lab promise that so far as is possible, updates to the platform will not break existing content.
The majority of games built on other engines don’t have to deal with any of this.
They have a comparatively low number of assets to deal with.
When they update (or the engine they use updates) they can do one of two things:
Reprocess their assets, then burn and ship a new version of their game.
Remain on the current version of the engine and use the newer version for their next project.
Given the above, engines like Unreal and Unity aren’t geared towards dealing with massive amounts of asset data or in maintaining the immutability of assets, as that is not expected of them.
Using such engines for an environment like Sansar, where assets could be expected to be relevant for years (as is the case with Second Life now), and continue to work “as is”, without having to be reprocessed by the Lab each time the engine is updated, is therefore a non-starter.
Richard – Aims in Building an Engine
LL ultimately want to make Sansar an environment where anyone can create and share, whether or not they are “hard-core” content creators. This means Sansar needs to:
Support users who may not create original content, but can use that content (as provided by others) to express themselves and present environments they and their friends can enjoy.
That has the ability to take on a lot of the heavy lifting involved in content optimisation, etc., and which doesn’t necessarily require those creating environments to have in-depth / professional level knowledge on scene optimisation, content development, etc., that might be required in other platforms.
That can (in time) offer a collaborative content creation environment, so people can work together to design and build as well as visit experiences together.
Collaborative editing does not only mean being in a shared editing space, it means also having access to all of your chat and communications tools to be able to stay connected to friends who are in Sansar but not in your edit environment – again, these types of capabilities aren’t necessarily provided in other engines.
Not all engines have all of these types of capabilities built-in. And even where third-party plug-ins are available to achieve aspects of the functionality, they may not actually be as flexible to use, or as well suited to Sansar’s particular constraints, as might initially seem to be the case.
Jeff – IP Protection
IP protection has been, and remains, a major consideration, and was regarded as a show-stopper for using other engines.
Sansar is designed to provide a supply chain economy, with individual rights respected in the use of component elements (e.g. whether an item can be used only within a scene, how it can be used as a component in someone else’s model, how royalties are safeguarded and paid in respect of the latter, etc.).
The use of personal inventory and the Sansar Store is also viewed as potentially being seamless (e.g. scene creators can use items they upload to their inventory and / or items available on the Store, up to and including the potential for “try before you buy” from the Store, with all rights again respected).
This kind of protection isn’t seen as being offered by engines like Unreal or Unity without a considerable amount of code re-writing which, as it would be part of the overall engine “stack”, runs the risk of having to be re-implemented each time that particular aspect of / tool within the engine is updated by the provider.
Richard and Jeff – Broader Pros and Cons
A major disadvantage with using an off-the-shelf engine is seen as the back-end support.
There tends to be very little out-of-the-box support to meet the requirements a platform like Sansar has.
Trying to engineer around this using such a product can be difficult, particularly given the amount of information sharing that goes on between the Sansar client and the back-end.
Most likely, it would have meant working on the engine’s code – effectively a fork of the original Unity / Unreal / whatever code base, which itself opens up all sorts of headaches:
The forked code won’t really be supported by the engine provider.
How is the forked code maintained, and how are major updates to the engine handled and merged without potentially breaking it?
This is already a problem for LL with Havok.
As mentioned above, there is the issue of longevity. Games built using engines like Unity tend to have a finite dependency on the engine: once they are shipped, that’s largely it, and there is no need to maintain full backwards compatibility; the next game can be built on the latest version of the engine. Sansar doesn’t have that luxury, and most engine providers don’t see it as a need.
The case against a dedicated engine is that, obviously, it takes a lot longer to build out all of the necessary functionality that an off-the-shelf product might provide.
LL is a small company with limited resources, ergo, building their own engine is a long-term task.
However, LL is uniquely positioned to be able to afford to take on the work, and has a fully supportive board who recognise the effort.
On Thursday, February 28th, Linden Lab issued the C’mon Get Happy release. This is a rather small update compared to previous releases. The full release notes are available; the key features might be summarised as:
Save and sell a collection: creators can now pull a group of objects from a scene and save it back in their inventories as a single object.
All script relationships and relative positioning for the objects will be stored in that single object, making it easy to drag and drop a collection of items in a scene or sell it in the store.
Note the objects will not be linked: when placed back into a scene, they will remain a group of individual objects. Object linking will be coming in a future release.
Smoother gifting: there is a new notification to let receivers know that they received Sansar Dollars from another user.
Draw distance limit: creators can now define an object’s draw distance limit from the properties panel. The draw distance defines the distance at which an object starts to render in the scene.
For example, if an object’s draw distance limit is set to 10 metres, the object will no longer be visible when a user in an experience is beyond 10 metres from the object.
This is currently set to infinite by default, so creators are asked to set it when building their scenes.
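The mechanics of a draw distance limit can be illustrated with a simple distance check. The following is a generic sketch only – the function and parameter names are invented for illustration and are not part of Sansar’s actual API:

```python
import math

def is_visible(avatar_pos, object_pos, draw_limit):
    """Return True if the object should render for this avatar.

    draw_limit is the object's draw distance limit in metres;
    None represents "infinite" (the current default), i.e. always drawn.
    """
    if draw_limit is None:
        return True
    distance = math.dist(avatar_pos, object_pos)
    return distance <= draw_limit

# An object with a 10 m limit, viewed from 12 m away, is culled:
print(is_visible((0, 0, 0), (12, 0, 0), 10))    # False
print(is_visible((0, 0, 0), (8, 0, 0), 10))     # True
print(is_visible((0, 0, 0), (500, 0, 0), None))  # True (infinite default)
```

As the example suggests, leaving the default at infinite means every object is always considered for rendering, which is why creators are encouraged to set sensible limits.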
Extended limits on uploaded Avatar items: the proximity limits on clothing, accessories, and hair are expanded, with the Axis Aligned Bounding Box (AABB) area increased by 0.1m left/right and 0.3m front/back.
This means that the AABB area is now min(-0.9m, -0.9m, -0.05cm) max(0.9m, 0.9m, 2.2m).
This change does not affect emotes nor custom avatars.
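Checking whether an item fits within an AABB is a straightforward min/max comparison on each axis. The sketch below is purely illustrative (a hypothetical helper, not part of Sansar’s upload tooling, and the example bounds are made-up values in metres):

```python
def fits_in_aabb(vertices, aabb_min, aabb_max):
    """Check that every vertex (x, y, z) lies within the axis-aligned
    bounding box defined by aabb_min and aabb_max (all in metres)."""
    return all(
        lo <= v[i] <= hi
        for v in vertices
        for i, (lo, hi) in enumerate(zip(aabb_min, aabb_max))
    )

# Illustrative bounds only, not the exact Sansar limits:
box_min, box_max = (-0.9, -0.9, 0.0), (0.9, 0.9, 2.2)
verts = [(0.5, -0.2, 1.8), (-0.85, 0.4, 0.1)]
print(fits_in_aabb(verts, box_min, box_max))               # True
print(fits_in_aabb([(1.0, 0.0, 1.0)], box_min, box_max))   # False (x exceeds 0.9)
```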
New avatar reference files: the avatar reference files have been updated and can now be found here.
Emojis have been added to chat.
The font used is Segoe UI Emoji, which is not supported by Windows 7. Users on that operating system will see an X in a box whenever an emoji is used.
Two key bug fixes for the release are:
Servers should spin up faster when trying to access an experience which has no-one in it.
Chat should no longer scroll to the top when opening the chat panel.
Again, for the full list of updates, please refer to the release notes.
Sansar as a World
This is something that has been mentioned in recent product meetings – a shift to emphasising Sansar as a “World”, rather than as a collection of discrete experiences. Commenting on this at the product meeting, Landon McDowell, the Lab’s CPO, explained the reasoning behind it thus:
We asked ourselves what was really missing from Sansar and what we wanted to add to it, and one of the things that kept coming up consistently is … one of the magical things in Second Life is it feels like a world. It feels like place … and when we designed Sansar, we didn’t really implement that; it was a design decision. We wanted the individual worlds to stand alone, and be disconnected and independent … [Now] we feel that lack of place … is something that we’re personally missing and something we want to add into Sansar.
– Landon McDowell, Linden Lab CPO
Questing and Gameplay
The focus of the February 28th Product Meeting was the upcoming Quest / rewards / achievements system that has been the subject of recent Product Meetings. This is seen both as a means to help on-board new users to Sansar and – linked to the above – as a means of providing a capability that can allow greater gaming and questing with common roots across experiences, thus helping to give a feeling of continuity between them.
Part of this is what the Lab is calling Directed Play, which is liable to start appearing over the next couple of releases (March / April), as outlined by Stanley, the Director of Product for Sansar, and Aleks Altberg:
The first pass at a quest system. This will initially be a basic approach of complete a task / achieve an objective, and receive a reward.
This will initially feature quests formulated by the Lab, so will be player-focused, but over time the system will be opened out to allow creators to build using the tools.
For the initial release, as it will feature game play from the Lab, the rewards will be small Sansar dollar amounts, as these are the easiest thing for the Lab to offer.
The system will be broadened such that when Creators are able to use it, they will be able to offer items as rewards – accessories, clothing, custom avatars, etc.
The ability for creators to use the system and offer rewards will hopefully be made available in the spring / late spring of 2019.
Longer-term, the Lab is also thinking about progression systems, e.g. experience points / levelling system or achievements.
These are again being considered in terms of both how the Lab might use them and how creators can incorporate them into their experiences.
This work might start to be surfaced in the summer of 2019.
The first quest that will be deployed in the March release is the previously mentioned “tutorial quest”, specifically aimed at new users. This will take them through the basics of walking, talking, running, interacting with objects, etc.
Ultimately, it will push new arrivals into the Social Hub, which will include a new area focused on quests, and tentatively referred to as the Quest Giver.
The Quest Giver will have a couple of further quests provided by the Lab:
A scavenger hunt spread over some of the experiences provided by Sansar Studios, where players have to locate various Easter Eggs and return them to the Quest Giver.
A guided tour approach to various Sansar Studio experiences, with landmarks participants must visit.
Both formats will include rewards on completion.
One thing the Lab does not want to get into, outside of some “premium” content they will produce, is building quest style content over and over. The focus is very much on producing a set of tools that can be leveraged by content creators whilst providing users with a consistency of use across different types of quest.
Q&A Session On The Quest System
Will creators be able to assign and store data against players (experience points (XP), etc)?:
The plan is to have a global XP system that works across all of Sansar, but this has not been fully defined. However, the idea is to allow content creators to contribute towards it.
This does not prevent creators using their own system if they so wished.
One issue is that anyone can be a creator and anyone can be a player, therefore the system has to be robust enough to avoid being gamed, and this is one of the reasons the Lab is approaching the XP system carefully.
Will creators be able to gift questors with rewards automatically?: Yes, but creators are asked not to think of it as “gifting”; the Lab doesn’t want users to have the expectation of a reward dropping into their laps on completion of every task. Rather, the idea is to make these games an overall quest that results in a reward being given (e.g. a product the creator might otherwise sell in their store).
More broadly, the gift capability will remain separate from the quest system and the concept of rewards.
Will it be possible to build experiences that only user reaching certain XP levels can enter? Possibly, but the Lab has not got to the point of considering this type of specific requirement as yet.
Will it be possible to assign animated characters (NPCs) as quest givers? Eventually, yes.
Will it be possible to branch quests (e.g. complete task A, then either go on to B or C, rather than having to complete B then C)?
Initially, where quests are related, there will be a linear progression: if you want to do quest B, you must complete quest A.
Longer term, branching might be possible, as the Lab is still putting ideas together (hence requesting feedback through this PM).
Where quests are not related, it is possible to participate in more than one: if quests X, Y and Z stand independent of one another and have no requirements one to the next, a user can be involved in all three simultaneously.
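The distinction between related (linear) quests and independent quests amounts to a simple prerequisite check. The sketch below is purely illustrative – the Lab has not published its data model, and all names here are invented:

```python
# Each quest optionally names the quest that must be completed first.
QUESTS = {
    "quest_a": None,        # no prerequisite
    "quest_b": "quest_a",   # linear chain: A must come before B
    "quest_x": None,        # independent quests can be run
    "quest_y": None,        # concurrently with anything else
}

def can_start(quest, completed):
    """A quest may be started if its prerequisite (if any) is completed."""
    prereq = QUESTS[quest]
    return prereq is None or prereq in completed

print(can_start("quest_b", {"quest_a"}))  # True: A is done
print(can_start("quest_b", set()))        # False: must complete A first
print(can_start("quest_x", set()))        # True: independent quest
```

A branching system, as mooted for the longer term, would simply allow a quest to list alternative prerequisites rather than a single linear one.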
Will creators be able to set-up and run multiple instances of popular quests they create and track usage, etc? Not initially; but if it becomes necessary, the Lab will consider it.
Will it be possible to have objects that can only be obtained / used by players reaching a certain level? Once the levelling system is introduced, most likely yes, but objects like that would require explicit scripting on the part of the creator.
Will players be able to pick up items and add them to a local inventory (“backpack” or similar) to carry around and use as required, rather than being limited to just carrying things by hand? Potentially, by means of scripted support.
Will there be a “quest list” or “log” for users to track what quests they participated in, and their current progress within quests? Yes, and this will be part of the initial release.
Will quests be limited to individual experiences or run across multiple experiences? Initially, the system will be focused on quests within individual experiences. However, it will be expanded to support quests across multiple experiences.
Why should creators build games outside of the quest system if the Lab is going to be building and promoting its own games?
The intent for the Lab (as noted in the audio above) is not for the Lab to be in the market of making content and games. Their involvement is more to test the tools (e.g. the native UI elements), ensure they work and can do what is expected of them before passing them over to creators to start using them.
The quests built by the Lab can also function as a means to introduce incoming users to the quest system and how it works, so they will be familiar with the basics before they enter quests built by creators.
Will the system allow creators to set a limit on the number of players in a quest, e.g. set their quest so only one or two or just a small group can participate at any one time? Not something currently on the roadmap, but as the idea has been a common request, something to allow this might be added in the future.
Can creators / users still do their own thing if they don’t want to use this system? Yes. It’s just another set of tools creators can use if they so wish.
Similarly, users will not be enrolled automatically: all quests will be opt-in.
Those opting-in to a quest will gain access to the native UI elements the Lab is building for quest players (and which will be available to creators to use when the system is opened out).
Will the system include a health system? Not in the initial releases.
Why isn’t Sansar built on Unity? Because it was a conscious decision to build a dedicated engine the Lab could manage and extend without being dependent upon a third-party supplied engine that is geared towards trying to support multiple markets.
That said, there is no reason why user-generated content cannot be used on either platform, and the Lab has been considering a Unity import mechanism (see my previous PM summary notes).
Will avatar locomotion include climbing as well as jumping and crouching? No plans for climbing, sliding or things like it at present. Jumping and crouching are the current focus for locomotion additions.
Can a slider be added for transparencies to allow opaqueness to be adjusted on objects? Not directly, but this can be achieved by setting the materials and using an alpha on the object / face.
Will experience concurrency be increased? This is being worked upon, and the goal is to raise the ceiling on avatars in an individual instance of an experience to 100, hopefully by mid-2019.
Will Sansar have a particle system? A popular request, but not currently being worked on, although it is a goal for the future.
Will there be a “Universal” inventory system usable across all experiences? Again, a goal, but not for the immediate future.
Will Sansar allow adult content? There are currently no plans to allow adult content.
Custom animations for sit points: still at least a couple of releases away.
Private grouping (e.g. allowing private voice calls or text chat between 2 or more users): something the Lab wants to provide, but currently a question of resources and priorities.
Object parenting: might be out in the next release for the Edit mode, but this will not include run-time parenting of objects.
Windows Mixed Reality support: still no plans to officially support WMR headsets.
Ticketing system: the ticketing system has been used for a number of LL organised Sansar events. A new, more robust ticketing system is currently being built, and it is hoped to make that available to experience creators so they can use it with their events.
Site-to-site teleporting: the next release should include the ability to set up teleports that deliver users to a specific point within an experience.