Sansar Product Meetings #10: events in Sansar

Sansar: The Whyst Garden – blog post

The following notes are taken from the Sansar Product Meetings held on Tuesday, March 6th and Thursday, March 8th. These weekly Product Meetings are open to anyone to attend, and are a mix of voice (primarily) and text chat. Dates and times are currently floating, so check the Meet-up Announcements and the Sansar Atlas events sections each week. Official notes, when published, can be found here.

The subject for both of the week #10 meetings was events in Sansar. These notes are an attempt to bring together the key points of discussion from the meetings. Note that they represent discussion points, not necessarily ideas the Lab will be implementing in Sansar.

Changes to Event Submissions

Currently, the process of adding events to the Sansar events calendar (visible on the Web Atlas events page, and within the Client Atlas) is via e-mail submission to Linden Lab.

  • This will be changing soon to allow users to post their events directly to the events listings.
  • However, the option will initially be limited to hosting events in a user’s own experience – it will not be possible to post events taking place in an experience run by someone else.
    • This is primarily to prevent issues of people trying to unfairly monetise other people’s experiences.
    • Once there is a means for splitting revenue between event organiser and event “host” (experience creator), this restriction would likely be lifted.
    • As an alternative, LL might consider ruling that events can be held in any public experience, but where the experience creator is not the event organiser, then the event must be free of charge to access.
  • Some time after the events submissions process is revised, the system will be expanded to allow people to invite other users directly to their events.

How Should Events / Event Access Be Governed?

Controlling who can access events is still under discussion, with the Lab courting feedback on ideas for how this might be managed.

  • The Lab’s thinking is that there should be two broad categories of events:
    • Public – as in anyone can attend (subject to the payment of any associated access costs)
    • Private – the event is restricted to a defined access list / via invitation from the event host / experience owner only.
  • The above could be expanded to a more defined set of event types that experience creators might use to define their events (e.g. live music, DJ / dance, etc.).
    • This is seen by some as potentially limiting and requiring additional management overhead (maintenance of the central list to suit all needs; experience owners having to re-assign “allowed” events depending on what they want in an experience at any given time, etc.).
  • There has been a feature request for the Lab to provide event-focused access control lists (ACLs).
    • A single ACL could be used to control access to multiple activities of a similar nature across multiple experiences.
    • Combined with the idea of scene “privacy volumes”, ACLs could be used to control access to content within an experience (e.g. there is a puppet show in the experience, but casual visitors don’t get to see / enter it, as it is within a privacy volume, until they “pay” at an entry booth, which then adds their information to the controlling ACL – and they can access all other puppet shows controlled by the same ACL).
  • A counterpoint to this is that if better persistence of data is provided by LL – such as an API to allow experience creators to store data in an external database – then such “hard-coded” ACLs would not be required, and experience creators could script them.
    • There are questions as to how user-friendly individually coded access lists would be, how well they would scale, etc.

Accessing Events and Event Instances

Each instance of an experience is capped at a maximum number of users. No hard limit has been officially defined, although the number is currently around 30, and the Lab has indicated it is aiming for perhaps 100 avatars active within an individual instance of an experience.

However, when attending events, this does raise questions, including:

  • How can large numbers of people witness a single event?
  • How can people wishing to attend an event together be sure they will be able to access the same instance of that event, and thus share it?

To address the first question, the Lab has already raised the idea of having a “broadcast” or “parent” event.

  • This would be the primary instance of an event (e.g. a live band performing on stage, a presenter addressing an audience, etc.), which broadcasts out to multiple instances of the event as they are spun up to meet the needs of the audience numbers.
  • Those in the “audience” instances might not be able to communicate into the “parent” instance or directly interact with avatars in other “audience” instances (e.g. dance with them) – but they would see / hear everything going on in the “parent”.
  • There is currently no time frame on when this capability might be introduced.

To address the second question – people being able to access the same instance of an event together – the Lab is looking at various ideas, including:

  • Access based on group size / queuing: people define themselves as a group, and effectively access an event together, allowing them to be sent to the same instance.
  • An “event lobby” system where users can see where they could go in order to be together.
  • The ability for people to directly invite one another into an instance of an experience.
  • Some combination of all of the above.

Monetising Events

The ability to monetise events – particularly events hosted in experiences created by someone other than the event host – is unlikely to occur until the supply chain / licensing system has been deployed. However, some ideas for monetisation that have been put forward include:

  • The ability for people to “rent” experiences built by others for a specific event / specific period of time.
  • The ability to make the “parent” instance of an event a “premium” experience, where people pay extra to be in it and able to interact with the performers / presenter(s) – so they could, for example, ask questions during a presentation.
  • The ability for people to charge subscriptions to events / activities.
  • The ability for event hosts / experience creators to define a specific “event” scene of a public experience.
    • This “event” version could be “reserved” for those paying / allowed to access the event (e.g. a dance, a hunt, a game, etc.), and instanced accordingly.
    • Anyone not paying the admission fee would go to a “normal” instance of the experience, which does not have the event running within it.
    • However, creation of such an “event scene” of an experience would still count against the creator’s allowed total of published experiences, even if it is effectively a “copy” of an existing published experience (although the scene could obviously be unpublished after the event).
    • To offset hitting the cap on the number of published experiences a creator can have, one suggestion is to allow a creator to pay a nominal fee to have an “extra” experience published for a specified period of time, before it then goes away.

Monetising events touches on the wider aspect of monetising experiences in general, and during the meetings the Lab reiterated that the roadmap includes:

  • The ability for creators to build and sell entire scenes / experiences.
  • The ability to offer direct tips in Sansar Dollars (which encompasses both events and experiences).
    • This is seen as more relevant at this point in time than offering a means to paywall experiences.
  • Possibly the ability for experience creators to charge people wishing to access their experiences.
    • This does raise the question – as the Lab acknowledges – of how users gain assurance that an experience / its content is worth paying to view.


Kokua viewer – news and future updates

Update, March 10th: Two new versions of Kokua are available for 64-bit Windows (RLV – version 5.1.3.42936 – and non-RLV – version 5.1.3.42935). These build on recent updates to Kokua using the Lab’s 5.1.3 code base, and feature internal code refactors. They can be downloaded from Kokua’s Sourceforge repository.

In October 2017, Nicky Perian announced he would be stepping back from a direct, hands-on leadership role in maintaining Kokua to enjoy a well-deserved retirement. He put out a call for members of the Kokua community to step forward and help maintain Kokua, although he has maintained a role working on the Mac and Linux versions of the viewer.

On Friday, March 9th, Chorazin Allen – perhaps best known as the creator of Chorazin Creations, a range of RLV-enabled cages and cells for the BDSM community, and the Chain of Command range of scripted plug-ins for the Real Restraints range of products by Marine Kelley – issued a Kokua group notice indicating he would be joining the team, taking direct responsibility for:

  • The Windows builds of Kokua
  • RLV updates
  • Release management and general administration.

In a separate group notice, Chorazin also notes:

You can check on the latest Win64 versions of Kokua by visiting Sourceforge here:

https://sourceforge.net/projects/kokua.team-purple.p/files/Kokua-SL/Windows64Bit/

You may also set up notifications from Sourceforge when new versions are added.

RLV users should update to 42932 to get a fix for the garbage collector failing to remove restrictions from vanished objects.

Chorazin notes that – understandably – it will take a little time for the re-organisation within the Kokua team to be completed, and Kokua users are asked to keep an eye on group notices – which will become more frequent as a new version is readied for release – and on the Sourceforge repositories for updates to forthcoming versions.

In the meantime, news that Kokua is to be moving forward will likely be welcomed by the Kokua community, and kudos to Chorazin for taking up the request to help manage the viewer and carry it forward. I’ll continue to cover updates as they are released.

2018 SL UG updates #10/2: CCUG summary

Queen of Dragons? Surrounded by Animesh dragons by Wanders Nowhere and used by Lucia Nightfire as Animesh test models

The following notes are primarily taken from the Content Creation User Group (CCUG) meeting, held on Thursday, March 8th, 2018 at 13:00 SLT. For the purposes of Animesh testing, the meetings have relocated to the Animesh4 region on Aditi, the beta grid – look for the seating area towards the middle of the region. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

There is no video to accompany this update; notes are taken from my own audio recording of the meeting.

Animesh

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation. It involves both viewer and server-side changes.

In short, an Animesh object:

  • Can be any object (generally rigged / skinned mesh) which contains the necessary animations and controlling scripts in its own inventory (Contents tab of the Build floater) required for it to animate itself – see the script sketch after this list.
  • Can be a single mesh object or a linkset of objects (link them first, then set them to Animated Mesh via the Build floater > Features).
  • Has been flagged as an Animesh object in the project viewer, and so has an avatar skeleton associated with it.
  • Can use many existing animations.
  • Will not support its own attachments in the initial release.
  • Will not initially include the notion of a body shape (see below).
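By way of illustration, the following is a minimal LSL sketch of a self-animating Animesh object, using the object animation functions introduced with the project. The animation name “walk_cycle” is a placeholder for any suitable rigged animation in the object’s inventory.

// Minimal sketch of a self-animating Animesh object.
// Assumes a rigged animation named "walk_cycle" (placeholder) is in the
// object's inventory, and the object is flagged as Animated Mesh.
default
{
    state_entry()
    {
        // Play the animation on the object itself, not on an avatar
        llStartObjectAnimation("walk_cycle");
    }

    touch_start(integer num_detected)
    {
        // On touch, stop every animation currently running on the object
        list anims = llGetObjectAnimationNames();
        integer i;
        for (i = 0; i < llGetListLength(anims); ++i)
        {
            llStopObjectAnimation(llList2String(anims, i));
        }
    }
}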


Viewer Update

An update to the Animesh project viewer occurred on Wednesday, March 7th, with the release of version 5.1.3.513013. The focus of this update has been bug and crash fixes (e.g. correcting some of the issues of LODs getting stuck).

Current Progress

  • The major piece of work to be completed (other than further bug fixing) is performance profiling: looking at possible limits on triangle count and Land Impact, to ensure Animesh can scale across regions without becoming a major performance impact. This work takes object scaling into consideration.
  • Remaining bugs to be resolved include the animation timing issue, whereby an avatar entering a region where an Animesh object is already running may not receive the animation updates for the object. This is liable to require both viewer and server updates to fix.

Body Shapes

Providing Animesh objects with a notion of a body shape is being considered as an initial follow-on for the project. If implemented, this would mean that Animesh creations could:

  • Make use of the shape sliders.
  • Utilise the Appearance / Baking Service.
  • Have a fully avatar-like inventory.
  • Make use of the server-side locomotion graph for walking, etc.

The assumption is that if the Lab looks to add this functionality, it will be handled in much the same way as is the case for avatars. However, this aspect of the work will be opened up for discussion and ideas once it has been officially adopted as a follow-on project, rather than having speculative (and premature) discussions about how body shapes might be applied / managed.

Bakes on Mesh

Project Summary

Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures, and may in time lead to a reduction in the complexity of mesh avatar bodies and heads. The project is in two phases:

  • The current work to update the baking service to support 1024×1024 textures on avatar meshes.
  • An intended follow-on project to actually support baking textures onto avatar mesh surfaces (and potentially other mesh objects as well). This has yet to be fully defined in terms of implementation and when it might be slotted into SL development time frames.

This work does not include normal or specular map support, as these are not part of the existing baking service.

It is important to note that this project is still in its preliminary stages. Any release of a project viewer (see below) doesn’t mark the end of the project, but rather the start of initial testing and an opportunity for creators to have input into the project.

Project Viewer

  • Currently with the Lab’s QA team.
  • Will support a basic means of applying system bakes to mesh faces, although there will be caveats as to how well this initially works (e.g. matching to mesh UV maps).
  • The majority of this work is viewer-side, allowing mesh faces to be flagged with a special texture ID which allows a specific part of an avatar bake to be applied to them.
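As a purely illustrative sketch of what such flagging might look like from the scripting side, the LSL fragment below applies the avatar’s upper-body bake to one face of an attached mesh. It assumes IMG_USE_BAKED_*-style texture constants of the kind Bakes on Mesh later introduced in LSL; the exact names and mechanism were not finalised at the time of these meetings.

// Speculative sketch: flag face 0 of an attached mesh so the avatar's
// baked upper-body texture is applied to it. IMG_USE_BAKED_UPPER is one
// of the special Bakes on Mesh texture IDs; names were not final at the
// time of this meeting.
default
{
    attach(key id)
    {
        if (id != NULL_KEY)
        {
            // Substitute the special baked-texture ID for face 0's texture
            llSetLinkPrimitiveParamsFast(LINK_THIS,
                [PRIM_TEXTURE, 0, IMG_USE_BAKED_UPPER,
                 <1.0, 1.0, 0.0>, <0.0, 0.0, 0.0>, 0.0]);
        }
    }
}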

Feature Request JIRA

It is recognised that there are numerous questions / concerns relating to Bakes on Mesh (see my week #8 update). Because of this, and the lack of a specifications document, user Elizabeth Jarvinen (polysail) has started putting together a Feature Request JIRA outlining some of the specifics which need to be considered with regards to the project. Some of the ideas offered might be considered as a part of the initial Bakes on Mesh project, while others might be left for a follow-up project.

Project Arctan

This is the code-name for the project to re-evaluate avatar and object rendering costs (avatar complexity and Land Impact). Again, just to be clear on this project:

  • It is only just starting.
  • The aim is to make avatar complexity and Land Impact values more reflective of the actual “cost” to render avatars and objects. This might lead to ARC / LI values changing.
    • However, there are currently no details of how LI / avatar complexity values may change or when, as the Lab is still gathering data at this point in time.
    • The Lab is aware of the potential for increases in LI values to cause disruption, and if this is the case, they will seek to minimise the impact as far as possible.
  • The work is not related to Animesh – which will see a “basic” cost (in terms of geometric complexity) applied to Animesh objects (see performance profiling, above). Animesh objects may be subject to further revision as a result of Project Arctan’s findings.
  • It is hoped that Graham Linden, who is leading the project, will attend future CCUG meetings to discuss Arctan as the project progresses.

Other Items

  • PBR shaders: there have been requests for the Lab to implement physically based rendering (PBR) – using realistic shading / lighting models along with measured surface values to accurately represent real-world materials. The Lab has noted these requests. While adding support for PBR shaders has not been ruled out, it is seen as a significant undertaking, and thus more of a “someday, maybe” piece of work than a part of the current SL enhancement roadmap.
  • Terrain texture improvements: terrain textures in SL suffer from low texture density, causing blurring, etc. Again, the Lab is aware of this, and is considering improvements to terrain textures as a possible future project.

Next Meeting

The next CCUG meeting will be on Thursday, March 22nd, 2018, at 13:00 SLT.