SL project updates 41/2: Content Creation User Group

People gather for the Content Creation User Group meeting, October 12th, 2017

The following notes are taken from the Content Creation User Group meeting, held on Thursday, October 12th, 2017 at 13:00 SLT at the Hippotropolis Camp Fire Circle. The meeting is chaired by Vir Linden, and agenda notes, etc., are usually available on the Content Creation User Group wiki page.

Medhue Simoni live streamed the meeting to YouTube, and his video is embedded at the end of this article, with key points of discussion noted below. Time stamps to the recording are included, and clicking on any of them will launch the video in a separate browser tab at the assigned point. However, as these notes present the meeting in terms of topics discussed, rather than as a chronological breakdown of the meeting, some time stamps may appear to be out of sequence.

Animesh (Animated Mesh)

“I like the name ‘animated objects’ because I think it’s unambiguous, but it takes a long time to type!” – Vir Linden joking about the name “Animesh”.

Project Summary

The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation. It involves both viewer and server-side changes.

In short, an Animesh object:

  • Can be any object (generally rigged / skinned mesh) which contains, in its own inventory (the Contents tab of the Build floater), the animations and controlling scripts required for it to animate itself (see the script sketch after this list).
  • Can be a single mesh object or a linkset of objects (link them first, then set them to Animated Mesh via the Build floater > Features).
  • Has been flagged as an Animesh object in the project viewer, and so has an avatar skeleton associated with it.
  • Can use many existing animations.
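
By way of illustration, the sketch below shows the kind of controller script that would sit in an Animesh object’s Contents alongside its animations. It is a minimal, hedged example only: it assumes the new object-animation LSL calls (llStartObjectAnimation(), llStopObjectAnimation() and llGetObjectAnimationNames()) behave as discussed at these meetings, and “walk_cycle” is a hypothetical animation name; the final API remains subject to change until the project ships.

    // Minimal Animesh controller sketch (assumptions as noted above).
    // The object must be flagged as Animesh and contain an animation
    // named "walk_cycle" (hypothetical) in its Contents.
    default
    {
        state_entry()
        {
            // Play the animation on the object's own skeleton.
            llStartObjectAnimation("walk_cycle");
        }

        touch_start(integer num_detected)
        {
            // Toggle the animation when the object is clicked.
            if (llListFindList(llGetObjectAnimationNames(), ["walk_cycle"]) != -1)
                llStopObjectAnimation("walk_cycle");
            else
                llStartObjectAnimation("walk_cycle");
        }
    }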

However, Animesh objects will not (initially):

  • Have an avatar shape associated with them
  • Make use of an avatar-like inventory (although individual parts can contain their own inventory such as animations and scripts)
  • Make use of the server-side locomotion graph for walking, etc., and so will not use an AO
  • Use the avatar baking service
  • Support their own attachments.

These are considered options for follow-on work, possibly starting with the notion of a body shape (to help with more fully-formed NPCs).

Viewer Progress

[1:11-3:40] We are now “very close” to seeing a project viewer, test content and test regions appearing. The viewer has been passed by LL’s QA team, the initial test content is being developed (and will be added to, although people can obviously also test their own content), and a total of five regions will be set up on Aditi (the beta grid) for testing purposes very soon. Four of these will be arranged in a square and rated Moderate (allowing for comprehensive content testing, including how Animesh is managed on region crossings); the fifth will be separate and rated Adult.

The main hold-up for now is documentation: release notes, wiki documentation, FAQ, etc. When all is ready, there will be a blog post on the project, and the viewer itself will be appearing on the Alternate Viewer wiki page – so those interested in testing Animesh should keep their eyes on both the official blogs and the viewer wiki page.

Root Positioning

[4:20-7:20] The most recent work with Animesh has remained focused on aligning the root joint of a skeleton associated with an Animesh to the position of the Animesh object in-world. The problem here is that the skeleton / avatars in SL are oriented along the X-axis; however, some tools (Avastar?) align along the Y-axis.

This means that when applying a skeleton (X-axis orientation) to an in-world mesh object with a Y-axis orientation to make it Animesh, the latter can jump and rotate. Vir had been looking to make changes to the skeleton’s orientation to handle mesh with a Y-axis orientation. However, it turns out doing so could cause unwanted behavioural changes for scripts handling attachment rotation and positioning. To avoid this, the skeleton will remain X-axis oriented. Vir hopes the upcoming testing will allow the settings for linking skeleton to mesh to be further refined, to make the conversion from static mesh to Animesh as smooth as possible.

Animesh and Vehicles / Testing vs Release

[9:20-13:20] A view is expressed that Animesh is being “rushed” without sufficient thought being given to its application in vehicle design. This perhaps stems in part from confusion between public testing of Animesh and a release of Animesh. For example, while Bento had an initial “closed” period of development to lay the groundwork, it was followed by a broad and lengthy public period of testing, consultation and enhancement.

As such, the arrival of the project viewer doesn’t mark a move towards “releasing” Animesh – but rather allows more widespread testing of the capability, including its potential use in vehicles and elsewhere. Testing is the point at which the Lab can more fully look to creators for feedback, requests for improvement – and even fleshed-out feature requests, etc. – which might be considered for folding into the current work, or for follow-on work once the initial Animesh release has been made.

Quick Questions

  • Land Impact:
    • [15:54-16:29] Overall land impact constraints for Animesh are still TBD (testing will help determine what might be required). However, Vir points out that individual LI applied to Animesh objects will initially be assessed on a per-skeleton basis, not a per-mesh basis (so a linkset of several rigged meshes converted to Animesh will be seen as a single object with an LI, not a group of mesh objects, each with its own LI).
    • [37:35-38:34] Couldn’t the Animesh constraint be purely land impact based? Possibly, but then attachments would be excluded – as they are with avatars – and avatars are the most costly element in terms of performance; were LI the only constraint, Animesh could become as impactful as avatars with multiple high render cost attachments.
  • [16:46-17:18] Is Animesh just prim objects with an AO? No. Animesh is rigged mesh objects containing scripted animations, associated with an avatar skeleton.
  • [17:22-17:57] If an Animesh has linked skeletons, do animations play across all of them? No – Animesh does not involve linking skeletons. It can comprise multiple rigged mesh objects which are linked to an individual skeleton, which can have animations applied to it.
  • [18:09-18:30] Will the attached mesh also count against the tri count? Yes. There will be a limit, most likely set to 20K when testing starts, but subject to review as testing proceeds.
  • [26:02-27:35] Short description of how Animesh will work, as covered in some of the bullet points in the project description, above. Includes some discussion of using Animesh with pathfinding.
  • [30:00-30:35] Is animation movement capped (in terms of moving an individual bone)? Yes, it is capped at 5 metres, as it always has been for avatars.
  • [30:41-32:55] Physical properties discussion: Animesh objects will operate like avatars: they will not have physical collisions as a result of animation (a moving arm on an Animesh will not collide with a passing avatar, for example). However, Animesh objects will have physical properties (based on the mesh physics shape), and so can fall, etc.
  • [47:35-48:08] Will Animesh alter the land impact for existing objects in-world? No. Animesh is a new object property, only applicable to objects intentionally set as Animesh. As such, it will not affect pre-existing content in-world unless said content is converted to Animesh (if it can be).
  • [48:48-49:58] Will Animesh behave at altitude? There is an issue which sees mesh rigged with software skinning deform at altitude (starting around 1,000m and getting worse from there). Animesh might suffer the same, but this hasn’t been tested.
  • [52:47-56:10] The eyes have it – a discussion on automating eye movements on Animesh in a similar manner to avatars. Vir sees this more as building out the NPC-style capabilities of Animesh, which is very much a follow-up to the current project.
  • [58:18-59:20] Can Animesh work with other mesh? It should; the complex part is liable to be keeping the object’s actual position in sync with its rendered position based on the animations playing, particularly as there won’t be a “sit” equivalent. Vir hopes this is something that will be poked at in testing.

Side notes on Animesh Constraints:

[44:08-45:21] As Vir has previously indicated, all of the constraints in the viewer – when it appears – are for testing purposes, to provide a means by which things can be tested. For a catch-up on constraints and testing, please refer to my previous CCUG update.

[20:21-20:45] Also, to help with constraints and information, Vir has been updating the viewer’s avatar complexity panel to provide information on triangles, which will also work for Animesh objects.

[22:45-29:08] A feature request has also been submitted for an alternative method of handling land impact and incorporating Animesh – see BUG-139203. This has been left open for comment, and it is something the Lab is interested in taking a look at.

Our Digital Selves: living within a virtualised world

Coming to a screen near you in 2018 – and not to be missed. Via: Draxtor Despres

In 2016 I wrote about the work of Tom Boellstorff and Donna Z Davis (respectively Tom Bukowski and Tredi Felisimo in Second Life). Since 2015 Donna – a strategic communications professor at the University of Oregon specialising in mass media & society, public relations, strategic communication, virtual environments and digital ethnography – and Tom – a professor of anthropology at the University of California, Irvine – have been engaged in a National Science Foundation funded study formally entitled Virtual Worlds, Disability, and New Cultures of the Embodied Self, and more informally referred to as Our Digital Selves.

Their work, which will continue through into 2018, focuses on the experiences of people with disabilities – visible and invisible – who are using immersive virtual spaces to represent themselves, possibly free of the shadow of any disability, engage with others and do things they may not be able to do in the physical world.

Donna Z Davis and Tom Boellstorff (Tredi Felisimo and Tom Bukowski in Second Life), co-researchers in Virtual Worlds, Disability, and New Cultures of the Embodied Self, supported by the University of California, Irvine; the University of Oregon; and the National Science Foundation.

The work encompasses many aspects – physical, mental, technical, for example – of occupying both a physical space and a digital environment when living with both visible and invisible disabilities – the benefits that can be enjoyed, together with the potential risks / fears. Some of these aspects, particularly the more positive, are perhaps familiar to us: the power of being defined by who we are as a person, rather than in terms of a disability; the freedom presented by the ability to embody ourselves within an avatar howsoever we like, and so on. Others may not have been fully recognised for the fear they can create; while the “new era” of VR systems may well be liberating for the able, it can be a frightening / debilitating threat for some with disabilities.

Given the extent of the study, it obviously crosses the physical / digital divide.  There have been experiments and discussions in-world. And there have been real-world interactions between Tom and Donna and those participating in the study.

One of those who has been following the study closely is Draxtor Despres. He has featured Tom’s work in The Drax Files World Makers, and is now engaged in producing a documentary – also entitled Our Digital Selves – about the study, travelling with Donna and Tom to meet some of those participating in the work. While not due for release until early 2018, the first official trailer for the documentary was made public on Wednesday, October 11th, 2017.

Members of the study meet in-world. Credit: Draxtor Despres

“I’m not sure how long the finished piece will be,” Drax informed me in an exclusive one-to-one about the trailer and the film. “I’m aiming for around 40 minutes, but am currently editing an hour-long cut. It’s a massive project. We’ve been travelling across the United States and across the Atlantic meeting with people and interviewing them.”

It’s a massive undertaking; Drax goes on to note that there are around 15 participants in the study who have been involved in the filming, and he has around 3 hours of recording with each. Some of this was necessary simply to get to know people and overcome perfectly natural barriers – shyness, nervousness, and so on – and establish trust; however, it still means there is a lot which needs to be synthesised into a cohesive whole, whilst also doing justice to the stories of all of those volunteering to participate in the film.

Part of the study has involved participants being provided with a 32m x 32m parcel on Ethnographia Island which they could use to share their experiences, insights, and thoughts on their disability. Shown here, Jadyn Firehawk stands before her exhibition space (May 2016).

Nevertheless, the first public trailer does much to establish the structure of the documentary and present an accessible framework against which the broader story will naturally unfold.

This promises to be one of the most engaging, moving and informative documentaries on virtual living, embodiment and personal expression since, perhaps, Login2Life, and something that should not be missed once available. In the meantime, I’ll leave you with the trailer – and the hope that, subject to feedback from Donna, Tom, Drax and those involved in the work, I’ll be able to bring more on the documentary and the study in the run-up to the release of the completed film.

For Max and Lyrric: L$ one million raised in Second Life

For Max – October 3rd through 10th, 2017 – Over L$ 1,000,000 raised

The For Max event closed its doors on Tuesday, October 10th, having broken through the L$ 1,000,000 barrier of funds raised to help Maxwell Graf and his partner in the physical world, Lyrric Fei.

The event was rapidly put together by Charlotte Bartlett, Sophia Harlow and Blazin Arubet, with the assistance of Apple Fall and Shiny Shabby, at extremely short notice after Max revealed he and Lyrric were facing a heartbreaking plight thanks to a physical world situation (both have been on an overland journey to start a new life after the situation forced them out of house and home).

Writing on the event’s Facebook page, Charlotte Bartlett said:

I am literally shaking as I write this, as the level of love, help and generosity of the Second Life Community has shown us what true spirit means. The amount raised has topped across the donations/raffle and creator items over 1,000,000 Linden. This hopefully gives our Max a safety net for the short-term so he and his family can start to plan their new chapter. A list of all sales is already available to Max for audit (locked script) and donations/raffle in addition.

The total equates to just over US $4,000 (at an exchange rate of roughly L$250 per US dollar), which will help Max and Lyrric in starting their new life. Nor is that all.

As I reported when For Max opened, Kylie Sabra also launched a sale of her art in support of Max and Lyrric, which ran for the same period as the shopping event. I’ve reached out to Kylie with an enquiry on how things went, but have yet to hear back. When I do, I’ll update this article.

Don’t forget as well, that even if you missed the For Max events, you can still help Max and Lyrric by visiting Rustica, their fabulous in-world store and purchasing one or two of Max’s range of furnishings, apparel, accessories, building kits and more.

Also benefiting Max and Lyrric, Kylie Sabra raised an additional L$ through the sale of her art

This is the second time this year a respected, liked and loved creator in Second Life has encountered a major life-impacting crisis, only to see the community in which they have so long played a role rally around, unbidden, in support – and with remarkably similar outcomes.

In April 2017, I was privileged to work with Saffia Widdershins on Filling the Cauldron, an event aimed at helping and supporting creator Elicio Ember and his family. Held over a near-identical period, that event also raised US $4,000. The speed of response, support and outpouring at both these events again shows not only the generosity of all those involved, whether as organisers, participating creators, artists or entertainers, or as attendees shopping and donating – but also the depth of caring for one another inherent in the Second Life community.

Thank you to Charlotte, Sophia, Blazin and all involved in For Max. To Max and Lyrric: stay safe and stay warm – and see you in Second Life soon!

Exodus: A Trip for Life in Second Life

Exodus: A Trip for Life

Art can be expressive in many ways. It can be an outflow of creativity, a reflection of moods and emotions, a cathartic release of hopes, fears, wants or needs; or an echo of joy or contemplation or endeavour or of life itself. And it can be a voice of conscience commenting on society, culture and politics.

Exodus: A Trip for Life is a full region installation which falls squarely into that last bracket: offering a voice of conscience in response to our societal and political outlook. In doing so, it touches – invokes – something we can so easily lose sight of, even when it might appear we are trying to empathise.

Exodus: A Trip for Life

Designed by Kicca Igaly and Nessuno Myoo, Exodus: A Trip For Life deals with the discomfiting issue of the world’s refugee crisis, which has become a hot button topic on several fronts over the past couple of years; one in which some essential truths have perhaps been lost in the clamour of angry voices, political posturing, and perceived threats to security, jobs and income.

“It almost seems,” Nessuno says in introducing the installation, “as if all the evils of our society, unable to find effective solutions to the problems which from time to time appear, have found, in the dark threat of the foreign ‘invader’, the perfect scapegoat.”

Exodus: A Trip for Life

And yet the simple truth is, these feared ‘invaders’, these people risking life and limb and family, do so not because they’re seeking to exploit our vulnerabilities and our way of living. They do so because they already are vulnerable; their way of life has already been destroyed through war and / or political / religious upheaval and oppression. Everything they have known has been torn apart in ways we cannot understand; far from coming here as exploiters, they arrive as the exploited, preyed upon in their journey by criminals and traffickers; people more interested in taking money and possessions than in saving lives.

All of this, and more is brought forth in Exodus: A Trip for Life. It starts out at sea, where a battered hulk rides a heavy swell, figures crammed into its rotting hold or crawling desperately up to the main deck and clinging in fear to anything looking remotely solid. The vessel is tossed by waves of money – a reference to the physical price those aboard have paid, while strings rise from the hull to a puppeteer’s controllers, a further reference to the exploitation inherent in trafficking the desperate, as they are time and again forced to travel in vessels unfit for purpose (and it is no coincidence that the bows of this ship bear two names, again underlining the dire circumstances faced by so many).

Exodus: A Trip for Life

Ashore, the imagery continues. New arrivals walk along a road, watched from a distance by locals, the gap between the two groups as telling as the walls that constrain the refugees to that single, lonely road. A camp sits close by, but again separated from the locals, as if in quarantine from the rest of the land, by walls and iron gates. Both the road and the camp stand as metaphors for how we see refugees; they may not be so alien, they may appear more human – but they are still “others” to be kept at bay. And we are far more comfortable when they can be moved from our sight and thoughts, as symbolised by the line of arrivals slowly vanishing into a white mist. They pass and are gone – to where does not matter, nor does the fact their plight still goes with them; we can resume our lives.

Poignant, pointed, provocative, richly nuanced and threaded with a wealth of observation and commentary, Exodus: A Trip For Life may not sit well with some; it may not even be easy to entirely decipher on a single pass. But it does have a voice; one that reaches into our conscience to whisper a stark reminder about the realities of the world around us, even as sound bites, posturing and the fickle lens of the media would distract us and divert our thoughts and feelings.

Exodus: A Trip for Life

SLurl Details

Virtual Egyptology: a journey in time in Sansar

Voyages Live: Egypt: people arrive at the cenotaph at Gebel el-Silsila, as recreated in Sansar from a model built by INSIGHT. Dr. Philippe Martinez is centre foreground, in the blue jacket

On Wednesday, October 4th, I was one of a number of people who joined a special immersive “voyage” through ancient Egypt, visiting three sites of antiquity which are not open to the public in the physical world, but which have been digitally reproduced in a virtual environment for the purposes of study, and have also been optimised for presentation in Sansar.

Joining us for the journey were Kevin Cain, Director of INSIGHT – the Institute for Study and Implementation of Graphical Heritage Techniques (also sometimes called Insight Digital) and Dr. Philippe Martinez, INSIGHT co-founder and Lead Archaeologist, author, and University of Sorbonne professor.

The cenotaph at Gebel el-Silsila, located on the banks of the river Nile, available to visit in Sansar

INSIGHT, in collaboration with the Egyptian Ministry of Antiquities, has been at the forefront of digitally capturing sites of antiquity in Egypt, and Dr. Martinez himself was one of the earliest exponents of computers and 3D capabilities in archaeology.

In the 1980s, for example, he encoded the decoration of 12,000 blocks dating to the time of Amenhotep IV – Akhenaten – discovered reused in the 9th pylon at Karnak. The database was then processed using artificial intelligence techniques to produce hundreds of virtual reconstructions belonging to the first temple dedicated to the god Aten. Also around that time, he spent two years working on a 3D reconstruction of the ancient Egyptian temples of Karnak and Luxor.

Kevin Cain (l) and Dr. Philippe Martinez (r) and, inset, their respective Sansar avatars (again, left and right)

INSIGHT’s work now involves state-of-the-art techniques such as LiDAR (Light Detection and Ranging), a surveying method that measures distance to a target by illuminating it with a pulsed laser light and measuring the reflected pulses, and photogrammetry, the science of making measurements from photographs. INSIGHT’s work in recreating sites of antiquity in 3D was also intriguingly revealed during some of the earliest looks inside Sansar prior to the Creator Preview opening, when images of the tomb of Nakhtamon (“TT341”) were used in various promotional talks and demonstrations of the platform (see here for an example).
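
For the curious, the range measurement at the heart of LiDAR reduces to simple time-of-flight arithmetic: the scanner records the round-trip travel time Δt of each laser pulse and converts it to a distance d via the speed of light c:

    d = (c × Δt) / 2

The division by two accounts for the pulse travelling out to the target and back; a pulse returning after roughly 66.7 nanoseconds, for example, corresponds to a surface about 10 metres away.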

The tomb of Nakhtamon is one of three locations thus far reproduced in Sansar in a collaboration spanning INSIGHT, the Sansar Studios team, the University of Sorbonne and the Egyptian Ministry of Antiquities. Both it and the cenotaph at Gebel el-Silsila were scheduled stops on the tour; but such was the interest shown in the tour and in INSIGHT’s work, it was extended to include a reconstruction of a section of the Ramesseum “Coronation Wall”.

Tomb of Nakhtamon (“TT341”) in Sansar – part of the collaboration with INSIGHT, and visited on the tour

For the purposes of the event, the two primary destinations, together with the Voyages Live: Egypt experience where people initially gathered, were spun up in their own special instances. This meant that casual visitors to either Voyages Live: Egypt or the locations on the tour would not feel they were intruding on a private event, or have their own visit spoiled by a group of avatars suddenly crowding them out and getting in the way.

This in itself demonstrated a key strength of Sansar: the ability to spin up instances of experiences to deal with special events and the like, without necessarily having to close them off from public access / other uses occurring at the same time.

The Ramesseum “Coronation Wall” reproduced as a 2 million poly model in Sansar, optimised from an 800 million poly original.

At Voyages Live: Egypt, attendees were introduced to Kevin Cain and Philippe Martinez, and a little time was spent talking about INSIGHT’s work and the backgrounds of our guides (Mr. Cain, a specialist in computer graphics and imaging, worked widely in the film industry before a fascination with preserving sites of antiquity drove him to establish INSIGHT as a non-profit entity specialising in the digital recording and mapping of sites of antiquity, which has now worked in a dozen countries around the globe).

INSIGHT’s work is not only fascinating from a lay perspective – offering the potential for VR and a platform like Sansar to open up historical sites for education and learning across all ages without putting the actual sites at risk – but also of very real benefit in helping to preserve ancient sites from accidental damage, whilst providing archaeological teams an opportunity to effectively study locations even when the locations themselves are not open to study, again to help preserve them.

SL project updates 41/1: server, viewer

Oh Deer, Heavenly Waters; Inara Pey, October 2017, on Flickr – Oh Deer (blog post)

Server Deployments for Week #41

As always, please refer to the server release thread for updates and the latest news.

  • On Tuesday, October 10th, the Main (SLS) channel received the server maintenance package previously deployed to the RC channels, 17#17.09.29.509228, intended to address the unintended returns issue of two weeks ago – see The Return of the Living Objects: A Pre-Halloween Horror Story for more.
  • On Wednesday, October 11th, the RC channels should be updated as follows:
    • BlueSteel and LeTigre should be updated with a new server maintenance package, 17#17.10.06.509429, containing internal fixes.
    • Magnum should also receive a new server maintenance package, 17#17.10.06.509394, also containing internal fixes.

Neither of the RC updates should have user-visible changes.

SL Viewer

There have been no viewer updates thus far this week, leaving the pipelines as follows:

  • Current Release version 5.0.7.328060, dated August 9th, promoted August 23rd – formerly the Maintenance RC
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself):
    • Voice RC viewer, version 5.0.8.329250, dated September 29th.
    • Maintenance RC viewer, version 5.0.8.329115, dated September 22nd.
    • Wolfpack RC viewer, version 5.0.8.329128, dated September 22nd – this viewer is functionally identical to the release viewer, but includes additional back-end logging “to help catch some squirrelly issues”.
    • Alex Ivy 64-bit viewer, version 5.1.0.508209, dated September 1st.
  • Project viewers:
  • Obsolete platform viewer, version 3.7.28.300847, dated May 8th, 2015 – provided for users on Windows XP and OS X versions below 10.7.

Touch, Camming, Experiences, Avatars and Feedback

Note: the following is not indicative that the Lab is considering behavioural changes in Second Life at this time. The topics were simply raised with a view to generating discussion and feedback.

There are a lot of things we can do in a region which might not always be in the spirit of what is intended by the region owner, particularly if the region is designed for a specific purpose, such as an experience or game. We can, for example, cam ahead, seek out secrets, identify traps or hazards and find ways to avoid them. We can also reach across a region to touch things when perhaps it would be better if we could only do so up close.

There are various ways to limit some of this – scripts can be used to limit touch, as can RLV, for example (see the sketch below). But for things like experiences and games, it might be better if the region owner had the option to enforce limits on how far away from their avatars people can cam or touch.
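
As a rough illustration of what the script-side approach already allows, a touch handler can simply discard touches from avatars beyond a chosen range. The following is a minimal sketch; the 5-metre threshold is an arbitrary example value:

    // Minimal sketch: respond only to touches from within MAX_TOUCH_RANGE.
    float MAX_TOUCH_RANGE = 5.0; // example threshold, in metres

    default
    {
        touch_start(integer num_detected)
        {
            integer i;
            for (i = 0; i < num_detected; ++i)
            {
                // llDetectedPos() gives the toucher's in-world position.
                if (llVecDist(llDetectedPos(i), llGetPos()) <= MAX_TOUCH_RANGE)
                {
                    llSay(0, "Touched by " + llDetectedName(i));
                }
                // Touches from further away are simply ignored.
            }
        }
    }

The catch, of course, is that this only works per object and per creator; a region-level option would apply such a limit uniformly, without every scripted object having to implement it.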

However, this has to be balanced against those situations where camera use and “long-distance” touch can be essential. Shopping at a crowded event, for example, can be made much less of a fight against crowds and lag by using Area Search, the camera, and far touch to obtain specific items (even flycamming and far touch can make shopping a lot easier in any situation). Ergo, there’s a risk that imposing limits on either could have a detrimental effect on people’s willingness to shop in popular locations.

Given that crowds themselves are a problem, would it be worthwhile limiting the number of avatars seen within a region in some way? Perhaps a limit set through the estate controls, or by some method within the viewer such that avatars / impostors beyond a certain radius aren’t rendered / have their updates ignored by the Interest List until they come within the specified distance – just as people move in and out of view in a real crowd.

Again, how something like this might work – should it really be a control at estate level, or would it be better as an option users could tweak (which would get my vote)? – would have to be thought through in detail, particularly given we already have some control over the rendering, at least, of the avatars around us through various methods (e.g. render friends only, derendering selected avatars, CTRL-ALT-SHIFT-4, avatar rendering complexity, etc.); while each of these might not be perfect, they give flexibility of control to individual users.

As noted, the Lab isn’t planning any changes specific to any of the above – but by throwing questions out and listening to feedback, concerns and alternative ideas, they are perhaps pulling ideas and thoughts into the melting pot of how SL might be refined to offer better controls and means to improve people’s individual and joint experiences.