Avaloir (which might be translated as “throat”, though more usually it refers to a device used to collect run-off water – that is, a drain) is the title of what is perhaps best described as something of a retrospective installation of various works by Eupalinos Ugajin – or as Eupa describes it, when “random ideas met a playground”.
Those familiar with Eupa’s work will know that it covers a broad canvas, often containing humour, whimsy, and a little self-deprecation, which can be mixed with social commentary, imaginative projection worthy of the likes of Gilliam, and an artistic flair that can quite captivate the eye and mind.
It’s hard to say which of these boxes one might tick when it comes to Avaloir, but given the description and the majority of the pieces presented, I’d sway more towards the whimsy end of the scale than anywhere else.
A journey – and it is a journey – commences at a rather dark arrival point high in the sky. From here one can teleport to a number of destinations within the region, some on the ground and some in the air. For those who are familiar with Eupa’s work, some (all?) of the destinations will offer a chance to renew acquaintances with various past pieces. Most notable, perhaps, are elements from Taxy! to the Zircus, which first appeared in 2014 (you can read more about it here).
Wonderfully Dada-esque in presentation, with a twist of the abstract, are four elements from Zircus. Be prepared to mouse around for things to sit on, click on and generally have fun with. You might find yourself riding a spiralling mandolin, wearing cubed boxing gloves and sparring a star-like punching bag, engaging in a little artistic expression with a paint brush and a … hair dryer … and more besides.
Down on the ground can be found the very interactive The Plant and also Eupa’s giant water / strawberry powered catapult (more here), only this time without its accompanying tower target. There’s also The Concrete Kite, and a whole lot more – some interactive, some observable. Getting around is predominantly achieved using the teleport cubes located at each of the major elements of the installation, although you can also fly from point to point at ground level.
Quantifying Avaloir isn’t the point. Experiencing it is – so as noted above, make sure you do mouse around, click, try, and be prepared to walk into things. Do make sure, as well, that you have local sounds enabled, as this is very much an aural experience as well as an interactive one.
Now, if you’ll excuse me, after spending time at Moor the Wind, I’ve got an unquenchable desire to listen to Glenn Miller and his orchestra….
The following notes are taken from the Content Creation User Group (CCUG) meeting, held on Thursday, May 24th, 2018 at 13:00 SLT. These meetings are chaired by Vir Linden, and agenda notes, etc, are usually available on the Content Creation User Group wiki page.
Animesh
Project Summary
The goal of this project is to provide a means of animating rigged mesh objects using the avatar skeleton, in whole or in part, to provide things like independently moveable pets / creatures, and animated scenery features via scripted animation. It involves both viewer and server-side changes.
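For those curious how the scripted animation side might look in practice, the sketch below shows the general shape of an Animesh control script in LSL, using the llStartObjectAnimation / llStopObjectAnimation calls introduced alongside the project. The animation name “walk_cycle” is purely illustrative; the script assumes the object contains a suitable rigged-mesh animation and has been flagged as Animesh in the build tools.

```lsl
// Hypothetical sketch of an Animesh control script.
// Assumes an animation called "walk_cycle" is in the object's
// inventory and the object has Animesh enabled.
default
{
    state_entry()
    {
        // Start playing the animation on the object's own skeleton
        llStartObjectAnimation("walk_cycle");
    }

    touch_start(integer total_number)
    {
        // Stop every animation currently playing on the object
        list anims = llGetObjectAnimationNames();
        integer i;
        for (i = 0; i < llGetListLength(anims); ++i)
            llStopObjectAnimation(llList2String(anims, i));
    }
}
```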
Current Status
Work is continuing on moving towards a main (Agni) grid deployment of the server-side code to support Animesh.
Everything has been through QA, but there is a remaining issue with validating attachments. This is liable to require an additional fix.
It is likely that the simulator code will initially be deployed grid-wide on the beta (Aditi) grid for a shakedown period, prior to starting its deployment to the main grid through the usual process of RC deployments ahead of a full deployment.
There are still a number of issues that are TBD within the viewer:
Broken Rotations Issue: two potentially related problems.
In one (BUG-139251), when some static mesh objects are converted to Animesh, the visual mesh is rotated through 90 degrees when seen in the Animesh viewer, but the physics mesh isn’t, leaving it perpendicular to the model. This is possibly an orientation issue, with the viewer expecting the mesh to be aligned to +x=forward – which not all mesh modelling tools follow.
The second problem is that when linking a series of objects into a single Animesh, they are visually located where the avatar skeleton supporting them is located, but the physics shapes remain in the original locations of the objects prior to linking / converting.
What is to be done about this still hasn’t been determined, although the Lab has been experimenting with placing the physics shape in the root of the object.
Rigged Mesh Level of Detail / Bounding Box Issues: (BUG-214736) – Essentially, attachments on avatars swap their LOD models as if they were scaled to the overall avatar bounding box. Graham Linden has been working on some fixes for this which should improve accuracy and performance, but further work may be required. Once this has happened, there will be a further update to the project viewer.
Additional Animesh Discussions
Animesh Complexity: there is a further update to come to the Animesh complexity values (Advanced menu > Performance Tools > Show Avatar Complexity – applies to both avatars and Animesh objects in the project viewer). This should fix some inconsistencies in the reported values.
Land Impact: streaming cost: some have questioned whether the 50% bounding on LODs could be more stringent. Vir’s view is that 50% is sufficient to allow for a broad range of capabilities among creators, and that being more stringent could simply result in a greater addition to land impact, which could affect the use of Animesh. There is an argument that some models require 75% bounding between the high and medium models.
Triangle count limit: There have been requests for further changes to the triangle count limit (e.g. allowing land owners to somehow be able to change it). As the limit was raised to 100,000, it is unlikely to be changed again before Animesh is deployed, and the idea of land owners being able to adjust it is seen as potentially introducing inconsistencies in how Animesh works in different locations. If it is seen that there is a need to raise the tri count limit, it will be done globally.
Avatar attachments: due to performance concerns, the limit of only one Animesh attachment to an avatar at any given time is not going to be changed ahead of Animesh deployment, but might be revisited in the future.
Animesh ignores SL scale settings: this is a result of how rigged meshes are handled – scaling doesn’t matter for worn mesh items, as other mechanisms are available for avatar scaling (e.g. the shape sliders, etc.). As these mechanisms aren’t available for Animesh in the initial release (e.g. there is no actual body shape associated with Animesh objects, ergo, no sliders), scaling is dependent upon custom joint positions.
Including more capabilities, such as possibly associating a body shape with an Animesh object, is seen as being a follow-on project to come some time after the initial release.
Bakes On Mesh
Project Summary
Extending the current avatar baking service to allow wearable textures (skins, tattoos, clothing) to be applied directly to mesh bodies as well as system avatars. This involves server-side changes, including updating the baking service to support 1024×1024 textures, and may in time lead to a reduction in the complexity of mesh avatar bodies and heads.
This work does not include normal or specular map support, as these are not part of the existing baking service.
Anchor Linden is continuing to work on adding five further channels to the Bake Service (left arm, left foot + three additional “general purpose” channels to be defined and used as required by creators) and the associated tattoo layer UI support to the viewer to allow them to be used.
This is a three-way set of updates, requiring changes to the appearance service, the simulator and the viewer. This, combined with concerns over maintaining backwards compatibility in the appearance service, means that getting the work to a testable state is taking a little longer to achieve.
In the meantime, the changes to the appearance service seem to have caused a glitch which can make some avatars appear to be wearing “black skirts” when seen on the Animesh test regions on Aditi. This will hopefully be corrected soon.
In Brief
Transparency shadow casting from rigged items: there is an issue with rigged / static meshes using transparencies (blended or masked), which causes shadows cast by them to render incorrectly (shadow rendering conforms only to the geometry silhouette).
Graham Linden is now looking at the issue; however, the release of Animesh is not contingent on any fix, which will probably be part of the ongoing rendering updates Graham is working on.
Rigged / static meshes using transparencies (blended or masked) tend to cast an incorrect shadow
Imposter Clipping: imposter avatars can appear “clipped”. This appears to be related to the problem of not having a good real-time bounding box associated with rigged meshes (which can result in some meshes appearing clipped as well). This is being looked at, and the hope is that if a better means of defining the bounding box for rigged meshes can be found, it will resolve the imposter issue. However, the release of things like Animesh is again not contingent on a fix for this issue being implemented.
On May 25th, 2018 the European Union’s General Data Protection Regulation (GDPR) comes into force. While an EU regulation, the GDPR not only applies to organisations located within the EU but it will also apply to organisations located outside of the EU if they offer goods or services to, or monitor the behaviour of, EU data subjects.
Earlier in May, the Lab issued a blog post providing an initial outline of their compliance with the GDPR, which covers both Second Life and Sansar. In that post they promised they would provide further details on how EU citizens can exercise their rights under the GDPR. On May 24th, they issued an e-mail summarising updates to their Privacy Policy. The e-mail reads in full:
We value our relationship with our community and your privacy. We have updated our Privacy Policy to increase transparency and comply with the European Union data protection law known as the General Data Protection Regulation (GDPR), which goes into effect on May 25, 2018. We encourage you to read our policies in full, but here are some highlights of what’s changed:
We provide additional details about the types of data that we collect, the ways in which we use it, and the measures we take to keep your data safe,
We added information about new choices and controls for users to manage their privacy, and
We added information about user’s rights regarding their privacy.
The updates to our policies will go into effect on May 25, 2018. If you have questions, please contact us at privacy@lindenlab.com.
Thank you for being part of the Linden Lab community!
The Linden Lab Team
The specific sections of the Privacy Policy that have been updated are:
One of the things I’ve enjoyed doing in Second Life is filming short video pieces. Most of my work, such as it is, is available on my YouTube channel. This contains a mix of videos of regions I’ve visited, art exhibits and installations I’ve enjoyed and the odd promotional piece.
My video efforts largely tailed-off in 2015/16 as a result of the software I was using to capture footage deciding it didn’t really want to play nicely with the Second Life viewer any more. Essentially, while flycamming would appear smooth during the capture process, on playback I’d suffer a lot of dropped frames (with no indication they were being dropped during capture), and just get general choppiness.
A lot of fiddling around – trying different CODECs, trying different versions of the software I prefer to use and so on (not all at once, but as and when time has allowed, which actually hasn’t been that often) – coupled (perhaps) with recent updates within the viewer, has encouraged me to try again here and there, with mixed results.
I recently updated my capture software (Bandicam) to the latest release (May 2018), and this seems to have smoothed out a few more issues. I’ve also recently picked up the latest release of my preferred editing software – Cyberlink Power Director – and have been finding my way around it. So, with both in hand, I thought I’d have a go with a short video of our Second Life island home to see how things turned out. More projects may now follow!
If anyone watching this piece is interested in having their home parcel / region landscaped, feel free to give me a shout in-world to discuss.
Officially opening on Thursday, May 24th, 2018 at 12:00 noon SLT, is the latest exhibition at Nitroglobus Roof Gallery, curated by Dido Haas.
One of the reasons I return to this gallery so often is Dido’s ability to invite artists to exhibit who have a talent to provoke the mind, give rise to feelings, and give us pause for contemplation with their art. In this, The Colour of Unspoken Words by Natalia Serenade is no exception.
“Looking around me I’ve been wondering why at certain moments there’s silence and no words are spoken,” Natalia says in her introductory notes. “Where do all these unspoken words go? Do they disappear in nowhere, get stuck in our throats, or are they flying away like birds? When this happens you realise how much silence there really is.”
She continues, “I want to paint the silence, the words not spoken, the freedom, thoughts swirling around me, the desire, the joy, the fear, the pain; AND there is my mind that is thinking constantly as it’s always filled with ideas and I have no choice but to create. Putting the colours together, with many cheerful tones, I will colour the day before it gets dark. So, let’s make today the most colourful of days.”
The result is 16 images of a distinctly abstract tone, utilising a measured approach to tone and colour which is both unique and rich in emotional content. This might be hinted at when looking at a specific painting and then given form by its title, or it might be clear from the lines of an image without the need to reference its name.
In the latter category, I’d point to the likes of A Broken Heart Can’t Bare To Speak and If You Would Know… Then there are pieces which seem to contain subtext within them: Someone hears every unspoken word and Reborn, both of which can offer up at least two potential narratives within their lines. In all of them the use of colour plays an important role in revealing their meaning and story: the colours used, their proportions relative to one another, their individual prominence in an image.
“Some people only speak when their words are more beautiful than the silence, some hide their words because of shyness,” Natalia says of the exhibition. In these paintings she brings all of those words and the silence in which they can exist, colourfully to life.
Tuesday, May 22nd saw the launch of a new enterprise for Sansar, with the public opening of a model of one of the icons of the original Star Trek TV series: the Bridge of the USS Enterprise, NCC-1701.
The experience has been developed as a joint venture between Roddenberry Entertainment, run by Eugene “Rod” Roddenberry Jr, the son of Star Trek creator Gene Roddenberry, and Linden Lab through Sansar Studios. It has been designed as a tie-in with the Mission Log Live podcasts / live streams hosted every Tuesday by Ken Ray and John Champion, which cover all things Star Trek (and often beyond), with news, discussions, Q&A sessions, guests, and so on.
The core rendering for the experience has been produced by OTOY, the creator of the OctaneRenderer. Some might be familiar with OTOY’s work on the opening title sequence of HBO’s stunning TV series Westworld. Given this pedigree, and having seen some of the publicity shots (as I covered here), I admit – as a long-term Star Trek fan – to looking forward to seeing the experience first-hand.
The Bridge of the USS Enterprise – a Sansar social space where people can watch weekly broadcasts of the Mission Log Live series, hosted by Ken Ray (seen on the viewscreen) and John Champion
Sadly, the official opening of the experience took place between 03:00 and 06:00 BST on the morning of Wednesday, May 23rd – FAR too late (or early!) for me. So it wasn’t until well after the event had finished that I was able to jump into the experience and have a look around.
The Bridge of the USS Enterprise is, first and foremost, visually stunning. It is beautifully rendered, with almost everything a Trek fan would expect to see there and (for the most part) in the correct colours. Visitors arrive close to the turbo elevator doors at the back of the bridge; to the left is the Engineering station, Montgomery Scott’s usual station when on the bridge, and to the right, Uhura’s Communications station, with Spock’s science station just beyond it.
Of course, the Captain’s chair is there, sitting in the central well behind the helm / navigation console and facing the main viewscreen. A point of note here is that the show isn’t actually recorded in the experience; rather, it is intended as a place fans of Star Trek and science fiction can drop into to watch the live stream broadcasts – or catch up with them after the fact – and enjoy the ambience of the Bridge. I understand that for the opening, around 25-30 people gathered in the experience – which must have been cosy – and Ken and John, the hosts of the show, dropped in after the fact.
All of the detailing is for the most part exquisite, although it is – aside from the viewscreen – a static rendering (at least in Desktop Mode with Sansar – I cannot speak to VR mode).
For the hardcore Trek fan there are perhaps one or two missing elements: the commissioning plaque is absent from the wall next to the turbo elevator doors; Spock’s station is lacking his “I see all through this box with a glowing slit” doohickey, for example. The helm and navigation console also appears to have been taken from the game Star Trek: Bridge Crew, rather than conforming to the original TV series design and colours. On the flip side, it’s interesting to see the upper sensor dome that sits above the bridge deck shown as a skylight with stars zipping by – something of a nod of the head towards the original Trek pilot episode The Cage, perhaps?
It would be nice to see some interactive elements in the design – being able to touch Sulu’s console and see his weapons target / sensor relay unfold itself, or to be able to “flick” switches on the ring consoles and see the images on the screen above them change – just to give visitors more of a sense of presence (not to mention the hoary old ability to sit on the chairs). However, these little niggles aside, for those who like / love / appreciate the original Star Trek TV series, the experience is a wonderfully nostalgic homage.
It’s a little disappointing that the first Mission Log to be broadcast with the opening of the experience didn’t show more in the way of images of the space to encourage interest among Trek fans watching the show – although it certainly was mentioned several times. However, this was somewhat made up for by the broadcast including an interview with one of the incarnations of James T. Kirk himself, Vic Mignogna, the man behind the engaging web series Star Trek Continues, which picks up right where the original series left off at the end of its third season, and includes some unique follow-ups to some of the episodes from that series, as well as featuring several special guest stars from the worlds of Star Trek.
While Sansar and the Enterprise bridge aren’t visually featured in the show, it is interesting to hear some of the comments Ken and John make in passing about Sansar – particularly where their avatars are concerned. While casual in nature, they do perhaps reflect one of the more noticeable “limitations” with the platform that even casual users are noting: the “sameness” evident in Sansar avatars at the moment, born out of a current lack of broad customisation capabilities.
Overall, Bridge of the USS Enterprise is an interesting experiment on the idea of offering social environments in virtual spaces that are specific to audiences who might not otherwise have an interest in such environments. With the planned tie-in with the Overwatch League now apparently on hold (assuming it still goes ahead), Bridge of the USS Enterprise is Sansar’s sole “partnership” social space of this kind right now, so it’ll be interesting to see how it continues to be used.
The next Mission Log Live event will be on Tuesday, June 5th, as John and Ken will be taking a break on Tuesday, May 29th.