A touch of Venetian H.R. Giger in Second Life

Giger Dead Venice, October 2022 – click any image for full size

I’ve missed out on a couple of recent builds by Hera (Zee9) – her builds have a habit of coming and going with some rapidity – so when she dropped me the landmark to her October / Halloween 2022 build, I hopped over as soon as time permitted, in the hope a write-up would reach the pages of my blog in time for others to enjoy this latest outing.

Giger Dead Venice brings together one of Hera’s popular builds wrapped into a science-fiction / horror theme that is perfect for the season. The build in question is Venesha, Hera’s take on Venice, which has often been a setting suggestive of dark arts, vampires, and the undead. The S-F / horror theme is that of H.R. Giger, as witnessed through the original Alien film and some of his broader work.

Giger Dead Venice, October 2022

Reached by taking a gondola teleport from the region’s landing point, this is again a build that is uniquely Hera – with the majority of the mesh elements used within it created by her, utilising over 200 new textures she also created for the build. This version of Venesha also brings with it an enlarged port area, reflecting a recent version of Venesha reworked as the port of Kar from the Gor novels – one of the builds that did not see much light of day before being removed.

In terms of the Giger re-dress, this is exceptionally well done, with a richness of references to his broader art and his work on the Alien movie to be found throughout. In this, Giger Dead Venice – to me – surpasses Drune Giger City, her H.R. Giger-inspired re-working of her Drune city (see: Hera’s Drune Giger City in Second Life).

Giger Dead Venice, October 2022

I say this because of the rich layering of motifs and the way it draws out some of the more sexual aspects found in some of Giger’s work, combining them with suggestions of fetish and BDSM – at this point, I should note that aspects of this iteration of Venesha (or Venice if you prefer) might be considered somewhat NSFW.

The sexual elements are perhaps most obvious in some of the genitalia-like entrances to buildings and the phallic, umm, extensions around the base of the remodelled cathedral. However, these are not simply gratuitous; they are fully in keeping with much of Giger’s art, which often included a combination of sexual / horror elements (just take his original drawings for the head and tail of his Xenomorph, as a basic example).

Giger Dead Venice, October 2022

Within the cathedral are more direct references to the Alien films – face hugger eggs, sculptures of baby Xenomorphs bursting from the chests of imprisoned humans, and etchings depicting the alien and a humanoid similar to the one known as the Pilot / Space Jockey.

The cathedral is not the only building with an interior here. For example, to the north, the church-like building has now been replaced by a temple with a mystical star map at its heart, whilst to the south and east, within what had at one time been home to a library, sits a lounge bar that more directly combines the Giger influences with elements seen within some of Hera’s previous lounge and bar designs, together with BDSM and sci-fi motifs. As for the others, such as the café and bistro-style settings, I’ll leave them to you to find.

Giger Dead Venice, October 2022

Other touches suitable for the Halloween season might also be found by the keen-eyed, offering a clever mix of the classical with the work of H.P. Lovecraft, and what might be seen as a subtle commentary on the modern-day horror of right-wing “Christian” politics which puts love of the gun above love of human life.

Caught under a lurid sky that paints the setting with an otherworldly green tinge – the build really should be seen under the default EEP setting – Giger Dead Venice has much to commend it to visitors and photographers. With its waterside walks, alleys, sculptures, mists and symbolism, it is one of the more imaginative “Halloween”-type settings I’ve seen this year – and definitely among the most imaginative builds Hera has offered.

Giger Dead Venice, October 2022

Hera notes that there is no strict dress code for the setting, “But latex, rubber, leather, and metal just about covers it, or not depending on your preference 🙂 .”

SLurl Details

2022 Puppetry project week #43 summary

Puppetry demonstration via Linden Lab – see below. Demo video with the LL comment “We have some basic things working with a webcam and Second Life but there’s more to do before it’s as animated as we want.”

The following notes have been taken from chat logs and an audio recording of the Thursday, October 27th Puppetry Project meeting held at the Castelet Puppetry Theatre on Aditi. These meetings are generally held on alternate weeks to the Content Creation User Group (CCUG), on the same day / time (Thursdays at 13:00 SLT).

Notes in these summaries are not intended to be a full transcript of every meeting, but to highlight project progress / major topics of discussion.

Project Summary

  • Previously referred to as “avatar expressiveness”, Puppetry is intended to provide a means by which avatars can mimic physical world actions by their owners (e.g. head, hand, arm movements) through tools such as a webcam and using technologies like inverse kinematics (IK) and the LLSD Event API Plug-in (LEAP) system.
    • Note that facial expressions and finger movements are not currently enabled.
    • Most movement is in the 2D plane (e.g., hand movements from side-to-side but not forward / back), due to limitations with things like depth of field tracking through a webcam, which has yet to be addressed.
  • The back-end support for the capability is only available on Aditi (the Beta grid) and within the following regions: Bunraku, Marionette, and Castelet.
  • Puppetry requires the use of a dedicated viewer, the Project Puppetry viewer, available through the official Second Life Alternate Viewers page.
  • No other special needs beyond the project viewer are required to “see” Puppetry animations. However, using the capability to animate your own avatar and broadcast the results requires additional work – refer to the links below.
  • There is now a Puppetry Discord channel – those wishing to join it should contact members of LL’s puppetry team, e.g. Aura Linden, Simon Linden, Rider Linden, Leviathan Linden (not a full list of names at this time – my apologies to those involved whom I have missed).
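For those curious about the plumbing, a LEAP plug-in is essentially an external script exchanging length-prefixed, serialised messages with the viewer over its standard input / output. The following Python sketch is purely illustrative: the field names (“command”, “joint”, “target”) are my own stand-ins, not the actual protocol schema, and it uses JSON where the real system exchanges LLSD.

```python
import json

def make_puppetry_update(joint: str, position: tuple) -> str:
    """Build an illustrative Puppetry update message.

    The field names here ("command", "joint", "target") are
    hypothetical stand-ins; the real LEAP protocol exchanges
    LLSD-formatted data whose exact schema is defined by the
    project viewer.
    """
    message = {
        "command": "move",          # hypothetical command name
        "joint": joint,             # e.g. "mWristLeft"
        "target": list(position),   # desired position in avatar space
    }
    # LEAP frames are length-prefixed ("<length>:<payload>");
    # mimic that framing here with a JSON payload.
    body = json.dumps(message)
    return f"{len(body)}:{body}"

frame = make_puppetry_update("mWristLeft", (0.2, 0.1, 1.1))
print(frame)
```

In a real plug-in, frames like this would be written to stdout at some update rate, with the viewer applying the targets via its IK solver.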

Bugs, Feature Requests and Code Submissions

  • For those experimenting with Puppetry, Jiras (bug reports / fixes or feature requests) should be filed with “[Puppetry]” at the start of the Jira title.
  • There is also a public-facing Kanban board with public issues – those experiencing issues can also contact Wulf Linden.
  • Those wishing to submit code (plug-ins or other) or who wish to offer a specific feature that might be used with Puppetry should:

Further Information

Meeting Notes

Protocol Overhaul

At the previous meeting, Leviathan Linden noted the project team is going to overhaul the Puppetry/LEAP protocol. Since then:

OpenXR Support

Leviathan Linden asked for feedback on what the requested “OpenXR support” means to those requesting it – e.g. is it to run an OpenXR app and have a VR experience in SL, or is it to run an OpenXR app as a plug-in to provide measurement input to Puppetry?

The general response was a mix of both:

  • To generally provide the means for “proper” hardware support for motion capture such that puppetry isn’t just a “best guess” response via a webcam
  • To allow for more accurate interactions between avatars and objects; eventually moving to provide full support for VR headsets and controllers (requiring the ability to interact with scripted devices, operating levers, controls, etc., which could be correctly interpreted and acted upon by said scripts).

Currently, LL are more willing to consider OpenXR support as a part of the Puppetry work whilst regarding it as a potential step towards wider VR support in SL in the future.

Avatar Constraints / Interactions

The above question led to a broader discussion on avatar-to-avatar and avatar-to-object interactions starting with the avatar constraints / collision system.

  • As they are right now, avatar constraints and collisions within SL have been adequate for the platform, but lacking in places (collisions, for example, have no concept of an avatar’s arms / legs, limiting interactions between them and other objects).
  • OPEN-368 “[Puppetry] [LEAP]: Location Constraints” is a feature request outlining the benefits of overhauling the SL avatar constraints system to allow better interactions with objects, etc. This is currently open to those wishing to add further comments and feedback.
  • The question was raised as to how “fast” / reliable the required communications (including all the required bone interactions) could be made in order to ensure adequate / accurate response times with actions (e.g. so that when shaking hands, the hands of each avatar arrive at the same point at the same time, and are seen as shaking in both viewers).
  • Also discussed was determining how “reactions” might best be defined – could it be as “simple” as a pre-set animation?
  • One issue with this – interactions, OPEN-368, etc., – is that direct hooks from Puppetry to LSL had been seen as outside the scope of the project, simply because puppetry and the LEAP API are entirely viewer-side, and LSL simulator-side.  However, the discussion opened a debate on whether some means for this interaction should be provided, with two options being put forward:
    • Broadening the LEAP protocol, essentially using it to make the viewer scriptable with plug-ins that run on their own threads.
    • Providing a specific LSL function that would enable LSL to communicate / interact with the LEAP protocol / JSON (as is the case with the RLV / RLVa APIs used by some third-party viewers).
    • Both of these approaches were seen as potentially “doable”, if beyond the intended scope of the puppetry project.
  • A further issue with interactions and bone tracking (which would be required for accurate avatar-based interactions) is that bone tracking via LSL is at best limited, at worst non-existent; this raised the subject of possibly using attachment points as a proxy.
    • An additional problem here is whether or not it is possible to track the location of the attachment points in 3D space relative to any animation the avatar is playing (e.g. if an animation causes the avatar to raise their arm, is it possible to check the position of the wrist point?). This is currently something of an unknown, as it would either:
      • Require the simulator to inject a lot of additional calculations for joint and attachment positions;
      • Or require a new (optional) protocol where the viewer would just supply its in-world positions at some frame rate – which would require some calculation overhead on the part of the viewer;
      • Or – given work is in-hand to add the in-world camera position relative to the viewer, and also the avatar’s world orientation and look-at target – provide a straight dump of the animation mixdown together with the skeleton data, enabling the processing to be carried out in a module rather than the viewer.
  • As a result of these discussions, time has been requested to investigate the various options (which will likely include a determination of what, if anything, is to be included in the current project in terms of these additional capabilities).
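To illustrate the kind of calculation the above discussion touches on, the following Python sketch walks a tiny bone chain to find where a wrist attachment point ends up after an animation rotates the joints. It is entirely illustrative: the bone lengths and rotations are invented, and a real avatar skeleton uses full quaternion rotations across many more joints than this simplified planar model.

```python
import math

def rotate_z(v, angle_rad):
    """Rotate a 3D vector about the Z axis."""
    x, y, z = v
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (x * c - y * s, x * s + y * c, z)

def attachment_world_position(chain):
    """Walk a bone chain of (offset, rotation) pairs from the root,
    accumulating rotations to find the chain end's position.

    `chain` is a list of (bone_offset, z_rotation_radians) tuples –
    a deliberately simplified stand-in for a real skeleton, which
    would apply a full quaternion rotation at each joint.
    """
    position = (0.0, 0.0, 0.0)
    accumulated = 0.0
    for offset, rotation in chain:
        accumulated += rotation          # child bones inherit parent rotation
        rotated = rotate_z(offset, accumulated)
        position = tuple(p + r for p, r in zip(position, rotated))
    return position

# Hypothetical arm: shoulder -> elbow -> wrist, each bone 0.3m long.
arm = [
    ((0.3, 0.0, 0.0), 0.0),               # upper arm at rest
    ((0.3, 0.0, 0.0), math.radians(90)),  # elbow bent 90 degrees
]
wrist = attachment_world_position(arm)
print(tuple(round(c, 3) for c in wrist))  # wrist ends up at (0.3, 0.3, 0.0)
```

Whether calculations of this sort would live in the simulator, the viewer, or a separate module consuming an animation mixdown is exactly the open question noted above.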

Date of Next Meeting

  • Thursday, November 10th, 2022, 13:00 SLT.