Hi-Fi and the Lab: in the press, & further speculation from me

Logos via Linden Lab and High Fidelity respectively

Following the announcement that High Fidelity – the company Philip Rosedale co-founded in 2013, after his departure from Linden Lab – has invested money, patents and staff into the latter, the news hit a lot of on-line tech publications and even the Wall Street Journal, creating a buzz around Second Life that has so far, if we’re honest, somewhat eluded the Lab in the wake of all the broader “metaverse” chatter that has been going on.

Of these articles, the most detailed came via GamesBeat / VentureBeat (by the ever-informative Dean Takahashi), c|net and The Wall Street Journal (the latter via Archive to avoid the paywall)¹. These added butter to the bread of the original announcement, which I’ve summarised below, and which gave me further pause for thought.

To deal with the bullet point takeaways first:

  • The patent transfer from Hi Fi is for distributed computing, and includes “moderation in a decentralised environment” patents.
  • In all, some 7 members of the Hi Fi team will be moving to work alongside the Second Life engineering team, effectively increasing it by around 20%.
  • The move will mean that around 165 people will be working on Second Life and Tilia.
  • Two elements of the work Hi Fi staff will be involved in are:
    • SL’s “social aspects”, given as “avatars and digital marketplace”. I assume the former is a reference to things like “avatar expressiveness” (on which, more below), and the latter to potentially greater accessibility to SL’s Marketplace for users on mobile options, etc.
    • Oberwager also indicated that Hi Fi’s work will be to assist LL in developing “the tools to make virtual economies work” and a concept for “underpinning FinTech to metaverse” – which I assume is a reference to involvement in Tilia, per my original speculations on the investment.
  • Separate to its involvement with LL, High Fidelity will continue to develop its spatial audio capabilities, which have already been licensed by a number of other companies.
  • In terms of SL itself:
    • 2020 still seems to be the platform’s most robust year, with the economy put in terms of a US $650 million GDP, with 345 million annual transactions (virtual goods, real estate, and services) and US $80 million cashed-out.
    • The platform boasts more than 1.6 million transactions per day and generates 1.8 billion messages (presumably user-to-user and Group IMs) per month.
    • Second Life won’t be moved to support VR headsets any time soon, simply because the latter need much more time to mature, both in terms of their technology and their market reach; something Rosedale believes (and I’d agree, for whatever that is worth) is unlikely to happen in the next 5 years. However, this could change once SL itself is more performant and better placed to naturally leverage VR hardware.
Philip Rosedale and Brad Oberwager, via VentureBeat / GamesBeat. Credit: Linden Lab

In terms of my own speculation, this primarily arises – and rather belatedly, given my own previous coverage of High Fidelity in this blog – as a result of a comment from Philip Rosedale in the piece by Dean Takahashi:

“The tech changes are all about communication,” Rosedale said. “I don’t think it’s about pixels. I don’t think it’s about radical richness. I don’t even think it’s all about 3D. I think the problem and the opportunity is communicating with people in a naturalistic way where I can interview you.”

– Philip Rosedale, speaking to Dean Takahashi

We already know from Linden Lab’s own review of 2021 – which includes a bullet list of deliverables planned for 2022 – that “avatar expressiveness” is coming to Second Life, and that it will bring “camera-based gestures and movement to your avatar for a whole new level of interaction and connectedness”. This is something that marries up with Rosedale’s comments above. More particularly, it is something High Fidelity started to develop back in 2014, when the company was working on its own decentralised virtual spaces – even producing an informal video that helped demonstrate that early work, which I’ve embedded below.

Yes, the avatars are somewhat cartoonish in looks, but this work was carried out in Hi Fi’s early days and before their avatars developed into something SL users might find more appealing, so don’t get too hung up on that fact.

What’s important to note is how the avatars’ faces and hand movements reflect those of the people behind them. Take, for example, Emily’s face as she emotionally responds to the lyrics she is singing, and the way Ryan’s avatar (with the beard) makes eye contact with viewers as it looks directly out of the screen, and the way his eyes / head naturally move as he also addresses Chris and Emily, who are sharing the same office space with him – plus the capture of his real-time hand-clapping at the end of the song! (And as a total aside specific to SL “old timers”, note that the guy providing the backing vocals is none other than Andrew Meadows, once (and again….?) aka Andrew Linden.)

If this capability could be brought into Second Life – and again, I have no idea how much further down the road Hi Fi got in developing / enhancing it and am aware that SL presents a range of its own technical challenges (range of mesh heads, rigging /weighting, etc.) – then clearly, it could offer considerable depth to avatar interactions for those who would care to leverage them. Take the SL live music scene, for example, and the potential for performers to add gestures to their music and (like Emily) have the emotions in singing transferred to their avatars. (I’ve also submitted a question on this subject for consideration in the upcoming Lab Gab session with Brad Oberwager and Philip Rosedale.)

There is a lot more that might be unpacked from these articles – such as the idea of a “decentralised environment” and what that might mean for things like SL and mobile device access, and a lot to chew on regarding SL’s approach to virtual spaces and how it stands apart from recent headline-grabbers like Facebook / Meta. Some of these comments should give comfort to those concerned about matters of privacy and the like, and Rosedale at least has carried his views on things beyond talking to journalists, embodying them in some of his tweets.

Philip Rosedale via Twitter, January 15th, 2022

Given what is available for consumption between the three articles, I would recommend a reading of all three rather than having me drone on further here, or dilute the core speculation I wanted to put forward as a possibility. As such, I’ll leave you to peruse them in your own time, if you’ve not already done so.


  1. While there were other articles on the announcement, most were either baseline reproductions of the original press release (with a sprinkling of commentary in some cases) or re-treads of one of these three pieces.

6 thoughts on “Hi-Fi and the Lab: in the press, & further speculation from me”

  1. It’s good to see good news coverage for Second Life once again and hopefully there will be more of it during 2022.

    I think there has been positive feedback from the Second Life community during the past week. It will be interesting to see what Philip will bring to the table to help Brad grow Second Life further.

    It would be good if the lab could announce some more updated Second Life statistics than just the 2020 figures.

    I guess we will hear about the special Lab Gab date and times shortly.


  2. Some interesting potentials in all that, though I have to wonder about the “decentralized environments” not being more a reference to something akin to OpenSimulator’s grid architecture, where there are multiple standalone grids tied together with the Hypergrid teleportation capability. Philip, in a keynote speech at one of the OpenSimulator conferences a few years ago, mentioned the possibility of making High Fidelity capable of travel between OS grids and High Fidelity. How much better a fit would it be to do that between Second Life and OpenSimulator initially, allowing goods, services and avatars to travel between them and thereby opening up the SL marketplace to a much greater market potential.

    I have not seen any references other than yours, Inara, to what is being planned for anything regarding a mobile viewer since this hint on the Forums: https://community.secondlife.com/forums/topic/478587-the-lab-ending-work-on-mobile-viewers/?do=findComment&comment=2369479 with no follow-up since.


    1. The decentralised aspect has me wondering as well; I’m curious if these relate to High Fidelity’s current product offering, or link back to their decentralised approach to their “servers” using local hardware. If the latter, then it raises questions around how provisioning elements of Second Life to mobile devices might be handled.

      I really don’t see SL being opened-up to OpenSimulator. Frankly, and with due respect to the latter, its user base is far, far, too small (yes, land area across OS-powered grids may well exceed that of SL, but land doesn’t actually buy goods and services). If anything were to happen on those lines, I lean more towards the future possibility of some form of “self-hosting” solution (akin to the Second Life Enterprise product circa 2008), assisted by / connected to LL’s core infrastructure and which is specifically intended to service specific markets (such as education), providing a more attractive “behind our firewall” option.

      As to the mobile product, it’s a question I am regularly raising at the appropriate meetings, but thus far, it appears the Lab is not ready to comment further on what their precise plans are. But again, the decentralised patents might form a part of this – assuming the Lab isn’t going to look more towards a streaming-based solution.


  3. Terrifically useful, Inara: thanks.

    One thought that occurs: while I’m actually sort of impressed by the facial and hand expressions here, I do wonder if this is going to end up privileging voice in SL. There are, of course, many users who do voice, but there is also really strong resistance to it from a sizable segment of the community, for all kinds of reasons. I think an environment that makes voice more attractive, or even a sort of cultural necessity, is going to have a pretty profound impact on SL.

    Somewhat more trivially, perhaps, I also wonder, as someone who does put in a fair amount of time and effort into creating expressive faces for photographs (using HUDs and the like), whether the SL avatar skeleton is currently adequate for the kinds of expressiveness shown above. Possibly this might be the impetus required to revamp this for the first time since Bento became available?


    1. Thanks, Scylla!

      I agree that there are cons (the resistance to voice) and pros (offering a means to overcome that resistance in some areas), but I would suggest the biggest “pro” for avatar expressiveness is to make Second Life even more appealing to market sectors where voice can play an important role, and the necessary hardware may be more readily available (or obtainable). Think of education, for example, where teacher / student interactions could be made a lot more responsive – being able to see the confusion on a student’s face as they struggle with a concept, and being able to respond to it.

      In terms of the skeleton – that’s one of the technical issues I glossed over by commenting “SL has a range of its own technical challenges” in the article, largely because I didn’t want the piece to get bogged down in such a discussion. However, it is something I hope to look at in more detail with the help of subject matter experts once we know something more of the approach and can make more accurate assumptions about it. One question is whether it will use morphs, outright skeletal deformations, or a mix – with the follow-on question of, if both, how this will affect those head designs that already use a mix of the two.


Comments are closed.