Tag Archives: High Fidelity

Philosophical frenemies: Altberg and Rosedale

High Fidelity – part of a composite promotional shot. Credit: High Fidelity (via Wired)

Yesterday, a Tweet from Jo Yardley pointed me to an interesting article in Wired by Rowland Manthorpe, entitled Second Life was just the beginning. Philip Rosedale is back and he’s delving into VR. It’s a lengthy, fascinating piece, arising out of a week Manthorpe spent with High Fidelity, while also taking time to poke his head around the door of Linden Lab, offering considerable food for thought – and it kept me cogitating things for a day, on-and-off.

There’s some nice little tidbits of information on both platforms scattered through the piece. For those that have tended to dismiss High Fidelity as a place of “cartoony” avatars, the images provided with the article demonstrate that High Fidelity are walking along the edge of the Uncanny valley; compare the Rosedale-like figure seen the a High Fidelity promo shot within it with a photo of the man himself (below). There’s also further indication that in terms of broader creativity and virtual space, High Fidelity is “closer” to the Second Life model of a virtual world than Sansar will be.

On the Lab’s side of things, we also get confirmation that multiple instances of the same space in Sansar will not be in any way connected (“One school group visiting the Egyptian tomb won’t bump into another – they will be in separate, identical spaces.”). There’s also a hint that Linden Lab may still be looking at Sansar as a “white label” environment.

High Fidelity can still be critiqued by some in SL for its “cartoony” avatar. The reality is however, that for those who wish, avatars in High Fidelity can be extremely life-like, as this picture of what Philip Rosedale might look like in High Fidelity (r) shows when compared to an actual photograph of him. Credits: High Fidelity / Jason Madara

But what really makes the piece interesting are the philosophical differences apparent in the development of the two platforms, each of which is very much rooted in the nature of the man at the helm of the respective company.

Rosedale is a dreamer – and that's not a negative statement. He's been driven by "dreams" and "visions" throughout most of his post-RealNetworks career. He also leans heavily into the collaborative, open borders model of development. Both have influenced the working spaces he builds around him. Reading Manthorpe's piece, the High Fidelity office appears to be run along a similar laissez-faire approach to that which marked the early years at Linden Lab: people dabble in what interests them, focused on the technology; there's a belief that if the company cannot solve a problem (such as practical in-world building using hand controllers), someone "out there" will, and all will be well.

By contrast, Altberg is more consumer / direction oriented with Sansar. Initial market sectors have been identified, work has been broken down into phases. A structured development curve has been set; as we’ve seen from Lab Chat and other sessions, there’s a reasonably clear understanding of what should be tackled first, and what can be pushed further down the development path. The platform itself is closed, controlled, managed.

Sansar™ Screen Shot, Linden Lab, October 2016, on Flickr

In adopting these approaches, and given their somewhat complicated business relationship (Rosedale still has a "sizeable" financial holding in Linden Lab; Linden Lab was one of the small investors in High Fidelity's $2.4 million round of seed funding), Rosedale and Altberg describe their relationship as "frenemies". They are both working towards similar goals, dealing with the same consumer-facing technology, and are equally sniffy about the other's product. Rosedale sees Sansar as being potentially too closed, too pigeon-holed in terms of how it will be perceived by consumers; Altberg sees High Fidelity as being too focused on the technology, and perhaps demanding more effort than most on-line consumers in the Facebook pre-packaged content age might be willing to invest.

When looked at from outside, the Rosedale / High Fidelity approach is perhaps more in keeping with the state of VR once all the hyperbole surrounding it is brushed aside: VR may well be part of our future, but no-one can honestly say at this point just how big a part of our future it will be. The Altberg / Linden Lab approach is rooted in business pragmatism: identify your markets and seek to deliver to those markets; build your product to reflect the market as it grows.

Neither approach is necessarily “right” or “wrong”, and there is certainly no reason why both cannot attract their own market share. But I have to admit I find myself leaning more in Altberg’s direction.

This is admittedly partly because a lot of Rosedale's broader comments about High Fidelity, the Internet, etc., come across as re-treads of things said ten years ago about Second Life and a transformative future never realised. But it's more particularly because – as noted above – no-one really knows how pervasive VR will be on a broad level. Other technologies such as augmented reality (AR) and mixed reality (MR) currently lie within the shadow cast by the hyperbole surrounding VR, but have the potential for far greater impact in how we conduct our lives and business. So identifying a market share and aiming for it seems to be the more solid approach insofar as establishing a user base and revenue flow might be concerned*.

Time will obviously tell on this; but one fact is clear: however you regard the philosophies held by Rosedale and Altberg, Manthorpe's article is a must read. A considered, well presented, in-depth piece, it sits as a catalyst for considerable thought and potential discussion.

*Edited 25 October 2016, to include this sentence, which was accidentally removed from the initial publication of this piece.

High Fidelity moves to “open beta”

Tuesday, April 27th saw High Fidelity move to an "open beta" phase, with a simple Twitter announcement. Having spent just over a year in "open alpha" (see my update here), the company and the platform have been making steady progress over the course of the last 12 months, offering increasing levels of sophistication and capability – some of which might surprise those who have not actually set foot inside High Fidelity but are nevertheless willing to offer comments concerning it.

I’ve actually neglected updating on HiFi for a while, my last report having been at the start of March. However, even since then, things have moved on at quite a pace. The website has been completely overhauled and given a modern “tile” look (something which actually seems to be a little be derivative rather than leading edge – even Buckingham Palace has adopted the approach).

High Fidelity open beta requirements

The company has also hired Caitlyn Meeks, former Editor in Chief of the Unity Asset Store, as their Director of Content, and she has been keeping people apprised of progress on the platform at a huge pace, with numerous blog posts, including technical overviews of new capabilities, as well as coverage of more social aspects of the platform – including pushing aside the myth that High Fidelity is rooted in "cartoony" avatars, when in fact it has a fairly free-form approach to avatars and to content.

High Fidelity may not be as sophisticated in terms of overall looks and content – or user numbers – as something like Second Life or OpenSim, but it is grabbing a lot of media attention (and investment) thanks to its very strong focus on the emerging range of VR hardware systems, and the beta announcement is timed to coincide with the anticipated increasing availability of the first generation HMDs from Oculus VR and HTC. Indeed, while the platform can be used without HMDs and their associated controllers, High Fidelity describe it as being "better with" such hardware.

High Fidelity avatars

I still tend to be of the opinion that, over time, VR perhaps won't be as disruptive in our lives as the likes of mixed / augmented reality as these gradually mature; as such, I remain sceptical that platforms such as High Fidelity and Project Sansar will become as mainstream as their creators believe, rather than simply vying for as much space as they can claim in similar (if larger) niches to that occupied by Second Life.

And even if VR does grow in a manner similar to that predicted by analysts, it still doesn't necessarily mean that everyone will be leaping into immersive VR environments to conduct major aspects of their social interactions. As such, it will be interesting to see what kind of traction High Fidelity gains over the course of the next 12 months, now that it might be considered to be moving more towards maturity – allowing for things like media coverage, etc., of course.

Which is not to say the capabilities aren't getting increasingly impressive, as the video below shows – and just look at the way Caitlyn's avatar interacts with the "camera" of our viewpoint!

High Fidelity: “VR commerce”, 200 avatars and scanning faces

High Fidelity have put out a couple of interesting blog posts on their more recent work, both of which are well worth reading.

In Update from our Interns, they provide a report by Edgar Pironti and Alessandro Signa, two of High Fidelity's interns who have been working on a number of projects within the company, including developing a single software controller for mapping inputs from a range of hand controllers, with the initial work involving the Razer Hydra, the HTC Vive controllers and the Space Navigator. They also discuss working on recording and playback functionality, which is expanded upon in the second blog post which caught my eye: the January newsletter, issued by Chris Collins.

This work has involved developing the ability to pre-record the data stream of an avatar – audio, facial expressions and body movement – in a format which can later be played back on a server under the control of JavaScript. As Chris notes, this makes it very easy to quickly populate a VR experience with compelling pre-recorded avatar content, allowing life-like characters to be added to a place or used within a machinima film.
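To give a rough sense of what driving such playback from a script might look like, here is a minimal JavaScript sketch of a server-side script that loads and loops a pre-recorded avatar clip. The Recording.* and Agent.* names and the clip URL are my own illustrative assumptions based on the description above, not confirmed High Fidelity API calls.

    // Hypothetical sketch: loop a pre-recorded avatar data stream from a
    // server-side (assignment client) script. The API names used here are
    // assumptions for illustration, not verified High Fidelity calls.
    var CLIP_URL = "http://example.com/recordings/greeter.hfr"; // hypothetical clip location

    Agent.isAvatar = true; // have the script present itself in-world as an avatar

    Recording.loadRecording(CLIP_URL, function (success) {
        if (!success) {
            print("Failed to load avatar recording: " + CLIP_URL);
            return;
        }
        Recording.setPlayerLoop(true);   // repeat the clip indefinitely
        Recording.startPlaying();        // replays the recorded audio, face and body data
    });

    // Stop playback cleanly when the script shuts down.
    Script.scriptEnding.connect(function () {
        if (Recording.isPlaying()) {
            Recording.stopPlaying();
        }
    });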

As their third reported project, Edgar and Alessandro discuss how they've been looking into creating a "VR commerce" environment. This combines elements of physical world shopping – sharing it with friends, actually grabbing and trying items, having discussions with sales staff, etc. – with the convenience of e-shopping, such as quickly changing colours, seeing customer reviews and feedback, and so on. As well as explaining how they went about the task, Edgar and Alessandro have put together a video demonstration:

In the High Fidelity newsletter, Chris Collins covers a number of topics, including work on optimising avatar concurrency on a single High Fidelity server. While this work most likely used Edgar's and Alessandro's approach to pre-recording avatar data streams mentioned above, the initial results are impressive: 200 avatars on a server, of which the 40 nearest the observer's viewpoint are rendered at 75Hz when using the Oculus Rift, and at a very high level of detail, with full facial animations.
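As a rough illustration of the kind of selection involved – my own simplified sketch, not High Fidelity's actual implementation – picking the 40 avatars nearest the observer for full-detail rendering amounts to a distance sort against the camera position, with everything else falling back to reduced detail:

    // Simplified illustration (not High Fidelity's code): pick the N avatars
    // closest to the observer for full-detail rendering; the rest can be drawn
    // at lower detail or a reduced update rate.
    var FULL_DETAIL_COUNT = 40;

    function distanceSquared(a, b) {
        var dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // 'avatars' is assumed to be an array of objects with a position {x, y, z};
    // 'cameraPosition' is the observer's viewpoint.
    function selectFullDetailAvatars(avatars, cameraPosition) {
        return avatars
            .slice() // copy so the caller's array is left untouched
            .sort(function (a, b) {
                return distanceSquared(a.position, cameraPosition) -
                       distanceSquared(b.position, cameraPosition);
            })
            .slice(0, FULL_DETAIL_COUNT);
    }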

200 avatars on a High Fidelity server as the company starts work on optimising avatar concurrency (image: High Fidelity)

One of the things High Fidelity has often been critiqued for by SL users is the cartoon-like avatars which were first shown as the company gradually cracked open its doors. These are still in use, but there has also been a lot of work put into making avatars more life-like should users so wish. However, there is a trade-off here, which has been discussed in the past: the so-called uncanny valley effect, whereby facial features which look and move almost, but not exactly, like those of natural beings can provoke a response of revulsion among some observers.

This has tended to make those investigating things like avatar usage cautious about pushing too close to any kind of genuine realism in their avatars, and Philip Rosedale has discussed High Fidelity's own gentle pushing at the edge of the valley. Now it seems the company is stepping towards the edge of the valley once more, using 3D scans of people to create avatar faces, as Chris notes in the newsletter:

We scanned our whole team using two different 3D scanners (one for hair and one for the face), and then used some simple techniques to make the results not be (too) uncanny … Although there is still a lot of work to do, we achieved these results with less than 2 hours of work beyond the initial scans, and the effect of having our team meetings using our ‘real’ avatars is quite compelling.

High Fidelity’s staff group photo using scans of their own hair and faces (image: High Fidelity)

Not everyone is liable to want to use an avatar bearing a representation of their own face, and the idea of using such a technique does raise issues around identity, privacy, etc., which should be discussed, but High Fidelity's work in this area is intriguing. That said, looking at the staff group photo, I would perhaps agree there is still more work needed; I'm not so much concerned about the technique pushing towards the edge of the uncanny valley so much as I am about the avatars in the photo looking just that little bit odd.

Also in the update is a discussion on audio reverb, a look at how the use of particles can be transformed when you're able to see your "hands" manipulating them, and a look at a purpose-built game in High Fidelity. These are all shown in the video accompanying the January newsletter, which I'll close with.

High Fidelity: Stephen Wolfram and more on tracking

On Tuesday, December 29th, High Fidelity announced that Stephen Wolfram has become their latest advisor.

British-born Stephen Wolfram is best known for his work in theoretical physics, mathematics and computer science. He began research in applied quantum field theory and particle physics, and published scientific papers when just 15 years old. By the age of 23, he was studying at the School of Natural Sciences of the Institute for Advanced Study in Princeton, New Jersey, USA, where he conducted research into cellular automata using computer simulations.

Stephen Wolfram via Quantified Self

When introducing Wolfram through the High Fidelity blog, Philip Rosedale notes this work had a profound impact on him, as did – later in life – Wolfram’s 2002 book, A New Kind of Science.

More recently, Wolfram has been responsible for WolframAlpha, an answer engine launched in 2009, which is one of the systems used by both Microsoft's Bing "decision engine" and Apple's Siri. In 2014, he launched the Wolfram Language as a new general multi-paradigm programming language.

In becoming an advisor to High Fidelity, Dr. Wolfram joins Peter Diamandis, the entrepreneur perhaps best known for the X-Prize Foundation; Dr. Adam Gazzaley, founder of the Gazzaley cognitive neuroscience research lab at the University of California; Tony Parisi, the co-creator of the VRML and X3D ISO standards for networked 3D graphics and a 3D technology innovator; Professor Ken Perlin of the NYU Media Research Lab; and Professor Jeremy Bailenson, the director of Stanford University's Virtual Human Interaction Lab.

In their October update, published at the start of November, High Fidelity followed up on the work they've been putting into various elements of tracking movement, including the use of BinaryVR for tracking facial movement and expressions. In particular, the BinaryVR software allows users to create their own virtual 3D face from 2D facial photos, allowing them to track their facial animations in real-time and transfer them onto the CG representation of their face.

Ryan balances one object atop another while Chris uses the gun he picked-up with a hand movement, then cocked it, to try to shoot the yellow object down

Integration of the BinaryVR software allows High Fidelity to track users' mouth movements through their HMDs, allowing their avatars to mimic these movements in-world, as Chris Collins demonstrates in the update video. The company has also been extending their work on full body tracking, as seen in my October coverage, and this is also demonstrated in the video alongside more in-world object manipulation by avatars, with Chris and Ryan building a tower of blocks and Chris then picking up a gun and shooting it down.

The hand manipulation isn’t precise at this point in time, as can be seen in the video, but this isn’t the point; it’s the fact that in-world objects can be so freely manipulated that is impressive. That said, it would be interesting to see how this translates to building: how do you accurately sized a basic voxel (a sort-of primitive for those of us in SL) shape to a precise length and width, for example, without recourse to the keyboard or potentially complicated overlays?

Maybe the answer to this last point is “stay tuned!”.