A look inside the alpha world of High Fidelity

I tend to keep an eye on the High Fidelity blog as and when I have the time, and so try to keep up with developments (I’m currently waiting to see if I get into the next phase of alpha testing, as I’ve so far failed to build the client myself – I sucketh at tech sometimes – and I also confess to hoping for another video from AKA…). This being the case, it was interesting to get a look behind the doors at what has been going on within High Fidelity courtesy of self-proclaimed “bouncer”, Dan Hope.

Dan’s blog post turns the spotlight away from the work of the core High Fidelity team and focuses it on those alpha testers / builders who have built the client, made the connection and have started poking at various aspects of the platform and the worklist.

Austin Tate is a name well-known within OpenSim and Second Life. His c.v. is quite stellar: he is Director of the Artificial Intelligence Applications Institute (AIAI) and Professor of Knowledge-Based Systems at the University of Edinburgh. His work has encompassed AI, AI planning and the development of collaborative workspaces using virtual environments and tools – particularly the I-Room.

Within High Fidelity, where he is known as Ai_Austin, he’s been extending the work on I-Rooms and collaborative spaces (both of which seem to have an ideal “fit” with High Fidelity) and has been working on 3D modelling, with Dan noting:

You might have figured out by now that 3D worlds are no good if they can’t handle 3D models accurately, which is why Ai_Austin also tests mesh handling for complex 3D objects. The image above shows the “SuperCar” mesh, which has 575,000 vertices and 200,000 faces, being tested in HiFi. There are several other meshes he uses, too, including one of the International Space Station that was provided by NASA.

SuperCar has also featured in Austin’s work within SL and OpenSim, where he has provided invaluable insight into working with the Oculus Rift, the development of support for it within the viewer, and its use alongside other hardware (such as the Space Navigator). In fact, if you have any interest at all in the areas of AI, virtual world workspaces, VR / VW integration, etc., then I cannot recommend Austin’s blog highly enough (we also share a passion for astronomy / space exploration and (I suspect) for racing cars, but that’s something else entirely!).

Ctrlaltdavid might also be a name familiar to many in SL and OpenSim, being the HiFi name of Dave Rowe (Strachan OFarrel in SL), the man behind the CtrlAltStudio viewer, which focuses on adding stereoscopic 3D and Oculus Rift support to the SL / OpenSim viewer.

With High Fidelity, he’s working on Leap Motion integration, to provide a higher degree of control over an avatar’s hands and fingers than can be achieved through the use of other tools, such as the Razer Hydra. The aim here is to increase the sense of immersion for users without necessarily relying on clunky hand-held devices. As we know, the Leap Motion sits on the desk and leaves the hands free to gesture, point, etc., and thus would seem an ideal companion when accessing a virtual environment like HiFi (or SL) with a VR headset; or even without the headset, if one wishes a degree of liberation from the keyboard.

Dan Hope demonstrates avatar finger motion using the Leap Motion, as being coded by CtrlAltDavid in High Fidelity (Image: High Fidelity blog)
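To give a rough idea of how this kind of hand-tracking integration hangs together, here is a minimal sketch in Python. It is purely illustrative: none of the names below belong to High Fidelity’s actual scripting API or to the Leap Motion SDK; they are hypothetical stand-ins for the general pattern of driving whatever finger joints the tracker can currently see, and letting the rest fall back to the avatar’s default animation.

```python
# Purely illustrative sketch: hypothetical names, not High Fidelity's scripting
# API or the Leap Motion SDK. Shows the general pattern of mapping tracked
# finger-joint rotations onto an avatar's hand skeleton each frame.

from typing import Dict, Iterable, Tuple

Quaternion = Tuple[float, float, float, float]  # (x, y, z, w)


def apply_hand_tracking(
    tracked_rotations: Dict[str, Quaternion],  # e.g. {"LeftIndex1": (...), ...}
    avatar,                                    # hypothetical avatar object with joint setters
    hand_joints: Iterable[str],                # names of the avatar's finger joints
) -> None:
    """Drive the joints the tracker reports; release the others so they fall
    back to the default animation when a hand leaves the sensor's view."""
    for joint in hand_joints:
        rotation = tracked_rotations.get(joint)
        if rotation is not None:
            avatar.set_joint_rotation(joint, rotation)  # hypothetical setter
        else:
            avatar.clear_joint_override(joint)          # hypothetical reset
```

In a sketch like this, the fallback is the interesting design point: when a hand drifts out of the sensor’s field of view, the avatar simply returns to its normal animation rather than freezing in place.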

Opening this look at the work of various alpha testers / builders, Dan notes:

We can’t create a truly open system without making it compatible with other open-source tools, which is why Judas has been creating a workflow that will allow artists to make 3D models in the open source program Blender using HiFi’s native FBX format.

This forms a useful introduction to the work of Judas, who has been involved in bringing High Fidelity and Blender closer together in terms of providing improved FBX support for the platform, which is now bearing fruit. “Only last week something was added in that allowed me to import the HiFi avatars into Blender without destroying the rigs we need to animate them,” Judas is quoted as saying in the blog post.
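For anyone curious about what the Blender end of such a pipeline can look like, here is a minimal sketch using Blender’s bundled Python API (bpy). The output path and export options are illustrative assumptions on my part, not a description of Judas’ actual workflow or settings.

```python
# Illustrative only: export the current Blender scene to FBX from a script,
# the kind of step a Blender-to-HiFi content pipeline builds on. The path and
# option choices here are assumptions, not Judas' actual settings.
import bpy

bpy.ops.export_scene.fbx(
    filepath="/tmp/hifi_model.fbx",  # hypothetical output location
    use_selection=True,              # export only the currently selected objects
    add_leaf_bones=False,            # don't add extra end bones to the armature
)
```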

The comments following the post are interesting as well. High Fidelity has been critiqued on a number of occasions for its “cartoonish” avatars. However, the reality is that High Fidelity offers considerable flexibility in the creation of avatars, and those seen in the various videos are often the “default” or “baseline” avatars people can opt to use and customise. There is no reason why the avatars cannot be far more complex / human-looking (or otherwise!) – as Philip Rosedale states in a reply to a comment on this very subject, while also giving more insight into why the “baseline” avatars appear as they do:

You’ll be able to build anything you want in high fidelity in terms of avatars and photorealism. You can import fuse/mixamo models, for example, which are much more photorealistic. What we’ve found is that desaturating the skin detail makes the complex facial animations that are driven by our 3D camera setup much more compelling.

Also in the comments, Chuck Baggett points to a concern that some have voiced with regard to High Fidelity: that it is perhaps putting far too much emphasis on the “real” – capturing facial gestures and body movements, and relying on tools to achieve all this – which may well put some people off using it. As Chuck correctly notes, many people prefer to use virtual worlds as a means of extension / release, an escape from the demands and limitations of the physical world; they may not want every nuance of their physical actions and expressions reflected in the virtual. Philip Rosedale again offers some reassurance in this regard:

I’m pretty sure we’ll go in both directions really well. For example, we’ve got really deep voice control integration working in the early alpha … And the fundamental strategy for avatar control is one that blends animation with controller inputs, meaning for example that a small motion could be detected and automatically turned into a pre-record[ed] animation. Finally, we don’t require that your avatar frown if you do – the way we connect people’s faces to avatars allows us to have a different baseline from human->avatar, as you suggest.

All-in-all, another interesting and informative blog post from the HiFi team, which helps to illustrate the broader social / creative elements and capabilities being built into the platform to make it a far more rounded and potentially compelling environment for users. Hopefully, there will be more such inside looks at this kind of work in the future, as it certainly helps bring a greater sense of depth to what is happening at HiFi.

 

2 thoughts on “A look inside the alpha world of High Fidelity”

  1. Inara, the HiFi Interface client (needed to act as a “normal” user) and the server-side software are actually made available to alpha testers on Mac and Windows as prebuilt binaries, with almost daily (sometimes more than one a day) updates.


    1. I think I need to mark you as my lucky omen – 3 minutes after replying to your comment, my invite to the alpha actually arrived in my inbox! Serendipity strikes again!

