High Fidelity launches documentation resource

High Fidelity have opened the doors on their new documentation resource, which is intended to be a living resource for all things HiFi, and which users involved in the current Alpha programme are invited to contribute to and help maintain as it develops and grows.

Introducing the new resource via a blog post, Dan Hope from High Fidelity states:

This section of our site covers everything from how to use Interface, to technical information about the underlying code and how to make scripts for it. We envision this as being the one-stop resource for everything HiFi.

What’s more, we want you to be a part of it. We’ve opened up Documentation to anyone who wants to contribute. The more the merrier. Or at least, the more the comprehensive … er. And accurater? Whatever, we’re better at software than pithy catchphrases. Basically, we think that the smart people out there are great at filling in holes we haven’t even noticed yet and lending their own experience to this knowledge base, which will eventually benefit everyone who wants to use it.

Already the wiki-style documentation area contains a general introduction and notes on documentation standards and contributions; a section on the HiFi coding standard; information on avatar standards, including use of mesh, the skeleton, rigging, and so on; information on various APIs; a range of tutorials (such as how to build your avatar from MyAvatar); and client build instructions for both OS X and Windows.

The documentation resource includes a number of tutorials, including the basic creation of an avatar from the MyAvatar “default” (top); and also includes a section on avatar standards, which covers avatar construction, the skeleton, joint orients, rigging, etc. (bottom) – click for full size

All told, it makes for an interesting resource. Dan’s blog post also notes that the documentation project is linked to the HiFi Worklist, allowing those who prefer not to write documentation to flag areas needing improvement, clarification, or fresh writing for those who do enjoy contributing documentation, and who can be rewarded for their efforts.

As well as the link from the blog post, the documentation resource can be accessed from the High Fidelity website menu bar – so if you’re playing with HiFi, why not check it out?


With thanks to Indigo Mertel for the pointer.

 

A look inside the alpha world of High Fidelity

I tend to keep an eye on the High Fidelity blog as and when I have the time, so as to keep up with developments (I’m currently waiting to see if I get into the next phase of alpha testing, having so far failed to build the client myself; I sucketh at tech sometimes. I also confess to hoping for another video from AKA…). This being the case, it was interesting to get a look behind the doors at what has been going on within High Fidelity, courtesy of self-proclaimed “bouncer” Dan Hope.

Dan’s blog post turns the spotlight away from the work of the core High Fidelity team and focuses it on those alpha testers / builders who have built the client, made the connection and have started poking at various aspects of the platform and the worklist.

Austin Tate is a name well-known within OpenSim and Second Life. His CV is quite stellar, including his roles as Director of the Artificial Intelligence Applications Institute (AIAI) and Professor of Knowledge-Based Systems at the University of Edinburgh. Austin’s work has encompassed AI, AI planning, and the development of collaborative workspaces using virtual environments and tools – particularly the I-Room.

Within High Fidelity, where he is known as Ai_Austin, he’s been extending the work on I-Rooms and collaborative spaces (both of which seem to have an ideal “fit” with High Fidelity) and has been working on 3D modelling, with Dan noting:

You might have figured out by now that 3D worlds are no good if they can’t handle 3D models accurately, which is why Ai_Austin also tests mesh handling for complex 3D objects. The image above shows the “SuperCar” mesh, which has 575,000 vertices and 200,000 faces, being tested in HiFi. There are several other meshes he uses, too, including one of the International Space Station that was provided by NASA.

SuperCar has also featured in Austin’s work within SL and OpenSim, where he has been providing invaluable insight into working with the Oculus Rift, the development of support for it within the viewer, and its use alongside other hardware (such as the Space Navigator). In fact, if you have any interest at all in the areas of AI, virtual world workspaces, VR / VW integration, etc., then I cannot recommend Austin’s blog highly enough (we also share a passion for astronomy / space exploration and (I suspect) for racing cars, but that’s something else entirely!).

Ctrlaltdavid might also be a name familiar to many in SL and OpenSim, being the HiFi name of Dave Rowe (Strachan OFarrel in SL), the man behind the CtrlAltStudio viewer which focuses on adding OpenGL stereoscopic 3D and Oculus Rift support to the viewer.

With High Fidelity, he’s working on Leap Motion integration, to provide a higher degree of control over an avatar’s hands and fingers than can be achieved through other tools, such as the Razer Hydra. The aim here is to increase the sense of immersion for users without necessarily relying on clunky hand-held devices. As we know, the Leap Motion sits on the desk and leaves the hands free to gesture, point, etc., and thus would seem an ideal companion when accessing a virtual environment like HiFi (or SL) using a VR headset; or even without the headset, if one wishes a degree of liberation from the keyboard.

Dan Hope demonstrates avatar finger motion using the Leap Motion, as being coded by CtrlAltDavid in High Fidelity (Image: High Fidelity blog)
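While we’ll have to wait to see how the integration itself is implemented, the general shape of the problem is easy to sketch in the JavaScript used for Interface scripts: read a rotation for each tracked joint from the sensor every frame, and copy it onto the avatar’s hand skeleton. Script.update, Quat, and MyAvatar.setJointData are HiFi script API calls as I recall them from the alpha; leapJointRotation() and the joint names are my own hypothetical stand-ins, stubbed so the sketch runs. This is emphatically not CtrlAltDavid’s actual code, just an illustration of the idea:

```javascript
// Purely illustrative sketch of per-frame finger mapping: joint rotations
// read from the sensor and copied onto the avatar's hand skeleton.
// Joint names depend on the avatar's rig; these are assumptions.
var FINGER_JOINTS = ["LeftHandIndex1", "LeftHandIndex2", "LeftHandIndex3"];

function leapJointRotation(jointName) {
    // Stub: curl every tracked joint slightly. The real integration would
    // return the rotation the Leap Motion measures for this joint.
    return Quat.fromPitchYawRollDegrees(20, 0, 0);
}

Script.update.connect(function (deltaTime) {
    FINGER_JOINTS.forEach(function (jointName) {
        // Copy the sensed rotation onto the avatar's skeleton each frame.
        MyAvatar.setJointData(jointName, leapJointRotation(jointName));
    });
});
```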

Opening this look at the work of various alpha testers / builders, Dan notes:

We can’t create a truly open system without making it compatible with other open-source tools, which is why Judas has been creating a workflow that will allow artists to make 3D models in the open source program Blender using HiFi’s native FBX format.

This forms a useful introduction to the work of Judas, who has been working to bring High Fidelity and Blender closer together by improving FBX support for the platform, work which is now bearing fruit. “Only last week something was added in that allowed me to import the HiFi avatars into Blender without destroying the rigs we need to animate them,” Judas is quoted as saying in the blog post.


Tony Parisi joins High Fidelity

On Thursday August 21st, Philip Rosedale announced that Tony Parisi had joined High Fidelity.

Precisely what Mr. Parisi’s position at HiFi is hasn’t been stated, but Mr. Rosedale does say:

Tony has just joined us as an advisor, and is also working with us on some secret High Fidelity stuff that is coming soon. He’s a perfect person to add to the High Fidelity team.

Tony Parisi (via SVVR)

Tony Parisi is the co-creator of the VRML and X3D ISO standards for networked 3D graphics, and a 3D technology innovator. He’s a career CTO / software architect and entrepreneur, has served, and is serving, on a number of working groups, and may also be familiar to some as one of the panellists on the SVVR Creating the VR Metaverse panel in April 2014. More recently, he was featured in a Drax Files Radio Hour feature-length interview, which I also reviewed (and am embedding again at the end of this piece, as it really is worth listening to if you missed it the first time around).

Tony’s full bio can be found here, and while the work he’ll be doing at HiFi is currently “secret”, Philip Rosedale does expand on why his involvement is a good fit for the company:

What we are building at High Fidelity is a bigger project than any one designer or company.  To bring virtual reality to everyone will mean a broad set of standards and open systems, and Tony has been designing and championing big pieces of those standards for his whole career, most recently with WebGL.

There can be no doubting Tony’s background and understanding of the potential for consumer-focused VR – again, just listen to the interview below for proof of that.

So interesting times at High Fidelity just got more interesting!

(Nice touch on the updated website as well, with the video header.)

Taking it Easy with High Fidelity, aka AKA sing

The folks at High Fidelity have been blogging a lot lately. I recently covered their moves to improve avatar facial expressions and to synch mouth / lip movements to better reflect our own as we speak (and sing!), and one of the more recent blog posts is something of a follow-up to this, with members of the HiFi team having a little fun. It’s fair to say that if they keep this up, Emily and Ozan and (I think that’s) Andrew on backing vocals could find themselves in-demand for gigs virtual and otherwise!

Anyway, we’ll get to that in a moment.

The other two posts are focused on Philip’s favourite subject: reducing latency, particularly where sound is concerned. As the oldest of the posts, Measuring the Speed of Sound from August 13th, makes clear, reducing latency is something of an obsession at High Fidelity, and the post talks about various experiments in trying to reduce audio latency. I’m still not convinced by Philip’s big downer on voice communications over mobile devices, where he’s in the past referred to the 500 msec delay as a “barrier” to communications; I’ve yet to find it stilting conversations.

That said, I can see his point in ensuring that audio and video remain synched when it comes to direct interaction, particularly given the nature of what High Fidelity are trying to achieve with the likes of facial and gesture capture to create a greater sense of presence. Within the post, Philip discusses the most recent work HiFi have been carrying out in comparing various media and how they handle audio and audio latency.

Paloma’s Javascript Project touches on the work of 17-year-old Paloma Palmer. A high school student, Paloma has been honing her JavaScript skills during the summer vacation as an intern at High Fidelity. Interviewed on video by HiFi’s Chris Collins, she describes her project: coding voxels to respond directly to volume input from a microphone in real-time, in effect a graphic equaliser built from voxel cubes which responds, with minimal delay, to both her and Chris’s voices and intonations as they speak – a further demonstration of the low latency goal HiFi are aiming towards, and one which, as the blog post notes, “opens up a bunch of new creative content areas for the virtual world”.

HiFi’s Chris Collins talks with Paloma Palmer, the 17-year-old intern who has been working at HiFi through her summer vacation (inset)
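Paloma’s actual script isn’t published in the post, but the core idea – mapping live microphone loudness onto a stack of voxels, rebuilt every frame – can be sketched in HiFi’s JavaScript along these lines. Script.update, Voxels.setVoxel / eraseVoxel, and MyAvatar.audioLoudness are API names as I recall them from the alpha, and the loudness scaling is a guess; treat this as an illustration of the technique, not her implementation:

```javascript
// Minimal sketch: a vertical "VU meter" of voxel cubes whose lit height
// tracks the current microphone loudness, updated every frame.
var METER_HEIGHT = 16;   // number of cubes in the meter
var VOXEL_SIZE = 0.25;   // edge length of each cube, in metres

Script.update.connect(function (deltaTime) {
    // Assumed property; the scale of the value is a guess.
    var loudness = MyAvatar.audioLoudness;
    var lit = Math.min(METER_HEIGHT, Math.round(loudness * METER_HEIGHT));
    for (var y = 0; y < METER_HEIGHT; y++) {
        if (y < lit) {
            // Lit cube: colour shifts from green at the base to red at the top.
            Voxels.setVoxel(0, y * VOXEL_SIZE, 0, VOXEL_SIZE,
                            Math.floor(255 * y / METER_HEIGHT),
                            Math.floor(255 * (1 - y / METER_HEIGHT)),
                            0);
        } else {
            // Unlit cube: remove it so the column visibly falls with the volume.
            Voxels.eraseVoxel(0, y * VOXEL_SIZE, 0, VOXEL_SIZE);
        }
    }
});
```

A real equaliser would split the audio into frequency bands and drive one column per band, but the volume-to-height mapping above is the low-latency heart of it.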

However, it is High Fidelity’s AKA covers Easy, sitting sandwiched between the Measuring and Paloma posts, which offers the most fun, as well as demonstrating some intriguing elements of HiFi’s capabilities.

The post actually takes the form of another music video (embedded below) in which Emily sings, with Ozan on guitar and, I think (and I see Ciaran Laval is of the same mindset as me), Andrew Meadows (himself aka, or at least previously aka, Andrew Linden) providing the backing vocals. Together they’ve formed HiFi’s own band, AKA (as in Also Known As), a name chosen because, as Emily explains, it allows them to be anyone they want to be. Chris Collins and Ryan Karpf are also on hand, although they don’t participate in the song.

The video this time is a cover of the Commodores’ Easy. We’re promised a deeper explanation of some of the technicalities behind it from “Executive Producer” Ryan at a later date. What is great about the video is that it is totally informal (witness the start, and keep running right until the end when you watch it).

The video is worth watching for the way Emily’s avatar clearly reflects her emotional response to the lyrics, and for the way Ozan’s avatar appears to be playing his guitar, rather than simply strumming it one-handed, as we’re perhaps used to seeing with avatars; his response to the music is also clear. I assume this has been done by some form of motion capture via whatever camera system he is using, but we’ll have to wait for Ryan’s follow-up to know more.

There are other great delights in the video – Andrew’s surfacing from the pond waters to give the backing “ahs” had me snorting coffee; they are delightfully surreal. I have to say that Chris Collins’ avatar looks somewhat blissed out (aka a little stoned – no offence, Chris!), an impression heightened by the cutaway to Emily’s look on his comment about feeling very cool and relaxed prior to the song starting!

All told, the video is an absolute delight, and also reveals some interesting little elements within HiFi (witness Ryan’s enthusiastic hand-clapping at the end).

Anyway, enjoy!

Getting more animated at High Fidelity

One of the things people have critiqued High Fidelity about is the look of their avatars. Yes, they can use 3D cameras to capture a user’s facial expressions and translate them into facial movements on an avatar but, well, the avatars just look a little odd.

Or at least, that’s an oft-heard or read comment. I’m not entirely in disagreement; SL avatars may not be technically up-to-snuff in many ways, but they can look good, and over the years, they have spoiled us somewhat.

However, High Fidelity is still only in an alpha phase, and the look and feel of its environments and avatars are bound to improve over time. As a demonstration of their attempts to improve things, the HiFi team have recently released a couple of videos and a blog post from their animator, Ozan Serim, formerly of Pixar Studios.

In the post, which marks his first time writing for the blog, Ozan explains how he’s trying to bring more advanced animation to the platform’s avatars to, as he puts it, “make live avatars look really amazing – as close to what we see in animated films today.” This isn’t as easy as it sounds, as he goes on to note:

This is a big challenge – we have to do everything in a fraction of a second without the benefits of an animator (like me!) being able to ‘post-process’ the results of what is motion captured.  So I’ve been working on the ‘rigging’: how a live 3D camera and a motion capture package like Faceshift is able to ‘puppeteer’ an avatar.  With less accurate data, we have to be clever about things like how we move the mouth to more simplistically capture the phonemes that make up speech.
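Ozan doesn’t share code, but the simplification he describes might look something like the sketch below: rather than resolving every phoneme, speech is collapsed into a handful of broad mouth shapes (visemes), which then drive the face rig. detectViseme() and setBlendshape() are hypothetical stand-ins, stubbed here so the example runs; MyAvatar.audioLoudness and Script.update are from the HiFi script API as I recall it. This is an illustration of the general technique, not Ozan’s rig:

```javascript
// A few broad mouth shapes, each expressed as blendshape weights.
var VISEME_SHAPES = {
    silent: { JawOpen: 0.0, LipsFunnel: 0.0 },
    aa:     { JawOpen: 0.8, LipsFunnel: 0.1 },   // open vowels
    oo:     { JawOpen: 0.3, LipsFunnel: 0.9 }    // rounded vowels
};

function detectViseme() {
    // Stub: a real classifier would inspect the audio spectrum; here we
    // simply pick a shape from how loud the microphone currently is.
    var loudness = MyAvatar.audioLoudness;
    if (loudness < 0.1) { return "silent"; }
    return loudness < 0.5 ? "oo" : "aa";
}

function setBlendshape(name, weight) {
    // Stub for whatever call the avatar's face rig exposes.
    print(name + " = " + weight);
}

Script.update.connect(function (deltaTime) {
    // Every frame: classify the speech, then drive the mouth shapes.
    var weights = VISEME_SHAPES[detectViseme()];
    for (var shape in weights) {
        setBlendshape(shape, weights[shape]);
    }
});
```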

To demonstrate the result, Ozan includes a video of Emily Donald, one of the other HiFi staff members, singing.

As well as this video, which uses the “default” format of HiFi avatar, Ozan and members of the HiFi team have been working on improving the overall look of their avatars, and some early results of their efforts can be seen in another music video, released at the start of August and linked to in the blog post.

This is again an experiment in rigging facial expressions to more fully match those of a human being, with special attention paid to the “A”s and “M”s as the avatar (Ozan) lip-synchs to Freddie Mercury singing Queen’s Bohemian Rhapsody. This is another video where it’s worth watching the avatar’s mouth movements – and also the eye and eyebrow movements, which likewise reflect a strong level of emotion.

Again, there’s a fair way to go here, but these early results are fascinating, and not just for the technical aspects of what is being done: capturing, processing, and rigging subtle facial expressions in real-time. As a commenter on the Bohemian Rhapsody video notes, it is “cool but creepy” – a reflection of the fact that HiFi have taken a further step into the Uncanny Valley. It’s going to be interesting to see how well they fare in crossing it.


With thanks to Indigo Mertel for the pointer.

 

High Fidelity: running the client

Update: It appears the video referred to in this article wasn’t for public consumption, as it has been made fully private.

High Fidelity recently started alpha testing elements of their platform, following on from a public call made in January via the High Fidelity website for alpha testers. The Alpha Sign-Up form is still available, and the client and other code are available through High Fidelity’s public code repository for those wanting to give it a go.

For those who do, Chris Collins (not to be confused with AvaCon’s Chris Collins / Feep Tuque!) from High Fidelity has produced a video introduction (no longer open to public viewing) to the High Fidelity client (simply called “Interface” by High Fidelity), which is designed to get people comfortable with some of the basics, and which provides a useful means of gaining greater insight into the platform. I’m including a link here rather than embedding, as the video is currently unlisted and I’m not sure how far he wants it shared, although I’ve dropped him a line to obtain an OK. In the meantime, I’ve taken the liberty of including some screen shots with this article.

Chris doesn’t run through the steps required to build the client, but instead takes launching the client (on a Mac system in his case) as his starting point, which allows the initial “what you can do” screen to be displayed – a quick overview of what can be done with the current alpha release and also, possibly, a useful way in the future of drawing people’s attention to the very basics of using a client.

The "What you can do" pop-up displayed when the Hi-Fi client starts. Could a pop-up like this help provide new users with basic pointers to the UI?
The “What you can do” pop-up displayed when the Hi-Fi client starts. Could a pop-up like this help provide new users with basic pointers to the UI?

An interesting aspect with High Fidelity is that even with the alpha, many optional hardware devices – such as a Razer Hydra, Leap Motion, Kinect, PrimeSense, Oculus Rift, etc. – appear to be pretty much plug-and-play.

The layout of the client is remarkably similar to that of the SL viewer 3.x UI. At the top is a typical menu bar, while to the left and bottom of the screen are a set of toolbar buttons, all related directly to building, which can be turned on/off by tapping the Tab key. An interesting aspect of the UI is the inclusion of a picture-in-picture (PiP) frame, which shows you your own avatar as seen by others. Whether this frame can be repositioned around the UI window isn’t clear from the video, but it does appear to be fixed in place.

High Fidelity’s Interface UI, with picture-in-picture frame showing the user their avatar (click for full size)

Even with a standard webcam, the system will pick up the user’s facial expressions and translate them to the avatar’s face. As voice is the primary means of communication within High Fidelity (although not the sole means – text is also possible), Voice Over IP (VoIP) is enabled on starting the client, and this is reflected in a sound level bar directly beneath the PiP avatar, graduated between blue, green, and red, with the latter indicating that the microphone may be being over-driven. There’s also a mute button to silence your own voice in your own headset / speakers.

The default avatar is a little robot, and the video demonstrates the ease with which this can be changed – although, as befits an alpha, the avatars within High Fidelity, even with their facial expressions, are very basic when compared with those of a grid-based virtual world; it’ll be interesting to see how far down the road towards detailed customisation the company will go, and how much further that takes them into the Uncanny Valley should they do so. Altering an avatar is done via menu selection and file name – there are no image previews of the avatars (as yet; previews would likely be better received by users).

There are a number of default avatars supplied with the system, and while changing your appearance is somewhat basic at this point, it’s a simple matter of a couple of menu selections

There is an option to upload avatars of your own – but the format and complexity of such models aren’t explored in the video.

As the video progresses, building using voxels is demonstrated and, more particularly, the coalesced nature of the voxels – as Chris hovers at a distance from the default @alpha.highfidelity.io location, everything appears as voxel cubes of varying sizes, which doesn’t make for a pleasant-looking world at present. However, as he flies closer, the voxels “break down” into smaller and smaller units, revealing more and more detail. I assume the overall “big voxel blocks” will become more refined and allow greater detail at a distance in the future, per Philip Rosedale’s discussions of the High Fidelity architecture and its use of voxels; at the moment, things are terribly blocky even from what seems a reasonable distance, and may draw unfavourable comparisons with something like Minecraft.
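For those curious about what’s happening under the hood, the behaviour Chris demonstrates is characteristic of an octree: a region of voxels renders as a single large cube until the viewer is close enough for it to merit splitting into its eight children. The sketch below, in the same JavaScript as the other examples here, is purely illustrative (High Fidelity’s engine is C++, and I haven’t inspected its traversal code); the node structure and drawCube() are my own inventions:

```javascript
function drawCube(node) {
    // Stand-in for the real draw call: just report what would be rendered.
    print("cube at (" + node.x + ", " + node.y + ", " + node.z + ") size " + node.size);
}

function renderVoxel(node, viewer, detailFactor) {
    var dx = node.x - viewer.x, dy = node.y - viewer.y, dz = node.z - viewer.z;
    var distance = Math.max(0.001, Math.sqrt(dx * dx + dy * dy + dz * dz));
    // Subdivide when the node is large relative to its distance from the
    // viewer, i.e. when it would cover many pixels on screen.
    if (node.children && node.size / distance > detailFactor) {
        node.children.forEach(function (child) {
            renderVoxel(child, viewer, detailFactor);
        });
    } else {
        // Far away (or a leaf): one coalesced cube stands in for the region.
        drawCube(node);
    }
}

// Example: a 32 m root node with no children renders as one big block,
// however close the viewer gets - hence the Minecraft look at a distance.
renderVoxel({ x: 0, y: 0, z: 0, size: 32, children: null },
            { x: 100, y: 0, z: 0 }, 0.5);
```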

Anyone familiar with building in Second Life will feel instantly at home building in High Fidelity; voxels are, in shape, analogous to the default cube prim, and even the way detail “pops out” at you could be said to be akin to how the shapes of sculpties pop out in an SL-style grid world, although obviously the underpinning technology is vastly different. There are also options to import / export voxel models, although, as with the avatar upload options, these are outside the scope of this initial video.
