High Fidelity have opened the doors on their new documentation resource, which is intended to be a living resource for all things HiFi, and one to which users involved in the current Alpha programme are invited to contribute and help maintain in order to see it develop and grow.
Introducing the new resource via a blog post, Dan Hope from High Fidelity states:
This section of our site covers everything from how to use Interface, to technical information about the underlying code and how to make scripts for it. We envision this as being the one-stop resource for everything HiFi.
What’s more, we want you to be a part of it. We’ve opened up Documentation to anyone who wants to contribute. The more the merrier. Or at least, the more the comprehensive … er. And accurater? Whatever, we’re better at software than pithy catchphrases. Basically, we think that the smart people out there are great at filling in holes we haven’t even noticed yet and lending their own experience to this knowledge base, which will eventually benefit everyone who wants to use it.
Already the wiki-style documentation area contains a general introduction and notes on documentation standards and contributions; a section on the HiFi coding standard; information on avatar standards, including use of mesh, the skeleton, rigging, etc.; information on various APIs; a range of tutorials (such as how to build your avatar from MyAvatar); and client build instructions for both OS X and Windows.
The documentation resource includes a number of tutorials, including the basic creation of an avatar from the MyAvatar “default” (top); and also includes a section on avatar standards, which includes information on avatar construction, the skeleton, joint orients, rigging, etc. (bottom) – click for full size
All told, it makes for an interesting resource, and Dan’s blog post notes that the documentation project is also linked to the HiFi Worklist, allowing those who prefer not to write documentation to flag areas needing improvement, clarification or initial writing, so that those who do enjoy contributing documentation can tackle them and be rewarded for their efforts.
As well as the link from the blog post, the documentation resource can be accessed from the High Fidelity website menu bar – so if you’re playing with HiFi, why not check it out?
I tend to keep an eye on the High Fidelity blog as and when I have the time (I’m currently waiting to see if I get into the next phase of alpha testing, as I’ve so far failed to build the client (I sucketh at tech sometimes), so I try to keep up with developments. I also confess to hoping for another video from AKA…). This being the case, it was interesting to get a look behind the doors at what has been going on within High Fidelity courtesy of self-proclaimed “bouncer”, Dan Hope.
Dan’s blog post turns the spotlight away from the work of the core High Fidelity team and focuses it on those alpha testers / builders who have built the client, made the connection and have started poking at various aspects of the platform and the worklist.
Austin Tate is a name well-known within OpenSim and Second Life. His c.v. is quite stellar, and includes him being the Director of the Artificial Intelligence Applications Institute (AIAI) and a Professor of Knowledge-Based Systems at the University of Edinburgh. Austin’s work has encompassed AI, AI planning and the development of collaborative workspaces using virtual environments and tools – particularly the I-Room.
Within High Fidelity, where he is known as Ai_Austin, he’s been extending the work on I-Rooms and collaborative spaces (both of which seem to have an ideal “fit” with High Fidelity) and has been working on 3D modelling, with Dan noting:
You might have figured out by now that 3D worlds are no good if they can’t handle 3D models accurately, which is why Ai_Austin also tests mesh handling for complex 3D objects. The image above shows the “SuperCar” mesh, which has 575,000 vertices and 200,000 faces, being tested in HiFi. There are several other meshes he uses, too, including one of the International Space Station that was provided by NASA.
SuperCar has also featured in Austin’s work within SL and OpenSim, where he has been providing invaluable insight into working with the Oculus Rift, the development of support for it within the viewer, and using it with other hardware (such as the Space Navigator). In fact, if you have any interest at all in the areas of AI, virtual world workspaces, VR / VW integration, etc., then I cannot recommend Austin’s blog highly enough (we also share a passion for astronomy / space exploration and (I suspect) for racing cars, but that’s something else entirely!).
Ctrlaltdavid might also be a name familiar to many in SL and OpenSim, being the HiFi name of Dave Rowe (Strachan OFarrel in SL), the man behind the CtrlAltStudio viewer which focuses on adding OpenGL stereoscopic 3D and Oculus Rift support to the viewer.
With High Fidelity, he’s working on Leap Motion integration, to provide a higher degree of control over an avatar’s hands and fingers than can be achieved through the use of other tools, such as the Razer Hydra. The aim here is to increase the sense of immersion for users without necessarily relying on clunky hand-held devices. As we know, the Leap Motion sits on the desk and leaves the hands free to gesture, point, etc., and thus would seem an ideal companion when accessing a virtual environment like HiFi (or SL) when using a VR headset; or even without the headset if one wishes to have a degree of liberation from the keyboard.
Dan Hope demonstrates avatar finger motion using the Leap Motion, as being coded by CtrlAltDavid in High Fidelity (Image: High Fidelity blog)
Opening this look at the work of various alpha testers / builders, Dan notes:
We can’t create a truly open system without making it compatible with other open-source tools, which is why Judas has been creating a workflow that will allow artists to make 3D models in the open source program Blender using HiFi’s native FBX format.
This forms a useful introduction to the work of Judas, who has been involved in bringing High Fidelity and Blender closer together in terms of providing improved FBX support for the platform, which is now bearing fruit. “Only last week something was added in that allowed me to import the HiFi avatars into Blender without destroying the rigs we need to animate them,” Judas is quoted as saying in the blog post.
On August 14th, the High Fidelity team issued a blog post featuring the first number by AKA, the company’s informal group of singers comprising Emily, Ozan and Andrew. While light-hearted in nature, the video further demonstrated HiFi’s work on facial expression and gesture capture.
I wrote about the video and post as a part of a quick update on HiFi, and noted at the time that “executive producer” (and HiFi co-founder) Ryan Karpf would be providing more information on what went into the video and session.
Ryan Karpf, HiFi co-founder and “executive producer” for AKA’s cover of “Easy”
Keeping to his word, Ryan did just that on Tuesday August 26th, releasing a video on how it was all done (embedded below), together with a brief blog post inviting those already in the HiFi Alpha testing programme to consider submitting their own videos … assuming, that is, they have the hardware.
Ryan’s piece explains how the team put together the music video and overcame some stumbling blocks, although I admit I’d probably have a better chance of understanding Brad Hefta-Gaub’s explanation of a server crash issue had he been speaking Klingon (which is probably why I’m not in the Alpha)! Fortunately, Ryan is on-hand to offer a single-sentence translation into English. The video also reveals how the team were unable to film the song as a single “live” performance, as had been hoped, but in the end had to rely on traditional post-recording editing to produce the finished piece.
As well as being informative, Ryan’s video is also somewhat hypnotic … I confess to becoming quite captivated by the level of conversation going on between his eyebrows even before he presents us with more exaggerated facial movements to underscore a point! 🙂
It’ll be interesting to see how this work develops, and whether the HiFi team really do get to the point of being able to record a completely fluid and “live” performance; I rather suspect they will. But even without this, the Easy video tends to demonstrate how much more engaging something like a musical set could be when one can see more of the performer’s facial expressions and actions when playing a musical instrument reflected in their avatar.
In the meantime, and for ease of reference (and because I like it and find myself singing along with Emily), here is the music video itself, complete with Chris and Ryan’s “outtakes”.
Precisely what Mr. Parisi’s position at HiFi is, isn’t stated, but Mr. Rosedale does say:
Tony has just joined us as an advisor, and is also working with us on some secret High Fidelity stuff that is coming soon. He’s a perfect person to add to the High Fidelity team.
Tony Parisi (via SVVR)
Tony Parisi is the co-creator of the VRML and X3D ISO standards for networked 3D graphics, and a 3D technology innovator. He’s a career CTO / software architect and entrepreneur, has served and is serving on a number of working groups, and may also be familiar to some as one of the panellists on the SVVR Creating the VR Metaverse panel in April 2014. More recently, he was featured in a Drax Files Radio Hour feature-length interview, which I also reviewed (and am embedding again at the end of this piece, as it really is worth listening to if you missed it the first time around).
Tony’s full bio can be found here, and while the work he’ll be doing at HiFi is currently “secret”, Philip Rosedale does expand on why his involvement is a good fit for the company:
What we are building at High Fidelity is a bigger project than any one designer or company. To bring virtual reality to everyone will mean a broad set of standards and open systems, and Tony has been designing and championing big pieces of those standards for his whole career, most recently with WebGL.
There can be no doubting Tony’s background and understanding of the potential for consumer-focused VR – again, just listen to the interview below for proof of that.
So interesting times at High Fidelity just got more interesting!
(Nice touch on the updated website as well, with the video header.)
The folks at High Fidelity have been blogging a lot lately. I covered recent moves to improve avatar facial expressions and to synch the mouth / lips to better reflect our movements as we speak (and sing!), and one of the more recent blog posts is something of a follow-up to this, with members of the HiFi team having a little fun. It’s fair to say that if they keep things up, Emily and Ozan and (I think that’s) Andrew on backing vocals could find themselves in demand for gigs virtual and otherwise!
Anyway, we’ll get to that in a moment.
The other two posts are focused on Philip’s favourite subject: reducing latency, particularly where sound is concerned. As the oldest of the posts, Measuring the Speed of Sound from August 13th, makes clear, reducing latency is something of an obsession at High Fidelity, and the post talks about various experiments in trying to reduce audio latency. I’m still not convinced by Philip’s big downer on voice communications over mobile devices, where he’s in the past referred to the 500 msec delay as a “barrier” to communications; I’ve yet to find it stilting conversations.
That said, I can see his point in ensuring that audio and video remain synched when it comes to direct interaction, particularly given the nature of what High Fidelity are trying to achieve with the likes of facial and gesture capture to achieve a greater sense of presence. Within the post, Philip discusses the most recent work HiFi have been carrying out in comparing various media and how they handle audio and audio latency.
Paloma’s Javascript Project touches on the work of 17-year-old Paloma Palmer. A high school student, Paloma has been honing her JavaScript skills during the summer vacation as an intern at High Fidelity. Interviewed on video by HiFi’s Chris Collins, she describes her project: coding voxels to respond directly to volume inputs from a microphone in real-time, creating a form of graphic equaliser in voxel cubes which responds, with minimal delay, to both her and Chris’ voices and intonations as they speak – a further demonstration of the low latency goal HiFi are aiming towards, and one which, as the blog post notes, “opens up a bunch of new creative content areas for the virtual world”.
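Paloma’s actual code isn’t published in the post, but the core idea – turning a live microphone volume level into the heights of a row of voxel columns – can be sketched in a few lines of JavaScript. This is purely illustrative: the function names, the centre-weighting and the decay factor are my own assumptions, and HiFi’s real voxel scripting API isn’t shown here.

```javascript
// Hypothetical sketch of a volume-driven voxel equaliser.
// Function names and tuning values are illustrative, not HiFi's API.

// Root-mean-square volume of a buffer of audio samples in the range [-1, 1].
function rms(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Convert a volume level (0..1) into voxel column heights. Columns nearer
// the centre respond more strongly (a rough equaliser shape), and a decay
// factor lets columns fall smoothly between loud moments to reduce jitter.
function volumeToVoxelHeights(volume, prevHeights, maxHeight = 10, decay = 0.8) {
  const mid = (prevHeights.length - 1) / 2;
  return prevHeights.map((prev, i) => {
    const weight = 1 - Math.abs(i - mid) / (mid + 1);
    const target = Math.round(volume * weight * maxHeight);
    // Never drop faster than the decay allows.
    return Math.max(target, Math.floor(prev * decay));
  });
}
```

In a live script, each animation frame would feed the latest microphone buffer through `rms()` and then redraw each voxel column at the height returned by `volumeToVoxelHeights()`; the minimal per-frame work is what keeps the visual response feeling instantaneous.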
HiFi’s Chris Collins talks with Paloma Palmer, the 17-year-old intern who has been working at HiFi through her summer vacation (inset)
However, it is High Fidelity’s AKA covers Easy, sandwiched between Measuring and Paloma, which offers the most fun, as well as demonstrating some intriguing elements of HiFi’s capabilities.
The post actually takes the form of another music video (embedded below) in which Emily sings, with Ozan on guitar and, I think (and I see Ciaran Laval is of the same mindset as me), Andrew Meadows (himself aka – or at least previously aka – Andrew Linden) providing the backing vocals. Together they’ve formed HiFi’s own band, AKA (as in Also Known As), a name chosen because, as Emily explains, it allows them to be anyone they want to be. Chris Collins and Ryan Karpf are also on hand, although they don’t participate in the song.
The video this time is a cover of the Commodores’ Easy. We’re promised a deeper explanation of some of the technicalities behind it from “Executive Producer” Ryan at a later date. What is great about the video is that it is totally informal (witness the start, and keep running right until the end when you watch it).
The video is worth watching for the way Emily’s avatar clearly reflects her emotional response to the lyrics, and for the way Ozan’s avatar appears to be playing his guitar, rather than simply strumming it one-handed, as we’re perhaps used to seeing with avatars; his response to the music is also clear. I assume this has been done by some form of motion capture via whatever camera system he is using, but we’ll have to wait for Ryan’s follow-up to know more.
There are other great delights in the video – Andrew’s surfacing from the pond waters to give the backing “ahs” had me snorting coffee; they are delightfully surreal. I have to say that Chris Collins’ avatar looks somewhat blissed out (aka a little stoned – no offence, Chris!), an impression heightened by the cutaway to Emily’s look on his comment about feeling very cool and relaxed prior to the song starting!
All told, the video is an absolute delight, and also reveals some interesting little elements within HiFi (witness Ryan’s enthusiastic hand-clapping at the end).
One of the things people have critiqued High Fidelity about is the look of their avatars. Yes, they can use 3D cameras to capture a user’s facial expressions and translate them into facial movements on an avatar but, well, the avatars just look a little odd.
Or at least, that’s an oft-heard or read comment. I’m not entirely in disagreement; SL avatars may not be technically up-to-snuff in many ways, but they can look good, and over the years, they have spoiled us somewhat.
However, High Fidelity is still only in an alpha phase; and things are bound to improve over time with the look and feel of their environments and their avatars. As a demonstration of their attempts to improve things, the HiFi team have recently released a couple of videos and a blog post from their animator, Ozan Serim, formerly of Pixar Studios.
In the post – which marks his first time writing for the blog – Ozan explains how he’s trying to bring more advanced animation to the platform’s avatars to, as he puts it, “make live avatars look really amazing – as close to what we see in animated films today.” This isn’t as easy as it sounds, as he goes on to note:
This is a big challenge – we have to do everything in a fraction of a second without the benefits of an animator (like me!) being able to ‘post-process’ the results of what is motion captured. So I’ve been working on the ‘rigging’: how a live 3D camera and a motion capture package like Faceshift is able to ‘puppeteer’ an avatar. With less accurate data, we have to be clever about things like how we move the mouth to more simplistically capture the phonemes that make up speech.
To demonstrate the result, Ozan includes a video of Emily Donald, one of the other HiFi staff members, singing Christina Aguilera’s Beautiful. While Emily’s avatar might still look somewhat cartoonish, what is interesting to note is the way her mouth moves, and how the emotional content of the lyrics is captured in very subtle facial movements. Ozan notes himself that things are a little simplistic and there is more work to do – but even so, this early experiment shows much promise.
As well as this video, using the “default” format of HiFi avatar, Ozan and members of the HiFi team have been working on improving the overall look of their avatars, and some early results of their efforts can be seen in another music video released at the start of August, which is linked to in the blog post.
This is again an experiment in rigging facial expressions to more fully match those of a human being, with special attention being paid to the “A”s and “M”s as the avatar (Ozan) lip-synchs to Freddie Mercury singing Queen’s Bohemian Rhapsody. This is another video where it’s worth watching the avatar’s mouth movements – and also eye and eyebrow movements, which also reflect a strong level of emotion.
Again, there’s a fair way to go here, but these early results are fascinating, and not just for the technical aspects of what is being done here: capturing, processing and rigging subtle facial expressions in real-time. As a commentator on the Bohemian Rhapsody video notes, “cool but creepy” – a reflection of the fact that HiFi have taken a further step into the Uncanny Valley. It’s going to be interesting to see how well they fare in crossing it.