One of the criticisms people have levelled at High Fidelity is the look of its avatars. Yes, 3D cameras can capture a user’s facial expressions and translate them into facial movements on an avatar, but, well, the avatars just look a little odd.
Or at least, that’s an oft-heard (or read) comment. I’m not entirely in disagreement; SL avatars may not be technically up-to-snuff in many ways, but they can look good, and over the years they have spoiled us somewhat.
However, High Fidelity is still only in an alpha phase, and the look and feel of its environments and avatars are bound to improve over time. As a demonstration of their efforts to improve things, the HiFi team have recently released a couple of videos and a blog post from their animator, Ozan Serim, formerly of Pixar.
In the post – which marks his first time writing for the blog – Ozan explains how he’s trying to bring more advanced animation to the platform’s avatars in order, as he puts it, to “make live avatars look really amazing – as close to what we see in animated films today.” This isn’t as easy as it sounds, as he goes on to note:
This is a big challenge – we have to do everything in a fraction of a second without the benefits of an animator (like me!) being able to ‘post-process’ the results of what is motion captured. So I’ve been working on the ‘rigging’: how a live 3D camera and a motion capture package like Faceshift is able to ‘puppeteer’ an avatar. With less accurate data, we have to be clever about things like how we move the mouth to more simplistically capture the phonemes that make up speech.
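To give a rough sense of what being “clever” about phonemes might involve, here is a minimal illustrative sketch in Python. It is purely hypothetical – the names, mapping, and smoothing approach are my own, not High Fidelity’s code – but it shows the general idea Ozan describes: collapsing the many phonemes of speech into a handful of simpler mouth shapes (“visemes”), and smoothing between frames so noisy capture data doesn’t make the mouth snap about.

```python
# Illustrative sketch only -- not High Fidelity's actual code.
# Hypothetical mapping from detected phonemes to simplified viseme shapes.
PHONEME_TO_VISEME = {
    "AA": "mouth_open",   # "A" sounds -> wide open mouth
    "M":  "lips_closed",  # "M", "B", "P" -> pressed lips
    "B":  "lips_closed",
    "P":  "lips_closed",
    "OO": "lips_round",   # "OO", "W" -> rounded lips
    "W":  "lips_round",
}

def viseme_weights(phoneme, confidence, previous, smoothing=0.6):
    """Turn a detected phoneme into smoothed blendshape weights.

    Low-confidence capture data is damped by scaling the target weight,
    and each frame is blended with the previous one so the mouth moves
    smoothly rather than jumping between shapes.
    """
    target = {v: 0.0 for v in set(PHONEME_TO_VISEME.values())}
    viseme = PHONEME_TO_VISEME.get(phoneme)
    if viseme:
        target[viseme] = confidence
    # Exponential smoothing between the previous frame and the new target.
    return {v: smoothing * previous.get(v, 0.0) + (1 - smoothing) * w
            for v, w in target.items()}

# Example: a noisy "AA" detection arriving after a closed-mouth frame.
frame = viseme_weights("AA", 0.8, {"lips_closed": 1.0})
print(frame)
```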
To demonstrate the result, Ozan includes a video of Emily Donald, one of the other HiFi staff members, singing.
That video uses the “default” style of HiFi avatar. Alongside this work, Ozan and other members of the HiFi team have been working on improving the overall look of their avatars, and some early results of their efforts can be seen in another music video, released at the start of August and linked in the blog post.
This is again an experiment in rigging facial expressions to more fully match those of a human being, with special attention paid to the “A”s and “M”s as the avatar (Ozan) lip-synchs to Freddie Mercury singing Queen’s Bohemian Rhapsody. It’s another video where it’s worth watching the avatar’s mouth movements – and the eye and eyebrow movements, which reflect a strong level of emotion.
Again, there’s a fair way to go here, but these early results are fascinating, and not just for the technical aspects of what is being done: capturing, processing, and rigging subtle facial expressions in real time. As a commenter on the Bohemian Rhapsody video notes, it’s “cool but creepy” – a reflection of the fact that HiFi have taken a further step into the Uncanny Valley. It’s going to be interesting to see how well they fare in crossing it.
Related Links
- Emily in Paris – High Fidelity blog
- High Fidelity website
- High Fidelity in this blog (Menu > Pey’s Travelogues > Other Worlds > High Fidelity)
With thanks to Indigo Martel for the pointer.