High Fidelity moves to “open beta”

Tuesday, April 27th saw High Fidelity move to an “open beta” phase, with a simple Twitter announcement. Having spent just over a year in “open alpha” (see my update here), the company and the platform have been making steady progress over the course of the last 12 months, offering increasing levels of sophistication and capability – some of which might actually surprise those who have not set foot inside High Fidelity but are nevertheless willing to offer comments concerning it.

I’ve actually neglected updating on HiFi for a while, my last report having been at the start of March. However, even since then, things have moved on at quite a pace. The website has been completely overhauled and given a modern “tile” look (something which actually seems a little derivative rather than leading edge – even Buckingham Palace has adopted the approach).

High Fidelity open beta requirements

The company has also hired Caitlyn Meeks, former Editor in Chief of the Unity Asset Store, as their Director of Content, and she has been keeping people apprised of progress on the platform at a huge pace, with numerous blog posts, including technical overviews of new capabilities as well as more social aspects of the platform – including dispelling the myth that High Fidelity is rooted in “cartoony” avatars; it in fact takes a fairly free-form approach to avatars and to content.

High Fidelity may not be as sophisticated in terms of overall looks and content – or user numbers – as something like Second Life or OpenSim, but it is grabbing a lot of media attention (and investment) thanks to its very strong focus on the emerging range of VR hardware systems, and the beta announcement is timed to coincide with the anticipated increasing availability of the first generation HMDs from Oculus VR and HTC. Indeed, while the platform can be used without HMDs and their associated controllers, High Fidelity describe it as being “better with” such hardware.

High Fidelity avatars

I still tend to be of the opinion that, over time, VR won’t be as disruptive in our lives as the likes of Mixed / Augmented Reality as these gradually mature; as such, I remain sceptical that platforms such as High Fidelity and Project Sansar will become as mainstream as their creators believe, rather than simply vying for as much space as they can claim in similar (if larger) niches to that occupied by Second Life.

And even if VR does grow in a manner similar to that predicted by analysts, it still doesn’t necessarily mean that everyone will be leaping into immersive VR environments to conduct major aspects of their social interactions. As such, it will be interesting to see what kind of traction High Fidelity gains over the course of the next 12 months, now that it might be considered to be moving more towards maturity – allowing for things like media coverage, etc., of course.

Which is not to say the capabilities aren’t getting increasingly impressive, as the video below notes – just look at the way Caitlyn’s avatar interacts with the “camera” of our viewpoint!

 

 


High Fidelity: “VR commerce”, 200 avatars and scanning faces

High Fidelity have put out a couple of blog posts on their more recent work, both of which make for interesting reading.

In Update from our Interns, they provide a report by Edgar Pironti and Alessandro Signa, two of High Fidelity’s interns, who have been working on a number of projects within the company, including developing a single software controller for mapping inputs from a range of hand controllers, with the initial work involving the Razer Hydra, the HTC Vive’s controllers and the Space Navigator. They also discuss working on recording and playback functionality, which is also expanded upon in the second blog post which caught my eye, the January newsletter, issued by Chris Collins.
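The single-controller idea can be sketched as a small routing layer. Everything here – the device names, raw input names and the action set – is illustrative rather than High Fidelity’s actual API:

```javascript
// Minimal sketch of a unified controller layer: each physical device
// (Hydra, Vive wand, Space Navigator) registers a mapping from its raw
// inputs to a shared set of named actions, so application scripts only
// ever see the unified actions, never the hardware specifics.

function makeInputRouter() {
  const mappings = {}; // deviceName -> { rawInputName: actionName }
  const handlers = {}; // actionName -> callback
  return {
    addDevice(device, mapping) { mappings[device] = mapping; },
    onAction(action, fn) { handlers[action] = fn; },
    // Called by per-device drivers with raw input events.
    dispatch(device, rawInput, value) {
      const action = (mappings[device] || {})[rawInput];
      if (action && handlers[action]) handlers[action](value);
    },
  };
}

// Usage: two very different devices both drive the same "grab" action.
const router = makeInputRouter();
router.addDevice("hydra", { RT: "grab" });
router.addDevice("vive",  { RightTrigger: "grab" });
router.onAction("grab", (v) => console.log("grab strength:", v));
router.dispatch("hydra", "RT", 0.9);
router.dispatch("vive", "RightTrigger", 1.0);
```

The point of this style of design is that scripts subscribe only to named actions, so supporting a new device means adding one mapping table rather than touching every script.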

This work has involved developing the ability to pre-record the data stream of an avatar – audio, facial expressions and body movement – in a format which can later be played back on a server under the control of JavaScript. As Chris notes, this makes it very easy to quickly populate a VR experience with compelling pre-recorded avatar content, allowing life-like characters to be added to a place, or for use within a machinima film.
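As a rough illustration of the concept (not High Fidelity’s actual recording format, which isn’t detailed in the post), a recorder can be thought of as storing timestamped frames of avatar state which a server-side script later steps through:

```javascript
// Illustrative avatar data-stream recorder: capture timestamped frames of
// avatar state, then replay them against any clock. The frame fields
// (jaw, head yaw, etc.) are made-up placeholders for the real stream of
// audio, facial expression and body movement data.

function makeRecorder() {
  const frames = [];
  return {
    record(timeMs, state) {
      // Each frame bundles everything needed to reconstruct the avatar.
      frames.push({ t: timeMs, state: { ...state } });
    },
    // Return the state of the frame at-or-before `timeMs`, so playback
    // can be driven by a server-side script's own timer.
    playbackAt(timeMs) {
      let current = null;
      for (const f of frames) {
        if (f.t <= timeMs) current = f;
        else break;
      }
      return current ? current.state : null;
    },
  };
}

// Usage: record two frames, then sample playback between them.
const rec = makeRecorder();
rec.record(0,   { jawOpen: 0.0, headYaw: 0 });
rec.record(100, { jawOpen: 0.6, headYaw: 5 });
console.log(rec.playbackAt(50)); // still holds the frame from t=0
```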

As their third reported project, Edgar and Alessandro discuss how they’ve been looking into creating a “VR commerce” environment. This combines elements of physical world shopping – sharing it with friends, actually grabbing and trying items, having discussions with sales staff, etc. – with the convenience of e-shopping, such as quickly changing colours, seeing customer reviews and feedback, and so on. As well as explaining how they went about the task, Edgar and Alessandro have put together a video demonstration:

In the High Fidelity newsletter, Chris Collins covers a number of topics, including work on optimising avatar concurrency on a single High Fidelity server. While this work most likely used Edgar’s and Alessandro’s approach to pre-recording avatar data streams mentioned above, the initial results are impressive: 200 avatars on a server, of which the 40 nearest the observer’s viewpoint are rendered at 75Hz when using the Oculus Rift, at a very high level of detail, with full facial animations.

200 avatars on a High Fidelity server as the company starts work on optimising avatar concurrency (image: High Fidelity)

One of the things that High Fidelity has often been critiqued for by SL users is the cartoon-like avatars which were first shown as the company gradually cracked open its doors. These are still in use, but there has also been a lot of work put into making the avatars more life-like, should users so wish. However, there is a trade-off here, which has been discussed in the past: the so-called uncanny valley effect – when facial features look and move almost, but not exactly, like those of natural beings, they can provoke a response of revulsion among some observers.

This has tended to make those investigating things like avatar usage cautious about pushing too close to any kind of genuine realism in their avatars, and Philip Rosedale has discussed High Fidelity’s own gentle pushing at the edge of the valley. Now it seems the company is stepping towards the edge of the valley once more, using 3D scans of people to create avatar faces, as Chris notes in the newsletter:

We scanned our whole team using two different 3D scanners (one for hair and one for the face), and then used some simple techniques to make the results not be (too) uncanny … Although there is still a lot of work to do, we achieved these results with less than 2 hours of work beyond the initial scans, and the effect of having our team meetings using our ‘real’ avatars is quite compelling.

High Fidelity’s staff group photo using scans of their own hair and faces (image: High Fidelity)

Not everyone is liable to want to use an avatar bearing a representation of their own face, and the idea of using such a technique does raise issues around identity, privacy, etc., which should be discussed; but High Fidelity’s work in this area is intriguing. That said, and looking at the staff group photo, I would perhaps agree there is still more work needed; I’m not so much concerned about the technique pushing towards the edge of the uncanny valley as I am about the avatars in the photo looking just that little bit odd.

Also in the update is a discussion on audio reverb, a look at how the use of particles can be transformed when you’re able to see your “hands” manipulating them, and a look at a purpose-built game in High Fidelity. These are all covered in the video accompanying the January newsletter, which I’ll close with.

High Fidelity: Stephen Wolfram and more on tracking

On Tuesday, December 29th, High Fidelity announced that Stephen Wolfram has become their latest advisor.

British-born Stephen Wolfram is best known for his work in theoretical physics, mathematics and computer science. He began research in applied quantum field theory and particle physics, and published scientific papers when just 15 years old. By the age of 23, he was studying at the School of Natural Sciences of the Institute for Advanced Study in Princeton, New Jersey, USA, where he conducted research into cellular automata using computer simulations.

Stephen Wolfram via Quantified Self

When introducing Wolfram through the High Fidelity blog, Philip Rosedale notes this work had a profound impact on him, as did – later in life – Wolfram’s 2002 book, A New Kind of Science.

More recently, Wolfram has been responsible for WolframAlpha, an answer engine launched in 2009, and which is one of the systems used by both Microsoft’s Bing “decision engine” and also Apple’s Siri. In 2014, he launched the Wolfram Language as a new general multi-paradigm programming language.

In becoming an advisor to High Fidelity, Dr. Wolfram joins Peter Diamandis, the entrepreneur perhaps most well-known for the X-Prize Foundation, Dr. Adam Gazzaley, founder of the Gazzaley cognitive neuroscience research lab at the University of California, Tony Parisi, the co-creator of the VRML and X3D ISO standards for networked 3D graphics, and a 3D technology innovator, Professor Ken Perlin of the NYU Media Research Lab, and Professor Jeremy Bailenson, the director of Stanford University’s Virtual Human Interaction Lab.

In their October update published at the start of November, High Fidelity followed up on the work they’ve been putting into various elements of movement tracking, including the use of BinaryVR for tracking facial movement and expressions. In particular, the software allows users to create their own virtual 3D face from 2D facial photos, allowing them to track their facial animations in real-time and transfer them onto the CG representation of their face.

Ryan balances one object atop another while Chris uses the gun he picked-up with a hand movement, then cocked it, to try to shoot the yellow object down

Integration of the BinaryVR software allows High Fidelity to track users’ mouth movements through their HMDs, allowing their avatars to mimic these movements in-world, as Chris Collins demonstrates in the update video. The company has also been extending its work on full body tracking, as seen in my October coverage, and this is also demonstrated in the video, alongside more in-world object manipulation by avatars, with Chris and Ryan building a tower of blocks and Chris then picking up a gun and shooting it down.

The hand manipulation isn’t precise at this point in time, as can be seen in the video, but this isn’t the point; it’s the fact that in-world objects can be so freely manipulated that is impressive. That said, it would be interesting to see how this translates to building: how do you accurately size a basic voxel (a sort-of primitive, for those of us in SL) to a precise length and width, for example, without recourse to the keyboard or potentially complicated overlays?

Maybe the answer to this last point is “stay tuned!”.

High Fidelity: September update and things to come

The September newsletter from High Fidelity appeared at the end of that month, with Chris Collins highlighting some of the work that has been going on of late, providing an update on particle effects, procedural textures and – most interestingly – avatar kinematics and in-world object manipulation using an avatar’s hands and via suitable controllers.

Procedural textures allow complex, algorithm-based textures to be created using tools such as ShaderToy and used directly within High Fidelity. Brad Davis has created a video tutorial on procedural entities which Chris references in the newsletter; the write-up also follows a short video released on the High Fidelity YouTube channel which briefly demonstrates procedural textures in HiFi.
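To give a feel for what “algorithm-based” means here: a procedural texture is simply a function of texture coordinates evaluated per pixel, rather than an image loaded from disk. In High Fidelity this runs as a GPU shader; the CPU-side sketch below is purely illustrative, generating a greyscale checkerboard from a formula:

```javascript
// A procedural texture as a pure function of (u, v) coordinates in [0, 1):
// here, a greyscale checkerboard based on integer cell parity.

function proceduralPixel(u, v, squares) {
  const cellX = Math.floor(u * squares);
  const cellY = Math.floor(v * squares);
  return (cellX + cellY) % 2 === 0 ? 255 : 0; // white or black
}

// Evaluate the formula over a size x size grid to fill a pixel buffer,
// just as a fragment shader would be evaluated once per fragment.
function makeTexture(size, squares) {
  const pixels = new Uint8Array(size * size);
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < size; x++) {
      pixels[y * size + x] = proceduralPixel(x / size, y / size, squares);
    }
  }
  return pixels;
}

const tex = makeTexture(8, 2); // an 8x8 texture with a 2x2 checkerboard
```

The appeal is that the “texture” is a few lines of code: changing the formula changes the pattern, with no image assets to upload.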

However, it is the object manipulation that’s likely to get the most attention, together with avatar kinematics and attempts to imply a force when moving an object.

In terms of avatar kinematics, Chris notes:

In 2016, when the consumer versions of the HMD’s are released, you are also going to be using a hand controller. It is therefore important that we can make your avatar body simulate correct movement with the hand data that we receive back from the controllers.

The results are shown in the newsletter in the form of some animated GIFs. In the first, Chris’ avatar is shown responding to a Hydra controller for hand movements and echoing his jaw movements. The second demonstrates object manipulation, with Chris’ avatar using its hand to pick up a block from an in-world game, echoing Chris’ motions using a hand-held controller.

Manipulating in-world objects in High Fidelity via an avatar’s hands and a set of controllers (image: High Fidelity)

The animation in picking up the block may not be entirely accurate at this point in time – the block seems to travel through the avatar’s thumb as the wrist is rotated – but that isn’t what matters. The level of manipulation is impressive, and it’ll be interesting to see if this might be matched with things like feedback through a haptic style device, so that users can really get a sense of manipulating objects.

The object manipulation element, together with attempts to imply a force when moving objects in-world, makes up a core part of the video accompanying the newsletter (embedded below). Again, this really is worth watching, as the results are both impressive and illustrate some of the problems High Fidelity are trying to solve in order to give virtual spaces greater fidelity.

Coupling object manipulation with implied force opens up a range of opportunities for things like in-world games, physical activities, puzzles, and so on. There’s also potential for learning and teaching as well, so it’ll be interesting to see how this aspect of the work develops.

The newsletter also promises that we’ll be seeing some further VR demo videos from High Fidelity in October, so keep an eye out for those as well.

High Fidelity: into the solar system and STEM grant recipients

I’m rather into space and astronomy – that much should be obvious from my Space Sunday reports, and coverage of missions like the Curiosity rover, astronomical events like the transit of Venus, and so on.

So when High Fidelity posted news on the 2015 summer intern project, and the words “solar system” featured in it, my attention was grabbed. The post opens:

Hello! I’m Bridget, and I’ve been interning at High Fidelity this summer, working to build some JavaScript content in HF. As a math and computer science major, I had the opportunity to hone my programming skill set, learning from Hifi’s superb team of software engineers and design-minded innovators.

So here’s the culmination of my work this summer: a virtual orbital physics simulation that provides an immersive, interactive look at our solar system.

Bridget’s solar system model correctly simulates the movement of planetary bodies around a stellar object, utilising both Newton’s and Kepler’s laws, thus producing a dynamic teaching model for orbital mechanics and gravity – with a potential application for teaching aspects of physical cosmology

The goal of Bridget’s project is to demonstrate what can be built using JavaScript (and some C++), with a particular emphasis on building educational content in High Fidelity, and by using the solar system, she has come up with a highly innovative approach to teaching orbital mechanics – and more besides.

Essentially, she has created a model of the solar system which uses “real” gravitational physics to simulate the motion of the planets around the Sun. The planets themselves occupy orbits scaled relative to Earth, and fixed reference values are used for the orbital period, large and small body masses, and gravity. Then a little Newtonian physics is thrown into the mix, together with a sprinkling of Kepler’s laws of planetary motion. Thus, the scripting ensures that the planets maintain a stable orbit, while updates correctly mimic each planet’s orbital trajectory around the Sun.
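As a rough sketch of that kind of scripting (Bridget’s actual script will differ in detail), the gravitational update each frame applies Newton’s law a = -GM·r/|r|³, with the starting speed taken from the circular-orbit condition v = √(GM/r) so the orbit remains stable:

```javascript
// Illustrative 2D orbital update in scaled units (GM = 1), not the
// actual simulation script: gravity accelerates the body towards the
// central mass, integrated with semi-implicit Euler for stability.

const GM = 1.0; // gravitational parameter of the central body

function stepOrbit(body, dt) {
  const r = Math.hypot(body.x, body.y);
  const a = -GM / (r * r * r); // acceleration factor along -r
  // Semi-implicit Euler: update velocity first, then position.
  body.vx += a * body.x * dt;
  body.vy += a * body.y * dt;
  body.x += body.vx * dt;
  body.y += body.vy * dt;
}

// Start a planet at radius 1 with circular-orbit speed sqrt(GM/r) = 1.
const planet = { x: 1, y: 0, vx: 0, vy: Math.sqrt(GM / 1) };
for (let i = 0; i < 10000; i++) stepOrbit(planet, 0.001);
// After many steps the orbital radius remains close to 1.
console.log(Math.hypot(planet.x, planet.y).toFixed(3));
```

Starting the body with any other speed yields an elliptical (or escape) trajectory, which is exactly the sort of experimentation the simulation’s adjustable gravity and period invite.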

This generates a model that is interesting enough in itself, if somewhat simplified in nature, as Bridget notes, whilst also pointing to its potential for further use:

While the simulation exploits a somewhat simplified model, namely neglecting the elliptical nature of the planets’ orbits, it can easily be modified to account for additional factors such as the n-body problem.

In other words, there is the potential here to both refine the model in terms of orbital mechanics and planetary motion as a part of the teaching / learning process, and perhaps even dip a toe into physical cosmology.

The simulation includes a UI which allows users to perform a number of tasks, including playing a little game and zooming in on the planets.

Bridget also notes:

Another fun aspect of the project was implementing UI to create possibilities for exploration and experimentation within the simulation. A panel with icons lets you:

  • Pause the simulation and show labels above each planet revealing its name and current speed
  • Zoom in on each planet
  • Play a “Satellite Game” (think Lunar Lander, but with a satellite around the earth), where you attempt to fling a satellite into stable orbit
  • Adjust gravity and/or the “reference” period, and see what happens!

Bridget’s work marks the second time a summer intern has reported on working at High Fidelity. In 2014, Chris Collins chatted to the (then) 17-year-old Paloma Palmer, a high school student also honing her coding skills; she focused on coding voxels to respond directly to volume inputs from a microphone in real-time. You can see her discussion with Chris on the HiFi YouTube channel.

Staying with education, and following on from my coverage of High Fidelity’s STEM VR challenge, Ryan Karpf announced the first of the grant recipients on Friday, August 14th.

The VR challenge invited educators, be they individuals or groups, to submit proposals for educational content in High Fidelity which meets the criteria set out on the Challenge website, namely that the content is:

  • HMD (e.g. Oculus Rift) featured
  • High school age appropriate
  • STEM focused
  • Social (can be experienced by >3 people together).

On offer were up to three grants of US $5,000 each for recipients to further develop their ideas.

In his announcement, Ryan indicated that two grant recipients had been selected from the submissions: the TCaRs VR Challenge and Planet Drop VR.

Both use game mechanics, with TCaRs (Teaching Coding – a Racing simulation) enabling users to interact with and customise their racing cars using JavaScript, while Planet Drop places players in an alien planet environment which they must explore through “cooperative asymmetrical gaming”. Each player receives highly specialised information, based on their chosen STEM field and provided via a game HUD, and the aim is for them to work together, sharing the information they receive as quickly and effectively as possible to allow the team to solve challenges and advance through a story arc of increasingly impressive accomplishments.

Conceptual illustration of the “Mech Pods” the players in Planet Drop will use to explore their alien environment

Congratulations to Bridget on her summer intern project (the script is available for those wishing to use it), and to the STEM VR challenge recipients.

The Drax Files Radio Hour: giving it the HiFi!

One of the big use-cases is going to be kids maybe doing an extra, like instead of doing their homework in the normal way in the evening, they go on-line where they join a study group where they join a teacher…

So opens segment #75 of The Drax Files Radio Hour, with some thoughts from Philip Rosedale, co-founder of Second Life, and more particularly now the CEO of start-up virtual worlds company High Fidelity.

At just over 89 minutes in length, this is a special show, exploring High Fidelity from the inside, so to speak, complete with conversations with Mr. Rosedale, Ryan Karpf (HiFi’s co-founder and ex-Linden), Chris Collins and Ozan Serim, while David Rowe (perhaps more familiarly known to SL users as Strachan Ofarrel, creator of the Oculus Rift-compatible CtrlAltStudio viewer), who has been working with the HiFi team, becomes a guest host for the segment.

Since its founding, High Fidelity has made remarkable strides in developing its next generation, open-source virtual world environment, both technically and financially. Since April 2013, the company has undergone three rounds of funding, attracting around US $16 million, most of which has come from True Ventures, Google Ventures and, most recently, Paul Allen’s Vulcan Capital (which also participated in the October 2014 US $542 million investment round for Magic Leap). In addition, HiFi has attracted a number of high-profile advisers, including VR veteran Tony Parisi and, most recently, professors Ken Perlin and Jeremy Bailenson.

As well as Philip Rosedale, Drax talks with Chris Collins (l), Ryan Karpf and Ozan Serim from High Fidelity

The interviews themselves are quite wide-ranging. With Dave Rowe (known in HiFi as CtrlAltDavid), the open-source nature of the platform is explored, from the ability to download and run your own HiFi server (aka “Stack Manager“) and client (aka “Interface“), through to the concept of the worklist, which allows contributors to bid for work on offer and get paid based on results. In Dave’s case, this has led him to working on various aspects of the platform, from integrating Leap Motion capabilities to improving eye tracking within HiFi’s avatars, so they track the movements of other avatars, just as our own eyes track other people’s facial and other movements as they interact with us.

In terms of general looks, the avatars – which have in the past been critiqued for being “cartoony” (despite it still being very early days for HiFi) – are still very much under development. In particular, Ozan Serim has been working to raise – and no pun intended here – the overall fidelity of the avatars in terms of looks and capabilities. He’s well-placed to do so, being an ex-Pixar animator.

One of the problems here is that the more real in appearance and capabilities they get, the closer the avatars come to the Uncanny Valley, which has led HiFi and Ozan to look at a number of avatar styles, from those which are very human in appearance through to those that are more “cartoonish” in looks.

A 2014 video showing Ozan’s work in improving the rigging around a more “realistic” HiFi avatar to more accurately reflect mouth forms and facial movement when singing. High Fidelity now use Faceshift for real-time facial expression capture, rigging and animation, using either 3D or standard webcams

In discussing the Uncanny Valley, and particularly people’s reactions to avatars that are somewhat less-than-real (and we can include SL avatars in this, given their inability to naturally reflect facial expressions), Ozan raises the interesting question of whether people who critique the look of such avatars actually want a “realistic” looking avatar, or whether it is more a case of people wanting an avatar look that appeals to their aesthetics, which they can then identify with.

This is an interesting train of thought, as it is certainly true that – limitations of the avatar skeleton aside – most of us in Second Life are probably more driven to develop our avatars to a point where they have a personal aesthetic appeal, rather than wanting them to be specifically “more realistic”.

Currently, HiFi is leaning towards a somewhat stylised avatar, as seen in Team Fortress 2, which is allowing them to develop a natural-looking avatar that doesn’t come too close to the Uncanny Valley. They use Adobe Mixamo as their avatar creation tool, which Ozan views as a capable workflow package, but which may have some creative limitations. However, as an open-source environment, HiFi does offer the potential for someone to script “in-world” character modelling tools, or at least to offer upload capabilities for avatar models generated in tools such as Blender. Avatars can also, if wanted, be uploaded as a complete package with all required / defined animations, such as walks, etc., included.

Chris Collins has very much become the voice of High Fidelity on YouTube, producing a wide range of videos demonstrating features of the platform, together with short tutorial pieces. The video above is one of his, demonstrating how to code interactive 3D content, using the Planky game as an example

While Ozan and his team work on avatar animations and rigging using real-time capture, Ryan Karpf reveals that, by default, an avatar’s facial expressions are driven by the audio more than by direct capture: the mouth movement, for example, comprises three positions based on the audio, while a rise in voice or tone can result in the avatar’s eyebrows rising and falling. Ryan also touches on the Uncanny Valley issue of people’s increasing discomfiture the closer avatars come to looking “photo-realistic”.

In talking to Chris Collins, a Linden Lab alumnus who headed the former SL Enterprise division and now wears a number of hats at HiFi, Drax discusses how HiFi deals with the ever-changing face of the emerging VR hardware market, where headsets, input, tracking, and so on, are in something of a state of flux. Chris points out that while open-source, HiFi does have a set of strict coding standards and licensing, and offers external libraries to help support third-party SDK integration.

One of the powerful elements of High Fidelity is the ability to have full agency over your environment, if you so wish; using the Stack Manager, you can create your own server / world / space, and control who might access it. The scripting tools similarly allow users to download and tweak elements – such as walking animations, a basic avatar appearance, etc. – quickly and easily.
