High Fidelity recently started alpha testing elements of its platform, following on from a public call made in January via the High Fidelity website for alpha testers. The Alpha Sign-Up form is still available, and the client and other code is available through High Fidelity’s public code repository for those wanting to give it a go.
For those that do, Chris Collins (not to be confused with AvaCon’s Chris Collins / Fleep Tuque!) from High Fidelity has
produced a video (no longer open to public viewing) introducing the High Fidelity client (simply called “Interface” by High Fidelity), which is designed to get people comfortable with some of the basics, and which provides a useful means of gaining greater insight into the platform. I’m including a link here rather than embedding, as the video is currently unlisted and I’m not sure how far he wants it shared, although I’ve dropped him a line to obtain an OK. In the meantime, I’ve taken the liberty of including some screenshots with this article.
Chris doesn’t run through the steps required to build the client, but instead takes launching the client (on a Mac system in his case) as his starting point. This allows the initial “what you can do” screen to be displayed – a quick overview of what can be done with the current alpha release, and also, possibly, a useful future means of drawing newcomers’ attention to the very basics of using the client.
An interesting aspect of High Fidelity is that even in the alpha, many optional hardware devices – such as a Razer Hydra, Leap Motion, Kinect, PrimeSense, Oculus Rift, etc. – appear to be pretty much plug-and-play.
The layout of the client is remarkably similar to that of the SL viewer 3.x UI. At the top is a typical menu bar, while to the left and bottom of the screen are a set of toolbar buttons, all related directly to building, which can be toggled off and on by tapping the Tab key. An interesting aspect of the UI is the inclusion of a picture-in-picture (PiP) frame, which shows you your own avatar as seen by others. Whether this frame can be repositioned around the UI window isn’t clear from the video; it appears to be fixed in place.
Even with a standard webcam, the system will pick up the user’s facial expressions and translate them to the avatar’s face. As voice is the primary means of communication with High Fidelity (although not the sole means – text is also possible), Voice over IP (VoIP) is enabled on starting the client. This is reflected in a sound level bar directly beneath the PiP avatar, graduated between blue, green and red, with the latter indicating that the microphone may be being over-driven. There’s also a button to mute your own voice in your headset / speakers.
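The graduated mic-level bar can be sketched as a simple mapping from input level to colour band. This is purely illustrative, assuming a normalised 0.0–1.0 level and band thresholds of my own choosing; the video gives no exact values, and this is not High Fidelity’s actual code.

```python
def level_band(level, green_at=0.3, red_at=0.85):
    """Map a normalised mic level (0.0-1.0) to a colour band.

    The thresholds are illustrative assumptions, not values
    taken from the High Fidelity client.
    """
    if level >= red_at:
        return "red"     # input possibly over-driven
    if level >= green_at:
        return "green"   # healthy speaking level
    return "blue"        # quiet / background noise
```

A client would call this each audio frame and tint the bar accordingly, so the user gets immediate feedback when they drift into the red.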
The default avatar is a little robot, and the video demonstrates the ease with which this can be changed – although, as an alpha, the avatars within High Fidelity, even with their facial expressions, are very basic compared to the likes of a grid-based VW. It’ll be interesting to see how far down the road towards detailed customisation the company will go, and how much further that takes them into the Uncanny Valley should they do so. Altering an avatar is done via menu selection and file name – there are no image previews of the avatars (as yet – something that would likely be better received by users).
There is an option to upload avatars of your own – but the format and complexity of such models isn’t explored in the video.
As the video progresses, building using voxels is demonstrated, and more particularly, the coalesced nature of the voxels. As Chris hovers at a distance from the default @alpha.highfidelity.io location, everything appears as voxel cubes of varying sizes, which doesn’t make for a pleasant-looking world at present. However, as he flies closer, the voxels “break down” into smaller and smaller units, revealing more and more detail. Given Philip Rosedale’s discussion of the High Fidelity architecture and its use of voxels, I assume the overall “big voxel blocks” will be refined to allow greater detail at a distance in the future; at the moment, though, things are terribly blocky even from what seems to be a reasonable distance, and may draw unfavourable comparisons with something like Minecraft.
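The “break down into smaller units as you approach” behaviour reads like classic distance-based level-of-detail over a voxel octree: each halving of the viewer’s distance permits one more level of subdivision, so voxel edge length halves as you close in. The sketch below is a minimal illustration of that idea under my own assumed parameters; it is not High Fidelity’s implementation.

```python
def display_depth(distance, max_depth=8, base_range=16.0):
    """Return how deep into a voxel octree to render, given viewer distance.

    Each halving of the distance allows one extra level of
    subdivision. max_depth and base_range are illustrative
    assumptions, not High Fidelity parameters.
    """
    depth = 0
    threshold = base_range
    while distance < threshold and depth < max_depth:
        depth += 1
        threshold /= 2.0
    return depth


def voxel_edge_length(depth, root_size=32.0):
    """Edge length of a voxel at a given octree depth (edge halves per level)."""
    return root_size / (2 ** depth)
```

At distance 32.0 the viewer sees only the coarse root-sized blocks (depth 0); at distance 1.0 the same region resolves to depth 4, i.e. cubes a sixteenth of the size, matching the “detail popping out” effect seen in the video.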
Anyone familiar with building in Second Life will feel instantly at home building in High Fidelity; voxels, in shape, are analogous to the default cube prim, and even the way detail “pops out” at you could be said to be akin to how the shape of sculpties pops out in an SL-style grid VW, although the underpinning technology is obviously vastly different. There are also options to import / export voxel models, although, as with the avatar upload options, they are outside the scope of this initial video.
Text chat within the client is enabled by tapping the Enter key. Interestingly, Enter is also apparently used to re-synch avatar eye tracking with your own; how the system tells which operation you actually want isn’t revealed, although in the video Chris taps Enter to achieve both functions quite independently of one another. The chat window, when open, resides in a sidebar comprising a list of the users currently connected to the system, a large space for displaying text, and an input box. In the alpha, chat is global, so all users are visible, although the implication is that in the future it will be more localised.
Currently, according to Chris, clicking on a user name in the chat bar will take you directly to that person. It’s not clear from the video whether there is any privacy capability within the system to stop people randomly pitching up wherever you are, no matter what you are doing, but I would assume that if not implemented for the alpha (which is, after all, about testing the system), such a capability would be added down the road. After all, you wouldn’t want a stranger randomly plonking themselves down in the middle of what was otherwise a private meeting…
Other methods of moving around are also menu-driven, and here we see the idea of High Fidelity virtual world domains (e.g. “@alpha.highfidelity.io” or “@myimaginaryplace”, etc.), managed by the High Fidelity nameserver, coming into play: there is a dedicated option for finding registered domains, as well as an option to go to available locations.
There is also a Go To option, which opens a separate floater allowing you to jump directly to a place by entering its domain, by entering a name you’ve assigned to it yourself (#place) using the Name This Location option in the File menu, or by entering a user name.
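The three address forms the Go To floater accepts can be sketched as a small classifier: “@name” for a nameserver-registered domain, “#name” for a location you’ve named yourself, and a bare string treated as a user name. The function name and the (kind, name) return format are my own illustrative assumptions, not High Fidelity’s actual API.

```python
def parse_goto(entry):
    """Classify a Go To entry into a (kind, name) pair.

    Hypothetical sketch of the three address forms described in
    the video; not High Fidelity's real parsing code.
    """
    entry = entry.strip()
    if entry.startswith("@"):
        return ("domain", entry[1:])   # nameserver-registered domain
    if entry.startswith("#"):
        return ("place", entry[1:])    # a location you named yourself
    return ("user", entry)             # jump to another user
```

So “@alpha.highfidelity.io” would resolve via the nameserver, “#myplace” via your own saved locations, and “Chris” would take you to that user, mirroring the click-a-name behaviour in the chat sidebar.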
There are a number of destinations available in the alpha, including a public sandbox, which Chris visits, where people can build, experiment and generally kick the tyres of the tools without actually doing anything which might cause problems elsewhere.
All told, the video, at just under seven minutes, provides a useful introduction to High Fidelity, and while it might be easy to pick holes in how avatars and the world appear at present, this is just an alpha, and the aim appears to be towards showing what can be done, rather than demonstrating how wonderful everything might look (although in not addressing the latter, High Fidelity do leave themselves somewhat open to “looking like Minecraft” critiques, which might deter some). As such, it’ll be interesting to see how things develop in the future.
As noted towards the top of this article, those wanting to run the client for themselves and connect to the default worlds, can download the required code from the High Fidelity repository, although they’ll have to build the client for themselves.
- Chris Collins’ High Fidelity client video
- High Fidelity website
- High Fidelity code repository
- High Fidelity in this blog (Menu > Pey’s Travelogues > Other Worlds > High Fidelity)
With thanks to Daniel Voyager for the pointer to the video, via the LivingSL blog feed.