The Federal Consortium of Virtual Worlds 2015 workshop


The US Army’s Military Open Simulator Enterprise Strategy (MOSES) and AvaCon have announced the first Federal Consortium of Virtual Worlds (FCVW) workshop, which will take place in a specially built virtual conference centre on Friday, March 6th, and Saturday, March 7th, 2015.

The workshop will be an active experience, with on-line exhibits and presentations provided in an interactive manner. Workshop participants are encouraged to engage and interact with the presenters, and the exhibits will range from cultural training material in a mock village to scientific ethical dilemmas in a city landscape.

The press release for the workshop notes that:

Virtual world technology has matured significantly and rapidly over the past eight years to the point where hundreds of people are able to simultaneously participate in an on-line event. The workshop is open to military and civilian personnel, including the public. The conference will be held entirely within an Open Simulator virtual environment, and reservations will be free for attendees.

The workshop will be a multi-track event, featuring keynote speakers and break-out sessions, and the FCVW and conference organisers are inviting proposals to be a speaker, presenter, or performer in one of the following tracks:

  • The Alternative User Interfaces track
  • The Metacognition track
  • The Military Applications track
  • The Security, Privacy and Identity track

In addition, the Knowledge Transfer track seeks public sector participants for a panel entitled Public Service Education in Virtual Worlds: Past, Present, and Future, which will discuss the use of virtual world learning simulations in public service education, and will feature panelists’ views on past, present, and future public service virtual world education projects. Participants in this discussion will be able to showcase relevant OpenSimulator virtual world learning simulations via OAR and IAR uploads, to be coordinated with the workshop organisers.

Full details on the above tracks, including information on areas of interest applicable to each of them, can be found on the workshop Call for Proposals page of the official website. Proposals must be received by the organisers by Monday, January 5th, 2015.

About the FCVW

The Federal Consortium for Virtual Worlds (FCVW) supports individuals and organisations from government (federal, state, local, and international), academia, and corporate sectors to improve government collaboration through the use of virtual worlds, enrich collaborative online experiences, explore technologies that may enhance telework, and foster cross-agency collaboration.

About MOSES

The Military Open Simulator Enterprise Strategy (MOSES) is operated by the US Army’s Simulation & Training Technology Center (STTC), a part of the U.S. Army Research Laboratory, Human Research and Engineering Directorate. It is a coalition of military, industry, and academic partners who share a common interest in the advancement of virtual world technology for simulation-based training and education. The MOSES Project seeks to address issues surrounding current game-based virtual environment training systems in the two key areas of scalability and flexibility, and to create a practical and deployable virtual simulation-based training system capable of providing a learner with a means to test skills in an accreditable manner. http://militarymetaverse.org/

About AvaCon

AvaCon, Inc. is a 501(c)(3) non-profit organisation dedicated to promoting the growth, enhancement, and development of the metaverse, virtual worlds, augmented reality, and 3D immersive and virtual spaces. We hold conventions and meetings to promote educational and scientific inquiry into these spaces, and to support organized fan activities, including performances, lectures, art, music, machinima, and much more. Our primary goal is to connect and support the diverse communities and practitioners involved in co-creating and using virtual worlds, and to educate the public and our constituents about the emerging ecosystem of technologies broadly known as the metaverse.

Rock-paper-scissors at HiFi, with thanks to SL’s Strachan Ofarrel!

Dan Hope over at High Fidelity has provided a light-hearted blog post on using the Leap Motion gesture device with the High Fidelity Alpha.

The blog post includes a video showing Chris Collins and Ozan Serim in-world in High Fidelity playing a game of rock-paper-scissors. The intention is to provide something of an update on integrating Leap Motion with High Fidelity.

Both Chris’s and Ozan’s avatars have intentionally-oversized hands, which, although they look silly / awkward, help emphasise the dexterity available in the High Fidelity avatar. Not only can avatars mimic users’ gestures, they can mimic individual finger movements as well (something Dan has shown previously in still images).

Dan also points out that the work to integrate Leap Motion hasn’t been done internally, but has been a contribution from CtrlAltDavid – better known in Second Life as Strachan Ofarrel (aka Dave Rowe), the man behind the CtrlAltStudio viewer. As such, Dan points to it being an example of the High Fidelity Worklist being put to good use – although I’d say it’s more a demonstration of Dave’s work in getting new technology into virtual environments :).

A lot of people have been fiddling with Leap Motion – including fixing it to the front of an Oculus Rift headset (as noted in the HiFi blog post) in order to make better use of it in immersive environments. Having it fixed to an Oculus makes it easier for the Leap Motion to capture gestures – all you need to do is hold your hands up in your approximate field-of-view, rather than having to worry about where the Leap is on your desk.

Mounting the Leap Motion to the front of Oculus Rift headsets is seen as one way to more accurately translate hand movements and gestures into a virtual environment. Perhaps so – but a lot of people remain unconvinced about using gesture devices as we have them today
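
For anyone experimenting along these lines, the Leap SDK of the time exposed a policy setting for exactly this head-mounted arrangement. Below is a minimal sketch, assuming the v2-era Python bindings; the set_policy call and the policy constant are quoted from memory and may differ between SDK releases.

```python
# Minimal sketch: telling the Leap Motion tracking service that the device is
# mounted on an HMD (facing outwards) rather than lying face-up on a desk.
# Assumes the Leap SDK v2 Python bindings; the constant name may vary by release.
import Leap

controller = Leap.Controller()
# Ask the tracker to optimise for a head-mounted, outward-facing device.
controller.set_policy(Leap.Controller.POLICY_OPTIMIZE_HMD)
```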

Away from the ubiquitous Oculus Rift, Simon Linden did some initial experiments with Leap Motion and Second Life in early 2013, and Drax also tried it out with some basic gesture integration using GameWAVE software. However, the lack of accuracy of the earlier Leap Motion devices didn’t easily lend them to use with the platform, which is why more recent attempts at integration didn’t really get off the ground. Leap Motion have, however, been working to improve things.

That said, not everyone is convinced as to the suitability of such gesture devices when compared to more tactile input systems such as haptic gloves, which have the benefit of providing levels of feedback (so when you pick a cube up in-world, you can “feel” it between your fingers, for example). The Leap certainly appears to suffer from some lack of accuracy – but it is apparently getting better.

Given a choice, I’d probably go the haptic glove + gesture route, just because it does seem more practical and assured when it comes to direct interactions. Nevertheless, it’s interesting to see how experiments like this are progressing, particularly given the Lab’s own attempts to make the abstraction layer for input devices as open as possible on their next generation platform, in order to embrace devices such as the Leap Motion.


2014 OpenSimulator Community Conference: tune in

A fascinating Gource visualisation posted by nebadon2025 charting the growth of the OpenSimulator project by code commits from core developers up until the time of the 2014 conference

Saturday, November 8th, and Sunday, November 9th mark the 2014 OpenSimulator Community Conference, which is being jointly run by AvaCon and the Overte Foundation. The weekend promises to be packed with talks, presentations, workshops and more; and while in-world registrations have sold out, it is not too late to register for the livestream broadcasts of the conference events.

The full programme can be found on the conference website, however, the keynote events comprise:

Saturday, November 8th, 07:30 SLT – OpenSimulator Developer Panel: featuring Mic Bowman, Planning Committee, Intel Labs; Michael Cerquoni; Justin Clark-Casey, Overte Foundation; James Hughes, Founder, BlueWall Information Technologies, LLC; Oren Hurvitz, Co-Founder and VP R&D of Kitely; Crista Lopes, Overte Foundation and the University of California, Irvine; and Melanie Milland, Planning Committee, Avination. Together they will discuss the future of the OpenSimulator platform, covering a range of issues including: the future of the Hypergrid, content licensing and permissions, scalability, project maturity, and more.

Saturday, November 8th, Noon SLT – Philip Rosedale: “How will we build an open platform for VR over the internet?” – a presentation exploring the future of the Metaverse and the challenges that lie ahead.

Sunday, November 9th, 07:30 SLT – Dr. Steve LaValle: “Virtual Reality. How real should it be?” Although VR has been researched for decades, many new challenges arise because of the ever-changing technology and the rising demand for new kinds of VR content. This talk will highlight some of the ongoing technical challenges, including game development, user interfaces, perceptual psychology, and accurate head tracking.

The OSCC conference centre from the inaugural 2013 conference

The conference website also lists all of the speakers attending the event, who will be participating in the keynote events and in the various conference tracks which will be running throughout the weekend:

  • The Business & Enterprise track will feature sessions that cover a broad range of uses related to doing business in and with OpenSimulator, such as those by grid hosts, third-party developers, private entrepreneurs, in-world and enterprise businesses, as well as corporations and organizations using OpenSimulator for marketing, fundraising, product research, focus groups, and more.
  • The Content & Community Track will feature sessions about all of the wonderful things that happen in-world. Building and content creation includes large-scale immersive art installations, ballet, theatre, performance art, machinima, literary arts, clothing designs, virtual fashions, architecture, music performances and other cultural expressions.  There are also communities for nearly every interest, including role-playing groups, science fiction communities, virtual towns and interest groups, historical explorations, religious and spiritual communities, book clubs, and so much more.
  • The Developers & Open Source track will cover the technical side of OpenSimulator, encompassing servers, viewers, external components, grid architecture, development, administration – anything that is necessary for the installation, operation and use of an OpenSimulator system.
  • The Research and Education Track will explore the ways in which OpenSimulator has become a platform for computationally understanding complex problems, characterizing personal interactions, and conveying information. This track seeks presentations regarding OpenSimulator use towards research applications in computer science, engineering, data visualization, ethnography, psychology, and economics. It will additionally feature sessions that cover a broad range of uses related to teaching and learning in and with OpenSimulator.
  • The Learning Lab will provide conference attendees the opportunity to explore and practice their virtual world skills, share their best OpenSimulator strategies, and experiment and discover diverse ways to use OpenSimulator to support creativity, knowledge production and self-expression. If you are a gamer or game enthusiast, this is the track for you! The Learning Lab features interactive sessions where attendees get to practice and apply skills hands-on, either in design or to play a game.

All of the event tracks are colour-coded within the main programme guide, and their respective pages on the conference website include their livestream feeds for those who are watching events.

There will also be a number of social events taking place during the conference and, for those of a daring disposition, the OpenMeta Quest: “Your mission, should you be brave enough to accept it, is to find 12 hexagon-shaped game tokens across 7 sims while matching your MetaKnowledge for prizes. Look for the Adventure Hippo to begin your journey.”

For those who have registered to attend the conference in-world, don’t forget you can find your way there via the log-in information page. When doing so, do note that the organisers recommend not using the OSCC viewer which was made available for the inaugural conference in 2013. Singularity is the recommended viewer for this year’s conference.

As well as the conference venue, the OSCC Grid includes a number of Expo Zone regions, featuring conference sponsors and community crowdfunder exhibits; a Shopping Centre region; and exhibits created by speakers in the Content & Community, Research & Education, and Learning Lab tracks.

All told, this packed weekend should be informative, fun and educational.


About the Organisers

The Overte Foundation is a non-profit organization that manages contribution agreements for the OpenSimulator project.  In the future, it will also act to promote and support both OpenSimulator and the wider open-source 3D virtual environment ecosystem.

AvaCon, Inc. is a 501(c)(3) non-profit organization dedicated to promoting the growth, enhancement, and development of the metaverse, virtual worlds, augmented reality, and 3D immersive and virtual spaces. We hold conventions and meetings to promote educational and scientific inquiry into these spaces, and to support organized fan activities, including performances, lectures, art, music, machinima, and much more. Our primary goal is to connect and support the diverse communities and practitioners involved in co-creating and using virtual worlds, and to educate the public and our constituents about the emerging ecosystem of technologies broadly known as the metaverse.


High Fidelity launches documentation resource

High Fidelity have opened the doors on their new documentation resource, which is intended to be a living resource for all things HiFi, and which users involved in the current Alpha programme are invited to contribute to and help maintain in order to see it develop and grow.

Introducing the new resource via a blog post, Dan Hope from High Fidelity states:

This section of our site covers everything from how to use Interface, to technical information about the underlying code and how to make scripts for it. We envision this as being the one-stop resource for everything HiFi.

What’s more, we want you to be a part of it. We’ve opened up Documentation to anyone who wants to contribute. The more the merrier. Or at least, the more the comprehensive … er. And accurater? Whatever, we’re better at software than pithy catchphrases. Basically, we think that the smart people out there are great at filling in holes we haven’t even noticed yet and lending their own experience to this knowledge base, which will eventually benefit everyone who wants to use it.

Already the wiki-style documentation area contains a general introduction and notes on documentation standards and contributions; a section on the HiFi coding standard; information on avatar standards, including use of mesh, the skeleton, rigging, etc.; information on various APIs; a range of tutorials (such as how to build your avatar from MyAvatar); and client build instructions for both OS X and Windows.

The documentation resource includes a number of tutorials, including the basic creation of an avatar from the MyAvatar “default” (top); and also includes a section on avatar standards, which includes information on avatar construction, the skeleton, joint orients, rigging, etc. (bottom) – click for full size

All told, it makes for an interesting resource, and Dan’s blog post notes that the documentation project is also linked to the HiFi Worklist, allowing those who prefer not to write documentation to flag areas needing improvement, clarification, or writing to those who do enjoy contributing documentation – and who can be rewarded for their efforts.

As well as the link from the blog post, the documentation resource can be accessed from the High Fidelity website menu bar – so if you’re playing with HiFi, why not check it out?


With thanks to Indigo Mertel for the pointer.

 

Return to Blue Mars

The Amida Hall of the Byōdō-in Temple, Uji in Kyoto Prefecture, Japan, as recreated in Blue Mars by IDIA Labs (click any image for full size)

Remember Blue Mars, the mesh-based virtual world which arrived in open beta in 2009? Despite initially high hopes, it struggled to find an audience, either among general users or those of us familiar with the more free-form sandbox environments provided by the likes of SL. At its peak in 2010, it had attracted some 50,000 registrations, but only around one-tenth of that number were reportedly actually using the platform.

The statue of Buddha in the Amida Hall

By January 2011, Avatar Reality, the company behind the platform, had reduced staffing by two-thirds, to just 10 people, before opting to try the mobile route with an iOS app, and then pinning their hopes on a “Lite” version for the PC and Mac which offered users a “mixed reality” chatroom tool utilising Google Street View. Neither of these really worked out, and in 2012, Avatar Reality granted expanded rights to the Blue Mars technology, valued at $10 million in research and development, to Ball State University for 3-D simulation and research projects outside of gaming applications.

For most people, that seemed to be the end for Blue Mars – but that isn’t actually the case. Since 2012, the Institute for Digital Intermedia Arts (IDIA) Laboratories at Ball State University has undertaken a number of projects utilising the platform for a variety of educational, media and research activities as a part of their Hybrid Design Technologies initiative.

This work has been a natural outgrowth of IDIA’s early use of Blue Mars to create the Virtual Middletown Project, a simulation of the Ball Glass factory from early 20th century Muncie, Indiana. The factory and its personnel were key factors in the studies carried out by Robert and Helen Lynd in the 1920s and 1930s, which became classic sociological studies, establishing the community as a barometer of social trends in the United States.

Today, the Virtual Middletown Project remains a part of Blue Mars, accessible to anyone with the original Blue Mars Windows client, as is IDIA’s other major early Blue Mars project, a reconstruction of the 1915 World’s Fair in San Francisco. In addition, a number of more recent historical and educational projects have been created for a range of purposes, and these all sit alongside some of the surviving original “city” builds from Blue Mars, all of which are also open to exploration by the curious.

My own curiosity about the status of Blue Mars was rekindled in early 2014, when I caught a re-run of the BBC’s The Sky At Night, which examined the ancient monument of Stonehenge as a place for prehistoric solar and lunar studies (potentially up to and including predicting eclipses). The programme featured models of Stonehenge constructed in Blue Mars by IDIA Labs in 2013, which were subsequently used in programmes for the History Channel as well.

Stonehenge in Blue Mars during the 2014 summer solstice. The model can also be viewed from the perspective of 2700 BC and in a range of lighting conditions

As well as Stonehenge, Middletown and the 1915 World’s Fair, the existing IDIA catalogue includes models of Edo from the 1700s; the Mayan city of Chichen Itza; the pre-Columbian archaeological site of Izapa; Kitty Hawk, where the Wright Brothers experimented with powered flight; the Giza Necropolis; the Apollo 15 landing site on Hadley Rille; and so on.

All of the builds are fairly static in nature, although they can be explored, and some offer various levels of interaction, which itself comes in a variety of forms. In Edo, for example, there are various items asking visitors to CLICK ME, in order to reveal additional information within the client; elsewhere, such as in the art gallery, clicking on the displayed pictures takes you to an associated web or wiki page; elsewhere still, “transport spheres” offer the opportunity to “jump into” real-world images of the place you’re visiting.

In addition, all of the builds offered by IDIA Lab feature a HUD system, located in the bottom right corner of the screen, which in turn offers differing options depending on the model: these may range from a pop-up, browser-like panel offering further information on the location being visited, to options for setting different lighting conditions, the time of day, or even views of the location based on different dates in history.

The winter solstice, Stonehenge, circa 2700 BC. Note the HUD buttons, lower right, which provide access to additional options and resources


A look inside the alpha world of High Fidelity

I tend to keep an eye on the High Fidelity blog as and when I have the time, so as to try to keep up with developments (I’m currently waiting to see if I get into the next phase of alpha testing, as I’ve so far failed to build the client – I sucketh at tech sometimes. I also confess to hoping for another video from AKA…). This being the case, it was interesting to get a look behind the doors at what has been going on within High Fidelity courtesy of self-proclaimed “bouncer”, Dan Hope.

Dan’s blog post turns the spotlight away from the work of the core High Fidelity team and focuses it on those alpha testers / builders who have built the client, made the connection and have started poking at various aspects of the platform and the worklist.

Austin Tate is a name well-known within OpenSim and Second Life. His c.v. is quite stellar, and includes him being the Director of the Artificial Intelligence Applications Institute (AIAI) and a Professor of Knowledge-Based Systems at the University of Edinburgh. Austin’s work has encompassed AI, AI planning and the development of collaborative workspaces using virtual environments and tools – particularly the I-Room.

Within High Fidelity, where he is known as Ai_Austin, he’s been extending the work on I-Rooms and collaborative spaces (both of which seem to have an ideal “fit” with High Fidelity) and has been working on 3D modelling, with Dan noting:

You might have figured out by now that 3D worlds are no good if they can’t handle 3D models accurately, which is why Ai_Austin also tests mesh handling for complex 3D objects. The image above shows the “SuperCar” mesh, which has 575,000 vertices and 200,000 faces, being tested in HiFi. There are several other meshes he uses, too, including one of the International Space Station that was provided by NASA.

SuperCar has also featured in Austin’s work within SL and OpenSim, where he has been providing invaluable insight into working with the Oculus Rift, the development of support for it within the viewer, and using it with other hardware (such as the Space Navigator). In fact, if you have any interest at all in the areas of AI, virtual world workspaces, VR / VW integration, etc., then I cannot recommend Austin’s blog highly enough (we also share a passion for astronomy / space exploration and (I suspect) for racing cars, but that’s something else entirely!).

CtrlAltDavid might also be a name familiar to many in SL and OpenSim, being the HiFi name of Dave Rowe (Strachan OFarrel in SL), the man behind the CtrlAltStudio viewer, which focuses on adding OpenGL stereoscopic 3D and Oculus Rift support.

With High Fidelity, he’s working on Leap Motion integration, to provide a higher degree of control over an avatar’s hands and fingers than can be achieved through the use of other tools, such as the Razer Hydra. The aim here is to increase the sense of immersion for users without necessarily relying on clunky hand-held devices. As we know, the Leap Motion sits on the desk and leaves the hands free to gesture, point, etc., and thus would seem an ideal companion when accessing a virtual environment like HiFi (or SL) when using a VR headset; or even without the headset if one wishes to have a degree of liberation from the keyboard.

Dan Hope demonstrates avatar finger motion using the Leap Motion, as being coded by CtrlAltDavid in High Fidelity (Image: High Fidelity blog)
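
To give a sense of what this kind of integration involves at the code level, here is a minimal sketch of reading per-finger data from the device, assuming the Leap SDK v2-era Python bindings; the update_avatar_finger() hook is purely hypothetical and is not High Fidelity’s actual API.

```python
# A minimal sketch of reading per-finger data from a Leap Motion controller,
# assuming the classic Leap SDK v2 Python bindings (Leap.py). The
# update_avatar_finger() hook below is a hypothetical placeholder – a real
# integration would convert these positions into avatar finger joint rotations.
import sys
import Leap

def update_avatar_finger(side, finger_index, position):
    # Placeholder: stand-in for whatever the virtual world client exposes.
    print(side, finger_index, position)

class FingerListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        for hand in frame.hands:
            side = "left" if hand.is_left else "right"
            for i, finger in enumerate(hand.fingers):
                tip = finger.tip_position  # a Leap.Vector, in millimetres
                update_avatar_finger(side, i, (tip.x, tip.y, tip.z))

if __name__ == "__main__":
    listener = FingerListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    print("Tracking fingers - press Enter to quit...")
    sys.stdin.readline()
    controller.remove_listener(listener)
```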

Opening this look at the work of various alpha testers / builders, Dan notes:

We can’t create a truly open system without making it compatible with other open-source tools, which is why Judas has been creating a workflow that will allow artists to make 3D models in the open source program Blender using HiFi’s native FBX format.

This forms a useful introduction to the work of Judas, who has been involved in bringing High Fidelity and Blender closer together in terms of providing improved FBX support for the platform, which is now bearing fruit. “Only last week something was added in that allowed me to import the HiFi avatars into Blender without destroying the rigs we need to animate them,” Judas is quoted as saying in the blog post.
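
As an illustration of the Blender side of such a workflow, the sketch below uses Blender’s bundled Python API to round-trip an FBX avatar; the file paths are placeholders, and this is not Judas’s actual pipeline or High Fidelity tooling.

```python
# A minimal sketch of an FBX round trip in Blender, run from Blender's own
# Python console or as a script. File paths are illustrative placeholders.
import bpy

# Import an FBX avatar (mesh plus armature/rig) into the current scene.
bpy.ops.import_scene.fbx(filepath="/path/to/hifi_avatar.fbx")

# ... edit the mesh, materials, or animations here, leaving the rig intact ...

# Export the result back out as FBX for use in the virtual world client.
bpy.ops.export_scene.fbx(filepath="/path/to/hifi_avatar_edited.fbx")
```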
