AltspaceVR: the return

Courtesy of AltspaceVR

Despite the announcement of its closure, AltspaceVR is still open. I’d actually been holding off on covering this since Ciaran Laval first drew my attention to the news on August 16th, in case further details were forthcoming.

As I noted towards the end of July, the company had been planning to close shop on August 3rd. However, following the closure announcement, the company apparently received an outpouring of support – and with it, seemingly, the means to stay open. This prompted an announcement on August 15th that the platform would be continuing:

It has been a roller coaster of a ride for our team and our community since we announced that AltspaceVR was coming to an end. We are elated to follow-up that dismal proclamation with some very good news: AltspaceVR is going to live on…

Thanks to that outpouring of support, we’re now deep in discussions with others who are passionate about AltspaceVR who want to guarantee that our virtual oasis stays open. We feel confident saying to our community that you don’t need to find another place to meet your friends in virtual reality. AltspaceVR is not closing down.

It’s not clear exactly with whom the company has been in discussion – and that’s the primary reason I’d been holding back on covering the news, in case further information was forthcoming on the matter. However, speculation following the announcement is that Oculus Rift co-founder Palmer Luckey may be involved in trying to maintain the company’s viability. He tweeted a poll following the news of the company’s intended shut-down, asking followers if he should step in. He then re-tweeted the news that AltspaceVR would remain open, which further stoked speculation of his involvement.

AltspaceVR: avatar customisation

TechCrunch was perhaps the first news outlet to cover the evolving situation, with writer Lucas Matney noting:

It’s honestly unclear what to make of the sudden shutdown and un-shutdown announcements and whether they were just efforts to grab attention and put together a last-minute deal, but it is apparent that AltspaceVR still has their work cut out for them as they look to carve out a niche in a crowded social VR space that still has Facebook to compete with. 

He goes on to note that sources close to the company indicated that it had laid off several of its employees and had shut down the majority of its servers. However, the AltspaceVR clients all remain available for download, and the platform can be accessed and used (they’ll be hosting a solar eclipse event on Monday, August 21st as well).

Whatever the future of AltspaceVR, given its high-profile nature, the turmoil surrounding its survival highlights the risks virtual reality ventures face when reliant on venture capital – and the benefits of being self-financed, as is the case with platforms such as Sansar. Which is not to say there are no other risks involved in building a “social VR environment”.

High Fidelity reveal currency and IP protection roadmaps

In a pair of blog posts, Philip Rosedale of High Fidelity revealed the company’s plans to use blockchain technology as both a virtual worlds currency and for content protection.

The blockchain is described as “an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value” (Don Tapscott, Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World). It allows transactions to be simultaneously anonymous and secure by maintaining a tamper-proof public ledger of value. While it is most recognised for its role in driving Bitcoin, the technology is seen by more than 40 of the world’s top financial institutions as a potential means to provide speedier and more secure currency transactions. However, the technology has the potential to have far wider application.

To understand the basics of the blockchain, think of a database duplicated across the Internet, where any part of it can be updated by anyone at any time, with updates immediately available across all copies of the database. Information held on a blockchain thus exists as a shared – and continually reconciled – database spread across multiple nodes. The decentralised blockchain network automatically checks with itself every ten minutes, reconciling every transaction, with each group of transactions checked referred to as a “block”. Within the network, all nodes operate as “administrators” of the whole, and are encouraged to join it through what is (somewhat mistakenly) referred to as “mining” – competing to verify currency exchanges, sometimes for financial reward to the node’s operators (High Fidelity indicate that node operators will not gain directly from “mining” activities, but will instead be paid in HFC for the computing resources they contribute to the network).
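The reconciliation described above can be illustrated with a toy hash-linked ledger. This is a simplified sketch for illustration only – it is not High Fidelity’s (or Bitcoin’s) actual implementation, and all names in it are made up:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (which include the previous block's hash)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_chain(transaction_groups):
    """Seal each group of transactions into a block linked to the one before."""
    chain, prev = [], "0" * 64  # genesis marker
    for txs in transaction_groups:
        block = {"transactions": txs, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify_chain(chain):
    """Any node can re-run this check independently; tampering with an earlier
    block changes its hash and breaks every later link in the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain([["alice pays bob 5"], ["bob pays carol 2"]])
assert verify_chain(chain)

chain[0]["transactions"][0] = "alice pays bob 500"  # tamper with history
assert not verify_chain(chain)
```

Because every copy of the ledger can be verified this way, a forged copy is detectable by every other node, which is what makes the shared database tamper-evident without any central authority.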

Centralised, distributed and de-centralised networks – blockchains utilise decentralised networks

The key point to all this is that the blockchain is openly transparent – the data is embedded in the network as a whole, not at any single point, and is by definition “public”. The lack of centralisation also means it cannot be easily hacked – doing so would require huge amounts of computing power – and there is no single point of data which can be corrupted, nor a single point of management on which its continued existence depends. As High Fidelity point out, this means the service can continue even if High Fidelity does not. Thus, blockchain networks are considered both highly robust and very secure.

An estimated 700 Bitcoin-like crypto-currencies are already thought to be in operation, although the potential use of blockchains goes far, far beyond this (identity management, data management, record-keeping, stock broking, etc., etc.).

High Fidelity plans, over the coming months, to deploy its own blockchain network supporting a new crypto-currency, the HFC (presumably “High Fidelity Currency”), which will ultimately operate independently of High Fidelity’s control. In addition, the system will provide a mechanism to protect intellectual property by embedding object certificates affirming item ownership into the blockchain, allowing creators of original digital content to prove ownership of their work. As High Fidelity explain:

Digital certificates issued by the High Fidelity marketplace (and likely other marketplaces choosing to use HFC) will serve a similar function as patents or trademarks — creators will register their works to get the initial certificates, and these certificates will be given out only for work that is not infringing on other or earlier works…. Once granted, these durable certificates cannot be revoked and can then be attached to purchases on the blockchain to prove the origin of goods. The absence of an accompanying digital certificate and blockchain entry will make digital forgery more obvious and impactful than in the real world — for example, server operators may choose not to host content without certificates and end-users may choose not to ‘see’ content according to it’s certificate status.

This approach could provide an extremely durable and trusted means of sharing digital content – more durable than other approaches to digital rights management – for the same reasons the blockchain offers security, transparency and robustness when operating a crypto-currency.
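As a rough illustration of the certificate idea – with the caveat that the names here are purely hypothetical, and the HMAC-based “signature” is just a stand-in for whatever signing scheme High Fidelity actually adopt – a marketplace-issued certificate binding a creator to a content hash might be checked like this:

```python
import hashlib
import hmac

MARKETPLACE_KEY = b"demo-secret"  # stand-in for the marketplace's signing key

def issue_certificate(creator, content_bytes):
    """Marketplace issues a certificate binding a creator to a content hash."""
    content_hash = hashlib.sha256(content_bytes).hexdigest()
    record = f"{creator}:{content_hash}".encode("utf-8")
    signature = hmac.new(MARKETPLACE_KEY, record, hashlib.sha256).hexdigest()
    return {"creator": creator, "content_hash": content_hash,
            "signature": signature}

def verify_item(cert, content_bytes):
    """A server or viewer checks that the item matches its certificate and
    that the certificate really came from the marketplace; altered or
    uncertified content fails, and can be refused or hidden."""
    if hashlib.sha256(content_bytes).hexdigest() != cert["content_hash"]:
        return False
    record = f"{cert['creator']}:{cert['content_hash']}".encode("utf-8")
    expected = hmac.new(MARKETPLACE_KEY, record, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

mesh = b"OBJ model bytes ..."
cert = issue_certificate("original-creator", mesh)
assert verify_item(cert, mesh)              # genuine item passes
assert not verify_item(cert, b"a forgery")  # altered content fails the check
```

The point of anchoring such certificates in a blockchain, rather than a single server, is that the record of who registered what first survives even if any one marketplace disappears.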

That the HFC blockchain is designed to operate independently of High Fidelity means that it can become self-sustaining, providing a currency environment that can be traded with other crypto-currencies and which can be exchanged for fiat currency through multiple exchanges.

The two blog posts – Roadmap: Currency and Content Protection and Roadmap: Protecting Intellectual Property in Virtual Worlds – are very much companion pieces, best read in the order given. The first provides an overview of the HFC blockchain system and currency management, including how High Fidelity hope to establish a stable exchange rate mechanism without running into the issues of speculative dabbling in the system, inflated ICOs, etc., and covers the use of digital wallets and personal security. It also outlines the certification mechanism for content protection, into which the second article takes a deeper dive, explaining how the relative strengths of a blockchain approach, as quickly sketched out above, could be used to protect creators’ IP and control how their products / creations are used.

The decentralised approach to currency and digital rights management is something that has been pointed to numerous times during High Fidelity’s development, but this is the first time the plans have been more fully fleshed out and defined in writing. It’s an ambitious approach, one likely to stir debate and discussion – particularly given the current situation regarding Decentraland / Ethereum and the risk of speculation around ICOs (again, something High Fidelity hope to avoid).

It’s also one which again points to High Fidelity’s founders looking far more towards an “open metaverse” approach to virtual environments and goods than others might be considering.

Space Sunday: Voyager at 40

Voyager: 40 years on. Credit: NASA

August 20th, 2017 marks the 40th anniversary of the launch of Voyager 2, which, with its sister craft Voyager 1 (launched on September 5th, 1977), is one of humanity’s furthest-flown operational space vehicles. Voyager 1 is the most distant human-made object from Earth, at some 140 AU (an astronomical unit being the average distance between the Earth and the Sun; 140 AU equates to about 20.9 billion kilometres / 13 billion miles).
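The conversion is straightforward to check:

```python
AU_KM = 149_597_870.7   # one astronomical unit in kilometres
KM_PER_MILE = 1.609344

distance_au = 140
distance_km = distance_au * AU_KM        # Voyager 1's approximate distance
distance_mi = distance_km / KM_PER_MILE

print(f"{distance_km / 1e9:.1f} billion km / {distance_mi / 1e9:.1f} billion miles")
# prints: 20.9 billion km / 13.0 billion miles
```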

Despite being so far away, both craft are still sending data back to Earth as they fly through the interstellar medium in the far reaches of the solar system. (The Pioneer 10 and Pioneer 11 craft, which pre-date the Voyager programme by some 5 years, ceased transmissions to Earth in January 2003 and September 1995 respectively, although Pioneer 10 remains the second most distant human-made object from Earth after Voyager 1.)

The Voyager programme stands as one of the most remarkable missions of early space exploration. Originally, the two vehicles were to be part of NASA’s Mariner programme, and were at first designated Mariner 11 and Mariner 12 respectively. The initial Mariner missions – 1 through 10 – were focused on studying the interplanetary medium and Mars, Venus and Mercury (Mariner 10 being the first space vehicle to fly by two planets beyond Earth – Venus and Mercury – in 1974). Mariner 11 and Mariner 12 would have been an expansion of the programme, intended to perform flybys of Jupiter and Saturn.

A drawing of the Voyager vehicles. Credit: NASA/JPL (click for full size)

However, in the late 1960s Gary Flandro, an aerospace engineer at the Jet Propulsion Laboratory (JPL) in California, noted that in the late 1970s the outer planets would be entering a period of orbital alignment which occurs once every 175 years, one which could be used to throw a series of probes out from Earth, each then using the gravity of the worlds it encountered to “slingshot” it on to other targets. This led to the idea of a “Grand Tour” mission: sending pairs of probes which could use these gravity assists to fly by Jupiter, Saturn, Uranus, Neptune and Pluto in various combinations.
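The “slingshot” effect can be sketched numerically: in the planet’s own frame of reference the probe’s speed is unchanged – the flyby only bends its direction – but in the Sun’s frame it can gain considerable speed by “borrowing” some of the planet’s orbital motion. The figures below are purely illustrative, not actual Voyager trajectory data:

```python
import math

def gravity_assist(v_probe, v_planet, turn_deg):
    """Rotate the probe's planet-relative velocity by turn_deg (the bend the
    flyby produces) and return the new Sun-frame velocity, all in km/s.
    Speed relative to the planet is conserved; speed relative to the Sun
    is not."""
    rel = (v_probe[0] - v_planet[0], v_probe[1] - v_planet[1])
    a = math.radians(turn_deg)
    rel_out = (rel[0] * math.cos(a) - rel[1] * math.sin(a),
               rel[0] * math.sin(a) + rel[1] * math.cos(a))
    return (rel_out[0] + v_planet[0], rel_out[1] + v_planet[1])

speed = lambda v: math.hypot(*v)

v_planet = (13.1, 0.0)   # a Jupiter-like orbital velocity
v_in = (9.0, 3.0)        # probe's Sun-frame velocity on approach

v_out = gravity_assist(v_in, v_planet, -90.0)
print(f"{speed(v_in):.1f} km/s in -> {speed(v_out):.1f} km/s out")
# prints: 9.5 km/s in -> 16.6 km/s out
```

The probe leaves the encounter markedly faster in the Sun’s frame, which is how each flyby in the “Grand Tour” could propel a probe on to its next target without extra fuel.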

Funding limitations eventually brought an end to the “Grand Tour” idea, but the planetary alignment was too good an opportunity to miss, and so elements of the idea were folded into the Voyager Programme, which would utilise Mariner 11 and Mariner 12. However, as the mission scope required some significant changes to the vehicles from the basic Mariner design, they were re-designated as Voyager class craft.

(As an aside, the Mariner class is the longest-lived of NASA’s space probe designs. As well as the ten missions of the 1960s and 1970s which carried the design’s name, the Mariner baseline vehicle – somewhat enlarged – was used for the Viking 1 and Viking 2 orbiter missions to Mars, and formed the basis of the Magellan probe (1989-1994) to Venus and the Galileo vehicle (1989-2003) which explored Jupiter. An uprated and updated baseline Mariner vehicle, designated “Mariner Mark II”, formed the basis of the Cassini vehicle, now in the terminal phase of its 13-year study of Saturn and its moons.)

Each of the Voyager vehicles is powered by three MHW-RTG radioisotope thermoelectric generators (RTGs) fuelled by plutonium-238, which together provided approximately 470 watts at 30 volts DC when the spacecraft were launched. By 2011, the decay of the plutonium and the associated degradation of the thermocouples that convert heat into electricity had reduced this output to some 57% of its launch value, and the decline continues at a rate of about 4 watts per year.
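Plutonium-238 has a half-life of 87.7 years, so radioactive decay alone accounts for only part of the loss; the rest comes from the degrading thermocouples. A quick sketch (taking the 57% figure as the fraction of launch power remaining in 2011):

```python
import math

HALF_LIFE_YEARS = 87.7   # plutonium-238
P_LAUNCH_W = 470.0       # electrical output at launch (1977)

def decay_fraction(years):
    """Fraction of the RTGs' heat output remaining after `years`,
    from radioactive decay alone."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

years = 2011 - 1977
decay_only = P_LAUNCH_W * decay_fraction(years)  # what decay alone predicts
observed = P_LAUNCH_W * 0.57                     # ~57% of launch power in 2011

# The gap between the two figures is the extra loss from the degrading
# thermocouples converting the RTGs' heat into electricity.
print(f"decay alone: {decay_only:.0f} W, observed: {observed:.0f} W")
# prints: decay alone: 359 W, observed: 268 W
```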

To compensate for the loss, various instruments on each vehicle have had to be turned off over the years. Voyager 2’s scan platform and the six instruments it supports, including the vehicle’s two camera systems, were powered down in 1998. While the platform on Voyager 1 remains operational, all but one of the instruments it supports – the ultraviolet spectrometer (UVS) – have also been powered down. In addition, gyro operations ended in 2016 for Voyager 2 and will end in 2017 for Voyager 1. These allow(ed) the craft to rotate through 360 degrees six times per year to measure their own magnetic fields, which could then be subtracted from the magnetometer readings to gain accurate data on the magnetic fields of the space each vehicle is passing through.

However, despite the loss of capabilities, both Voyager 1 and Voyager 2 retain enough power to operate the instruments required for the current phase of their mission – measuring the interstellar medium and reporting findings back to Earth. This phase, officially called the Voyager Interstellar Mission, essentially commenced in 1989 as Voyager 2 completed its flyby of Neptune, when the missions as a whole were already into their 12th year.

A plume rises 160 km (100 mi) above Loki Patera, the largest volcanic depression on Io, captured in March 1979 by Voyager 1. Credit: NASA/JPL

Voyager 2 was launched on August 20th, 1977. Of the two vehicles, it was tasked with the longer of the planned interplanetary missions, with the aim of flying by Jupiter, Saturn, Uranus and Neptune. However, the latter two were seen as “optional”, and dependent upon the success of Voyager 1.

This was because scientists wanted the opportunity to look at Saturn’s moon Titan. Doing so would require the Voyager craft making the attempt to fly a trajectory which would leave it unable to use Saturn’s gravity to swing it on towards an encounter with Uranus; instead, it would head directly towards interstellar space.

So it was decided that Voyager 1 – which, although launched after Voyager 2, would be able to travel faster – would attempt the Titan flyby. If it failed for any reason, Voyager 2 could be re-tasked to perform the fly-past, although that would also mean no encounters with Uranus or Neptune. In the end, Voyager 1 was successful, and Voyager 2 was free to complete its surveys of all four gas giants.

Along the way, both missions revolutionised our understanding of the gas giant planets and revealed much that hadn’t been expected, such as discovering the first active volcanoes beyond Earth, with nine eruptions imaged on Io as the vehicles swept past Jupiter. The Voyager missions were also the first to find evidence that Jupiter’s moon Europa might harbour a subsurface liquid water ocean, and to return the first images of Jupiter’s tenuous and almost invisible ring system. Voyager 1 was responsible for the first detailed examination of Titan’s dense, nitrogen-rich atmosphere, and Voyager 2 for the discovery of giant ice geysers erupting on Neptune’s largest moon, Triton. In addition, both of the Voyager vehicles added to our catalogues of moons in orbit around Jupiter and Saturn, and probed the mysteries of both planets’ atmospheres, whilst Voyager 2 presented us with our first images of mysterious Uranus and Neptune – and thus far remains the only vehicle from Earth to have visited these two worlds.

This is the last full planet image captured of Neptune. Taken by Voyager 2 on August 21st, 1989, from a distance of 7 million km (4.4 million mi). A true-colour image, white-balanced to reveal the planet under typical Earth lighting conditions, it shows Neptune’s “Great Dark Spot” and surrounding streaks of lighter-coloured clouds, all of which persisted through the period of Voyager 2’s flyby. More recent Hubble Space Telescope images suggest the “Great Dark Spot”, initially thought to be a possible cloud / storm formation similar to Jupiter’s Great Red Spot, has vanished, leading to speculation that it may actually have been a “hole” in one of Neptune’s cloud layers. Credit: NASA/JPL

The flyby of Neptune also sealed Voyager 2’s future. Scientists were keen to use the flyby of the planet to take a look at Triton, Neptune’s largest moon. However, because Triton’s orbit around Neptune is tilted significantly with respect to the plane of the ecliptic, Voyager 2’s course to Neptune had to be adjusted by way of a gravitational assist from Uranus and a number of mid-course corrections both before and after that encounter, so that on reaching Neptune it would pass over the planet’s north pole, allowing it to be “bent” down onto an intercept with Triton while the moon was at apoapsis – the point in its orbit furthest from Neptune – and well below the plane of the ecliptic. As a result, Voyager 2 passed over Triton’s north pole 24 hours after its closest approach to Neptune, its course now pointing it towards the “southern” edge of the solar system.

Continue reading “Space Sunday: Voyager at 40”

Space Sunday: Curiosity’s 5th, Proxima b and WASP-121b

On August 6th, 2012, NASA delivered the Mars Science Laboratory (MSL) to the surface of Mars in what was called the “seven minutes of terror” – the period in which the mission slammed into the tenuous Martian atmosphere to begin deceleration and a descent to the surface, culminating in the Curiosity rover being gently winched down from a hovering “sky crane” until its wheels made firm contact with the ground.

The “seven minutes of terror” actually had a double meaning. Not only did it represent the time MSL would take to smash through Mars’ atmosphere and attempt its seemingly crazy landing; at the time of the event, the distance between Earth and Mars meant signals took around 14 minutes to be returned to mission control from the red planet. Thus, even as the initial telemetry indicating the craft was entering the upper reaches of Mars’ tenuous atmosphere was being received, mission control knew that in reality the landing had already either succeeded or failed.
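The signal delay follows directly from the Earth–Mars separation at the time of landing (roughly 248 million km in August 2012 – an approximate figure used here for illustration):

```python
C_KM_S = 299_792.458   # speed of light, km/s

distance_km = 248e6    # approximate Earth-Mars distance at landing (assumed)
one_way_s = distance_km / C_KM_S

print(f"one-way light time: {one_way_s / 60:.1f} minutes")
# prints: one-way light time: 13.8 minutes
```

Since the delay comfortably exceeds the duration of entry, descent and landing, the whole sequence was over before the first entry telemetry could even arrive on Earth.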

Obviously, the attempt succeeded. Everything worked flawlessly, and Curiosity was delivered to the surface of Mars at 05:17 GMT on August 6th, 2012 (01:17 August 6th EDT; 22:17 August 5th PDT). In the five years since then, it has helped revolutionise our understanding of that enigmatic world – as well as adding somewhat to its mystery.

To call the mission a success is not an exaggeration; within weeks of its arrival inside the 154-kilometre (96-mile) wide Gale Crater, Curiosity was examining an ancient riverbed en route to a region of the crater dubbed “Yellowknife Bay”. It was there the rover made its first bombshell discovery: analysis of the area showed that billions of years ago it offered the ideal conditions to potentially kick-start microbial life. It was, in essence, the achievement of the mission’s primary goal: to identify whether Mars may once have harboured the kind of conditions which might have given rise to life.

This 360-degree view was acquired on August 6th, 2016, by Curiosity’s Mastcam as the rover neared the “Murray Buttes” on lower “Mount Sharp”. The dark, flat-topped mesa seen to the left of the rover’s arm is about 15 metres (50 ft) high and about 61.5 metres (200 ft) long. Credit: NASA/JPL / MSSS

For the first year following its arrival on Mars, Curiosity continued to survey the regions relatively close to its landing zone, finding more evidence of a benign ancient environment. Then it started out on the next phase of its mission: the long traverse towards the massive bulk of “Mount Sharp” – officially called Aeolis Mons. A huge mound of rock deposited against the crater’s central impact peak, “Mount Sharp” rises from the crater floor to an altitude of some 5.5 km (3 mi), and imaging from orbit strongly suggested its formation was due, at least in part, to the presence of water in the crater at some point in Mars’ past.

The 8 km (5 mi) trip took the rover a year to complete: in part due to its relatively slow speed; in part because it had to travel a good way along the base of “Mount Sharp” to reach a point where it could commence an ascent up the slope; but mostly because there were a number of points of interest along the way which the mission scientists wanted to look around, investigate and sample.

Mount Sharp as seen from Curiosity on January 24th, 2017. The light grey banding before the sandy-coloured slopes is the clay unit the rover should reach in about two years. In front of it is the “Vera Rubin Ridge”, the next location for study by the rover. Credit: NASA/JPL / MSSS

For the last three years, the rover has been slowly making its way up “Mount Sharp”, climbing around 180 metres (600 ft) vertically above the surrounding crater floor and visiting numerous points of interest – such as “Pahrump Hills”, the mixed terrain where “Mount Sharp” merges with the crater floor. Along the way, Curiosity has confirmed that “Mount Sharp” was most likely the result of sedimentary deposits laid down during several periods of flooding in the crater, before the water finally receded and wind action took over, sculpting the mound into its present shape down through the millennia.

The lakes within Gale crater may have actually been relatively short-lived, perhaps lasting just 1,000 years at a time, but Curiosity has shown that even during the dry inter-lake periods, water was very much a feature of Gale Crater, finding evidence of compressed water channels within the layers of rock which sit naturally exposed on “Mount Sharp’s” flanks.

In December 2014, NASA issued a report on how “Mount Sharp” was likely formed. On the left: the repeated depositing of alluvial and wind-blown matter (light brown) around a series of central lakes which formed in Gale Crater, where material deposited by water was more heavily compressed under the weight of successive lakes (dark brown). On the right: once the water had fully receded / vanished from the crater, wind action took hold, eroding the original alluvial / wind-blown deposits around the “dry” perimeter of the crater more rapidly than the densely compacted mudstone layers of the successive lake beds, thus forming “Mount Sharp”

Alongside the sedimentary layering of the mudstone comprising “Mount Sharp” and the compressed and long-dry water channels, a further sign that the region was once water-rich comes in the form of the mineral hematite, which Curiosity has found on numerous occasions. Right now, the rover is making its way towards a feature dubbed “Vera Rubin Ridge”, which orbital analysis shows to be rich in this iron-bearing mineral, one which requires liquid water to form. Beyond that is a clay-rich unit separating the hematite-rich ridge from an area which shows strong evidence for sulphates. These are also indicative of water having once been present, albeit less abundantly than along “Vera Rubin Ridge”, and thus hint at a change in the local environment. Currently, Curiosity is expected to reach this area in about two years’ time, after studying “Vera Rubin Ridge” and the clay unit along the way.

Selfies from Mars: how Curiosity has weathered the dust on Mars over five years. The dates are given as Sols – Martian days – at top left, together with the locations where the pictures were taken. Credit: NASA/JPL

Throughout the last five years, Curiosity has remained relatively healthy. There has been the odd unexpected glitch with the on-board computers, all of which have been successfully overcome. There has been some damage to the rover’s aluminium wheels. This did give rise to concern at the time it was noted, resulting in a traverse across rough terrain being abandoned in favour of a more circuitous and less demanding route up onto “Mount Sharp”. But overall, the wheels remain in reasonably sound condition.

The one major cause for concern at present lies with Curiosity’s drill mechanism. Trouble with this first began when vibrations from the drill’s percussive mechanism were noted to be having a negative impact on the rover’s robot arm.

More recently – since December 2016, in fact – all use of the drill has ceased, limiting Curiosity’s sample-gathering capabilities. This has been due to an issue with the drill feed motor, which extends the drill head away from the robot arm during normal drilling operations, preventing the arm physically coming into contact with targets. Attempts to rectify the problem have so far been unsuccessful, so engineers are looking at ways to manoeuvre the rover’s arm and place the drill bit in contact with sample targets directly, avoiding the need to use the feed motor.

So, with five years on Mars under its belt, and barring any major unforeseen incidents, Curiosity will continue its mission through the next five years, further enhancing our knowledge of Mars.

Continue reading “Space Sunday: Curiosity’s 5th, Proxima b and WASP-121b”

Looking at Altspace VR’s closure

Courtesy of AltspaceVR

Update: AltspaceVR is hoping to remain open – see my update for more (such as was available at the time of writing).

AltspaceVR, once regarded by The Verge as “one of the most fully developed platforms” for social VR, is shutting down. The news came via an AltspaceVR blog post, which was quickly picked up by a number of tech media outlets.

In A Very Sad Goodbye, the company state:

It is with a tremendously heavy heart that we let you all know that we are closing down AltspaceVR on August 3rd, 7PM PDT. The company has run into unforeseen financial difficulty and we can’t afford to keep the virtual lights on any more. This is surprising, disappointing, and frustrating for every one of us who have put our passion and our hopes into AltspaceVR. We know it will probably feel similarly for you…

What happened?
We’re a venture-backed start-up. We had a supportive group of investors that last gave us money in 2015. It looked like we had a deal for our next round of funding, and it fell through. Some combination of this deal falling through and the general slowness of VR market growth made most of our investors reluctant to fund us further. We’ve been out fund-raising but have run out of time and money.

In all, AltspaceVR raised some US $26.3 million in funding through two rounds of investment: US $16 million raised in 2014, and a further US $10.3 million raised in a second round of funding led by Raine Ventures. TechCrunch reports other investors include Comcast Ventures, Dolby Family Ventures, Lux Capital and Rothenberg Ventures.

Playing Dungeons and Dragons in AltspaceVR. Image courtesy of Techcrunch

Initially, AltspaceVR was seen as quirky, given its avatars were simple in approach compared to those of established virtual world platforms, but users who tried it out tended to be attracted by the platform’s ability to offer virtual spaces for socialising, giving the company something of a lead in the so-called “social VR” space which is now the subject of much talk. Fellow blogger and VR / tech expert Austin Tate was one of those who dipped his toes into the application, and he offered insight into things as it opened its doors, including a look at the interactive capabilities then on offer.

At its height, AltspaceVR reported around 35,000 monthly users, who used it for around 35 minutes each per day. That might not sound a lot by Second Life standards, but considering the slow take-up of VR outside of certain niche areas of early adoption, it’s actually not bad, and perhaps indicates there is potential for VR environments where people can get together and share time and (web-based) content (the platform also offered a dedicated SDK for building “in-world” content and games).

Certainly, the take-up was enhanced by the push to make AltspaceVR genuinely cross-platform in approach and accessibility – although some of the claims around the application, such as it hosting the “world’s first VR wedding”, did cause some eye-rolling among established users of virtual spaces, given just how long weddings in VR (albeit without fancy headsets) have been going on. Nevertheless, the platform developed a loyal and supportive community – and may have done as much as anything else to convince the likes of Facebook that there is something to the “social VR” thing.

Elsewhere, the news of the closure is likely to be seen by some as a stroky-chin-I-told-you-so moment, quite possibly with sagely negative nods towards the future of Sansar and similar platforms. However, while Sansar is making a play for the “social VR” space as well, it’s important to remember that AltspaceVR is a very different, more focused beast than Sansar, despite some (incorrectly) labelling AltspaceVR as “Second Life for VR” in the past.

The recent AltspaceVR MACH event featuring Bill “the Science Guy” Nye showcased the use of “social VR” space for outreach whilst also, perhaps, highlighting some of the applications’ limitations in terms of fidelity and immersiveness. Image courtesy of AltspaceVR

Sansar is clearly aiming for a much higher sense of immersion, with far more involved capabilities which will allow it to function as an effective platform across a range of potential markets and audiences, and to meet the needs of a broad range of use cases. However, the closure is perhaps a salutary reminder as to just how nascent the current VR market really is, and why keeping a weather eye on how things progress – and the time frames involved in seeing them progress – is vital.

In the meantime, AltspaceVR is unsure as to what might happen in the future, the blog post noting that the team has poured a significant amount of effort into the application, which might be “foundational” to the development of “social VR”. As such, those behind the company would “love to see this technology, if not the company, live on in some way, and we’re working on that.”

For those engaged in AltspaceVR, the announcement of the closure is worth reading through in full, as it offers tips on saving photos and friends lists, and on how those using the SDK might see the web content they developed for AltspaceVR live on elsewhere. There’s also a note that on Thursday, August 3rd, there will be a final party in AltspaceVR, culminating in the doors closing at 19:00 PDT.

Space Sunday: ninja space stations, Falcons, Dragons and ET

The cislunar Deep Space Gateway with an Orion Multi-Purpose Crew Module approaching it. Credit: NASA

Lockheed Martin has announced it will build a full-scale prototype of NASA’s proposed Deep Space Gateway (DSG), a space habitat occupying cislunar space. The facility, if built, will be both autonomous and crew-tended, and is intended to serve as a staging point for the proposed Deep Space Transport NASA is considering for missions to Mars, as well as for robotic and crewed lunar surface missions.

DSG is part of a public-private partnership involving NASA in developing technologies for carrying humans beyond low Earth orbit called Next Space Technologies for Exploration Partnerships (NextSTEP). A Phase I study for the facility has already been completed, and the full-scale prototype will be constructed as a part of the Phase II NextSTEP habitat programme, which will examine the practical issues of living and working on a facility removed from the relative proximity of low Earth orbit, outside of the relative protection of the Earth’s magnetic field and subject to delays of up to 3 seconds in two-way communications.

“It is easy to take things for granted when you are living at home, but the recently selected astronauts will face unique challenges,” said Bill Pratt, Lockheed Martin NextSTEP program manager.

“Something as simple as calling your family is completely different when you are outside of low Earth orbit. While building this habitat, we have to operate in a different mindset that’s more akin to long trips to Mars to ensure we keep them safe, healthy and productive.”

The proposed Gateway, which if built would likely enter service in 2027/2028, will be designed to make full use of the Orion Multi-Purpose Crew Module as its command and control centre. It will also use avionics and control systems designed for the likes of NASA’s MAVEN mission in orbit around Mars and the Juno mission at Jupiter, allowing the facility to operate in an uncrewed, automated flight mode around the Moon for up to seven months at a time.

NASA’s MPLM mission logo. Credit: NASA / Marshall Space Flight Center

The core of the prototype will be the Donatello Multi-Purpose Logistics Module (MPLM), originally designed and built for flights aboard the space shuttle and capable of delivering up to nine metric tonnes of supplies to the International Space Station (ISS). Two of these units, Leonardo and Raffaello, flew a total of 12 missions to the ISS between 2001 and 2011, with Leonardo becoming a permanent addition to the space station in early 2011. And if film and comic fans are wondering: yes, the modules were all named after a certain band of mutant ninja turtles – hence the MPLM mission logo (right).

Donatello was a more capable module than its two siblings, as it was designed to carry payloads that required continuous power from construction through to installation on the ISS. However, it was never actually flown in space, and some of its parts were cannibalised to convert Leonardo into a permanent extension to the space station. In its new role, Donatello will form the core habitat space for the DSG prototype, and will be used as a testbed for developing the living and working space in the station, which will also have its own power module and multi-purpose docking adapter / airlock unit.

The Phase II development of the DSG is expected to occur over 18 months. Mixed Reality (augmented reality and virtual reality) will be used throughout the prototyping process to reduce wastage, shorten the development time frame and allow for rapid prototyping of actual interior designs and systems. The results of the work and its associated studies will be provided to NASA to help further the understanding of the systems, standards and common interfaces needed to make living in deep space possible.

The DSG is one of two concepts NASA is considering in its attempts to send humans to Mars. The second is the so-called Deep Space Transport (DST), intended to be a large vehicle using a combination of electric and chemical propulsion to carry a crew of six to Mars. It would be assembled at the Deep Space Gateway.

While having a facility in lunar orbit does make sense for supporting operations on the Moon’s surface, when it comes to human missions to Mars, the use of the DSG as an assembly / staging post for the DST actually makes very little practical sense. Exactly the same results could be achieved from low Earth orbit, without all the added complications of lunar orbit rendezvous. The latter simply adds an unnecessary layer of complexity to Mars missions whilst providing almost no practical (or cost) benefits, and perhaps again demonstrates NASA’s inability to treat the Moon and Mars as separate destinations – something which has hindered its plans in the past.

Musk Walks Back SpaceX Aspirations

SpaceX CEO and chief designer Elon Musk has walked back expectations for the initial launch of the Falcon Heavy booster and longer-term aspirations for the Dragon 2 crew capsule.

Musk: a successful maiden flight of the Falcon Heavy “unlikely”. Credit: Associated Press

Speaking at the International Space Station Research and Development Conference held in Washington DC in mid-July 2017, Musk indicated that a successful maiden flight of the Falcon Heavy rocket is extremely unlikely. He also indicated that the company is abandoning plans to develop propulsive landing techniques for the Dragon 2 when returning crews to Earth from the ISS – and to achieve a soft landing on Mars.

Falcon Heavy is slated to become the world’s most powerful operational rocket when it enters service in 2018, capable of lifting a massive 54 tonnes to low Earth orbit – or boosting around 14 tonnes on its way to Mars. Designed to be reusable, the rocket uses three core stages derived from the venerable Falcon 9 rocket – one as the centre stage, two as “strap-on boosters” either side of it.

But computer modelling has revealed that firing all 27 motors on the three stages (nine engines apiece) at launch dramatically increases vibrations throughout the vehicle stack, making it impossible to gauge by simulation alone whether or not the rocket will shake itself apart – short of actually flying it. Hence Musk’s statement that the maiden flight of the Falcon Heavy – slated for later in 2017 – is unlikely to achieve a successful orbit. However, telemetry gathered during the flight – should the worst happen – will help the company more readily identify stresses and issues created by any excessive vibration, allowing them to be properly countered in future launches.

Once Falcon Heavy is fully operational, all three of the core stages are intended to return to Earth and achieve a soft landing just as they do when used as the first stage of a Falcon 9 launch vehicle, and SpaceX is also working to make the upper stage of the Falcon 9 / Falcon Heavy recoverable as well.

Also at the conference, Musk announced SpaceX will no longer be using propulsive landings for the crewed version of their Dragon 2 space capsule, due to enter operations in 2019, ferrying crews to and from the ISS and operating alongside Boeing’s CST-100 Starliner capsule. Initial flights of the Dragon 2 were intended to see the vehicle make a “traditional” parachute descent through Earth’s atmosphere followed by an ocean splashdown – the technique currently used by the uncrewed Dragon I ISS resupply vehicle.

However, SpaceX had planned to shift Dragon 2 landings from the sea to land – using parachutes for the majority of the descent back through the atmosphere before cutting the capsule free of them and using the built-in Super Draco engines (otherwise used as the crew escape system to blast the capsule free of a Falcon launch vehicle should the latter suffer any form of pre- or post-launch failure). The engines would fire during the last few metres of descent, placing the capsule into a hover before setting it down on four landing legs.

Extensively tested in tethered “hover” flights, propulsive landings would in theory have made the recovery and refurbishment of Dragon capsules for future launches a lot easier, lowering the overall operating costs for the capsule. In announcing the decision to scrap the propulsive landing approach, Musk indicated it would have unnecessarily drawn out the vehicle’s development as SpaceX sought to satisfy NASA’s requirements for crewed vehicle operations.

The decision also affects Musk’s hope of placing a robotic mission on the surface of Mars in 2020. Under that mission, a special cargo version of Dragon 2 – called Red Dragon – would fly a NASA science payload to Mars and use supersonic propulsive landing to slow itself through the tenuous Martian atmosphere and achieve a successful soft landing. This approach was seen as ideal because using parachutes on Mars is extremely difficult with heavy payloads – NASA’s studies suggest parachutes on Mars have an upper payload limit of around 1.5-2 tonnes, while a Red Dragon capsule is liable to mass around 8-10 tonnes.

SpaceX have dropped plans to use propulsive landings on both their crewed Dragon 2 vehicles returning from the ISS and on their Red Dragon automated Mars lander (above). Credit: SpaceX

However, Musk no longer believes the use of a propulsive landing mechanism is “optimal” for Red Dragon, and says the company has a better way of realising its goal – although he declined to indicate what this might be. Even so, propulsive landing systems would seem to be something the company will return to in the future – particularly given its hopes of placing vehicles massing as much as 100 tonnes on the surface of Mars.

No, ET Isn’t Calling Us

The Internet was agog recently after it was announced that some very “peculiar signals” had been noticed coming from Ross 128, a red dwarf star just 11 light-years away. Although the star is not known to have any planets in orbit around it, and despite the best attempts of astronomers – including the team which picked up the signals at the Arecibo radio telescope, Puerto Rico – to find a mundane explanation, news of the signals led to widespread speculation that “alien signals” had been detected.

The unusual signals – officially dubbed the “Weird!” signal, after the comment made in highlighting them in an image – were first picked up on May 12th/13th, 2017. However, it was not until two weeks later that the signals were identified and analysed, the Planetary Habitability Laboratory (PHL) team concluding that they were not “local” radio frequency interference, but were in fact odd signals coming from the direction of Ross 128 – sparking the claims of alien signals, even though the PHL director and survey team leader, Abel Mendez, was one of the first to pour cold water on the speculation. “In case you are wondering,” he stated in response to the rumours, “the recurrent aliens hypothesis is at the bottom of many other better explanations.”

The Weird! signal. Credit: UPR Arecibo

Without drawing any conclusions on what might be behind the signals, PHL liaised with astronomers from the Search for Extra-Terrestrial Intelligence (SETI) Institute to conduct a follow-up study of the star. This was performed on Sunday, July 16th, using SETI’s Allen Telescope Array and the National Radio Astronomy Observatory‘s (NRAO) Green Bank Telescope. The fact that SETI was involved probably also helped fan the flames of “alien signal” theories. However, initial analysis of the signal and the portion of the sky where it was observed have suggested a far more mundane explanation: geostationary satellites.

“The best explanation is that the signals are transmissions from one or more geostationary satellites,”  Mendez stated in an announcement issued on July 21st. “This explains why the signals were within the satellite’s frequencies and only appeared and persisted in Ross 128; the star is close to the celestial equator, where many geostationary satellites are placed.”

While confident this explanation is correct, Mendez does note it doesn’t account for the strong dispersion-like features of the signals (the diagonal lines in the figure). His theory is that multiple reflections may have caused the distortions, but the astronomers will need more time to evaluate this idea and other possibilities.

So sorry, no ETs calling out into the night – yet.