Space Sunday: more Einstein, plus space planes and Wow!

This illustration reveals how the gravity of a white dwarf star warps space and bends the light of a distant star behind it. Credit: NASA, ESA, and A. Feild (STScI)

In his general theory of relativity, Albert Einstein predicted that whenever light from a distant star passes by a closer object, gravity acts as a kind of magnifying lens, brightening and bending the distant starlight in an effect known as “gravitational microlensing”. While such microlensing has been observed using the Sun, it had never been used to study an individual star (although it had been observed with binary pairs of stars) – until now.

During a two-year period between October 2013 and October 2015, astronomers used the Hubble Space Telescope (HST) to do just that, allowing them to measure the mass of a star in the process.

The star in question is a white dwarf called Stein 2051 B, roughly 18 light years from Earth and part of a binary system, paired with a red dwarf. Essentially, the team of astronomers used Hubble to observe the effect the white dwarf had on the light being received from a star 5,000 light years away. By measuring the amount of apparent light deflection, the team were not only able to further confirm Einstein’s theory of relativity – they were able to measure the mass of the white dwarf itself, even though the deflection was tiny: the background star appeared to shift only 2 milliarcseconds from its actual position.
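The underlying physics can be sketched with Einstein’s classic light-deflection formula, α = 4GM / (c²b), which ties the bending angle to the lensing mass M and the impact parameter b – measure the deflection and you can solve for the mass. Here’s a rough back-of-the-envelope sketch in Python, using the Sun as a sanity check; the team’s actual astrometric analysis is considerably more involved, so treat this purely as an illustration:

```python
# Einstein's light-deflection formula: alpha = 4GM / (c^2 * b),
# where M is the lensing mass and b the impact parameter.
# A sketch only -- the Stein 2051 B measurement involved modelling
# a tiny apparent positional shift, not this simple grazing case.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def deflection_arcsec(mass_kg, impact_m):
    """Deflection angle in arcseconds for light passing a mass."""
    alpha_rad = 4 * G * mass_kg / (c**2 * impact_m)
    return alpha_rad * (180 / math.pi) * 3600

# Sanity check: light grazing the Sun should be bent ~1.75 arcseconds,
# the value famously confirmed during the 1919 solar eclipse.
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m
print(round(deflection_arcsec(M_sun, R_sun), 2))  # ~1.75
```

Inverting the same relationship – going from a measured deflection back to a mass – is, in highly simplified terms, what “placing the star on a scale” means here.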

“This microlensing method is a very independent and direct way to determine the mass of a star,” Kailash Sahu, the lead researcher on the project, explained following the publication of his team’s findings on June 7th, 2017. “It’s like placing the star on a scale: the deflection is analogous to the movement of the needle on the scale.”

On top of this, the observations confirmed the theory that a white dwarf star’s size is determined by its mass, first postulated in 1935 by Subrahmanyan Chandrasekhar, the Indian-American astronomer. Thus, a single set of observations has further confirmed Einstein’s theory of space-time to be correct (sitting alongside the detection of gravitational waves – see my last Space Sunday update – and observations of rapidly spinning pulsars in doing so), confirmed the defining limits for a white dwarf star, and allowed astronomers to effectively measure the mass of a star.

Space Plane News

The United States Air Force has confirmed that the next mission for its X-37B automated space plane will utilise a SpaceX Falcon 9 launch vehicle to boost it into orbit in August 2017. This will be the fifth launch of the X-37B, which is also known as the Orbital Test Vehicle, and the first time a United Launch Alliance Atlas V booster hasn’t been used. It also marks the highest-profile US national security launch SpaceX will have signed up for.

There are actually two of the uncrewed X-37B vehicles operated by the USAF, which have been flown on alternate missions. The second of these two craft returned to Earth in May 2017 after spending an astonishing 718 days in orbit, carrying a mixed classified and non-classified cargo. The August mission will likely use the first of the two vehicles on its third mission, and will feature the Air Force Research Laboratory (AFRL) Advanced Structurally Embedded Thermal Spreader (ASETS-II) to test experimental electronics and oscillating heat pipes in the long-duration space environment.

The USAF’s X-37B Orbital Test Vehicle (OTV) on the runway at Kennedy Space Centre at the end of the programme’s fourth mission, May 7th, 2017. The uncrewed vehicle is being “safed” by a Boeing team in protective suits to guard against harmful fumes and gases given off by the vehicle. Credit: USAF

At the same time as the USAF announcement about the X-37B, the South China Morning Post reported China’s own space plane programme is making “significant progress”.

China has been investigating the potential of operating some form of space plane since the late 1980s. Those plans ultimately didn’t go anywhere, and rumours of a new Chinese space plane, capable of flying astronauts and / or cargo to low Earth orbit, started circulating in 2016, thanks to a news broadcast on Chinese state television service CCTV. However, as the report used imagery clearly taken from the UK’s Skylon programme, there was some doubt as to its veracity.

 

In “announcing” the new space plane in 2016, China State television used images of the UK’s Skylon programme. Credit: CCTV

Like Skylon, the new Chinese vehicle – which the South China Morning Post refers to as the Casic, after the initials of the China Aerospace Science and Industry Corporation, said to be building it – will be able to take-off horizontally and use a hybrid propulsion system capable of flying it through the atmosphere and into space, carrying a crew and / or cargo to low Earth orbit. At the end of a mission, the vehicle will return to Earth and land on a conventional runway, where it can be re-serviced much like a conventional military aircraft.

The South China Morning Post indicates that the new vehicle has “finished almost all ground experiments and overcome key technical hurdles such as engine design and construction”. However, no dates on when the vehicle might be rolled-out or start flight tests have been given. Nor have any specifics or official images of the vehicle been released. All that has been said is the vehicle will have an “aerodynamic shape” for atmospheric flight, and be larger than Virgin Galactic’s SpaceShipTwo, the VSS Unity.


Enter the Snapdragon: Qualcomm and “XR”

The ODG R9 AR headset. Credit: Osterhout Design Group

It’s no secret that when it comes to augmented reality (AR) and virtual reality (VR), I’m swayed more towards AR and “mixed reality” (MR) as potentially being the “thing” of the future. Not, as I’ve often said, that I don’t believe in VR – it will in time grow to fill various niches and requirements. Rather, I just feel that AR / MR have a much wider field of application when it comes to impacting our daily lives.

I mention this because earlier in June I read an interesting piece by Dean Takahashi, examining Qualcomm’s emerging role in what they like to call “XR” – or “eXtended Reality”, which they define as a fusion of VR, AR and MR.

“Qualcomm spells out the hurdles to ‘extended reality’ glasses” offers a transcript of a chat Dean had with Tim Leland, Qualcomm’s vice-president of product management, on the company’s goals and the challenges it sees in bringing headset-style devices to market.

Qualcomm, of course, is the company behind the venerable Snapdragon family of processors. In 2016, they announced their intention to make the Snapdragon 835 chipset the heart of a new range of self-contained VR and AR devices. To that end, they are about to start shipping the Snapdragon 835 VR HMD to OEMs wishing to produce Android-powered VR headsets using the chipset and Google Daydream.

More particularly – from my perspective – Qualcomm has already partnered with Osterhout Design Group (ODG) to develop a range of Snapdragon-powered AR headsets. I first became aware of the first of these units, the R7, in mid-2016. Intended as a heads-up AR system for enterprise solutions (selling at US $2,750), it has gained a degree of traction in a number of fields – hazardous environments (oil exploration and production, chemical production and pharmaceuticals), healthcare and surgery – and has been involved in tests helping the visually impaired.

In 2017, ODG are due to release two more units – the “prosumer” R8 (around US $1,500) and the “consumer” R9 (at “sub-$1,000”). Again, these are Snapdragon 835 based, and will be fully self-contained units with Android as their operating system.

It is the self-contained aspect of such headsets which Qualcomm sees as being one of the keys to the future success of “XR”.

“Any type of cable is just a non-starter,” Leland notes. “Fans will not exist. We think there might be a niche market for glasses that maybe stream to a PC, but that’s a small part of it. The big part is everything self-contained in a mobile device. All the visual processing systems are very close to the inertial sampling systems, so everything is very fast.”

Qualcomm see “XR” systems potentially becoming a mainstay of our daily lives, fusing VR and AR into a single headset unit which can meet a variety of needs at any given time, and which can also be used as the basis for specific use-cases.

A conceptual “first responder” XR headset for fire fighters. Credit: Qualcomm

In this, Qualcomm see XR units being both general-purpose and specific to market sectors. The company is already looking at a concept for a “first responder” headset for fire fighters. Containing night vision capabilities and thermal imaging sensors, the headset could allow fire fighters to overlay their field-of-view with floor plans of the building they are in, helping them find their way through the smoke, while the thermal sensors warn of potential hotspots and possible behind-the-door risks of backdraft – and could even guide them to people trapped in a burning location.

A simplified rendering of the kind of information the “first responder” headset might provide. Credit: Qualcomm

For more general use, Qualcomm are looking at headsets which integrate much of what they’ve developed with the likes of ODG – multiple cameras, integral motion tracking, the ability to track eye and hand movements, etc., but in a very lightweight, unobtrusive form-factor with a low price point which makes them an attractive proposition.

Not that this is going to happen overnight. A refreshing aspect of Qualcomm’s view is that they are looking at a development / adoption curve measured in at least a decade. As Leland notes, the ability to build AR and VR headsets exists today, but there are hurdles to be overcome before they are as ubiquitous as the mobile phone for many of the tasks we perform today.

Some of these hurdles are being independently addressed – 5G, for example, is expected to be of huge benefit to those uses which require a lot of rendering and so are latency intolerant. Others are going to take time to progress and solve: display requirements (the vergence and accommodation conflict, the human field of view of roughly 190° × 130°, common illumination); motion and tracking for intuitive head, hand and eye movements; and power and thermal issues.

The technical hurdles “XR” needs to overcome. Credit: Qualcomm

Leland doesn’t see any of these hurdles as being insurmountable – he just emphasises that the time frame required to solve them is not going to be as compressed as some of the more bullish predictions about VR growth made in 2016 would have us believe. Instead, he points to 2020 as being a year when the number of shipped headset units of all types is still measured in the hundreds of millions, although he does see it growing from there.

IDC VR shipment numbers (in thousands) have been seen as a means to question the reality of the VR market

But will these systems ever reach the ubiquity of the smartphone? Right now, going on the shipments of VR headsets, some are quick to pooh-pooh the entire mixed reality (or “XR”, as Qualcomm prefers) ecosystem in favour of alternatives. On the surface, they would seem to be right – but on a longer-term view? I’m not so sure. Again, this is where the much-hyped smartphone analogy with VR is misleading – as Leland points out in talking to Takahashi.

While it is true the first “genuine” smartphones as we know them today only appeared a decade ago, the fact remains that they were founded on some three decades of cellular phone development. Right now, headset capabilities are roughly in the “1990s” phase of that overall curve – so there is a way to go. As such, while headsets that more closely resemble glasses / sunglasses may not necessarily become as all-pervasive as smartphones are today, there is little reason to doubt they could – if they have an intuitive ease of use – take over from handsets (and associated wearables) for a wide variety of tasks / uses.

Qualcomm isn’t alone in pursuing a convergent future of mobile VR / AR / MR capabilities. However, through Dean Takahashi’s article (and courtesy of Qualcomm’s Augmented World Expo presentation), it is good to see how level-headed an approach tech companies are taking, both in understanding the technology and its potential, and in looking beyond buzz phrases like “killer app” in order to make “XR” work.

Detectives, skulls, moons and ghosts

It’s time to kick-off another week of storytelling in Voice by the staff and volunteers at the Seanchai Library. As always, all times SLT, and events are held at the Library’s Second Life home at Bradley University, unless otherwise indicated.

Sunday, June 11th

13:30: The Thin Man

New York, 1932. Nick Charles, a retired west coast private detective, and Nora, his wealthy socialite wife, are in the Big Apple for Christmas. It’s a place where Nick is perfectly happy getting drunk in their hotel room or in speakeasies. Which is not to say the couple are unhappy; far from it. They enjoy witty repartee and banter with one another, and Nora is every inch Nick’s match in wit and intelligence.

Things change when Nick is visited by Dorothy Wynant, the daughter of a former client, businessman Clyde Wynant, who has apparently vanished ahead of his daughter’s wedding. Nick reluctantly – and to Nora’s amusement – agrees to find the missing businessman (the titular Thin Man). But what starts as a search for a missing man quickly turns into the hunt for a murderer after Wynant’s secretary is found dead, with all the evidence pointing to Wynant himself as her killer.

Corwyn Allen, John Morland, Kayden Oconnell, and Caledonia Skytower read Dashiell Hammett’s 1933 classic, which became the first in a series of films following Nick and Nora’s adventures, as played by the inimitable William Powell and Myrna Loy.

15:00: Seanchai at the Kultivate Summer Show

Caledonia Skytower and Eleseren Brianna with a diptych of tales based on “The Curio”, a virtual sculpture.

Monday, June 12th 19:00: The Book of Skulls

Gyro Muggins reads Robert Silverberg’s novel.

Four friends, college room-mates, go on a spring break trip to Arizona: Eli, the scholar, who found and translated the book; Timothy, scion of an American dynasty, born and bred to lead; Ned, poet and cynic; and Oliver, the brilliant farm boy obsessed with death.

Somewhere in the desert lies the House of Skulls, where a mystic brotherhood guards the secret of eternal life. There, the four aspirants will present themselves – and a horrific price will be demanded.

For immortality requires sacrifice. Two victims to balance two survivors. One by suicide, one by murder.

Now, beneath the gaze of grinning skulls, the terror begins. . . .

Tuesday, June 13th 19:00: More Tales from Thornton Burgess

With Faerie Maven-Pralou.

Wednesday, June 14th 19:00: The Girl Who Drank the Moon

Caledonia Skytower reads Kelly Barnhill’s 2017 Newbery Medal winner.

Every year, the people of the Protectorate leave a baby as an offering to the witch who lives in the forest. They hope this sacrifice will keep her from terrorizing their town. But the witch in the forest, Xan, is kind and gentle. She shares her home with a wise Swamp Monster named Glerk and a Perfectly Tiny Dragon, Fyrian.

Xan rescues the abandoned children and delivers them to welcoming families on the other side of the forest, nourishing the babies with starlight on the journey.

One year, Xan accidentally feeds a baby moonlight instead of starlight, filling the ordinary child with extraordinary magic. Xan decides she must raise this enmagicked girl, whom she calls Luna, as her own.

To keep young Luna safe from her own unwieldy power, Xan locks her magic deep inside her. When Luna approaches her thirteenth birthday, her magic begins to emerge on schedule – but Xan is far away. Meanwhile, a young man from the Protectorate is determined to free his people by killing the witch. Soon, it is up to Luna to protect those who have protected her – even if it means the end of the loving, safe world she’s always known.

Thursday, June 15th

19:00: Ghost Stories of the Old West

With Shandon Loring (also presented in Kitely hop://grid.kitely.com:8002/Seanchai/108/609/1528).

21:00: Seanchai Late Night

With Finn Zeddmore.

 


Please check with the Seanchai Library’s blog for updates and for additions or changes to the week’s schedule.

The featured charity for May through July is Alex’s Lemonade Stand Foundation, raising awareness of childhood cancer, and funds for research into new treatments and cures.

An ensemble of art at Blue Orange

Blue Orange – Gitu Aura

Currently on display at Blue Orange, the music and arts venue in Second Life curated by Ini (In Inaka), is an ensemble exhibition of 2D and 3D art featuring work by Cica Ghost, Theda Tammas, Rebeca Bashly, Jarla Capalini, Gitu Aura, and Ini herself.

One of the delights of this particular venue is the layout; the warren-like design, with its feeling of a disused subway station, ignored by the trains rushing by in a blur, adds considerable atmosphere to Blue Orange both as a gallery space and a music venue. A hallway, lined with images of Blue Orange events taken by NicoleX Moonwall, connects the landing point with the music venue, and the first art space lies at the end of this hallway, through a hole in the wall.

Blue Orange – Jarla Capalini

This space is devoted to displaying thirteen pieces by Jarla Capalini. Split between landscapes and avatar studies, they have all been carefully post-processed to resemble paintings, and the results are more than eye-catching. The landscapes have a richness to them which suggests oil on canvas, while the avatar studies perhaps lean more to watercolour or pencils on paper and have the feel of studio pieces, rather than of finished works. The contrast between the two styles combines to give this display further depth.

The second art space is best reached via the double doors at the end of the landing area’s platform. Split into two levels, this large space features Gitu’s and Ini’s 2D art, and a single piece by Rebeca entitled The Great Escape. Gitu’s work, Colourful Dreams, features ten pieces, all of which have a post-processed, painting-like finish, albeit one leaning more towards a digital feel with a touch of the abstract in places. Between these two displays, and around the stairwell leading to the lower level, are three dramatic, large-format pieces by Ini, which perhaps set the tone for the main display on the lower level.

Blue Orange – Theda Tammas

Labrinto, by Theda Tammas, is a dramatic, powerful piece, with a slight hint, perhaps, of nightmares (or at least darker dreams) and violence. As the name suggests, this is a labyrinth, defined by crystalline walls within which bronze-like figures are cast, individually and in pairs. Frozen in time, their skins are etched as jigsaws, each with pieces missing, their expressions sometimes hinting at the darker edge to the piece.

Sharing the same space as Labrinto, and located on the other side of the dividing stairway, is a far more whimsical piece by Cica Ghost. Between the two, and against the wall, sits a single door. Open it, and a TP button will return you to the club.

Blue Orange – Cica Ghost

As noted, Blue Orange is an atmospheric venue, whether you visit for the music or the art, and the current set of exhibitions is well worth taking the time to see. Should you appreciate your time there, do please consider making a donation towards the continued presence of the venue for the enjoyment of everyone.

SLurl Details

New blog layout – poll results and thoughts

Contemplating the blog layout…

On June 2nd, I blogged about this blog’s new layout and asked for feedback, either directly or via a poll. As a week has now passed, I thought it time to provide an update on things.

The new layout is not without its problems (notably the banner image on every page), and some had issues I couldn’t easily replicate. My thanks to Richard, Sue W and JMB in particular for their feedback on specific issues, all of which helped me further tweak things – hopefully for the better.

Overall, of those who responded to the poll, most seemed in favour of the changes and the layout, and I’m growing accustomed to it. As such, it will be remaining for the time being, so I’m not annoying everyone with what feels like a constantly changing look and feel.

However, I’m still looking at options to get something which offers a similar level of functionality and allows plenty of room for images to appear in a decent size, but without having the huge banner image appear on individual articles and pages.

Sadly, while there are a fair few WordPress themes which avoid the big banner on individual pages and offer things like a scrolling menu bar, etc., they tend to do so at the expense of text & images, which often get squeezed by an inordinate amount of left-side white space. So, I’ll keep looking.

In the meantime, thanks again to all who responded, be it with comments and / or via the poll.

SL project updates, 23/2: Content Creation Meeting

The Content Creation User Group meeting, at the Hippotropolis Camp Fire (stock)

The following notes are taken from the Content Creation User Group meeting, held on Thursday, June 8th, 2017 at 1:00pm SLT at the Hippotropolis Camp Fire Circle. The meeting was chaired by Vir Linden, and agenda notes, etc., are available on the Content Creation User Group wiki page.

A video recorded at the meeting by Medhue Simoni is embedded at the end of this update; my thanks to him for making it available. Timestamps in the text below refer to this recording. The meeting was disrupted by three region crashes, and this is reflected in the recording.

Asset HTTP Viewer

[2:50] The Asset HTTP RC viewer (version 5.0.6.326593 at the time of writing) has an update with LL’s QA team. As noted in my last TPV Developer meeting update, this includes the new viewer update management code. The update is now expected to appear in the release channel as an RC viewer in week #24 (week commencing Monday, June 12th).

Animated Objects

[3:18] Vir is continuing to work on the animated objects project, and now has an internal version of the viewer that hooks-up to a correctly configured simulator. It is still some way from being ready to be offered as a project viewer, however.

Skeleton Positioning

[4:09] One issue to be considered with animated objects using the avatar skeleton is where the skeleton should be positioned. Avatars are placed by the simulator providing information on where the agent is; the bones are then positioned, things like hover height are applied, and whatever rigged objects are being worn are positioned relative to the skeleton. With an animated object, the reverse is true: the object has a defined location, and some means needs to be found for the system to position the bones accordingly; it’s not currently clear how this should be done.

Vir has tried experimenting using the mPelvis bone, and aligning that with the object’s position, with mixed results. So, should the Lab simply pick a convention and have people build their animated objects accordingly, or should a smarter, more adaptive solution be sought?
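In highly simplified terms, the mPelvis experiment amounts to anchoring one bone at the object’s reported position and placing the rest of the skeleton relative to it. A hypothetical sketch (the names and data shapes below are illustrative, not actual SL viewer internals):

```python
# Hypothetical sketch of the mPelvis-alignment approach described above:
# world-space bone positions are derived by anchoring the mPelvis bone
# at the animated object's position. Illustrative names only -- not
# actual Second Life viewer internals.

def place_skeleton(object_pos, bone_offsets):
    """Position all bones in world space, anchoring mPelvis at object_pos."""
    pelvis = bone_offsets["mPelvis"]
    placed = {}
    for bone, (x, y, z) in bone_offsets.items():
        # Each bone keeps its offset relative to mPelvis, which in turn
        # sits exactly at the object's reported position.
        placed[bone] = (object_pos[0] + x - pelvis[0],
                        object_pos[1] + y - pelvis[1],
                        object_pos[2] + z - pelvis[2])
    return placed

offsets = {"mPelvis": (0.0, 0.0, 1.0), "mChest": (0.0, 0.0, 1.25)}
print(place_skeleton((10.0, 20.0, 30.0), offsets))
# mPelvis lands at (10.0, 20.0, 30.0); mChest sits 0.25 above it.
```

The “mixed results” Vir mentions hint at why a smarter convention may be needed: not every mesh is modelled with its pelvis at a sensible anchor point.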

Collisions

[10:50] Collisions (being struck by avatars or other objects): collision detection isn’t currently carried out in SL for skinned objects; however, Vir is considering calculating collisions based on the collision volume of the skeleton, although this has yet to be investigated.

Setting a Prim as Object Root

[11:19] Cathy Foil has suggested using a prim as the root for an animated object, with the skeleton positioned relative to that prim. This has the advantage of potentially allowing the skeleton, as a child of the root prim, to have physics; further, the prim could be set statically at a fixed location in a region and the skeleton / object animated to roam independently, or it could be scripted to move (and even use Pathfinding), with the animated skeleton / object carried along with it. Thus, it could offer a flexible approach to the problem.

[14:34] One of the things Vir is aiming for is for creators to be able to take existing skinned mesh content and turn it into animated objects, without the need for the model to be re-worked / re-uploaded.

Multiple Rigged Meshes in an Animated Object

[17:38] With his current work, Vir believes it should be possible to have multiple rigged / skinned mesh objects animated by a single skeleton (e.g. so an avatar body can be split into the notional lower body, upper body, head). This could have some interesting uses providing the meshes don’t try to use the same bones.

Frame Rates

[20:05] Vir has had a number of animated objects running at the same time, and he has not seen a significant impact on frame rates. However, the caveat here is the relative rendering complexity of animated objects and how that affects client-side processing. The current hope is that the impact of any given animated object will equate to that of a similarly rigged and complex avatar, so the potential for performance impact is there; it’s just too early in the project to make any definitive statements.

Editing Size

[20:45] At the moment, the size of an animated object is governed by the size of the skeleton; it could be more flexible if the size of the object could be set / edited, with this in turn determining the size of the skeleton. This might, for example, be done by sizing the skeleton to the object’s bounding box (which adjusts as the object is resized). However, it’s again too early in the project to offer a definitive way this might be done.

[23:12] Cathy points out that with a root prim for an animated object, sizing could be tied to the size of that prim. So, for example, doubling the size of the root prim would double the size of the object.
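As a purely hypothetical illustration of the convention Cathy describes (the function and data below are illustrative, not actual SL or LSL APIs), scaling the skeleton by the same factor applied to the root prim might look like:

```python
# Hypothetical sketch of the root-prim scaling convention suggested at
# the meeting: the animated object's skeleton scales by the same factor
# as its root prim. Illustrative names only -- not actual SL / LSL APIs.

def scale_skeleton(bone_offsets, old_root_size, new_root_size):
    """Scale bone offsets by the factor applied to the root prim."""
    factor = new_root_size / old_root_size
    return {bone: (x * factor, y * factor, z * factor)
            for bone, (x, y, z) in bone_offsets.items()}

bones = {"mPelvis": (0.0, 0.0, 1.0), "mChest": (0.0, 0.0, 1.25)}
# Doubling the root prim's size doubles the skeleton's dimensions.
print(scale_skeleton(bones, old_root_size=0.5, new_root_size=1.0))
```

The appeal of this approach is that resizing becomes an ordinary edit operation on the root prim, rather than something requiring a new convention of its own.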

Applying Baked Textures to Mesh Avatars

[33:41-35:45] A short explanation of this project for those unfamiliar with it. In brief, it is a means to apply composited texture bakes (skin, tattoo, clothing layers, etc.) to mesh bodies using the SL baking service, with the aim of potentially reducing the complexity of avatar bodies. This work is being carried out alongside animated meshes, but is not dependent upon that project (or vice-versa).

[29:06] Updates to the baking service to support baking textures on mesh avatars have now started. This is currently infrastructure work – updating the baking service to a newer version of Linux, etc.

After this, the first step in getting the service to work with mesh bodies will be updating it to support 1024×1024 textures and producing a corresponding viewer update. Once the latter is available for testing, then the Lab will be ready to look at the feature set for supporting bakes on mesh.

Materials Support and the Baking Service

[30:30] There may be a misunderstanding circulating that the baking service will “disable” materials on meshes. This is not the case.

The baking service has never supported materials processing, and the work to enable texture baking on meshes will not include extending the service to handle materials – this would be a huge undertaking. However, it will not prevent materials from being used via other means (application directly on the mesh, etc.), or any other way in which materials are used in-world.

The baking service produces a composited diffuse (texture) map. This may be less than is currently possible when using applier systems (which should continue to work alongside bakes on mesh). [40:34] It will also still be possible to manually apply normal and specular maps to an avatar mesh using the bakes.

Baked Texture Delivery to a Mesh / Persistence

[31:53 and 38:47] Once a bake has been completed, it would be delivered to the mesh by means of flagging the face to which it is to be applied. This flag will remain persistent, so that when the avatar’s appearance is updated, the texture will be re-applied to the face, until the face is flagged as requiring a different baked texture.

Arbitrary Use of Bakes

[36:24] As noted in my last Content Creation UG update, there has been some discussion of a more arbitrary use of baked textures, applying them to other objects, but this is not the focus of the current work. However, these ideas might be considered in the future.

Anchor Linden

[41:58] Anchor Linden is a new name at the Lab, and is currently working with Vir, focusing on the texture baking project.

Supplemental Animations

[41:38] The supplemental animations work, designed to overcome issues of animation states keyed by the server-side llSetAnimationOverride() conflicting with one another, is still on the cards, though there has been no further movement as yet.

General Discussion

[44:22-end] General discussion: mesh uploads, proper management of LODs, etc.