Concierge support closed for Thanksgiving

The Lab has issued a reminder via the Grid Status Reports page that there will be no concierge telephone and chat support available from midnight SLT on Wednesday, November 26th until 08:00 SLT on Friday, November 28th.

This is to allow support staff in the USA to enjoy Thanksgiving with their family and friends.

The status update reads in full:

Concierge phone and chat support will be offline this coming Thursday, 27 November, so that team members can spend the Thanksgiving holiday with their friends and family. Both services will close at midnight Wednesday evening and will re-open at 8am Pacific on Friday morning.

To the support staff and all at Linden Lab, I’d like to pass on my best wishes for a happy Thanksgiving, and the same also goes out to all those I’ve come to know in SL who are celebrating Thanksgiving as well.

Monty Linden discusses CDN and HTTP

Monty Linden talking CDN and HTTP

In show #46 of The Drax Files Radio Hour, which I’ve reviewed here, Draxtor pays a visit to the Lab’s head office in Battery Street, San Francisco. While there, he interviews a number of Linden staffers – including Monty Linden.

Monty is the man behind the Herculean efforts to expand and improve the Lab’s use of HTTP in delivering SL to users, work which most recently resulted in the arrival of the HTTP Pipeline viewer (the code for which is currently being updated).

He’s also been bringing us much of the news about the content delivery network (CDN) project, through his blog posts; as such, he’s perhaps the perfect person to provide further insight into the ins and outs of the Lab’s use of both the CDN and HTTP in non-technical terms.

While most of us have a broad understanding of the CDN (which is now in use across the entire grid), Monty provides some great insights and explanations, so I thought it worthwhile to pull his conversation with Drax out of the podcast and devote a blog post to it.


Monty Linden talks CDN and HTTP with Draxtor Despres on the Drax Files Radio Hour

Monty starts out by providing a nice, non-technical summary of the CDN (which, as I’ve previously noted, is a third-party service operated by Highwinds). In paraphrase, the idea is to get essential data about the content in any region as close as possible to SL users by replicating it at as many different locations around the world as possible; then, by assorted network trickery, to ensure that data is delivered to users’ viewers from the location closest to them, rather than having to come all the way from the Lab’s servers. All of which should result in much better SL performance.
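The “network trickery” boils down to serving each user from whichever replica is closest to them. As a purely illustrative sketch (the edge locations and latency figures below are made up, and real CDNs do this with DNS and anycast routing rather than a lookup table), the selection logic amounts to:

```python
# Illustrative only: picking the CDN edge location with the lowest
# measured latency for a given user. Edge names and figures are
# hypothetical; Highwinds' actual routing is far more sophisticated.

def pick_edge(user_latencies_ms):
    """Return the edge location with the lowest round-trip latency."""
    return min(user_latencies_ms, key=user_latencies_ms.get)

# A hypothetical user in London measures round trips to three edges:
latencies = {"phoenix": 140.0, "amsterdam": 18.0, "tokyo": 220.0}
print(pick_edge(latencies))  # amsterdam
```

The upshot is the same as Monty describes: a user in Europe fetches textures from a European data centre rather than from the Lab’s servers in the USA.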

“Performance” in this case isn’t just a matter of how fast data can be downloaded to the viewer when it is needed. As Monty explains, in the past, simulation data, asset management data, and a lot of other essential information ran through the simulator host servers. All of that adds up to a lot of information the simulator host had to deliver to every user connected to a region.

The CDN means that a lot of that data is now pivoted away from the simulator host, as it is supplied by the CDN’s servers instead. This frees up capacity on the simulator host for handling other tasks (an example being that of region crossings), leading to additional performance improvements across the grid.

Highwinds, the CDN provider Linden Lab initially selected for this project, has 25 data centres around the world and a dedicated network from and through which essential asset data on avatar bakes, textures and meshes (at present) can be delivered to SL users

An important point to grasp about the CDN is that it is used for what the Lab refers to as “hot” data: the data required to render the world around you and other users. “Cold” data, such as the contents of your inventory, isn’t handled by the CDN; there’s no need, given it is inside your inventory and not visible to you or anyone else. However, objects you rez and leave visible on your parcel or region for anyone to see will have “hot” data (e.g. texture data) associated with them, which will gradually be replicated to the CDN as people see it.

The way the system works is that when you log in or teleport to a region, the viewer makes an initial request for information on the region from the simulator itself. This is referred to as the scene description information, which lets the viewer know what’s in the region and start basic rendering.

This information also allows the viewer to request the actual detailed data on the textures and meshes in the region, and it is this data which is now obtained directly from the CDN. If the information isn’t already stored by the CDN server, it makes a request for the information from the Lab’s asset servers, and it becomes “hot” data stored by the CDN. Thus, what is actually stored on the CDN servers is defined entirely by users as they travel around the grid.
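This fetch-on-miss behaviour is what’s often called a pull-through cache. A minimal sketch of the idea (purely illustrative, with hypothetical names; not the Lab’s or Highwinds’ actual implementation) might look like this:

```python
# Illustrative sketch of pull-through caching: an edge server serves
# an asset from its cache when it can ("hot" data), and otherwise
# fetches it once from the origin (a stand-in for the Lab's asset
# servers) and keeps it for subsequent viewers.

class EdgeCache:
    def __init__(self, origin):
        self.origin = origin          # asset_id -> data at the origin
        self.cache = {}               # assets this edge has gone "hot" on
        self.origin_fetches = 0       # how often we had to go to origin

    def get(self, asset_id):
        if asset_id not in self.cache:            # miss: data is "cold"
            self.cache[asset_id] = self.origin[asset_id]
            self.origin_fetches += 1
        return self.cache[asset_id]               # "hot" from now on

origin = {"texture-123": b"...texture bytes..."}
edge = EdgeCache(origin)
edge.get("texture-123")     # first viewer to see it: fetched from origin
edge.get("texture-123")     # later viewers: served straight from the edge
print(edge.origin_fetches)  # 1
```

Which is exactly why what ends up stored on the CDN servers is defined by where users actually go: only assets someone has looked at get pulled through.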

The CDN is used to deliver “hot” texture and mesh data – the data relating to in-world objects – to the viewer on request

The HTTP work itself is entirely separate from the CDN work (the latter was introduced by the Lab’s systems engineering group, while Monty, as noted in my HTTP updates, has been working on HTTP for almost two-and-a-half years now). However, they are complementary; the HTTP work was initially aimed both at making communications between the viewer and the simulator hosts a lot more reliable, and at pivoting some of the data delivery between simulator and viewer away from the more rate-limited UDP protocol.

As Monty admits in the second half of the interview, there have been some teething problems, particularly when using the CDN alongside his own HTTP updates in the viewer. This is being worked on, and some recent updates to the viewer code have just made it into a release candidate viewer. In discussing these, Monty is confident they will yield positive benefits, noting that in tests with users in the UK, the results were so good that, “were I to take those users and put them in our data centre in Phoenix and let them plug into the rack where their simulator host was running, the numbers would not be better.”

So fingers crossed on this as the code sees wider use!

In terms of future improvements and updates, as Monty notes, the CDN is a major milestone, something many in the Lab have wanted to implement for a long while, so the aim for the moment is making sure that everyone is getting the fullest possible benefit from it. In the future, as Oz Linden has indicated in various User Group meetings, it is likely that further asset-related data will be moved across to the CDN where it makes sense for the Lab to do so.

This is a great conversation, and if use of the CDN has been confusing you at all, I thoroughly recommend it; Monty does a superb job of explaining things in clear, non-technical terms.

Viewer-managed Marketplace: beta testing and a look at the project viewer

In October 2014, I reported on the viewer-managed Marketplace (VMM) project, which the Lab has been developing for several months.

The aim of the project is to enable merchants to create and manage Marketplace product listings through the viewer, bypassing the need to use the Merchant Outbox (and have copies of items stored on the Marketplace inventory servers) or Magic Boxes.

VMM does this by adding a new Marketplace Listings panel to the viewer, of which more below.

On Friday, November 21st, the Lab announced that wider beta testing of VMM is ready to start on Aditi (the beta grid), and invited merchants to download a new VMM project viewer they can use to test creating and managing product listings through the viewer.

Alongside the announcement, the Lab also made available:

If you are a merchant and wish to test the VMM functionality, you’ll need to download and install the project viewer, and use one of the following three test regions on Aditi: ACME D, ACME E and ACME F. Using the viewer anywhere else can generate error messages when first logging in (designed to indicate VMM is not available; they will not interfere with using the viewer for other activities).

If you’ve never logged in to Aditi, please refer to the instructions on how to do so on the beta grid wiki page.

You may also wish to be logged in to the Aditi Marketplace.

When testing VMM, remember that it is not intended to enable all Marketplace-related activities through the viewer. Rather, it is intended to allow merchants to create new Marketplace listings with inventory, associate inventory with an existing Marketplace listing, remove items from a listing and unlist goods entirely. All other Marketplace activities will still have to be carried out within the Marketplace itself.

Also note that at present there is a bug within the Aditi Marketplace that will cause purchases to fail. The Lab is working to address this, and it shouldn’t interfere with testing VMM to create and modify product listings.

The following notes are intended to get you started with the project viewer and beta testing; please refer to the Lab’s VMM FAQ for other pertinent information.

The Marketplace Listings Panel

An active Marketplace Listings panel showing the four tabs used to manage inventory

The heart of the viewer-managed Marketplace is the new Marketplace Listings panel within the viewer. This will eventually replace the Merchant Outbox, although both are provided in the project viewer.

The Marketplace Listings panel allows a merchant to carry out a number of Marketplace tasks from within the viewer, such as creating a new product listing, modifying a listing, changing the items associated with a listing, and so on.

It does this by enabling merchants to directly associate products in their inventory with product item listings on the Marketplace, eliminating the need to either upload copies of products to the Marketplace inventory servers via the Merchant Outbox or, in the case of limited stock items that are No Copy for the merchant, having them stored in-world in a Magic Box. When a customer purchases an item listed via VMM, it is delivered to them directly from the Lab’s asset servers.

This does mean that care must be taken when handling product items in inventory, in order to avoid accidentally deleting items associated with Marketplace listings. To help with this, the folders associated with the Marketplace Listings panel remain hidden from view (as far as is possible) when working directly in the inventory.

Google Form

The first time you open the Marketplace Listings panel, it may display the following message:

This feature is currently in beta. Please add your name to this Google form if you would like to participate

If this happens, it is likely because you logged in to a non-VMM region and then teleported to one of the test regions. To correct this, simply log in directly to one of the three ACME test regions (ACME D, ACME E or ACME F). The Marketplace Listings panel should then open correctly; if you haven’t already created an Aditi Marketplace store, it will display a message requesting you do so, with a link to the Marketplace.

Continue reading “Viewer-managed Marketplace: beta testing and a look at the project viewer”

Lab blogs about the Nov 17th-21st region restarts

Update: At the time this article went to press, it appeared the daily restarts were still in progress (hence the reference to the restarts being Nov 17th-21st). Subsequent to this article appearing, the Lab updated the Grid Status report to indicate the work has actually been completed; the Lab’s blog post did therefore in fact mark the end of the work.

The week of November 17th-21st, 2014 was marked by daily periods of region restarts. Notice that these would be taking place was first posted via a Grid Status update on Friday, November 14th.

As I noted in the first of my SL project updates for the week, Simon Linden indicated that the restarts and the attendant maintenance were hardware-related, requiring servers to be taken down and physically opened up, although precise details on what was being done were still scant.

In a blog post published on Thursday, November 20th, the Lab provided a detailed explanation of the reasons for the restarts, which reads in full:

Keeping the systems running the Second Life infrastructure operating smoothly is no mean feat. Our monitoring infrastructure keeps an eye on our machines every second, and a team of people work around the clock to ensure that Second Life runs smoothly. We do our best to replace failing systems proactively and invisibly to Residents. Unfortunately, sometimes unexpected problems arise.

In late July, a hardware failure took down four of our latest-generation of simulator hosts. Initially, this was attributed to be a random failure, and the machine was sent off to our vendor for repair. In early October, a second failure took down another four machines. Two weeks later, another failure on another four hosts.

Each host lives inside a chassis along with three other hosts. These four hosts all share a common backplane that provides the hosts with power, networking and storage. The failures were traced to an overheating and subsequent failure of a component on these backplanes.

After exhaustive investigation with our vendor, the root cause of the failures turned out to be a hardware defect in a backplane component. We arranged an on-site visit by our vendor to locate, identify, and replace the affected backplanes. Members of our operations team have been working this week with our vendor in our data centre to inspect every potentially affected system and replace the defective component to prevent any more failures.

The region restarts that some of you have experienced this week were an unfortunate side-effect of this critical maintenance work. We have done our best to keep these restarts to a minimum as we understand just how disruptive a region restart can be. The affected machines have been repaired, and returned to service and we are confident that no more failures of this type will occur in the future. Thank you all for your patience and understanding as we have proceeded through the extended maintenance window this week.

Once again, it’s good to see that Landon Linden and his team are keeping the channels of communication open, and working to keep users apprised of what’s happening whenever and wherever is necessary / they can.

Philip Rosedale and virtual worlds: “we still don’t get it yet”

As noted by Ciaran Laval, Philip Rosedale appeared at the Gigaom Roadmap event held in San Francisco on November 18th and 19th. He was taking part in a (roughly) 30-minute discussion with Gigaom’s staff writer, Signe Brewster, entitled Designing Virtual Worlds, in which he explores the potential of virtual worlds when coupled with virtual reality, both in terms of High Fidelity and in general. In doing so, he touches on a number of topics and areas – including Second Life – providing some interesting insights into the technologies we see emerging today, aspects of on-line life that have been mentioned previously in reference to High Fidelity, such as the matter of identity, and what might influence or shape where VR is going.

This is very much a crystal ball type conversation such as the Engadget Expand NY panel discussion Linden Lab’s CEO Ebbe Altberg participated in at the start of November, inasmuch as it is something of an exploration of potential. However, given this is a more focused one-to-one conversation than the Engadget discussion, there is much more meat to be found in the roughly 31-minute long video.

Philip Rosedale in conversation with Gigaom’s Signe Brewster

Unsurprisingly, the initial part of the conversation focuses very much on the Oculus Rift, with Rosedale (also unsurprisingly, as they’re all potentially right) agreeing with the likes of the Engadget panel, Tony Parisi, Brendan Iribe, Mark Zuckerberg et al, that the Oculus Rift / games relationship is just the tip of the iceberg, and that there is so much more to be had that lies well beyond games. Indeed, he goes so far as to define the Oculus / games experience as “ephemeral” compared to what might be coming in the future. Given the very nature of games, this is not an unreasonable summation, although his prediction that there will only be “one or two” big game titles for the Rift might upset a few people.

A more interesting part of the discussion revolves around the issue of identity, which encompasses more than one might expect, dealing both with the matter of how we use our own identity as a means of social interaction – through introducing ourselves, defining ourselves, and so on – and with how others actually relate to us, particularly in non-verbal ways (thus overlapping with the conversation on non-verbal communication).

Identity is something Rosedale has given opinion on in the past, notably through his essay on Identity in the Metaverse from March 2014 – recommended reading for anyone with an interest in the subject. The points raised there are much more tightly encapsulated here in terms of how we use our name as a means of greeting, although the idea of trust as an emerging currency in virtual environments is touched upon: just as in the physical world, we need to have the means to apply checks and balances to how much we reveal about ourselves to others on meeting them.

Can the facial expressions we use, exaggerated or otherwise, when talking with others be as much a part of our identity as our looks?

The overlap between identity and communication is graphically demonstrated in Rosedale’s relating of an experiment carried out at High Fidelity. This saw several members of the HiFi team talking on a subject, with a 3D camera used to capture their facial expressions and gestures, recording them against the same “default” HiFi avatar. When a recording of the avatar was selected at random and played back to HiFi staff sans any audio, they were still very quickly able to identify who the avatar represented, purely by a subconscious recognition of the way facial expressions and any visible gestures were used.

This is actually a very important aspect when it comes to the idea of trust as virtual “currency”, as well as demonstrating how much more we may rely on non-verbal communication cues than we might otherwise realise. If we are able to identify people we know – as friends, work colleagues, business associates, etc. – through such non-verbal behavioural prompts and cues, then a virtual medium which allows such prompts to be accurately transmitted can only more rapidly establish that exchange of trust, allowing for much more rapid progression into other areas of interaction and exchange.

Interaction and exchange also feature more broadly in the conversation. There is, for example, the difference between the forms of interaction which take place within a video game and those we’re likely to encounter in a virtual space. Those used in games tend to be limited to what is required by the game itself – such as shooting a gun or running.

If 3D spaces can be made to operate as naturally as we function in the real world – such as when handing someone something, as Mr. Rosedale is miming – might they become a more natural extension of our lives?

Obviously, interactions and exchanges in the physical world go well beyond this, and finding a means by which natural actions, such as the simple act of shaking hands or passing a document or file to another person can be either replaced by a recognisable virtual response, or replicated through a more natural approach than opening windows, selecting files, etc., is, Rosedale believes, potentially going to be key to a wider acceptance of VR and simulated environments in everyday life.

There’s a certain amount of truth in this, hence the high degree of R&D going on around input devices, from gesture-based tools such as Leap Motion to haptic gloves and other devices. But at the same time, the mouse / trackpad / keyboard aren’t going to go away overnight. They are still an essential part of our interactions with the laptops in front of us for carrying out a range of tasks that also aren’t going to vanish with the arrival and growth of VR. So any new tool may well have to be as easy and convenient to use as opening up a laptop and starting to type.

Drawing a comparison – interesting on a number of levels – between the rise of the CD-ROM and the impact of the Internet’s arrival, Rosedale suggests that really, we have no idea where virtual worlds might lead us simply because, as he points out, even now “we don’t get it yet”. The reality is that the potential for virtual spaces is so vast, it is easy to focus on X and Y and predict what’s going to happen, only to have Z arrive around the same time and completely alter perceptions and opportunities.

There are some things within the conversation that go unchallenged. For example, talking about wandering into a coffee shop, opening your laptop and then conducting business in a virtual space is expressed as a natural given. But really, even with the projected convenience of use, is this something people will readily accept? Will they want to be sitting at a table, waving hands around, staring intently into camera and sharing their business with the rest of the coffee shop in a manner that potentially goes beyond wibbling loudly and obnoxiously over a mobile phone? Will people want to do business against the clatter and noise and distractions of an entire coffee shop coming over their speakers / headphones from “the other end”? Will we want to be seated next to someone on the train who is given to waving arms and hands, presenting a corner-of-the-eye distraction that goes beyond that encountered were they to simply open a laptop and type quietly? Or will we all simply shrug and do our best to ignore it, as we do with the mobile ‘phone wibblers of today?

That said, there is much that is covered within the discussion, from what’s been learnt from the development of Second Life through to the influence of science-fiction on the entire VR/VW medium, with further focus on identity through the way people invest themselves in their avatars, until we arrive at the uncanny valley and a potential means of crossing it: facial hair! As such, the video is a more than worthwhile listen, and I challenge anyone not to give Mr. Rosedale a sly smile of admiration as he slips in a final mention of HiFi in such a way as to get the inquisitive twitching their whiskers and pulling up the HiFi site in their browser to find out more.

A rebuttal to one-dimensional writing

Sarawak by Loverdag on Flickr, one of the images used in my rebuttal to Marlon McDonald’s article on SL

On Friday, November 14th, erstwhile contributor to Moviepilot.com Marlon McDonald wrote an article about Second Life which is, to say the least, predictably one-dimensional.

The item in question, entitled These Strange Stories Prove Second Life Isn’t The Dreamworld You Believed…, takes as its rather predictable focus the subject of pornography in Second Life. It’s led to a fair level of upset among SL users – and rightly so; Mr. McDonald goes to considerable lengths to make his case by apparently passing on the opportunity to try the platform for himself, instead digging through Google searches for articles that are anything up to seven years old (and none written more recently than three years ago).

Marlon McDonald: one-dimensional article

There is much that is wrong with the piece; not only does it present a one-sided view of SL, it’s clearly intended as clickbait – if not for Moviepilot.com directly (although it doesn’t hurt them!), then certainly for Mr. McDonald himself, a regular contributor there. Most of what is wrong is easy to spot and can be said through a comment on the piece. However, I opted to present a more direct rebuttal to the article through Moviepilot’s own pages, in the hopes of also reaching Mr. McDonald’s intended audience and perhaps persuading them to look on SL differently.

You can read the article over on Moviepilot.com.

I don’t usually ask for page views – but in this case, I am. Not for myself, but to help the article get right up there alongside Mr. McDonald’s piece and truly give Moviepilot users an alternative point of view on SL. So please, if you wouldn’t mind, follow the link and have a read. Or, if you’re tired of my writing, just follow the link and go make yourself a cup of tea / coffee!