On Wednesday, March 18th, Ebbe Altberg gave the keynote presentation at the 8th annual Virtual Worlds Best Practice in Education (VWBPE) conference, which runs from March 18th through 21st inclusive, in both Second Life and OpenSimulator.
His keynote address lasted a little over an hour, in which he outlined the Lab’s approach to education and non-profits, provided some insight into the Lab’s future plans, and discussed further information on the next generation platform. Following this, he entered into a Q&A session, which ran beyond the main session time, switching from voice to text chat in the process.
The following is a transcript of his core comments on the Lab’s next generation platform. These commence at the 31:20 mark in the official video of the event, although the platform is mentioned earlier in his presentation, in reference to education, issues of accessibility, etc. I’ve included audio excerpts here as well, for ease of listening to his comments whilst reading. Time stamps for both the audio tracks and the video are supplied.
Click the links below to go to the relevant section.
- On the Name
- Progress to Date
- New User Discovery and Experience
- Platform Accessibility
- Scalability and Creativity
- Quality and Ease of Use: Physics, Avatar Design, Shopping
- Revenue Generation for the Lab
- Initial Alpha Access
- Initial Content Focus on Virtual Reality
- Accessibility for the Impaired
- Content Creation and C# as the Scripting Language
- More on Maya and Adding Other Tools
- 13+ Access
- Moving Content Between Platforms
- Other Points of Note
On the Name
[00:00 / 31:20] So, the future platform for virtual experiences. We’ve said that the next generation platform, we still don’t have a name for this thing; we have a code name internally, but we don’t want to leak that out or use that, because that could just be confusing and distracting, and it’s probably going to change soon anyway. So we just refer to it as “the next generation platform”.
[00:21 / 31:42] We do not refer to it as “SL 2.0”, because that might imply a little too much linearity, and we don’t want to necessarily constrain ourselves by the past; but we also want to obviously take advantage of, and leverage, our learnings from the past.
Progress to Date
[00:39 / 31:38] But the progress is going very well. I would say we’re about 8-9 months in on working on this; I would say the last six months have been absolutely full-on with a big crew. We’re talking close to 40 people or more; probably 30+ just engineers, and then obviously a bunch of product managers and designers working on this product.
New User Discovery and Experience
[01:12 / 33:33] And there’s a number of areas where we think about it quite differently from Second Life, and we did spend quite a lot of time thinking about why did Second Life hit the ceiling, if you will. You know, many years ago it peaked at 1.1 million monthly users, and these days it’s around 900,000, so it’s not a huge difference between the highs and where we are today.
[01:38 / 32:59] But why didn’t it go to five million, ten million, 100 million? And what can we do to solve some of the things we thought caused it to sort-of max out there?
[01:52 / 33:11] One area where we want to think quite differently is discovery; how do I discover an experience? Today you pretty much have to be inside Second Life to discover an experience, and we want to make it a lot easier for people to be able to discover an experience from the outside. So that you can create an experience, and [people can] much more easily find your experience and enter your experience without necessarily, at that point, being aware of the notion of this platform or what other types of things are available to them. They can discover those as they go along. Make it easier for you to bring your audience directly into your experiences.
[02:36 / 33:55] Accessibility. Today, when you leave your PC, you pretty much leave Second Life behind, [so] what can we do to make sure it’s available on more platforms? It’s obviously getting more complicated now with all these VR platforms; so what used to be PC, Windows and Mac, which we support today; and then mobile, which you can get access to today if you use a third-party service like SL Go or some other clients that support mobile.
[03:10 / 34:29] But we want to think about mobile as something we can support from the beginning; but again, the number of platforms across mobile, PCs and VR … [there are] more and more of them, so it’s tough to keep up. So we are building the next generation platform from the ground up to make it possible for us to take advantage of all these different platforms.
Scalability and Creativity
[03:37 / 34:57] Scalability. This is a really important one; an event like this highlights it. There’s a tremendous amount of effort that goes into putting on a meeting like this with just a couple of hundred people in-world. We have to put together four corners and do a lot of work, and it’s still creaking at the seams as we speak, to put something like this on.
[04:06 / 35:25] We want, [with the] next generation platform, for an event of this size to be a trivial exercise, and then to figure out how, with various techniques, to make it possible to do events like this for tens of thousands of people.
[04:26 / 35:46] That’s one way to think of scalability: how do you get more people in a region, how do you get more people to be able to participate in an event at the same time. But [there’s] also the scalability for creators: how do you make it possible for creators to not only reach a larger audience, but also make more money?
[04:44 / 36:14] Take the classroom that Texas A&M put together for teaching kids chemistry. The developers of that experience of teaching chemistry probably did it as a one-off job, for some fee, for Texas A&M to create that classroom. When the classroom is used by students at Texas A&M, you know, 20 students, whatever, then that experience is fully in use.
[05:22 / 36:41] What if that developer could have an unlimited number of copies of that experience to rent out or sell, and every institution could use that virtual classroom all at the same time? That makes for a much more appealing prospect for a creator of an experience, and gives them a greater opportunity to monetise their experience. And then we’ll get more high-quality content creators introduced into the economy, and then everything sort-of heads upwards. So that’s something we think about a lot.
Quality and Ease of Use: Physics, Avatar Design, Shopping
[05:56 / 37:16] We also think about quality. Quality is a range of things: ease of use, quality of physics, lighting, basic performance – how smooth things are, how easy it is to do things, how natural an avatar we can make.
[06:21 / 37:41] The skeleton system in the new avatars we’re working on is way, way, way more complex than what we have in Second Life.
Revenue Generation for the Lab
[06:46 / 38:05] And then monetisation – the way we [Linden Lab] monetise. I’d say our business model is a little bit strange in Second Life today. We charge you a lot for land, and then we charge you almost nothing for all of the transactions that happen in-world. So, I’ve said this before, but generally we think about how do we lower our property taxes by a lot, and at the same time, we’ll have to raise sales taxes to make up some of the difference.
[07:15 / 38:35] And then also how can we build a platform that [is] technically less demanding, so that it costs us less to operate all of this content that we’re running all of the time, so that we can have a lower barrier to entry, and make it possible for people to come in and create some really interesting things at very low cost. And so that’s a big focus for us. How can we make less money per user, almost, but have a lot more users, is kind-of the core of the puzzle we’re trying to solve for.
Initial Alpha Access
[07:52 / 39:13] At the beginning, this platform we’re working on will start to reveal itself to just a few hand-picked alpha users this summer.
[08:02 / 39:21] Those users will, for starters, need to know a tool called Maya, which is a fairly sophisticated and complicated tool which most of us normal human beings cannot even begin to think of how to use.
[08:16 / 39:36] The reason we started with that is because it would take a lot of effort for us to create those creation tools, and … by allowing third-party content tools, we can focus more of our energy on more of the runtime aspects of the environment and then layer-in more and more in-world creation tools over time.
[08:39 / 39:59] And Maya is just a starting point; ultimately, we want to be able to support a huge array of third-party tools: Maya, 3ds Max, SketchUp – any tool that any creator is comfortable with using, we want to make it possible for them to take content from there directly into this next generation platform, and then basically just instantly walk into that content, and easily invite people into that content and start to socialise in and around that content.
[09:17 / 40:36] And after it starts to reveal itself to a few this summer, as time progresses, we’ll invite more and more people as it gets easier and easier to use; and as we figure out a lot of the bugs and issues to make it a useful experience for you, then obviously, more and more of you will be invited to come on-board over time.
[09:42 / 41:01] Meanwhile, Second Life is not going anywhere. We will continue to improve it, like I said earlier, and it could be years before any of you decide that you would rather use this new thing we’re working on versus Second Life; and that’s fine with us.
Initial Content Focus on Virtual Reality
[10:00 / 41:19] The new platform – at the beginning, I mentioned accessibility and multiple platforms; but the two platforms we’re definitely focused on as number one and two are virtual reality and PC. So any content that’s created in the new platform will be a great experience in something like Oculus and on PCs, and then we’ll continue to think about what platforms to bring on next after that. But those are the sort-of number one and two platforms that we’ll support either way.
[10:36 / 41:56] We will spend a lot of time understanding what it means to create content in the context of virtual reality hardware like the Oculus; how does that change the type of content you want to create? What kind of use-cases make sense in the context of virtual reality? So we’ll be spending quite a bit of time in there.
[10:56 / 42:17] And all of this hardware starts to reveal itself for a few hundred bucks later this year, but more likely early next year; so we’re still a year away, I would say, from some of these HMDs, or head-mounted displays, starting to make their way into the hands of ordinary consumers. But we want to make sure that we’re well aware of what it means to create content and experiences for that [hardware]. So that’s definitely top-of-mind.
At this point, Ebbe gave his closing remarks, which were focused on education in general, rather than specific to SL or the next generation platform, and which will be available in my full transcript of the presentation.
Comments on the Next Gen Platform Arising During the Q&A Session
During the main Q&A session, several questions were asked relating to the next generation platform in particular, and Ebbe’s responses to these are given below for reasons of completeness. Timestamps refer to the video only.
Accessibility for the Impaired
[44:47] Will there be voice-to-text and text-to-voice in the new platform? It’s not on the road-map right now, but I also don’t necessarily see it being a hugely complicated thing to add. There are some really great third-party services that we could hook into to make some of those capabilities possible, and it might not even be us; maybe a creator will add that functionality to the next platform.
[45:20] But it is something we talk about quite a bit, because in virtual reality, for those who have tried on some of those goggles, your keyboard and mouse suddenly feel like not proper instruments for interacting and communicating. Voice will obviously be a very natural way of interacting in VR, but that doesn’t work for everybody and in all use cases, so voice-to-text and text-to-voice makes absolute sense.
[45:49] I don’t see it as being a super difficult thing to do; I’m sure we could use some Google service or something like that to make that a pretty darn good experience without too much work. So I’m not sure when that could happen, but I’m sure it’s completely doable.
[46:32] What other work would we do with regards to accessibility … so that the next generation platform can easily be used by someone who is blind? It’s very early to know exactly how we’ll tackle it; it’s a good question, and I don’t have an answer. We have stated that our client, at least at the beginning – and possibly never, but never say never – will not be open-source. So the way some of the viewers you have for Second Life can specifically target those use cases will not be easily done. We might have to partner with some of those people to better understand how we can make sure that we build some of those things into the one viewer we will do. So I will take notes on this, and ask the product team to make sure this is in their minds as they move forward.
Content Creation and C# as the Scripting Language
[48:28] One thing, obviously, like I mentioned earlier, is that we want to make it possible to use a huge range of third-party tools [in the platform] and make sure we support sort-of common file formats as well as we can – .FBX or .OBJ, stuff like that – so you could leverage a lot of third-party tools that do a much better job of specifically creating content for various use cases, which is not that easy to do in Second Life today. We want to make that very easy; support a huge number of third-party tools to be a part of the creation, or tool chain as we call it, for the next generation platform.
[49:15] The in-world tools will start to focus mostly on how you can lay things out: so I can import a lot of things, but then sort-of place them, rotate, scale, and things of that nature. And then, obviously, to ultimately make it easy for you to add scripting on things to create interactivity and other functionality.
[49:37] The scripting language will be C#, which is a good thing, because there are obviously a lot more people who know how to do things in C# than will ever learn how to do something in Linden Scripting Language, so you’ll have a real programming language to work with.
[49:53] So right there you can easily use a lot of existing talent in the world today to create some really incredible content with third-party tools and C#. You have millions and millions of people that can contribute to creating content, versus having to train someone from the ground up how to create something inside of Second Life, which has proprietary tools for 3D and scripting.
Like I said, over time we’re obviously going to make it easier to do layout within the world, but we’re also exploring technologies like voxels to think of ways to make it easy for non-3D experts to be able to create environments and structures; so that’s an area we’re investing some time in right now, to understand what we can bring to the table there.
[50:50] So we can hit a much broader range of creators: from professionals who can use the tools they’re comfortable with today; to hobbyists who are willing to learn some new tools and who could benefit from using things like voxel systems to easily “paint” and chip away to create terrain and tunnels and caves and stuff like that; all the way to making sure that for most of us who really don’t create, but more-or-less just customise the environment, it’s very easy to just furnish our house or set up our lab or get dressed, which I would say can sometimes today be maddeningly difficult in Second Life. So we want to make that as easy as it can possibly be.
[51:47] And we also want to continue to be as open as possible, so that whatever we don’t supply, third parties can sort-of extend what we’re doing to provide additional solutions and value on top of what we’re doing.
More on Maya and Adding Other Tools
[52:30] Why did we start with Maya as opposed to something open-source like Blender? Why did we choose something that’s so expensive versus something that’s free or cheap?
[52:52] Like I said, this is very early. We started with the most sophisticated tool that allows us to create the most sophisticated content possible – not just 3D content, but also animation – where we can fully stress almost every use case that we can think of; so it’s almost for our convenience. It’s not the intent that this is going to be the starting point for you guys. By the time most of you would find it worthwhile to start working in this platform, I would expect us to have support for many other tools.
[53:34] But it was the tool with which we could get our expert users to create the greatest variety of content and stress our engine to the maximum with the least amount of effort. So it was basically the fastest path for us to get the most complex content created as soon as possible, without having to build a lot of tools to do that.
13+ Access
[57:45] One interesting decision we made among ourselves, just the other day, is that we want the next generation platform to be 13+; so we’re going to lower the age at which someone can participate. Today I think it’s 16+, and actually we’ve realised that, legally speaking, there’s no difference between 13-and-18 versus 16-and-18; so our goal is to make it something users 13+ can participate in. And we therefore have to solve whatever issues arise because of that, and that’s a challenge we’ve put in front of ourselves.
Moving Content Between Platforms
[01:02:16] Again, don’t expect full backwards compatibility. Just because you’ve built a fully-functional experience in Second Life doesn’t mean you can just airlift it and drop it in [to the new platform] and have it work. Like I said, it will be a different scripting language; the way we think about 3D content will be different – a much more modern approach to it.
[01:02:40] So we [will] take raw content from the outside world, and we will be able to convert that into our internal format, which will be a highly optimised format for the run-time; and it’s not even clear to us if we will preserve the original format in any shape or form, because it does get converted into, like I said, a highly optimised run-time type of experience.
[01:03:12] So, exporting a full experience, it’s not clear how easy it would be, but you obviously still have full control over the content that you originally imported in the first place.
[01:03:26] And also not clear [is] where you could easily drop that content in. I’m not necessarily expecting that there will be lots of different worlds which are compatible with each other, where you can just easily take everything that you do and just move it over to the next one.
[01:03:45] A lot of things will be different. Our scripting language will be different; the way we think about 3D content will be different; the way we think about avatars and skeleton systems will be different. So we’re focused more on high performance and high quality, and probably less on portability. [We] would love for [things] to be more portable one day, but it’s not our main focus.
[01:04:13] A lot of other companies and groups are thinking about universal standards and common ways of describing content and scripting and whatnot, so that it could be portable. And maybe when those standards come into place, we will participate; but I think it will be a long time before you can have twelve different companies creating twelve different platforms for virtual experiences that easily interchange content among themselves. It would just add so much complexity. And ultimately, to have that succeed you have to target the lowest common denominator, which doesn’t necessarily bring you into the future quickly enough. So that’s our current perspective.
Other Points of Note
Items with timestamps can be found within the meeting video; those without a timestamp were raised in the additional chat conversation with Ebbe.
- [00:16:18] As has been repeatedly stated, the new platform will not be 100% compatible with Second Life, so direct content migration will not be possible in all cases; however, import of things like mesh and textures held locally will most likely be possible
- [00:19:03] It is anticipated that with the next generation platform, content would not have to disappear permanently due to financial constraints on the part of the creator in meeting costs; if something is not visited very often, it can be stored off-line and very quickly brought back should someone wish to visit it
- [00:20:03] Third-party authentication and access control to experiences is being built into the foundation of the next generation platform, which should help organisations to manage access to their experiences using tools already at their disposal
- [01:07:22] Very little, if any, functionality is being carried forward from Second Life; almost everything is being built from the ground up, including how inventory is managed
- People will be able to preserve their SL identities and use them on the next generation platform, and will be able to move back and forth between the two, once the latter is more open to users
- The new platform is still over a year from general availability
- A “master account” system is being considered, such that multiple avatar accounts will be possible under a single user account; whether this will include the ability to move inventory between accounts is not clear at this time
- On the subject of land:
  - Individual land areas will be much larger than the SL concept of regions, potentially “thousands” of metres across, and will support the concept of a “mainland” environment
  - Land areas can be connected, but the mechanism for moving between them has yet to be decided; a gateway system is one idea being considered
- The Lab does not view the new platform as a contiguous “world” with a unified “geography” as SL is generally seen – hence the use of “platform”, rather than “world”; as such, it might be analogous to a series of interconnected experiences
- Experiences are likely to be instance-based; when an avatar limit is reached, an additional iteration is created, allowing more people to engage in it (“With instancing, we create an experience that is optimum with 150 users, but when it reaches that, spin up another one”)
- There will be segregation based on age / content
- SL might eventually be layered on top of the new platform
- Spatial audio will be a part of the new platform.
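To make the instancing point above a little more concrete, the following is a minimal, purely illustrative C# sketch (C# being the platform’s stated scripting language) of how “spin up another one at 150 users” might work. The `ExperienceInstance` and `ExperienceRouter` names, and the join logic, are my own assumptions for illustration; they are not actual platform APIs, and only the 150-user figure comes from Ebbe’s comments.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch only: names and structure are illustrative assumptions,
// not real next generation platform APIs.
class ExperienceInstance
{
    public const int MaxUsers = 150; // "optimum with 150 users", per the keynote
    public readonly List<string> Users = new List<string>();
    public bool IsFull => Users.Count >= MaxUsers;
}

class ExperienceRouter
{
    private readonly List<ExperienceInstance> instances = new List<ExperienceInstance>();

    // Place a user in the first instance with room, creating ("spinning up")
    // a new instance only when every existing one has hit its avatar limit.
    public ExperienceInstance Join(string user)
    {
        var target = instances.Find(i => !i.IsFull);
        if (target == null)
        {
            target = new ExperienceInstance();
            instances.Add(target);
        }
        target.Users.Add(user);
        return target;
    }

    public int InstanceCount => instances.Count;
}
```

Under this scheme, the 151st arrival would transparently land in a second copy of the experience rather than being turned away, which is the trade-off instancing makes against everyone sharing one contiguous space.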
With thanks to Mal Burns for the video.