Coming Soon: new option to reduce the cost of SL Premium Plus subscriptions

On Thursday, September 25th, 2025, Linden Lab hosted a further Zoom call with creators and bloggers to discuss a number of announcements and initiatives, one of which was a new Premium Plus subscription option. I’ll be summarising other aspects of the meeting in due course. This article focuses on the new subscription option – what it is, why it is being done, and when it can be expected.

What is It?

  • While Premium Plus has been well-received, the cost of US $249 (annual billing plan) has been seen by many as too expensive to justify.
  • To help overcome this, from October 2025, Linden Lab will be offering a “Premium Plus, No Stipend” option.
  • This subscription level will offer exactly what it suggests:
    • All of the “physical” benefits of Premium Plus (2048 sq metre Linden Home options etc.).
    • However, users signing up to it will not receive either the one-off sign-up bonus (L$3,000) or the weekly stipend (L$650).
  • The new offering will retain the “Premium Plus” name for simplicity, and will be presented as an option within the Premium Plus subscription level.
  • By removing the sign-up bonus and stipend, the new offering, when available, will cost US $11.99 per month / US $143.88 per annum (plus applicable local taxes) – a saving of US $105.12 on the annual cost of Premium Plus with stipend.
The upcoming new “Premium Plus, No Stipend” subscription option. Credit: Linden Lab
  • In addition, subscribers will be able to move between subscription levels with immediate effect. For example, Premium subscribers will be able to upgrade to “Premium Plus, No Stipend” without delay and, if they don’t like it, downgrade back to Premium or upgrade to “full” Premium Plus; similarly, a Premium Plus subscriber who opts to do without the stipend can switch over to “Premium Plus, No Stipend”.
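For clarity, the quoted figures are internally consistent; a minimal arithmetic sketch (illustrative only, using the prices quoted above):

```python
# Sanity-check the quoted "Premium Plus, No Stipend" pricing.
monthly_usd = 11.99
annual_usd = round(monthly_usd * 12, 2)                       # 143.88
premium_plus_annual_usd = 249.00                              # current Premium Plus annual price
saving_usd = round(premium_plus_annual_usd - annual_usd, 2)   # 105.12

print(f"Annual cost: US ${annual_usd}")     # US $143.88
print(f"Annual saving: US ${saving_usd}")   # US $105.12
```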

Why is it Being Added?

  • As noted, it is primarily aimed at making Premium Plus more attractive to users who feel the current offering, even with the sign-up bonus and stipend, is still too expensive to justify.
  • The decision to remove the sign-up payment and stipend was made on two counts:
    • Those on both Premium and Premium Plus continue to purchase Linden Dollars regardless of their stipend.
    • Stipend payments further add to the supply of L$ in circulation, contributing to the on-going issues of a top-heavy supply of Linden Dollars impacting exchange rates, as has been previously discussed – see: Linden Dollar Exchange Rate and the Economy.
  • It is recognised that the removal of the stipend will not appeal to everyone, which is why the current Premium Plus option will remain.
  • It is particularly hoped the new option will encourage Premium subscribers who have previously been reluctant to upgrade to Premium Plus due to the cost to now consider doing so.
In a perfect world every premium subscriber would move to Premium Plus, No Stipend. Best for residents, best for creators. If that takes off, this is really good for residents and really good for creators, if people upgrade. If people downgrade, it is what it is; hopefully they don’t, but if they do, they do. But upgrading, that’s a win for the creators like no other, and it’s a win for the residents; we’re really giving a lot more for very little.

– Brad Oberwager, during the Zoom Call, September 25th, 2025

When Will it Launch?

  • If all goes according to plan, “Premium Plus, No Stipend” will launch in the first half of October 2025.
  • Formal announcements of its availability will be made when officially launched.


Linden Lab offers comment on acquisition of Blush AI

Via the Blush AI website

Towards the end of August 2025, I was contacted and asked if I knew anything about a recent acquisition by Linden Lab – that of Blush AI, a dating simulator available for both Android and iOS that (quote) “helps you learn and practice relationship skills in a safe and fun environment”.

Blush AI was originally developed by Luka Inc., the company behind Replika, an emotional artificial intelligence (AI) companion chatbot first released in 2017, which has proven highly popular, with a claimed user base of over 30 million – although it has not been without its share of concerns around user data protection.

At the time I was contacted, the acquisition of Blush AI from Luka Inc. was causing some concern within the forums and on social media, with some of the concerns centring on the possible use of SL chat logs to train the AI. As there had been no official announcement on the matter, I contacted Linden Lab to see if they would comment on the acquisition. Due to matters of vacations, etc., it took a while to obtain a response, but this week I did receive an official statement on the acquisition, which hopefully addresses some of the concerns raised in the forum thread:

Linden Lab has always been a place where new ideas are explored both in and outside of Second Life. Blush is one of those explorations—a small, independent product with its own team and no connection to Second Life user data. Blush is a separate effort focused on AI companionship, while our focus in Second Life is on growing and improving the world our creators have built.
We have no plans now or in the future to use Second Life conversations or content to train AI systems. Any such use would require explicit disclosure in our Privacy Policy. Second Life remains our flagship offering, built around empowering human creativity, and we are continuing to invest heavily in it with recent updates like glTF imports, WebRTC voice, our mobile app, and Project Zero.

– Linden Lab spokesperson

The Blush App page on Google Play, listing the Lab as the new developer

Given this, it would appear the intent is to have Blush AI continue to operate independently from Second Life, using its own revenue stream via in-app purchases. Most particularly, the statement makes clear that conversation logs, etc., from Second Life will not be used to help train the Blush AI, nor is it likely that there will be any other connections between Blush AI and Second Life user data.

Obviously, the question remains as to why make such an acquisition in the first place. Without any direct statement from the Lab, anything relating to this question is pure supposition. On a personal level, I have no strong opinion on it; AI tools / entertainment of this nature simply do not interest me at all. So long as the acquisition doesn’t interfere with / detract from the on-going effort to enhance and grow Second Life, which remains the Lab’s bread and butter, then I can happily ignore Blush AI. That said, given that past acquisitions by the Lab haven’t always gone that well (Blocksworld being the only real exception), I will admit to my curiosity being piqued as to how this one progresses and how long it lasts.

LL announce a pause in the current SL AI character designer project

via Linden Lab

On Thursday, July 31st, 2025, Linden Lab provided an update on the AI Character Generation project which indicates it is to be paused / closed at the start of September.

The project was initially launched in December 2024 and powered by Convai, a platform for developers and creators providing an intuitive approach to designing characters with multimodal perception abilities in both virtual and real world environments (see: Linden Lab leverage Convai for AI character generation in Second Life). However, it was shortly thereafter suspended as a result of community feedback, before being re-launched to a wider audience of potential users at the end of February 2025.

The Character Designer was launched as an experimental feature to explore the potential of AI-powered characters in Second Life. Built in collaboration with our AI technology partner Convai, this tool enabled residents to create interactive, virtual characters with conversational capabilities.
From elaborate roleplay scenarios to immersive visitor greeters, your projects and feedback have been invaluable. This pause gives us time to carefully evaluate everything we’ve learned and determine how best to evolve this technology in a way that aligns with the broader future of Second Life.
This is not the end of AI in Second Life; rather, it is a thoughtful pause as we refine our strategy and continue exploring new opportunities for innovation.

– Linden Lab blog post

The “pause” is set to come into effect from Monday, September 1st, 2025, with the Lab further noting that as of that date:

  • It will no longer be possible to create, deploy, or run AI Characters using the Character Designer interface.
  • Characters created through the Designer will no longer function or appear in-world.
  • Previously created characters and their memory will not be retained post-pause.
  • Any alt accounts created specifically for testing the Character Generator will remain valid Second Life accounts, and can be logged into just like any other alt account.

Community support for the project will continue through the following channels:

  • A dedicated forum thread for on-going Q&A and feedback.
  • Second Life Discord for real-time responses from staff and developers.
  • Support Portal for any account-specific issues.

In addition, those who have used the Character Generator are encouraged to record their work during the wind-down period and share video through the forum thread or suitable platforms.

The sunsetting of this project does not mean the end of further possible projects and experiments in the use of AI technologies, with the blog post also stating:

 This is not the end of AI features in Second Life—we’re using this moment to regroup and plan for future development … We are actively and cautiously experimenting with other AI technologies to enhance Second Life’s creative potential, performance, and immersion. The insights from this project are already helping to inform future efforts.

For further information, please refer to the official blog post, which includes a short-term FAQ.

Space Sunday, of planets, signs of life, and an award

Comparing the large dwarf planets with Earth and the Moon. Credit: unknown

As I noted back in July 2024, classifying just what “is” and “is not” a “planet” is something of a minefield, with the entire debate going back to the 1800s. However, what really ignited the modern debate was – ironically – the search for the so-called “Planet 9” (or “Planet X” as it was then known), a body believed to be somewhere between 2 and 4 times the size of Earth and around 5 times its mass (see: Space Sunday: of “planet” and planets).

That hunt led to the discovery of numerous bodies far out in the solar system’s Kuiper Belt which share similar characteristics with Pluto (size, mass, albedo, etc.), such as Eris (which has at least one moon), Makemake, Haumea (which has two moons), Sedna, Gonggong and Quaoar (surrounded by its own ring of matter), all of which, like Pluto, appear to have reached hydrostatic equilibrium (aka a “nearly round shape”).

Is it a dwarf planet? A TNO? A Plutoid? This Euler diagram, used by the IAU Executive Committee, demonstrates the complexity in trying to classify objects within the solar system. Credit: Holf Weiher

The discovery of these tiny worlds meant that the more we looked into the solar system, the more the roster of planets would require updating, causing confusion. So, in 2006, the IAU sought to address the issue by drawing up a definition of the term “planet” which would enable all these little planet-like bodies to be acknowledged without upsetting things too much. In the process, Pluto was relegated to the status of “dwarf planet”, in keeping with the likes of Ceres in the inner solar system, Eris, Makemake et al. This makes sense – but that’s not to say it didn’t cause considerable upset.

The definition was also flawed from the outset in a couple of ways. Firstly, if taken strictly, the criteria the IAU had chosen meant that Saturn, Jupiter, Mars and Earth were actually not planets, because none of them has “cleared the neighbourhood around [its] orbit”: all of them have gatherings of asteroids skipping around the Sun in the same orbit (notably some 10,000 for Earth and 100,000 for Jupiter).

Secondly, the requirement that a body has to be “in orbit around the Sun” pretty much rules out calling planet-like bodies orbiting other stars “planets”; something which, given all the exoplanet discoveries by Kepler, TESS et al, has become something of a bite in the bum for the IAU. As a result, the “Pluto is a planet” brigade have felt justified in continuing their calls for Pluto to regain its planetary status.

Several attempts have been made to try to rectify matters in a way that enables the IAU to keep dwarf planets as a recognised class of object (including Pluto) and which addresses the issues of things like exoplanets. The most recent attempt to refine the IAU’s definition took place in August 2024, at the 32nd IAU General Assembly, when a proposal offering a new set of criteria was put forward in order for a celestial body to be defined as a planet.

Unfortunately, the proposal ran headlong into yet more objections. The “Pluto is a planet” die-hards complained the new proposal was slanted against Pluto because it only considered mass, and not mass and hydrostatic equilibrium, while others got pedantic over the fact that while the proposal allowed for exoplanets, it excluded “rogue” planets – those no longer bound to their star of origin but wandering through the Galaxy on their own – from being called “planets”. Impasse ensued, and the proposal failed.

In the meantime, astronomers continue to discover distant bodies that might be classified as dwarf planets, naturally strengthening that term as a classification of star system bodies. This last week saw confirmation that another is wandering around the Sun – and a very lonely one at that.

Called 2017 OF201 (the 2017 indicating it was first spotted in that year), it sits well within the size domain specified for dwarf planets, being an estimated 500-850 kilometres across, and may have achieved hydrostatic equilibrium (although at this point in time that is not certain). Referred to as an Extreme Trans-Neptunian Object (ETNO, a term which can be applied to dwarf planets and asteroids), it orbits the Sun once every 25,000 years, coming to within 45 AU of the Sun at perihelion before receding to 1,700 AU at aphelion (an AU – or astronomical unit – being the average distance between Earth and the Sun).
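The quoted period and distances hang together: for a body orbiting the Sun, Kepler’s third law gives the orbital period in years as the semi-major axis in AU raised to the power 1.5. A quick illustrative check using only the figures above:

```python
# Consistency check of 2017 OF201's quoted orbit via Kepler's third law
# (P^2 = a^3, with P in years and a in AU, for a Sun-orbiting body).
perihelion_au = 45.0     # closest approach, as quoted
aphelion_au = 1700.0     # furthest distance, as quoted

# The semi-major axis is the average of perihelion and aphelion distances.
semi_major_axis_au = (perihelion_au + aphelion_au) / 2
period_years = semi_major_axis_au ** 1.5

print(f"Semi-major axis: {semi_major_axis_au:.1f} AU")
print(f"Orbital period: {period_years:,.0f} years")  # ≈ 25,800 years, matching "once every 25,000 years"
```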

As well as strengthening the classification of dwarf planets (and keeping Pluto identified as such), 2017 OF201 potentially adds weight to the argument against “Planet 9”, the original cause for the last 20 years of arguing over Pluto’s status.

2017 OF201 imaged by the Canada–France–Hawaii Telescope on 31 August 2011

To explain: many ETNOs and Trans-Neptunian Objects (TNOs) occupy very similar orbits to one another, as if they’ve somehow been clustered together. For example, Sedna has a number of other TNOs in orbits which closely match its own, leading the group as a whole to be referred to informally as “sednoids”. Among “Planet 9” proponents, this is taken as evidence for its existence, the argument being that only the influence of a large planetary body far out beyond Neptune could shepherd these ETNOs and TNOs into clusters of similar orbits.

However, by extension, this also means that 2017 OF201 – together with 2013 SY99 and 2019-EU5 – should also have fallen to the same influence; but none of them have, orbiting the Sun quite independently of any clusters. This potentially suggests that rather than any mysterious planet hiding way out in the solar system and causing the clustering of groups of TNO orbits, such groupings are the result of the passing influence of Neptune’s gravity well, together with the ever-present galactic tide.

Thus, the news of 2017 OF201’s confirmation as a Sun-orbiting, dwarf planet-sized ETNO both ups the ante for Pluto remaining a dwarf planet and simultaneously potentially negates the existence of “Planet 9”.

Jupiter: Only Half the Size it Once Was?

Definitions and classifications aside, Jupiter is undoubtedly the planetary king of the solar system. It has a mass more than 2.5 times the total mass of all the other planetary bodies in the solar system (but is still only one-thousandth the mass of the Sun!) and has a volume 1,321 times that of Earth. It is also believed to have been the first planet to form in the solar system; possibly as little as one million years after the Sun itself was born, with Saturn following it shortly thereafter.

Jupiter is an important planet not just because of its dominance and age, but because of the role it and Saturn played in the overall formation of the solar system, although much of this is subject to contention. The primary concept of Jupiter’s and Saturn’s voyage through the solar system is referred to as the “grand tack hypothesis”, on account of the two giants migrating through the solar system in the first few million years after they formed.

Jupiter as it is today, as seen by the Hubble Space Telescope. Not long after its formation, it might have been twice its current size. Note: the black dot to the left of the image is the shadow of Io, the innermost of Jupiter’s large moons. Io itself is outside of the frame. Credit: NASA/JPL / University of Arizona

Under this theory, Jupiter formed around 3.5 AU from the Sun, rapidly accreting a solid core and gaining mass to a point where it reached around 20 times Earth’s mass (although Earth would not form for another 45-50 million years). At this point, its mass and size (and those of Saturn) were such that the two entered into a complex series of interactions with one another and the Sun, with both migrating towards the Sun, likely destroying a number of smaller proto-planets (all of them larger than Earth) along the way. At some point, these interactions reversed, and both infant planets started migrating away from the Sun again, clearing the way for the remnants of the smaller proto-planets they’d wrecked to gradually accrete to form what we now know to be the inner planets, as Jupiter and Saturn continued outwards to what are now their present orbits.

Believed to have occurred over a period of between 4 and 6 million years, the “grand tack hypothesis” is contentious, as noted, and there are alternative theories concerning Jupiter’s formation and the early history of the solar system. Because of this, astronomers Konstantin Batygin (who, coincidentally, is one of the proponents of the “Planet 9” theory) and Fred C. Adams used complex computer modelling to try to better understand Jupiter’s formation and early history, in order to better determine how it may have behaved and affected the earliest years of the solar system’s formation.

In order to do this, and to avoid being swayed by any existing assumptions concerning Jupiter’s formation, they decided to model Jupiter’s size during the first few million years after its accretion started. They did this using the orbital dynamics of Jupiter’s moons – notably Amalthea and Thebe, together with Io, Jupiter’s innermost large moon – and the conservation of the planet’s angular momentum, as these are all quantities that are directly measurable.

Taken as a whole, their modelling appears to show a clear snapshot of Jupiter at the moment the surrounding solar nebula evaporated – a pivotal transition point when the building materials for planet formation disappeared and the primordial architecture of the solar system was locked in. Specifically, it reveals Jupiter grew far more rapidly and to a much larger size than we see today, being around twice its current size, with a magnetic field more than 50 times stronger than it is now and a volume 2,000 times greater than that of present-day Earth.

Having such a precise model now potentially allows astronomers to better determine exactly what went on during those first few million years of planetary formation, and what mechanisms were at work to give us the solar system we see today. This includes the mechanisms which caused Jupiter to shrink to its present size (simple heat loss? heat loss and other factors?) and calm its massive magnetic field, and the time span over which these events occurred.

Yeah. Finding Life is Hard

In March, I reported on a possible new means to discover evidence of biosigns on worlds orbiting other stars by looking for evidence of methyl halides in their atmospheres (see: Space Sunday: home again, a “good night”, and seeking biosigns). In that report, I noted that astronomers had potentially found traces of another compound associated with organic processes, dimethyl sulphide (DMS), within the atmosphere of exoplanet K2-18b, a hycean (water) world.

This is the strongest evidence yet there is possibly life out there. I can realistically say that we can confirm this signal within one to two years. The amount we estimate of this gas in the atmosphere is thousands of times higher than what we have on Earth. So, if the association with life is real, then this planet will be teeming with life.

– Prof Nikku Madhusudhan, lead investigator into the study of the atmosphere of K2-18b and the apparent discovery of dimethyl sulphide.

Now in fairness, the team behind the discovery did note that it needed wider study and confirmation. Extraordinary claims requiring extraordinary proof and all that. And this is indeed what has happened since, with the findings tending to throw cold water (if you’ll forgive the pun) on the idea of that potentially wet world 124 light-years away having dimethyl sulphide or its close relative, dimethyl disulfide (DMDS), in anywhere near detectable levels.

An illustration of what K2-18b may look like. Credit: NASA / ESA / CSA / Joseph Olmsted

The more recent findings come from a team at the University of Chicago led by Rafael Luque and Caroline Piaulet-Ghorayeb. Like Madhusudhan and his team at Cambridge University, the Chicago team used data on K2-18b gathered by the James Webb Space Telescope (JWST). However, in a departure from the Cambridge team, Luque and his colleagues studied the data on the planet gathered by three separate instruments: the Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph (FGS-NIRISS), the Near Infrared Spectrograph (NIRSpec) and the Mid-Infrared Instrument (MIRI) – the latter being the sole source of data used by the Cambridge team.

Combining the data from all three instruments helps ensure a consistent, planet-wide interpretation of K2-18b’s atmospheric spectrum, something that cannot be obtained simply by referencing the data from a single instrument. And in this case, it appears that by only focusing on MIRI, the Cambridge team inferred a little too much in their study.

We reanalyzed the same JWST data used in the study published earlier this year, but in combination with other JWST observations of the same planet … We found that the stronger signal claimed in the 2025 observations is much weaker when all the data are combined. We never saw more than insignificant hints of either DMS or DMDS, and even these hints were not present in all data reductions.

– Caroline Piaulet-Ghorayeb

Most particularly, the much broader set of spectrographic data gathered from the three instruments points to the possibility that some of the results observed by Madhusudhan’s team could actually be produced entirely abiotically, without any DMS being present. The Chicago paper has yet to be peer-reviewed, but their methodology appears sufficient to roll back any claims of organic activity taking place on K2-18b or within its atmosphere.

AAS Recognises Gene Kranz

The “original four” NASA Flight Directors. Back row (l to r): Glynn Lunney and John Hodge. Bottom (l to r): Gene Kranz and Chris Kraft. Credit: NASA

Eugene Francis “Gene” Kranz is a genuine NASA legend. He may never have flown in space, but he played a crucial role – along with the late Christopher C. Kraft (also see: Space Sunday: a legend, TESS and a rocket flight), John Hodge and Glynn Lunney (also see: Space Sunday: more from Mars and recalling a NASA legend) – in formulating how NASA runs its manned / crewed spaceflights out of the Mission Operations Control Centre, Houston.

He is particularly well known for his leadership of his White Team during the Apollo 11 Moon landing in 1969, and for leading the work to get the crew of Apollo 13 back to Earth safely when that mission faced disaster. As a result of the latter, Kranz and his entire White Team received the Presidential Medal of Freedom in 1970, as well as being immortalised in film and television (although the line “Failure is not an option” was not something Kranz ever said – he instead used it as the title for his 2000 autobiography; the quote itself was pure fiction, written for the 1995 Ron Howard film Apollo 13, which saw Ed Harris play Kranz).

His career at NASA ran from 1960 through 1994, during which he rose from Mission Control Procedures Officer to Director of Mission Operations. As a result, he has been the recipient of NASA’s own Distinguished Service Medal, Outstanding Leadership Medal and Exceptional Service Medal.

And he has now been similarly recognised by the American Astronautical Society, which on May 21st, 2025, named him the recipient of their 2024 Lifetime Achievement Award. Only presented every 10 years, the award recognises Kranz for his “exemplary leadership and a ‘must-never-fail’ style that ensured historic mission successes, empowered human space exploration, saved lives and inspired individuals around the world.”

The ceremony took place at the Johnson Space Centre, Houston, Texas, where Kranz was also able to revisit the place where he and his teams and colleagues made so much history: the Apollo Mission Operations Control Room (MOCR – pronounced Mo-kerr – NASA has to have an acronym for everything 🙂 ).

Gene Kranz, with his AAS Lifetime Achievement Award, seated at the restored console he occupied as the White Team Lead Flight Director, notably during the Apollo 11 and Apollo 13 missions. Credit: NASA

The latter had recently been restored as a direct result of a project initiated and driven by Kranz in 2017, in memory of Apollo and of the many colleagues who have since passed away (the most recent, sadly, being Robert Edwin “Ed” Smylie, whose team worked alongside Kranz’s White Team to make sure the Apollo 13 astronauts returned to Earth safely, and who passed away on April 21st, 2025). Fully deserving of the AAS award, Gene Kranz remains one of the stalwarts of NASA’s pioneering heydays.

Saying farewell to the original Linden Homes – and a Second Life mystery?

A selection of the old-style Meadowbrook homes, soon to depart the grid, and one of their local community points, a swimming pool

As I’ve recently noted in writing about the release of the Aspen Linden Homes and new house styles for the Log Home theme (both themes being available to Premium and Premium Plus subscribers – read here and here for more), Linden Lab has started making renewed noises about retiring the “older” generation of Linden Homes, first introduced in 2010.

On Friday, May 2nd, 2025, the Lab further confirmed the upcoming closure of the older style of Linden Homes (all of which stand on 512 sq m parcels within their own themed mini-continents scattered across the grid), with a post entitled Honouring the Past, Embracing the Future: Your New Linden Home Awaits.

Clearly aimed at users who have not as yet made the move from these older themes and styles of home to the “modern” Homes on Bellisseria, the post reads in part:

As we continue evolving and improving the Linden Home experience, we want to ensure that our residents have access to the best and most modern living spaces available. A couple of years ago, during SLB, we announced our long-term plan to phase out all legacy Linden Homes. The time has come to make that transition, and while saying farewell to your current home may feel bittersweet, we are confident that the next chapter will bring even more possibilities, personalization, and comfort.
We also recognize the lasting impact these homes have had, and we are planning a special tribute to honour the legacy Linden Homes and the memories they have held. We will share more details soon.

Whilst no deadline is given in terms of how much longer the older Homes will remain on the grid, the post does tend to make it clear their days are now definitely numbered. I honestly have no idea how active people are / were in terms of living in and utilising these older Linden Homes, or exactly how much “community” they fostered. However, they do represent a point in time in SL’s history that is worth remembering in some way; many of us actually appreciated having them – as limited as they are by today’s standards!

However, what piqued my interest was the idea of “honouring” these older homes – and I very much hope that whatever is planned goes beyond just the houses themselves. While I’ve no idea how popular the approach was, at least one of the little continents presenting these older Linden Homes carried with it something of a little “backstory” to Second Life; one likely utterly obscure in this day and age, but one to which my mind immediately sprang on reading the Lab’s post.

Cape Ekim, May 2025

It’s called Cape Ekim, and the legend wrapped around it involves another mythical Linden explorer in the form of Professor Linden (totally overshadowed by the feats and ego of Magellan Linden – possibly because the Professor never survived long enough to be embodied in some manner!), and his hunt for a great and benevolent dragon said to have once roamed the grid.

It’s a fun little mystery (if one a little long in the tooth and genteel / pedestrian in this age of scripted Experiences, mesh, Animesh, and all the rest we can use in SL to create adventures and games), and one I wrote about more than a decade ago.

It may not tax the grey matter too much, but it does feature riddles, books, secret passageways, and a cipher to solve to unlock a door (no pesky double-click TPing!), and it is of an age and style that allows it to stand as a glimpse of SL’s past as much as the houses close to where it sits. As such, I really hope LL will give thought to the idea – and to any other similar spots which may exist within the old Linden Homes continents.

Cape Ekim, May 2025

Certainly, if the history of SL and nostalgia are your thing – and just in case it will soon vanish – why not try exploring / revisiting Cape Ekim for yourself?

Second Life Town Hall: AI & the Future of Our Virtual Community – summary

The Town Hall meeting space
On Friday, April 18th, Linden Lab hosted a Town Hall meeting with Philip Rosedale to discuss AI & the Future of Our Virtual Community, defined in part by the Lab thus:

As Artificial Intelligence (AI) becomes increasingly present in both our physical and virtual worlds, the Second Life community is coming together for a Special Town Hall event to explore its evolving role and impact within our shared metaverse.

The meeting was livestreamed to YouTube, where the official video is available with a transcript.

The following is a summary of topics, comments, presentations and responses in the order presented. Two videos are also provided:

  • The official livestream video (which ended 10 minutes before the end of the session, and which has some audio issues).
  • The video recording by Pantera Północy, to help overcome the sound issues with the above. Both are embedded in this article. My thanks to Pantera for providing it.
  • Timestamps in the text primarily reference the official livestream video, with timestamps to Pantera’s video clearly denoted as such.

Philip Rosedale – Opening Statement

Welcome everybody to our Town Hall … This is our Town Hall on the growing, evolving role of Artificial Intelligence in Second Life. It’s good to see so many people here; I think engaging on this with us is one of the most important conversations of our time: how AI is shaping the metaverse; how we live with it here.
We’re not here today to make any announcements; this is a Town Hall, and we’re going to run it as one, and try to establish a pattern where we can do this again and again, when we need to, about things. … So we’re not going to announce anything, because we’re here to listen; this is going to be an open conversation. If governance or policy or software changes come out of this discussion, we can try and act on them – and we will. But at the start, we have no new announcements to make.

Brett Linden – Structure and Flow

[Video 3:06-5:42]

  • The meeting is a two-way dialogue and exchange.
  • There was a response to the invitation for users to give pre-prepared comments [7 in all]; those people each have up to 3 minutes to present them.
  • There were also pre-submitted questions and feedback [68 in all], and an aim is to address as many of those as possible, time allowing.
  • As this meeting is specific to AI, people were asked not to raise questions on other aspects of Second Life.
  • A request for orderly use of chat and open microphones.
  • LL recognises that AI is a polarising topic, but requests that comments and feedback be kept constructive and solutions-oriented.

Philip Linden – Major Usage Areas for AI in Second Life

[Video 5:44-15:40]

Notes:

  • This includes actual and prospective uses of AI in SL, and some of the feedback from users submitted ahead of the meeting.
  • The breakdown of items is an arbitrary one on my part, to try to contextualise the uses.
AI Tools to Provide General Assistance
  • Finding duplicates in Feature Requests and bug-reports in the Feedback Portal [Canny].
    • This is the only way AI is being used in the Production environment, and is considered by LL to be a good addition to SL.
    • This takes the form of taking newly-entered items and – where appropriate – merging them with matching reports / requests also submitted via the Feedback Portal or held within the JIRA archive. This would be next to impossible to do at the human level.
    • All submissions to the Feedback Portal are still human-reviewed [triaged].
  • Providing assistance to new users in a range of tasks – directly answering questions, etc., as has been trialled in some Welcome areas.
  • Planned future use of AI:
    • Translation and closed-caption tools [voice-to-text and text-to-voice] – seen by LL and many users as the “number one” accessibility improvement feature.
  • Not in development, but worthy of possible consideration:
    • Assistance with Inventory management [user suggestion] – using an AI tool to help structure and organise large inventories.
    • The potential for AI tools to help with shopping on the Marketplace [e.g. to build a specific avatar look] by carrying out searches and presenting suggestions.
    • In-world text-to-content generation tools – using text chat to describe an object (including Animesh or an avatar) and having AI perform the basic construction.
    • Identifying harassment, bullying, hate-speech, etc., within text and/or Voice chat.
    • Finding cases of possible copyright infringement in a volume impossible for humans to do (albeit with recognised potential issues).
    • As with the current physical world trend, the creation of a romantic AI-driven partner – with the recognised questions as to whether it would in fact be broadly compelling to people + the ethical implications.
Current Uses of AI Tools in SL (by Users)
  • Use of 3D content generation [e.g. Meshy] tools to create Marketplace / in-world content – recognised as being very contentious and the subject of great debate.
  • Use of 2D image generation tools [e.g. Midjourney] to create marketing materials for content in-world / on the Marketplace – also recognised as being very contentious and the subject of great debate, but seen as distinct from actual generative content creation.
  • Creating companion avatars [e.g. through the use of the Convai-driven Character Designer].
  • Creation of more dynamic Non-Player Characters (NPCs), for use in role-play environments, etc.
“Philip’s Crazy Dream”
  • Having avatars “learn” through how they are used when logged-in, what they say, and have them “live on” in-world as a presence when the user logs-out.
  • Acknowledges it is a “crazy, fun, Black Mirror idea”, but wanted to include it as an illustration of where AI might potentially go.

Responses to Comments Submitted in Advance

[Video: 15:40-22:15]

  • Is Linden Lab currently training, planning to train, or knowingly paying another company to train any AI models with user-generated content [UGC] previously uploaded to Second Life, including but not limited to: textures, mesh models, shapes, scripts and chat logs?
    • The Lab has not yet trained AI on any of the content in-world.
    • This is a community discussion on AI, and as such, given SL’s success has been in part based on the explicit and clear statement that Linden Lab does not own any UGC within the platform, training AI on such content is not seen as a decision LL can arbitrarily make.
    • There have potentially been unfounded concerns raised on this as a result of some misunderstanding of the role of LL’s partner company in developing the AI capability used within the Feedback Portal system, as mentioned above – however, this work only utilised public information within the Feedback Portal system.
  • When will Convai bots [characters built using the Convai tool] be allowed on parcels in Bellisseria?
    • Bellisseria has been established as a community with a specific covenant which states No Bots. As such, properly marked Convai characters are not allowed in those regions.
    • If all [a majority of?] Bellisseria residents want to have Convai characters within the regions, then LL are open to finding a way to take a vote or similar on the subject and updating the Bellisseria covenant.
    • In a broader context, LL should be looking to residents / communities to make these decisions, rather than having the Lab make them.
  • Bots and AI characters / AI content should be clearly marked as such (e.g. with the creator’s name).
    • Aware of a third-party blog survey in which 53% of respondents agreed with this idea.
    • Currently, there is no automated way to easily identify AI content, and such is the pace of AI development, that any tool or gadget created for this purpose is liable to be obsolete in 6 months.
    • Further, the fact that SL is an open world, where those coming in do not have to give personally identifying information, makes pursuing them in the event of a gross misuse of AI difficult.
    • However, LL is open to discussions on the issue and possible solutions.
AI Town Hall: Philip Rosedale on the stage

Resident Statements

Sailor Paü (HonePau)

[Video: 24:56-26:53]

When we come into Second Life we’re not just bringing an avatar, we’re bringing our thoughts, our moods, our intentions. For a lot of us this place is more than just a game or a creative outlet; it becomes emotional, it becomes real in ways that are hard to explain to some people outside of it. So the question of whether AI can have a soul here isn’t just about machines or scripts, it’s about presence and about connection.
When I’m thinking about what makes something meaningful in Second Life, it’s not about how detailed your mesh is or how smooth the animations are, it’s about the feeling that it gives you – maybe a build something someone made years ago; an old sim that you visit; a forgotten gallery; maybe a memory. And that makes us start to think about the soul – not in a religious sense necessarily, but in a deeper, emotional way; the part of us that makes us connect.
An AI can learn to replicate things, so it can speak in our tone; it can recreate scenes, mimic dialogue, outfits, pictures, something you say … And I guess everyone’s just curious about where it can go next, especially since we’re all singular individuals functioning separately. So maybe AI will continue to improve; maybe it will eventually move and speak in a way that feels more human; but it doesn’t have that memory or meaning to it, and it doesn’t ache; it doesn’t create from longing; it doesn’t respond from presence or absence, or carry something when you’re gone. So I guess that’s all I wanted to say overall when it came to Second Life, AI and the soul, and being able to not detach that from yourself when it comes to wherever it goes.

William Gide – Questions

[Video: 27:20-32:35]

[Pantera’s Video: 29:03-34:18]

Note: the audio broke up quite notably during this part of the session, to catch the audio in full, please refer to Pantera’s recording per the above link.

  • Many don’t feel comfortable with talking to “counterfeit people” [AI-generated characters], so what is the Lab doing to allow people to easily identify they are interacting with a large language model (LLM) automated bot?
  • What’s being done to give land holders on Mainland the same ability to ban bots as available to private estates?
  • In regards to on-boarding new users with AI bot assistance: what failure scenarios has LL discussed that would convince the company that LLM bots, at the price-point you’re willing to pay right now, are not yet ready for user orientation?
Philip Rosedale Response
  • Better parity between region / parcel controls on Mainland and those on private estates: there are a number of open tickets on this in general (and specifically for Voice); LL’s intention is to provide better moderation controls, and it will look at open issues to help decide the order of implementation for those controls.
  • In respect of the questions and Sailor’s statement: there is little doubt that SL is all about human connection, so the importance of discussing and deciding what to do with LLMs that pretend to be people “is extraordinarily high”.
    • A question arising from this is how to distinguish people from AIs, given the noted fact that users are free to join the platform without giving any personally identifying information [which might otherwise be used to distinguish human-driven avatars] – the presumption being that most people would not want to move away from this model.
    • This is a question that goes beyond LLMs masquerading as humans, and can include things like the sale of illegal items on the Marketplace.

Nodoka Hanamura-Nu Vaughn (Rathgrith027) – Text

[Video: 33:56-38:11]

[Pantera’s Video: 36:02-39:55]

For nearly a quarter of a century we have called this place home. We have created, we have cherished, we have toiled and desired. We have yearned and developed ourselves in the fire of creation, from which came Second Life, Agni – our home, named after the very Hindu god of fire himself!
I have to ask you: what gives Agni its fire? What gives it the light that radiates across its grid? It is the creative minds of its children! Children who bleed, sweat, and toil in works of mind and body to create this world we call home, in every respect. Who use digital tools to create facsimiles of a reality we wish we could only have in the physical. To live lives that those bound to a chair and bed, those without voice, without sight, without sound, can live a modicum of a normal life!
And what life is it, if it is not made as a labour of love, a labour of advancing oneself? All things are done through struggle, both big and small – both inflicted on oneself and inflicted by others! What point, what purpose, is Agni to create – if a machine creates for us? Exists for us? Thinks for us?
These artificial intelligences should exist to make our work easier – to obsolesce the undesirable and most painful of the struggles of creation and production, not obsolesce the work and plant a façade of life! Do we not find it concerning that a machine creates for you, THINKS FOR YOU? At what point do we draw the line?
At what point do we say “no” to the repetitive slop that inundates our Marketplace? At what point do we say “no” to the manipulation of the advertisements of products sold to us? At what point do we say “no” to the people who have been blinded by the future and, in insult to all that transhumanism is, have abandoned their humanity in blind pursuit of advancing technology?
I am not a Luddite! Neither are so many of us! But we know full well the perils of technology used without caution, without respect, without understanding of its consequences! I ask you, at what point do we draw the line? Do the Lindens draw the line? Do they draw the line in our mutual interest?
At what point can we even be certain that the person on the other side of the screen is a human, and not an algorithm that could be replaced with any other? At what point can we simply disregard creating for others and simply just create for ourselves, drowning our virtual home’s ability to produce a profit for not just its denizens that contribute to it, but its benefactors?  I ask you, at what point will the Lindens actually open their eyes and listen?!
It is now, or it is never, for this discussion, and the decisions which follow will steer the ship [and] either see us through another 25 years if we are lucky, or see our certain demise in the next 5 as creators and residents alike abandon our home as a result of the continuing degradation of it!
I implore the residents, and most of all the Lindens, when you speak here, when you make your decisions, do so not in your own damned self-interest, but in the interest of all of us. For I implore you all, if we let these AI run rampant, they will take our very humanity from us. I cede the floor.
Philip Rosedale Response

[Video: 38:16-39:11]

I think there are many ways in which AI has dangers for us, and I’ve written about this a good deal; I’ll put one link in chat for those who don’t follow my Substack writing. This one is about avoiding homogenization, and I think it’s a great example of why there are many ways in which we don’t want to use AI to manipulate our culture and ourselves. So just, really, thank you for that; that’s a very strong and appropriate perspective.

Ruby Maelstrom – Text

[Video:  39:54-42:15]

[Pantera’s video: 41:21-43:58]

I’m here to make a short presentation in support of the responsible use of generative AI in Second Life.
I have used AI as a starting point for many of my mesh objects. In fact, I likely would have never learned how to use Blender if I hadn’t had AI to provide some sense that I wasn’t starting from square one. Everyone is not the same, but I personally needed that push to tackle such a complex learning challenge. After working alongside AI for a while, I now feel like I understand how to model in Blender, and I’m starting to learn how to rig mesh.
AI isn’t a replacement for humans in the creative process, but a tool like many others which can help people like me do things they may not have found the energy or motivation to get started on previously. It can be a starter for an engine of creativity.
Having said that, everything that I’ve made with the assistance of AI is marked so that people are aware that it was involved in the process. I believe that a practice of marking AI-generated content makes sense. 
I also use one of the Linden-provided AI characters in my café, and it’s a popular fixture which a few people actively seek out to have a conversation with. An LLM character can be welcoming and friendly, and is strongly affected by what kind of personality its owner has written for it. When someone doesn’t feel fully capable of interacting with another person, an AI character can be a useful social outlet. In this way, I see AI characters as a sort of accessibility tool. It is also clearly marked as AI in multiple ways, and interaction with it can be avoided quite easily.
Thank you for listening to my perspective on some of the ways that generative AI can be used positively in our shared digital world.
Philip Rosedale Response

[Video: 42:18-44:02]

  • Very much agrees with the point made.
  • Notes that he’s been asked what tools he uses – ChatGPT, Grok, etc., and how frequently, and that he uses them most frequently for programming (outside of SL).
  • Feels that AI tools help him as an experienced developer who is “very out-of-date with modern tools and technology”, and he finds that AI is extraordinarily helpful in saving time when using a programming language to do something where he would not otherwise be able to figure out the coding patterns, or which would otherwise take him a lot longer to complete.
  • Also notes that he uses GPT or DeepSeek to seek answers to questions when he feels it would be more efficient than a Google search.

Hazel (Dolly Waifu)

[Video: 44:33-46:40]

  • Notes that content in Second Life represents the collective efforts of thousands and thousands of artists, creators and builders, many of whom have spent a lot of time learning skills in various tools and who depend on the platform as a source of income.
  • Many of these creators already face enormous stress and pressure to keep up with the myriad events, etc., which constantly require them to turn out new content, sometimes on a monthly basis, in order to just maintain a place in the market and their income.
  • Is concerned that generative AI merely adds to this pressure on a quantitative level, due to the volume of items that can be turned out with little or no effort / overhead.
  • Requests that LL do not lose sight of the effort of all content creators in SL and provide tools that help uplift the artisanal nature of their work above the use of generative AI tools, and help them to protect and maintain their work and their income against a perceived rising tide of less qualitative but far more quantitative content.
Philip Rosedale Response

[Video: 46:40-47:31]

  • Would suggest that one of the big problems with AI isn’t that it can replicate human action in things like creating content, but the fact it can do so extremely easily and quickly.
  • Also notes that tech entrepreneurs try to paper over the negative impact presented by AI with the excuse that people will just figure out new things to do, which is an unfair statement, given that AI can do things with orders of magnitude greater speed and reduced cost.

Zeke Onyx

[Video: 47:43-52:59]

  • Speaks as a mentor and the operator of a community gateway on-boarding new users (Vidian Gateway).
  • Is not in favour of AI adoption within SL, although concedes it could be useful, but believes it is being embraced far too quickly and without regard for possible dangers.
    • Raises concerns that AI “mentors” in SL may (in keeping with William’s questions) have a high error / failure rate in helping new residents (e.g. being unable to tell if someone is wearing one or three pairs of shoes).
    • Requests the need for Marketplace content filters to remove AI content from search results, etc., for those who do not wish to view it.
  • Beyond Second Life, provides feedback on various AI usages / limitations:
    • AI search engines outside of SL endangering people through providing incorrect information.
    • References generative AI tools that take images, text, music without consent and re-use without attribution.
    • Raises ethical concerns over AI bots data-scraping information – companies adding conditions to their terms of service that expressly allow them to scrape user data for AI training, taking away the right of choice for users.
    • Notes the environmental impact AI farms can have on the environment (power use, etc.).
  • States that use of AI is power without responsibility.
Philip Rosedale Response

[Video: 53:00-56:53]

  • Offers a definition of “generative AI” for those who might not be familiar with it:
[It] is generally the title applied to those models called large language transformer models – or stable diffusion in the case of image and content generation. These models are notable in that they are effective only because they are trained on what need to be extremely large bodies of existing work and, as just said, in many cases that work is not clearly allowed by those who have posted it. Indeed, I am one of those who would agree that the assertion that information published on the public Internet can be freely used to generate new content by AIs – I don’t believe that is obvious. I think that we need, as a society, to have a conversation about what the rules should be.
  • Notes that this is also a conversation – about what the rules of re-use should be – for Second Life, even for data listed as “public” on the internet, as it is not clear what that means.
  • Acknowledges that the legality of training AI on text, images, etc., via common Internet trawl or the use of large bodies of data, is still “up in the air”.
    • As such, thinks the world should be “sensible” about the extensive use of any such technologies while that debate is still on-going.
    • However, believes it is a debate beyond the scope of Second Life, and requires the input of the “whole world” and big corporations like Google, Facebook and OpenAI.
  • Agrees that it is not at all obvious that public data conveys the right for AI systems to take it and re-use it.
  • Notes that every one of these arguments / concerns has use-cases on both sides which are complicated and we need to bear out what’s going to happen, and be aware that if we don’t use some of these technologies in some ways, there may be negative impacts.
  • Agrees that existing LLMs are extraordinarily expensive and environmentally unfriendly – although so is the Internet as a whole, with social media applications using “10 or 100 times” more electricity than current LLMs.

Rysa (lukaskw)

[Video: 58:46-1:03:11]

  • Wants to specifically discuss generative AI – in the sense of content generation large language models, including but not limited to: images, videos, text and scripts (which touches on the use of Convai characters). This does not include machine translation or similar.
  • Would personally prefer not to see generative AI content in Second Life in any capacity, but understands a blanket ban is both unlikely and unenforceable.
  • Instead proposes a (hopefully) simple solution for two areas of SL: Marketplace and avatar Profiles.
  • Many have already commented on the massive influx of AI generated images and other types of content on the Marketplace being sold under all manner of names and categories.
    • Would love to see a means to filter such content out of the Marketplace experience for those not wanting to see it.
    • Would also suggest an enforceable requirement for generative AI content to be listed as such in a similar manner to mesh / partial mesh content.
  • Understand policing this would be a huge task and something not easily automated at this time; but feels the ideas should be discussed.
  • In terms of Profiles, with LL, various brands and residents all experimenting with AI powering full avatars, believes more is needed than just relying on the goodwill of those writing the avatar’s Profile to indicate it is AI-powered.
    • Is personally in favour of a more “heavy-handed” approach – there should be a clear, unavoidable denotation of an avatar’s status as a scripted agent of any kind – so not just AI-powered avatars, but any scripted bot system.
    • Believes it has to be marked as such internally, so suggests the denotation should be made public through the Profile.
  • Believes that, overall, residents of Second Life should be given the tools that they need to make informed decisions about the aspects of the grid they wish to engage with, and tools such as those suggested would help in this.
Philip Rosedale Response

[Video: 1:03:29-1:04:40]

  • Suggests the idea of self-identified content on the MP should be submitted to the Feedback Portal (if not already done), as it is something LL could consider, noting that self-identification is not currently enforced.
  • Notes that subjective identification of AI-generated content can be just that: subjective, rather than accurate. Therefore placing such identification into the hands of a tool runs the risk of false positives.

Dax Dupont

[Video: 1:05:53-1:09:33]

  • Continues the discussion on generative AI and the ethical and safety impacts, citing articles about AI Chat bots used in the mental health sphere having a negative impact on those using them.
  • Asks whether LL are convinced that their use of AI in the first person [via bots] in SL is safe for users, and asks that they not “hide behind the standard liability claim” of “oh, we’re not responsible for whatever comes out of this object”, because they believe there is a “non-zero” chance people are going to get hurt.
Philip Rosedale Response
  • Points out that while Second Life may be a pseudonymous environment, it is not anonymous, and that acts have consequences. There is a strong bottom-up governance trend.
  • Thus, in general, and regardless of the use of AI, anything that comes out of an avatar is someone’s responsibility.
  • In terms of LL’s direct use of AI bots, emphasises that LL does not see itself as a content creator, so would not take responsibility like that; even tools like the Character Generator carry the statement that those using the tool are responsible for the content the character is creating.
  • Believes that creating a living world like Second Life has to be “deeply bottom up”, meaning everyone takes responsibility for the things they are doing, and any use of an LLM would have the same responsibilities applied to an avatar emitting its signal as would be applied to anything else someone might say through their avatar.

Arrow Njarðarson Strong

[Video: 1:11:31-1:13:51]

  • Notes that he has used AI extensively to assist him in his personal life and with his Second Life as a creator / builder, including creating AI characters that have assisted him in processing his personal trauma.
  • Does not believe AI “steals”, but rather learns “patterns like a student exposed to a library of human thought, and then generates new unique content based on what we ask it to do”, and as such, believes the creativity rests with the creator, and that AI is a “brush”.
  • Notes that AI has personally allowed him freedom as a person living with anxiety and noise sensitivity, helping him to reclaim control of his time and energy, and his ability to dream again.
  • Suggests that AI is another way of learning from humanity’s shared knowledge, and is not something to replace us but to continue the age-old process of the accumulation of wisdom.
Philip Rosedale Response

[Pantera’s Video:  1:15:40-1:16:08]

  • Notes that Arrow touches on the challenge that AI can demonstrably produce very close copies of recognisable content, such as likenesses of Mickey Mouse or similar; because they are so common, it can reproduce them precisely – and that is one of the issues making AI such a complex topic.

Philip Rosedale – Closing Remarks

[Pantera’s Video: 1:16:08-1:18:44]

  • Notes that the meeting has been recorded as a matter of permanent record.
  • Believes that the discussion could result in proposals submitted by users through the Feedback Portal which might help / try to move things towards specific actions relating to AI in terms of code, governance and the SL Terms of Service.
  • Would like feedback on whether people think such Town Halls are a good forum for discussions on matters relating to SL.
  • Feels that, given the state of the world at large, there is a potential to use Second Life as a demonstration of how self-governance can work.
  • He recognises that LL as a company are running the servers and the platform, and so have an “unfair” advantage that could be “difficult to give up” – but given projects such as open-sourcing the viewer, the open nature of the currency exchange, etc., – it might not be impossible.
  • Restates his belief that open dialogue of the kind seen at the meeting is desperately needed in the wider world.

Final Word – codyjlascala

[Pantera’s Video: 1:19:53-1:22:00]

Featured in the award-winning film My Avatar and Me, Cody made a moving statement at the conclusion of the meeting, spoken through his carer, but offered in the first person here.

Here is my opinion on AI. I think it would be a good thing, especially for people with disabilities, because it could help them to build things and stuff like that, which is very difficult for them to do. I have been trying to build a city in Second Life, and it has really not been easy, and AI could really help me do what I want to do. I love Second Life because it is a democracy and not an autocracy.

Videos

Official Video

Pantera’s Video