Logos representative only and should not be seen as an endorsement / preference / recommendation
Updates from the week through to Sunday, June 29th, 2025
This summary is generally published every Monday, and is a list of SL viewer / client releases (official and TPV) made during the previous week. When reading it, please note:
It is based on my Current Viewer Releases Page, a list of all Second Life viewers and clients that are in popular use (and of which I am aware), and which are recognised as adhering to the TPV Policy.
This page includes comprehensive links to download pages, blog notes, release notes, etc., as well as links to any / all reviews of specific viewers / clients made within this blog.
By its nature, the summary presented here will always be in arrears; please refer to the Current Viewer Release Page for more up-to-date information.
Outside of the Official viewer, and as a rule, alpha / beta / nightly or release candidate viewer builds are not included; although on occasions, exceptions might be made.
Second Life Project glTF Mesh Import, version 7.1.14.15830455952 June 24 – NEW.
This is an early Alpha release; while many bugs and crashes have already been resolved, it still has some rough edges, and more issues are expected to be found through testing, together with general feedback from the community. Please read the release notes if you intend to test this viewer.
Second Life Project Lua Editor Alpha (Aditi only), version 7.1.12.14888088240, May 13 – No Change.
The SpaceX Dragon Grace, carrying the Axiom Mission 4 (Ax-4) crew, approaches the International Space Station with its nose cone open to expose the docking mechanism within. At the time of this shot, both spacecraft were orbiting 421 km above the coast of southern Madagascar. Credit: NASA
After delays and concerns over pressure leaks within the Russian section of the International Space Station (ISS) – see Space Sunday: frustrations and extensions and Space Sunday: Rockets, updates and Planet Nine – the Axiom Ax-4 private mission to the station finally lifted-off from Kennedy Space Centre on June 25th, carrying an international crew of four to the station.
The SpaceX Falcon 9 booster lifted-off from Launch Complex 39A at 06:31:52 UTC, carrying mission commander Peggy Whitson, a highly-experienced former NASA astronaut and now Axiom’s Director of Human Space Flight; Indian Space Research Organisation (ISRO) astronaut Shubhanshu Shukla, filling the role of mission pilot; and mission specialists Sławosz Uznański-Wiśniewski, a European Space Agency project astronaut from Poland, and Tibor Kapu, representing the Hungarian Space Office.
The fifth (and final) Crew Dragon vehicle – to be named Grace by the Ax-4 crew – atop its Falcon 9 booster as both are raised to a vertical position at Launch Complex 39A, Kennedy Space Centre. Credit: SpaceX
The four were flying aboard the newest Crew Dragon vehicle built by SpaceX, which the crew christened Grace following a flawless launch and ascent to orbit.
We had an incredible ride uphill and now we’d like to set our course for the International Space Station aboard the newest member of the Dragon fleet, our spacecraft named ‘Grace’.
“Grace” is more than a name. It reflects the elegance with which we move through space against the backdrop of Earth. It speaks to the refinement of our mission, the harmony of science and spirit and the unmerited favour we carry with humility. Grace reminds us that spaceflight is not just a feat of engineering, but an act of good work for the benefit of every human everywhere.
– Peggy Whitson, AX-4 Crew commander
Following launch and separation from the Falcon 9 upper stage, Grace proceeded on a “slow” orbital trajectory to “chase” the ISS, rendezvousing with the station some 24 hours after launch. This allowed the crew to check out the vehicle and perform the first of their broadcasts to Earth. Docking with the ISS took place on June 26th, at 10:31 UTC, marking the start of a stay that is designed to last at least 14 days, but could extend to up to 21 days.
The Axiom Mission 4 (Ax-4) crew (in the blue jumpsuits) and the ISS Expedition 73 crew in a group portrait within the Harmony module of the ISS. From left to right: back row – cosmonauts Alexey Zubritskiy, Kirill Peskov, Sergey Ryzhikov and NASA astronauts Jonny Kim and Nichole Ayers; front row: Ax-4 crew Tibor Kapu, Peggy Whitson, Shubhanshu Shukla, and Sławosz Uznański-Wiśniewski, with Anne McClain (NASA) and Takuya Onishi (JAXA). Credit: NASA
Also aboard the flight are a number of science experiments, notably from Poland and India, further emphasising the international focus of the mission. The flight is especially significant for Shukla; he is the first Indian to fly into space as a part of India’s newly-instigated astronaut corps (although not the first Indian national to fly in space), and has already been selected to fly in the first crewed mission aboard India’s home-grown Gaganyaan space capsule. His time aboard Ax-4 is very much seen as preparing him for that mission. For Axiom and NASA, Ax-4 is intended to signify a desire to maintain on-orbit operations aboard space stations as an international endeavour as the ISS reaches its end-of-life in 2030, and facilities such as Axiom’s own space station take over from it.
Ax-4 also carries aboard it some special treats for everyone on the ISS: Shukla and Kapu have taken along specifically-developed national dishes and treats such as moong dal halwa, carrot halwa and mango nectar, together with a specially-formulated version of Hungarian chocolate and a range of Hungarian spices to help pep-up the taste of food on the ISS. Uznański-Wiśniewski, meanwhile, worked with ESA, NASA and Polish chef and restaurateur Mateusz Gessler to develop an entire menu for the Ax-4 crew which includes pierogi, tomato soup with noodles, Polish ‘leczo’ stew with buckwheat, and apple crumble for dessert.
Nor is carrying such foods simply a matter of catering to personal whims; food can have a positive psychological impact – particularly comfort foods that bring with them memories of home and which offer a departure from the more usual offerings. As such, experiments like this can help nutritionists and psychologists bring more and better varieties of meals and foods to crews on long-duration missions, bolstering their sense of well-being and comfort.
Vera C. Rubin Opens its Eyes
Located on the El Peñón peak of Cerro Pachón, a 2,682-metre-high mountain in northern Chile, is the world’s biggest digital camera, a 3.2 gigapixel charge-coupled device. It sits at the heart of the Vera C. Rubin Observatory, a major new astronomy facility capable of imaging the entire southern sky every few nights.
Originally called the Large Synoptic Survey Telescope (LSST), where synoptic describes observations that give a broad view of a subject, the observatory was first proposed in 2001, with work on the 8.4-metre primary mirror starting in 2007 with the aid of private funding.
The Vera C. Rubin Observatory imaged in 2022 during the final construction phase, seen against the backdrop of the Milky Way galaxy. The latter appears to be split in two by a dark path called the Great Rift. This is actually a shroud of dust sitting between Earth and our view of the centre of our galaxy, preventing the light of the stars beyond it breaking through as it scatters visible light. Credit: Rubin Observatory/NSF/AURA/B. Quint
The observatory became the top-rated large ground-based project in the 2010 Astrophysics Decadal Survey, moving to be funded through and overseen by the US National Science Foundation (NSF), with additional funding provided by the US Department of Energy and the non-profit international LSST Discovery Alliance.
Overall construction of the physical observatory commenced in 2015; initial testing of on-sky observational capabilities took place in late 2024 utilising an engineering test camera, and the First Light images with the observatory’s Simonyi Survey Telescope and the 3.2 gigapixel camera were captured on June 23rd, 2025.
The observatory is designed to build a continuous survey of the southern sky over 10 years, in an attempt to answer a number of questions, including:
How did the Milky Way galaxy form?
What is 95% of the Universe made of?
What will a full inventory of Solar System objects reveal?
What will we learn from watching millions of changes in the night sky over 10 years?
Combining 678 separate images taken by the Vera C. Rubin Observatory in just over seven hours during its First Light test, this mosaic shows a region of space in the southern skies in which two nebulae – the Lagoon Nebula (Messier 8), 4,000 light-years away and shown in a vibrant pink colour, and the Trifid Nebula (Messier 20), some 5,000 light-years away and also pink-looking – can be seen. Labelled are various stars and galaxies which lie in, before or beyond the nebulae. The Lagoon Nebula is a stellar nursery, and is brightly illuminated by a cluster of young, massive stars within it, their illumination allowing it to be just visible with even a modest telescope. Click for full size. Credit: Rubin Observatory/NSF/AURA
To answer these questions the observatory will carry out science in four principal areas:
Understanding the nature of dark matter and dark energy.
Creating an inventory of the Solar System.
Mapping the Milky Way.
Exploring objects that change position or brightness over time.
The telescope’s wide field of view is extraordinary – 3.5 degrees in diameter, or 9.6 square degrees. Combined with the telescope’s large aperture (light-collecting ability), the telescope’s optics have an imaging capability three times that of the largest-view telescopes currently in use. This means the observatory can “see” literally everything – from the smallest sources of reflected light in our own solar system to remote deep-space objects.
A rendering of Vera C. Rubin’s Simonyi Survey Telescope (in the centre of the ring structure) and its mounting frame. Credit: Rubin Observatory project office.
To achieve this, the Simonyi Survey Telescope’s 8.4-metre diameter primary mirror is paired with a 3.4-metre diameter secondary mirror – the world’s largest convex mirror – and a 5-metre tertiary mirror. The primary and tertiary mirrors were cast together as a single piece, making the telescope very compact and easier to re-orient, which it must do quickly and efficiently each night.
Further, the design allows the placement of three additional corrective elements to reduce image aberrations without over-complicating the optical train. This in turn allows the telescope to avoid the usual adjustable optical mechanisms required to counter atmospheric image dispersion as a telescope is re-pointed and encounters different atmospheric conditions. This is particularly important, as the Vera Rubin must be able to be re-pointed and ready to take an image within 5 seconds of the previous image capture being confirmed – leaving no time for the usual atmospheric adjustments.
First light with a telescope refers to the first time a telescope and its instruments capture one or more astronomical images after its construction. This moment is significant for astronomers and engineers as it is an important step towards fully calibrating a telescope and correcting potential issues within the optics so that it is ready to start formal operations.
Made from over 1,100 images captured by the Vera C. Rubin Observatory during its 10-hour First Light test, this mosaic contains an immense variety of objects, including some 10 million galaxies, demonstrating the broad range of science Rubin will transform with its 10-year Legacy Survey of Space and Time. Annotated within it are a number of galaxies and brighter stars. Click the image for full size. Credit: Rubin Observatory/NSF/AURA
For the Vera C. Rubin Observatory, First Light tests produced images revealing over 10 million galaxies and led to the discovery of over 2,000 new asteroids. Once operational, the observatory will be capable of capturing more information about the universe than all the optical telescopes used throughout history thus far, combined. Its image-gathering capability means it will generate 20 terabytes of image data per night. This data will be collected and transmitted to a series of “data brokers” around the world, ensuring that the data is not only secured across multiple redundant sites, but also allowing the brokers to serve the information and alerts to astronomers and research centres globally.
To assist in making sure astronomers and institutions can access the data and images they are interested in, the cloud-based data brokers are supported by a dedicated system called Data Butler. This holds all the relevant metadata for every image captured by the observatory, allowing astronomers with access to it to query it using astronomical terms – object type, time scale of observations, object co-ordinates, etc. – and receive the images they need.
Vera Florence Cooper Rubin, 1928-2016. Credit: Mark Godfrey
The alert system is designed to identify “transients” – unexpected events which could require an immediate response by astronomers: things like supernovas, kilonovas that produce gravitational waves, novas, flare stars, eclipsing binaries, magnetar outbursts, asteroids and comets moving across the sky, quasars, and so on.
Once operational, it is expected that the observatory will issue up to 10 million such alerts per night, all of which will be filtered through the data brokers, allowing the system to analyse them and determine what should be immediately passed on to astronomers for further / detailed investigation.
In all, the Vera C. Rubin Observatory – named for Vera Florence Cooper Rubin, the American astronomer who pioneered research into galaxy rotation rates which is seen as evidence for the existence of dark matter – is set to revolutionise our visual understanding of the universe, our galaxy and our own solar system. However, there is a cloud on the horizon.
As it moves towards entering service, the observatory’s major source of funding, the National Science Foundation, is facing significant budget cuts and uncertainty about its future operational allocations.
Under the Trump Administration’s budget, NSF is set to have its budget cut by 56%, from US$8.83 billion under the Biden Administration to just US$3.9 billion. Already, the Trump administration has frozen or terminated 1,600 NSF grants. Meanwhile, on the day following Vera C. Rubin’s First Light test, 1,800 NSF staff were informed the administration intends to remove them from their current headquarters building as a part of “government efficiency”. Ironically, NSF only moved into the building under the first Trump administration. Worse, no word has been given as to where NSF staff are to be relocated. As a result, the attempt to displace the NSF is meeting strong resistance from both Capitol Hill and the American Federation of Government Employees (AFGE).
The particular concern for the Vera C. Rubin observatory is that if the Trump budget passes as is, the NSF’s Mathematical and Physical Sciences Directorate, which is responsible for funding astronomical activities under the NSF’s remit, will only have an operating budget of US$500 million. This means that optical and radio centres such as Kitt Peak, and the Cerro Tololo Inter-American Observatory some 10 km from Vera C. Rubin, are to be “phased out” of the NSF’s budget, with the hope their operations can be transferred to “other organisations”. Similarly, the Nobel Physics Prize-winning Laser Interferometer Gravitational Wave Observatory (LIGO) is to have its budget reduced by 40%, resulting in the closure of one of its two facilities and reducing its effectiveness enormously.
In response to these concerns, the Trump Administration has emphasised its “support” for the observatory, noting its 2025/26 budget allocation is increased from US$17.7 million to US$32 million over 2024/25. However, the former amount was for the final development phase of the project, not operations, and the US$32 million promised to the observatory is some 20% less than had been requested in order for it to start observational operations.
These concerns aside, the First Light images from Vera C. Rubin are astonishing – and one hopes the observatory will be funded to a point where it can complete its initial 10-year mission.
Grauland, June 2025 – click any image for full-size
Jim Garand and his SL partner PaleLily have re-developed Jim’s Homestead region of Grauland to present another photogenic setting with hints of mystery and story to tickle the imagination.
This iteration of the setting offers no themed title. However, the overall look of the setting blends characteristics seen within past builds to offer what might be called industrial-artistic, bringing together multiple elements into a unified whole. In places it comes across as familiar, but when taken as a whole is new and unique to itself.
Grauland, June 2025
For this iteration, the region has been split into two, north-to-south, with the larger western portion of the setting home to the Landing Point and a large, abandoned industrial facility, part of which straddles the waterway and appears to have once been used to load / off-load barges.
The smaller and lower eastern side of the setting is dominated by the remnants of a brick-built lighthouse which presumably once helped guide vessels into the channel between the two landmasses. With most of the steel lantern house now gone and holes blasted in its sides, the ruin looks as if it at some point faced a bombardment of some sort, making the lighthouse an interesting feature.
Grauland, June 2025
Also on this smaller spit of land are concrete cubes, some solid and some hollow and with large holes cut into their sides. Those familiar with past Grauland builds may well recognise them as a familiar artistic piece; they serve the same role here, whilst mixing in one of those elements of mystery and narrative the imagination might want to chase: who set them there, and why?
Then there is the question of quite what was manufactured in the industrial units – or what they may have been used for, and why does the warehouse still appear to be in use?
Grauland, June 2025
However, among the larger settings are smaller vignettes which tickle the imagination. What is the story behind the cars in the field, and who brought them to this isolated place? Why does one have a windscreen that looks as if someone tried to shoot the driver? Who is using the “club room” on the ground floor of one of the buildings – and who has turned the upper floor into an adult-like film set? Is it a one-off use, or is the place now a site for illicit film-making? The more you explore, the more opportunities there are to ask questions of yourself and create little stories.
Right across the island there are dozens of opportunities for photography, together with plenty of places to sit and pass the time – and again, contemplate suitable back stories for all you can find. The places to sit are widely varied, from the wooden deck with its sun loungers and the pool rings and floats inviting people to try the water, to the chairs up on the catwalk over the big tanks.
Grauland, June 2025
Another fascinating and engaging build from Jim and PaleLily.
The Project Zero User Group provides a platform for open discussion about Project Zero, the cloud-streamed version of the Second Life Viewer. Topics range from sharing the goals for Project Zero and demoing the current experience, to gathering feedback to help shape the future of cloud access for Second Life.
These meetings are conducted (as a rule):
The second and fourth Thursday of every month at 13:00 SLT.
Meetings are open to anyone with a concern / interest in the above topics, and form one of a series of regular / semi-regular User Group meetings conducted by Linden Lab.
Dates and times of all current meetings can be found on the Second Life Public Calendar, and descriptions of meetings are defined on the SL wiki.
Firestorm Zero has – for the time being – been shut down as a streaming option.
Those who still had remaining purchased time for Firestorm Zero should have been refunded and received an e-mail notification that the service was being shut down.
The reason for shutting it down is primarily related to:
The amount of work required to maintain two different streaming products.
The need for the Lab to migrate Project Zero to a new platform at the behest of their streaming provider, and not having time to also migrate Firestorm Zero in the same period.
This does not mean there will never be a further offering of Firestorm as a part of the Second Life streaming service.
New Joiner Workflow Updates
There has been a complete refresh of the Lab’s web-based new joiner workflow at join.secondlife.com, which now leverages the Avatar Picker first seen in Project Zero. For more on this, please see A short look at the refreshed Second Life web-based join flow.
As a part of this, the sign-up flow specific to Project Zero has also been updated. In particular, the Project Zero web site web.secondlife.com (formerly zero.secondlife.com) has been refreshed so:
It now shows the same backdrop image as seen on the web sign-up flow.
It includes a slide show of Polaroid-like snapshots intended to illustrate to new users what people can do in SL, whilst they wait for an instance of the viewer to become available and load in their browser.
My own tests of this suggested that the page will keep cycling through the images seemingly ad infinitum if you cannot be connected to a browser instance of the viewer, rather than indicating this is the case.
The revised web page for launching Project Zero (web.secondlife.com) with the backdrop splash image and the “polaroid snapshots” slide show
Overall, these changes – both to web.secondlife.com and (particularly) join.secondlife.com – have resulted in more incoming users sticking around in-world.
Current Focus
In line with Philip Rosedale’s SL22B Meet the Lindens session on Wednesday, June 25th (see the video, or refer to my summary of that event for specifics), Project Zero is focused on the new user experience.
Specifically, how to get a new user from logging in and choosing an avatar through to customising that avatar so it reflects how they wish to present themselves, as easily as possible, using the Avatar Welcome Pack content rather than Senra.
This is likely to be an iterative process over multiple months, with a cadence of updates intended to test and refine ideas and approaches.
The work will also involve gathering quantitative data on how well the approach(es) seem to be working, as well as qualitative data gained by spending time in the areas where people coming into SL arrive, watching and interacting with them as they customise their avatars, and gathering feedback.
General Discussion
Project Zero remains primarily focused on use by incoming new users, with a “very small concurrency” of existing users available to access it.
The reason for the focus on incoming new users is that it is easier to try new ideas and iterate on them using Project Zero than trying to do so through the desktop viewer.
It is recognised that there is a hunger among existing users to try the streaming viewer, and one way to do this might be to start to offer it on a paid basis as was the case with Firestorm Zero. However, this is not something that is likely to happen in the short-term.
The current behaviour of Project Zero simply booting a user off when their allotted time is up, without any warning or graceful log-out, is something to be addressed in “the very near future”.
Whilst the UI for Project Zero largely resembles that of the official viewer, there are differences, some visible (such as the lack of a left-side toolbar button field), and some not so visible.
The not-so-obvious difference is that elements linked to buttons are subject to redesign, so that if any are updated – such as Chat – the associated Conversations floater can be reworked so it does not obscure so much of the in-world view when it is open.
These aspects of the work potentially allow elements of the viewer to be displayed outside of the world view, but within the browser tab, as has been done with the Avatar Picker, leaving the in-world view unimpeded.
An example of a viewer toolbar button (Avatars) causing the Avatar Picker to be displayed within the browser tab, but outside of the Second Life viewer window, leaving the latter unencumbered.
A general discussion was had on helping new users understand customising their avatars – from providing freebies suitable for the bodies in the Avatar Welcome Pack in places such as the Welcome Hub, and locating these together with the teleports to the changing rooms; to providing additional help and information boards within the changing rooms; to offering new users a “home space” where they can go to change (much like Sansar had / has), etc.
Offering some form of recognised changing room area where new users could experiment with avatar dressing / customisation was seen as advantageous on several levels:
It has a familiarity with the way we go about trying clothing in the physical world, and thus is a comfortable environment.
It could be used by a new user and a mentor or friend, so the latter can give support and lessons without interruption.
It can save on unintended embarrassment (e.g. having a fatpack for multiple different bodies and accidentally wearing everything).
A broader discussion took place on how to offer new users more of an experience in Second Life – should they be prompted / directed on the basis of interest, or should it be free-form? Should users be provided with lists of Groups which share their interests? How can this be done? And so on.
The refreshed Second Life web account sign-up page with the new splash image backdrop
Linden Lab has recently refreshed the Second Life web-based new user sign-up flow at join.secondlife.com in order to offer incoming new users a smoother, easier experience when using the web sign-up process. Some of this incorporates work carried out with Project Zero – the viewer in a browser – and the new sign-up flow applies to both the viewer and Project Zero.
Key elements of the update include:
An image backdrop for the account creation page.
Quality of life updates to make it clear what information needs to be entered and when a mistake is made; use of a clearer font, etc.
There is no longer an avatar picker for those pointed to the viewer download workflow; instead, after completing the account creation page, new users are directed to download and install the viewer.
On logging-in through the viewer, a new user will be automatically assigned an initial avatar from the Avatar Welcome Pack, and the desktop version of the Avatar Picker deployed to Project Zero (see here for more) is automatically displayed to allow for avatar customisation.
The web join flow still offers a random chance of a new sign-up being directed towards accessing SL via Project Zero rather than being asked to download the viewer.
The refreshed Second Life web sign-up goes from the account creation page directly to either the viewer download page (displaying the SL Mobile options) or to the Project Zero page
It should also be noted:
These changes do not apply to the sign-up flow for SL Mobile, although that sign-up process has been updated independently of join.secondlife.com.
The reason for removing the Avatar Picker from the web workflow was that data showed a lot of new sign-ups were spending a significant amount of time in the avatar picker customising their avatar, then not going on to log in to Second Life; the picker was therefore seen as a blocker to getting people in-world.
All incoming new users are given the same avatar (I’m not sure if the selected avatar is periodically rotated), which can be interesting.
The Avatar Picker / Avatar Welcome Pack
As noted, the Avatar Picker – referred to as the Avatar Welcome Pack – is an idea that originated with Project Zero, and is now offered through the official viewer (and those TPVs that have adopted the 7.1.15.15596336374 – 2025.04.01 code base) with some tweaks – such as being presented as a floater within the viewer, and not having the Avatars toolbar button as is the case with Project Zero.
New users installing the release viewer for the first time should find the Avatar Picker open by default after logging-in, with the avatar tab displayed.
The Avatar Picker / Avatar Welcome Pack floater and tabs. Note: due to a known issue, only the female outfits are currently presented / available (and the Male clothing folder in the Library is empty)
Clicking on one of the six avatar images within the tab will automatically apply that avatar.
Clicking the Clothing tab will display the available outfits. Clicking on the image of an outfit will apply it to the currently-selected avatar.
Note: The update was deployed with a known issue that only the female outfits from the Avatar Welcome Pack are available in the system library. This is being addressed.
Selecting an avatar or outfit from the Avatar Picker will add the associated folder(s) to the Clothing system folder in Inventory.
If the floater is closed, it can be accessed again via the Avatar menu → Avatar Welcome Pack…, which replaces the old Choose Avatar option.
The Avatar Welcome Pack menu option
Personal Feedback
This is not intended to be an in-depth analysis of the new flow, but I have some general observations.
Overall the changes make for a smoother on-boarding, even allowing for the viewer having to be downloaded and installed (if the user is pushed through that flow).
This is very much assisted by taking the avatar customisation process out of the sign-up process, which, as noted above, had become a bottleneck.
The avatar picker is fairly intuitive, but could perhaps benefit from some tool-tip prompts.
There are some areas of concern:
Each time the Avatar Picker is used, it generates a completely new folder for the selected avatar and / or outfit within the Clothing system folder in Inventory. Whilst this is not directly visible to new users, who might not be aware of Inventory to start with, it does potentially lead to a lot of duplication and additional inventory bloat.
There are now two very different and completely incompatible “starter avatar” systems within the viewer:
The creator-supplied Avatar Welcome Pack (which I believe will be expanded upon).
The Senra avatar system.
Fortunately, the Senra system is fairly well buried within the system library; however, the majority of in-world information at places like the Welcome Hub, and resources on-line, such as the Second Life University videos focus on Senra. Hopefully, if both systems are to be run side-by-side, this balance will be redressed.
Senra at the Welcome Hub – but no Avatar Welcome Pack guidance as yet
The fact that the same avatar from the Avatar Welcome Pack is given to all incoming users means that the various spawn points where new users arrive can end up looking like a beam-in point for a gathering of clones.
This last point is admittedly somewhat trivial, but it does make arrival points for new avatars look and feel a little odd. As to the rest, nothing is impossible to correct – and much of it is hopefully already on LL’s radar; with limited resources, updates to all aspects of a process can take time, some of which can be spent testing and revising basic ideas and approaches.
Overall, the refresh to join.secondlife.com comes over as positive, and helps to bring the viewer and Project Zero a little closer together for those who might use both.
SL22B Meet the Lindens: Philip Rosedale, with Brett Linden (r)
On Wednesday, June 25th, 2025, Linden Lab held the second of the SL22B Meet the Lindens events, featuring Second Life and company founder, Philip Rosedale, in conversation with Brett Linden. The session was live and featured a mix of pre-submitted questions, and those asked during the session.
This is a summary of the majority of topics discussed at the session, and the official video of the session is embedded at the end of this article.
For ease of reference:
Timestamps are provided to the relevant points within the video where specific topics are discussed, allowing readers who prefer to listen to the comments directly to do so.
In the hopes of better continuity, questions asked during the open Q&A session which related back to comments made earlier in the session have been incorporated within the topic section itself under the heading “From the Live Q&A Session”.
Note this is not intended to be a full transcript, but rather covers those items discussed which are liable to be of the most interest.
Has been back working on Second Life full-time for some seven months. Prior to that, was sharing office space with LL, but was not working on SL day-to-day (that work was with In Reality Lab (IRL415)).
Obviously, for the first 10 years, was the CEO of Linden Lab.
Recognises the company has 25 years of history and SL has changed a lot, but is delighted that overall, the technology LL has created / brought together is good, positive and humane for people, when as a whole within the industry, the overall impact of technology can be uncertain – or even negative (as with social media).
Would say he is not proud of anything he has created – but is proud of what people have created in SL.
From the start, was always most interested to see what people would create in SL, and started out more from a physics perspective than a creator perspective: making various physics laws and then seeing what people created using those laws.
Points specifically to:
The volume of art produced within Second Life, and how it is exponentially larger than any single art museum.
The number of teaching / learning spaces and all the various ways they are used.
All the communities that have grown in SL – particularly their support for diversity.
Is “super, super proud” of the positive feedback he receives from people about how SL has helped and changed them.
Thinks Second Life has so many hidden gems. Second Life could be compared to Los Angeles, but has far more to discover within it than can be found in LA, so picking on a particular thing as a “hidden gem” feels “goofy”.
That said, would say that one thing he would like to see more of is the “sandbox experience”: people building and sharing together in sandboxes, becoming friends, learning from one another, and teaching one another.
Does not believe LL has intentionally changed this; rather, the nature of the platform has simply drifted away from it. So while not a “hidden gem”, would like to see a move back towards that kind of experience.
From the Live Q&A Session
[Video: 52:41-54:54] In response to a comment that, while he mentioned in-world building, Second Life has become more about building content outside of the platform and then “showcasing” it in-world, rather than fostering the arts of in-world creation, and asking if SL will “again” get tools to encourage the latter:
Believes SL has always been a place for content creation, and believes it should remain so, but recognises a need to provide more tools for content creation.
Notes that the earliest content creation was via prims, and most users learned to build using them. However, mesh is a more sophisticated means of content creation, which necessitated leaning on external building tools and then importing the results.
Were it possible, he would have some form of in-world mesh editing capability, but providing such would be difficult because of the complexities and capabilities of the toolsets involved (e.g. Blender).
Suggests that if SL is not supporting enough of the features of something like Blender in-world, then perhaps the Lab might need to figure out how to correct that.
[Video: 58:58-1:01:07] What is the one thing you would change in Second Life if you could?
While there are lots of things he would like to change, the first thing that came to mind was better support for democracy and Groups.
A lot of good work has been done with Groups, but when SL was started, Groups weren’t considered, nor were things like social media, as SL was supposed to be a place.
Groups were one of the things that came out of SL that he didn’t really expect, and it has given him pause for thought on how self-governance tends to be done through Group membership.
So sees making Groups stronger in terms of governance, democracy, identity and reputation, etc., as important, and as such, would like to have considered Groups from the outset as “kind of a fundamental primitive in Second Life”.
Second Life can do something really special – preserve diversity in an increasingly homogenous world.
A lot of what is going on in the physical world – social media, AI, the geopolitical situation in many countries (notably, but not exclusively the United States) – is trying to make everyone in the world more similar and repressing differences between people.
So in thinking about a “vision” for Second Life, tends not to focus on a specific feature or reason for the platform, but on the opportunities SL has to stand as a place that preserves diversity and differences, and the ways in which it can deliver on that.
On Expanding the User Base: Non-Verbal Communication Cues
Finds it amazing that SL has been around for 20+ years, and still represents probably the largest gathering of “grown-ups” in a virtual world.
Recognises that SL is not for everybody and only a tiny fraction of a single percent of people are using it, whereas social media is used by a majority of people worldwide.
Personally believes that the way to make SL more accessible to the majority is to get people comfortable with using SL through better transmission of non-verbal cues.
Right now, an SL avatar is non-expressive in terms of what a person is conveying by way of hand gestures, facial expression, etc.
Such non-verbal cues are critical to communications, and in order to gain millions (or more) of users, SL needs to someday be able to express them.
Believes this is a particular reason why VR Chat has been so successful: VR headsets allow more non-verbal cues of communication to be expressed.
Obviously recognises there are other technical aspects of SL which need to be addressed: the complexity of the viewer UI, avatar dressing, etc., but to grow the user base, SL needs to get everyone more comfortable talking to people they don’t know by transmitting these non-verbal cues.
From the Live Q&A Session
[Video: 35:07-37:23] In response to a comment on the benefits of the asynchronicity of text chat, which many users prefer:
Didn’t mean to dismiss other means of communication in Second Life such as text chat, avatar actions, etc., and totally values these means of communication.
Rather, he meant that if two complete strangers are put together within Second Life, unable to physically see one another and with their avatars as their only means of communication / interaction, most will express discomfort at conversing with a stranger without the benefit of body language and non-verbal cues as to what the other person is thinking.
So while the more “advanced” means of communications within Second Life are valuable, there are table stakes around getting people comfortable when they are conversing with strangers through non-verbal cues.
Bringing native VR headset support to SL is something the Lab is “always” thinking about. However, there are no current plans to do so.
Has a deep respect for VR Chat in the way it has demonstrated how to build an experience like SL entirely for VR headsets – something he tried to do with High Fidelity, and recognises there are “amazing” experiences to be opened up by VR headsets.
However, also recognises that VR headsets are not for everyone, and they may not be particularly popular with people to whom something like Second Life appeals.
As such, feels the overlap between VR headsets and virtual worlds is not clear, but it is by no means a 100% overlap.
But again, does feel there are capabilities SL has which VR could “completely open up” (e.g. the already mentioned non-verbal communication cues).
Also notes that some of this could possibly be achieved without the need for a VR headset, such as using a combination of AI tools and a webcam (or webcams) as a means of conveying non-verbal cues and controlling an avatar, without the need to put on a “face toaster”.
So, does think VR support for SL is interesting, it’s just not something the Lab has any announcements about at this time.
Is a sophisticated technologist who is very involved in AI, and is on the board of the California Institute for Machine Consciousness, and so is doing a lot of work on AI outside of Second Life.
Within Second Life, believes the Lab has been very cautious with regards to AI, and has tried to be respectful as to the risks and the debate about how to use AI.
The Lab hasn’t “done anything broad” as yet with AI outside of a few experiments.
Stated that any use of AI within Second Life “has got to end up by enhancing human contact not reducing it”, by helping people to connect to one another, and must do this “well”.
If AI can help with that, I’m all for it; if it hurts that, we shouldn’t do it. And as long as I’m here, I think I’m going to maintain precisely that position.
– Philip Rosedale on AI
Outside of Second Life, thinks that AI offers “big possibilities, but also even more enormous risks”, and is going to cause some “necessary existential risk” which will cause “profound change” in the whole human population, and believes this will include:
Forcing us as a planet-girdling species to change how we work together and do things.
As someone who programs a lot with AI, and as an engineer, is both shocked and excited about the potential impact of AI in the world whilst still trying to make sense of it.
Returning to SL, believes the Lab is being “super cautious” and respectful of the risks that people have been raising.
From the Live Q&A Session
[Video: 41:31-47:10] On the use of AI tools in content creation, and managing the influx of copyrighted and potentially copyright-derived material produced using AI being uploaded to SL / the Marketplace:
Notes that there are many different types of AI content, and that the general thinking among experts is that there is little hope of using AI tools to detect AI generated content.
Given this, it would not be technically possible to add a “no AI” filter to the SL upload mechanisms; the technology is simply moving too fast, and would outstrip any such capability.
That said, LL does respect the ongoing conversation around whether, and to what extent, AI should be used, and what the proper approach to it should be.
Believes that the wider global discussions on the use of AI are appropriate to be considered.
Also feels that a lot of AI-generated 3D content is so bad, he doesn’t believe it would gain much of a foothold in SL compared to the content people are making; and given the overall state of AI 3D content tools, feels that at this point in time, it is not a major concern – although this could change.
Points to his essay Ultraviolet Catastrophe and the potential for AI to completely overwhelm social media messaging due to its ability to impersonate people, which he sees as a major issue. Notes that this could result in fake AI “people” creating Second Life accounts without anyone knowing it; so everyone is going to have to deal with the problem, and there is not an easy way to just put an AI filter on Second Life, even if everyone wanted one.
Believes that with his background and LL’s geographic positioning, the company will probably figure out what to do “right” with AI.
Is particularly looking forward to the full deployment of the WebRTC (Real Time Communication) Voice service to replace Vivox.
As well as improving Voice quality, this will allow:
Speech-to-text and text-to-speech.
Language translations.
Thinks this can all be done later in 2025, with captioning (speech-to-text) and text-to-speech being the “number one” accessibility request of which he is aware.
Notes that between Project Zero (the SL viewer in a browser) and SL Mobile, the Lab has been able to get 10 times as many people from the sign-up page and into SL compared to having to download and install the viewer.
This looks to have doubled the number of people returning to SL after the first couple of times they initially log-in.
Regards this as an important metric in allowing LL to further grow Second Life.
Following-on from this, the focus is shifting towards:
The experience new users have after coming into Second Life.
The whole process of dressing an avatar / changing an avatar’s appearance.
Some of the latter has been initiated through the Avatar Welcome Packs, and will be expanded into the complexities of actually customising an avatar, with the aim of simplifying it – or at least making it “modestly difficult rather than almost impossibly difficult.”
His own experiences in trying to create an avatar / look (see the videos here and here) have helped inform a design direction for the Lab to take, and so updates should be coming Soon to Welcome Areas.
From the Q&A Session: Project Zero
[Video: 37:24-38:51] Will Project Zero ever be free to those on lightweight hardware?
The hope is to make it free at some point; currently, it is expensive to stream the viewer to a browser by the hour.
So yes, longer term the hope is there, but right now, LL cannot offer it for free, and is focusing on offering it to incoming new users rather than existing users – hence why some existing users have been unable to access it.
Is optimistic about being able to offer it for free because rapid advances in AI are driving the price of GPUs down very fast; there will be a crossover point which, once reached, will make it sensible to offer Project Zero free to everybody, but things are not yet at that point.
On Creativity and Messaging / Policies and Supporting Creativity
Agrees that maybe there are ways LL could do a better job of communicating what should be compelling about coming to Second Life as a creator.
Outside of the “table stakes” of content creation: keeping fees low, the GDP stable, the ability to make money through content creation, etc., believes that adding creative features and capabilities is a mechanism for supporting creators.
Support for glTF mesh import, which is currently available (at the time of writing) within an initial release candidate viewer.
Notes that support of glTF is important a) because it is a recognised standard and makes it easier to import from content creation tools into SL; b) support for the current mesh import format (COLLADA.DAE) is being increasingly deprecated within content creation tools.
These are projects aimed for this year; more broadly, notes the Lab needs to keep adding new (and modern) capabilities to support content creation.
There are often questions concerning how LL prioritises work, e.g. bug fixes vs. stability vs. implementing new features. Part of this is because it is a little hard to see what goes on behind the scenes.
Feels that since his return full-time to SL, for any release and at any given time, around 75% of development time goes toward infrastructure maintenance, performance improvements, and bug / crash fixes, including (but not limited to): mitigating DDoS attacks; upgrading the platform on Amazon services; etc.
So for the 50-ish strong development team, 75% at any time are working on “keeping the wheels on the car” and some 25% are working on “new shiny” features.
[Video: 34:05-34:55] It is hard to keep Second Life running, but everyone at the Lab is doing good work, and there are not a lot of things the teams could just stop doing, which drives the 75%/25% split.
SL has been fortunate as the mechanisms put in place (e.g. land fees) allowed the company to become profitable very early on – breaking even in around 2006.
Since then, the company has been able to increase its profitability “a bit beyond that”. So yes, the company is “comfortably profitable”.
This means the company will continue to be able to do what it has always done: set its own course. There is no critical market fit or monetisation problem requiring further funding from investors.
As such, the company is able to sustain profitability and choose where it wants to take things next without having to worry about additional capital inflow.
In theory, it should be relatively easy to move those servers which are most heavily used by a geographic region closer to that region, but is not sure how much this has been looked into.
Also notes the same is true for Project Zero – the servers running the viewer instances could be located in multiple Amazon centres. This is something that the Lab “should definitely do” as Project Zero starts to open up to more users.
On Users Helping Spread the Word about SL / Helping Improve the New User Experience
With the company able to bring-in more people (via Project Zero / SL mobile, for example), the Lab needs help in figuring out how to make a newcomer’s experience in their first few minutes in SL “10 times better”.
Is not sure exactly how users and communities can help LL to do this, but one way might be to experiment with different ideas and bring them to the Lab (e.g. develop an experience that would be comprehensible, compelling and interesting to a new user who has just arrived in SL).
There are a couple of on-going programmes built around the new user experience, and those interested in helping are encouraged to join with those programmes.
Has also already noted the design challenge of helping people to more easily dress and customise their avatar, making it fun while also exposing them to SL content. To be effective, this work must involve changes to both the software and in-world content so a new user can more easily create an avatar that speaks to their needs.
A pain point with spatial audio is that Bluetooth devices (including Apple’s) do not support stereo when the microphone is on. This means that spatial audio cannot be presented to the ears while the microphone is in use.
This has been one of the contributing factors as to why spatial audio hasn’t been more directly pursued within SL despite the environment being “amazing” for its use.
Does believe that this issue will get fixed “in the next year or two”, and Bluetooth will support spatial audio correctly when microphones are on. When this happens, SL will definitely embrace everything that you can do with spatial audio; and you have to have spatial audio for group conversations to be comfortable.
Thinks it would be “wonderful” to have LindenWorld (the precursor to Second Life) in-world.
Suggests that perhaps the “easiest” way to do this would be to have an intern who can focus on the project. However, the challenge would be to just get LindenWorld running again, as it did involve a different approach and a lot of user-generated content.
Believes LL has created a strong asset with SL Mobile, which has Unity under the hood, giving LL access to the Unity renderer. This could be used on the desktop, act as a potential stepping-stone towards VR support, and offers a “solid base that we can use for a lot of stuff.”
Understands that in the coming quarters there will be further improvements.
Everybody here, all of us, should be proud to have been part of what is now more than 20 years of an experiment in demonstrating that online experiences, something you do online on your computer with other people, can actually bring people together and make them happier and healthier. As opposed to making them lonelier and angrier, which sadly seems to be what we’ve done with most of our technological products in the last decade or so…
And the hope that that gives to the world is something we all need right now … The whole world needs to see that you can use technology to bring us together; it is not just a force for evil. And so I think everybody here should be incredibly proud of having been part of that experiment, and hopefully continuing to be a part of it indefinitely as we carry on. So thank you.