
The following notes were taken from the Tuesday, March 19th, 2024 Simulator User Group (SUG) meeting. They are a summary of the items discussed, not a full transcript, and were drawn from my chat log and the video embedded below, recorded by Pantera – my thanks as always for her work.
Meeting Overview
- The Simulator User Group (also referred to by its older name of Server User Group) exists to provide an opportunity for discussion about simulator technology, bugs, and feature ideas.
- These meetings are conducted (as a rule):
- Every Tuesday at 12:00 noon SLT.
- In text (no Voice)
- At this location.
- They are open to anyone with a concern / interest in the above topics, and form one of a series of regular / semi-regular User Group meetings conducted by Linden Lab.
- Dates and times of all current meetings can be found on the Second Life Public Calendar, and descriptions of meetings are defined on the SL wiki.
Simulator Deployments
- The SLS Main channel was restarted on Tuesday, March 19th without any deployment.
- Wednesday, March 20th, should see the next RC update deployed to the Bluesteel RC channel. This mostly comprises internal (non-user visible) updates, together with these user-visible additions:
- llSetLinkSitFlags / llGetLinkSitFlags – allow you to adjust the sit flags for a prim. These support the two existing flags, SIT_FLAG_ALLOW_UNSIT and SIT_FLAG_SCRIPTED_ONLY.
- At some future point, SIT_FLAG_HIDE_AVATAR should also be added, removing the need to play an animation that squishes the avatar out of sight when seated in something like a very small vehicle.
- A feature for estate managers that will allow them to schedule automatic region restarts (see below).
- A new PRIM_SIT_FLAGS constant for llSetPrimitiveParams / llGetPrimitiveParams. It will carry all of the sit flag information (including SIT_FLAG_ALLOW_UNSIT and SIT_FLAG_SCRIPTED_ONLY; the two older constants will still be available).
- A new capability to load item inventory lists via HTTP (so items with large contents will load faster when accessed, although this will require a viewer update as well).
- A fix for avatars going into an animation thrash between falling and flying when using llSetHoverHeight() from an attachment.
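As these capabilities were still rolling out to an RC channel at the time of writing, the following is only a usage sketch: the function signatures and constant names follow the naming given at the meeting and may change before release.

```lsl
// Hypothetical sketch of the new sit-flag calls, based on the names given
// at the meeting; details may change before the RC reaches full release.
default
{
    state_entry()
    {
        // Restrict unsitting on this prim to scripts (e.g. a "Stand" button):
        llSetLinkSitFlags(LINK_THIS, SIT_FLAG_SCRIPTED_ONLY);

        // Read the flags back:
        integer flags = llGetLinkSitFlags(LINK_THIS);
        if (flags & SIT_FLAG_SCRIPTED_ONLY)
            llOwnerSay("Unsit is restricted to scripts.");

        // The same information should also be readable through the new
        // PRIM_SIT_FLAGS constant:
        list params = llGetLinkPrimitiveParams(LINK_THIS, [PRIM_SIT_FLAGS]);
    }
}
```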
SL Viewer Updates
No viewer updates at the start of the week, leaving the current official viewers as:
- Release viewer: version 7.1.3.7878383867, the Emoji Viewer, issued February 15, promoted March 1st, 2024 – no change.
- Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
- glTF PBR Materials Maintenance-2 RC viewer, version 7.1.4.8149792635, March 11, 2024.
- Maintenance X RC (usability improvements), version 7.1.4.8148263040, March 11, 2024.
- Maintenance-W RC (bug and crash fixes), version 7.1.4.8113624779, March 6, 2024.
- Maintenance Y RC (My Outfits folder improvements; ability to remove entries from landmark history + Maint Z RC integration), updated to version 7.1.4.8114240508, March 6, 2024.
- Project viewers:
- Puppetry project viewer, version 6.6.12.579958, May 11.
WebRTC Voice
- Officially announced on March 18th, 2024, WebRTC (RTC = "real-time communication") is intended to replace Vivox as the Voice component in Second Life.
- From my TPVD meeting notes from Friday, March 15th (video here) + notes from this meeting:
- WebRTC is something of a de facto standard, being built into most web browsers and supporting a wide range of real-time communications tools in common use (e.g. Google Meet); it supports audio, video and data communications.
- In terms of audio / voice (the primary focus here), WebRTC has a number of standard features expected of audio communications services (such as automatic echo cancellation, better noise cancellation and automatic gain control, etc.) and offers much improved audio sampling rates for improved audio quality.
- Work has already progressed to the point where WebRTC supports all of the current SL Voice capabilities (e.g. region Voice, parcel Voice, peer-to-peer, ad-hoc and group calls, muting, etc.).
- The service is provided to the viewer as a library with a wrapper – no separate .EXE plug-in.
- In addition, work has been put into securing the use of WebRTC Voice against attempts to eavesdrop, etc., and into routing peer-to-peer communications via an internal server to avoid revealing users' IP addresses and the attendant risk of doxing, etc.
- Voice Morphing is not part of the initial implementation.
- Group voice will be capped at 50 people, but may be extended later.
- Speech-to-text and text-to-speech are being looked at by LL, but are not currently part of this project.
- There are test viewers with the necessary viewer-side WebRTC support (not yet at Project or RC status), and regions on Aditi, the Beta grid (webRTC1 and webRTC2) with the back-end support.
- Wiki documentation is in progress, and includes links to the test viewers (currently Windows and Mac OS) + a public code repo and other information. Note, this is subject to further update.
- There is a dedicated WebRTC board on the SL Feedback Portal where issues, etc., can be reported / raised for discussion.
- There is no backwards compatibility: regions using the WebRTC back-end will only support Voice on viewers using the WebRTC library. Ergo, once it is fully deployed, older viewers still using Vivox will not be able to access Voice services.
- LL is working with TPVs to ensure the switch to WebRTC (once deployed to the Main grid) will not be made until there is an assurance that the majority of users will be on viewers supporting WebRTC.
- The switch to WebRTC also opens the door to adding new features and capabilities to SL Voice, some of which have been long-requested.
In Brief
- The PBR team is going to change llSetColor / llSetAlpha so that they behave on PBR materials in a similar way to legacy materials – there is just no time frame on this at present.
- A discussion on scripting and notecard reading occupied the latter part of the meeting.
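For context on the llSetColor / llSetAlpha point, here is a minimal sketch of the calls in question. Today these only behave as expected on legacy (Blinn-Phong) material faces; under the announced change, faces carrying PBR materials should respond in a similar way.

```lsl
// Tint and fade all faces of the prim containing this script. After the
// announced change, PBR material faces should respond to these calls
// similarly to legacy material faces.
default
{
    touch_start(integer total_number)
    {
        llSetColor(<1.0, 0.0, 0.0>, ALL_SIDES); // tint all faces red
        llSetAlpha(0.5, ALL_SIDES);             // set 50% transparency
    }
}
```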
† The header images included in these summaries are not intended to represent anything discussed at the meetings; they are simply here to avoid a repeated image of a rooftop of people every week. They are taken from my list of region visits, with a link to the post for those interested.
Does this mean that Voice should once again work with Linux viewers without complicated fiddling about?
Indeed it does 🙂 WebRTC, as Inara so well explained, is already a <a href="https://www.w3.org/TR/webrtc/">W3C Recommendation</a>, the final stage of the W3C standards track (the W3C, the World Wide Web Consortium, is the entity responsible for defining and publishing all Web-related standards, to which everybody should adhere).
Current voice support requires launching a proprietary app (known as SLVoice.exe under Windows, SLVoice under macOS), developed with code from Vivox, which cannot be distributed in its source code format — that’s one reason why it’s tough to get it working under Linux. You can’t simply compile it from source, only LL can do that. The viewer will then communicate with that external application in order to connect to Vivox’s servers; the viewer itself does not have (yet) any “voice capabilities” per se.
By contrast, WebRTC is a technology that is available everywhere, most notably in web browsers, but nothing forces it to run inside an existing web browser: it can run directly inside the SL viewer itself. And since it is an open standard, LL can freely show that code and make it public, under the same open-source licence as the rest of the viewer. As such, you can compile it into a Linux version of the viewer without any problem whatsoever.
Of course, it also means that the mobile version of the SL Viewer, once released, will also be able to communicate via WebRTC as well.
An interesting use case is imagining the ability of having a “stand-alone” voice app, which connects to the SL voice servers, thus allowing people to make voice calls into, say, a SL group voice call. All that requires is a simple app with WebRTC capabilities which can be freely tweaked to connect to the SL Grid. As you may imagine, there are a trillion such free apps — mostly to enable peer-to-peer voice/video conferencing — so they should be easily adaptable.
Another use case is slightly trickier to accomplish, but it’s feasible. You can use a “normal” streaming service and push the stream via WebRTC. Anyone connected to voice would therefore be able to listen to the streaming service. Why would we do that?… well, LL has sort of hinted that each SL Grid simulator will run its own voice server in parallel (therefore, each region will be connected to a voice server of its own, as soon as it launches or gets rebooted). This has the advantage of decentralising the system — except for group calls, in-world public voice chat can therefore be restricted to an exclusive voice server. Think of it as the equivalent of a local exchange, in telephone communications. It’s only when you need to reach someone else outside the region you’re in that you need to establish some routing between those regions — exactly, in fact, as telephone local exchanges coordinate among themselves to establish multipoint voice conferences. Since not all regions will have residents on them wishing to use voice, LL only needs to launch a limited number of voice servers (thus, keeping their running costs down!).
By using LL’s own system, therefore, you could stream to a region by simply pointing your streaming app towards LL’s WebRTC-enabled technology. There will be no need to hire an additional SHOUTcast/Icecast2 server, or any other technology able to provide streams that can be listened to. LL would, in that scenario, act as the “streaming server”, so to speak, distributing a single stream pushed towards that region among all participants — but via WebRTC.
Now, I’m not saying that LL will do that, of course, only that they could do it. Presently, with a little audio tweaking, you can already do something like that using the voice service — it just means that, instead of a microphone, you directly connect the output of your DJing app via a virtual device to the SL Viewer. It certainly works. It’s just a problem if you crash for some reason — the stream would be cut short for all listeners. Using WebRTC, however, if you crash, the stream will remain (as it happens today when using SHOUTcast, for example), since it will not require the viewer to be “active”.
Granted, things are not that easy, because, in order to establish communications to the voice server, you need to be an authenticated avatar, and, via SL’s internal authentication methods, get its credentials to be accepted by Vivox as a “legitimate user”. This is currently not easy to accomplish (and from what I’ve read so far, it’s really non-trivial, mostly because of the many layers that we have).
Anyway, I’m looking forward to this change. While I use voice very seldom, any improvements in that area are most certainly welcome!
Note: this will also allow TPVs to connect to OpenSimulator grids, too. OpenSimulator currently suffers from the prospect of Voice being discontinued, since Vivox is definitely and permanently shutting down the free "preview" system it has maintained as "legacy" for several years for OpenSim users. When Unity bought Vivox, they told Vivox to discontinue those free services being used in non-Unity-based virtual worlds…
In short – yes. And, down the road, with the potential for a lot more flexibility of use. But Gwen has given the fuller explanation, so I'll leave it at that 🙂 .