2023 SL SUG meetings week #9 summary

Buddha Garden, December 2022 – blog post

The following notes were taken from the Tuesday, February 28th, 2023 Simulator User Group (SUG) meeting. They form a summary of the items discussed and are not intended to be a full transcript. A video of the entire meeting is embedded at the end of the article for those wishing to review the meeting in full – my thanks to Pantera for recording it.

Server Deployments

  • On Tuesday, February 28th, the SLS Main channel servers were restarted without any deployment, leaving them on simulator version 577734.
  • On Wednesday, March 1st, the majority of RC servers will be restarted without any change. However, those on the BlueSteel RC will be updated with server release 578370.

Available Official Viewers

On Tuesday, February 28th, the Maintenance R RC viewer updated to version 6.6.10.578285.

The rest of the official viewers currently available remain unchanged from the start of the week:

  • Release viewer: Maintenance Q(uality) viewer, version 6.6.9.577968, issued Thursday, February 2.
  • Release channel cohorts (please see my notes on manually installing RC viewer versions if you wish to install any release candidate(s) yourself).
    • Maintenance S RC viewer, version 6.6.10.578270, issued February 24.
    • Performance Floater / Auto FPS RC viewer updated to version 6.6.10.578172, February 21.
  • Project viewers:
    • PBR Materials project viewer, version 7.0.0.578161, February 14. This viewer will only function on the following Aditi (beta grid) regions: Materials1; Materials Adult; and Rumpus Room 1 through 4.
    • Puppetry project viewer, version 6.6.8.576972, December 8, 2022.

In Brief

  • BUG-233440 “Add a method for dealing with user-customized keybindings in user-visible text” came under discussion, with Maestro Linden suggesting one approach, per the comments in the Jira.
  • The above segued into a general discussion on parsing commands embedded in note cards (with care!), making things like URIs within note cards clickable, and having an LSL command (e.g. llHTTPResponseNC() ) which could allow scripts to reply to HTTP requests with note cards – so if a HUD is using MoaP, JSON in the HTML could send commands back to it via POST.
  • Please refer to the video below for other topics.
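For context, this is roughly how LSL's existing HTTP-in flow looks today, and where the proposed function would slot in. Note that llHTTPResponseNC() is purely hypothetical at this stage; only llHTTPResponse() exists, and it replies with a (length-limited) string:

```lsl
// Sketch of the current HTTP-in flow in LSL.
// llHTTPResponseNC() below is the *hypothetical* function discussed
// at the meeting, not an existing API.
default
{
    state_entry()
    {
        llRequestURL(); // ask the simulator for a temporary HTTP-in URL
    }

    http_request(key id, string method, string body)
    {
        if (method == URL_REQUEST_GRANTED)
        {
            llOwnerSay("Listening at: " + body);
        }
        else if (method == "POST")
        {
            // Today: reply with a length-limited string...
            llHTTPResponse(id, 200, "ok");
            // The proposal: reply with a whole notecard instead, e.g.
            // llHTTPResponseNC(id, 200, "My Notecard"); // hypothetical
        }
    }
}
```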


One thought on “2023 SL SUG meetings week #9 summary”

  1. As to BUG-233440, I have no comments. I understand the issue, but strictly from my perspective, it might not be an ‘easy’ one to implement, and anyway would just affect a handful of content creators (well, I mean thousands). With almost a quarter of a million bugs to fix (!), some of which are far simpler to deal with, one wonders why LL cherry-picked this one to fix…

    As for sending back a notecard as a reply to a HTTP-in request, that’s quite interesting (no, I haven’t got time to watch one hour of video to understand the details…). I suppose the only reason for doing that is to overcome the draconian restrictions on message length — you cannot simply read a notecard using LSL, line by line (that’s how it works), assemble all the lines into a single list (you might not have enough memory to store it), and send it back via HTTP (the character limit is way too low for anything except the simplest of notecards).
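The line-by-line reading mentioned above is done with llGetNotecardLine() and the dataserver event; a minimal sketch (the notecard name is an assumption) shows why memory becomes the bottleneck on long notecards:

```lsl
// Sketch of today's line-by-line notecard reading, which a function
// like llHTTPResponseNC() would bypass entirely.
string gCard = "My Notecard"; // assumed notecard in the prim's inventory
integer gLine;
key gQuery;
list gLines; // every line is held in script memory (~64 KB total under Mono)

default
{
    touch_start(integer n)
    {
        gLine = 0;
        gLines = [];
        gQuery = llGetNotecardLine(gCard, gLine);
    }

    dataserver(key query, string data)
    {
        if (query != gQuery) return;
        if (data == EOF)
        {
            llOwnerSay((string)llGetListLength(gLines) + " lines read.");
            return;
        }
        gLines += [data]; // memory fills up quickly on long notecards
        gQuery = llGetNotecardLine(gCard, ++gLine);
    }
}
```

Each dataserver delivery is itself truncated to roughly 255 characters per line, which compounds the problem for long notecards.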

    So, it makes sense to simply send the notecard itself instead, via a function such as llHTTPResponseNC().

    That, in turn, obviously raises some even more interesting questions. What format will those notecards be in? Currently, under OpenSimulator, a notecard is a specially-formatted XML file, where elements can be either plain text or, well, an ’embedded’ asset (landmark, texture, etc.). I can see how replying with a notecard could be implemented in the same way for the text part. But the assets?… well, one could argue that landmarks may get converted into SLURLs first. Textures/snapshots would be retrieved from the asset server as JPEG2000 files, and converted on demand to JPEG/PNG. The same would apply to sounds (just convert them back to WAV).

    However, other things would be much tougher to send back via this model. For instance, animations — I’m not even worrying about the permissions issue (and the way all content would be stripped of its creator/owner tags), but rather how LL could achieve that at all. They have an ‘internal format’ for animations; they’ve managed to convert from BVH into their format, but… is it even possible to do the reverse?

    What about embedded objects? I mean, I’d love it if LL allowed that to happen in some form. For prims it’s easy enough: there is a limit to what kind of prim torture you can apply (and it’s quite well defined!), so all you need to send is a list of each prim in the linkset with its ‘torture’ attributes, including position, rotation… and its applied textures. Exactly like the format used by IAR files in OpenSimulator (and possibly how Firestorm does it — I have never tried it). The end result? An official mechanism for exporting content out of Second Life and storing it locally 🙂
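    A script can already gather most of those per-prim attributes via llGetLinkPrimitiveParams(); a rough sketch (assuming a multi-prim linkset, where the root is link 1) of the data such an export would need:

```lsl
// Sketch: enumerating per-prim attributes from a linkset — roughly the
// data a notecard/IAR-style export format would have to capture.
default
{
    touch_start(integer n)
    {
        integer prims = llGetNumberOfPrims(); // assumes a linked object
        integer link;
        for (link = 1; link <= prims; ++link)
        {
            list params = llGetLinkPrimitiveParams(link,
                [PRIM_NAME, PRIM_POSITION, PRIM_ROTATION,
                 PRIM_SIZE, PRIM_TYPE]);
            llOwnerSay("Link " + (string)link + ": "
                + llList2CSV(params));
        }
    }
}
```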

    (Note: LL would not even need to reinvent the wheel; such formats — IAR for inventory items, OAR for the whole content inside a region — have long been fixed and standardised by the OpenSimulator community. Even if LL wants to reinvent the wheel, ultimately, there will be a reasonably simple way to convert from one format into another. Unless, of course, LL packages those assets inside a proprietary file format and encrypts it with a key that only they know… making all the above useless, of course)

    So, very likely they’re not going to allow that 🙂 It’ll be text-only, I guess. It will still be useful, though!

    Highlighting URLs and making them clickable is certainly a much simpler thing for LL to do (they did it for public chat/IM/group chat/many pop-ups that appear on the viewer) and I wouldn’t be surprised if they allowed that to happen. One cannot help but wonder, though: since they’re so willing to hack and slash at the notecards (after more than a decade), would they be willing to at least implement basic markup in notecards — a much-needed feature, and one which would have been trivial to implement eons ago, if LL hadn’t felt the need to reinvent the wheel over and over again in their viewer…

    Just imagine how much easier it would be for LL if they just launched a window with the renderer — all else relying on equivalent functionality in the operating system. Inventory would be just a filesystem; to upload sounds, textures or animations, you would just drag and drop them from your system on top of the inventory. You wouldn’t need all the myriad buttons and menus and whatnot for doing a gazillion things all inside the viewer — all of these would be on a native menu. Some would simply disappear — warnings and notices would come from within the operating system’s own notification system instead. Writing notecards? Just use one of several external text editors which can save to Markdown, and drag and drop the file into inventory. The same, of course, for the LSL editor — no need to keep updating the built-in one with all its fancy things (which lags behind modern code editors and IDEs by at least a decade or even two!).

    And they wouldn’t even need to maintain two separate codebases in two different programming languages (or three — for those TPVs that still support Linux). They could just have one codebase, using one of the many cross-platform frameworks out there. Qt comes to mind (it’s possibly the fastest out there which also provides the best integration with the different operating systems — the applications ‘feel’ native, even when they aren’t, because things work as you expect them to work under your operating system), mostly because it should be easy for LL to find qualified developers that are rather familiar with Qt. But there are plenty of alternatives. They could simply create an app written in JavaScript, using any of an even larger number of frameworks out there. I’m aware that this would not work for the renderer — which would still have to be a natively-compiled application — but that would be a question of separating layers. We already know that there is an ever-increasing number of ‘special URLs’ (those beginning with secondlife://…) which trigger quite a lot of the built-in functionality of the viewer (and not only that). It’s conceivable that the so-called ‘user interface’ layer already communicates via those mechanisms with the ‘scene viewer’; this would just require detaching one layer from the other, and running them independently. Imagine: no more chat lag. Ever! It would run in another window, without any relation to the 3D scene — and whatever FPS it has.

    Better still, you could run everything except the actual renderer on a mobile device… or inside a browser, which would also mean running SL on your SmartTV. Voice is already handled by a separate plugin, and so will puppetry be, and who knows what else will be available (Linden Lab promises that a lot can be done using plugins). I wish I had time to do a few experiments… there are some simple ideas that would be actually cool to do, like communicating with an external XMPP or even ActivityPub (= Mastodon) server, and seamlessly integrating the two communication services.

    Anyway… sorry. I got sort of carried away 🙂

    One question I have with MoaP for HUDs — what happens if the avatar enters a no-MoaP area? Or will MoaP in HUDs — like most (though not all?) scripts inside HUDs — always work, no matter where the avatar currently is?

