SL project news week 43/1: Server updates and llHTTPRequest

SL Server Updates

A brief start-of-week update with an important item on the LSL llHTTPRequest function.

Tuesday, October 23rd, saw an update to the main channel which should have minimal impact on things. “It’s a change that should make simulators run better on our new hardware,” Simon Linden explained at the Simulator User Group meeting on the 23rd.

Wednesday 24th, as previously indicated, should see the RC channels updated as follows:

  • Magnum should receive bug fixes together with Baker Linden’s Group Services project code (the viewer side of which is still blocked)
  • LeTigre should receive further updates for the new Havok code (which presumably include fixes for the crash loop situation Maestro reported in the Server Beta UG meeting; see above)
  • BlueSteel should receive, “Some more invisible changes that should help us deal with some problems like full disks that make servers very unhappy.”

Details on the deployments are, as usual, posted in the Second Life Server section of the Technology forum.

Third-party Web Caching and llHTTPRequest

Kelly Linden indicated that the week 42 updates to LeTigre included some library updates. One of these was to the cURL library, and changed its behaviour specifically around caching.

Until now, outgoing requests have had a Pragma: no-cache header in them, because cURL added this to all requests, thus ensuring fresh data was returned. The change made to the cURL library on LeTigre means this is no longer the case, so if the third-party web server has caching enabled, any outgoing llHTTPRequest might return previously cached results from the server, rather than fresh data.

Kelly noted that, “Systems that are most likely to be affected are those that frequently hit the exact same URL and expect the data to change. Maybe they are getting a counter or checking on something’s status, leading to problems with the likes of breedables dying, and so on.”

A workaround for this has been implemented for llHTTPRequest in the form of an HTTP_CUSTOM_HEADER flag, which enables Pragma: no-cache to be specified manually.
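Based on Kelly’s description, a minimal sketch of how a script might use the new flag to set the header explicitly (the URL and the “status” service here are purely hypothetical):

```lsl
// Sketch only: request fresh (uncached) data from a third-party web service
// using the new HTTP_CUSTOM_HEADER flag to add Pragma: no-cache manually.
key gStatusRequest;

default
{
    state_entry()
    {
        gStatusRequest = llHTTPRequest(
            "http://example.com/breedable/status",   // hypothetical service URL
            [HTTP_METHOD, "GET",
             HTTP_CUSTOM_HEADER, "Pragma", "no-cache"],
            "");
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        // Only react to the request this script made
        if (request_id == gStatusRequest)
            llOwnerSay("Status (fresh): " + body);
    }
}
```

The point is simply that the header the old cURL library added automatically is now the script’s responsibility if the external server has caching enabled.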

While there have so far been no reports of this or similar happening, LL are continuing to discuss the potential impact. Kelly continued, “If anyone uses or develops systems they think *might* be affected please give them a try on LeTigre this week and let me know. Or if you know others that might, encourage them to test on LeTigre this week. Thanks.”

So, if you have created a product which uses an external web server for updates in the manner described above (or know anyone who has), you may want to test its behaviour on a LeTigre region to see whether it is impacted before this change rolls out further across the grid.

SL Issues

Network Traffic and Sim Lag / Crash

This issue has been going on since the start of the month, and appears to affect regions with large numbers of people.

A bug report was raised on the issue (BUG-355), and has been imported by LL as a MAINT issue (MAINT-1682). However, there has been no feedback from LL as to the underlying cause, although investigations are continuing.

3 thoughts on “SL project news week 43/1: Server updates and llHTTPRequest”

  1. Inara
    Can you explain, in non-Geek, what on earth this “non-Pragma” thing on LeTigre is this week, please? Even with your blog’s wording I still cannot make head nor tail of what Oskar meant on the Server Blog.


    1. I’m not 100% sure on this myself and had to try and pick my way through Kelly’s comments in a saved version of the chat file, where they were mixed with other conversations. My apologies if I merely stirred the muddy waters even more.

      I’ll try and clarify in terms of SL, assuming my own understanding is right.

      It’s possible to use a “third-party” (i.e. your own) web service to provide data for in-world objects. Some of this data needs to be frequently updated, a typical example being things like breedable animals, with their health, etc.

      Where this is the case, it’s important to make sure that any data being received is fresh data, rather than data which may have been cached at some point along the way – say by any intermediary servers. The Pragma: no-cache header field is sort-of intended for this.

      Again, assuming I’m understanding this correctly, the cURL library has, up until now, effectively added this to outgoing requests, so it doesn’t have to be explicitly scripted. However, with the new cURL library on LeTigre this is no longer the case, so there is a risk that calls to such external services may actually return previously-cached information (which would not be good for things like breedables, as they could start keeling over or running away or whatever it is they do (I’m admittedly ignorant on all forms of breedable)).

      So, to help overcome this, LL have put a new flag into the llHTTPRequest function to achieve the same ends – hopefully ensuring that fresh data continues to be obtained from the external web service.

      The problem here is that it is not entirely clear whether or not the cURL library change is having an impact – hence Kelly Linden’s caution at the meeting. He’s asking people who think they may have a problem (because they are witnessing issues already, or because of the way they may have scripted their products, etc.) to spend time on LeTigre testing to see if their product is being adversely affected.

      Does that make sense?


Comments are closed.