SL Viewer Updates
Things have been relatively quiet viewer-wise, with only two updates to the official viewer branches. The development viewer updated to version 220.127.116.117614 on December 4th, while on December 5th the beta viewer updated to version 18.104.22.1687755. The latter included a good crop of updates, including a number of graphics and GPU support-related changes, and the long-awaited snapshot tiling fix.
Following on from last week’s RC deployment issues, there was no main channel deployment on Tuesday December 4th, although a number of regions were restarted during the course of the day.
Wednesday December 5th saw the same maintenance release rolled to all three RC channels. This comprised the release originally aimed at Magnum in week 48, which included all of the bug fixes for the problems that required the roll-back on Thursday November 29th. Initial statistics for this update during the brief time it was available last week showed a clear improvement in stability, and this seems to have continued with this week’s release, although one major issue has come to light and is under investigation.
This relates to IM messages sent by scripted objects failing to trigger e-mails to the object’s owner when the owner is off-line. The problem appears to be related to the use of llInstantMessage(llGetOwner(), ...), and appears to affect regions on all three RC channels, although not every case where the call is used appears to be affected.
Currently, it is thought that a fix will be available for deployment during week 50, and should reach the RC channels on Wednesday December 11th.
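For reference, the affected pattern is simply a scripted IM sent to the object’s owner. The sketch below is purely illustrative (the event and message text are my own, not taken from the issue report); normally, if the owner is off-line and has off-line IMs forwarded to e-mail, the simulator would e-mail the message to them, and it is that forwarding step which currently appears to fail on the RC channels:

```lsl
// Illustrative example only: a scripted IM to the object's owner.
default
{
    touch_start(integer num_detected)
    {
        // Send a status IM to the object's owner. If the owner is off-line,
        // the message should normally be forwarded to their e-mail; under
        // the current RC issue, that e-mail may never be sent.
        llInstantMessage(llGetOwner(), "Object touched by " + llDetectedName(0));
    }
}
```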
Speaking at the TPV developer meeting on Friday November 17th, Oz Linden indicated there would be code freezes for week 51 (commencing Monday December 17th) and week 52 (commencing Monday December 24th), but the status of week 1, 2013 (commencing Monday December 31st) was unclear – although at the time it seemed likely there would be a code freeze for that week as well, given the holiday period.
Speaking at the Server Beta meeting on Thursday December 6th, Maestro Linden confirmed there is a server-side “no change” window as follows:
- Week 52 – commencing Monday December 24th
- Week 1, 2013 – commencing Monday December 31st
Whether or not there is a “no change” window in place for week 51 (commencing Monday December 17th) is somewhat unclear, with Maestro commenting that while there will be releases as usual for week 50 (commencing Monday December 10th), he only committed to deployments “possibly” taking place the week after. Expect further updates.
User Group Meetings
As part of the holiday season, various User Group meetings are liable to be suspended in the coming weeks. Speaking at the Server Beta meeting on Thursday December 6th, Maestro and Andrew indicated meetings would be suspended as follows:
- Tuesday Simulator UG: no meetings on Tuesday December 25th or Tuesday January 1st, 2013. Meetings will resume on Tuesday January 8th
- Thursday Server Beta UG: no meetings on Thursday December 20th or Thursday December 27th. Meetings will resume as normal in 2013 from Thursday January 3rd.
Meetings for both groups will occur as usual in week 50 (week commencing Monday December 10th).
Threaded Region Crossings
The work on multi-threaded region crossings is still with the LL QA team. In the meantime, further regions have been added to the simulator version (server code DRTSIM-184) running the new code. Four of the latter are GC Test 9, GC Test 10, GC Test 15 and GC Test 16, which form a block of four adjoining regions that may assist with testing the capability (remember these SLurls are all on Aditi!). As reported in my last update on region crossings, Bonifacio on Aditi also runs the multi-threaded code, but testing the capability there is impossible, as none of the surrounding regions are currently on the same code.
Region Performance Issues
The physics memory issues affecting some Homestead regions (at least one), which I reported in week 47, are still ongoing in some cases, while others may have been fixed with the most recent deployments. As noted last time, Simon Linden believed he had identified a part of the cause as being Havok-related (and linked to the navmesh).
Essentially, Simon’s investigations revealed that part of the problem lay with regions experiencing repeated navmesh rebakes. Such rebakes can be triggered by a wide range of events: terrain editing, changing the status of a linkset or character, setting various parcel flags, etc. Each rebake consumes server memory, with the result that multiple rebakes can leave a region in need of a restart. As not all of the triggers generating a request appear to be linked to the actual need for a rebake (altering some estate / parcel settings can trigger a request, for example), Simon has been looking into matters and testing a possible fix to reduce the number of unwanted requests. His hope is that it will reach an RC release on passing QA.
Other, very specific issues with physics memory use related to individual regions, such as those being experienced by Rainbow Cove (again as noted previously) are still being investigated.
Aditi Grid Log-in Issues
As previously reported, there are ongoing issues with logging-in to the beta Aditi grid. Some of these are connected to the account management database on Aditi becoming overloaded with data, never having been purged of stale account information. It has been suggested that the solution would be to cull information from the database, with Andrew Linden indicating that an initial script had been written to purge “old” data (i.e. relating to accounts which have not been used to log in to Aditi for a significant period of time), and speculating that, as a longer-term solution, it might be possible to tie this script to a cron job so it could periodically purge Aditi accounts which have not been used for a specified period of time.
However, according to Maestro Linden, it now appears that the interim solution for this problem will be to acquire more disk space. This presumably means obtaining disk space internally rather than adding new hardware. There is currently no timescale for this work, nor is it yet clear whether other options (such as periodically culling accounts) have been definitively discounted.
In the meantime, there are other issues with Aditi, particularly the 24-48 hour wait required to ensure data is properly propagated from Agni to Aditi following a password change. A further issue, related to the automatic crediting of Aditi accounts with L$5,000 should their balance fall below L$10,000, has been resolved, and accounts crossing the L$10K threshold should now be updating correctly.