SL Server roll-outs: creator tools and pathfinding

Update July 18th: The Magnum RC roll-out has been delayed until Thursday July 19th. Oskar may supply a reason on the deployment thread in the forums – keep an eye on that for updates (with thanks to Wolf Baginski).

Main Channel Release

Tuesday 17th July sees the roll-out to the main channel of LSL functions related to the Advanced Creator Tools. This release adds three new LSL functions (descriptions taken from the release notes):

  • llAttachToAvatarTemp(integer attach_point): Follows the same convention as llAttachToAvatar, with the exception that the object will not create inventory for the user, and will disappear on detach or disconnect. It should be noted that when an object is attached temporarily, a user cannot ‘take’ or ‘drop’ the object that is attached to them. The user is ‘automatically’ made the owner of the object. Temporarily attached items cannot use the llTeleportAgent or llTeleportAgentGlobalCoords LSL functions.
  • llTeleportAgent(key agent_uuid, string lm_name, vector landing_point, vector look_at_point): Allows the script to teleport an agent either to a local coordinate in the current region or to a remote location specified by a landmark. If the destination is local, the lm_name argument is a blank string, and the landing point and look-at point are respected. If the destination is remote, the object containing the script must have the landmark in its inventory, with lm_name referring to the name of that landmark. This function cannot be used in a script in an object attached using llAttachToAvatarTemp.
  • llTeleportAgentGlobalCoords(key avatar, vector global_coordinates, vector region_coordinates, vector look_at): Teleports an agent to region_coordinates within the region at the specified global_coordinates. The agent lands facing the position defined by the look_at local coordinates. A region’s global coordinates can be retrieved using llRequestSimulatorData(region_name, DATA_SIM_POS). This function cannot be used in a script in an object attached using llAttachToAvatarTemp.
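By way of illustration, a minimal sketch of temporary attachment might look like the following. The touch trigger and the ATTACH_LHAND attach point are illustrative choices, and since the functions are currently disabled by default, this is a sketch of intended usage rather than something testable on the main grid right now:

```lsl
// Illustrative sketch only: temporarily attach this object to an
// avatar who touches it. As with llAttachToAvatar, the script must
// first be granted PERMISSION_ATTACH by the target avatar.
default
{
    touch_start(integer total_number)
    {
        llRequestPermissions(llDetectedKey(0), PERMISSION_ATTACH);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_ATTACH)
        {
            // No inventory copy is created; the object disappears
            // when detached or when the wearer disconnects.
            llAttachToAvatarTemp(ATTACH_LHAND);
        }
    }
}
```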

The new LSL functions work with the current runtime permissions system and are a precursor to future work with experience permissions. More information about the runtime permission involved can be found at PERMISSION_TELEPORT.
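As a hedged sketch of how the runtime permission and teleport calls fit together (the destination vectors are arbitrary examples, and again the functions are disabled by default at present):

```lsl
// Illustrative sketch only: teleport the toucher to a point within
// the current region once PERMISSION_TELEPORT has been granted.
default
{
    touch_start(integer total_number)
    {
        llRequestPermissions(llDetectedKey(0), PERMISSION_TELEPORT);
    }

    run_time_permissions(integer perm)
    {
        if (perm & PERMISSION_TELEPORT)
        {
            // A blank landmark name means a local teleport; land at
            // <128, 128, 25>, looking along the positive X axis.
            llTeleportAgent(llGetPermissionsKey(), "",
                            <128.0, 128.0, 25.0>, <255.0, 128.0, 25.0>);
        }
    }
}
```

For an inter-region teleport, llTeleportAgentGlobalCoords would instead take the destination region’s global position, which can be obtained via llRequestSimulatorData(region_name, DATA_SIM_POS) and is returned asynchronously in a dataserver event.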

The keen-eyed will note that these are the functions that were rolled-out to the Magnum RC channel in May, and which were subsequently abused for griefing purposes. However, Linden Lab have added a new capability to the functions – what is described as an “on / off” switch, available only to Linden Lab personnel, which allows the functions to be enabled / disabled (the functions were also rolled-out to the Le Tigre RC on July 11th with the “on / off” switch capability). As the release notes make clear, the functions are disabled by default in the roll-out, and will presumably remain that way until such time as the updated permissions system has been rolled out.

The release also includes three bug fixes (again, as specified in the release notes):

  • SCR-342: llTeleportAgent() does not fail gracefully when specifying an invalid landmark name
  • SVC-7966: Magnum RC, llTeleportAgent gives a wrong message
  • SVC-7987: llTeleportAgent always points in the positive Y direction on teleport.

Pathfinding release: Magnum and Le Tigre

On Wednesday 18th July, the Magnum RC will get a further roll-out of the pathfinding code, and Le Tigre will apparently get the same code as well. At the time of writing, the release note pages on the SL wiki for Magnum and Le Tigre still reflected the releases for July 11th, and the forum post announcing the release did not show any specific changes from the forum post relating to the July 11th release. Any alterations which may have been made following the difficulties some initially encountered on the Magnum RC after that roll-out are therefore hard to identify. This may change prior to the actual roll-out.

Related Links

Project Shining: what it means for the viewer

On the 29th June, Linden Lab announced Project Shining, aimed at improving avatar and object streaming speeds. At the TPV/Developer meeting on Friday 13th July, the project was discussed in terms of how the various elements within it will affect Second Life viewers.

The following is a summary of that discussion, based on the recording of the meeting, and focused primarily on the viewer changes / updates that will be most directly seen / felt by the majority of users.

HTTP Library

Commencing at 22:30 into the recording.

The aim of this project is to improve the underpinning HTTP messaging that is crucial to simulator / simulator and simulator / Viewer communications. Monty Linden is leading this project.

Key points:

  • LL will release a project viewer containing a new “wrapper” implemented around how data is handled and a new texture fetch library (see time frame comments at the end of this article)
  • Providing there are no major problems with the project viewer, the initial code release will move to a release version of the viewer
  • This will be followed by changes to group services and a “more ubiquitous” use of the library in the viewer – which is where Oz’s warning to TPV developers comes into play, as some services and their behaviours will start to change to improve throughput and reliability – and may even help improve the SL experience for those on older routers.

As a side note, some of this work has involved router testing aimed at determining what router hardware is compatible with Second Life. While it is hoped that work around the HTTP libraries will improve the SL experience for some using older router hardware as noted above, the tests have revealed that certain types of older router – Linksys WRT and Belkin G series routers were specifically named – are not compatible with running Second Life.

Avatar Baking

Bake fail: a familiar problem for many

Commencing at 32:38 into the recording.

The aim of this work (Project Sunshine) is to improve issues around avatar baking and to eliminate bake fail issues. It will primarily focus on moving the emphasis for the baking process from the viewer to a new Texture Compositing server. The viewer will retain some elements involved in avatar baking – the actual baking of the avatar shape (i.e. shape values and IDs) will still take place on the viewer side, for example.

Precisely how this new service will work on the server-side of things is yet to be fully determined by Linden Lab. However, work is progressing on the viewer side of the equation, with the current key points as follows:

  • The new service will use the Current Outfit folder to drive the new baking service
  • TPVs not currently supporting Current Outfits will have to implement it, otherwise they will effectively fail on avatar baking
  • The basic process will be that when it is time to send a rebake request (e.g. after a user has finished editing their appearance) the viewer must send a new message to the baking service which effectively says, “Look at the contents of my Current Outfit folder and give me back a new appearance based on that”
  • Viewers in general will have to support this new message that is sent to the service, and change how they perform the fetching of avatar textures; for the technically inclined, this will be HTTP without UDP fallback.

Currently, the plan is for LL to integrate the new way of doing avatar baking into their viewer code, which will be available for TPVs to integrate – although none of the Linden Lab 1.x code will be updated to support the new process, so this will effectively break their own Viewer 1.23.5, which is still in use within SL.

The viewer code will support both the “current” method of avatar baking (within the viewer itself) and the new baking service (using the Texture Compositing server) until the new service is fully rolled out across the grid. This means that if a user is in a region that does not make use of the new baking service, avatar baking will continue to be handled using the viewer-side mechanism we currently have. However, if the user is on a region that utilises the new baking service, avatar baking will be handled through that. The viewer will be able to recognise whether it is connected to a region supporting the “new” method through the region capabilities.

In order to ensure as smooth a transition to the new baking process as possible, LL are proposing a relatively long lead-in to the new service, making the code available well ahead of the new service being enabled, allowing TPVs to integrate it into experimental builds. The server-side changes will initially be implemented on a number of beta grid regions for testing with viewers there, prior to being scaled-up. The server changes will then be released onto the main grid in a controlled manner and then scaled up from there.

What Does This Mean for Users?

If all goes according to plan, and providing that you keep up-to-date with releases of your preferred viewer, this actually shouldn’t mean very much in real terms. There are however a number of things to be aware of:

  • If you use a viewer that is not updated to use the new code (i.e. the official viewer 1.23.5, or a viewer that is not updated to use the Current Outfit folder and / or to support the new bake request message / HTTP texture fetch mechanism), OR you continue to use an old version of a viewer rather than updating, there will come a time when your avatar – and those around you – will not bake correctly
  • There are two issues that may occur during the transitional period when both the “current” and the “new” baking methods are in use:
    • When teleporting or crossing between regions that use the different methodologies, users will experience their avatar rebaking, as the viewer will effectively be using two sets of data for the bake process
    • If there are two adjacent regions, one of which uses the current avatar bake process while the other uses the “new” baking service, viewers in one region will not be able to correctly resolve the textures of avatars in the other region
  • It is hoped that the transitional period where both methods of avatar baking are active will only last for about two weeks.

Object Caching and Interest Lists

Commencing at 57:25 into the recording.

When you enter a region at the moment, your viewer receives a huge amount of information on what requires updating, much of it relating to things you can’t even see from your position in the region. The data is received in no particular order, with the familiar result that things appear to rez in your view in a totally random order – quite often with the thing you actually want to see being one of the last to rez due to the mechanics of Sod’s Law. What’s more, if you have previously visited the region, the chances are that much of the information being sent to your viewer is already cached.

Object caching and interest list changes: easing the pain of random rezzing

The focus of this project is to optimise both the data being sent to the viewer and the way information already cached by the viewer is used, so that things rez faster and in a more orderly manner than is currently the case.

At this point in time, this work is in a greater state of flux than the HTTP library and avatar bake projects. This is more a process of optimisation both on the server-side of things and within the viewer itself, rather than that of new functionality within the viewer per se. There are no general time frames for this work at present, but there will be updates once things become clearer as to how the optimisation is going to be addressed.

Time Frames

The precise timeframes for implementing these changes have yet to be properly defined. However, Oz Linden hopes that there will be at least a two month period between Linden Lab making the code for each of these project elements available for integration by TPV developers into their viewers and the point at which the Lab states the code must be in use.

At the moment it is likely that the HTTP library element of the project will be rolled out first, although this is unlikely to be within the next two months, for the reason given above. Project Sunshine, dealing with avatar baking, will then follow, although how soon after has yet to be determined; as described earlier in this article, this will be a very controlled roll-out. It is possible that the object caching / interest lists part of the project may not be rolled out for another six months. However, timeframes are still in discussion within LL, so any of this may well change.

Expect updates on all three of these project elements as and when more information is supplied by Linden Lab.

Related Links

Pathfinding: Magnum RC roll-out, viewer tools and more

Pathfinding is drawing closer to a release across the main grid, and preparation work for the roll-out – which will constitute one of the biggest changes to SL – is underway on several fronts. This article is intended to be a high-level update on various elements of the project, gleaned from a variety of sources.

Magnum RC Roll-out

On Wednesday July 11th, the server-side pathfinding code was rolled-out to the Magnum Release Channel. There had been some predictions that this could lead to significant problems as a result of the known issues given within the release notes.

Following the release, issues were experienced, notably with mesh vehicles, as reported on the forum thread discussing the releases for the week; these have been rapidly responded to by Linden Lab personnel. There are still concerns around the roll-out and its potential impact, and Linden Lab are continuing to monitor.

In discussing the RC roll-out at the TPV/Developer meeting held on Friday July 13th, Lorca Linden, the Associate Producer responsible for the project, commented: “OK, so pathfinding did go in RC on Magnum on Wednesday [July 11th]. As a whole, things are looking really, really good. We’re seeing very few crashes, the performance is working great. We are seeing issues with some vehicles – definitely not all. That’s the only major hitch that we’re looking into, but as a whole the RC has been going quite smoothly.”

New Viewer Tools

As mentioned above, Lorca Linden (together with Stinson Linden and Prep Linden) from the pathfinding project attended the Friday TPV/Developer meeting on the 13th July, where they specifically discussed the viewer-side pathfinding tools. The tools are covered in detail in a new wiki page from Linden Lab, and may already be familiar to those who have been working on the pathfinding beta. They are currently in the Pathfinding Project Viewer, and will need to be incorporated into TPVs as well. The wiki page provides comprehensive notes on the tools, complete with screen shots; the following is intended to provide a high-level summary and some background notes for those unfamiliar with the core elements of pathfinding, and to provide an overview of what this means for viewers going forward.

Navmesh and the Rebake Tool

For those not familiar with the term, navmesh is short for navigation mesh. This is a representation of a region’s geometry generated and used by the physics simulator to determine paths for pathfinding characters. The navmesh can be somewhat fluid in nature, depending upon what is going on in a region and what is being changed; a new path for a character, for example, will change a region’s navmesh. When this happens, the navmesh for the region needs to be updated, which can take some six hours if left to update automatically.

To overcome this when pathfinding is rolled out, one of the new tools that will be appearing in the viewer will be a Rebake Region button. This will automatically appear at the bottom of the viewer window of all users within a region when the navmesh requires updating – regardless as to who may actually have altered the navmesh.

Rebake region button for navmesh updates (with thanks to Linden Lab)

Once baking has commenced, the button will fade on all viewers on which it is displayed, indicating that an update is in progress (and preventing someone else from initiating a rebake). Once the rebake is complete and the navmesh is updated, the button will vanish from viewers.

Object Attribute Tools

By default, a navmesh treats all resident-made objects within the region in which it is active as obstacles that pathfinding characters must manoeuvre around. Obviously, this will not always be appropriate; there will be objects (e.g. stairs, ramps, sidewalks, floors) that pathfinding characters need to traverse, climb, etc. This is achieved by altering the pathfinding attributes associated with an object, and some of the new viewer tools allow this to be done, as well as letting users examine in-world objects to determine their status vis-à-vis pathfinding and how pathfinding characters will react to them.

These tools take the form of menu options and additional panels located in the Build and Object Profile floaters.

Dedicated Floaters

Also included in the tools are three dedicated pathfinding floater panels:

  • Linkset floater: designed to give advanced users and builders the ability to customize an area to achieve interesting effects with pathfinding-enabled characters
  • Character floater: designed primarily to help users to locate characters moving throughout a region and to identify the CPU cost of characters affecting the performance of a region
  • View / Test floater: intended for advanced users who are building pathfinding-enabled objects and characters.

Pathfinding characters floater (with thanks to Linden Lab)

Tool Status and TPV Integration

As mentioned above, these tools are all currently available in a Project Viewer. However, it is anticipated that they will be appearing in a Linden Lab beta viewer in “one to two weeks” (Lorca Linden). The tools themselves are regarded as feature complete by LL, and Lorca encouraged TPV developers at the meeting to consider integrating them into their viewers sooner rather than later.

Integrating the new tools in TPVs is liable to be in two parts:

  • An initial release containing the tools required for setting object attributes, etc.
  • A follow-up release incorporating the use of the Havok libraries which Linden Lab is establishing, and which will be made available under the new sub-licence arrangement.

The reasons for this are two-fold:

  • The attribute tools, etc., are vital for optimising pathfinding within regions and ensuring everything works correctly (e.g. to ensure pathfinding characters climb the stairs they are supposed to climb, or walk along the prim sidewalks they are supposed to walk along, etc.)
  • The Havok libraries are not yet available, although Oz hopes to have them in a position where he can talk in more detail to TPVs about them “pretty soon”; and while it is nice to be able to visualise the navmesh, etc., it is not quite such a vital part of the pathfinding process.

Universal Tools

Alongside the viewer tools, pathfinding will see a set of universal tools rolled-out in console format. These will be available to region owners and estate managers, and will allow them to change an entire class of object in a region to have certain pathfinding attributes once pathfinding goes live. Linden Lab are approaching this by having all non-scripted objects set to be static obstacles that pathfinding characters must manoeuvre around, while anything that is scripted is set to “dynamic”, as it is assumed to be capable of moving.

This obviously doesn’t fit all cases – vendor boards, for example, are scripted, but they are hardly what can be termed “moving” objects. Indeed, it might be argued that the majority of scripted objects within a region are non-moving, and therefore should have their pathfinding attribute set to “static”. However, LL feel they have no easy way of differentiating between a non-moving scripted object and a moving scripted object, and thus feel the better option is to set all scripted objects to “dynamic” and allow the attribute to be modified through the viewer where necessary, as setting them all to “static” could result in worse overall behaviour within a region.

Other Tools and Items

Alongside the above, Linden Lab have previously indicated that they will be making the following available as pathfinding rolls out:

  • A set of script templates used for the creatures found in the Wilderness areas
  • A script for a “master rezzer system”.

The latter is a means by which region performance and the number of pathfinding characters rezzed in the region can be monitored, and which will reduce the number of pathfinding characters within a region in response to the region’s performance / number of avatars within the region.

Potential Timescales for Roll-out

During the TPV/Developer meeting, Lorca outlined some potential dates for pathfinding. Note that these are currently provisional, and shouldn’t be taken as tablets of stone:

  • The pathfinding tools should be available in one to two weeks in a beta viewer
  • The server-side release is dependent upon how well (or otherwise) the current release to the Magnum RC progresses, and may potentially come within the next two-to-four weeks, but certainly no sooner than two weeks.

Again these are not confirmed dates, and may well change in the next couple of weeks – particularly if major issues are found with the Magnum RC roll-out.

Related Links

A Shining announcement: major improvements coming to SL

Yesterday Linden Lab announced a major series of new initiatives aimed at improving the overall SL experience. The announcement came via a Tools and Technology blog post, which covers the initiatives in great detail. These focus on four main areas of activity: one is directly related to hardware and infrastructure, while the remaining three are focused on the platform itself and are grouped under the Shining project banner.

The hardware / infrastructure element of the work is described thus:

This year, Linden Lab is making the single largest capital investment in new server hardware upgrades in the history of the company. This new hardware will give residents better performance and more reliability. Additionally, we are converting from three co-locations to two co-locations. This will significantly reduce our inter-co-location latency and further enhance simulator performance.

The Shining project is something that is already known to many SL users – especially those who attend some of the User Group meetings. It is perhaps most famously associated with the Lab’s work on the Viewer rendering code, removing outdated functions and calls no longer supported in modern graphics systems (most notably Nvidia) and improving graphics handling overall. Shining has also been responsible for other incremental improvements to issues around streaming objects and avatars.

Under the new initiative, Shining is split into three core performance projects.

Bake fail: a familiar problem for many

Project Sunshine: One of the biggest complaints from users in SL relates to avatar rezzing. This can appear slow, and usually manifests in avatars remaining grey for periods of time, or in skin and system clothes remaining blurry (see right) – and, at its worst, results in a user changing their avatar’s outfit while others still see the avatar dressed in the previous outfit, or naked. Collectively, these issues are known as “bake fail”, and are the result of the Viewer having to do all the compositing of avatar textures locally, then sending the results to the SL servers, which in turn send the information back to the simulator the avatar is in, to be accessed by other Viewers in the same simulator.

Under Project Sunshine, to précis the blog post, much of this work is moved server-side, using a new, dedicated server, the Texture Compositing Server, which is separate to the simulator servers. This effectively allows all the “heavy” communications and calculations work relating to avatar texture compositing to be performed within LL’s servers and across their own internal network, removing the reliance upon the Viewer and on Viewer / server communications which are outside of LL’s control.

Object Caching & Interest Lists: This is intended to directly address another common request from users: improving how the Viewer handles local object caching. This effectively means that once the Viewer has information relating to a specific region, and providing the information is still valid (i.e. there have been no changes to objects that the Viewer already has cached), then it will no longer need to re-obtain that information from the server. Only “new” or “changed” data needs to be streamed to the Viewer. This should mean that on entering a previously visited region, the Viewer should immediately be able to start rendering the scene (rather than requesting a download from the server), while simultaneously requesting any “updates” from the server through a comparison of UUID information and timestamps.

HTTP Library: The final aspect of Shining’s three-phase approach is to improve the underpinning HTTP messaging that is crucial to simulator / simulator and simulator / Viewer communications (and thus key to the other elements of Shining) through the implementation of “modern best practices in messaging, connection management, and error recovery”.

Overall, Shining will be tackling some of the major causes of Viewer-side lag and user frustration in dealing with avatar bake fail and the complexity and wastefulness of scene rendering that is encountered when moving around SL.

No definitive time frames for the improvements have been forthcoming with the announcements – and this is understandable; there’s a lot to be done, and matters are complex enough that LL will want to proceed with minimal disruption to the grid and to users. Doubtless, more information will be made available as it becomes known, through the LL forums and (possibly more particularly) via the relevant User Groups.

Mesh clothing deformation: alternative approach suggested

Updated June 26th 16:30 BST: The discussion on this alternative continues on the SLU Forum thread (recommended reading for anyone interested, as a lot is explained succinctly and clearly). Darien Caldwell has summarised the technical aspects of both solutions (and of not having a deformation capability at all) in terms of who is most impacted – consumers, creators and / or coders. Similarly, in answering a question posed by Innula Zenovka on the relative advantages / disadvantages of the two ideas (RedPoly’s and the deformer), Adeon Writer commented:

“This trick was created to address major problems with clothing, but it is a patch. And you can see the areas where it’s not patched: this only makes mesh follow a few more sliders, while the rest (especially the face) do nothing.

“Qarl makes mesh work with ALL sliders, even future ones that don’t exist yet. It is the correct solution to the problem, this is a quick workaround.

“Qarl gives the ability to make entire new human meshes fully removed from the system shape that still work with all sliders and avatar physics,

“That is not possible with this.”

This would seem to be a clear-cut differentiator that would suggest that if matters come down to a choice of one approach or the other, continuing with the deformer may well be the preferred course of action. Obviously, nothing further has been said on the matter by LL, but further updates will be posted as they become available.

Nalates Urriah brings news of a potential alternative to the mesh parametric deformer that has been under development by Qarl Fizz, and which has been reported upon extensively by Nalates, myself and others.

I’ll leave the in-depth technical explanation and quotes to Nalates – she broke the story, after all. However, to try to summarise:

  • The idea is that rather than weighting mesh clothes against the avatar “skeletal frame”, the clothes are weighted against the “collision volumes” – these are (I gather) used to detect when your avatar collides with a physical object in-world, and thus are designed to morph when you adjust your avatar’s shape
  • The approach isn’t perfect and has a number of limitations (female clothing won’t stretch with breast size changes, for example); extreme sizes cause issues (as they do with the deformer); weight painting during the construction of mesh clothing can be somewhat more problematical
  • Alpha masks will still be required in certain situations (but then, alphas were never going away anyway).

The developer of the approach, RedPoly Inventor, has released a demo version using a dress, which can be obtained from his store. There is also a demo video on YouTube:

RedPoly is the first to admit the approach is not perfect, but has also proposed an additional idea of developing a further set of avatar “bones”, which he calls “cbones” that would allow this approach to work a lot better. According to Nalates’ report on the mesh meeting where this all came out, RedPoly believes the development of such a new system would be relatively simple.

Interestingly, according to AshaSekayi Ra, commenting in an SLU Forum discussion on this development, the idea of using the collision volumes was first raised in the mesh beta last year, when Prep Linden requested samples of clothes rigged to the avatar’s collision volumes; however, it is unclear what happened with any tests LL may have carried out.

Right now, this doesn’t mean the end of the deformer, nor does it mean all mesh clothing issues are solved. It does, however, open up new avenues of exploration and certainly new topics for discussion on the matter.

Reading Nalates’ report, it would appear that the idea has taken LL themselves a little by surprise, despite the fact it may well have been previously discussed, and their reaction is potentially best described as cautious.

As it stands, mesh designers such as AshaSekayi Ra and Ellie Spot will doubtless be looking at the idea, as will those with expertise in avatar design, as well as (one would hope) LL themselves. As Nalates states, there will be further news emerging on this as tests are conducted and feedback given.

Related Links

With thanks to Nalates Urriah.

Mesh Deformer: updates and musings

I’ve largely backed away from covering the mesh deformer of late because Nalates Urriah is doing a good job of reporting back on the Mesh Content User Group where it gets discussed, and I don’t really get the time to attend the meetings myself.

On June 11th, Nalates provided a summary of the most recent meeting, which includes some interesting excerpts from the conversation on the deformer. Of particular interest are a couple of comments from Nyx and Oz Linden, notably:

Nyx Linden (replying to a comment from Ellie Spot that the deformer is now in LL’s hands & is a matter of “Fixes to make it work for more extreme shapes“): The issue of extreme shapes is definitely an issue that needs to be discussed.

Oz Linden (later in the conversation): We’ve given Qarl some feedback. In its present form, it’s not quite good enough, but I don’t think we should get into details. There are problems with the avatar, and there are problems with the deformer. It remains to be seen whether or not we can fix the avatar problems (I’m looking into it from a couple of angles). But, we hope that it’s possible to make some progress on the deformer even without those fixes.

As Nalates points out, Nyx’s comment is open to a number of interpretations, some of which could be positive (and, given Nyx’s nature, far more likely) while others might be more negative; as no real expansion on the comment was given, it comes down to a matter of interpretation and speculation.

However, in this week’s Metareality podcast, Qarl does comment further on the matter, in a discussion commencing at 34:10 into the podcast:

[36:04] Qarl: Now I have to say that he’s like one of my favourite Lindens, so I doubt he was saying anything bad.

Oz’s comment – and the fact he would not be drawn into saying who at LL is working on the deformer or what the overall priority for the project is within the Lab – drew further comment from Qarl:

[38:22] Qarl: So I’m dealing with “Linden X”, who I also like a great deal and is a very nice guy. And … I think we’ve come to a place where we have agreed – I think, although he didn’t respond to my last e-mail – I think we’re agreed on what needs to be done before we can ship. One of those ideas is … similar to the standard sizing business that everyone is talking about, but instead of having a fixed set of sizes – small, medium, large – encode the actual avatar parameters into the mesh itself, so you can have any avatar shape as your base, because Linden Lab wanted to have a stick figure base, and I’m like, “Well if you encode the parameters, then you guys can do that”. … So assuming there’s enough room in the mesh asset for that, then I think that’s what we’re going to do. And then the other issue is that the vertex matching needs to be tweaked a little bit – for our tech listeners – to take into account the normals. So it’s going to look at both the position and the normals when it chooses the matching spot.

Qarl’s comments prompted special guest Eclectic Wingtips to ask:

[39:48] Eclectic Wingtips: So how much work is this going to be for those of us who make mesh? … If there’s multiple sizing, are we still going to need to do multiple sizing in the 3D programme [used to create a mesh item of clothing] to bring it in?

[40:01] Qarl: Oh! No, no, no. You can totally not use that at all. You just leave all the parameters the same, and it just uses the default avatar and blah, blah, blah … BUT, if you want to make an outfit that fits really well on … an anorexic model, so you tweak it for the super skinny or something, then you can set those parameters to be like “fat”, and it matches the bases of extra, extra, extra small.

[40:36] Gianna: But you’re setting those parameters within your 3D content?

[40:39] Qarl: within your 3D content … So the issue then becomes the GUI, because you have like a thousand parameters now you have to enter … what I think … what we’re going to default to is, you’ll have like six radio buttons for those sizes … but with very little extra effort, the Third-party Viewers will be able to expose that stuff, so you’ll be able to do anything you want; just so long as it’s in the protocol, you can open that later.

Qarl’s explanation – assuming this is what happens with the deformer – seems to offer the most flexible solution to the question of base shapes and sizing. To hear the discussion in full (and the rest of this week’s topics), be sure to listen-in to the podcast itself.

Related Links