How it Will Work
Once the system has been implemented, it will be possible to create specular and normal maps outside of Second Life and then upload them as individual assets just as textures and sound files, etc., are currently uploaded. Once uploaded, the maps can then be applied to in-world objects and object faces in much the same way as textures are currently applied, allowing them to be combined with one another and suitable textures to produce the finished material effect on the object itself.
As with any other content, maps could be created, uploaded and offered for sale (in packs with textures, for example), allowing builders, etc., to make use of them. Additionally, LL may offer a selection of normal and specular maps as a part of the system library found in people’s inventories.
Obviously, for all this to work, a number of changes need to be made to Second Life: on the server side (storing the new maps as recognised assets, provisioning them to the viewer, etc.), within the viewer, and in the rendering system itself (so it can correctly interpret the data).
In the case of the viewer, a major area of change will be the Texture tab of the Build floater, where additional pickers will be needed to allow the selection of normal and specular maps. Additional controls will also be required to allow things like the reflectivity / lighting in specular maps to be adjusted, although in the initial release, it is likely the texture, normal and specular maps will be locked to the same rotation, repeats, offsets, and so on (so it will not be possible to flip a normal map vertically without also flipping the other two).
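The consequence of that locking can be shown with a small sketch. Assuming (hypothetically) that the viewer applies one rotation/repeats/offset transform to a face's UV coordinates, locking means the *same* transformed coordinate is used to sample all three maps; the function names and the transform order below are illustrative assumptions, not the viewer's actual code.

```python
import math

def transform_uv(u, v, repeats_u=1.0, repeats_v=1.0,
                 offset_u=0.0, offset_v=0.0, rotation=0.0):
    """Apply a single repeats/offset/rotation transform to a UV coordinate.

    The transform order (rotate about the face centre, then scale, then
    offset) is an assumption made for illustration.
    """
    cos_r, sin_r = math.cos(rotation), math.sin(rotation)
    cu, cv = u - 0.5, v - 0.5
    ru = cu * cos_r - cv * sin_r
    rv = cu * sin_r + cv * cos_r
    return (ru + 0.5) * repeats_u + offset_u, (rv + 0.5) * repeats_v + offset_v

def sample_face(maps, u, v, **params):
    """Sample every map on a face with the SAME transformed UV coordinate.

    Because the initial release locks rotation, repeats and offsets across
    the texture, normal and specular maps, one transform feeds all lookups:
    flipping or rotating one map necessarily flips or rotates them all.
    """
    uv = transform_uv(u, v, **params)
    return {name: lookup(uv) for name, lookup in maps.items()}

# Stand-in "maps" that just report the coordinate they were sampled at:
maps = {"texture": lambda uv: uv, "normal": lambda uv: uv,
        "specular": lambda uv: uv}
samples = sample_face(maps, 0.25, 0.75, rotation=math.pi)
```

Since every map is sampled at the identical coordinate, there is no way to express "flip only the normal map" in this scheme; that would require per-map transforms, which the article notes are not planned for the initial release.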
The Creative Process
Specular and normal maps have been on content creators’ wish lists for a long time, but they are capabilities that the Lab have perhaps viewed as being in the “some day” category. Given this idea is largely the result of a proposal from the Exodus team, how did it come about?
“I originally had an idea for encoding material properties in a texture,” Geenz Spad, one of the principal architects for the idea, explains. “I asked Oz Linden if this would be violating any policies, and he told me to try putting together a proposal. We started work on this in February. In less than a month [and dropping the texture encoding idea in the process], we had a functional proof of concept ready to show to the people at the Lab.”
Following this, there was an extensive period of discussion within both camps on how to approach the project, with different proposals being exchanged back and forth relating to how various parts of the feature should work. “The process was quite lengthy, given that we were having to work around an existing architecture and determine what could be used as-is in the existing architecture, and what we’d have to create from scratch,” Geenz goes on. “But the project was green-lit in July, and it’s fantastic to see its announcement to the public!”
Time Scale, Discussions and Feature Requests
No official time scale has been announced for the project as yet, but the initial feature set is now largely defined, and it is unlikely that any requests for additional capabilities will be added to the system prior to the initial release. However, a discussion thread has been started in the Building and Texturing sub-forum where ideas for future enhancements to the system can be discussed, and where questions can be addressed.
In addition, a JIRA (STORM-1905) has been created by Oz Linden in which feature requests for future updates to the system can be made, and in which links to detailed information on the functionality and design will be added as they become available.