Materials processing: the what, why and where

On August 16th, Linden Lab announced the forthcoming arrival of materials processing in SL in the form of specular and normal maps. At the same time, a video was released demonstrating some of the capabilities. But what does all this actually mean for the everyday user in SL? Here’s what I hope is a lay guide, including comments from Geenz Spad, one of the architects of the new system, on how it came about.

Materials Processing

This is not intended to be a technical discussion of computer graphics mapping in general, or of normal and specular maps in particular. Rather, it is intended to provide a broad, non-technical explanation of how the latter work.

Materials processing is the combining of various computer graphics “maps” to significantly increase the level of detail that appears on any object or surface within a computer game. Within Second Life, textures (themselves a form of computer graphics map called a diffuse map) are routinely used to add the illusion of surface detail to in-world objects and surfaces. The new materials processing capability will introduce two further kinds of computer graphics map to SL, which can be used in-world alongside textures to dramatically increase the detail and realism of objects and surfaces. These additional maps are called normal maps and specular maps.

Normal Maps in a Nutshell

Normal maps (sometimes referred to as bump maps, although more accurately they are the most common form of bump map) are a means of faking high levels of detail on an otherwise bland surface by simulating the bumps and dips that create that detail. Normal maps can be created in several ways.

For example, when working with 3D models, a common method is to make two models of the same object: one a very complex, highly detailed model with a high polygon count, the other a much lower-polygon model with significantly less detail. An overlay process is then used to generate a normal map of the detailed model’s surface features, which can be applied to the less complex model. This gives it the same appearance as the highly detailed model for just a fraction of the polygon count, reducing the amount of intensive processing required to render it.

Using a normal map to enhance the detail on a low-polygon model. The image on the left shows a model of some 4 million triangles. The centre image shows a model with just 500 triangles. The image on the right shows the 500-triangle model with a normal map taken from the model on the left applied to it (credit: Wikipedia)

Another common way to produce a normal map is to generate it directly from a texture file. Most modern 2D and 3D graphics programs provide the means to do this, either directly or through the use of a plug-in (such as the nVidia normal map filter for Photoshop). When combined with diffuse maps, the normal map creates the impression of surface detail far greater than can be achieved through the use of the texture alone.

Normal map from a texture: left – the original texture (diffuse map) and its normal map shown as a split view; right – the material resulting from applying both maps to surfaces inside a game (credit: Valve Corporation)
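For the curious, the sketch below shows roughly what such a filter does under the hood, assuming Python and NumPy: the brightness of a greyscale texture is treated as height, and the slope of that height field is converted into surface normals. The function name and the strength parameter are purely illustrative; real tools such as the nVidia filter offer many more options.

```python
# A minimal sketch of deriving a normal map from a greyscale height
# image (hypothetical names; for illustration only, not any SL tool).
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Convert a 2D array of heights (0..1) into an RGB normal map."""
    # Slope of the height field in x and y (finite differences).
    dy, dx = np.gradient(height)
    # A surface normal leans against the slope, with z pointing "up";
    # strength exaggerates or softens the apparent bumpiness.
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    nx, ny, nz = nx / length, ny / length, nz / length
    # Pack the [-1, 1] components into the [0, 1] range used by
    # normal-map textures (hence their characteristic blue tint).
    return np.stack([(nx + 1) / 2, (ny + 1) / 2, (nz + 1) / 2], axis=-1)
```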

Specular Maps

In the real world, every highlight we see on an object is actually the reflection of a light source. Surfaces and surface details reflect light differently from one another, depending on a range of factors (material, light source position(s), etc.). Specular maps provide a means of simulating this by allowing individual pixels in an object to have different levels of brightness applied to them, giving the illusion of different levels of light being reflected by different points on the object.

When life gives you lemons: a mesh lemon with (l) a normal map applied, and (r) a normal and a specular map together. Note how light is apparently being reflected across the surface of the latter (credit: Mind Test Studios)

Like normal maps, specular maps can be produced in a number of ways, both within 3D graphics modelling programs and in tools like Photoshop. As shown above, they can be combined with normal maps and textures to add detail and realism to 3D models and flat surfaces.
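To tie the three kinds of map together, here is a hedged sketch of the sort of per-pixel calculation a renderer performs when combining them. This is generic Blinn-Phong-style shading, not Linden Lab’s actual shader; every name and number in it is illustrative only.

```python
# Illustrative only: one pixel's diffuse, normal, and specular map
# samples combined with a simple Blinn-Phong lighting model.
import numpy as np

def shade_pixel(diffuse_rgb, normal, spec_level, light_dir, view_dir,
                shininess=32.0):
    """Combine diffuse, normal and specular map samples for one pixel."""
    n = normal / np.linalg.norm(normal)        # from the normal map
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # Diffuse term: the normal-map normal, not the flat surface normal,
    # decides how much light each pixel catches (the "fake bumps").
    diff = max(np.dot(n, l), 0.0)
    # Specular term: a half-vector highlight, scaled per pixel by the
    # specular map so shiny and dull areas can coexist on one surface.
    h = (l + v) / np.linalg.norm(l + v)
    spec = spec_level * max(np.dot(n, h), 0.0) ** shininess
    return diffuse_rgb * diff + spec

# A pixel whose normal tilts toward the light picks up a highlight.
print(shade_pixel(np.array([0.6, 0.4, 0.3]),  # diffuse map sample
                  np.array([0.2, 0.0, 1.0]),  # decoded normal map sample
                  0.8,                        # specular map sample
                  np.array([0.3, 0.3, 1.0]),  # direction to the light
                  np.array([0.0, 0.0, 1.0]))) # direction to the viewer
```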

What Does This Mean for Second Life?

Second Life itself already includes a dynamic example of how normal and specular maps can be used: Linden Water. Here, an animated normal map creates the wave-like movement of the water, while an animated specular map adds the highlights and reflections. The result is a very realistic simulation of moving water able to catch and reflect sunlight.
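As a rough illustration of what “animated” means here, the usual trick is to sample the same map at texture coordinates that drift over time, so the bumps appear to move like waves; the drift speeds below are invented for the example.

```python
# A loose sketch of an animated map: the same normal (or specular) map
# is looked up at coordinates that drift with time. Values are invented.
def animated_uv(u, v, time, drift=(0.03, 0.01)):
    """Offset texture coordinates over time, wrapping at the edges."""
    return (u + drift[0] * time) % 1.0, (v + drift[1] * time) % 1.0

# Each frame, the water surface samples its maps at the shifted
# coordinates rather than the static (u, v).
print(animated_uv(0.5, 0.5, time=10.0))  # -> approximately (0.8, 0.6)
```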

Just as normal and specular maps create a very real illusion of moving water in Linden Water, the new materials processing capabilities will significantly enhance the look and realism of both mesh and prim content within SL. Mesh content should additionally benefit, as it will be possible to produce high levels of detail on models with low polygon counts (as shown in the first image in this article). This will improve rendering performance while also having the potential to lower things like the land impact of in-world mesh items.

The only initial limitation on where and how normal and specular maps can be applied is that they will not be applicable to avatar skins and system-layer clothing. Any decision on whether the materials processing capability should be extended to include these will depend upon at least two things:

  • Community feedback – whether there is a demand for normal and specular maps to be used with avatar skins;
  • Understanding what is happening with the avatar baking process, and determining what is involved in getting the new baking process and materials processing to work together.


13 thoughts on “Materials processing: the what, why and where”

    1. I’ve even seen an article, eons ago, on a creative resident who “hacked” those already-existing normal maps to apply their own to some objects, with impressive results. A pity I cannot find a reference to it, but it was eons ago, and of course there is a limit to how many different bump maps one can apply, so this would only work for very limited examples. I always wondered why LL didn’t “finish the job” by allowing user-generated bump maps in the viewer; they’ve been in it since at least SL 1.4 (released in June 2004!).

      Then again, I suppose these existing bump maps are software-generated (I’m just speculating, I never looked at the code!) while LL very likely will use OpenGL-based normal maps, which will use the GPU’s ability to process them directly.


  1. As with Mesh, I think clothing that can use normal and specular maps will be huge. Imagine, for example, a leather corset. There are some lovely-looking clothing layer corsets out there, as well as some spectacular Mesh corsets. But imagine being able to apply normal and specular maps to the clothing layer corset. Suddenly you have depth: what is in reality a texture mapped to the avatar skin can now look like it isn’t simply painted on. As for the Mesh corset, well, I think your relief example from Wikipedia says it all.


  2. I, too, think mapping the avatar textures would be a *huge* win, and alleviate the need to (mesh) model every piece of clothing.


    1. For sure 🙂 I wrongly assumed this would work on avatars as well. Hmm. Maybe the upcoming server-side baking of avatar textures will allow avatar-side maps, when it’s finished.

      The reason for allowing these maps on avatar textures is actually simple. These days, only high-end content creators are able to design and model rigged meshes for avatar clothing. But avatar clothes used to be very simple to make, if you just used a simple template. The results weren’t overly impressive, of course, but it meant that everybody could design their own T-shirts very easily. This was a source of fun and an encouragement for amateurs to personalise their avatars easily. As flexiprims and later sculpties were introduced, amateurs were left with ever fewer ways to enjoy tinkering with avatar clothing…

      As Inara so well explained, even Photoshop can create normal maps, and some sort of specular maps as well. This would allow amateurs to dust off their old templates, do some processing, and import a few maps which could make simple texture-based clothing look like it has far more realism and detail, thus making amateurs happy again.

      With server-side baking of avatar textures, this would also mean that the current issue of downloading “lots of textures” in order to bake a full avatar would disappear. All the many layers, from alpha channels to the three levels of maps, would be baked on the server, and just three resulting textures distributed. So with Project Shining finished, this would be relatively easy to implement without generating much additional viewer-side processing and/or client-server communication.

      Of course, not to mention the ability to create truly scary skins with disease pocks and wounds oozing pus… hehe


  3. Inara, I just want to thank YOU for all the careful, thorough and thoughtful ways you manage to explain the new and the strange and even the mesh. Some facts, some examples, some maybes – no high-tech jargon, no hyperbole & no harangues.
    I wish your blog was required reading for residents.
    Or maybe LL should hire you as actual Information Minister.

    anyway, sincere thanks and appreciation


  4. There is an interesting thing in your last paragraph. Why couldn’t normal/specular maps be encoded as textures as well, avoiding the need to create more types of assets? A pity that was not explained fully. After all, if you can “encode” a mesh using sculpties (or terrain files), maps are comparatively easier than that.

    I suppose that the only issue might be related to the way image assets are compressed and recompressed and later decompressed in SL, which might create odd results. We used to have lots of oddities with the first generation sculpties because of that, but at some point, LL fixed all the issues.

    Oh well, it’s pointless to speculate (pun intended!).


    1. Which last paragraph? Page 1 I assume :).

      TBH, I amassed a whole raft of questions and info, but had to cut things so that this article didn’t become a major case of tl;dr :). I also wanted to keep to the basics (as I’m going through a rapid learning curve as well!).

      Still… it gives room for a follow-up, I guess, unless all the answers pop up here :).


    2. The texture encoding idea was intended to be more of a hack to get materials working in viewers that would support it. However, once we had the opportunity to just implement them without any weird looking hacks, we opted to just abandon the approach altogether.

      Some of the blocking caveats that would have been involved had we proposed it to LL:
      – Each tweak you made to a material would require you to upload a new texture (imagine paying L$100 for a material because you made 10 “small” tweaks to it)
      – Parsing each material texture asset would have made it impractical in the long run due to parsing overhead (we’d have to figure out what each and every single pixel meant for each and every surface a texture was applied to)
      – We would have had to have some kind of “baking” mechanism to help save people money on uploading (meaning all of your changes would only appear to you until you rebaked the material texture and uploaded it, paying the usual L$10 fee in the process)

      In the end we’re going for something significantly less hackish, and easier for everyone to work with.

