Texturing and modeling are now very much intertwined. One only needs to look at ZBrush or Mudbox to see the reality of high-density asset generation.
The secret to good photo-real rendering is in the math and science of the shader and how it renders.
The problem is that, most of the time, the person you want doing the modeling is not the person you want doing the shading.
It makes no sense to disqualify an extremely talented asset artist because he or she doesn’t understand the intricacies of color gamut mapping, or the proper definition of a bidirectional reflectance distribution function (BRDF).
Likewise, you don’t want to have to increase the number of shader writers on your team in proportion to the number of assets that need to be created.
And lastly, it would be nice if you could hire your lighters based on their ability to light, rather than their ability to texture and/or shade.
This all sounds like it makes perfect sense. But it’s not how 3D software has generally been engineered over the years. Nor is it how the tasks are often divided up in an “out of the box” 3D pipeline.
Typically, shaders include textures and parameters, and are applied by modelers. They are then modified by lighters/renderers. And much chaos and inefficiency ensues as the shaders propagate from scene to scene and model to model, and get horribly muddled and overwritten as assets are versioned up.
The problem rears its head most clearly when you want to propagate a new version of a “car paint” shader to every car in a video-game cinematic. Every car has 10+ shaders in the model, each with a different texture to handle the racing livery for that piece of geometry. There are 10 cars. That’s 100 shaders that need to be opened up and have their textures and parameters saved somehow. Then those saved textures and parameters need to be applied to a fresh copy of the new car paint shader, and that copy assigned back to the geometry. That is a very time-consuming manual labor task, riddled with the potential for mistakes. Distributing the work to a lot of people is a good idea, but it can be difficult to accomplish. And how do you keep track of which version of the shader is currently in use anyway? Which parameters should be saved, and which are part of the new shader? Is anyone currently working on the model? Because you’ll want to upgrade their working file too if they haven’t published it.
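For scale, here’s a rough sketch of the bookkeeping that propagation job implies. Everything in it is hypothetical (the scene structure, the shader names, the parameter fields); it’s just pure Python modeling the save-and-reapply loop, not any particular 3D package’s API.

```python
# Hypothetical in-memory model of the problem: 10 cars, each with 10+ shader
# assignments, every one carrying its own livery texture and hand-tweaked values.
OLD_CARPAINT = "carPaint_v012"
NEW_CARPAINT = "carPaint_v013"

def propagate_carpaint(scene):
    """Rebuild every car-paint shader from the new version while keeping the
    per-assignment textures and parameter overrides that were set by hand."""
    for car in scene["cars"]:
        for assignment in car["shader_assignments"]:
            old = assignment["shader"]
            if old["base"] != OLD_CARPAINT:
                continue  # not a car-paint shader; leave it alone

            # 1. Save the bits that belong to this asset, not to the shader.
            saved_textures = dict(old["textures"])    # e.g. the livery maps
            saved_overrides = dict(old["overrides"])  # hand-tweaked parameters

            # 2. Make a fresh copy of the new shader version.
            new = {
                "base": NEW_CARPAINT,
                "textures": saved_textures,
                "overrides": saved_overrides,  # assuming the names still exist...
            }

            # 3. Re-assign it to the same geometry.
            assignment["shader"] = new
    return scene
```

Multiply that inner loop by a hundred assignments, do it by hand in a GUI instead of a script, add the question of which overrides even still exist on the new shader, and it’s obvious where the mistakes creep in.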
Newer “changeset”-styled asset systems are attempting to fix this in a general sense. But they’re often buggy, untested, and not actually used in production.
This nightmare scenario isn’t, and wasn’t, always the case. The facilities that were using RenderMan got around this problem for a long time, because their RSL shaders showed up as single nodes. The code on the inside of the shader could easily be updated across a show with a little effort. Is it a coincidence that the majority of the early VFX facilities did their work in RenderMan? Is that why Maya and other 3D programs never had to fix this problem? Because the high-end users didn’t actually use Maya’s shading and rendering tools fully?
These days, a facility using something like V-Ray also has a decent chance of avoiding the problem. That’s because V-Ray exposes its main shader as a single, mammoth node that doesn’t change. So there’s no real shader development to be done. Though there are still problems caused by having so many parameters to tweak. And when people do start getting fancy with shader nodes, the solution fails.
However, the real way to handle the problem is to put some effort into software engineering a separation of textures and parameters from shaders. Not because it’s necessary for the renderer, but because it’s better for how a team should be working.
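In concrete terms, the separation just means the per-asset data lives with the asset, and the shader is only referenced by name. A hypothetical “look” description for one piece of car geometry might be nothing more than this (all of the names below are made up for illustration):

```python
# Hypothetical per-geometry look data: everything the asset owns, plus a
# reference, by name only, to a shader it does not own.
hood_look = {
    "geometry": "car07/body/hood",
    "shader": "carPaint",            # no version, no code, no defaults baked in
    "textures": {
        "livery": "textures/car07/hood_livery_v004.tx",
    },
    "parameters": {
        "clearcoat_weight": 0.8,
        "flake_density": 0.35,
    },
}
```

The shader itself, the actual BRDF code and its default values, lives somewhere else entirely, under version control, owned by the shader writers.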
That kind of system is something I built a long time ago for a proprietary rendering pipeline at PLF. In that pipeline, textures, colors, and other shading parameters were attached to a piece of geometry… not to a shader. You’d also attach a shader name to the geometry. The system would then, on command, bring in a versioned shader library and copy the named shaders onto the objects, dynamically attaching the textures, colors, and parameters. This worked well enough, though it did create some overhead. That overhead was far outweighed by the benefit of the team working together rather than causing problems for one another.
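The bind step in a pipeline like that is conceptually simple. This isn’t the PLF code, just a sketch of the idea: on command, resolve each geometry’s shader name against a versioned shader library, instantiate it, and lay the geometry’s own textures and parameters over the shader’s defaults. The data shapes match the hypothetical `hood_look` above; the library is assumed to map a (shader name, version) pair to its default parameters.

```python
def bind_looks(geometries, shader_library, library_version):
    """Resolve per-geometry look data against a versioned shader library.

    `geometries` is a list of look dicts like `hood_look`; `shader_library`
    maps (shader_name, version) to a dict of default parameter values. Both
    are hypothetical stand-ins for whatever the pipeline actually stores.
    """
    bound = []
    for geo in geometries:
        defaults = shader_library[(geo["shader"], library_version)]

        # Shader defaults first, then the asset's own overrides on top.
        params = dict(defaults)
        params.update(geo["parameters"])

        bound.append({
            "geometry": geo["geometry"],
            "shader": geo["shader"],
            "version": library_version,
            "textures": dict(geo["textures"]),
            "parameters": params,
        })
    return bound

# Rolling the whole show onto a new shader release becomes one call with a
# new version string, e.g.:
#   bound = bind_looks(all_car_looks, shader_library, "v013")
```

The overhead mentioned above is exactly that bind pass. The payoff is that the shader writer and the asset artist never have to touch the same file.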
A system like Katana is what a BIG facility like Sony Pictures Imageworks uses to fix these problems.
These days, I’m once again staring down the barrel of this problem and wondering what to do about it.
I could just use V-Ray. I could try to free up funds for Katana. I could develop a system not unlike my old PLF one.
What I’m going to do first, however, is take a good, long, hard look at Substance from Allegorithmic. Yeah… I’ve got an idea how I might use it to solve the problem for me, across a few different render platforms and solutions, in a very comprehensive manner.
But one thing is for certain. It makes no sense to ignore the problem. That’s just a waste of money and time.