Grain Management 101

Most compositors know that the first thing they should do to a shot is de-grain it.  They also know that they have to add grain back at the end before delivery.  A shot delivered without grain is unacceptable work.  And the grain they add back needs to match what they were given.

As a result, there are myriad grain management tools out there.  In my experience, however, they’re often used completely incorrectly.

A good VFX supervisor has a particular kind of check that they do.  It drives compositors crazy.  Especially the ones who don’t know how to handle grain well.

Here is an example:

Three-up of a botched re-grained effects shot. Left: original footage. Middle: effected footage. Right: enhanced difference between the two.

On the left, we have the original plate.  Which is me.  On a poorly lit screen.  Yes, my skin is clipping.  Don’t worry too much about it.

In the middle, we have the delivery.  I have executed an extremely difficult effect.  I’ve put a big white box over the middle.  In the composite, I’ve actually de-grained the footage, done the effect, and then re-grained it.  I used standard NukeX “best of breed” tools for the job.  We’ll take a look in a bit.

On the right, however, is the difference between the two.  The white box of course shows up.  It’s something that changed.  But the problem is that my skin is showing up, and so is anything that’s really bright.  If you look closely, you’ll see the darker bits showing up as noisy specks as well.

As stated, the supervisor would (and should) run this check and reject this.  Realistically, this should be part of a QC regimen.  A good supervisor seeing this result would immediately know what is wrong.  It’s the grain management.  Whoever did this comp doesn’t know how to execute proper grain management.
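For the curious, the check itself is nothing exotic: difference the delivery against the original plate and gain the result way up until any mismatch glows.  Here’s a minimal sketch of the idea in numpy (the function name and gain value are my own, not any standard tool):

```python
import numpy as np

def enhanced_difference(original: np.ndarray, delivered: np.ndarray,
                        gain: float = 50.0) -> np.ndarray:
    """The QC check: absolute difference between the two frames,
    multiplied up so even tiny grain mismatches become visible."""
    return np.abs(delivered - original) * gain
```

Anything non-black outside the effected region means the original grain did not come back intact.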

But wait… didn’t I say I de-grained and re-grained the shot?  Did I get the numbers wrong?  Do I need to do a better “match” and send it back again?

Let’s take a look at what I did.

Compositing workflow of bad grain management.

From top to bottom, I’ve started with the footage.  I’ve de-grained it using a “Denoise” node, which is actually a spectacularly good node in Nuke.  The “effect” is just a Rectangle node.  And the re-grain is handled by an F_ReGrain node, a very special node that’s only available in NukeX, the most expensive version of Nuke.  You’ll notice the re-grain is using the original footage as an input.  That’s because F_ReGrain analyzes the grain of the original footage in order to match it when it re-grains the footage.  In theory, it’s a lot better at doing that than I could be, adjusting a billion different noise parameters.

So this got rejected.  Rightly so.  What should be done?  Do I use something other than F_ReGrain?  There are a lot of parameters to use on F_ReGrain.  Do I adjust them to get it closer?

Let’s look at what we’ve got.

F_ReGrain parameters.

I’ll save you a step.  No.  You don’t try to adjust all the “Advanced” parameters to get it closer.  You might try.  You’ll get rejected again.  The supervisor isn’t going to be fooled.  They want their original grain.  Not something that “looks like” their original grain.

The problem is that F_ReGrain is a grain generator.  It’s creating new grain.  Which can be very useful for full-CG shots.  But it’s not useful for giving the client back their original grain.

Though I’ll point out that a lot of supervisors and compositing supervisors don’t really get this.  It becomes especially problematic when they suddenly must deliver to a senior supervisor (or client) who expects the original grain back.

Maybe, though, someone in your organization knows some mathematics, and you try something like this:

Effective grain management compositing workflow, try #2.

Okay, so here I’ve got the same de-noise node as before.  But I’ve built up a bit more around it.  What I’m doing is subtracting the de-grained footage from the original footage.  That leaves me with the grain as a second output from my de-grain operations.  There’s also a little test logic built in there.  To verify I’ve done it right, I’m adding the grain back on top of the de-grained footage and doing a difference with the original footage, just to make sure I can reverse it correctly.

The reason for that can be seen in the new re-grain section.  I’ve gotten rid of F_ReGrain and replaced it with a simple “plus” node.  Just like I was using with the check earlier.  If I add back the grain I subtracted out earlier, then it should be fine.  All the parts that have not been effected will come out exactly the same as before the de-grain.  Problem solved, right?  Let’s take a look.
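In numpy terms, the whole subtract-and-add scheme is a couple of lines of arithmetic.  Here’s a rough sketch, with synthetic stand-ins for the plate and the de-noise result (all the names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
degrained = rng.random((8, 8)).astype(np.float32)  # stand-in for the de-noised plate
original = (degrained
            + rng.normal(0.0, 0.01, degrained.shape).astype(np.float32))  # plate with "grain"

# Extraction: the grain is whatever the de-noise removed.
grain = original - degrained

# The test logic: adding the grain back onto the de-grained footage
# must reproduce the original (to float precision).
assert np.allclose(degrained + grain, original)
```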

Results of composite try #2. Left: original footage. Center: effected footage. Right: enhanced difference between the two.

This looks like it might be right.  All the skin and background are black.  No change.  The only thing that changed is the box, which is the effect.

But wait.  What if I put in a black box instead of a white box?  Ugh.  No.  The composite breaks down when I re-grain it.  Something isn’t right.  This is what a 1:1 region of the comp looks like:

1:1 zoom of the same effect, where the box’s color is set to black.

So the grain ruined the effect.

Let’s take a look at the grain that we extracted to see why:

Extracted grain.

Okay, so that makes some sense.  The box is black.  The grain that was extracted is not.  When you add the grain to the black, you get a ghost image of me, because I was burned into the grain.  It works great for all the parts of the image that were not effected, though.
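Continuing the earlier sketch, the failure is one line of arithmetic:

```python
import numpy as np

# `degrained` and `grain` as in the earlier sketch.
black_box = np.zeros_like(degrained)   # the "effect": a black box
result = black_box + grain             # naively add back the extracted grain

# Over the box, `result` is just (original - degrained): a faint ghost
# of the plate, because the image content is burned into the grain.
```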

Maybe at this point you’re saying: “Duh. This is kids’ stuff.  I know what you do here.  You use a matte of the ‘effect’ (in this case the box) to patch over the grain.”

Yea, kinda.  There is a mode of operation where you hold out the grain of the effected region and replace it with newly generated grain that’s fairly well “matched” to the original footage.  And no one is the wiser.  Since the un-effected regions are still as they were, you get away with it.

But you can end up dealing with some pretty complex grain edging issues and double graining issues.  It can become a whole parallel composite to your main composite if you’re not careful.
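For completeness, the patch itself is just a matte-weighted mix.  A hedged sketch (the function name and matte convention are mine; the generated grain would come from a grain generator like F_ReGrain):

```python
import numpy as np

def patch_grain(original_grain: np.ndarray,
                generated_grain: np.ndarray,
                matte: np.ndarray) -> np.ndarray:
    """Keep the real extracted grain where it's usable, and patch in
    generated grain only where the effect destroyed it. `matte` is
    assumed to be 1.0 inside the effect, 0.0 outside, soft at the edges."""
    return original_grain * (1.0 - matte) + generated_grain * matte
```

The edging and double-graining issues live exactly in that soft-matte zone, which is why this approach can balloon.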

Wouldn’t it be really nice if you could actually capture the original grain structure without the image burned into it, so you could just use it?  It would.  You can.

I call it grain-normalization.  I’ve not heard it called anything else in particular.  I don’t think I’ve invented it.  But I need to call it something.  Here’s what it looks like.

Grain normalization compositing workflow. Note: the Gamma node is connected to the wrong input. It should come from the de-grained footage, as described below, rather than from the original footage, as shown.

Here’s how it works:

First, I’m using a grade node to subtly lift the image.  This is because I have some values that are coming in blacker than black (negative).  Sensor noise and grain technically can be negative.  But the image should not be.  Basically, once you’ve de-grained (or de-noised) the footage, you should not have negative values.  The lift is making that adjustment for what is obviously a mis-calibrated capture system.  There is a matching grade node at the end that does the inverse, to put the footage back where it was, even if where it was is technically mis-calibrated.  It’s not our place in VFX to “fix” that problem for them.  It’s our place to fix it just well enough to do our work, and then give it back as it was.
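A Grade node’s lift isn’t strictly an additive offset, but as a first approximation the bracket looks like this (the offset value is a made-up example; you’d tune it to just clear the negatives in your plate):

```python
import numpy as np

LIFT = 0.002  # example value only; just enough to clear the negatives

def lift_blacks(img: np.ndarray, lift: float = LIFT) -> np.ndarray:
    """Raise the black point so the de-grained plate never goes negative."""
    return img + lift

def unlift_blacks(img: np.ndarray, lift: float = LIFT) -> np.ndarray:
    """Exact inverse at the end: hand the plate back exactly as
    mis-calibrated as it arrived."""
    return img - lift
```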

Anyhow, once the black-point is handled, we de-grain as we did before.  Again, Nuke’s “Denoise” is just a great node.

The footage heads down through the effect, and then the re-grain again.  Nothing is different there.

What is different is how we handle our grain before we apply it.  I have an explicit “normalization” section.  I’m taking the de-grained footage and running it through a gamma node, which applies a power function (the same math as a color-space gamma).  I’m then dividing the grain by it.  Then I tweak the gamma parameter until the grain looks the “flattest” that it can.  Like so:

Three gamma settings for normalizing grain. Left: too low. Middle: just right. Right: too high.

Here I’ve got three normalized grains with different settings for the gamma value.  The one on the left is too low, so my black shirt comes in brighter than the green screen.  The one on the right is too high, so my black shirt comes in darker than the green screen.  The one in the middle is just about right, as nearly everything balances out neutral.
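In formula terms, the normalization is a divide by a power of the de-grained plate.  A minimal numpy sketch (the epsilon guard is my addition, to keep true blacks from dividing by zero; whether a given gamma node’s knob maps to the exponent or its reciprocal varies, so the sketch just uses a direct power):

```python
import numpy as np

def normalize_grain(grain: np.ndarray, degrained: np.ndarray,
                    gamma: float) -> np.ndarray:
    """Divide the extracted grain by degrained**gamma. With the right
    gamma, the image content flattens out of the grain, leaving
    nearly pure grain structure."""
    eps = 1e-6  # the lift upstream keeps values >= 0, but not > 0
    return grain / np.maximum(degrained ** gamma, eps)
```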

You can still see some high frequency artifacts.  Basically, edges.  And we’ll talk about those later.

This grain is effectively normalized now.  It’s the original grain.  But it’s got nearly no image burned into it.  It’s “clean.”

What we need to do in order to use it is de-normalize it by the newly effected image.  We need to dirty it again.  But we dirty it by what we intend to apply it to.  That’s what happens in the grain-denormalization section.

Simply put, we invert the normalization operations with different inputs.  Now, rather than gamma the de-grained footage, we instead gamma the effected footage (which is still de-grained, of course).  Then, rather than divide, we multiply, which produces the de-normalized grain.

This de-normalized grain is what we send into the re-grain section of the composite.
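Continuing the sketch above, de-normalization is the same power function driven by the effected footage, followed by a multiply.  The key property falls straight out of the math: wherever the effected footage equals the de-grained plate, the divide and the multiply cancel, and the original grain comes back untouched.

```python
import numpy as np

def denormalize_grain(normalized_grain: np.ndarray, effected: np.ndarray,
                      gamma: float) -> np.ndarray:
    """Multiply by effected**gamma: the inverse of the normalization
    divide, but driven by the (still de-grained) effected footage."""
    return normalized_grain * (effected ** gamma)

# End to end, using normalize_grain() from the sketch above:
#   grain      = original - degrained
#   norm_grain = normalize_grain(grain, degrained, gamma)
#   out_grain  = denormalize_grain(norm_grain, effected, gamma)
#   final      = effected + out_grain   # additive re-grain, as before
# Wherever effected == degrained, out_grain == grain, so the
# un-effected pixels get their original grain back (to float precision).
```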

And the result:

Final grain-normalization-based composite. Left: original footage. Middle: delivered footage. Right: enhanced difference of the two.

The difference on the right now shows what you’d want to see.  The effected region is different, but the un-effected region is identical to what was originally there.  I’ve made the box grey this time to bring out the only artifact still in the grain.  If you look extremely closely, you can still see some of the edges of my shirt and shoulders in the grain structure of the box.

What are they?  Probably spatial artifacts of the sensor.  A de-convolve might reduce them further.  They extend past the edges into the interior and are also exterior to the object.  Basically, they’re ringing artifacts.  It may not technically be correct to remove them altogether.  In fact, proper grain generation probably requires that you create them for any object that’s new to the image.  Though they’re very hard to see in footage.

Depending on what this effect really is, maybe you’re fine.  Maybe you’re not.  If all you sought to do was key out the screen, you probably don’t need to do any patching of the grain.  The screen effectively has no detail.  So there’s no spatial grain artifacting to remove there.  You can just use it and feel pretty spiffy about not just graining the new background, but actually putting the original grain back nearly perfectly.  I’ve executed plenty of composites that fell into that category.

If, on the other hand, you really are patching over the top of the image to the point that the spatial artifacting is a problem, then you probably do need to generate grain and patch it in.  But likely only in the areas where you really can’t get away with the original grain structure.

I’d be willing to bet that in my middle-grey box, which is probably one of the worst possible scenarios, you can barely see the artifacts through the web compression.  They’re very subtle.

Anyhow, it’s trivial to hold out, or patch over, the de-normalized grain just before you re-grain the final composite.  I’ll leave that to you if you really need to do it.

From now on, I never want to get a shot that has unnecessarily wrecked the original grain of the plate.

Class is dismissed.