16 bit versus 8 bit Masks

Posted: April 5th, 2020, 3:10 pm
by Marpel
After some comments elsewhere on 16 bit masking, I decided to run a quick test to see if there is a noticeable difference between 8 bit and 16 bit masks. Not sure if this comparison is valid or not.

I took an image with some deep blacks (I don't know if that makes any difference) and converted it to 16 bit black and white and to 8 bit black and white. I then did a Composite on the two with Absolute Difference. Initially, I could see no visible difference between the two. However, when I applied Levels and Colour, Full Dynamic Range, the result was a grainy, sand-like image (it would actually make a good mask for introducing grain into an image) with some anomalies in the darkest black areas. I did the same thing with a 48 bit black-to-white gradient image and, although it was much less obvious, the result again showed only a speckled "grain" difference. I was expecting banding, especially in the gradient image, but the result was nothing like that.
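The experiment above can be simulated numerically. This is a hedged sketch, not the forum software's actual pipeline: it builds a 16 bit gradient, makes an 8 bit copy, takes the absolute difference, and then stretches that difference to full range (analogous to Levels and Colour, Full Dynamic Range). The 257 scale factor is simply 65535/255, the standard mapping between 8 bit and 16 bit ranges.

```python
import numpy as np

# 16-bit black-to-white gradient, one sample per 16-bit level
img16 = np.linspace(0, 65535, 65536, dtype=np.float64)

# 8-bit version: quantize to 256 levels, then scale back to 16-bit range
# (65535 / 255 = 257 exactly)
img8 = np.round(img16 / 257.0).astype(np.uint8)
img8_as16 = img8.astype(np.float64) * 257.0

# Absolute Difference composite
diff = np.abs(img16 - img8_as16)

# The raw difference is tiny relative to full scale, so it looks black...
print(diff.max() / 65535.0)  # well under 1/256 of full scale

# ...until it is stretched to full dynamic range, which reveals the
# quantization residue as the "grain" described above
stretched = diff / diff.max() * 65535.0
```

The stretched result is the sawtooth-like quantization residue; on a real photograph the same residue lands on irregular pixel values, which is why it reads as grain rather than banding.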

So, what does this graininess actually represent, and how does a 16 bit versus 8 bit mask impact the final outcome of an image?

Marv

Re: 16 bit versus 8 bit Masks

Posted: April 5th, 2020, 3:58 pm
by jsachs
I would expect the difference to be more or less random noise, assuming you start with a 48-bit color image, convert it to 16-bit and to 8-bit black and white, and then look at the difference.

Since both B&W images are derived from the same color image, the difference between them should be at most about 1/256, or roughly 0.4%, of full scale, which is why you need to amplify the difference to see it.
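The 1/256 bound can be checked directly. This is an illustrative sketch under the assumption that both conversions round to the nearest representable level: on normalized tone values, the worst-case gap between an 8-bit and a 16-bit quantization of the same data works out to about half an 8-bit step, comfortably under 1/256.

```python
import numpy as np

# Normalized tone values sampled densely across the full range
x = np.linspace(0.0, 1.0, 100001)

q8 = np.round(x * 255) / 255        # 8-bit quantization
q16 = np.round(x * 65535) / 65535   # 16-bit quantization

# Worst-case disagreement between the two representations
err = np.abs(q8 - q16)
print(err.max())  # about 1/510, below the 1/256 (~0.4%) bound
```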

Re: 16 bit versus 8 bit Masks

Posted: April 5th, 2020, 6:44 pm
by Marpel
Thanks for the explanation. It sort of makes me wonder why some people are so adamant about 16 bit masks.

Marv