Convert 8 to 16 bit for processing?

Posted: June 27th, 2021, 10:52 am
by tomczak
If an input image happens to be 8/24-bit and some radical adjustments are needed, does it make sense to first convert it to 16/48-bit before operating on it? I have been doing this routinely, but the background calculations are done in higher precision anyway, so I'm not sure whether it makes a theoretical difference or, probably less so, a practical one, and what the round-off errors could be.

Re: Convert 8 to 16 bit for processing?

Posted: June 27th, 2021, 10:58 am
by jsachs
The advantage of converting to 16-bit (using Convert) is that the low-order bits are filled in with random data rather than just zeros. This makes for smoother gradients as it fills in the gaps between adjacent 1/255 steps in brightness. The difference may or may not be visible, depending on the transformations you apply subsequently.
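
A minimal NumPy sketch of the idea described above (an illustration only, not Picture Window's actual Convert code; convert_8_to_16 is a hypothetical helper):

```python
import numpy as np

def convert_8_to_16(img8, randomize=True):
    """Widen an 8-bit-per-channel image to 16 bits per channel.

    Each 8-bit value becomes the high byte of the 16-bit result.
    With randomize=True the new low byte is filled with uniform
    random bits instead of zeros, so adjacent 8-bit levels blend
    into each other rather than leaving hard quantization gaps.
    """
    img16 = img8.astype(np.uint16) << 8      # 8-bit value -> high byte
    if randomize:
        rng = np.random.default_rng()
        # Fill the low-order byte with random data rather than zeros.
        img16 |= rng.integers(0, 256, size=img8.shape, dtype=np.uint16)
    return img16
```

Note that truncating back down (img16 >> 8) recovers the original 8-bit values exactly; the random bits only matter once later edits stretch the tonal range enough to expose the gaps.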

Re: Convert 8 to 16 bit for processing?

Posted: June 27th, 2021, 8:30 pm
by doug
Since 8 bits gives 256 discrete tones or colors and 16 bits gives 65,536, does this suggest that 99.6% of the resulting 16-bit file represents "random data"?

Re: Convert 8 to 16 bit for processing?

Posted: June 27th, 2021, 9:04 pm
by jsachs
The file size only doubles, so it is more accurate to say that half the data is random. But even a big change in the least-significant bits is unlikely to be visible in the image.
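
To put rough numbers on that (my arithmetic, assuming the low-byte-filling scheme sketched earlier in the thread): half of the bits are random, but the random byte can shift any value by less than one original 8-bit quantization step, about 0.4% of full scale:

```python
# Largest offset the random low byte can add vs. one 8-bit step,
# both expressed as a fraction of the 16-bit full scale (65535).
noise_max = 255 / 65535   # maximum value of the random low byte
step_8bit = 256 / 65535   # spacing between adjacent 8-bit levels
print(f"max random offset: {noise_max:.4%}, 8-bit step: {step_8bit:.4%}")
# -> max random offset: 0.3891%, 8-bit step: 0.3906%
```

So the noise always stays within the gap between adjacent 8-bit levels, which is why it can smooth gradients without visibly changing any pixel.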