07-17-2020, 09:18 AM
Need bugfix for Convolution Matrix filter. If I use the following:
0 0 0
0 1 1
0 0 0
on an image with one white pixel (RGB=255,255,255) on a black (RGB=0,0,0) background, with normalizing turned on, I expect to end up with a pair of pixels, each at mid-gray (RGB=128,128,128). But that's not what I'm getting. I'm getting a pair of pixels noticeably brighter than mid-gray (RGB=188,188,188).
This may have to do with the mode it's processing in. I think it's converting the normal pixel values (perceptual values) to linear values, performing the convolution on those linear values, and then converting the filtered linear values back to perceptual values. Ordinary pixel values (like those stored in a standard BMP file) are perceptual values, so halving a value makes the pixel look half as bright (it does not actually cut the light output from your monitor in half).
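For illustration, here is a minimal sketch of what I think is happening (this assumes a simple gamma of 2.2, not whatever exact transfer curve the filter really uses, so the number comes out near 188 rather than exactly matching it):

GAMMA = 2.2

def to_linear(v):          # perceptual (0-255) -> linear (0.0-1.0)
    return (v / 255.0) ** GAMMA

def to_perceptual(v):      # linear (0.0-1.0) -> perceptual (0-255)
    return round(255.0 * v ** (1.0 / GAMMA))

white_linear = to_linear(255)        # 1.0
convolved = (white_linear + 0.0) / 2 # kernel row [0 1 1], normalized by its sum (2)
print(to_perceptual(convolved))      # ~186, close to the 188 I'm seeing

# Convolving the perceptual values directly gives what I expect:
print(round((255 + 0) / 2))          # 128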
Most graphics software performs these operations on the perceptual pixel values, and that should be the default unless the user expressly selects a different mode of operation. If software doesn't do this, I consider it buggy.
Since the Convolution Matrix filter gives the user no way to specify how it should perform the operation, it should always operate on perceptual pixel values, as these are the values that get saved to common image file formats such as BMP. An alternative solution would be a check box in the Convolution Matrix filter's dialog box that lets you select whether it operates on linear or perceptual values.