DisplayPixelsProc and 16 Bit image

From what I have seen in the SDK's examples (Dissolve.8BF), DisplayPixelsProc accepts 8-bit data only.
This means 16-bit data must be converted to 8-bit before calling DisplayPixelsProc. DisplayPixelsProc then converts the data to the monitor's colorspace and displays it.
This method doesn't work correctly for 16-bit linear RGB data: information in the deep shadows is lost, and the result is visible color banding there.
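To make the shadow banding concrete, here is a quick back-of-the-envelope check (my own illustration, not from the SDK): it counts how many 8-bit codes are left for everything more than six stops below full scale (values under 1/64) when the 8-bit codes are spaced linearly versus through a plain 2.2 gamma curve.

#include <cmath>
#include <cstdio>

int main() {
    int linearCodes = 0, gammaCodes = 0;
    for (int c = 0; c < 256; ++c) {
        // Code value read as linear light: c/255 is the displayed intensity.
        if (c / 255.0 < 1.0 / 64.0) ++linearCodes;
        // Code value read through a 2.2 gamma curve before display.
        if (std::pow(c / 255.0, 2.2) < 1.0 / 64.0) ++gammaCodes;
    }
    // Prints: codes below 1/64 full scale: linear=4 gamma=39
    std::printf("codes below 1/64 full scale: linear=%d gamma=%d\n",
                linearCodes, gammaCodes);
}

Four codes for the entire deep-shadow region (versus 39 with gamma encoding) is exactly where the banding comes from.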
Is there a recommended way to handle this problem? Must I convert the image's colorspace myself? Must I write my own display methods, or can the colorspace handling of DisplayPixelsProc be disabled?
TIA,
Peter
Well, yeah - 8 bits/channel is crappy for linear (gamma 1.0) data.
You can use the colorspace conversion and depth conversion callback routines.
Or you can do it yourself.
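One way to "do it yourself" is to fold a display transfer curve into the 16-to-8-bit reduction before handing the buffer to DisplayPixelsProc, so the 8-bit codes are spent where the eye needs them. A minimal sketch, assuming linear source data in Photoshop's 0..32768 16-bit range and a plain 2.2 display gamma; buildDisplayLut and encodeRowForDisplay are hypothetical helpers of my own, not SDK calls:

#include <cstdint>
#include <cmath>

static std::uint8_t gDisplayLut[32769];  // one entry per 16-bit level (0..32768)

// Build a LUT mapping linear 16-bit levels to gamma-encoded 8-bit levels.
void buildDisplayLut() {
    for (int v = 0; v <= 32768; ++v) {
        double linear  = v / 32768.0;                  // normalize to 0..1
        double encoded = std::pow(linear, 1.0 / 2.2);  // apply display gamma
        gDisplayLut[v] = static_cast<std::uint8_t>(encoded * 255.0 + 0.5);
    }
}

// Convert one row of 16-bit linear samples to display-ready 8-bit samples.
void encodeRowForDisplay(const std::uint16_t* src, std::uint8_t* dst, int count) {
    for (int i = 0; i < count; ++i)
        dst[i] = gDisplayLut[src[i]];
}

The encoded 8-bit rows are then what you describe in the pixel map you pass to DisplayPixelsProc; adding a little dither during the quantization step would reduce the residual banding further.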
 