Avoidable loss of color precision

This is not exactly a bug, at least not in PTB, but an observation: I stumbled across an unexpected loss of color precision that seems to occur at the very end of the processing pipeline, when OpenGL writes the color values into the 8bpc output framebuffer. It almost looks as if the values were passed through a 16-bit float bottleneck before being rounded to 8-bit integers. The error can be as high as 12% of a pixel value step (8bpc, Radeon Pro WX 5100). The pattern appears to be independent of the OS (Win10/11, Linux). For 10bpc, multiply the 12% by 4, which is just short of 50% - ouch!
My test program (see below) draws many pixels with random gray-level values and reads the result back from the backbuffer. It then plots the differences between the expected and the observed round-off errors.
This is what I get (plot 1: full value range on the x-axis; plot 2: zoomed in - one dot per sample; Radeon Pro WX 5100):



For example, all values between 127.5 and 128.62 would be rounded to 128, although the values between 128.5 and 128.62 should have been rounded to 129 instead.
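The numbers are consistent with the fp16-bottleneck hypothesis: for normalized values in [0.5, 1), half precision has a mantissa step of 2^-11 ≈ 0.000488, which is about 0.1245 of an 8-bit LSB (the observed ~12%) and about 0.4995 of a 10-bit LSB (the "just short of 50%"). A quick numeric check (plain Python/NumPy, not PTB code; `through_fp16` is my own model of the suspected pipeline tail) even reproduces the 127.5 to 128.62 plateau:

```python
import numpy as np

# fp16 mantissa step for normalized values in [0.5, 1): 2^-11
step = 2.0 ** -11
print(step * 255)    # ~0.1245 -> ~12% of an 8-bit LSB
print(step * 1023)   # ~0.4995 -> ~50% of a 10-bit LSB

def through_fp16(v8):
    """Model of the suspected bottleneck: 8-bit-scale value -> normalized
    half float -> scaled back and rounded to the nearest 8-bit integer."""
    v = float(np.float16(v8 / 255.0))  # quantize to half precision
    return round(v * 255.0)

print(through_fp16(127.5))   # 128
print(through_fp16(128.5))   # 128 -- should be 129
print(through_fp16(128.62))  # 129 -- edge of the observed plateau
```

So a single fp16 quantization between the shader output and the final fixed-point conversion would explain both the magnitude and the shape of the error pattern.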

My test program uses PsychImaging with a FloatingPoint32Bit drawing buffer, but the problem can also occur without PsychImaging (e.g., when using procedural Gabors and such, which leave the rounding to the available color resolution in the framebuffer to later processing stages). I get different error patterns, qualitatively and quantitatively, depending on a couple of factors, like graphics card, drawing buffer format, drawing method, and what not.

Using a temporarily modified 'DisplayColorCorrection' PsychImaging task, one can work around the issue by anticipating the rounding (here to 8bpc). This is done by replacing the line

return (incolor);

with

return (floor(incolor*255.0+0.5)/255.0);

in the shader PsychOpenGL/PsychGLSLShaders/ICMPassThroughShader.frag.txt. Alternatively, one could use the 'EnableCLUTMapping' task, which also outputs rounded pixel values, at least with its default identity CLUT.
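Why pre-rounding helps can be sketched numerically (again plain Python/NumPy, assuming the fp16-bottleneck hypothesis; `fp16_bottleneck` and `shader_preround` are my own models, not PTB code): once the value is snapped to an exact multiple of 1/255 while still in 32-bit float, the subsequent fp16 quantization error (at most ~0.125 LSB at 8bpc) is too small to push it across a rounding boundary.

```python
import math
import numpy as np

def fp16_bottleneck(v, levels=255):
    """Quantize a normalized color value to half precision, then to integer
    levels with round-to-nearest (model of the suspected pipeline tail)."""
    return round(float(np.float16(v)) * levels)

def shader_preround(v, levels=255):
    """The modified ICMPassThroughShader line: round to the target bit
    depth while still in higher-precision float, before the bottleneck."""
    return math.floor(v * levels + 0.5) / levels

x = 0.504                                   # should map to round(0.504*255) = 129
print(fp16_bottleneck(x))                   # 128 -- off by one
print(fp16_bottleneck(shader_preround(x)))  # 129 -- correct again
```

At 10bpc the margin shrinks to ~0.5 LSB, so this reasoning only barely holds there.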

That these fixes work at all shows that the problem must be occurring after the shader. Moreover, reading back the pixel values from the drawbuffer instead of the backbuffer does not show any unexpected precision losses (as can also be verified with PsychTests/HighColorPrecisionDrawingTest).

Currently, I am just wondering where exactly, and why, OpenGL would introduce such unnecessary errors. Or am I missing something - perhaps an error in my test program? On the other hand, these errors would still be well within the specs of OpenGL 4.6 [PDF], according to the section "Conversion from floating-point to normalized fixed-point":

[…] where convert_float_uint(r, b) returns one of the two unsigned binary integer values with exactly b bits which are closest to the floating-point value r (where rounding to nearest is preferred).

Simplified test program:

function TestPsychColorRoundoff_simple()
    clear all
    readbackBuffer = 'backbuffer';       % 'backbuffer' or 'drawbuffer'
    bpc = 8;
    %% --- Set up PTB.
    Screen('Preference', 'VisualDebugLevel', 0);
    Screen('Preference', 'Verbosity', 1);
    Screen('Preference', 'SkipSyncTests', 1);
    PsychDefaultSetup(2);       % use normalized color values for drawing.
    PsychImaging('AddTask', 'General', 'FloatingPoint32Bit');
    if bpc==10
        PsychImaging('AddTask', 'General', 'EnableNative10BitFramebuffer');
    end
    % PsychImaging('AddTask', 'General', 'NormalizedHighresColorRange');
    % PsychImaging('AddTask', 'AllViews', 'EnableCLUTMapping');
    PsychImaging('AddTask', 'FinalFormatting', 'DisplayColorCorrection', 'None');
    wnd = PsychImaging('OpenWindow', 0, 0, [0,0, 1024,480]);
    %% --- Draw pixel data.
    pixIn = rand(480,1024);
    pixIn = reshape(sort(pixIn(:), 'ascend'), size(pixIn));
    [y,x] = ndgrid(0:size(pixIn,1)-1, 0:size(pixIn,2)-1);
    Screen('DrawDots', wnd, [x(:),y(:)]' + 0.5, 1, repmat(pixIn(:)',3,1));
    %% --- Read back pixel data.
    Screen('DrawingFinished', wnd, [], 1);
    pixOut = Screen('GetImage', wnd, [0,0,flip(size(pixIn))], readbackBuffer, 1, 1);
    sca;    % close the onscreen window before plotting
    %% --- Analysis, plotting.
    ivalMax = (2^bpc-1);
    pixIn = ivalMax*pixIn(:);
    pixOut = round(ivalMax*double(pixOut(:)));
    pixExpected = round(pixIn);            
    pixDelta = pixOut-pixExpected;
    rndOffErr = pixOut - 0.5*sign(pixDelta) - pixIn;
    rndOffErr(pixDelta==0) = 0;
    plot(pixIn, rndOffErr, '.r-','MarkerSize',8);
    unit = sprintf(' [LSB@%dbit]', bpc);
    xlabel(['Input pixel value' unit]);
    ylabel(['Round-off error' unit]);
    set(gca, 'xlim',[0,ivalMax]);