Gamma correction - missing gray levels

Due to the gamma correction, some gray levels are lost. For example, if I gamma encode 255, and then decode, I obtain 255.
However, if I gamma encode 254, and then decode, I obtain 255, not 254.
Therefore, gray level 254 is lost due to the gamma compression.
This happens because the non-linearity of the gamma encoding interacts badly with the subsequent quantization step.

So if I do the gamma correction programmatically, let's say using MATLAB, and then display the image, I lose some gray values, because the plotting function will quantize the result back to 8 bits.
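Here is a minimal MATLAB illustration of what I mean, just assuming a gamma of 2.2 for the example:

g = 2.2;
x = 254;
enc = round((x/255)^(1/g) * 255)   % gives 255 - the same code as input level 255
dec = round((enc/255)^g * 255)     % gives 255, so gray level 254 can no longer be produced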

Can Psychtoolbox solve this issue? I mean, I would like to quantize only after the gamma decoding; if I quantize first, I will lose some gray levels, and I don't want to lose gray levels.

Alternatively, I would like to disable the monitor gamma altogether. LCDs are linear and don't need any gamma. With a gamma of 1, I wouldn't have this problem of losing gray levels anymore.

For example, some monitors can be configured to change the gamma to 1. I can do this using the buttons on the monitor, or using the NVIDIA control panel. In these cases, do I still suffer from the problem of losing gray values?

Can Psychtoolbox also do this?

Best regards,
André Amorim

I have picked your specific problem for “free support on resolving a non-trivial issue”, sponsored by Mathworks Neuroscience - MATLAB and Simulink Solutions - MATLAB & Simulink. You get advice and help on this topic for free. Mathworks provided us with 5 such “Jokers” for free support in the time period October 2023 to June 2024, and you get to use the 4th one out of five.

That's not how you would do it, though. Done manually like that, you'd indeed lose precision and go below an effective 8 bpc due to the non-linear compression.

Instead you use the Psychtoolbox function …
Screen('LoadNormalizedGammatable', window, lut);
… to upload a suitable gamma correction lookup table lut. The optimal size of the lut depends on the operating system and graphics card, but in general it is an n-rows-by-3-columns matrix, with columns 1, 2 and 3 encoding the gamma curves for the red, green and blue channels. The number of rows n depends on the bit depth / precision of your framebuffer: e.g., 256 rows for a standard 8 bpc framebuffer with 256 levels of red, green and blue, or potentially 1024 rows for a 10 bpc framebuffer - except on MS-Windows, where it is always 256 rows, regardless of framebuffer bit depth. Values between 0.0 and 1.0, for 0% to 100% output intensity, are specified in the lut. The operating system and display driver will translate that into the highest precision possible on your graphics card.
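As an illustration only (not a calibration recipe), a 256-row lut implementing a simple power-law correction for an assumed display gamma of 2.2 could be built and uploaded like this, where window is your onscreen window handle:

gamma = 2.2;                                            % placeholder - use your own display measurements
n = 256;                                                % 256 rows for a standard 8 bpc framebuffer
lut = repmat(linspace(0, 1, n)' .^ (1 / gamma), 1, 3);  % same correction curve for R, G and B
oldLut = Screen('ReadNormalizedGammaTable', window);    % keep the old table so it can be restored
Screen('LoadNormalizedGammaTable', window, lut);
% ... run your experiment ...
Screen('LoadNormalizedGammaTable', window, oldLut);     % restore the original table at the end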

Nowadays, on current common hardware from AMD, Intel or NVidia, the output precision of the lut is typically 10 or 12, sometimes even 14 bits per color channel. So the hardware gamma table will map your 8 bpc / 256-level input to a finer grained 10 bit (1024 levels) or 12 bit (4096 levels) gamma corrected output. Due to the extra 2-4 bits of output precision, the non-linear compression is "compensated", so the effective precision loss on the actual video output is lower or "non-existent" - in theory.
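A quick numeric sketch of why the extra output bits help, again just assuming a gamma of 2.2: two adjacent 8 bpc input levels that collapse onto the same 8 bit output code stay distinguishable with 10 bit output quantization:

g = 2.2;
v = [254 255] / 255;       % two adjacent input gray levels
round(v .^ (1/g) * 255)    % 8 bit output:  [255 255]   - the two levels collide
round(v .^ (1/g) * 1023)   % 10 bit output: [1021 1023] - still distinct codes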

In practice, the actual type of graphics card, selected video output mode (resolution * refresh rate), type and quality of video cable, and limitations of your display device will reduce the output precision again to less than 10 or 12 bits.

E.g., if you connect a good old analog VGA driven CRT monitor to a modern AMD or NVidia graphics card via a native analog VGA output (not an active DVI/DisplayPort/HDMI converter), then the output bit depth of the digital-to-analog converters (DACs) is usually 10 bits. Using the LoadNormalizedGammaTable lut will then provide 2 effective extra bits, from 8 bpc → 10 bpc, on the output / CRT display side, reducing precision loss due to gamma correction.

On a DVI-D connected display, an 8 bpc framebuffer will map to 10 or 12 bpc gamma corrected output from the lut, but then get truncated down again to 8 bpc when transmitted over the DVI cable. However, the graphics card will usually apply a digital display dithering technique such as spatial or spatio-temporal dithering to perceptually retain some or all of the 10 or 12 bpc precision on the output side, taking advantage of the limited spatial and temporal resolution of your eyes and their low-pass filtering / blurring properties - provided your display is set up and calibrated suitably, and depending on your specific stimuli.

On an HDMI or DisplayPort "deep color" capable display with suitable HDMI or DisplayPort cables, the system can or will output 10 bpc or 12 bpc video signals for the display to process. It will - often by default - apply dithering as necessary to simulate 10 or 12 bpc precision on a less-than-10-or-12-bpc video link. E.g., when the link is running at 8 bpc, it may simulate 2 or 4 extra bits via dithering to fake 10 or 12 bpc. On a 10 bpc link, it may simulate 2 extra bits to fake 12 bpc. A 12 bpc link will carry the signal without dithering.

Note that even higher precision 10 or 12 bpc displays may have to reduce the precision of the actual video signal to less than their supported maximum if the graphics card, cabling, or display is not capable of reliably transferring the large amount of video data at high precision when one selects high video output resolutions or high video refresh rates. In that case, dithering usually kicks in again to fake some extra precision over the lower precision link. Similarly, a multi-display setup may require the graphics card to reduce output precision if multiple displays run at high resolutions and refresh rates.

If you have a high quality high precision monitor, its onscreen display will probably tell you somewhere what the actual precision of the input video signal is.

Such color proofing monitors, used for press pre-print, photo editing, or movie post production, sometimes have their own built-in color calibration and gamma correction, so one could set the graphics card's gamma table to an identity passthrough ramp and leave the correction to the monitor's built-in tables.

I don't think they are linear, at least not to my knowledge. But usually the non-linearities, which differ from a typical classic gamma curve, get corrected by the panel's display controller into the "typical" gamma curves that operating systems, at their default "no gamma correction" settings, expect for typical image content - like your average JPEG photo, which may be pre-corrected for a gamma 2.2 monitor. See for example https://en.wikipedia.org/wiki/SRGB about the sRGB standard and its gamma-like curve.
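For reference, the sRGB transfer function mentioned there is not a pure power law but a piecewise curve (a linear toe plus a segment with exponent 2.4), which overall approximates a gamma of about 2.2. In MATLAB terms, decoding an sRGB encoded value c in [0,1] back to linear light looks like this, with the constants taken from the sRGB standard:

srgb2lin = @(c) (c <= 0.04045) .* (c ./ 12.92) + (c > 0.04045) .* (((c + 0.055) ./ 1.055) .^ 2.4);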

That said, if your monitor is good enough to get sort of mostly linear, it could make sense to preset it to a ~1.0 gamma and then do calibration and “gamma correction” for that, to maybe avoid some gamma compression and retain more precision. Depends on the monitor, I guess…

It can't remote-control the monitor's settings; that's what you do via the buttons or an on-screen display menu, etc. But if you had a high quality color proofing monitor, you could use the manufacturer recommended or provided calibration procedures, maybe even some high quality factory calibration, to get it linear, and then set identity gamma tables via the 'LoadNormalizedGammaTable' command, to pass values through from the Psychtoolbox framebuffer to the monitor and leave the correction task to the monitor.
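A sketch of that identity passthrough setup, assuming a standard 256-row table and window being your onscreen window handle (Psychtoolbox also provides the LoadIdentityClut() helper for this purpose):

% Identity (passthrough) ramp: framebuffer values go unmodified to the monitor,
% which then applies its own built-in calibration / gamma tables:
identityLut = repmat(linspace(0, 1, 256)', 1, 3);
Screen('LoadNormalizedGammaTable', window, identityLut);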

Apart from gamma correction, Psychtoolbox does support various high color precision output modes for the stimulus framebuffer itself, depending on operating system and graphics card. E.g., you can select a 10 bpc framebuffer under Linux with modern AMD, NVidia and Intel graphics cards (as in "released since 2007 to 2010", i.e., no older than 14 - 17 years). The XOrgConfCreator script helps you set up those 10 bpc framebuffers. On modern AMD graphics cards under Linux, you can even get 12 bpc framebuffers with proper setup, using AMD's amdvlk driver and the Psychtoolbox Vulkan display backend.
Sometimes this is also possible on MS-Windows, often only with substantially more expensive NVidia or AMD "Pro" graphics cards, where the vendor makes you pay for the same things you get for free on Linux. The 10 bpc modes are usually also available for HDR-10 display modes on HDR displays. Some 10 bpc modes are even supported on Apple macOS on some Intel Macs with some displays, albeit with massive loss of performance and complete loss of visual timing precision.
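A minimal sketch of requesting such a 10 bpc framebuffer via the imaging pipeline (whether it actually takes effect depends on operating system, GPU and driver setup, as described above):

PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'EnableNative10BitFramebuffer');
[window, windowRect] = PsychImaging('OpenWindow', max(Screen('Screens')), 0.5);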

When Psychtoolbox is configured properly, this allows rendering with effectively 23 bpc linear precision, post-processing and gamma correcting at that precision using our PsychColorCorrection() functions, and then outputting at 10 or 12 bpc, potentially with dithering applied for displays with less than 10 or 12 bpc precision.
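A sketch of such a high precision pipeline with built-in gamma correction; the 2.2 gamma value is just a placeholder, and the demos mentioned below show complete, tested setups:

PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible');                 % high precision drawing
PsychImaging('AddTask', 'FinalFormatting', 'DisplayColorCorrection', 'SimpleGamma'); % gamma correction in the pipeline
window = PsychImaging('OpenWindow', max(Screen('Screens')), 0.5);
PsychColorCorrection('SetEncodingGamma', window, 1 / 2.2);                          % correct for an assumed 2.2 display gamma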

There are also high precision visual stimulator devices from VPixx or Cambridge Research Systems supported by Psychtoolbox, for native 14 or 16 bpc output, and various less expensive devices like the Xiangrui Li et al. VideoSwitcher for driving analog CRT monitors at 12 bpc precision, in monochromatic/grayscale mode only.

Good Psychtoolbox demos for seeing what is supported are AdditiveBlendingForLinearSuperpositionTutorial.m and BitsPlusCSFDemo.m. They cover most high precision display and gamma correction approaches, for different types of operating systems, hardware, needs, and depths of your pockets. MeasureLuminancePrecision.m is a script whose parameters need to be modified for your needs, but it can automatically measure output precision with certain supported photometers or colorimeters.

There are separate demos for HDR (high dynamic range) use cases, e.g., SimpleHDRDemo, HDRViewer, or HDRTest.

Best,
-mario

[Time spent on this Mathworks sponsored support request so far: 58 Minutes.]
