We are setting up an HDR10 projector for psychophysics use. Thanks to Mario’s help, we managed to get the drivers to work and are now able to show 1023 grey levels, as confirmed by showing patches and measuring them. Now we want to apply a gamma correction, ideally using a color lookup table (CLUT), as I am used to doing.
We tried Screen('LoadNormalizedGammaTable'), which seemed not to change anything, and variations of the PsychColorCorrection routines. Those we can use for basic clamping of values, but whenever we activated any actual correction, we did not get output values in the 0-1023 range we need to send to the screen. All variations we managed to produce here seem to yield outputs in the 0-1 or at most 0-255 range.
So, my question is: are these the functions we are meant to use for HDR? If so, do we need to explicitly turn on some additional kind of processing? Or are there other transformations we can use elsewhere?
Sorry for taking so long to answer and provide the code. I tried quite a few more variations, just to try to find out what the problem is.
The code I am using for checks is the following; it draws multiple ramps with different bit depths on the screen. With the clamp color correction, things actually work as expected: I get the display to show the ramps, and if I clamp to values < 1000, the display does indeed get clamped. With the CLUT, I always get only a black or near-black display, without any errors. I tried changing the maxinput values and step of the CLUT, and switched back to 0-1 inputs and/or CLUTs from 0-1, but none of this got things displayed right.
Side note: I think the HDR standard says the pixel values should be in nits, i.e. cd/m², but the transformation is clearly not linear, so I do need the correction.
Just to keep this post up to date: after a little more debugging, it looks like this is a bug in the PsychColorCorrection methods once Vulkan processing is added.
Concretely, there appears to be a problem with passing the texture containing the CLUT through to the shader program. This is consistent with simple gamma correction working: it runs the same kind of GLSL shader program, but without the need to pass a texture or sampler to the program. The CLUT routines also work fine if we do not turn on HDR processing.
If anyone has experience with passing a texture to a shader program with Vulkan running, please give me a hint. Either way, this will likely require some fix in the Psychtoolbox pipeline in the end.
Oh, I thought from your previous post that your problem was solved? I can have a look at whether this is a bug or just needs a different way of using it. Usual approach: PsychPaidSupportAndServices for a support authentication token.
In general, with HDR-10 mode you might run into a problem: implementing HDR-10 support for standard commercial HDR display devices requires use of a very specific, fixed, standardized, non-linear opto-electrical transfer function (OETF), the SMPTE ST-2084 “Perceptual Quantizer”, or PQ for short. So PQ is the OETF that is supposed to be used instead of a more regular gamma-correction OETF.
PQ maps pixel intensity in units of nits, with an input range of 0 - 10000 nits, into a normalized 0.0 - 1.0 output range, which is supposed to be sent to the display with 10 or more bits of precision. The display is supposed to apply the inverse PQ mapping (the EOTF) to that signal, to map back into absolute units of nits, and then faithfully and exactly reproduce that on the display surface.
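For reference, the PQ mapping and its inverse can be written down directly from the constants in SMPTE ST-2084. A minimal Python/NumPy sketch (illustrating the math only, not the actual Psychtoolbox shader implementation):

```python
import numpy as np

# SMPTE ST-2084 constants (exact rational values from the standard)
M1 = 2610 / 16384          # ~ 0.1593
M2 = 2523 / 4096 * 128     # ~ 78.8438
C1 = 3424 / 4096           # ~ 0.8359
C2 = 2413 / 4096 * 32      # ~ 18.8516
C3 = 2392 / 4096 * 32      # ~ 18.6875

def pq_oetf(nits):
    """Map absolute luminance in nits (0..10000) to a normalized 0..1 signal."""
    y = np.clip(np.asarray(nits, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2

def pq_eotf(signal):
    """Inverse mapping: normalized 0..1 signal back to luminance in nits."""
    v = np.clip(np.asarray(signal, dtype=float), 0.0, 1.0) ** (1.0 / M2)
    y = np.maximum(v - C1, 0.0) / (C2 - C3 * v)
    return 10000.0 * y ** (1.0 / M1)
```

Plotting `pq_oetf` makes the extreme non-linearity obvious: roughly half of the available output range is spent on luminances below about 100 nits.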
In practice, what a display makes of the theory depends a lot on the display, I guess. But the idea for a good pro-class HDR display is that it has its own built-in color correction / calibration mechanisms to let you deal with any remaining non-linearities and limitations, or deals with them automatically.
So you are theoretically not meant to apply any gamma correction after what the HDR conversion shader does, and you should have a good look at the manual of your HDR projector to see whether there is some vendor-provided mechanism to deal with it.
Psychtoolbox disables the regular gamma hardware tables, or more exactly, sets them to a f(x) = x identity mapping. The PQ function is extremely non-linear, to squeeze a range of 10,000 nits into only 1024 digital levels to send over the actual display cable. I’m not sure what a “gamma correction” after that stage would meaningfully do.

Even the way the stimuli are communicated to the operating system differs by operating system and graphics card. On Linux + AMD, Psychtoolbox does PQ encoding in its own shader, so there is the option for hacks. On MS-Windows or macOS, the HDR stimulus is communicated to the operating system with scRGB encoding instead, and the OS display stack converts scRGB encoding into PQ encoding using its own proprietary method, with less control on our side over what actually happens to the stimulus image. Sometimes this is handed unmodified to the display hardware itself on some gpu models, which have special circuitry for HDR handling and PQ encoding for higher power efficiency with potentially reduced precision, in a way that is specific to the model of graphics card, potentially differing across gpu vendors and models. This stuff on MS-Windows and macOS all caters to consumers who want to watch HDR movies or play HDR video games on a battery-powered laptop/tablet etc., not to vision scientists and the like.
If you need to apply some non-linear correction in Psychtoolbox anyway, I’d assume it would be better to do so before that final encoding stage, by use of our PsychColorCorrection functions, using or even combining the various supported PsychImaging ‘DisplayColorCorrection’ tasks. Apart from parametric gamma correction, standard lookup tables and 3D lookup tables, color conversion matrices and some other stuff, there is also a task for display vignetting correction, which could be useful if your HDR projector doesn’t have this built in in an easy-to-use way (cf. our VignettingCorrectionDemo.m and VignetCalibration.m scripts). I hope there isn’t any bug there, but there could be limitations, and the last exhaustive testing was done almost 4 years ago under contract from VESA. I didn’t hear any complaints from them, but of course the research teams of the display manufacturers themselves probably have access to the best and most well-calibrated display equipment, and absolute top expertise in dealing with issues on the display side.
Update: While parametric color correction worked fine in HDR mode, testing showed that the CLUT based color correction and vignette correction indeed had a bug in HDR mode, whenever only one of them was used at a time - some optimization gone wrong. Using both at the same time worked, as the buggy optimization got bypassed then. This will be fixed in an upcoming PTB release.
For more help → PsychPaidSupportAndServices for a new token.
Thank you for looking into this as well, even though I finally found the problem myself yesterday, too. So this is just to close this thread with the right answer(s). There was a bug in the HDR code of Psychtoolbox that did not pass the configuration of which texture to use for the color lookup table into the shader that actually does the color correction. This explains why the corrections based on numbers worked while the CLUT based ones did not. Just adding a single argument to one of the setup functions fixes this.
So now the PsychColorCorrection based methods are all working in HDR and are the way to go for correcting the color display.
On the necessity of such corrections: Mario is of course right that the color standard, as implemented by Psychtoolbox and the drivers, implies that the values we handle in the buffers inside Psychtoolbox should correspond to nits (cd/m²) on the screen in the end, and I am sure that in settings with access to the internals of the displays etc. this is actually (close to) true. For HDR displays (or projectors, as in our case) that are aimed at consumers, this is clearly not true, though. They add non-linearities to make the picture “look better” and often still have settings like brightness, contrast, etc. that would not make any sense if the display were showing the actually specified cd/m². For example, our projector has a fairly clear tendency to saturate, i.e. its luminance grows too fast initially, such that high luminances have to be squashed together to stay within the range the projector can display - presumably for better visibility in not-so-dark rooms. So for actually linearising a display for psychophysics we definitely need color corrections, and as they are not of the typical gamma shape, we need the color lookup table.
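The kind of linearizing CLUT described above can be built by inverting a measured input-to-luminance curve. A Python/NumPy sketch, where a made-up saturating `tanh` response stands in for real photometer measurements (the shape is hypothetical, not data from any actual projector):

```python
import numpy as np

# Hypothetical measured response of a display that saturates at the top end:
# normalized input drive levels and measured luminance in cd/m^2.
drive = np.linspace(0.0, 1.0, 17)
measured = 600.0 * np.tanh(2.0 * drive) / np.tanh(2.0)  # toy saturating curve

def build_inverse_clut(drive, measured, n_entries=1024):
    """Build a CLUT mapping equal linear luminance steps to drive levels.

    Entry i holds the drive level that produces luminance
    i / (n_entries - 1) * measured.max(), found by inverting the measured
    curve with piecewise-linear interpolation.
    """
    target = np.linspace(0.0, measured.max(), n_entries)
    # np.interp needs monotonically increasing x values; a physical
    # luminance response is monotonic, so swapping the axes inverts it.
    return np.interp(target, measured, drive)

clut = build_inverse_clut(drive, measured)
```

The resulting 1024-entry table could then be handed to the CLUT based ‘DisplayColorCorrection’ task; the point here is only the inversion step, which works for any monotonic measured curve, gamma-shaped or not.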
Yes. I found and fixed that bug a bit before you did, but I was impressed that you actually managed to figure that out yourself. For reference:
There are two pathways for color correction: a performance-optimized one and a standard one. The optimized path is taken if only one color correction operation is applied and the output method is one that supports this optimization, e.g., HDR output formatting. The standard path is used whenever more than one color correction operation is configured, or if the output method does not support the optimization.
In this specific case, the optimized one-operation path was buggy for the specific case of having one lookup-table based operation for HDR output, due to the omission of a single parameter.
That said, another correction operation that could be of use for you is the per-pixel gain correction, if your projector has an unevenly lit / inhomogeneous projection field, e.g., where intensity falls off towards the periphery of the image.
E.g., this was my test code for that case, just patched into SimpleHDRDemo.m at the appropriate places:
Of course, using lookup tables to look up transformed color values or intensity gains for every displayed pixel consumes more memory and memory bandwidth and reduces performance, especially at high display resolutions on lower-end graphics cards.
Very high end pro-class projectors will likely have built-in gain correction for uneven lighting, where the projector can measure, and then correct for, uneven projection fields in its own hardware - more comfy and efficient if available.
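The principle behind such a per-pixel gain correction can be sketched numerically. A minimal Python/NumPy illustration, assuming a toy radial falloff model (purely hypothetical, not measured from any real projector):

```python
import numpy as np

# Hypothetical vignetting model: brightness falls off towards the periphery.
H, W = 480, 640
yy, xx = np.mgrid[0:H, 0:W]
r2 = ((xx - W / 2) / (W / 2)) ** 2 + ((yy - H / 2) / (H / 2)) ** 2
falloff = np.clip(1.0 - 0.4 * r2, 0.1, 1.0)  # 1.0 at center, darker at edges

# Per-pixel gain map: boost each pixel by the inverse of its falloff,
# normalized so that no gain exceeds 1.0 (we can only attenuate; we cannot
# exceed the projector's maximum output).
gain = 1.0 / falloff
gain /= gain.max()

def apply_gain(image, gain):
    """Pre-compensate an image so the projected result is evenly lit."""
    return np.clip(image * gain, 0.0, 1.0)

# A uniform white field, pre-compensated:
compensated = apply_gain(np.ones((H, W)), gain)
# After the projector's falloff, the visible field is flat (at the level of
# the dimmest uncorrected pixel):
visible = compensated * falloff
```

The price of the normalization is visible here: the evenly lit field ends up at the brightness of the dimmest corner, which is one reason built-in optical/hardware correction is preferable when available.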
Parametric gamma functions are less flexible than LUTs, but usually much faster in processing. Especially if your stimulus covers a wide dynamic range, you would need rather large LUTs to piece-wise linearly approximate a non-linear correction over a large range of input values.
Anyhow…
Yes. I assume pro-class displays in the multiple-thousands-of-Euros or substantially higher range, e.g., display monitors used by the movie / media industry for post-production / editing etc., so-called mastering displays or reference monitors, should be able to deal with this stuff much better than consumer displays. Googling this will turn up displays sometimes in the price range of a luxury car.
Wrt. color reproduction, the PsychHDR('HDRMetadata') function and related functions in “help PsychHDR”, and their use in some of our demos, also deserve some attention. The command allows you to specify the “HDR static metadata type 1” metadata about your visual content. This data will be sent by the graphics card to your display and, depending on your display, may change / affect how your pixels are turned into light. It tells the display about the minimum, maximum and average content brightness levels of your stimuli, as well as the color gamut and properties of the mastering display that was used to originally create the stimuli - essentially what the display device should assume are the properties of a reference display device that displays the images most faithfully.
By default, at startup Psychtoolbox queries the HDR properties of the connected HDR monitors and sets those back as HDR metadata, so the display assumes that its own properties are exactly what is expected for these stimuli, and thereby probably avoids additional post-processing like gamut remapping, tone mapping etc. The PsychHDR() functions can override that with something more appropriate for your content. Our GStreamer based movie playback engine, e.g., optionally returns this HDR mastering metadata for a given HDR movie, so one could give the display some clue about how specific HDR movies are intended to be reproduced.
In practice, what the display does with the metadata is unspecified by the HDR standard afaik, and display-device dependent. It could do brightness adjustments, gamut remapping, or tone mapping to squeeze content with brightness levels lower or higher than what the display can reproduce, or gamuts wider than what the display can handle, into the ranges displayable by the device, to make things look better / closer to the intended look. Or it could do just nothing and ignore all the info.
The actual effect of this metadata is completely untested by myself, as my own cheap Samsung C27HG70 VESA DisplayHDR-600 monitor does absolutely nothing with this metadata. I can send random numbers and even invalid color gamuts, and nothing changes at all. One would need a far more capable / modern HDR monitor to see how a change in HDR metadata affects anything.
And needless to say, my own monitor is ofc. far from an accurate reproduction device for the stimuli, given that stimuli with brightness beyond 603 nits peak and 352 nits sustained/average are not reproducible, so it saturates when approaching those limits. It was good enough for developing PTB’s HDR support, but is far from the theoretical ideal in the HDR standard. But then, 460 Euros in the year 2020 vs. the price of a luxury car for proper mastering display monitors…