dear mario
thanks! super interesting.
i'd love to have a test for dithering, to be sure of what we're displaying. i made a quick attempt: i set my retina display to black except for one pixel and tried to measure the luminance precision of that pixel, but i didn't collect enough light. i think i'd need a microscope objective to collect enough light from the pixel.
would apple's spatial and temporal dither be defeated by measuring one static pixel on a black background? i'd guess that zero is below threshold, so that small perturbations around zero don't change the black. in that case an isolated pixel in a black field wouldn't benefit from spatial dither. if temporal dither is confined to successive frames, then making every other frame black would similarly defeat temporal dither. if this is right, then doing photometry on a single bright pixel on a zero background, turned on only on every other frame, should reveal the luminance precision of the hardware, unaided by dither.
-> No. I tested "11 bpc" half-float framebuffer mode on my MacBookPro 2010 with NVidia GeForce 330M under the latest OSX 10.12, with a CRS Bits# connected, so I can actually read back the dithered 8 bit video signal that goes out to the "display". One can only read one video scan-line at a time, so assessing temporal dithering would be difficult, but one can vary which scan-line is read back to find out about spatial patterns. See the attached M-Files: one looks at a single scanline with the whole display filled with the same constant gray value; the other displays a single isolated pixel on black, or a row of 3 pixels on black, sweeping over 3 scan-lines to see the neighborhood of the pixel.
The M-Files translate 0 - 255 test values into equivalent 0 - 2047 values for an 11 bpc framebuffer, and use Screen('ColorRange') so PTB translates those back into 11 bit quantized floating point values between 0.0 and 1.0 for rendering. A dither-free system should thereby present the input test values unmodified to the 8 bit video input of the Bits# device. This was all done with an identity gamma table loaded, verified to indeed be an 8 bit identity gamma table when using regular 8 bpc framebuffer mode, and by readback at the end of the script before leaving "11 bpc" mode, to exclude non-linearities from there. I also tried adding different offsets o to the j * 8 + o grayscale value, to account for bias in the float -> framebuffer conversion. That didn't make much difference.
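Roughly, the core of the test looks like this (a stripped-down sketch; the parameter values and the exact Bits# readback call here are from memory, so check the attached M-Files for the real thing):

    % Open an onscreen window in the ~11 bpc half-float framebuffer mode:
    PsychDefaultSetup(2);
    PsychImaging('PrepareConfiguration');
    PsychImaging('AddTask', 'General', 'EnableNative11BitFramebuffer');
    win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
    % Map integral 0 - 2047 test values to PTB's 0.0 - 1.0 color range:
    Screen('ColorRange', win, 2047);
    j = 55;                                       % example 8 bit test value
    Screen('FillRect', win, (j * 8) * [1, 1, 1]); % exactly representable in 11 bpc
    Screen('Flip', win);
    % Read back one scan-line of the 8 bit video signal arriving at the Bits#:
    pixels = BitsPlusPlus('GetVideoLine', 1000, 200);
    % A dither-free system should return only the value j here:
    disp(unique(pixels(:))');
    sca;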
You see that an all-black screen gives black (RGB_target_0_0_0.txt), but already a gray level of 1 for the whole screen creates very funky spatial patterns of 0's and 1's, also modulated over time, ie. the same image creates different output on successive Flips. The whole thing becomes even weirder for higher values (_55_55_55, _128_128_128), where a constant value of [R,G,B] = [128, 128, 128] translates into [127, 118, 104], iow. the red channel more or less follows the target value, but green and blue are way off. Even for a maximum white of [R,G,B] = [255, 255, 255] you get weird results like [255, 236, 208], ie. errors of up to 18% in some color channels.
Using a single pixel, you don't get dithering in its neighbourhood, just weirdly distorted color values for the single pixel, even if its test value should be representable without any need to change anything (RGB_3linesweep_55_55_55). Similar for 3 horizontal pixels, all also modulated across Flips.
The diary outputs also contain runs with values other than the ones in the filenames.
I assume the same Apple proprietary dither algorithm is used regardless of whether it is an AMD or NVidia GPU, but I can only test this on that one NVidia card.
the apple document mentioned color shifting, so they might be compromising hue to enhance the precision of luminance, what chris tyler dubbed "bit stealing". again, i think we might defeat that by using just one channel, e.g. green, and setting red and blue to zero so they remain reliably black under minor perturbation.
-> There isn't an Apple document? Only that web page, which is written for lay people to explain in simple terms what dithering is, and a cited statement from Apple's PR department that the future iMac will use spatial and temporal dithering. So all we seem to know is that most likely all current Macs, including the "10 Bit capable" iMacs, fake precision via dithering - at least on the iMac's internal panel. Even the extra expensive 2017 iMac. So the 2017 iMac is not worth its money if you care about actual >= 10 bit precision without potential confounds. Whether at least display timing would be better would depend on whether that machine still uses Apple's own dithering method or one of the methods built into AMD's display hardware.
hmm. if we knew that the dither was not stochastic and extended only a certain number of pixels spatially and frames temporally, then we could show a zero black background and, upon that, a sparse array of identical pixels, spaced horizontally and vertically, shown on every other frame, to produce enough light for my photometer without needing a microscope objective. however, if the dither is random across those pixels then we'd be averaging across different values and failing to defeat the dither. something like the sketch below.
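here's a rough sketch of the stimulus i have in mind (the spacing, gray level, and duration are placeholders):

    % Sparse grid of identical test pixels on black, shown on every other frame:
    PsychDefaultSetup(2);
    win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
    [w, h] = Screen('WindowSize', win);
    spacing = 64;   % hopefully farther apart than any dither kernel reaches
    [x, y] = meshgrid(spacing:spacing:w - spacing, spacing:spacing:h - spacing);
    dots = [x(:)'; y(:)'];
    level = 0.5;    % the luminance level under test
    for frame = 1:600
        if mod(frame, 2)
            Screen('DrawDots', win, dots, 1, level);
        end          % even frames stay all black, to defeat frame-to-frame dither
        Screen('Flip', win);
    end
    sca;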
what do you think?
I don't know where you want to go with this? What's the point? You already know that dithering will be used for > 8 bpc modes, and you can't defeat it for any meaningful stimulus. If you could, you would just be back to an 8 bpc output, which you can already get by using standard 8 bpc mode.
If you want true 10 bit color precision without trouble, you can use your HP Linux laptop. If you want more than 10 bit luminance precision, you could use the Linux laptop in 10 bit mode plus our "bit-stealing" style PseudoGray method, for potentially up to 12.7 bits. Even on your Apple machines the PseudoGray method would give you 10.7 bits. And the properties of bit-stealing are probably better understood and documented than Apple's proprietary algorithm; at least you know there aren't spatial or timing effects, only slight colorization. And then there is the VideoSwitcher i mentioned in my e-mails to you, for use with a CRT monitor, and rather cheap.
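Enabling PseudoGray in Psychtoolbox is a one-liner in the imaging pipeline setup. A minimal sketch (screen choice and gray levels are just examples):

    % Drive a standard 8 bpc display with ~10.7 bit luminance via bit-stealing:
    PsychDefaultSetup(2);
    PsychImaging('PrepareConfiguration');
    PsychImaging('AddTask', 'General', 'EnablePseudoGrayOutput');
    win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
    % Specify luminance as high precision 0.0 - 1.0 floats; the pipeline
    % converts each into a slightly colorized RGB triplet for output:
    Screen('FillRect', win, 0.5 + 1/8192);   % a step finer than 8 bpc allows
    Screen('Flip', win);
    KbStrokeWait;
    sca;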
is there a way to load an image and then freeze the panel, to prevent any temporal change, to block temporal dither?
No, not in a useful way for you. Psychtoolbox's stereo-resync function (Screen('Preference', 'SynchronizeDisplays', ...)) does shut down the display engines of all displays for a second, so that would be your "freezing" for a second. But not driving a display for more than a fraction of a second will cause all kinds of funny visual artifacts and a breakdown of the image. But i doubt temporal dither is used by Apple's proprietary method - too expensive. And hardware dithering isn't used at all; that we know for sure. What they seem to do is modulate properties of the spatial dither over stimulus updates, ie. on successive Flips, even if the same image is flipped again, so it's something like a temporally modulated spatial dither. That would stop if you don't Flip.
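So if you wanted to try holding the dither pattern still during a measurement, the obvious sketch is simply to stop flipping (assuming my guess about Flip-driven modulation is right; the field color and wait time are placeholders):

    % Show the test field once, then hold it without further Flips while
    % the photometer integrates:
    PsychDefaultSetup(2);
    PsychImaging('PrepareConfiguration');
    PsychImaging('AddTask', 'General', 'EnableNative11BitFramebuffer');
    win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
    Screen('FillRect', win, [0, 0.5, 0]);   % e.g., green-only, per your idea
    Screen('Flip', win);                    % one and only Flip
    WaitSecs(10);                           % measure during this window
    sca;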
Btw. at least AMD's hardware dithering engines have various modes of operation, e.g., randomized vs. non-randomized spatial dithering, or applying high-pass filters so that dithering affects high frequency components like pixels, lines and edges differently than lower frequency components, etc. As i said, this isn't used on the Macs in 11 bpc mode, but it is possible that Apple's algorithm implements similar treatments.
All to say, this is all rather troublesome if you want to present controlled low-level stimuli. I'd rather use an actual high precision display like the one in your HP Linux laptop and - if at all - then tricks on top of that where the algorithm is documented.
-mario
best
denis