Hi Valentin,
Because this may be of general interest, I'm forwarding it to the
Psychtoolbox forum.
Your textures are represented with 32 bit floating point precision
and drawn like that into a framebuffer with 32 bit floating point
precision. The precision of that drawing operation depends on your
graphics hardware, the operating system, and whether alpha blending
is enabled:
On Geforce 7000 or ATI X1000 class hardware: with alpha blending
disabled, all calculation and storage happens with 32 bit float
precision, which is about 23 bits worth of "real" linear precision,
more than sufficient.
With alpha blending enabled via Screen('BlendFunction', ...), it is
different, because the Geforce 7000 and ATI X1000 can't handle alpha
blending at high precision: on OS X you won't see alpha blending at
work and you'll get a warning about this. On Windows and Linux you
will see alpha blending, again at 32 bit float precision, but you
will get a very low redraw rate (about 1 Hz), because the driver
will shut down the hardware acceleration and use a slow software
renderer to work around the hardware limitation.
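To make that concrete, a quick untested sketch (assuming you already
have an onscreen window 'win' from PsychImaging('OpenWindow'), and
'img' is just a made-up luminance+alpha matrix):

% 2-layer matrix: layer 1 = luminance, layer 2 = alpha, values 0.0 - 1.0:
img = rand(256, 256, 2);
% floatprecision = 2 requests 32 bit float texture storage:
tex = Screen('MakeTexture', win, img, [], [], 2);
% Enabling alpha blending is what triggers the precision tradeoff
% described above on Geforce 7000 / ATI X1000 class hardware:
Screen('BlendFunction', win, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA');
Screen('DrawTexture', win, tex);
Screen('Flip', win);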
If you add the PsychImaging('AddTask', 'General',
'FloatingPoint32BitIfPossible'); command, then PTB will choose a
different tradeoff: it will perform all drawing with 16 bit float
precision into a 16 bit float framebuffer, which is about 10 bits
worth of linear precision. This is because the hardware can handle
alpha blending at that precision. Once you call Screen('Flip'), your
final stimulus will be converted into 32 bit float precision, and all
post-processing, e.g., color correction/gamma correction, filtering
etc., will again happen at 32 bit float precision.
-> You only have about 10, maybe 11 bits worth of precision for
specifying your stimulus, but things like gamma correction will still
happen at 23 bits of linear precision.
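In a script, that choice is made during the PsychImaging setup
sequence, something like this (a minimal sketch, assuming a single
standard display):

PsychImaging('PrepareConfiguration');
% Ask for the most precise framebuffer the hardware can handle without
% losing functionality like alpha blending; on Geforce 7000 / ATI X1000
% class hardware that means a 16 bit float framebuffer:
PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible');
% Open onscreen window on the last display, black background:
[win, winRect] = PsychImaging('OpenWindow', max(Screen('Screens')), 0);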
If you have a Geforce 8000 or later, or a Radeon HD 2000 or later,
then the hardware is capable of carrying out all operations in full
32 bit float precision, so you won't suffer any loss of precision
with alpha blending. High precision display work is therefore a good
reason to buy such a card and get rid of these limitations and
awkward programming tradeoffs.
In any case, at the very end of the processing chain, the 32 bit
float, 23 bit linear precision, final gamma corrected stimulus image
is converted into regular 8 bit per color gun integer values, which
drive the DACs of your graphics card. Obviously you will need some
appropriate output device plugged in between your graphics card and
your monitor to get a high precision image in the end. In the case of
'EnableGenericHighPrecisionLuminanceOutput' that would be some kind
of video attenuator, like the Pelli & Zhang device or similar. We
also have built-in support for the VideoSwitcher by Xiangrui Li et
al. for standard color monitors, for the CRS Bits++ device in Color++
and Mono++ mode, and for the native 10 bit framebuffers and DACs of
the latest generation of ATI and NVidia hardware (the latest QuadroFX
and FireGL/FirePro cards, and some Radeons). Oh yes, and the
BrightSide HDR displays for colorful high dynamic range display, but
those are out of production.
In any case, the final precision depends on your output device:
about 14 bits for the Bits++ box, about 12-14 bits for the
VideoSwitcher or video attenuators.
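All of these output modes are selected the same way, as PsychImaging
tasks during setup. A rough sketch (task names as I remember them --
check 'help PsychImaging' for the authoritative spelling; 'mylut'
stands for whatever lookup table you built, in the same 3 rows by N
slots format that CreatePseudoGrayLUT.m produces):

PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible');
% Pick exactly one output driver, matching the device behind your card:
PsychImaging('AddTask', 'General', 'EnableGenericHighPrecisionLuminanceOutput', mylut);
% ... or one of, e.g.:
% PsychImaging('AddTask', 'General', 'EnableVideoSwitcherSimpleLuminanceOutput');
% PsychImaging('AddTask', 'General', 'EnableBits++Mono++Output');
% PsychImaging('AddTask', 'General', 'EnableNative10BitFramebuffer');
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);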
A special low cost solution is the bitstealing or PseudoGray mode:
it works with standard 8 bit DACs and some perceptual trickery to get
about 10.5 bits out of standard 8 bit hardware -- at least according
to the relevant papers.
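PseudoGray is requested the same way, and as far as I remember PTB
builds the required lookup table for you via CreatePseudoGrayLUT
(again an untested sketch):

PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible');
% Bitstealing / PseudoGray output on a standard 8 bit RGB framebuffer:
PsychImaging('AddTask', 'General', 'EnablePseudoGrayOutput');
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);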
The warning about not being able to disable color clamping is a bit
weird, because that should work on your hardware. What was the exact
wording of the warning?
It probably doesn't matter though: it just means that the framebuffer
can't represent intermediate results outside the displayable range of
0.0 to 1.0. Such values can happen if you superimpose stimuli with
alpha blending, which is probably not what you do?
AdditiveBlendingForLinearSuperpositionTutorial and BitsPlusCSFDemo
are good demos to show this kind of stuff.
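If you ever do need unclamped values, e.g., for additive
superposition of two gratings, the relevant bits would look roughly
like this ('grating1tex' and 'grating2tex' are just placeholder
texture handles):

% Allow intermediate framebuffer values outside 0.0 - 1.0
% (clampcolors = 0); this is what the clamping warning is about:
Screen('ColorRange', win, 1, 0);
% Pure additive blending: new pixel = source + destination:
Screen('BlendFunction', win, 'GL_ONE', 'GL_ONE');
Screen('DrawTexture', win, grating1tex);
Screen('DrawTexture', win, grating2tex);  % sums with the first grating
Screen('Flip', win);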
What kind of video attenuator do you use for your setup?
best,
-mario
On Jan 15, 2009, at 6:18 PM, Valentin Wyart wrote:
> Hi Mario,
>
> I have been playing with the
> 'EnableGenericHighPrecisionLuminanceOutput' switch for PsychImaging
> (...) and I have a simple question about how Psychtoolbox and
> OpenGL handles float-32 L-A images (luminance + alpha planes) when
> combined with a custom luminance LUT.
>
> To summarize, I have a GeForce 7600 GT which only has a precision
> of 8 bits for R, G and B guns, but I need a higher precision for my
> grayscale stimuli. I used a luminance meter to get RGB luminance
> curves of my CRT monitor. Using the same LUT format as shown in
> 'CreatePseudoGrayLUT.m', I built a 12-bit linearized LUT for my
> monitor between 0 and 60 cd/m². I used the
> 'EnableGenericHighPrecisionLuminanceOutput' switch for PsychImaging
> (...) at the beginning of my script to load my custom luminance
> LUT, and I used float-32 (clamped between 0 and 1) instead of
> uint-8 (clamped between 0 and 255) textures in the code.
>
> The textures I use in the code contain luminance and alpha planes,
> and I wondered how Psychtoolbox and OpenGL handled such floating-
> point textures when combined with a custom luminance LUT. What I
> hope is that Psychtoolbox combines the different textures in
> floating-point before looking through the LUT to get corresponding
> RGB gun values, but is it actually the case (is it written
> somewhere in the PsychImaging code where you load some external
> OpenGL code)? Besides, when I use the
> 'EnableGenericHighPrecisionLuminanceOutput', Psychtoolbox complains
> that my graphics adapter does not support unclamped color values,
> but I hope this is harmless since I don't use any RGB-A texture in
> the code, only L-A textures.
>