How to detect stimulus aliasing

When drawing a stimulus using Screen('DrawTexture'), is there some way to query the difference between the requested stimulus and the stimulus drawn to the screen?

For example, if I draw a simple texture consisting of a black square measuring 10×10px on a white background, then so long as that stimulus is perfectly aligned with the pixel array, there is no discrepancy between the requested and the drawn image. However, if I shift the texture by half a pixel, this results in an ‘imperfect’ representation of the stimulus, since the pixel array is incompatible with the requested draw position. Is there a built-in method to detect and quantify this discrepancy?
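To make the scenario concrete, in code it would be something like this (window setup, rects and the half-pixel offset are just illustrative values):

    % 10×10 black square centred on a 20×20 white background, held in a texture.
    win = Screen('OpenWindow', max(Screen('Screens')), 255);
    img = 255 * ones(20, 20);
    img(6:15, 6:15) = 0;
    tex = Screen('MakeTexture', win, img);

    % Integer-aligned destination rect: texels map 1:1 onto the pixel grid.
    dstAligned = [100 100 120 120];
    % The same rect shifted by half a pixel: the texture now straddles pixel
    % boundaries, so some quantisation/filtering of the stimulus is unavoidable.
    dstShifted = dstAligned + 0.5;

    Screen('DrawTexture', win, tex, [], dstShifted);
    Screen('Flip', win);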

The reason I’m interested in this is to detect aliasing in Gabor patch stimuli, and to determine whether two Gabors that differ very slightly in spatial frequency actually render differently on-screen.

Note:

  • I’m aware of features such as dithering and sub-pixel rendering, but I’m talking about the scenario in which these features are turned off, and I want some quantifiable measure of the ‘correctness’ of the stimulus actually shown on-screen.
  • I’m also aware there are numerous other potential sources of ‘incorrectness’ related to display hardware etc., but I’m interested solely in the discrepancy between the requested stimulus and the output sent from PTB to the screen.

Hi Matt,

I suggest you try querying the displayed stimulus using Screen('GetImage'); that should reflect all the rendering effects you’re talking about.
I also guess you’re aware of the procedural Gabors, which should be as perfect as they can be given the pixel grid, if they’re an option for you.
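Something along these lines should work for a quick check; the ‘ideal’ image and the rect below are just placeholders for your own requested stimulus and its location (and grab the image after the flip, so you capture what is actually shown):

    rect  = [100 100 120 120];
    drawn = Screen('GetImage', win, rect);      % what was actually rendered
    ideal = repmat(uint8(img), [1 1 3]);        % the requested stimulus image
    err   = double(drawn) - double(ideal);      % per-pixel discrepancy
    fprintf('max abs error: %g, RMS error: %.2f\n', ...
            max(abs(err(:))), sqrt(mean(err(:).^2)));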

Cheers,
Dee

Brilliant; thanks Dee – this is exactly what I was looking for.

Incidentally we do use procedural Gabors, but I wasn’t aware that it’s possible to pass the pixel grid as an input; I’ll look into this now.

I didn’t mean that you can pass the pixel grid as an input; I meant that these are computed directly in the shader, quantized as accurately as possible given the Gabor’s position and the pixel grid. If you do what’s in the demos (i.e., basic usage of these procedural Gabors; if you’ve seen them, you’re probably doing it right :p), you are already getting that benefit over prerendering and working with textures.
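For reference, basic usage looks roughly like this; the parameter values are arbitrary examples, and ProceduralGaborDemo shows the full setup (e.g., the blend function it configures):

    % Minimal sketch along the lines of ProceduralGaborDemo.
    res   = 256;   % size of the Gabor support in pixels
    gabor = CreateProceduralGabor(win, res, res, [], [0.5 0.5 0.5 0.0]);

    phase = 0; freq = 0.05; sigma = 30; contrast = 0.5; aspect = 1.0;
    Screen('DrawTexture', win, gabor, [], [], 0, [], [], [], [], ...
           kPsychDontDoRotation, [phase, freq, sigma, contrast, aspect, 0, 0, 0]);
    Screen('Flip', win);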

As an addendum that may matter: 'GetImage' also has an optional 'bufferName' argument. Depending on which bufferName you ask for, it will give you screenshots of different stages of processing. 'drawBuffer' is a screenshot of what your script has drawn. 'backBuffer' is what ends up in the framebuffer before flip, after post-processing by the imaging pipeline, e.g., stereo processing, color/gamma correction, mirroring, geometric undistortion, special encoding for high-precision display devices like those from VPixx, etc. – all the stuff PsychImaging() can do on request. 'frontBuffer' is what is currently sent to the video output. And there are a few more for stereo or for special cases…
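In practice that looks like this; note that 'drawBuffer' and 'backBuffer' are typically read after drawing but before the flip, while 'frontBuffer' reflects what is shown after a flip:

    drawn = Screen('GetImage', win, [], 'drawBuffer');   % what your script drew
    post  = Screen('GetImage', win, [], 'backBuffer');   % after imaging pipeline processing
    shown = Screen('GetImage', win, [], 'frontBuffer');  % what is sent to the video output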

Our own precision tests use this method to verify the accuracy of drawing or post-processing operations on the GPU, e.g., HighColorPrecisionDrawingTest, ProceduralGaborTest, DriftTexturePrecisionTest, BitsPlusImagingPipelineTest.

-mario
