Large flip delay on Win10 extended-desktop stimulus display


I am having trouble precisely syncing the visual stimulus display with data acquisition. Here is my setup: I run PTB 3.0.16 with MATLAB 2019a on a Windows 10 machine. Monitor 1 is connected to a mini-DP port and shows the desktop and GUI; monitors 2 and 3 are both used for visual presentation, one in the animal recording booth and one in the experimenter room. Monitors 2 and 3 are connected to the same monitor splitter, which is connected to another mini-DP port on the same graphics card as monitor 1. All monitors refresh at 60 Hz with the same color depth and resolution. Monitor 1 is set as an extended desktop of monitor 2 (monitor 2 is the primary display, since it presents the stimuli). The three monitors I have are a bit old, so they are connected to the mini-DP ports with DVI and VGA converters. To measure stimulus presentation precision, I tested a very simple script like this:


```matlab
close all;

% NI-DAQ session used to emit the digital trigger recorded by Open Ephys
% (channel setup lines omitted here, as in the original post)
s = daq.createSession('ni');

screens = Screen('Screens');
screenNumber = max(screens);
white = WhiteIndex(screenNumber);
black = BlackIndex(screenNumber);
grey = white / 2;

[window, windowRect] = PsychImaging('OpenWindow', screenNumber, black);
topPriorityLevel = MaxPriority(window);

% Batched FillRect: one white and one grey rectangle
% (columns: per-rect color, then per-rect [left; top; right; bottom])
Screen('FillRect', window, [1.0, 0.5; 1.0, 0.5; 1.0, 0.5], ...
    [1720, 2120; 880, 880; 2120, 2520; 1280, 1280]);
```

I used a photodiode as an external measurement of stimulus onset time; the signals from the photodiode and the NIDAQ card both feed into the same acquisition card of an Open Ephys system.

The monitor shows the stimuli fine and no errors are thrown. My problem is that the Open Ephys system shows the signal from the NIDAQ arriving about 40 ms earlier than the signal from the photodiode. I thought the PTB documentation says Screen('Flip') blocks MATLAB execution until the flip is over, so how can the flip process possibly take so long?
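For reference, here is a minimal sketch of the flip/trigger ordering I intended, assuming the window from the snippet above; the DAQ device and channel names are illustrative placeholders, not my actual hardware configuration:

```matlab
% Sketch: fire the DAQ trigger only after Screen('Flip') returns, since
% Flip blocks until stimulus onset. Device/channel names are hypothetical.
addDigitalChannel(s, 'Dev1', 'port0/line0', 'OutputOnly');
Screen('FillRect', window, white);
vbl = Screen('Flip', window);   % returns a timestamp of stimulus onset
outputSingleScan(s, 1);         % raise the trigger line
outputSingleScan(s, 0);         % lower it again
```

With this ordering, the trigger should never lead the photodiode by more than a refresh cycle or so.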

Thanks in advance for any suggestions!

Your code looks correct, as far as I can tell - I am no expert in the MATLAB DAQ toolbox. Your hardware setup also sounds correct. So what you should observe is the photo-diode and trigger close in time to each other, certainly not 40 msecs apart. If anything, I'd expect the DAQ signal to be mildly delayed wrt. the photo-diode.

I assume your photo-diode is in the top-left corner of the monitor, right?

This could be a graphics/display-driver/MS-Windows bug - Windows is known to be way more fragile wrt. timing than Linux. With buggy graphics drivers or a buggy Windows desktop compositor (DWM), I have myself observed errors of up to 50 msecs, i.e. Screen('Flip') would return up to 50 msecs (~3 refresh cycles on a 60 Hz display with 16.6 msecs refresh duration) too early. This is one of the Windows operating system's flaws when using MS-Windows in multi-display configurations.

What graphics card is this?

Or somehow your hardware setup introduces a long delay, although a DVI->VGA converter adding 40 msecs is completely unheard of, so most likely not. Or is there some measurement error on your side?

What I would do first is repeat all the measurements with a pure single-monitor configuration, i.e. only one monitor directly plugged into your graphics card.
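As a quick software-side check in that configuration (a sketch, assuming the `window` and `white` variables from your opening script), the spread of successive Flip timestamps already reveals compositor or driver interference; Psychtoolbox's bundled VBLSyncTest does this more thoroughly:

```matlab
% Sketch: measure 100 consecutive flip timestamps. At 60 Hz the mean
% interval should be ~16.7 ms with sub-millisecond std; irregular or
% inflated deltas point at compositor/driver interference.
ifi = Screen('GetFlipInterval', window);
Priority(MaxPriority(window));
vbl = Screen('Flip', window);
t = zeros(1, 100);
for i = 1:100
    Screen('FillRect', window, white);
    vbl = Screen('Flip', window, vbl + 0.5 * ifi);
    t(i) = vbl;
end
Priority(0);
fprintf('Mean interval %.3f ms, std %.3f ms\n', ...
    1000 * mean(diff(t)), 1000 * std(diff(t)));
```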

Ideally you would switch your setup to Linux for way more peace of mind in such usage scenarios, but that might not work because of your dependency on MATLAB's proprietary Data Acquisition Toolbox, which sadly only seems to support MS-Windows?


I have seen a delay of ~40 ms with a really poor LCD display (which came by default with a very expensive eyetracker). Mario is right: remove all other monitors and test with a single monitor to see if the delay persists, and consider using a different monitor. If it is a display-latency problem, the bad news is that it depends on the luminance transition (grey-to-grey level), and the only real solution is a better display…

Thanks for the information. The graphics card is an NVIDIA Quadro P1000. In the card's control panel, buffer flipping mode was set to 'auto-select', triple buffering to 'off', and vertical sync to 'use the 3D application setting'. I tested using only the animal monitor (since that is the one the photodiode is attached to), and the delay was cut in half, but the digital signal from the DAQ card is still ~21 ms ahead of the photodiode signal. What does that mean? Does it sound like a monitor problem? Unfortunately we do not have extra large monitors for the animal side; should I set up photodiodes on the small monitors on my human side to test?


Buffer flipping mode was at 'auto-select': what are the other options you could have chosen from?

Triple buffering 'off' is correct, and vertical sync 'use the 3D application setting' is correct.
With "I tested only to use the animal Monitor (since that is the one the photo diode is attached to), the delay cut by half", do you mean you unplugged all other monitors, so that monitor was the only thing directly connected to the graphics card? And that cut the delay in half, from 40 msecs to 21 msecs?

That would suggest MS-Windows interference, a known problem with many MS-Windows multi-display setups. The best way to get peace of mind wrt. computer timing problems is to switch to Linux, ideally with an AMD or Intel graphics card, or anything not NVidia. Even a Raspberry Pi microcomputer does better timing-wise under Linux than many Windows or macOS setups, if the stimuli are not too demanding.

Your Screen('FillRect') statement in your sample code suggests that your photo-diode is placed at the bottom (right or left?) of the monitor, not at the top. That would add another ~16 msecs of delay between the trigger, which refers to the top-left corner of the monitor, and the photo-diode, because the display is scanned out top-to-bottom over one refresh cycle. So that would explain about 16 msecs of your 21 msecs. I don't quite understand your video splitter configuration: is the same image mirrored to both the experimenter and animal monitor? Or is the left half of your stimulus window sent to the experimenter monitor, and the right half to the animal monitor? I guess the latter from your 'FillRect' code, but it would be good to know.
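As a rough arithmetic check (a sketch; the diode y-position here is an assumption read off the FillRect coordinates in your sample code), the scanout delay for a patch at a given vertical position is approximately the refresh duration scaled by the patch's fractional height on the display:

```matlab
% Sketch: estimate scanout delay for a photo-diode patch at vertical
% position diodeY, assuming top-to-bottom scanout at a fixed refresh rate.
ifi = Screen('GetFlipInterval', window);      % refresh duration, ~16.7 ms at 60 Hz
screenHeight = windowRect(4);                 % display height in pixels
diodeY = 1280;                                % assumed diode position (bottom edge of the FillRect rects)
scanoutDelay = ifi * diodeY / screenHeight;   % seconds from vblank to the diode patch
fprintf('Expected scanout delay at the diode: %.1f ms\n', 1000 * scanoutDelay);
```

A diode near the bottom of a 60 Hz display therefore sees the stimulus close to one full refresh (~16 msecs) after the Flip timestamp.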

The remaining ~5 msecs could be explained by the LCD pixel response time of your monitor, as Ian said. I assume you use the white rectangle to drive the diode, not the grey one, right? From your statement about VGA converters I assumed you use good old CRT monitors, which don't have that problem; but if this is indeed an LCD panel, and likely an ancient one if it needs VGA input, then very long response times of 5 msecs or more (up to dozens of msecs) and non-deterministic behaviour depending on your stimulus would be totally expected. LCDs usually have one fast transition, either from black (r,g,b) = (0,0,0) to max white (1,1,1), or from max white to black. The other direction is usually slower. Grey levels or other color values are also usually slower, in a highly non-linear fashion.

Btw. if your setup is a VGA monitor and if your stimuli would be only grayscale, not color, then i could also recommend the VideoSwitcher to you if you want to perfectly disentangle what timing problems are caused by the monitor, and which are caused by the computer/software:

The device accepts VGA input and outputs a VGA signal, i.e. you put it in between your video splitter and the stimulation monitor, or between the DVI->VGA converter and the video splitter. When switched on, it converts the R,G,B input signal into a high-precision grayscale-only VGA output signal. And whenever it detects a green "trigger line" in the input image, it sends a TTL trigger pulse on its BNC output port, something that could be recorded by your Open Ephys system. Psychtoolbox has built-in support for the VideoSwitcher, both for timing purposes and high-precision display. This way you get a trigger defining exactly when the image is received by your monitor, and any difference between that trigger and the photo-diode would be due to the monitor. Of course it is not usable if you need color stimuli, or anything other than a monitor with VGA input. But it is not very expensive for the peace of mind it may give.


Thanks for your informative reply. The other option for buffer flipping mode is 'block transfer', but the PC does not allow me to change to that.
And sorry for the confusion about the photodiode position: my photodiode is in the upper-left corner of the screen, and when I tested it I used a full-screen, full-white stimulus, so unfortunately I can't subtract those 16 ms. All monitors I use are LCD monitors, and the splitter was used to mirror the same image to both the experimenter and the animal side. I tried removing the splitter so the animal monitor is an extended desktop of the GUI monitor; that did not change anything, the delay was still around 45 ms.

Ok, 'auto' is maybe the right choice. Although if the only other option is 'block transfer' and 'auto' selects among the available options, then 'auto' would be identical to 'block transfer'? Which would be bad for timing.

But your sample code suggested the photo-diode was in the lower region of the screen?

But then what was that 21 msecs result you talked about, which looked like a significant improvement?

You did test with a pure single-monitor setup (no extended desktop) at least once though, right? With same or better results?

What monitor is this, vendor/model? Maybe some monitor website has info about the expected timing behaviour of that monitor. Can you adjust the threshold of the photo-diode to make sure it triggers rather aggressively instead of conservatively?

This is a machine with only one NVidia graphics card, right?