flip timing jitters, missing beampos lines

Hi,

I noticed a while ago that precise timing via VBLTimestampingMode = 1
can break down and cause a systematic error when there is a slight lag
in Flip's return. Specifically, whenever a lag causes beampos to be
reported after the VBL for a single frame, PTB appears to
overcompensate in its timing correction. The lags look like this:

Plot of beampos as reported in VBLSyncTest:
http://web.mit.edu/gorlins/Public/ptb/beampos.jpg
Screen resolution is 1600x1200, with VBL endline = 1249.
Note the two lags at the end.

Plot of the VBL timestamp deltas:
http://web.mit.edu/gorlins/Public/ptb/osc.jpg
The two spikes at the end correspond to the lags in the first plot,
while the spikes on the left are simply due to a dropped frame (I
increased load jitter to show this).

Zoom in on the oscillations:
http://web.mit.edu/gorlins/Public/ptb/osc_zoom.jpg
Note how they dip first, then rise on the next frame.

I believe this shows that PTB is overcompensating in its subtraction
algorithm: the fact that the oscillations dip first means that the
prior frame was normal, but that the current frame had too much time
subtracted from it during the timing calculation. Since that timestamp
is too early, the next frame looks like it arrived too late, as seen
in the rise of the oscillation, even though it was actually on time:
the point after that is normal again.
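
For concreteness, here is my guess at the principle as a hedged MATLAB
sketch (not PTB's actual implementation; it assumes an open onscreen
window 'win' and the display geometry from the plots above):

    frameDur = 1/60;              % seconds per video frame (60 Hz assumed)
    vtotal   = 1250;              % total scanlines incl. VBL (endline = 1249)
    lineDur  = frameDur / vtotal;
    vblStart = 1200;              % first VBL line (1600x1200 visible area)

    tQuery  = GetSecs;                          % time of beampos readout
    beampos = Screen('GetWindowInfo', win, 1);  % current scanline

    % Lines elapsed since VBL onset (beampos wraps to 0 at vtotal):
    if beampos >= vblStart
        elapsedLines = beampos - vblStart;
    else
        elapsedLines = (vtotal - vblStart) + beampos;
    end
    vblTimestamp = tQuery - elapsedLines * lineDur;

    % If the counter jumps from line 1230 straight to 1245, a readout
    % of 1245 implies 45 elapsed lines when only ~30 lines of real time
    % have passed, so ~15*lineDur (~200 us) too much gets subtracted.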

At first I was confused by this and thought there was nothing to be
done. It can't be attributed to additional lag in the system (such as
a longer VBL than estimated), because that would add time to the
points (causing the spike to rise first, then dip). However, with the
new Screen('GetWindowInfo', win, 1) function, I found something very
interesting:

Histogram of successive beampositions, sampled without drawing anything:
http://web.mit.edu/gorlins/Public/ptb/beampos_hist.jpg

On this setup (and I have verified this on a few other systems),
beampos is never read for certain lines! In this configuration about
15 lines are missing, all of them inside the VBL, and lines after them
do get read. Under the assumption that these lines simply do not exist
and the monitor passes from line 1230 to line 1245 in the time it
normally takes to traverse a single line, timestamps from Flip when it
returns past the VBL would occur 15 lines' worth of time earlier than
expected, and PTB would subtract that much more time from the
timestamp than it should. In this case, 15 lines * (1/60 s per frame)
/ (1250 lines per frame) = 200 us, which is exactly the amount seen in
the zoomed figure, suggesting this is what is happening.
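
For reference, here is a rough sketch of the kind of measurement
behind the histogram, plus the arithmetic (hypothetical code; assumes
an open onscreen window 'win'):

    % Poll the beamposition in a tight loop and look for lines that
    % never show up:
    nSamples = 200000;
    pos = zeros(1, nSamples);
    for i = 1:nSamples
        pos(i) = Screen('GetWindowInfo', win, 1);  % infoType 1 = beamposition
    end
    missing = setdiff(0:max(pos), unique(pos));    % lines never read out
    fprintf('%d lines never observed\n', numel(missing));

    % Size of the resulting timestamp error under the skipped-lines model:
    errSecs = 15 * (1/60) / 1250                   % = 2e-04 s, i.e. 200 us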

The nonlinearity can also be seen in this picture:

http://web.mit.edu/gorlins/Public/ptb/beampos_time.jpg

which plots the measured beampos against time. The increased slope
ending at the marker corresponds to the beamposition values that are
never measured, indicating that the monitor really does just pass over
them. The shallower slope to the left is the linear section where the
beam passes through each VBL line in the same time as it does during
drawing; the steep slope to the right is where the beampos resets to
0. It does not seem to be the case that these lines are reached but
simply not reported, as that would produce a flat section preceding
them (presuming a constant, false value were reported instead).

I have seen this beampos gap on several computers with old and new
graphics cards alike (NVIDIA and Intel), all with the most recent
drivers and the latest PTB on Matlab 7.3 (the GetWindowInfo function
doesn't seem to work for me in Matlab 7.4), on both single- and
dual-monitor configurations, all running Windows XP. I have not yet
tested on a CRT, however, though I have seen these types of
oscillations in the past.

Could it be that missing beampos lines are common in monitors? Is
there any hope of fixing this kind of nonlinearity? For instance, when
PTB runs its internal startup tasks, could it check not just for the
highest beampos measured, but for the number of unique lines, and use
that instead to calculate the right timestamps? (A rough sketch of the
idea follows below.)
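
As a minimal sketch of what I mean (hypothetical; 'pos' would come
from a beamposition sampling loop like the one above):

    % Derive the line duration from the number of lines actually
    % traversed, rather than from endline+1:
    frameDur         = Screen('GetFlipInterval', win);
    effectiveLines   = numel(unique(pos));   % e.g. 1235 instead of 1250
    lineDurEffective = frameDur / effectiveLines;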

Thanks,

Scott

Scott,

I can replicate the effect of "skipped"/missing scanlines on my Dell
Inspiron notebook (Windows 2000, GeForce2Go, internal flat panel). It
seems to consistently "skip" roughly 15 scanlines - the first 15
scanlines after onset of VBL. When connected to a CRT it only "drops"
2-3 scanlines, but that is more likely due to Matlab being preempted
at VBL time rather than jumps in the scanline count. That is to be
expected on slow machines, because each VBL onset triggers a
low-level, high-priority hardware interrupt handler, which probably
takes a few microseconds to execute on older machines and which has
higher priority than anything else on a general-purpose operating
system.

I didn't test, and don't have the time to test, on other setups or
Macintosh computers at the moment, but I have never observed such an
effect on any CRT setup. As to why this happens on a flat panel, I
have no clue. It contradicts everything one can read about how
scanline counters work, so I would be interested as well in why it
happens (either an unnoticed bug in the drivers or some weird
side effect of chip or driver design?). CRTs and flat panels don't
have "missing scanlines" per se: the beamposition is the read-out
value of a hardware counter in the graphics chip, which is assumed to
increment linearly from zero to some maximum at a constant rate, then
wrap to zero. In the non-VBL area, the counter determines the next
line of pixels to be read out of the framebuffer and sent to the
output converters. When the counter exceeds specific thresholds (i.e.,
enters the VBL area), it triggers other actions, e.g., disabling the
drive voltage to the monitor to "blank" the raster beam, sending a
VSYNC signal to the display to trigger retrace of the scanning beam
to the top-left corner, etc.
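
For illustration, here is that idealized model as a toy MATLAB sketch
(purely my paraphrase of the above; no skipped lines):

    vtotal = 1250; vblStart = 1200; frameDur = 1/60;
    % Counter increments linearly at a constant rate, wraps at vtotal:
    beamposAt = @(t) floor(mod(t, frameDur) / frameDur * vtotal);
    % Exceeding the threshold (entering VBL) triggers blanking/VSYNC:
    inVBL     = @(t) beamposAt(t) >= vblStart;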

However, I think the practical impact of your discovery is pretty
negligible, for multiple reasons:

1. CRTs have good presentation timing, so it would be sad to spoil
that good timing through problems with timestamping. If you can find
significant effects on a CRT, that would be more of a concern to me.
My threshold for worrying is somewhere between 0.5 and 1 millisecond,
although we would still be much better than any other toolkit if we
had only 1 msec accuracy ;-)

2. Flat panels/LCD projectors have such bad, non-deterministic timing
that I think 200 microseconds of timing noise basically doesn't matter
at all, given the huge uncertainties - up to *multiple dozen*
milliseconds - if you try to use a flat panel for any kind of timed
presentation: Flat panel == random timing noise generator.

Flat panel != any deterministic visual presentation timing, at least
not for any mildly complex stimulus.

3. 0.2 msec of timestamp noise doesn't affect the presentation of
visual stimuli via 'Flip' at all, nor does it affect our skipped-frame
detector.

4. The returned timestamps are only off by 200 microseconds in a
small fraction of your "trials". If you really need timestamps in the
sub-100-microsecond range all the time, you should go for technical
"hard-core" solutions, e.g., photodiodes or trigger circuits attached
to your CRT or display connector that can reliably detect VBL onset
and trigger external equipment, or a realtime operating system like
Realtime-Linux, where one can write special drivers that work with
20 microsecond accuracy under all circumstances.

Regarding your proposal: it wouldn't be difficult to extend the
timestamping mechanism with a user-changeable lookup table to correct
for such effects. But a calibration routine that could reliably
compute such a LUT on a variety of systems would require a pretty
long calibration time - longer than most users would be willing to
wait, given that the typical 1-3 seconds of our current routine are
already too much for some people. So the most I would/could do is
implement such a LUT and some interface that allows external code to
change it (roughly along the lines of the sketch below). But I don't
think it's worth the effort, as there are other solutions for people
who need such high precision.
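
To illustrate what such a LUT could look like (hypothetical - no such
interface exists in PTB today, and all names are illustrative): the
table maps each observable beamposition to the measured time since
frame start, so the skipped lines cost no time:

    vtotal = 1250; vblStart = 1200; frameDur = 1/60;
    existing = setdiff(0:vtotal-1, 1230:1244);  % lines actually traversed
    lut = nan(1, vtotal);
    lut(existing + 1) = (0:numel(existing)-1) * (frameDur / numel(existing));

    % Timestamp correction by table lookup instead of the linear
    % beampos*lineDur estimate:
    tQuery  = GetSecs;
    beampos = Screen('GetWindowInfo', win, 1);
    vblTimestamp = tQuery - (lut(beampos + 1) - lut(vblStart + 1));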

Could you check on some CRT setups?

best,
-mario

