Timing Issue with scheduled screen flip

Hi there,


We are trying to schedule a visual stimulus for a set time after we begin audio capture for an experiment. Because PsychPortAudio('Start') doesn't allow you to schedule the start of audio capture, we scheduled the visual stimulus to appear at a set time after the reported start time of audio capture.


We used the 'when' argument of Screen('Flip') to do this, but we noticed a variable discrepancy between the start of recording and the visual stimulus onset as reported by the output arguments of Screen('Flip'). We wrote a simpler script to examine this issue further. The script presents visual stimuli by getting the current time and scheduling a flip for 100 ms later using the 'when' argument of Screen('Flip'), with some jitter added between stimuli:


https://gist.github.com/carrien/bd4bdf15904120f22f541e9e4a5da70f#file-run_fliptest-m-L27-L29

(Relevant lines highlighted)
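
For reference, the scheduling logic in those lines amounts to roughly the following (a simplified sketch, not the exact gist code; 'win' and 'white' are assumed to be the onscreen window handle and a stimulus color):

tNow = GetSecs;                              % current system time
when = tNow + 0.100;                         % request stimulus onset 100 ms from now
Screen('FillRect', win, white);              % draw a placeholder stimulus into the backbuffer
[vbl, stimOnset, flipStamp] = Screen('Flip', win, when);
offset = stimOnset - when;                   % the discrepancy we are measuring
WaitSecs(0.5 + rand * 0.5);                  % jitter before the next trial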


The returned stimulus time, whether taken from VBLTimestamp, StimulusOnsetTime, or FlipTimestamp, was off from the 'when' argument by up to 16.8 ms (see the attached figure, which was generated by the script above).


Probably not coincidentally, this matches the refresh interval reported by Screen('GetFlipInterval').
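
For reference, that interval can be queried directly (a sketch, with 'win' again the assumed onscreen window handle):

ifi = Screen('GetFlipInterval', win);   % measured refresh interval, ~16.7 ms on a 60 Hz display
% A flip requested via 'when' executes at the first vertical retrace at or after
% 'when', so the reported onset can trail the requested time by up to about one ifi.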


How do we get around the delays introduced by the refresh interval? We want to be able to schedule the visual stimulus for a fixed delay after the start of audio capture, whose start time PsychPortAudio('Start') does not let us control precisely in pure capture mode.


We're running this with MATLAB R2015b on Ubuntu 14.04 LTS, on a machine with these properties:

Memory: 3.7 GiB

Processor: Intel® Core™ i5-2540M CPU @ 2.60GHz × 4

Graphics: Intel® Sandybridge Mobile

OS type: 64-bit


The output of PsychtoolboxVersion is:

3.0.12 - Flavor: Debian package - psychtoolbox-3 (3.0.12.20160126.dfsg1-1~nd14.04+1)


Thanks for any help you can provide!

Ian & Carrie


you are on the right track. the display device runs its own frame-based clock. so when synchronising audio + video it is necessary to sync the audio events to the video, not the other way around.

there are various workarounds that come to mind, but it's tricky without knowing what you are trying to achieve. can you provide more detail about your need for synchronised audio capture + visual stimulus? what triggers the audio capture?

Thank you for your response!

 

For each trial in our experiment, we want to display a visual stimulus and record speech audio, and measure the latency of the speech response with respect to the visual stimulus.  It doesn't matter if the recording starts synchronously with the visual stimulus, as long as there is a consistent, known offset that we can use to correct the latency estimates.

 

Would it be enough to trust the inconsistent (but known after the fact) offset between recording and visual presentation, and to account for that offset when measuring the latency of the speech response? 

 

Ian & Carrie




You can get the start time of recording from:


tCaptureStart = PsychPortAudio('Start', pahandle, 0, 0, 1);
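
For this to work, 'pahandle' needs to have been opened in capture mode beforehand, for example along these lines (a sketch; the device, sampling rate and buffer size are assumptions, not values from this thread):

freq = 44100;                                          % assumed capture sampling rate in Hz
pahandle = PsychPortAudio('Open', [], 2, 1, freq, 1);  % mode 2 = capture only, low-latency class 1, mono
PsychPortAudio('GetAudioData', pahandle, 10);          % preallocate ~10 s of internal capture buffer
% ...then 'Start' as above, with waitForStart = 1 so the returned tCaptureStart
% is the actual time at which capture began.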


Then, given the selected audio sampling rate 'freq', once you have found the location 'n' of the first sample of the voice response in the audio sample vector returned by PsychPortAudio('GetAudioData'), you can translate that into the voice response onset time as:


tResponse = tCaptureStart + (n / freq);
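
One simple way to find 'n' (an illustrative approach, not the only one) is an amplitude threshold on the captured samples, assuming a single fetch after capture has stopped and a buffer large enough to hold the whole recording:

audiodata = PsychPortAudio('GetAudioData', pahandle);  % 1 x nSamples vector for mono capture
thresh = 0.1;                                          % assumed amplitude threshold
n = find(abs(audiodata(1,:)) > thresh, 1, 'first');    % first sample exceeding the threshold
tResponse = tCaptureStart + (n / freq);                % voice onset in system time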


So you'd simply do the following (a sketch putting these steps together follows the list):


1. Start audio capture as above.

2. Whenever you like, show the stimulus onscreen via [vbl, tVisualOnset] = Screen('Flip', win).

3. Wait for the duration of the response period, then stop audio capture.

4. Get the captured audio via 'GetAudioData' and find voice onset sample n.

5. tResponse = tCaptureStart + (n / freq);

6. ReactionTime = tResponse - tVisualOnset
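
Put together, one trial could look roughly like this (a sketch under the assumptions above; the 10 s response window and the threshold-based onset detection are illustrative, not prescribed):

% 1. Start audio capture; waitForStart = 1 returns the true capture start time.
tCaptureStart = PsychPortAudio('Start', pahandle, 0, 0, 1);

% 2. Draw and show the visual stimulus whenever convenient.
Screen('FillRect', win, white);
[vbl, tVisualOnset] = Screen('Flip', win);

% 3. Wait out the response period, then stop capture.
WaitSecs(10);
PsychPortAudio('Stop', pahandle);

% 4. Fetch the captured audio and find the voice onset sample n.
audiodata = PsychPortAudio('GetAudioData', pahandle);
n = find(abs(audiodata(1,:)) > 0.1, 1, 'first');

% 5./6. Convert to system time and compute the reaction time.
tResponse = tCaptureStart + (n / freq);
ReactionTime = tResponse - tVisualOnset;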


BasicSoundInputDemo([], 0.1); shows this in a similar way, just a bit more sophisticated. However, that demo does not use the low-latency / high-timing-precision mode one should use for this: voice onset timestamping is only a bonus feature in that demo, and the basic demo was also supposed to work on deficient Microsoft Windows systems, hence that little sacrifice of timing accuracy...


In general you should always test the accuracy of the audio timestamping once via some independent method, as one can never know whether there are hardware bugs somewhere. That said, since you are using Linux: if you use a typical built-in Intel HDA onboard sound chip, the accuracy of the audio timestamping has been better than 1 msec on all systems tested so far. Visual stimulus onset timestamping on the Intel Sandybridge graphics chip should also be excellent, so the only thing that could ruin your timing would be the use of a slow flat panel display instead of, e.g., a CRT monitor or a fast flat panel.


Btw. your Psychtoolbox seems to be slightly out of date. NeuroDebian now ships one from May 2016...


best,

-mario





