I understand the following issue is likely on EyeLink's side. Since there is a partnership underway between SR Research and PTB development, I hope there will be a joint effort to address it. I would be very happy to contribute feedback.
We have recently found an issue when using video calibration (EyeLink SDK) on Ubuntu 22.04.2.
The calibration video appears only very briefly (then disappears) and without sound. The issue subsequently affects movie playback in PTB: the movie freezes on the first frame, with no sound. PTB has no problem with movie or sound playback if the EyeLink calibration is not run first.
We don't have this issue on Ubuntu 20.04.6. The issue is much milder on Ubuntu 23.04: the very first calibration video does not play for about 10 s (I guess this is a setting in PTB), but after that it plays normally throughout the calibration, validation, and drift-correction procedures, and subsequent movie playback is not affected.
More specifically, it is the sound playback during Calibration and Validation, but not during Drift Correction, that causes the issue. We found that calibration and subsequent movie playback work fine when we use a silent calibration video (one without an audio track); with a silent calibration video, we can even use a video with sound for drift correction without any problem.
We replicated this issue with PTB versions 3.0.17 through 3.0.19, using the EyeLink SDK rather than the EyeLink scripts that come with PTB.
Well, Mario is off on holiday, so the underlying problem will need some time. But I manage 4 systems that run Ubuntu 22.04 and the latest Eyelink SDK with Eyelink systems (1000 & 1000 Plus), and my workarounds for sound mean we don't see any issues (though we don't use video calibration). The solution that worked for us is two-fold:
You must rewrite the callback function that Eyelink uses for calibration. You can either open PsychPortAudio first and then chain Snd onto it so Snd plays through that handle, or (what I do) rewrite EyelinkMakeSound to use my own beep function (I created my own class that just wraps PsychPortAudio). You pass your custom callback when you call EyelinkInit.
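A minimal sketch of the Snd-chaining route is below. The device index, sample rate, and channel count are placeholders, and I am assuming that `Snd('Open', pahandle)` accepts an already-open PsychPortAudio handle, as the Snd help text describes; check `help Snd` in your PTB version for the exact signature.

```matlab
% Sketch: open PsychPortAudio first, then tell Snd (and therefore Beeper
% and the default Eyelink feedback sounds) to play through that handle.
InitializePsychSound;
pahandle = PsychPortAudio('Open', [], 1, 0, 48000, 2); % [] = default device, playback, high latency
Snd('Open', pahandle);  % assumed usage: route subsequent Snd/Beeper calls through pahandle
```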
Sometimes the default sound device is not valid for PsychPortAudio, so we have to specify the correct device manually (try each device in the list until one works), and the right device may differ from machine to machine.
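A short sketch of how one might enumerate the devices and pick one explicitly; the chosen index is machine-specific and just an example:

```matlab
% List all audio devices PsychPortAudio can see, then open one explicitly
% by its DeviceIndex instead of trusting the default device.
devices = PsychPortAudio('GetDevices');
for i = 1:numel(devices)
    fprintf('%3d: %s [%s]\n', devices(i).DeviceIndex, devices(i).DeviceName, ...
            devices(i).HostAudioAPIName);
end
deviceIndex = 5;  % example value: pick whichever index actually works on this machine
pahandle = PsychPortAudio('Open', deviceIndex, 1, 1, 48000, 2);
```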
Ideally SR Research will update their callback to improve the audio handling in the future (Snd is legacy anyway), and perhaps the underlying bug will be fixed in Ubuntu updates. And while I usually prefer to stay on the bleeding edge software-wise, I learnt the hard way with Ubuntu that it is worth sticking with the latest LTS rather than being tempted by the latest release (many SDKs and complex drivers only officially support LTS versions and need a bunch of hacking to work on non-LTS releases).
Thanks, Ian, for your quick reply (both here and on the SR Research forum)!
I guess using a movie as the calibration target is more complicated than playing a sound. While waiting for a solution from SR Research or Mario, we will stay with 20.04 for a while, or start using 22.04 with silent calibration videos, which is not a dealbreaker for the studies we do.
I'm glad that, at least on this issue, we (including SR Research) are on the same page, which is crucial for finding a solution.
I'm not very familiar with what you meant by chaining Snd onto an opened PsychPortAudio. Did you mean: open PsychPortAudio first, then pass the handle to Snd? Do you have code showing how the chaining works?
The silent movie seems a passable workaround. Even with the fixes for Beeper in place, I don't think GStreamer's audio playback and PsychPortAudio will work well together (Psychtoolbox-3 - PsychPortAudio). One future option would be to modify the Eyelink calibration callback so that PsychPortAudio plays an attention sound while the video plays silently, if audio cues do help (I work with non-human primates, with many of the same calibration problems as working with non-verbal children…)
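As a rough, untested illustration of that idea, the modified callback could trigger a simple PsychPortAudio tone whenever the silent target movie starts; `pahandle` here is assumed to be a device opened elsewhere in your own code:

```matlab
% Hypothetical snippet for a modified calibration callback: play a short
% attention-getting tone via PsychPortAudio when the silent video starts.
attnTone = MakeBeep(600, 0.25, 48000);                        % 600 Hz, 250 ms, at 48 kHz
PsychPortAudio('FillBuffer', pahandle, [attnTone; attnTone]); % stereo: 2 x samples
PsychPortAudio('Start', pahandle, 1, 0, 1);                   % play once, start now, wait for onset
```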
Playing a sound with the calibration animation is something we are looking for (we actually play sounds paired with silent movies in PTB). But I believe the calibration callback function is embedded in the Eyelink mex file, which puts it beyond what I can do.
Why do you think using 20.04 is not a good idea? Is there any particular downside compared with 22.04? I understand that PTB currently recommends 22.04.
While the calibration driver is a mex file, it actually uses a plain m-file for screen management that is easy to edit. The default one is Psychtoolbox/PsychHardware/EyelinkToolbox/EyelinkBasic/PsychEyelinkDispatchCallback.m.
I copied that file, edited it, and use my custom version when initialising the eye tracker. This way I can activate our reward system, modify the sound driver, and use better calibration markers (we use those from Thaler L, Schütz AC, Goodale MA, & Gegenfurtner KR (2013)).
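A rough sketch of that workflow; the file name `MyEyelinkDispatchCallback.m` is just an example, and the assumption that EyelinkInit accepts the callback function's name as its second argument follows the description above, so check `help EyelinkInit` on your install:

```matlab
% Copy the default dispatcher, edit the copy, and hand its name to EyelinkInit.
src = fullfile(PsychtoolboxRoot, 'PsychHardware', 'EyelinkToolbox', ...
               'EyelinkBasic', 'PsychEyelinkDispatchCallback.m');
copyfile(src, 'MyEyelinkDispatchCallback.m');   % edit this copy: sounds, targets, reward triggers

el = EyelinkInitDefaults(window);               % 'window' = your onscreen window pointer
% Assumption: the second argument names the callback function to use instead
% of the default PsychEyelinkDispatchCallback.
EyelinkInit(0, 'MyEyelinkDispatchCallback');
```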
Regarding 20.04: if you are sure that the timing and reliability of your actual task, and of any hardware you use, are good, then 20.04 should be fine. But keep in mind that this is not the OS version Mario tests with or that most other users are likely to be on, and it does have issues with specific features such as HDR.
Hello, we also use Ubuntu 22.04.2. Just putting in here my own solution to this issue [1], which was basically just to add two more fields to the el structure: el.calib_pahandle, the handle to the (already open) audio device, and el.calib_sound, the audio data you want to accompany the calibration video. Then I modified the original Eyelink functions by placing calls to start and stop audio playback in the appropriate places, and it works (as long as we never update Psychtoolbox).
Audio playback then starts at the beginning of every video loop. You just have to set the two fields above to the proper values before calling for a calibration in your script.
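Assuming those modified m-files are on the path, the calling script would look something like the sketch below. The field names are the poster's own additions (not standard PTB), the wav file name is a placeholder, and `window`/`pahandle` come from your own setup code:

```matlab
% Set the two custom fields before running the calibration routine.
el = EyelinkInitDefaults(window);
el.calib_pahandle = pahandle;                 % already-open PsychPortAudio device handle
[wav, fs] = psychwavread('calib_sound.wav');  % audio paired with the calibration video
el.calib_sound = wav';                        % stored as channels x samples
                                              % (fs should match the rate pahandle was opened with)
EyelinkUpdateDefaults(el);                    % push the modified settings to the callback
EyelinkDoTrackerSetup(el);                    % calibration/validation as usual
```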
I remember also trying the suggestion in the Snd documentation, but in the end modifying the scripts and not using Snd at all was much simpler (I think modifying the Eyelink scripts was needed anyway to fix Snd, but it has been some time).
I would not recommend using the original Snd, or GStreamer, or anything else to play audio while also using PsychPortAudio, as the current Eyelink scripts do, unless that combination is declared officially supported. On our old Mac we had MATLAB crashes that turned out to be caused by this, even though playback was fine maybe 98% of the time (which made the bug hard to reproduce).
We ended up using the same approach: modifying the EyeLink callback functions and using a global pahandle to control audio playback throughout our experiments.
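For what it's worth, the global-handle pattern can be as simple as the following sketch; the helper function and variable names here are made up for illustration, not part of PTB or the Eyelink toolbox:

```matlab
% In the main experiment script: open the audio device once and make the
% handle global so the modified Eyelink callback can reuse it.
global calib_pahandle;
calib_pahandle = PsychPortAudio('Open', [], 1, 1, 48000, 2);

% Example helper (hypothetical name) called from inside the modified callback:
function PlayCalibSound(soundData)
    global calib_pahandle;
    PsychPortAudio('FillBuffer', calib_pahandle, soundData); % soundData: channels x samples
    PsychPortAudio('Start', calib_pahandle, 1, 0, 1);
end
```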
The work time allotted under the 2022 partnership with SR Research is mostly used up, or at least will be after integration of the new Eyelink toolbox updates, so further work would depend on a renewal of the partnership. But this kind of audio compatibility work is expensive, so I wouldn't bet on it getting resolved within the scope of the partnership. These problems would all be easy (and a priority) to fix if we had proper funding from our users, but with > 99% of all users just free-riding, money for time-intensive, complex work will continue to be (too) tight. This kind of sound problem could have been fixed a year ago already, but the money shortage prevented that.
In principle it should be possible to use PsychPortAudio together with GStreamer movie audio playback, if (and only if) one doesn't care at all about precise and trustworthy audio playback timing from PsychPortAudio. Calling PsychPortAudio('EngineTunables', [], [], [], 0) as the very first command, before any other PsychPortAudio-related commands, keeps the Pulseaudio sound server running while PsychPortAudio is in use. Opening sound devices with 'reqlatencyclass' 0 (no timing precision, high latency), e.g., as BasicSoundOutputDemo.m or Snd() (~ Beeper() ~ the Eyelink default audio feedback) do, selects the virtual ALSA "default" audio device for output. This routes sound through the ALSA Pulseaudio plugin to the Pulseaudio server, which should be compatible with GStreamer audio playback and other desktop sound applications. The downside is that this virtual audio device does not provide any audio timing/timestamping/latency control mechanisms, so timing and latency will be shot. It might be enough for simple feedback tones, though…
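Reading that back, the compatibility setup seems to boil down to something like this untested sketch; the sample rate, channel count, and tone are placeholders:

```matlab
% Pulseaudio-friendly mode: no timing guarantees, but should coexist with GStreamer.
PsychPortAudio('EngineTunables', [], [], [], 0);        % first PsychPortAudio call: keep Pulseaudio running
pahandle = PsychPortAudio('Open', [], [], 0, 48000, 2); % 'reqlatencyclass' 0 -> virtual ALSA 'default' device
beep = MakeBeep(800, 0.1, 48000);                       % simple feedback tone
PsychPortAudio('FillBuffer', pahandle, [beep; beep]);
PsychPortAudio('Start', pahandle, 1, 0, 1);
```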
I ended up using a global audio handle to take care of sound playback during both eye-tracker calibration and the study's stimulus presentation. It involves some modification of the Eyelink functions, but everything works as we expected.