Timing issues / missed frames on Dell XPS 13, Ubuntu 18.04

Dear all,

we are experiencing timing problems on our Dell XPS 13 laptop (Developer Edition) running Ubuntu 18.04.5 LTS.

Psychtoolbox does not issue any warnings/errors, and the typical tests run fine (e.g. VBLSyncTest). The only thing I noticed is that if I run VBLSyncTest for longer periods, e.g. 10 minutes, Linux suddenly shows a login screen after some time, despite the maximum priority set within the script.
Once we start the actual experimental code (which works perfectly on our stationary Linux machines), many frames are lost, as measured by examining the StimulusOnsetTime returned by the [~, stimulusOnset, missed] = Screen('Flip', ptb.w.id); command, and trials based on frame loops take up to 1500 ms longer than expected.
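
For concreteness, our per-frame logging boils down to something like the following sketch (variable names simplified; `win` stands for our onscreen window pointer, and the FillRect is just a placeholder for the real stimulus drawing):

```
% Sketch: count likely missed frames over nFrames flips (win = open onscreen window)
ifi = Screen('GetFlipInterval', win);      % nominal frame duration in seconds
nFrames = 600;
missedLog = zeros(1, nFrames);
vbl = Screen('Flip', win);
for f = 1:nFrames
    Screen('FillRect', win, 128);          % placeholder for the real stimulus drawing
    [vbl, stimulusOnset, ~, missed] = Screen('Flip', win, vbl + 0.5 * ifi);
    missedLog(f) = missed;                 % > 0 suggests the flip deadline was missed
end
fprintf('Frames with missed deadline: %d of %d\n', sum(missedLog > 0), nFrames);
```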

We have configured the x-screens and installed the lowlatency kernel. The last thing to try seems to be installing GameMode, but I just wanted to double-check with experts here that I am not missing something obvious. We never had to use the GameMode before on Linux and always had satisfactory timing with this same code.

Many thanks in advance!

Below is the system information:

  • Output of >> PsychtoolboxVersion
    ans = 3.0.17 - Flavor: Debian package - psychtoolbox-3 (

  • Which platform (Mac OS X, Windows XP/Vista/7, Linux, …) AND which MATLAB/Octave version you are using?
    Ubuntu 18.04.5 LTS
    Octave 4.2.2

  • A minimal code snippet that exhibits the issue you are having; please format the code with the preformatted text tool provided by the forum software, or use markdown fenced code blocks.

Too embarrassing to post, but nothing fancy that would result in frame losses.

  • Warnings and Errors that were printed to the console (please read them carefully, they may contain advice on how to solve your problem!)
    PTB-INFO: This is Psychtoolbox-3 for GNU/Linux X11, under GNU/Octave 64-Bit (Version 3.0.17 - Build date: Jan 22 2021).
    PTB-INFO: OS support status: Linux 5.4.0-66-lowlatency Supported.
    PTB-INFO: Type 'PsychtoolboxVersion' for more detailed version information.
    PTB-INFO: Most parts of the Psychtoolbox distribution are licensed to you under terms of the MIT License, with
    PTB-INFO: some restrictions. See file 'License.txt' in the Psychtoolbox root folder for the exact licensing conditions.

PTB-INFO: For information about paid priority support, community membership and commercial services, please type
PTB-INFO: 'PsychPaidSupportAndServices'.

PTB-INFO: OpenGL-Renderer is Intel Open Source Technology Center :: Mesa DRI Intel(R) UHD Graphics (CML GT2) :: 3.0 Mesa 20.0.8
PTB-INFO: VBL startline = 2160 , VBL Endline = -1
PTB-INFO: Will try to use OS-Builtin OpenML sync control support for accurate Flip timestamping.
PTB-INFO: Measured monitor refresh interval from VBLsync = 16.668124 ms [59.994753 Hz]. (50 valid samples taken, stddev=0.003058 ms.)
PTB-INFO: Reported monitor refresh interval from operating system = 16.667778 ms [59.995998 Hz].
PTB-INFO: Small deviations between reported values are normal and no reason to worry.
PTB-INFO: Psychtoolbox imaging pipeline starting up for window with requested imagingmode 1027 …
PTB-INFO: Will use 8 bits per color component framebuffer for stimulus drawing.
PTB-INFO: Will use 8 bits per color component framebuffer for stimulus post-processing (if any).
PTB-INFO: Failed to request additional performance tuning from operating system.
PTB-INFO: This is because the optional "FeralInteractive gamemode" package is not installed
PTB-INFO: and set up yet. If you want to have these extra optimizations, then read
PTB-INFO: the setup instructions in "help LinuxGameMode".

  • Hardware setup (GPU etc.) and relevant driver versions.
    Intel® Core™ i7-10510U CPU @ 1.80GHz × 8
    Intel® UHD Graphics (CML GT2)
    GNOME 3.28.2

In the meantime we figured out that DrawFormattedText within the frame loop slows down stimulus presentation and causes lost frames (even if it is really just one letter being drawn per frame). Still, it is puzzling that it works perfectly on other systems, but not on this one.

I would install GameMode; it is easy to do and brings some clear performance improvements. You seem to be doing everything else right: you are using the latest PTB and the Mesa driver, and there are no obvious sync issues.

Why not use Screen('DrawText') as a simpler alternative?

Hi Natalia

The specs of that machine are fine for Linux, and so far i've heard good things about that model. The PTB output suggests excellent timing behaviour, as expected for that setup.

I think it's simply that the graphics chip is a bit overwhelmed with the job, or the main processor with something you do in your script. The Dell XPS is not a machine advertised for high-performance graphics, gaming or VR, and the Intel Comet Lake graphics chip is not the fastest - probably lower middle class on the spectrum of graphics chips. I'd assume your other machines probably have way more powerful graphics cards from AMD?

Now text drawing is one of the most expensive 2D drawing operations, and Screen('DrawText') instead of DrawFormattedText() may save a tiny bit of cpu overhead, but if drawing a single letter causes skipped frames then the machine must already be quite taxed by something else, or something in your script is very inefficient and causing this as a side-effect.

A few thoughts on squeezing out more performance:

  • Installing GameMode is a good idea; it will optimize cpu and gpu performance at Priority(1) and higher, but revert to more energy-saving settings at standard/default Priority(0).
  • The machine seems to use the old DRI OpenGL drivers instead of the new and usually faster Iris OpenGL driver (otherwise it would say "Mesa Intel…" instead of "Mesa DRI Intel…"). Not sure why it hasn't selected that one automatically for your hardware, but then i don't have any modern Intel chips running under 18.04 LTS anymore.
  • Upgrading to Ubuntu 20.04.2 LTS could be beneficial, or a manual upgrade to the latest and fastest Mesa graphics drivers.

I can also see that this seems to be running on a rather high resolution (3840x2160?) screen? The higher the screen resolution, the higher the demands on the graphics chip. Memory bandwidth demand and computation time for some operations scale with pixel area, so a 3840x2160 screen will be 4x as expensive as a 1920x1080 screen.

And the imaging pipeline is active, doing some post-processing, which will add to the load, its cost again proportional to the high resolution of your display.

The script DrawingSpeedTest, if you look at the code relating to the optional gpumeasure parameter, shows some code that allows you to measure the gpu's processing time from flip to flip if you add it to your script. If that number approaches 16 msecs, then the text drawing would just be the bit that pushes it over the limit.

A closer look at this would require looking at your script, and for your lab to buy some priority support.


Hi Mario, hi Ian,

thanks for your suggestions!
Following Ian's suggestion to switch to Screen('DrawText') worked for us for now.

We are currently in the middle of an experiment and I would like to hold off on installing new things until it is over. It is also hard to run all the tests Mario is suggesting, because we need to drag the laptop back and forth between different locations.

However, I would like to fix this properly in the long term, so I will get back to this thread with our newly acquired priority support key once data collection is finished :slight_smile:

In the meantime all the best!


Hi all, hi Mario,

I am following up on this thread.

  1. I have installed gamemode following the instructions in the README of the FeralInteractive/gamemode GitHub repository (starting from the section "Development"). Not sure whether that was correct, but I haven't found any better explanation of how to install it anywhere.
    How do I know PTB is using it? I don’t see any relevant messages when PTB is initializing.
    Priority(99) works from command line, but within my stimulus code I can only use priority level 1, otherwise PTB says:
    PTB-CRITICAL: In call to PsychSetThreadPriority(): Failed to set new basePriority 2, tweakPriority 1, effective 100 [REALTIME] for thread (nil) provided!
    PsychHID: KbQueueStart: Failed to switch to realtime priority [Invalid argument].

  2. I have tried to update to the newer Mesa driver by following these instructions:
    Install Mesa Graphics Drivers on Ubuntu [Latest and Stable]
    The installation seems to work, but Psychtoolbox still reports use of the Mesa DRI Intel(R) UHD Graphics driver:
    Mesa DRI Intel(R) UHD Graphics (CML GT2) :: 3.0 Mesa 21.0.0 - kisak-mesa PPA

I know that the resolution of the primary screen (X-Screen 0), 3840x2160, is a disaster. Normally the stimuli are shown on a secondary screen (X-Screen 1) with 1920x1080 resolution, which is also my current configuration for testing. I don't know why, but 3840x2160 is the default resolution for the laptop's native screen. Setting it to anything else leads to losing the mouse cursor as soon as it exits the screen borders, which means a reboot :frowning:

And we would like to upgrade to Ubuntu 20.04.2, but this would require some work from our IT people, who, of course, are very busy at the moment.

Thanks a lot and hoping for help!

Here is the code for priority support:


Hi Natalia,

answering your questions below. But i think the more straightforward way to figure out why you get frame misses would be to have a look at the trial loop of your script. Or maybe just e-mail me your main script if you want to keep it secret; i will do a "burn after reading" once we're done.

Or you could add some gpu time measurement to your inner trial loop, to see how much time the gpu spends actually drawing and whether the bottleneck is graphics or something else in the script (see DrawingSpeedTest() for reference):

Before the first drawing command for a frame:

Screen('GetWindowInfo', window, 5);

After the Screen('Flip') insert this to log times:

winfo = Screen('GetWindowInfo', window);
tGPU(end+1) = winfo.GPULastFrameRenderTime;

So a plot(tGPU * 1000); would give you a plot of gpu execution time in msecs for each drawn frame. If you see much less than 16 msecs of rendertime, then the graphics part is probably not the problem.

If i understand correctly, replacing DrawFormattedText() by Screen('Drawtext') fixed the frame drops?

DrawFormattedText() can easily take 5x more execution time than Screen('DrawText'), because it has to do quite a bit of computation for formatting the text, and it is an M-file instead of fast C code. On Octave it might be even slower than on Matlab. E.g., i see something like 2.5 msecs execution time vs. 0.4 msecs for a simple "Hello World!" text string on my old Ubuntu 18.04 machine under Octave.

But 2.5 msecs by itself shouldn't cause skipped frames, so something else must be highly suboptimal if the machine is already pushed close to 16 msecs per frame and DrawFormattedText pushes it over the edge.
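
If you want to check those numbers on your own machine, a quick and dirty sketch like this would do (illustrative only; `win` is an assumed open onscreen window, and this measures cpu submission time, not gpu render time):

```
% Crude timing comparison of the two text drawing functions (sketch):
t1 = GetSecs;
DrawFormattedText(win, 'Hello World!', 'center', 'center', 255);
t2 = GetSecs;
Screen('DrawText', win, 'Hello World!', 100, 100, 255);
t3 = GetSecs;
fprintf('DrawFormattedText: %.2f ms vs. DrawText: %.2f ms\n', ...
        1000 * (t2 - t1), 1000 * (t3 - t2));
Screen('Flip', win);
```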

Oh, i just see that the easy-to-install PPA described in our help LinuxGameMode no longer exists, and i couldn't find an alternative, so what you did is the proper and only way to get it on Ubuntu 18.04. Users of Ubuntu 20.04-LTS or later have it easier: gamemode is included with the OS by default and should auto-install for a NeuroDebian Psychtoolbox. Will need to update our help text for the next release…

A Screen('Preference','Verbosity', 4); Priority(1); Priority(0);

will give some feedback, something like this on Priority(1):
“PTB-INFO: Gamemode optimizations enable requested. Current/Old status: Disabled
PTB-INFO: New status: Active”

I think the fact that it stays silent means it is working, as i think it would output some info message if gamemode were not working/installed.

In a terminal you can also do a tail -f /var/log/syslog and then run the above commands in Octave/Matlab, and observe the status output in the system log, giving more details about what gamemode actually changes.

Wouldn't hurt to see its output posted here. Btw, as i just learned from some discussion on the gamemode GitHub repo, the default settings for power optimizations via gamemode may not necessarily be the best for a machine with Intel integrated graphics only, like yours. The output of
grep gamemode /var/log/syslog | grep Loading would point to the config file being used. Psychtoolbox normally installs its own config file when one runs the PsychLinuxConfiguration setup script (or simply UpdatePsychtoolbox etc.) and no config file is installed yet.

The range of priority levels on Linux goes from 0 - 99, and KbQueueStart will set the priority of its background input processing thread to the level requested by Priority() + 1, to make sure a very busy script can't impair the timing of user input data collection. This would end up requesting a priority of 99 + 1 == 100, which is out of the valid range, hence the failure. Some other background processing threads, e.g., for PsychPortAudio, will also raise priority by such a delta and could therefore fail to set priority. Essentially, whenever Psychtoolbox uses background processing threads for various tasks, it prioritizes them relative to what was set via Priority(), so that PTB's threads can't get into each other's way, timing-wise.

In practice our config file /etc/security/limits.d/99-psychtoolboxlimits.conf allows Psychtoolbox users to choose realtime priorities up to 50, so you can expect trouble already when going beyond maybe 48.

In practice a Priority(1) should be enough for almost any use case, or some low numbers. That’s why the MaxPriority function only returns 1. The reason for not allowing the full range up to 99 by default is that if one would choose a too high priority for Psychtoolbox, it might preempt other system processes from computation time on which PTB actually depends, e.g., the display server, so setting a too high priority can have the opposite of the intended effect.

By now that PPA already provides Mesa 21.2.2; running a software update should get you that. Probably the more modern iris driver is not used by default because your machine still uses the old X-Server 1.19: xdpyinfo | grep version in a terminal would report the server version.

A HiDPI 4k display like that sucks up quite a bit of gpu performance, but i doubt that that is the main problem, only something that will push it a bit closer to the edge of skipping a frame.

The cursor issue sounds like an annoying GUI bug. One trick to recenter the cursor on X-Screen 0 is to type sca in Octave/Matlab if its window is reachable.

Hmm. Given you have admin rights on your own machine, this should be easy and safe (famous last words), and with a fast SSD take less than half an hour. Just make backups of your data (better safe than sorry) and start the automatic upgrade to the next LTS release; about a lunch later it should be done. Ok, in your case you'd also have to remove the newer Mesa version first, as described on the webpage of the newer Mesa driver, doing a so-called PPA purge. And switch back to a single X-Screen via XOrgConfSelector before.

At least an OS upgrade should be faster and more straightforward than what you did already with the gamemode and Mesa upgrades…


Hi Mario!

Thanks a lot for all the suggestions.
I think I - sort of - figured it out, at least the timings look fine now. I also learned a lot.
I installed Ubuntu 20.04.3 LTS (indeed, this was easy); the current NeuroDebian PTB (3.0.17) and Octave 5.2.0 were somehow mysteriously already present in the new Ubuntu.
Lowlatency and gamemode were enabled (I double-checked according to the advice you provided), and I automatically had the right graphics driver, so that’s great.
I measured the gpu processing time, as you suggested, and gradually uncommented all the drawing commands we use to see what causes the problem.
I also got rid of the stereo pipeline, which was drawing the same stuff into the two stereo buffers, and optimized a few things to reduce the overall number of drawing commands, after which the GPU processing time went down from 15 to around 4 ms. And there are no lost frames.

It seems like the issue was a cumulative effect of different operations inside the loop combined with a graphics card that is not particularly powerful. The possibility of measuring GPU time is very helpful!

I did not want to post the code here mainly because it is embarrassingly messy (there is nothing secret about it, of course!).


Good! A more modern system! Myself, i only have one over-10-year-old Ubuntu 18.04 LTS laptop left, and PTB functionality is already more limited in some interesting areas on machines older than 20.04-LTS. NeuroDebian stopped providing new updates for 18.04-LTS, so PTB there is now permanently frozen. So upgrading to 20.04-LTS is certainly recommended to users.

The upgrade of octave is expected, but that it would also switch Psychtoolbox properly is a mild but pleasant surprise to me.

Would be good to see the terminal output of:

uname -r – Is it Linux 5.11 lowlatency? That would be the latest for 20.04.3-LTS.

xrandr --listproviders – Does it contain “modesetting” at the end?
grep iris ~/.local/share/xorg/Xorg.0.log – Should report iris as display driver.

What Mesa OpenGL renderer does it report now in the PTB-INFO output? If the “DRI” is gone from the name string, that would suggest the new iris OpenGL driver is used for some extra graphics performance and some other new features.

In Octave, does PsychtoolboxVersion report the expected NeuroDebian version? That would suggest a successful switchover of NeuroDebian from 18.04 to 20.04. If 3.0.17-6-dfsg is reported, that might need another setup step to receive future updates.

One more thing: A major OS update could have changed the names of video outputs and invalidated the xorg.conf file for separate X-Screens. Unless you already tested 2 X-Screen mode and everything worked, it could be a good idea to rerun XOrgConfCreator() and XOrgConfSelector() to recreate that file with guaranteed matching names.

Good. What drawing command was the baddy in the end? Or nothing specific? Sometimes there is room for optimization in your code, or in a future PTB.

Yep, that’s useful. A few more tricks to squeeze out performance or find bottlenecks:

The Screen GetWindowInfo? online help also describes more ways to use the infoType parameters 5 and 6 to start and stop the gpu time measurement at specific points, e.g., to only measure how long a given set of drawing commands took, excluding processing by the imaging pipeline. Or simply shift where you place the Screen('GetWindowInfo', window, 5); command to start measuring, if you don't want to uncomment drawing code in your script. E.g., if you'd place it right before a Screen('Flip') or Screen('DrawingFinished'), you'd only measure how much the image post-processing costs.

Also, structuring a script's frame drawing code to first have all drawing commands, then a Screen('DrawingFinished', window [, dontclear]);, then all code unrelated to drawing, then Screen('Flip', window [, when][, dontclear], ...) can help in demanding scripts. The dontclear flag has to be specified identically for DrawingFinished and Flip if you use it for Flip; otherwise it can be omitted.

This way you can feed the gpu all the work for a frame, and then let the cpu process all other non-graphics stuff before the next flip, while the gpu munches away in the background. This increased parallelism hides execution time and can give quite a boost in some situations, depending on the complexity of the rest of the code. Your mileage may vary… Cf. DotDemo.m or LinesDemo.m for scripts that are structured to make use of DrawingFinished. In these scripts it can be a performance boost, because the script has to do a lot of vector math to compute updated dot or line-segment positions for the next frame, and this can run on the cpu in parallel with the gpu's drawing of the dots/lines for the current frame. The effect may only show for larger numbers of dots or lines than the default, depending on relative cpu and gpu speed.
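
Put together, such a frame loop would be structured roughly like this (a sketch, not taken from any demo verbatim; win, ifi, vbl, xy are assumed to be set up beforehand, and updateDotPositions is a hypothetical helper standing in for the non-graphics work):

```
dontclear = 0;
for f = 1:nFrames
    % 1. Submit all drawing commands for this frame:
    Screen('DrawDots', win, xy, 4, 255);
    % 2. Tell the gpu to start processing them now:
    Screen('DrawingFinished', win, dontclear);
    % 3. Non-graphics cpu work runs in parallel with gpu rendering:
    xy = updateDotPositions(xy);   % hypothetical helper
    % 4. Present at the next video refresh:
    vbl = Screen('Flip', win, vbl + 0.5 * ifi, dontclear);
end
```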

If the modern iris gallium OpenGL driver is in use now (and this would also work for AMD gpu’s and NVidia gpu’s with the open-source nouveau driver), then here’s another nice way to get some rough idea wrt. performance and machine load:

Before starting a script / after starting Octave/Matlab:
clear all; setenv('GALLIUM_HUD', 'fps,cpu,GPU-load,frametime');

On modern drivers this will overlay some graphs on the stimuli, giving an idea of processor load, framerate, and possibly gpu load. The exact type of information differs by type of gpu; a list of all options can be printed via setenv('GALLIUM_HUD','help');

Sometimes gives a quick crude hint to where bottlenecks could be, without needing to change the experiment script.

That’s a pretty good improvement!

More tuning tips for trading off convenience vs. gpu load/performance:

Depending on the stereo mode you need, sometimes calling Screen('OpenWindow', ...) instead of PsychImaging('OpenWindow', ...) may give you ok results via our older, less flexible stereo algorithms, without use of the imaging pipeline. This is less powerful, but may help on slow gpu's. E.g., stereomode 4 for side-by-side stereo is probably fine and retains the convenience of separate stereo buffers, whereas anaglyph stereo would have lower quality and flexibility without the imaging pipeline.
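
E.g., a minimal side-by-side stereo setup without the imaging pipeline could look like this sketch (screenNumber, leftRect and rightRect are assumed placeholders):

```
% Open a window in stereomode 4 (side-by-side stereo) without PsychImaging:
[win, rect] = Screen('OpenWindow', screenNumber, 0, [], [], [], 4);
Screen('SelectStereoDrawBuffer', win, 0);   % draw into left-eye buffer
Screen('FillOval', win, 255, leftRect);
Screen('SelectStereoDrawBuffer', win, 1);   % draw into right-eye buffer
Screen('FillOval', win, 255, rightRect);
Screen('Flip', win);
```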

There are also optimizations (see ImagingStereoDemo.m, some uncommented code around lines 176+) to limit processing of the imaging pipeline to a sub-region of the screen, if the stimulus only covers the central part of the screen or another limited ROI.

And if you’d want to display the same content for the left- and right eye in a dual-display or side-by-side binocular setup, there are some special mirroring commands (see help PsychImaging) which can do just that more efficiently than using a stereo mode.

Between us, I doubt that your code is messy by typical PTB user standards, but fair enough :slight_smile: .

So the priority support part of this license is now used up, but i hope it was worth the money. Hopefully others can also learn something from our conversation, even if it is only that buying a community membership can save time and money :wink:



Hi Mario!

Just following up on your specific questions.

This gives me 5.4.0-86-lowlatency instead of 5.11.
Is there a need to upgrade? Things seem to work nicely as they are.


[ 102.319] (II) modeset(0): [DRI2] DRI driver: iris
[ 102.395] (II) modeset(1): [DRI2] DRI driver: iris
[ 102.402] (II) AIGLX: Loaded and initialized iris
[ 102.406] (II) AIGLX: Loaded and initialized iris

Mesa Intel(R) UHD Graphics (CML GT2) :: 4.6 (Compatibility Profile) Mesa 21.0.3

I guess this looks good?
ans = 3.0.17 - Flavor: Debian package - psychtoolbox-3 (

Yes, I have reconfigured the X-Screens again.

Thanks again for all the help!


Hi Natalia,

looks all excellent. No need to upgrade Linux to 5.11 if everything works nicely; i was just curious. I think Ubuntu is more conservative when upgrading from a previous version like 18.04 than when doing a fresh install from installation media to 20.04.3.

Case closed, i guess. :slight_smile:

Hi there,
thought I'd jump on this year-old thread to follow up with a question about graphics cards and timing precision, to inform my next purchase. I started working with PTB-3 on my old DELL XPS 15 9530 from 2014 (with Windows 10 in the meantime) and SyncTest continually fails. It is ok to skip sync tests while developing my experiment script, but I am starting to think about what machine to use/buy for running the actual experiment with optimal timing precision (I need both visual and sound precision). I strongly suspect the reason for my problems is that my laptop uses a muxless hybrid GPU setup with an Intel iGPU and an NVIDIA GeForce 750M dGPU. As far as I understand what I read in help SyncTrouble and help HybridGraphics, my machine is lost for precise stimulus timing with PTB. Since you keep recommending DELL's XPS 13, I was just wondering if newer DELLs no longer have this issue, or if there are ways to make them work just fine as long as they have Linux installed. I would be thankful for a word on what to look out for when buying a new machine with two graphics cards, or whether a hybrid setup should be avoided as much as possible when buying a new machine.

Thanks in advance!

Yes, on Windows.

Only on Windows, not on modern Linux with a recent PTB.

This applies to almost all hybrid graphics laptops and is not specific to Dell. I also don't think i specifically recommended Dell over other makes; i just said i never had trouble with Dell, but it has been a few years since i worked with them. And some of the XPS models ship with Linux preinstalled and vendor-supported, so they are probably a good choice for beginners.

As our advice says, avoid if you can. A hybrid setup with its increased hardware + software complexity and degrees of freedom of configuration is never quite as plug & play as a single gpu setup, even in the many common cases where current PTB + modern Linux can make it work fine, where any other existing vision science software falls apart completely. There’s always extra configuration hassle, and often some more special use-cases where certain functionality (e.g., some stereo, HDR, special presentation modes) doesn’t work, or where performance may be degraded (e.g., AMD iGPU + non-AMD dGPU, or older AMD iGPU’s). Why complicate things?

help HybridGraphics usually gives the most up to date status and advice.

The best way to thank me is to buy our support licenses, as that is what funds future development and maintenance. Or would fund it if more than < 1% of all our users would buy them.

Since the trend has been for manufacturers to move away from usable ports to include just very few, very limited ports (mostly USB-C) on their laptops (such as the DELL XPS), could it be a problem for timing precision if you need to use additional hubs/adapters between the laptop and, say, a trigger box for sending event markers (TTL) to an EEG? Would this be a scenario where you'd be better off with a desktop PC? Was just wondering if you're inviting timing and synchronisation issues in dongle land, or if this works just fine.

Is there a way to support PTB financially other than support licenses, like do you have a paypal, ko-fi, patreon, etc. where one could make smaller recurring contributions?

I can only add that after the troubleshooting recommended in this thread the laptop works great. We never had any obvious issues. We do not run EEG experiments with the laptop, but we do connect a trigger box and the ethernet cable for the Eyelink eye tracker, both via adapters (and of course a second screen). The laptop is able to handle everything well.


@natalia Thanks for the feedback Natalia.

Not likely. Any USB-C video output is just "USB-C DisplayPort Alt-Mode", repurposing some of the USB-C wires for classic DisplayPort signalling, so the timing is the same as with any DisplayPort → DP/HDMI/DVI/VGA dongle. USB-C and modern USB should be faster, if anything, than older USB, and the USB duty-cycle can be shorter, decreasing latency on the input side afaik, so i'd expect it not to become worse than what old USB did, although i haven't read up on the latest USB specs over the last years.

Also: PTB can only detect what happens up to the point where the video signal leaves the graphics card's output connector, so any sync problems reported by PTB are always computer/graphics card/OS/driver/configuration issues, nothing to do with cables, dongles or the display device itself. A low-quality/shaky/noisy adapter or cable could cause signal dropouts or quality issues though, e.g., by triggering link re-training, which would affect timing, picture quality, color bit depths etc. But that's no new problem.

No. I tried PayPal for multiple years and few people paid, funding less than 2 months out of 5 years - almost all money came out of my own pocket, almost bankrupting me.

As part of the proper PTB business, we tried a cheap membership for 25 Euros again, for over a year from December 2020 to May this year. Nobody paid shit. I think maybe 3 or 4 people paid a total of ~ 100 Euros in a whole year, not even remotely covering the setup costs. So that is a complete failure, and likewise, 1.75 years in, the 150 Euros/year membership is a failure so far, even after tweaking it again last May. User/lab contributions barely pay for 1 month of operating costs per year, and even that only because i currently work at 50% of what would be a reasonable salary. Nobody can tell me that a typical, at least half-way reasonably funded research lab cannot afford to spend 150 Euros per year on a tool as crucial to their work as Psychtoolbox. Definitely nobody can tell me that only less than 1% of all labs have that much money to spare for what is research equipment.

We are running a user survey atm., where we try to get more feedback about good or bad excuses for why labs don't follow through on anything they promised us in our last user survey from 2016-2018 and throughout various VSS funding workshops, with feedback from roughly ~ 2000 participants, and about what we could possibly change/tweak to improve the situation. Participants even get a discount code for the purchase of a support membership. So far, participation in that survey is also pretty disappointing: after 3 months only 115 responses were collected. That is less than what our previous survey from 2016 collected in under two days, and too little to draw any meaningful conclusions at such an abysmal sample size. Links to the survey are in help Psychtoolbox, the announcements category of this forum, and various other places.