Hi.
I am trying to display a stimulus on one monitor (3840x2160), and display a copy of this stimulus on another monitor (1920x1080). I tried to create a texture, then display that texture on the 2 monitors. This works fine on the monitor that the texture was linked to at creation, but the other monitor is all white. If I instead link the texture to the second display, then it displays fine on the second monitor, but the first monitor is all black. I’ve tried using an off-screen window, and the function CopyWindow. All lead to the same issue.
How could I achieve what I am trying to do? The computer runs Ubuntu 20.04 and one monitor uses DisplayPort, the other one HDMI. Of course I could just draw the stimulus twice, but the goal is to monitor what is being displayed to an animal on the first monitor, and I can’t use a splitter because of the different resolutions.
Thanks,
Baptiste
Details about the graphics and display hardware in use are missing. What graphics hardware is used? And is upgrading to Ubuntu 22.04 an option?
Update: I have thought about this general problem quite a bit lately. It is not as trivial as one would think, and it is of general interest, as the problem has come up more than once. After quite a bit of tinkering, I came up with a few new additional improvements. If you are still interested in a solution, I will provide advice for free - well, sponsored by Mathworks, using the 2nd of three Mathworks-sponsored support incidents for non-trivial issues faced by users: Neuroscience - MATLAB and Simulink Solutions - MATLAB & Simulink.
-mario
Ok, the original poster tuned out. But there is a related question from Tarana in Linux (UBUNTU) + AMD - which card to choose? - #7 by Tarana, so let’s continue that conversation here…
- Ok, I assumed all ViewPixx displays have that connector. Looking at the specs, the EEG variant seems to be a bit of a “light” edition with more limited functionality. An active hardware splitter might work, given that both monitors have the same resolution and refresh rate, but I don’t have any practical experience with that specific setup, so no guarantee.
As far as pure software solutions go:
Ok, so the resolution and refresh rate of the console monitor is the same as the Viewpixx, and given that your AMD card uses DisplayPort connectors to drive both monitors, it is conceivable that the display driver will synchronize the video refresh cycles of both monitors if it considers them synchronizable. What model of AMD card is this?
Wrt. xorg.conf, you’d use XOrgConfCreator to create a dual-X-Screen setup, where you assign both monitors (DisplayPort-1 and DisplayPort-2) to X-Screen 1. Then the usual XOrgConfSelector, logout + login.
Then you could test whether the video outputs are auto-synchronized by running GraphicsDisplaySyncAcrossDualHeadsTestLinux. If that works, then it might work on Ubuntu 20.04.5-LTS. Another visual test is PerceptualVBLSyncTest([],[],[],[],[],0,1); it should show a tear line somewhere in the middle of the screen, close to where the yellow line appears, and most importantly at the same vertical position on both monitors, i.e., not shifted or drifting. On older AMD cards (not AMD Ryzen integrated graphics or AMD Navi cards), one can also run GraphicsDisplaySyncAcrossDualHeadsTestLinux([],[],1) to let Psychtoolbox manually synchronize the video refresh cycles iff the AMD display driver didn’t do it automatically.
→ If the refresh cycles are synchronizable without drift, great, Ubuntu 20.04 will do.
→ If not, e.g., due to subtle hardware mode timing differences between the monitors, an upgrade to Ubuntu 22.04.1-LTS would be needed. Then you can rerun XOrgConfCreator and choose a new option under “Advanced settings…” by answering “Use AsyncFlipSecondaries mode for multi-display setups” with (y)es. This makes sure vsync is only used on the “primary monitor” of an X-Screen for proper timing, while all other connected non-primary monitors run unsynchronized. By assigning your “console monitor” as the secondary monitor and the Viewpixx as the primary monitor, you will get proper timing and performance on the Viewpixx and good enough quality, with possibly mild tearing, on the secondary console monitor.
You’d either have to assign the monitors (in XOrgConfCreator, or by the order in which they are plugged into the graphics card) such that the Viewpixx monitor becomes the primary monitor, or edit the xorg.conf file created by XOrgConfCreator to explicitly mark that monitor / video output as the primary monitor. E.g., if “DisplayPort-1” were connected to the Viewpixx, you’d change the relevant Monitor section with a text editor to contain Option "Primary" "true":
Section "Monitor"
Identifier "DisplayPort-1"
Option "Primary" "true"
EndSection
After reconfiguring and logout + login etc. the PerceptualVBLSyncTest function mentioned above lets you verify that the Viewpixx shows perfect pictures, whereas the console monitor might either also show perfect pictures, or show slight tearing.
Mirroring can then be done with the PsychImaging('AddTask', 'General', 'MirrorDisplayToSingleSplitWindow') method. Or even without that, by putting both monitors in a mirror configuration, e.g., by replacing Option "RightOf" ... in the xorg.conf Monitor sections with Option "Position" "0 0", so all monitors are placed on top of each other and show the same image. This is a tad more efficient for a pure mirror image. The next Psychtoolbox release will have some further enhancements to the mirroring functionality.
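For illustration, here is a minimal sketch of the PsychImaging method - my construction, not from this thread - assuming both monitors are assigned to X-Screen 1 and the oval is just a placeholder stimulus:

% Minimal mirroring sketch, assuming both monitors live on X-Screen 1:
PsychDefaultSetup(1);
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'MirrorDisplayToSingleSplitWindow');
% One fullscreen onscreen window spanning both monitors of X-Screen 1:
win = PsychImaging('OpenWindow', 1, 0);
% Draw once; the imaging pipeline replicates the image to the mirror half:
Screen('FillOval', win, 255, [100 100 300 300]);
Screen('Flip', win);
KbStrokeWait;
sca;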
-mario
Hi Mario! Thanks for your quick response!
I will give GraphicsDisplaySyncAcrossDualHeadsTestLinux a try with the new XOrg file you are suggesting, and get back to you on whether this already works well on Ubuntu 20.04.5-LTS or not. I might be a bit late in responding because I am off to a conference next week.
Tarana
Hi Mario.
I was simply waiting for my university to process payment for paid support (key below).
The computer I use has an NVIDIA GeForce GTX 1660. Below are details from the nvidia-settings GUI as well as the xorg.conf file. My screens don’t all have the same resolution, so I’m not sure I can use mirroring. Synchronization is also not critical for us. Basically, one monitor is inside an animal setup, and the other monitor just needs to display a copy of the animal display so that we can double-check everything is working fine.
# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 440.64
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 440.82
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0" 0 0
Screen 1 "Screen1" 1680 0
Screen 2 "Screen2" 3600 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
Option "Xinerama" "0"
EndSection
Section "Files"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "HannStar Display Corp Hanns.G Hi221"
HorizSync 24.0 - 94.0
VertRefresh 56.0 - 76.0
Option "DPMS"
EndSection
Section "Monitor"
Identifier "Monitor1"
VendorName "Unknown"
ModelName "Ancor Communications Inc VG248"
HorizSync 30.0 - 83.0
VertRefresh 50.0 - 76.0
EndSection
Section "Monitor"
Identifier "Monitor2"
VendorName "Unknown"
ModelName "LG Electronics LG ULTRAGEAR+"
HorizSync 270.0 - 270.0
VertRefresh 40.0 - 120.0
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce GTX 1660"
BusID "PCI:1:0:0"
Screen 0
EndSection
Section "Device"
Identifier "Device1"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce GTX 1660"
BusID "PCI:1:0:0"
Screen 1
EndSection
Section "Device"
Identifier "Device2"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce GTX 1660"
BusID "PCI:1:0:0"
Screen 2
EndSection
Section "Screen"
# Removed Option "metamodes" "DP-0: nvidia-auto-select +0+0"
# Removed Option "metamodes" "DP-0: 1680x1050_60 +0+0"
# Removed Option "nvidiaXineramaInfoOrder" "DFP-2"
# Removed Option "metamodes" "DP-0: 1680x1050 +0+0"
# Removed Option "metamodes" "DVI-D-0: 1680x1050_60 +0+0"
# Removed Option "metamodes" "DVI-D-0: 1680x1050_60 +0+0, DP-0: nvidia-auto-select +3600+0, HDMI-0: nvidia-auto-select +1680+0; HDMI-0: nvidia-auto-select +0+0; DP-0: 1024x768 +0+0, HDMI-0: nvidia-auto-select +1680+0; DP-0: 800x600 +0+0, HDMI-0: nvidia-auto-select +1680+0; DP-0: 640x480 +0+0, HDMI-0: nvidia-auto-select +1680+0; DP-0: nvidia-auto-select +0+0 {viewportin=1440x900, viewportout=1680x1050+0+0}, HDMI-0: nvidia-auto-select +1680+0; DP-0: nvidia-auto-select +0+0 {viewportin=1366x768, viewportout=1680x944+0+53}, HDMI-0: nvidia-auto-select +1680+0; DP-0: nvidia-auto-select +0+0 {viewportin=1280x800, viewportout=1680x1050+0+0}, HDMI-0: nvidia-auto-select +1680+0; DP-0: nvidia-auto-select +0+0 {viewportin=1280x720, viewportout=1680x945+0+52}, HDMI-0: nvidia-auto-select +1680+0; DP-0: nvidia-auto-select +0+0 {viewportout=1680x945+0+52}, HDMI-0: nvidia-auto-select +1680+0"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
Option "Stereo" "0"
Option "nvidiaXineramaInfoOrder" "DFP-0"
Option "metamodes" "DVI-D-0: 1680x1050_60 +0+0"
Option "SLI" "Off"
Option "MultiGPU" "Off"
Option "BaseMosaic" "off"
SubSection "Display"
Depth 24
EndSubSection
EndSection
Section "Screen"
# Removed Option "metamodes" "DVI-D-0: 1920x1080_120 +0+0 {AllowGSYNC=Off}"
# Removed Option "metamodes" "DVI-D-0: nvidia-auto-select +0+0 {AllowGSYNC=Off}"
# Removed Option "metamodes" "DVI-D-0: 1920x1080_120 +0+0 {AllowGSYNC=Off}"
Identifier "Screen1"
Device "Device1"
Monitor "Monitor1"
DefaultDepth 24
Option "nvidiaXineramaInfoOrder" "DFP-0"
Option "Stereo" "0"
Option "metamodes" "HDMI-0: 1920x1080_60 +0+0 {AllowGSYNC=Off}"
Option "SLI" "Off"
Option "MultiGPU" "Off"
Option "BaseMosaic" "off"
SubSection "Display"
Depth 24
EndSubSection
EndSection
Section "Screen"
# Removed Option "metamodes" "DP-0: 3840x2160_60 +0+0 {AllowGSYNC=Off}"
# Removed Option "metamodes" "DP-0: 1920x1080_60 +0+0"
Identifier "Screen2"
Device "Device2"
Monitor "Monitor2"
DefaultDepth 24
Option "Stereo" "0"
Option "nvidiaXineramaInfoOrder" "DFP-2"
Option "metamodes" "DP-0: 3840x2160 +0+0; DP-0: 1920x1080_60 +0+0"
Option "SLI" "Off"
Option "MultiGPU" "Off"
Option "BaseMosaic" "off"
SubSection "Display"
Depth 24
EndSubSection
EndSection
P99FPLTD-2022117183231:576dccdccc8aa85e9a10b74f62c2f5658c534440828e759157c5962be9f74d2d
I see. Use of a display splitter is not possible, and sync of the monitors is not possible, due to the differing resolutions and also the different connector types. If your machine had a non-NVidia gpu, as recommended in our hardware recommendations, this would be solvable by upgrading the OS to Ubuntu 22.04-LTS, setting up a dual-X-Screen setup as described in the previous post, assigning both displays to X-Screen 1, and using that new ‘AsyncFlipSecondaries’ mechanism supported for open-source drivers on 22.04-LTS.
There’s a similar option listed in NVidia’s proprietary driver release notes that may work for you though:

- Set up the dual-X-Screen config anyway for mirroring, as described above (e.g., with Option "Position" "0 0", so both monitors are placed on top of each other and show the same image; maybe this can be done in their GUI utility as well), and see what happens with NVidia’s proprietary driver. Run PerceptualVBLSyncTest. If your stimulus monitor shows tear-free high-frequency flicker, whereas the control monitor shows tearing flicker, that would be what you want.
- If the wrong monitor of the two is tearing, you could try launching Matlab with the environment variable __GL_SYNC_DISPLAY_DEVICE set to the name of the video output you use for visual stimulation, to force sync and timing onto that one. E.g., for proper sync and timing on the monitor connected to output DP-0, which I think is the 4k “LG Ultragear” stimulus monitor if I read the info from your post correctly, launch Matlab from a terminal as:

__GL_SYNC_DISPLAY_DEVICE=DP-0 matlab

Then check with PerceptualVBLSyncTest again that the important monitor has proper sync, with the yellow beamposition marker lines clustering at the top of the screen, whereas the control monitor tears.
Please note that AMD gpu’s are recommended over NVidia for complex setups. The upcoming PTB 3.0.19 will have additional improvements for flexible display mirroring, but they probably only will work well with non-NVidia gpu’s like AMD and Intel.
Hope it helps,
-mario
Hi Mario.
To take a step back: why can’t I display the same texture on 2 monitors? I have done it in the past, in fact on 4 monitors connected to the same nVidia GPU, under Windows. So why doesn’t it work on this setup?
I grabbed whatever hardware I could find in the lab. I’m not against purchasing new hardware if it would solve our problem. Would a splitter help? I have been told they make synchronization unreliable.
Baptiste
I should mention that the current support I provide in this specific discussion thread is free to you, within reason, as I have assigned one of the “Mathworks support for non-trivial issues” jokers to this issue; in other words, Mathworks pays the bill, which by now would go substantially beyond 1000 Euros. This is because display mirroring is a topic of general interest, there was room for improvement, and some of it only became possible recently due to new developments in both Psychtoolbox and recent Linux distributions. This basically makes for a good test case for some new upcoming PTB 3.0.19 features and some features introduced a few months ago. So I won’t bill against your provided paid support token. That’s why I was surprised about the long silence on your side.
The short answer is: because of fundamental architectural design differences between the display servers of Linux/X11, Linux/Wayland, Linux/DirectDisplay, Windows XP and earlier, Windows Vista/7, Windows 8 and later, older macOS versions, newer macOS versions, macOS on Intel Macs, and macOS on Apple Silicon ARM Macs. Each OS and OS generation chose different designs, with different tradeoffs. And sometimes it additionally depends on open-source vs. proprietary display drivers, and on the graphics hardware vendor.
The long answer is “it’s complicated”: mostly, the biggest user-visible compatibility nightmares wrt. the different OS + version + hardware configurations show up when it comes to display mirroring, i.e., what we are talking about here.
In your specific case:
Linux/X11 with the “classic” XOrg X-Server, which is what Psychtoolbox on Linux uses by default, has the concept of X-Screens, as you know already. Traditionally and by design, the X-Server treats each X-Screen as an almost completely separate entity, logically isolated from every other X-Screen, almost as if each X-Screen were a physically separate computer. The only resource shared across X-Screens is essentially the mouse cursor. That’s why you can move the mouse cursor across all displays in a multi-X-Screen config. That’s also why you can’t actually move windows from the set of monitors of one X-Screen to the set of another X-Screen: windows are not a shared resource between X-Screens. That’s also why your desktop GUI only shows on X-Screen 0: the GUI and window manager don’t even know that other X-Screens exist, so they won’t display any GUI there, unless your Linux distribution and GUI are specifically configured to also run on other X-Screens, which is usually not the case on most Linux distributions. That’s also the reason why Psychtoolbox onscreen windows can’t span multiple X-Screens; each onscreen window is in its own cage. Psychtoolbox by default uses OpenGL for rendering and display, with each onscreen window having its own OpenGL context. Because of the resource isolation between X-Screens, an OpenGL context used for an onscreen window on one X-Screen cannot share any OpenGL resources with an OpenGL context used for an onscreen window on another X-Screen. Things like PTB textures, or the virtual framebuffers used for display mirroring, are such resources that can’t be shared, hence the failure of your original approach.
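To make the consequence concrete, here is a minimal sketch (my construction, not from this thread) of the draw-it-twice workaround, with one texture created per onscreen window, assuming the stimulus display is X-Screen 1 and the console display is X-Screen 2, as in your xorg.conf:

img = uint8(255 * rand(1080, 1920)); % placeholder stimulus image (assumption)
winStim = Screen('OpenWindow', 1, 0); % fullscreen window on X-Screen 1
winCons = Screen('OpenWindow', 2, 0); % fullscreen window on X-Screen 2
% A texture lives in the OpenGL context of the window it was created for,
% so a separate copy is needed per X-Screen:
texStim = Screen('MakeTexture', winStim, img);
texCons = Screen('MakeTexture', winCons, img);
Screen('DrawTexture', winStim, texStim);
Screen('DrawTexture', winCons, texCons);
Screen('Flip', winStim);
Screen('Flip', winCons); % note: the two flips are not synchronized
sca;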
On your specific recent version of Windows (Vista and later), the whole Windows desktop with all displays is one entity, so all windows are shared, and all OpenGL contexts across all displays can share resources, as long as only one physical graphics card is involved, which is usually the case. That’s why your Windows example worked for you.
By default, a standard Linux/X11 system will only use one X-Screen for all graphics cards and displays, so it behaves like Windows or macOS, and everything is pretty much “plug & play”, because that is a) convenient for users and b) good enough for most typical run of the mill computer use cases.
The multi-X-Screen config has the advantage that the resource isolation generally brings much better performance, low-level control, and reliability for special use cases like visual stimulation in neuroscience contexts. That is why Psychtoolbox uses multi-X-Screen for many multi-display paradigms, and also when users want a setup with an “experimenter GUI monitor”, where Matlab/Octave and the rest of the regular desktop GUI display, plus one or more visual stimulation monitors for the subject. Taking over a complete X-Screen with a single Psychtoolbox onscreen window in fullscreen mode reduces interference from the desktop GUI and the applications running there, and it allows enabling page-flipping for visual stimulation, which is needed for far superior visual timing precision and performance, and various other low-level aspects of visual stimulation. “Windowed” windows that don’t cover a full X-Screen will have degraded low-level control, performance, latency and especially timing precision, just as on other operating systems. Hence the setup with a second X-Screen for PTB visual stimulation in fullscreen mode, and X-Screen 0 for a GUI.
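As a minimal illustration of that difference (assuming X-Screen 1 is the stimulation screen):

% Fullscreen: rect argument omitted, window covers all of X-Screen 1,
% so page-flipping and precise timing are possible:
win = Screen('OpenWindow', 1, 0);
% Versus a "windowed" window with an explicit rect and degraded timing:
% win = Screen('OpenWindow', 1, 0, [0 0 800 600]);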
On MS-Windows 8 and later, any kind of multi-display setup is inherently unreliable wrt. visual stimulation timing, again due to various design decisions made by Microsoft since Windows Vista - different tradeoffs - and the lack of the logical resource isolation that Linux provides with its separate X-Screens. E.g., if you ever had a multi-display setup under Windows 8 or later, with one monitor showing the desktop GUI with Matlab or Octave, and you ever ALT+Tabbed while a Psychtoolbox session was running, or you even did a single mouse click on any window on that GUI monitor, e.g., on any element of the Matlab window, or on any window opened on the desktop GUI while your experiment session was running, you can be 100% sure that your visual stimulation timing was broken from that moment on for the rest of your Psychtoolbox session, with timing errors in the +/- 50 msecs range, sometimes systematic, sometimes non-systematic, sometimes broken in different ways depending on stimulus condition, hardware, OS version and exactly how your script was coded, introducing the worst kind of bias, often subtle enough that an experimenter won’t notice the disaster happening. PTB may or may not have detected this and spewed some error or warning messages, but the false-negative rate of PTB’s diagnostics on Windows or macOS is way worse than on Linux (where it is close to zero), so you may simply have collected lots of data trash during such a data collection session. I won’t even talk about the other popular vision science packages I know of, which have far worse coping mechanisms and diagnostics for detecting such problems…
There are many other failure cases avoided by X-Screen isolation. In practice I would not trust any multi-display setup under Windows or macOS for a second to do the right thing consistently…
There are many more advantages to the isolated multi-X-Screen setups, too long to mention here, but display mirroring is one of the few cases that is a bit tricky and involves various tradeoffs and hoops to jump through.
Apart from all this, a problem with display mirroring on any operating system is that if you don’t have perfectly synchronizable display monitors, you can only get proper performance and precise/trustworthy visual timing on at most one display monitor, and even that “at most one monitor” is not guaranteed, depending on the specific implementation details of the operating system, OS version, graphics card and graphics card device driver in use.
What you want is for the visual stimulation monitor (i.e., the subject’s display) to use vsync for Screen('Flip'), but for all other mirror monitors / experimenter feedback monitors / console monitors to not use vsync and instead tear, because that avoids timing interference from the unimportant displays, while usually still giving good enough image quality - with a bit of flicker or tearing - for pure experiment monitoring purposes. This requires an OS + display driver + graphics hardware combination that allows you to control which display / video output gets proper timing and vsync, and which other display(s) don’t. On Windows or macOS, the amount of control over this varies across system configurations, so you may or may not get lucky for any given setup.
On Linux the situation is more predictable, in the sense that I know how different generations and types of drivers behave, and was involved in the implementation of some of their relevant functionality in the past, and therefore I can make improvements there - but not on the other, proprietary operating systems.
This Linux advantage mostly only holds for Linux with open-source graphics and display drivers, in other words AMD, Intel, Broadcom, … NVidia hardware is the odd naughty outlier again: at the moment we only have high quality / high performance open-source drivers for older generations of NVidia hardware (GeForce 700 “Kepler” and earlier, from before the year 2014); performance falls off a cliff with GeForce 800/900’ish “Maxwell” from around 2015 and later. This is mostly NVidia’s fault, nothing one could do about it. Recently NVidia made some steps in the right direction by starting to open up at least their kernel driver, but not the user-space drivers, which remain proprietary, and only for their latest generations of hardware (GeForce GTX 1650 “Turing” and later, from late 2018), and only with limited alpha-quality drivers so far. This is a step in the right direction, but it will probably take multiple years to translate into meaningful improvements for neuroscience use cases. And the gap between Kepler and Turing will probably remain forever. With reasonably modern NVidia hardware you depend on the proprietary driver for many or most use cases to get good functionality and performance, and the proprietary driver is mostly a black box which often deviates significantly in behaviour from other Linux graphics drivers. All to say, you are almost always better off with Linux, but how much better off you are with an NVidia graphics card + NVidia proprietary driver on Linux is highly dependent on specific details of your setup and specific paradigm. And in case of trouble, little can be done to help, almost as bad as on Windows or macOS. Hence my general advice against NVidia hardware, at least for the foreseeable future.
Now, as I said, I made improvements specifically for typical neuroscience display mirroring scenarios for Linux/X11 with open-source display drivers. These require a Linux distribution with X-Server version 21 or later, e.g., Ubuntu 22.04 LTS. Backporting driver support for AMD gpu’s to X-Server 1.20 or Ubuntu 20.04.5-LTS would be possible, but upgrading to Ubuntu 22.04-LTS is easier and brings other advantages, especially given that various PTB improvements already require Ubuntu 22.04.
With your setup, my advice from the previous post will probably work though and solve your specific problem on your specific setup without need for upgrade.
In the “news” section, upcoming Psychtoolbox 3.0.19.0 will have mirroring improvements for all operating systems, although the biggest advantage is to be had on modern Linux with open-source drivers, ie. non-NVidia, again.
On Linux/X11 one will be able in some cases to use PTB’s rather new Vulkan display backend, which combines some of the advantages of multi-X-Screen mode with some of the advantages of standard single X-Screen / MS-Windows style setups. One will be able to exclusively take over a single video output / display for subject visual stimulation, while at the same time having even a windowed window for a mirrored stimulus image on the desktop GUI or fullscreen on a separate monitor. Performance of the approach as tested on AMD and Intel gpu’s is pretty good. However, the Vulkan display backend, due to its relative youth, atm. has some other functional limitations and tradeoffs, so this is not always usable, but probably in some common cases.
Also the Vulkan display backend needs a high quality Vulkan driver for direct display mode, and again NVidia’s proprietary offerings here are not great at all, highly unreliable, with no rhyme or reason wrt. when it works and when it doesn’t.
I haven’t had access to a display splitter in over six years, but I’d assume that many of them will only work if your console monitor has the same resolution as the main monitor.
Wrt. purchasing a new graphics card, I’d try the approach I outlined in my previous post first. I just tested on my GeForce GTX 1650 setup, with very similar hardware to your GTX 1660, and contrary to what NVidia’s proprietary driver documentation states, using __GL_SYNC_DISPLAY_DEVICE didn’t work at all: the setting was ignored and my setup always synchronized to the wrong HDMI-connected 60 Hz low-resolution monitor instead of the wanted DisplayPort-connected high-resolution 144 Hz monitor. Probably a driver bug in the proprietary driver installed on my Ubuntu 20.04.5-LTS machine, so you might get luckier on your specific hardware+software setup. Or you might not need that setting at all for your setup. There’s nothing to be done if it doesn’t work on the proprietary driver, but it is worth a try.
Otherwise, a modern AMD graphics card and an upgrade to Ubuntu 22.04.1-LTS would be the best option to make things work. In theory, just upgrading to 22.04.1-LTS but using the open-source nouveau driver instead of NVidia’s proprietary driver would also likely work, but with the open-source driver you’d suffer a 95% performance loss on the GTX 1660. Non-demanding visual stimulation paradigms might still be workable with only 5% of the nominal performance of your gpu if you’d be lucky.
-mario
Hi Mario! Your descriptions in the above message were very helpful and useful!
I tried some of the things you suggested. I can now “in principle” see a mirrored screen when I log on. But I can’t sync the two monitors (most likely caused by XOrg). The ViewPixx/EEG is set at 120 Hz, but as soon as I log on to a user with the XOrg settings, the second console monitor (only) drops to 60 Hz.
I checked that the monitor is certainly capable of 120 Hz - in fact, when I log on using Wayland, I see 120 Hz. So I’m afraid Xorg caps the third DisplayPort output at a 60 Hz refresh rate on boot.
I tried Screen('ConfigureDisplay', 'ScanMode', 1, 1, 1920, 1080, 120) to change the refresh rate. It runs, but when I run Screen('ConfigureDisplay', 'ScanMode', 1, 1) again to check if the change actually happened, I see an error that this screen doesn’t exist.
I can’t use xrandr because for some reason it doesn’t detect anything more than DisplayPort-0. (Which is odd.)
Do you know if there’s another way to force the refresh rate to 120 Hz, e.g., through some EDID file? Or something that fixes it at the startup of XOrg?
Thank you so much again for your help!
Can you show me the output of ResolutionTest(1) in Matlab and/or xrandr --screen 1 in a terminal? Actually, maybe run xrandr --screen 1 --verbose > output.txt and send the whole output.txt file? Also, what is the model of the gpu, e.g., the “PTB-INFO: OpenGL-Renderer is …” output line?
→ It is ‘Scanout’, not ‘ScanMode’ - it can’t be a typo on your side, no? It should work just fine.
That’s because of that whole “each X-Screen is its own little world” property mentioned in the previous post. You only see the outputs assigned to X-Screen 0 by default, because you only see X-Screen 0 by default. xrandr --screen 1 [whateverothercommandsyouwant] would select X-Screen 1 for display and manipulation.
You could specify the “PreferredMode” option in the Monitor section. E.g., the Monitor sections for your video outputs could look like this, assuming DisplayPort-1 is the Viewpixx/EEG and DisplayPort-2 is the console monitor:
Section "Monitor"
Identifier "DisplayPort-1"
# Viewpixx is primary output:
Option "Primary" "true"
EndSection

Section "Monitor"
Identifier "DisplayPort-2"
# Console monitor mirrors the image, forced to a 1920x1080 at 120 Hz mode:
Option "Position" "0 0"
Option "PreferredMode" "1920x1080"
# Artificially restrict the range of allowed refresh rates to 119-121 Hz if commented in:
#VertRefresh 119.0 - 121.0
EndSection
The X startup default is whatever the monitor reports as its preferred resolution and refresh rate, if I remember correctly. In the xrandr --screen 1 output, that would be the mode marked with a + sign, the monitor’s preferred mode. If the console monitor prefers 60 Hz, that would be it.
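For illustration, a hypothetical xrandr --screen 1 output excerpt might contain mode lines along these lines, with * marking the currently active mode and + the monitor’s preferred mode:

DisplayPort-2 connected 1920x1080+0+0 531mm x 299mm
   1920x1080     60.00*+  120.00
   1280x720      60.00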
xrandr --screen 1 --output DisplayPort-2 --mode 1920x1080 --rate 120
should do the same as
Screen('ConfigureDisplay', 'Scanout', 1, 1, 1920, 1080, 120);
If that PreferredMode setting in xorg.conf doesn’t work, one could also try to artificially restrict the range of valid refresh rates so the server has to choose 120 Hz, e.g., by commenting in the ‘VertRefresh’ line shown above. But I’d first try without it.
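A small sketch (my assumptions: X-Screen 1, with output index 1 being the console monitor) of setting the mode from within Matlab with the correct ‘Scanout’ subcommand and reading it back to verify:

% Set console monitor on X-Screen 1, output 1, to 1920x1080 at 120 Hz:
oldSettings = Screen('ConfigureDisplay', 'Scanout', 1, 1, 1920, 1080, 120);
% The query-only form returns the current scanout settings for inspection:
newSettings = Screen('ConfigureDisplay', 'Scanout', 1, 1);
disp(newSettings);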
Hi Mario,
Thanks again for your super quick response and your incredible help!
I managed to mirror the screen by running xrandr --screen 1 --output DisplayPort-2 --mode 1920x1080 --rate 120! This looks quite promising. I haven’t yet done extensive timing tests, but a quick 7-minute experiment run with visual display of images leads to roughly 50 frame skips out of roughly 50000 frames, which is a tiny bit more than what I used to have with only 2 screens. Does that sound about right? Or do you think there is much more to optimize in terms of performance? I will do in-vivo electrophysiology with NHPs, so the timing needs to be very precise.
I will try to do some more tests for stimulus presentation and triggers tomorrow and get back to you if this setup is something that works well.
This refresh rate of course doesn’t persist once I log out and back in. It would be good to know if there is a way to add this command to the XOrg setup or something of that sort, instead of having to run the xrandr command every time I log in.
I also tried to change the xorg.conf file with PreferredMode and VertRefresh, but that does nothing after logging out and back in.
Here’s the info you asked for:
PTB-INFO: OpenGL-Renderer is AMD :: Radeon RX Vega (VEGA10, DRM 3.42.0, 5.15.0-52-generic, LLVM 12.0.0) :: 4.6 (Compatibility Profile) Mesa 21.2.6
PTB-INFO: This is Psychtoolbox-3 for GNU/Linux X11, under Matlab 64-Bit (Version 3.0.17 - Build date: May 14 2021).
PTB-INFO: OS support status: Linux 5.15.0-52-generic Supported.
The output.txt is the file generated before I changed the refresh rate to 120 Hz and the output_new.txt is after I changed to 120Hz on the console monitor.
I will try to give some information on the timing either tomorrow or when I am back from a conference in two weeks.
Hope these tests could be useful for others.
Thanks again !
Tarana
(Attachment output.txt is missing) [I got an email saying that I can only attach jpg, png etc ]
(Attachment output_new.txt is missing)
Screen('ConfigureDisplay', …) should have worked just as well as the xrandr command, and in fact works fine on my test setup. But your Psychtoolbox is very outdated; v3.0.17 has been end-of-life for quite a while, so maybe there are some bugs in there. And without the debug logs I can’t tell you why it would not work.
Wrt. the 50 skipped frames, that sounds like too much. I just ran VBLSyncTest(50000) for 50000 frames on a dual-display setup with 2560x1440 resolution on each monitor, one running at 144 Hz, the other at 100 Hz, under Ubuntu 20.04.5-LTS, same kernel etc., with an AMD Ryzen processor-integrated gpu that is substantially less performant than your Vega gpu, and had 0 dropped frames. This, however, with the 100 Hz “console” monitor in non-vsynced mirror mode.
Just because you managed to set both monitors to 120 Hz doesn’t mean they are synchronized, and their video refresh cycles drifting against each other could easily cause many skipped frames. Both monitors having nominally matching resolution and refresh rate is a necessary but not sufficient condition for proper sync. That’s why we had this whole GraphicsDisplaySyncAcrossDualHeadsTestLinux stuff I mentioned before. You also need matching low-level mode timings, as provided by matching EDID info for both monitors - which often means that both monitors must be the same model from the same vendor, maybe even from the same batch, or one has to modify EDID info or low-level mode timings to match. Often one needs identical video connections, e.g., both DisplayPort or both HDMI. If everything fits and your AMD gpu is modern enough, the Linux display driver will auto-sync the displays. On an AMD Vega, Psychtoolbox itself may be able to do such a thing manually, as mentioned before, but I never had access to a Vega gpu for testing this and never got testing feedback from users who have a Vega gpu.
That said, for your use case of display mirroring you don’t need synchronized video refresh across monitors, just the ability to disable vsync on the console monitor. The normal way to do this is to upgrade to Ubuntu 22.04.1-LTS and run XOrgConfCreator to get the “AsyncFlipSecondaries” option enabled.
Because it was easy enough, I just compiled a version of the amdgpu driver version 22 for Ubuntu 20.04.5-LTS with X-Server 1.20.13 and uploaded a zip file under:
https://github.com/Psychtoolbox-3/MiscStuff/raw/master/amdgpu-ddx-22.0.0_ForUbuntu22.04.zip
The zip file contains a driver file for manual installation, which will bring the “AsyncFlipSecondaries” option to modern AMD graphics cards like yours under Ubuntu 20.04.5-LTS. See the included Readme file. Note that this comes without any warranty, and I do not intend to provide any maintenance for this driver file, not even in case of critical security bug fixes or similar - it is “as is”, take it or leave it. As this is not a proper Debian package, installation is hackish, just overwriting Ubuntu’s original driver file, but I won’t bother with a proper installation package.
The proper, clean way really would be for you to upgrade to Ubuntu 22.04-LTS, as it contains a host of other improvements beyond this mirroring stuff, but now you have a hacky option as well, if you want. This requires manually adding another option to the Psychtoolbox-generated xorg config file, this time in the “Device” section for “Card1”, before or after the ZaphodHeads lines, like:
Option "AsyncFlipSecondaries" "on"
Running PerceptualVBLSyncTest with this config should show nice fast black-white flicker without tearing on the Viewpixx, and massive tearing on the console monitor. In this case it doesn’t really matter at what refresh rate the console monitor runs; tearing will just be worse the more the refresh rates diverge. Important is that either the console monitor runs at a lower resolution than the Viewpixx, or, in case of matching 1920x1080 resolution, that the
Option "Primary" "true"
is specified in the Monitor section for the Viewpixx, to make sure the Viewpixx gets proper vsync and the console monitor gets tearing, instead of the other way around or some random choice. The method for deciding which monitor gets proper vsync and timing in “AsyncFlipSecondaries” mode is:
- The one with the highest resolution gets vsync; other displays tear.
- If multiple displays have the same maximal resolution, the one which is the designated primary output gets vsync, and the other displays tear. Option "Primary" "true" assigns primary output status to a monitor, as would an
xrandr --screen 1 --output DisplayPort-1 --primary
command to define DisplayPort-1 as the primary output.
-mario
Hi Mario.
Thanks for your very detailed answer. This “double texture trick” was supposed to be an easy way to mirror the display, but apparently this will require more work than I thought. I tried to link 2 of the 3 monitors to the same X-Screen without success. What would the xorg.conf file look like?
As this is a currently used NHP setup, I’m a bit worried about messing with it too much. I might just purchase a monitor with the same resolution and give a splitter a try.
Baptiste
The xorg.conf you have, with X-Screen 0 for the GUI with one experimenter monitor and X-Screen 1 for PTB with the stimulus monitor + “mirror” monitor, should be fine. XOrgConfCreator can also usually create a similar xorg.conf.
The important thing is to sync OpenGL to the right stimulus monitor output, i.e., make the stimulus monitor have good timing and vsync while the mirror monitor tears. For that you need to test the __GL_SYNC_DISPLAY_DEVICE method I mentioned to you previously. Did you try that? And verify with PerceptualVBLSyncTest? Or see if swapping the ports your monitors are plugged into does the same. On my quick test setup, the NVidia driver ignored __GL_SYNC_DISPLAY_DEVICE and always synced to the HDMI-connected “mirror” monitor instead of the wanted DisplayPort-connected “stimulus” monitor. So swapping which monitor is plugged into which output port could also work, if the NVidia driver gets it wrong consistently. It might be, though, that HDMI would not provide enough bandwidth for 3840x2160 at higher refresh rates.
Or swap to an AMD gpu and then do any of the things I explained wrt. AMD gpu’s to Tarana. That would almost certainly solve the problem perfectly, either by upgrading to Ubuntu 22.04.1-LTS, or by using the custom-built amdgpu-ddx driver on Ubuntu 20.04, which I mentioned in my most recent post.
That’s one option you could try if you can get exactly the same model and vendor of monitor as the stimulation monitor. Just a monitor of the same resolution may not necessarily be enough. However, this is also likely the most expensive and inflexible option.
-mario
I’ve at least used DisplayPort splitters with a 120 Hz Display++ and a general-use 120 Hz display without any obvious timing issues, but this was following Mario’s advice regarding setup with an AMD GPU and up-to-date Linux.
Hi Mario, just for clarification: XOrgConfCreator adds the AsyncFlipSecondaries line to both Card0 and Card1, and Option "Primary" "true" is not added at all. Is that OK, or would it be better to target this only to the desired card / screen?
Here is my xorg output under 22.04.1 with an AMD card:
Section "Device"
Identifier "Card0"
Driver "amdgpu"
Option "VariableRefresh" "off"
Option "AsyncFlipSecondaries" "on"
Option "ZaphodHeads" "DisplayPort-0"
Option "Monitor-DisplayPort-0" "DisplayPort-0"
Screen 0
EndSection
Section "Device"
Identifier "Card1"
Driver "amdgpu"
Option "VariableRefresh" "off"
Option "AsyncFlipSecondaries" "on"
Option "ZaphodHeads" "DisplayPort-1"
Option "Monitor-DisplayPort-1" "DisplayPort-1"
Screen 1
EndSection
The AsyncFlipSecondaries option is a per-X-Screen - and therefore per-card - setting. It could be that, due to lack of time, I didn’t implement support in XOrgConfCreator to select it per screen, so a ‘y’es will add the option to all cards/screens. You can manually delete that option, or set it to ‘off’, for X-Screens where you don’t want it, e.g., Screen 0 for the desktop GUI.
Option "Primary" is not implemented in XOrgConfCreator yet - it needs to be manually added to the proper Monitor section. But it is also always possible to assign primary status at runtime with an xrandr --screen <screennumber> --output <outputname> --primary command. The Primary option acts as a tie-breaker if multiple outputs/displays have the same resolution. If one display has a higher resolution, then that one always wins the vsync battle; said otherwise, if your “mirror” console monitor has a lower resolution than the stimulus monitor, then assigning primary is not needed.
In an ideal case we’d have an independent parameter to always force vsync to a specified output, not just use primary as a tie-breaker, so one could also force vsync onto a lower-resolution display. But there wasn’t enough time to implement that in X-Server 21 before the new-feature cutoff deadline last year, so I went for getting “good enough in most cases” into the server - and thereby into Ubuntu 22.04-LTS - in time, instead of “perfect” not at all due to missing the deadline. Time constraints everywhere, many caused by the lack of funding, so I often have to prioritize less important stuff that brings in urgently needed money over what would be the most useful thing to do.
Thank you Mario for implementing, good enough is much better than not at all!!!
Hi Mario.
So using XOrgConfCreator I was able to generate the correct xorg.conf. By default, the high-res monitor (3840x2160) and the “mirror” monitor (1920x1080) were linked to Screen 1 next to each other (the bigger one first). I tried opening 2 windows, specifying the coordinates of the windows explicitly (so [0,0,3840,2160] and [3840,0,5760,1080]), and ran PerceptualVBLSyncTest. Both monitors were flickering, but I could see no tearing. What exactly am I supposed to look for?
I could now display a texture on the big monitor and a scaled copy on the small. I’ll try to run some tests in the coming weeks to see if that solution would work for us, but I unfortunately don’t have much time to spend on that.
Also, I assume that with both monitors linked to the same screen I couldn’t run them at different FPS?
Thanks,
Baptiste
The way you test this on your setup is to run PerceptualVBLSyncTest without any parameters. The stimulus monitor should show nice fast tear-free flicker, whereas the mirror monitor should show tearing. Otherwise, try the __GL_SYNC_DISPLAY_DEVICE method as explained before, to try to get the stimulus monitor to be good while the mirror monitor tears. If none of my previous advice gets you this config, then it’s game over with the NVidia gpu due to NVidia proprietary graphics driver bugs, and you could only try upgrading/downgrading the driver in the hope that that fixes it, or switch to an AMD gpu, or try your more expensive/inflexible display splitter approach.
If it works, then you need one stimulus window in fullscreen, not two separate ones, as explained in previous replies: you can create exactly one fullscreen onscreen window on a given X-Screen that displays on / covers both monitors; otherwise page-flipping for precise/trustworthy timing/timestamping won’t kick in → unreliable/untrustworthy timing.
You always open 1 fullscreen window, and then:

- Either set things up so that both monitors mirror/show the same area of the window, instead of being located next to each other, by editing the created xorg.conf, replacing Option "RightOf" ... in the xorg.conf Monitor sections with Option "Position" "0 0", so all monitors are placed on top of each other and show the same image. You can also use Screen('ConfigureDisplay', 'Scanout', 1, 1, [], [], [], xStart, yStart); in a PTB script to do the same at runtime, placing the 2nd “mirror” monitor at (x,y) offset (xStart, yStart), e.g., xStart = yStart = 0, or any more convenient offset, before opening the onscreen window (see the sketch after this list).
- Or keep the current arrangement and use the PsychImaging('AddTask', 'General', 'MirrorDisplayToSingleSplitWindow') method. This uses a bit more gpu processing, but will allow potentially interesting extra features with upcoming PTB 3.0.19, like putting an overlay on top of the mirror image to visualize additional stuff; e.g., some people like to overlay some eye-tracking data over the mirror image, or similar feedback.
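A minimal sketch of the first option done at runtime (my assumptions: X-Screen 1, with the “mirror” monitor as output index 1):

% Move the mirror monitor to (0,0) so it overlaps the stimulus monitor:
Screen('ConfigureDisplay', 'Scanout', 1, 1, [], [], [], 0, 0);
% Now a single fullscreen window covers both (overlapping) monitors:
win = Screen('OpenWindow', 1, 0);
Screen('Flip', win);
sca;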
If the approach laid out here works, you can run the monitors at different refresh rates. The stimulus monitor will show proper images with proper timing; the mirror monitor will tear anyway, regardless of whether the refresh rates match or not.
-mario