Multi-display setup difficulty on Linux laptop

I’m trying to get a multi-display setup working on a Dell / Alienware Linux laptop.

The graphics card is an AMD RX 6700M / Rembrandt, and the OS is Ubuntu 22.04 LTS (what it came installed with). The CPU is an AMD Ryzen 7 6800H. I have read through the “HybridGraphics” documentation, and I used the following command to install the low-latency HWE kernel:

sudo apt install --install-recommends linux-lowlatency-hwe-22.04
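
(After a reboot, I assume the running kernel can be verified with something like:

uname -r

which should report a version ending in “-lowlatency”.)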

My X-Server version seems to be new enough, and so does Mesa (though I infer that since this laptop has an AMD chip, that’s not relevant). Thus I think my use case corresponds most closely to the section “Laptops with dual AMD gpus AMD iGPU + AMD dGPU (“AMD Enduro” models)” in the HybridGraphics documentation, which suggests no specific reason why this system shouldn’t work, but of course it hasn’t specifically been tested, so there we are.

I will also note that I’ve got the secondary display plugged into the HDMI output port of the laptop. I do see the DisplayPort symbol next to a single USB-C port on the back of the laptop, but I haven’t tried using that output. I don’t have any USB-C to DisplayPort adapters on hand, but if it might solve my issue, I would be very happy to purchase one.

The steps I’ve taken so far: I’ve run XOrgConfCreator and selected TWO separate series of responses to the prompts presented (see below), but with both sets of responses I start by assigning: “X-Screen 0: eDP-1” and “X-Screen 1: HDMI-A-1-0”:

(1) Multi-screen setup, followed by saying “no” to the question “Do you experience weird issues with display arrangements and want me to try to fix it?” I then also say “no” to “Do you want to configure special / advanced settings?”. After this I use XOrgConfSelector to select the configuration, exit MATLAB, log out, log back in. When I log in I can see that the 22.04 “Squid” is still present on both displays, and indeed when I launch MATLAB and execute “Screen(‘Screens’)” it returns “0” (only a single display).

(2) Multi-screen setup, followed by saying “yes” to the question “Do you experience weird issues with display arrangements and want me to try to fix it?” I then say “no” to “Do you want to setup a 30 bit frame buffer…?”, “no” to “Do you want to allow use of so called VRR…?”, and “no” to “Use AsyncFlipSecondaries mode…?”. After this I use XOrgConfSelector to select the configuration, exit MATLAB, log out, log back in. When I log in, the secondary display tells me there is no input signal. When I launch MATLAB and execute “Screen(‘Screens’)” it returns “0” (only a single display). I also tried a variant of this set of responses in which I say “yes” to “…VRR…?”, and I get the same behavior as in (1): when I log out and back in, the squid is there and “Screen(‘Screens’)” returns “0”.
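
(Incidentally, I assume the number of X-Screens the server exposes can also be checked from a terminal, without launching MATLAB, with something like:

xdpyinfo | grep 'number of screens'

which I’d expect to report 2 for a working dual-X-Screen setup - assuming the x11-utils package, which provides xdpyinfo, is installed.)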

My best guess at this point, consulting the text that accompanies XOrgConfCreator, is that I need to manually set up a dual-X-Screen configuration following the “_SeparateScreensDualGPUIntelAndAMD” example. Is that a good idea? Is there anything else I should try? I suspect it should be possible to look at some log files, perhaps for X11, to try to understand what’s failing with one or both of the conf files created by XOrgConfCreator, but I’m not sure where to start. Any advice would be greatly appreciated! @mariokleiner, if you’d like me to buy another 1 or 2 Support Memberships, or to simply figure out a way to directly compensate you for any effort you might put into this, I will be very happy to do so. Just let me know.
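
P.S. Regarding log files, I’d guess that something like

grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log

would surface errors and warnings, though I’m not sure whether the log ends up there or under ~/.local/share/xorg/ on this system.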

Yes, you’d need to buy more paid support. Your current license is still valid afaics, although all paid-for work has been used up. You can buy the extra work hour package, which would normally cost 300 Euros + applicable tax per started 60 minutes, but thanks to your existing license you are eligible for a 25% discount on that, so currently 225 Euros + applicable tax. I’ll send you the discount code via private message. Such a package is only valid for this specific support issue.

The alternative is buying another regular membership, which can be reused for multiple issues within the usual 12-month period, but that only gives you 30 minutes in total for 200 Euros + tax, with no discount. We raised the prices recently, after our extra-cheap Christmas/winter discount was such a disappointing failure.

So assuming this issue takes more than 30 minutes to resolve, the discounted package will be much cheaper.

James already paid and sent various required log files, so let’s continue here:

That (the low-latency kernel install) was successful according to the logs.

The X-Server is the latest and Mesa should be fine. The Mesa version matters for everything except NVidia with the NVidia proprietary driver, as all the open-source drivers are part of Mesa.
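
A quick way to check the installed Mesa version and the active driver from a terminal - assuming the mesa-utils package, which provides glxinfo, is installed:

glxinfo -B | grep -E 'OpenGL (renderer|version)'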

Almost, but this is a special variant: two AMD gpus, but only the internal laptop panel seems to be mux-switchable between the iGPU and dGPU, whereas the external outputs are likely hard-wired to the dGPU. So the section does not quite apply. If you did not need the external outputs, you could treat the machine as such a muxless setup and use the instructions from that section to use the iGPU for display and the dGPU for on-demand rendering / render-offload. That would allow you to power down the dGPU when you don’t need the highest graphics performance, to increase battery runtime. But that’s not what you intend to do afaics; you want external displays.

The laptop seems to be a Dell Alienware m17 R5 with dual AMD gpus: one internal iGPU, booted by default and connected to the internal laptop eDP flat panel, and one dGPU connected to the external HDMI and USB-C/DisplayPort outputs. It appears there is a mux which can switch the internal eDP panel to be driven either by the iGPU or by the more powerful and power-hungry dGPU, with a default of using the iGPU. XOrgConfCreator can’t create xorg.conf files for dual-gpu setups with such a mixture of “some displays muxed” and “some displays hard-wired”. Therefore we need a custom xorg.conf, which I created from the info in the logs you provided.

The following custom xorg.conf file can be stored in /etc/X11/xorg.conf.d/, or preferably inside ~/.Psychtoolbox/XorgConfs/, where it can be found and picked up by XOrgConfSelector for convenient switching between the PTB-optimized dual-X-Screen config and the normal config that is more convenient for day-to-day use of the laptop for use cases other than visual stimulation (see the commands after the config below):

# Custom xorg.conf file for a dual AMD gpu system, a
# Dell Alienware m17 R5 with AMD Smart Access Graphics.
#
# This is a laptop with two AMD gpus: A Rembrandt /
# Yellow Carp iGPU muxed by default to the internal eDP
# laptop panel, and a Navi 22 / Navy Flounder Radeon RX
# 6700M dGPU which can be muxed to the eDP panel, but
# by default just drives the external HDMI and USB-C/DP
# outputs.
#
# This config will assign the iGPU with the internal
# laptop eDP panel to X-Screen 0 for the desktop GUI,
# and the AMD dGPU is used to drive the visual stimulation
# display on X-Screen 1 via the external video output(s).

Section "ServerLayout"
  Identifier  "Hydra-2XScreensOn2GPUs"
  Screen      0  "Screen0" 0 0
  Screen      1  "Screen1" RightOf "Screen0"
EndSection

Section "Device"
	Identifier  "Card0"
	Driver      "amdgpu"
	BusID       "PCI:54@0:0:0"
EndSection

Section "Device"
	Identifier  "Card1"
	Driver      "amdgpu"
	BusID       "PCI:3@0:0:0"
EndSection

Section "Screen"
	Identifier "Screen0"
	Device     "Card0"
EndSection

Section "Screen"
	Identifier "Screen1"
	Device     "Card1"
EndSection
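
For example, assuming the config was saved to a file named 90-dualxscreens.conf (the filename is just an illustration), it could be put in place like this:

mkdir -p ~/.Psychtoolbox/XorgConfs
cp 90-dualxscreens.conf ~/.Psychtoolbox/XorgConfs/

Then run XOrgConfSelector to select it, and log out and back in, so the X-Server restarts with the new configuration.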

What this does is assign X-Screen 0 to the iGPU and its sole eDP output, for displaying the desktop GUI (or PTB screen 0) on the laptop panel, and X-Screen 1 to the dGPU, for driving any externally connected stimulation display as PTB screen 1.
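
Once logged back in with this config active, a quick sanity check from a terminal - the two X-Screens appear as the display suffixes :0.0 and :0.1, assuming the default display number 0:

DISPLAY=:0.0 xrandr | grep ' connected'   # should show the eDP laptop panel
DISPLAY=:0.1 xrandr | grep ' connected'   # should show the external HDMI/DP output

And in MATLAB, Screen('Screens') should then return both screens, 0 and 1.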

According to the wisdom of Reddit, there may also be a way to disable dual-gpu mode on this laptop by going into the EFI firmware setup, where there might be a switch to disable hybrid graphics and just select the dGPU to drive everything. See
https://webcache.googleusercontent.com/search?q=cache:iFnrVgo3cEEJ:https://www.reddit.com/r/AMDLaptops/comments/rvx7id/amd_smart_access_graphics/&cd=12&hl=en&ct=clnk&gl=de&client=ubuntu

The option may be confusingly called “Optimus”, although that is an NVidia marketing term for dual-gpu setups. If your laptop really has such an option, you could turn it into a single-gpu laptop, which will only expose the powerful and power-hungry dGPU to the operating system and drive all displays with it. Then regular XOrgConfCreator could be conveniently used to set up everything, just as with any other computer that has only one graphics card.

I tend to think the latter option - single-gpu operation - might be the simplest in your case. If you want to use external displays, the dGPU needs to be powered up at all times to drive the external display anyway. In single-gpu mode, apart from the ability to conveniently use XOrgConfCreator, the system may also be able to power down the iGPU, saving a bit of power for a bit of extra battery runtime and extra coolness. Not only is that good for the environment and the electricity bill; it may also give some extra thermal budget to the main processor, allowing slightly higher cpu speeds if the iGPU is not drawing power or producing heat.

This has used up the 1 hour of paid-for work. Let me know how it works out.
-mario

Mario –

Method number 1 works! I’m glad this was an easy one. One question, though: why 54?

Looking at the example conf file for the Intel+AMD case, I tried including BusID in place of ZaphodHeads, and I did try “bus@domain” formatting, but why 54@ and not 36@?

Based on the output of “xrandr --listproviders”:

Providers: number : 2

Provider 0: id: 0x50 cap: 0x9, Source Output, Sink Offload crtcs: 1 outputs: 1 associated providers: 1 name:Unknown AMD Radeon GPU @ pci:0000:36:00.0

Provider 1: id: 0x85 cap: 0x6, Sink Output, Source Offload crtcs: 6 outputs: 2 associated providers: 1 name:AMD Radeon RX 6700M @ pci:0000:03:00.0

Or this line returned by “lspci”:

36:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt (rev c8)

I was sure 36 made sense.

Your reasoning is right, but hexadecimal 36 is decimal 54, and xorg.conf wants decimal :).
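
A quick way to double-check such a hex-to-decimal conversion in a shell:

printf '%d\n' 0x36    # prints 54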

The easy way is to look in the Xorg.0.log file for these lines → copy and paste:

[  1104.003] (--) PCI: (3@0:0:0) 1002:73df:1028:0b5d rev 207, Mem @ 0x7800000000/17179869184, 0x7c00000000/268435456, 0x98a00000/1048576, I/O @ 0x00005000/256, BIOS @ 0x????????/131072
[  1104.003] (--) PCI:*(54@0:0:0) 1002:1681:1028:0b5d rev 200, Mem @ 0x7c40000000/268435456, 0x7c50000000/2097152, 0x98700000/524288, I/O @ 0x00001000/256
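
For example, something like this should pull those lines out of the log, which usually lives at /var/log/Xorg.0.log, or at ~/.local/share/xorg/Xorg.0.log if the server runs rootless:

grep '(--) PCI' /var/log/Xorg.0.log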

As a little extra educational info for readers:

From the output of xrandr --listproviders, we can also make some educated guesses about how the laptop is wired up in its current configuration, or at least how the X-Server treats it - info that is important to get multi-gpu right, but usually completely absent from hardware vendor product specs:

Provider 0: id: 0x50 cap: 0x9, Source Output, Sink Offload crtcs: 1 outputs: 1 associated providers: 1 name:Unknown AMD Radeon GPU @ pci:0000:36:00.0

→ “crtcs: 1” → only one display engine enabled on the iGPU, attached to “outputs: 1” → one video output. Given that the laptop flat panel is what is most frequently used on a laptop, it makes sense that this is the iGPU → laptop eDP panel connection. “Sink Offload” means the iGPU can act as an image sink for a render-offload gpu, iow. one can render images on a powerful dGPU and display them on the iGPU’s display. This capability is what the section in “help HybridGraphics” refers to and explains how to set up for proper display with proper timing. The dGPU has the corresponding “Source Offload” capability. “Source Output” → the iGPU can also act as an output source for a dGPU with “Sink Output”, so one can get pictures from the iGPU displayed on the external video outputs via the dGPU. However, “Source/Sink Output” is usually useless for visual stimulation, as it won’t provide reliable high-precision timing and timestamping at all. It’s good enough for regular desktop use though, e.g., powerpoint presentations, office work, etc., where frame-accurate timing doesn’t matter.
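
To make these capabilities concrete, here is a sketch of how they are typically exercised on a Mesa/X11 system. The provider numbers are taken from the listing above, and as said, the output-sourcing path is not suitable for precisely timed stimulation:

DRI_PRIME=1 glxinfo | grep 'OpenGL renderer'   # render offload: render on the dGPU, display on the iGPU's panel
xrandr --setprovideroutputsource 1 0           # output sourcing: show the iGPU's desktop on the dGPU's outputs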

Provider 1: id: 0x85 cap: 0x6, Sink Output, Source Offload crtcs: 6 outputs: 2 associated providers: 1 name:AMD Radeon RX 6700M @ pci:0000:03:00.0

“crtcs: 6” → six display engines could drive up to six X-Screens or outputs, but only “outputs: 2” → two outputs are connected to the dGPU. These are likely the external HDMI and USB-C/DisplayPort outputs. If you connected a suitable laptop dock, those numbers might change. If the laptop’s mux were switched so that the iGPU is not connected to anything, but the dGPU is connected to the laptop panel, you might get three or more outputs there.

The mux can be switched at runtime - with some logout + hassle + login - or likely in the system firmware setup before booting.

This laptop line seems to have a new feature called “Smart Access Graphics”, which in theory allows dynamic switching of the mux at runtime, depending on the needs of the applications, to find a dynamically optimal performance/power-consumption tradeoff. NVidia introduced something similar a while ago as “Optimus pro” or something like that, and this seems to be what Apple did with their dual-gpu MacBook Pros for “seamless graphics switching”. However, as far as I know, Linux doesn’t support this dynamic switching yet, and I doubt it will ever come to a native X11 X-Server based system, given that most development focuses on the new Wayland display server architecture. It’s too bad that Wayland is still far behind / mostly unsuitable wrt. many features we really need for vision science applications, e.g., good timing.

If we had proper funding from our users, I’d have been working for years on pushing a Wayland transition, and the necessary work on Wayland itself, forward. But given that most of our users don’t engage in mid- to long-term thinking and don’t financially support us in any significant way, this is another major thing that will stay on the back burner for longer, although it should be a top priority. Therefore good old X-Server on Linux will continue to be the only existing solution for reliable non-trivial vision science, as clunky as some of its configuration is, given that MS-Windows is a poor and degrading choice and macOS is a disaster - even more so with the Apple Silicon Macs. At least as long as the X-Server is still maintained a bit.


Btw., partially motivated by an earlier issue of yours, and your explanations of how you use a 2nd display as a “mirror display” for monitoring by the experimenter, also employing some overlays: PTB 3.0.19 also has some improvements to display mirroring when combined with Ubuntu 22.04 and later, on suitable AMD and Intel gpus.

See the ‘MirrorDisplayToSingleSplitWindow’ section in “help PsychImaging”. This allows mirror mode with proper timing and performance, plus a convenient overlay for extra data (eye gaze, etc.). It goes together with that XOrgConfCreator advanced option about “AsyncFlipSecondaries”. If you wanted to try this on that laptop, e.g., external display for stimulation + laptop display for an experimenter mirror window with overlay, you’d need to run the machine in single-gpu mode, with everything on one X-Screen. So that’s another case where single-gpu mode can be useful.

Anyhow, this is a different topic; just thought I’d mention it here.
-mario

Thanks so much Mario! Very informative on both fronts. Besides purchasing paid support once or twice a year (and paying for more specific work hours, like in this case), is there anything else I can do to support you and PTB?