Current graphics card recommendations

Could anyone post current graphics card recommendations they have for Ubuntu PTB computers? For some time I was buying a Polaris-class WX3200, which worked very well and was recommended. It’s not easily possible to buy that one any more. I see cards like the W6600 available now in pre-configured Dell computers with Ubuntu (which I’ve been buying for PTB computers):

https://www.amd.com/system/files/documents/radeon-pro-W6600-datasheet.pdf

I also see the AMD W6400 and W6800, and many NVidia cards (RTX 2000, 4000, 4500, 5000, etc). My long-time preference has been AMD, which has been somewhat preferred by PTB. What are people currently using that is working well? Thanks!

In fact I was going to post about some new cards we have been using. I normally buy WX5100 (Polaris generation) cards, which always work reliably, but for some new systems I couldn’t spec them with these older pro cards, so I ended up with an RX6800 (RDNA2) and an RX7600 (RDNA3).

The RX6800 just worked perfectly according to the various sync tests and a PTB benchmark (ProceduralGarboriumDemo with an increasing number of gabors). It topped out at a crazy 27,000 animated gabors (54,000 total procedural textures, as the same number of blobs is drawn) before dropping frames, making it the fastest GPU I’ve tried with this test.
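
For reference, this is roughly how I run that benchmark. A minimal sketch only; I’m assuming here that ProceduralGarboriumDemo takes the number of gabors as its first argument (check "help ProceduralGarboriumDemo" for the exact arguments):

```
% Step up the gabor count until PTB starts dropping frames; each run ends on
% a key press.
for ngabors = [1000 5000 10000 20000 27000]
    fprintf('Running with %d gabors...\n', ngabors);
    ProceduralGarboriumDemo(ngabors);
end
```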

The RX7600 has not been so straightforward. It doesn’t seem to be supported by the standard kernel or Mesa release in Ubuntu 22.04, so you have to upgrade the kernel and/or use a later Mesa. Once that was done I still hit a few glitches around X, but those seemed to recover and at the moment it seems OK, though I didn’t test it as thoroughly as the RX6800.

These are consumer GPUs, whereas I previously got Radeon Pro GPUs. As I understand it, for PTB there should be no difference, e.g. 10-bit output or HDR modes should work similarly, but I haven’t done a photometric measurement of 10-bit mode on these cards yet…

If you don’t want to do too much fiddling, I think the latest AMD cards will work better once the next Ubuntu LTS is out; otherwise stick to the previous generation.

NVidia cards remain a mess, at least up to the RTX 30xx series. We have some (for machine learning, where CUDA remains king), but for PTB they still can’t reliably pass a sync test…

Thanks, much appreciated!

The recent Ubuntu 22.04.3 LTS update, as part of its hardware enablement stack, provides Linux 6.2 and Mesa 23.0.4, which according to Phoronix should support the RX7600 out of the box.

Apart from the visual timing tests, other good tests to run wrt. rendering precision are DriftTexturePrecisionTest, HighColorPrecisionDrawingTest and BitsPlusImagingPipelineTest. Those are some of the standard tests I run.
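
E.g., from the MATLAB/Octave prompt. Each of these takes optional arguments; see their respective help texts, this is just a minimal sketch with defaults:

```
% Standard PTB rendering precision checks; run one at a time and inspect the
% printed summaries / plots.
DriftTexturePrecisionTest;
HighColorPrecisionDrawingTest;
BitsPlusImagingPipelineTest;
```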

In general, the most recent AMD gpu tested by myself is the early-2019 Raven Ridge integrated processor graphics of an AMD Ryzen 5 2400G, part of a 499 Euro PC I bought privately in early 2019. That chip has older Vega 3D graphics accelerator hardware, but at least a modern DCN 1.0 display engine, which is similar to the DCN 2.x or 3.x engines of more modern AMD gpu’s, so hopefully the good results should transfer to all those later RDNA models unless somebody at AMD makes a mistake in driver coding. So far, according to user reports, the more modern parts seem to work well. We don’t have the money or time to buy and test any more recent hardware due to the severe lack of funding by our unsupportive users.

The main difference between the old Polaris/Vega parts with old DCE display engines and the newer Navi / RDNA parts with DCN display engines, from the PoV of PTB, is that PTB no longer supports low-level gpu hardware access tricks on the modern DCN parts to provide an extra safety net (“Airbags and Seat Belts” if you want) and consistency checks on top of Linux’s superior built-in checks in case of driver bugs. The method and complexity of display engine programming has increased with the transition from DCE → DCN display engines to a point where this is no longer practically feasible, especially not with the severe lack of funding for PTB. Different strategies would be needed, which are also impossible due to the severe lack of funding.

Luckily, the extra functionality previously provided by our low-level hw access on older gpu’s and drivers is now provided by the standard open-source gpu drivers themselves, and the no longer available extra safety net apparently hasn’t been needed so far for any of the modern DCN parts, as far as my testing on Raven Ridge and, mostly, user reports on the forum suggest.

Yep. Differences between pro and consumer cards are in things like longer manufacturer warranty, longer availability of replacement parts, certified compatibility with certain pro 3D animation/CAD/simulation software, maybe ECC VRAM. The open-source drivers, and mostly the hardware, are the same, so they should behave the same. Btw. on Linux with the optional amdvlk Vulkan driver installed, and the PTB Vulkan display backend in use, you can go up to 12-bit color output precision (dithered on 8 or 10 bit displays ofc., real on true 12-bit displays), one of PTB’s many unique features, on Linux + modern AMD only. Cfe. AdditiveBlendingForLinearSuperpositionTutorial.m with ‘Native16Bit’ display mode and useVulkan = 1.
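
If you want to set this up in your own script rather than via the tutorial, the PsychImaging setup looks roughly like the following sketch. It assumes the amdvlk driver is installed and should be double-checked against "help PsychImaging", as the exact task behaviour depends on gpu and display:

```
% Sketch: high precision output via the Vulkan display backend on Linux + AMD.
PsychDefaultSetup(2);   % normalized 0.0 - 1.0 color range
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'FloatingPoint32Bit');           % high precision drawing
PsychImaging('AddTask', 'General', 'UseVulkanDisplay');             % Vulkan display backend
PsychImaging('AddTask', 'General', 'EnableNative16BitFramebuffer'); % up to 12 bpc effective output
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0.5);
% ... draw your stimulus here ...
Screen('Flip', win);
sca;
```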

Following up on this discussion, are there any recent recommendations/good experiences with graphics cards that can handle at least 6 displays simultaneously?

We are about to upgrade our stimulation computer for MRI, and it should become a Linux machine (yay!).
The problem is that our setup for the MRI is quite complex and requires switching between different presentation devices, some of which are stereo with two independent screens.

We would also greatly appreciate any advice regarding other hardware specs for such a computer with multiple displays (RAM, motherboard, etc).

Many thanks in advance!

The Radeon Pro W6800 has 6 display outputs, each capable of 5K.

This should in theory have good driver support (it is RDNA2) with Linux + PTB (though probably untested by anyone so far, 6 displays is not a common requirement I suspect :sweat_smile: ).

You may also get away with a 4-output card and some display splitters (search for DisplayPort MST; but again this will be “test it and see” – lots of room for expensive mistakes, and I don’t know how PTB handles this). I did try two different GPUs at once in a PTB system, but had all sorts of problems getting both to work simultaneously, and gave up…

You’d need a PCIe 4.0 motherboard, so probably an AMD CPU would be preferable?

Hey Natalia! [We should probably Skype chat one of these days, I totally forgot {:/]

So the idea is to really simultaneously present stimuli on up to six monitors at the same time? Or more like one or two at a time, and you just want to avoid replugging monitors all the time or avoid using other display switching equipment?

I agree with Ian that a modern AMD gpu with six Displayport display outputs should be the way to go, on Ubuntu 22.04.3-LTS atm. In theory that should work well for multi-display setups in various configurations, creating up to six X-Screens via XOrgConfCreator etc. for six independent displays, or e.g., three X-Screens with two outputs each for up to 3 simultaneous binocular/stereo monitor pairs, or other combinations.
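
Once the xorg.conf from XOrgConfCreator is in place, the PTB side is simple. A rough sketch, where the X-Screen numbers (1 for a stereo pair spanning two outputs, 2 for a separate monitor) are just assumptions for illustration; query your actual layout with Screen('Screens'):

```
% One X-Screen spans the two stereo outputs side by side (stereomode 4 =
% left half for the left eye, right half for the right eye), another X-Screen
% drives an independent monitor.
winStereo = Screen('OpenWindow', 1, 0, [], [], [], 4);
winOther  = Screen('OpenWindow', 2, 0);

Screen('SelectStereoDrawBuffer', winStereo, 0);   % left-eye image
Screen('FillRect', winStereo, [255 0 0]);
Screen('SelectStereoDrawBuffer', winStereo, 1);   % right-eye image
Screen('FillRect', winStereo, [0 255 0]);

Screen('Flip', winStereo);
Screen('Flip', winOther);
sca;
```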

I think the maximum I ever tested on AMD was 4 or 5 displays at once, about ~12 years ago on a Radeon HD5770 iirc, for some “Holodeck for mice” setup. And then some tests with more recent gpu’s with 3 or 4 displays - there weren’t enough monitors in the Bartels lab at that time to test more at once :wink:. Right now testing more than three (with great contortions) is impossible for me due to lack of suitable hardware; even two at a time is a hassle. But given this stuff worked well in the past on AMD + Linux, one would hope it still does.

Currently AMD are the only ones making gpu’s for up to six displays afaik. NVidia tops out at 4, Intel at 3.

Wrt. multiple gpu’s at once under Linux: it is definitely doable and I did and do test this on various occasions. However, one does need to create specially crafted xorg.conf files - XOrgConfCreator is not prepared for this. You can find some sample hand-crafted xorg config files in Psychtoolbox/PsychHardware/LinuxX11ExampleXorgConfs/ which have been used/tested on various dual-gpu laptops or pc’s. However, PTB low-level gpu access only works for one gpu at a time. That doesn’t matter with recent Navi / RDNA AMD gpu’s, as PTB neither supports nor generally requires low-level access on these gpu’s with their next-gen DCN display engines: modern AMD gpu’s with modern drivers should be able to sync up DisplayPort-connected monitors, e.g., for synchronized dual-monitor binocular/stereo presentation, at least with the right settings, without need for low-level tricks.
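
Btw. you can list those shipped example configs from within MATLAB/Octave, e.g.:

```
% List the hand-crafted example xorg.conf files shipped with Psychtoolbox:
dir(fullfile(PsychtoolboxRoot, 'PsychHardware', 'LinuxX11ExampleXorgConfs'))
```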

Wrt. DisplayPort MST, I assume PTB doesn’t need to handle it; this is all a driver thing, iow. it should just work like any other standard DisplayPort connection. That said, I never tested DP-MST, as I’ve never seen/had access to a DP monitor with MST daisy-chaining.

Update: Stumbling over a DP-MST related discussion and checking the X-Server driver code, I think the naming of DP-MST displays could be shaky, as in: the names of connected MST display monitors may not be stable across machine reboots or system updates. This would pose headaches for multi-X-Screen setups, where you have to store a fixed output name in the xorg.conf file, which might become invalid on each reboot. In practice it might work, but there is nothing in the code to guarantee stable naming and operation. → Better avoid DP-MST if you intend to use multi-X-Screen setups.

But then, from what I’ve read, macOS does not support MST to this day, so MST and its naming seem to be general problems with that tech. Apparently the DP-MST spec has nothing helpful to say about persistent stable naming either…

Ofc. the devil is always in the details…

Thanks Ian, thanks Mario!

We appreciate your advice. We don’t want to use all 6 displays simultaneously; it is more about avoiding replugging stuff every time another experiment is running, as Mario said. But we do want to use 4 displays at a time.
I am happy to report back on our experiences once we set everything up. If you want us to run some specific tests, just let us know.

Cheers!

Makes sense, and should work fine, although ofc. I haven’t tested this in quite a while, and not on the latest hardware.

Btw. I don’t think you’d need a PCIe 4.0 capable mainboard to use that gpu; it is just that you won’t get the full data transfer speed possible with a PCIe gen 4 gpu if you plug it into an older-generation PCIe slot. But for typical paradigms this likely won’t matter.

What kind of scenario/setup do you have in mind when you want to use 4 displays at a time?

Good to know that we do not need a special mainboard, but we might nevertheless invest in one, since the slogan of our university is “we work for tomorrow” :slight_smile: . Who knows, we might need fast transfer speeds one day.

Right now we have NNL stereo HD goggles/periscopes (2 screens), an additional NNL in-room MRI display for looking at the eye camera signals while adjusting the stereo goggles (1 screen), and the control-room monitor of the stimulation PC (1 screen). We might want to keep another output for an external laptop, which would make it 5 screens in total. At this point I am not sure why our radiotechnologist counted 6 outputs, but then again, better too many than too few.

In terms of stimulus content, the most challenging thing we had so far was presenting video clip snippets. I am not sure though what other groups are doing and whether they need anything special in terms of graphics card. Nobody but me uses Psychtoolbox, so I may be the only one concerned with details.

Any further thoughts on this are greatly appreciated!

Just wondered if there is any more feedback on the functionality of the RX7600 following this update? My supplier has spec’d this card for a display computer I’m trying to organise and just wanted to ensure it was suitable before committing.

cheers

Ubuntu 22.04.3 can use the RX7600, although it gets a generic codename (GFX1102) when you query the GPU name. If you update Mesa (via the kisak or oibaf PPAs), then the GPU is named directly as an RX7600; but in both cases glxinfo -B shows hardware acceleration…

However, PTB has not yet been updated to support it as far as I can tell (Oct. 25th 2023 build); at least VBLSyncTest can’t use pageflipping, though it visually looks smooth. I assume it will be a simple enablement from @mariokleiner when he is able…
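
For reference, this is roughly how I checked; a minimal sketch, with the verbosity level raised so PTB prints its low-level diagnostics about how flips are executed:

```
% Raise PTB's diagnostic output, then run the sync test and watch the console
% for warnings about pageflipping not being used for Screen('Flip'):
Screen('Preference', 'Verbosity', 10);
VBLSyncTest;
```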

Does PerceptualVBLSyncTest also look visually smooth, or more like a visual tearing disaster?

Hm, PTB does not need updates for modern AMD gpu’s with the next-generation DCN display engines, ie. for anything from the Navi gpu family with RDNA* rendering architecture, or for the integrated graphics built into AMD Ryzen processors. Iow., anything released since roughly the year 2019. Neither could it do anything about problems, apart from detecting and reporting them. If pageflipping doesn’t work, then timing is toast and it is very unlikely PTB could do anything about it.

The way to fix this is diagnosing the problem/limitation/bug in the X-Server, the Linux amdgpu graphics and display drivers, or Mesa, then either finding a workaround or submitting a bug fix upstream, usually both, then waiting a few weeks or months for OS updates to come around. This is usually work- and time-intensive, but it is what I’ve been doing for many years, usually as part of pro-active testing/fixing many months before a piece of relevant software was released. Ofc., due to the severe lack of funding caused by the lack of financial support by our mostly indifferent users, this has become more and more impossible. E.g., absolutely no testing has been performed for the recently released Ubuntu 23.10, and right now I don’t foresee the ability to properly test and fix for the upcoming Ubuntu 24.04-LTS release next April. No good can come out of this…

That said, Mathworks sponsors up to 5 “non-trivial” user support requests between now and 5th July 2024, sponsoring up to 5 hours of my work time on each specific incident. Deciding what to take on is at my discretion, but I only get paid if I can solve the problem to the reasonable satisfaction of the user, so this is a risk-benefit judgement of whether the problem will almost certainly be solvable in no more than 5 work hours with the limited and technically outdated equipment I have available.

The other way to get it fixed is via a paid support request, where the risk of failing to find a solution is on the user, and the financial cost to the user can be way higher. With reasonable and supportive users this could all be a non-issue, but with our users it is one. And with the dire financial situation of the project caused by our users, all these things may turn very bad very soon.

So Ian, if you want to, test this carefully, maybe after rebooting the machine and taking whatever other precautions, and if the problem is not some weird fluke, post whatever logs and specs you can about this pageflipping failure: all the usual stuff I ask for, from versions/specs of all software components, to XOrg logs, to kernel dmesg logs etc. Probably once for a standard setup, once for the latest Mesa updates from ppa’s etc. If I find the time to skim over it and it looks like there is a chance to fix it, we can work from there. If there’s a non-trivial risk that this could take me more than 5 hours to fix, I’ll have to ignore it, or somebody will have to pay a serious bill if they want it fixed.

These problems rarely affect just one model of gpu, but more likely whole gpu families, and the difficulty of fixing them grows exponentially the more time passes between detecting the problem and trying to fix it.

-mario

The yellow line is only at the top and there is no “classic” tearing, but I do see a novel (at least to me) artefact: in the top 1/5 of the display I see random flickering binary noise, as if random 0-or-1 noise had been scaled up with nearest-neighbour interpolation. It flickers every 10 to 40 frames. For some reason running PerceptualVBLSyncTestInfo shows this artefact less. Other tests “look” good, and ProceduralGarboriumDemo can get up to around 28,000 gabors without apparently dropping FPS, but given the lack of pageflipping I’m not sure how much to trust the averaged FPS measurement.

I’ll do a full logged run tomorrow and upload it to a gist, then you can have a look if you think it is worthwhile. The RDNA2 cards are working well (and a faster RX6800 is roughly the same price as an RX7600, so it just depends on availability from suppliers), so yes, it depends on whether this is just a fluke of this card or a general change from AMD…

Yep, that’s tearing from a too-slow framebuffer blit, as part of the fallback path used when pageflipping cannot be used. Other stimuli may look good, but they aren’t. The visual corruption is just most apparent in PerceptualVBLSyncTest because it is designed to make it most prominent. You can consider the system utterly untrustworthy for visual stimulation.

You can trust the average fps, just not the timing/timestamps of any individual frames, which is why timing has to be considered utterly broken and untrustworthy. Other toolkits naively assume that average fps has some meaning beyond performance benchmarking, which is why no other toolkit would detect or warn about the brokenness and would just let you silently corrupt your study.
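
To make that concrete, here is a quick sketch (my own, not a shipped PTB test) of looking at individual frame intervals instead of an average fps figure. Ofc. on a setup where pageflipping is broken the returned timestamps themselves can’t be trusted, which is exactly the problem:

```
% Record per-frame Flip timestamps and count frames whose interval exceeded
% ~1.5 video refresh durations, i.e., probable missed deadlines.
win = Screen('OpenWindow', max(Screen('Screens')), 0);
ifi = Screen('GetFlipInterval', win);
n = 300;
t = zeros(1, n);
vbl = Screen('Flip', win);
for i = 1:n
    Screen('FillRect', win, mod(i, 2) * 255);   % alternate black / white
    vbl = Screen('Flip', win, vbl + 0.5 * ifi); % target the next refresh
    t(i) = vbl;
end
sca;
d = diff(t);
fprintf('Mean interval %.3f ms, intervals above 1.5 * ifi: %d of %d\n', ...
        1000 * mean(d), sum(d > 1.5 * ifi), numel(d));
```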

It could be a temporary malfunction of the machine for some reason, fixed by rebooting or power down → wait → restart. But it is unlikely to be a fluke of that specific card. More likely there is a bug somewhere in the graphics/display stack, possibly triggered by some change in the hardware or driver implementation for a new gpu family, affecting the whole generation of gpu’s and future models. That’s why pro-active testing of this stuff by people like me is important to catch these things early, a time-consuming task which will have to happen less and less carefully due to the lack of funding. Give it a tiny bit more time, and people will finally get what they pay for.

Hi,

did you end up trying one of these two cards (AMD Radeon PRO W6600 or W6800)? Do they work fine?

Thanks!

Antimo

Not yet. I will post if I end up trying one. In the meantime I’ve used an extra one of the older Polaris cards I had available.