Linux (UBUNTU) + AMD - which card to choose?


I’m setting up a couple of new PTB systems for non-human primate visual system experiments. I’ll be building the Linux PCs from separately purchased parts and installing whatever stable (non-beta) version of Ubuntu is advisable. Each PTB PC will be connected to a VIEWPixx display running at 1920x1200@100Hz, showing dynamic color stimuli (similar to those described here: Novel Color Stimuli for Studying Spatial Attention | JOV | ARVO Journals).

In general I like to buy the best hardware I can afford (with the caveat that it doesn’t make sense to spend money on features I won’t use).

It seems like the AMD Radeon Pro WX9100 is their current top-of-the-line card. It also seems like at least one other user, @antimo, is using that card almost successfully. Are there other AMD cards to consider? Should I wait and make sure Antimo is able to work out his concerns about dithering?

Looking forward to hearing from Mario and other users. Thanks!

If you want to be a bit more conservative, I built around 5 systems with WX5100 cards, which are from the Polaris generation that Mario tests most heavily. We use CRS Display++ panels @ 120Hz for NHP and human work with no timing issues (Display++ and ViewPixx use quite similar low-level code to enable the high bitdepth modes; I can use the Mono++ and Color++ modes without any problems). My simplistic benchmark is to use ProceduralGarboriumDemo(N); with this card we can get to N = ~8000 without any dropped frames. Though I normally like to live on the bleeding edge, in this case I am happy with the stability of these Polaris cards…
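
For reference, a hedged version of that benchmark loop, assuming the demo's single gabor-count argument as in current PTB; each run ends on a keypress, and you watch for PTB's skipped-frame warnings rather than any automated report:

```matlab
% Sketch of the gabor-count benchmark; ProceduralGarboriumDemo(ngabors)
% is assumed to take the number of gabors as its only argument.
Screen('Preference', 'SkipSyncTests', 0); % keep PTB's timing checks active
for n = [1000 2000 4000 8000]
    fprintf('Benchmarking with %d gabors - press a key to end the run.\n', n);
    ProceduralGarboriumDemo(n);
end
```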

What are your specific needs? That JOV abstract suggests nothing out of the ordinary? Do you expect to need more than 8 bpc color precision? Why the ViewPixx - ie. what specific features of it do you intend to use?

In principle all AMD gpu’s made in the last ~14 years should work well, but in practice my current development and testing has shifted to only involve the two most recent AMD cards which i have conveniently available at the moment in my home office. One is a Polaris11 gpu, the RadeonPro 560 in a 2017 MacbookPro. This is functionally equivalent to the Radeon Pro WX4100 i have sitting on a shelf on standby atm, and the WX5100 that @Ian-Max-Andolina recommends. They differ in maximum performance, connector types etc., but it is all the same underlying Polaris hardware, so testing one likely covers the whole class. All these pro gpu’s are identical to the consumer Radeons of the RX500 series if i remember correctly - googling a model will usually give a quick pointer.

The 2nd one is the RavenRidge integrated graphics chip of an AMD Ryzen5 2400G processor. It is technically a bit more advanced than the Vega graphics cards, as it combines the Vega graphics engine with the next-generation DCN-1 display engine. Performance-wise, integrated graphics is slower ofc., but functionality-wise this gives me some confidence that older Vega gpu’s and more modern Navi gpu’s should work well.

In the end it is a tradeoff. Ian’s WX5100 cards as a conservative choice would likely be powerful enough for typical tasks done with a ViewPixx panel.

Vega gpu’s are faster, and more expensive. Their display engines are more advanced if one wants to use AMD FreeSync / DisplayPort adaptive sync for fine-grained timing control for sub-frame accurate stimulus onset timing, but that feature won’t work on a ViewPixx panel or Display++ anyway, as it requires DisplayPort and a suitable monitor. If you think you need the extra performance for the extra money, i’d first wait for final results from @antimo before going that route.

The latest generation AMD gpu’s from the Navi gpu family are another option, providing more advanced hardware capabilities for special use cases inside their new DCN display engines. A theoretical downside is that PTB doesn’t support low-level access on them anymore, but in practice these tricks are rarely if ever needed anymore on modern Linux drivers - that’s why i so far didn’t bother reimplementing them.

So in the end it depends on what specific needs you have for your setup wrt. color precision, timing precision, etc. The abstract suggests no special needs, but the use of a Viewpixx suggests some special needs?

One more comment: If i remember correctly, the ViewPixx panels have exactly one native video mode at which they operate at optimal timing precision - the maximum they can do. I think unless i misremember that would be 1920x1200@120Hz, so that’s what one would want to use for optimal precision?

-mario
[20 minutes of work time used so far.]

Thanks @Ian-Max-Andolina and @mariokleiner for the information! I’ll try to provide a bit more detail about my needs, and I’ll also try to restate some of what I’ve understood so far.

Details:

I think you’re correct that my specific needs (at present) do not demand anything out of the ordinary. However, I do have one “wish” which I will state below, as it is better understood in the context of another question.

No, I don’t expect to need more than 8 bpc color precision. Perhaps unrelated, but I update the CLUT on each frame to accommodate my use of L48 mode (more on this below).

The VIEWPixx because: (1) I am using a version of PLDAPS (https://www.frontiersin.org/articles/10.3389/fninf.2012.00001/full), which relies on the I/O capabilities of the various -Pixx devices rather than on a DAQ card / device. I like this solution because it means the PC running MATLAB for PTB has minimal “other responsibilities”. Some of the experiments require processing / visualizing neuronal / behavioral data as it is collected, trial-by-trial, and it is most convenient to be able to do this in the same instance of MATLAB. (2) I like that the VIEWPixx LCD display panel is optimized for vision research (e.g. spatial uniformity is tested / calibrated; there are 1000 nicely spaced LED backlights and the “scanning backlight” mode for a moderate improvement in temporal dynamics). (3) If I’m concerned about the display I can get it tested / recalibrated / etc. by VIEWPixx.

One feature of the VIEWPixx that I intend to use is the L48 video mode, though I would prefer to avoid this if possible. I consider it an invaluable tool to see a copy of the subject’s display with added information overlaid. To accomplish this in my previous lab we used L48: one CLUT for the subject, and a second CLUT for the display copy. Anything drawn to the display that we wanted to be invisible to the animal indexes a CLUT row with background RGB values in the subject’s CLUT and non-background RGB values in the display-copy CLUT. Of course some consideration of the order in which things are drawn is also important.

The major downside of L48 is that each CLUT is limited to 256 rows. The dynamic color stimulus referenced in the JOV link above uses colors drawn from a continuous distribution specified in DKL color space. The stimuli are circularly-windowed grids; each cell of the grids is assigned a color that persists for an 8 frame “lifetime” (each cell’s lifetime is initiated at a random value from 0-7 to avoid all cells being “reborn” on the same frame). This means many more than 256 colors per trial, so to make use of L48 mode, I update the CLUT on every frame.

Ideally, rather than this old-school CLUT-animation-style solution, I would simply use PTB to draw distinct content to each of two window pointers on each frame, but I really have no idea if this is feasible. Is this something that my choice of graphics card might make possible? To be clear, if I could, I would prefer to draw two 1920x1200@100Hz windows rather than one.
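
For concreteness, here is a minimal sketch of that per-frame CLUT update, assuming PTB’s standard DataPixx L48 setup path (‘EnableDataPixxL48Output’). The 512-row layout for separate subject/console CLUTs follows VPixx’ Datapixx(‘SetVideoClut’) convention as I understand it - verify against your device’s documentation; the colors are random placeholders:

```matlab
% Minimal sketch: CLUT animation in VIEWPixx L48 mode.
PsychDefaultSetup(2);
screenid = max(Screen('Screens'));

PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'EnableDataPixxL48Output');
win = PsychImaging('OpenWindow', screenid, 0);

bgRow = 1; % CLUT row reserved for "invisible to the subject" overlays
for f = 1:1000
    % Recompute this frame's 256 colors (placeholders standing in for
    % the DKL-sampled grid colors described above):
    subjectClut = rand(256, 3);
    consoleClut = subjectClut;
    subjectClut(bgRow, :) = 0.5;     % background gray on subject panel
    consoleClut(bgRow, :) = [1 0 0]; % visible marker on the display copy

    % Queue the new table for upload to the device on the next flip
    % (loadOnNextFlip = 2 routes it to the VPixx box, not the gpu). If
    % your setup rejects a combined 512-row table, set the console CLUT
    % separately via Datapixx('SetVideoClut', ...) per VPixx docs.
    Screen('LoadNormalizedGammaTable', win, [subjectClut; consoleClut], 2);

    % ... draw the stimulus as CLUT indices here ...
    Screen('Flip', win);
end
sca;
```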

In my previous lab we ran our VIEWPixx displays at 100Hz and didn’t have any issues, but there’s no specific reason I couldn’t use them at 120Hz.

What I’ve understood so far:

The WX5100 / RX500 should work well for my not-special-despite-using-viewpixx needs; they differ slightly but not in ways that Mario would expect to make a difference for me (or, I assume, he would have mentioned as much).

Another AMD GPU that Mario has tested a good deal is the RavenRidge GPU integrated into a Ryzen5 2400G CPU. I don’t believe he’s suggesting that I go this route, but perhaps this could work well for some folks, and it suggests that AMD GPU architectures newer than Polaris (such as Vega / Navi / Big Navi?) should also work.

The potential upside of Vega GPU cards is that they are faster and support AMD FreeSync for improved temporal precision. However, VIEWPixx / Bits++ displays do not support FreeSync, so that potential upside is moot. It also seems like there aren’t very many cards that use the Vega architecture: the Radeon RX Vega 56, 64, and 64 Liquid, Radeon VII, Radeon Vega Frontier Edition (Air Cooled), and Radeon Vega Frontier Edition (Liquid Cooled).

The potential upside of even newer architectures like Navi (and perhaps Big Navi) is yet more advanced hardware capabilities, but it appears that we’ve established my needs are not advanced. I would be curious to hear what sorts of capabilities these cards have that Mario thinks vision scientists might make use of.

Bottom line: based on what I plan to do at present, the WX5100 will work well. If I want to wait and see how things shake out for @antimo, I could get the WX9100, but it’s not clear that I actually need it.

Hi, We just replaced our nvidia 1060 with Radeon Pro WX 3200 (and are replacing 6 more) and it works perfectly with Ubuntu 18.04 and Matlab 2020b.
Hope this helps, Best, Saumil

Ok, so you mostly use the ViewPixx for its i/o subsystem and PLDAPS compatibility and its general good display quality, not so much for the high color precision and other special display features which would influence the choice of graphics card.

You don’t actually need the high precision color or luminance display modes, only the L48 separate clut’s for the panel itself and a connected secondary standard DVI monitor? I guess that’s what they refer to as “console mode”. I never tried that.

So the only thing you need for your graphics card is to be fast enough to drive the display with your stimuli at 1920x1200@100Hz, and to pass through the pixel color values without applying any dithering or other distortions.

Wrt. a second display and doing without the L48 clut trick: in principle yes, but the devil is in the details…

Given the ViewPixx uses a dual-link DVI video connection, the maximum video bandwidth of 330 MHz will not be sufficient to support much more than 1920x1200@120Hz, and certainly not a 2nd experimenter monitor connected to the ViewPixx in split video mode and running at the same video resolution/refresh rate. I think as far as the ViewPixx is concerned, the L48 trick would be the best you could do.
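
A back-of-envelope check of that bandwidth claim (the htotal/vtotal figures are approximate CVT reduced-blanking totals for 1920x1200, so treat the results as estimates):

```matlab
% Rough pixel-clock math against dual-link DVI's ~330 MHz limit:
htotal = 2080; vtotal = 1235;      % approx. CVT-RB totals for 1920x1200
pclkOne = htotal * vtotal * 120;   % one 120 Hz stream: ~308 MHz - fits
pclkTwo = 2 * pclkOne;             % two such streams: ~617 MHz - doesn't
fprintf('1 stream: %.0f MHz, 2 streams: %.0f MHz, limit: 330 MHz\n', ...
        pclkOne / 1e6, pclkTwo / 1e6);
```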

However, in principle you could do this by attaching the 2nd display monitor for the experimenter to a 2nd video output on your graphics card instead of to the ViewPixx. E.g.,

  1. Plug in the 2nd monitor which also runs at 1920x1200@100 Hz.
  2. Open a fullscreen onscreen window in stereomode 4, so it spans both monitors, the Viewpixx and that experimenter monitor, showing the “left eye” stimulus on the Viewpixx and the “right eye” stimulus on that 2nd monitor. Then draw the stimulus twice, selecting the target buffer via Screen('SelectStereoDrawBuffer', win, eye): once without the extra info (eye=0), once with it (eye=1). A minimal sketch follows below.
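
A hedged sketch of that dual-display trick, assuming both monitors live on the same X-Screen; the rect and stimuli are placeholders:

```matlab
% Stereomode 4: one fullscreen window spans ViewPixx + experimenter monitor.
PsychDefaultSetup(2);
screenid = max(Screen('Screens'));
PsychImaging('PrepareConfiguration');
win = PsychImaging('OpenWindow', screenid, 0.5, [], [], [], 4); % stereomode 4

stimRect = [100 100 300 300]; % placeholder stimulus location
while ~KbCheck
    % Subject view ("left eye" buffer -> ViewPixx):
    Screen('SelectStereoDrawBuffer', win, 0);
    Screen('FillOval', win, 1, stimRect);

    % Experimenter copy ("right eye" buffer -> 2nd monitor): same stimulus,
    % plus overlay info the subject never sees:
    Screen('SelectStereoDrawBuffer', win, 1);
    Screen('FillOval', win, 1, stimRect);
    Screen('FrameRect', win, [1 0 0], stimRect + [-20 -20 20 20]); % e.g. eye window

    Screen('Flip', win);
end
sca;
```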

There’s also
PsychImaging('AddTask', 'General', 'MirrorDisplayToSingleSplitWindow'); meant to implement mirroring of stimuli in such a “one window spans two monitors” configuration. It shows the same image in both halves of the window, and thereby on both monitors, but with no extra information, as i didn’t anticipate this use case. One could extend the functionality in the future to allow for some information overlay for the kind of feedback you want. This would be a more fancy and slightly more efficient way than just abusing dual-display stereomode 4 for drawing this manually.
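
Setup for that task is just the usual PsychImaging sequence (sketch; screenid as in the snippet above):

```matlab
% Split-window mirroring: same image on both window halves / monitors.
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'MirrorDisplayToSingleSplitWindow');
win = PsychImaging('OpenWindow', screenid, 0);
```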

I guess the feasibility of the above depends on how that would integrate with PLDAPS?

A bigger catch with this is that for stimulation timing to not get disrupted, both displays (ViewPixx and extra monitor) would need to run with synchronized video refresh cycles - just as one also needs for dual-display stereo. This is where it gets funny, because now you need two identical display devices which can run with identical low-level video modes and off the same gpu hardware clocks. As the Viewpixx is dual-link DVI at 1920x1200@100Hz, the other monitor would also need to be dual-link DVI, and be able to be forced to the same video mode timing as what a Viewpixx uses.

Another approach would be using a slightly hacked AMD display driver that disables vsync on the second monitor, intentionally tearing, so timing and quality are optimal on the subject’s display, but only ok’ish on the experimenter monitor. This functionality doesn’t exist yet. I thought about adding it at some point to the Linux display drivers for these kinds of scenarios where one needs to mirror content to a secondary experimenter monitor without impacting performance much and without impacting timing at all, but haven’t gotten around to this yet.

As far as straightforward but inefficient approaches go, you could also simply open a second onscreen window, on a separate monitor and X-Screen, or possibly on the desktop, and just draw all stimuli twice, and even use ‘Flip’ on that window without vsync enabled to avoid potential timing problems. The downside here is that separate X-Screens are treated like physically separate display devices which can’t share any gpu resources. E.g., if you use textures or offscreen windows then the same texture or offscreen window can not be used for both windows, one would need to create each texture twice, despite having the same content, one for each window.
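
A sketch of that straightforward approach, assuming X-Screen 1 drives the ViewPixx and X-Screen 2 the experimenter monitor (adjust the screen indices to your setup):

```matlab
% Second onscreen window on a separate X-Screen, flipped without vsync.
PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
winSubject = PsychImaging('OpenWindow', 1, 0);
PsychImaging('PrepareConfiguration');
winMirror = PsychImaging('OpenWindow', 2, 0);

% Textures can't be shared across X-Screens, so build one per window:
img = uint8(rand(256, 256) * 255); % placeholder stimulus image
texSubject = Screen('MakeTexture', winSubject, img);
texMirror  = Screen('MakeTexture', winMirror, img);

while ~KbCheck
    Screen('DrawTexture', winSubject, texSubject);
    Screen('DrawTexture', winMirror, texMirror);
    Screen('Flip', winSubject);           % timed, vsynced subject flip
    Screen('Flip', winMirror, [], [], 2); % dontsync=2: never stalls the subject display
end
sca;
```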

So there are various ways one could do this, with different tradeoffs and potential catches. The main issue would be to make sure the graphics card is fast enough to drive two monitors at 1920x1200@100Hz, with lots of headroom to spare.

Ah, looking at ViewPixx specs on their website suggests they can work with low lag in the refresh rate range from 100 Hz to 120 Hz at 1920x1200, so the timing problems would only happen at < 100 Hz. It has been a few years since i tested on a ViewPixx, so i can’t remember the details, only that some different resolutions or lower refresh rates were problematic. If i remember correctly, it displayed a little red blinking warning square if the video mode was unsuitable for optimal timing. cfe. Datapixx('EnableVideoRescanWarning');.

Yes. Performance-wise for the L48 mode. Identity pixel passthrough should work on a DVI connected Viewpixx.

I’m not suggesting the RavenRidge, i’m just saying that is the only other AMD gpu apart from Polaris that i actively test and develop against atm., because that’s what i have available. I have the RavenRidge because that was the cheapest AMD gpu with next generation DCN display engine available at the time. It was part of a 499 Euros “special offer” PC from the local discounter two years ago, the only PC i could financially afford at my close to minimum wage salary. So far all equipment used to develop PTB is either my private property, or whatever some labs in Tuebingen let me use in non-Pandemic times.

That said, RavenRidge as a mixed design of older Vega graphics core + new DCN display engine is very useful to test how both Vega and more modern AMD gpu’s would likely fare with PTB. I would expect Vega to work fine – let’s see how the situation in that other Vega thread will work out in the end. We also already have some user reports here on the forum and in my e-mail of successful use of Navi gpu’s under Linux. Ofc. users will usually only test their graphics cards for their specific use case, not exercise the way more exhaustive suite of tests i run with the graphics cards i have available in my home office.

Vega gpu’s were meant for power-users or power-gamers, as far as i understand. There are fewer models, and they are all relatively high price, high performance, high power consumption.

All AMD gpu’s since about 2014 (Sea Islands gpu family) are FreeSync capable and can be used to get sub-frame accurate stimulus onset timing with Psychtoolbox if paired with suitable FreeSync / DisplayPort adaptive sync capable monitors. The difference between Vega and older models is that Vega and later have a refined 2nd generation implementation of FreeSync, which allows more reliable sub-frame timing. I contributed improvements to AMD’s Linux display drivers a while ago to make FreeSync useful for vision science, and PTB’s current implementation takes advantage of those mechanisms to get the most out of current FreeSync on AMD + Linux. From that work i know that it was easier to get stable timing on the post-Vega RavenRidge than on the Polaris or on a 2014’ish Sea Islands gpu that i can occasionally test on.
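
To illustrate what sub-frame timing means in PTB terms (a sketch; see PTB’s VRRTest demo for the actual supported setup path and caveats):

```matlab
% With FreeSync/VRR active, a flip can be scheduled at an arbitrary target
% time instead of snapping to the next fixed refresh boundary:
tWhen = GetSecs + 0.0123;         % deliberately not a multiple of 1/refresh
vbl = Screen('Flip', win, tWhen); % with VRR, scanout starts near tWhen
fprintf('requested %.4f s, got %.4f s\n', tWhen, vbl);
```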

I also hope to be able to find the time later this year or next year to experiment with some new timing methods to improve our current implementation further. Most of the work would be done in the Linux kernel, not in PTB itself, and this could easily take a year or more to turn into a user facing feature, as it requires collaboration with other Linux developers, e.g., the Linux folks at AMD. The outcome of that effort could be a total failure or something pretty nifty, but i expect that Vega or later will be a faster or easier win. So people who care about sub-frame timing will probably be better off with Vega or later.

From your description, a fast enough graphics card which allows for identity pixel passthrough should be enough, so @Ian-Max-Andolina’s experience should translate well.

Wrt. capabilities, the FreeSync stuff for fine-grained timing is already good and should become much better if my plans succeed. High dynamic range (HDR) / wide color gamut support is evolving a lot in each hardware generation. Another feature i was working on lately is native support for 16 bit framebuffers, which on the current generation of hardware allows for up to 12 bpc output precision on suitable displays. This code works perfectly here on my machines, but is not yet accepted upstream - it is sent out for review, so i don’t know yet if and when it will become available in future Linux distributions / kernels. The older generation DCE display engines required some tradeoffs that weren’t needed on the new DCN engines, so upstream might be less likely to accept my changes for older generation hardware.
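
In PTB terms, that native 16 bit framebuffer support is requested like this (a sketch; per the above it only delivers its full >8 bpc output on Linux + amdgpu with the not-yet-upstreamed kernel changes in place):

```matlab
% Request a native 16 bit framebuffer for high output precision:
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'EnableNative16BitFramebuffer');
win = PsychImaging('OpenWindow', screenid, 0.5);
```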

There’s a bit of a tradeoff here: Latest hardware has interesting new capabilities, but also tends to have higher complexity and therefore interesting new bugs either in the hardware or in the drivers, because all this stuff usually needs multiple iterations to mature. So if you don’t need any of the fancy stuff it can make a lot of sense to stick to older and mature stuff.

Yes. The only difference between Polaris and Vega for your use case is performance. For the L48 method, Polaris should be good enough, and with Vega we’ll have to see how that other thread with the Datapixx3 pans out.

If you wanted to get away from that L48 method, driving two displays would require at least twice the performance of the L48 method. Likely still doable with Polaris, but it all depends a lot on the specifics of your stimuli, how efficiently your script is coded, which approach to that dual-display method is taken, and how efficiently that PLDAPS software does graphics, so that’s difficult to judge without actually trying.

What graphics card do you currently use on the old setup?
-mario

Hi!
I am also running non-human primate electrophysiology experiments with a ViewPixx (1920x1080@120Hz) on a Linux/X11 system. I want to mirror the display on the ViewPixx monitor to another monitor (not the primary monitor, where I see the MATLAB window).
I tried the PsychImaging('AddTask', 'General', 'MirrorDisplayToSingleSplitWindow') as suggested above with an extended monitor configuration. But unfortunately, it only displays on one of the screens. Could you let me know if my X.org config file is set up the right way to get this mirroring to work? Thanks a lot!

The following is my xorg.conf file:

```
# Auto generated xorg.conf - Created by Psychtoolbox XOrgConfCreator.

Section "ServerLayout"
  Identifier "PTB-Hydra"
  Screen 0 "Screen0" 0 0
  Screen 1 "Screen1" RightOf "Screen0"
  Screen 2 "Screen2" RightOf "Screen1"
EndSection

Section "Monitor"
  Identifier "DisplayPort-0"
EndSection

Section "Monitor"
  Identifier "DisplayPort-1"
EndSection

Section "Monitor"
  Identifier "DisplayPort-2"
EndSection

Section "Device"
  Identifier "Card0"
  Driver "amdgpu"
  Option "ZaphodHeads" "DisplayPort-0"
  Option "Monitor-DisplayPort-0" "DisplayPort-0"
  Screen 0
EndSection

Section "Device"
  Identifier "Card1"
  Driver "amdgpu"
  Option "ZaphodHeads" "DisplayPort-1"
  Option "Monitor-DisplayPort-1" "DisplayPort-1"
  Screen 1
EndSection

Section "Device"
  Identifier "Card2"
  Driver "amdgpu"
  Option "ZaphodHeads" "DisplayPort-2"
  Option "Monitor-DisplayPort-2" "DisplayPort-2"
  Screen 2
EndSection

Section "Screen"
  Identifier "Screen0"
  Device "Card0"
  Monitor "DisplayPort-0"
EndSection

Section "Screen"
  Identifier "Screen1"
  Device "Card1"
  Monitor "DisplayPort-1"
EndSection

Section "Screen"
  Identifier "Screen2"
  Device "Card2"
  Monitor "DisplayPort-2"
EndSection
```

On Linux/X11 in regular display mode, a window can’t span multiple X-Screens, and windows involved in PTB’s mirroring modes must also be located on the same X-Screen. So your specific xorg config for three separate X-Screens won’t work for this purpose, only for having two totally separate stimulus windows on screens 1 and 2, e.g., for stimulating two subjects in parallel or similar, and the desktop GUI with Matlab/Octave on screen 0 (or a 3rd independent stimulus window). There are different solutions, however, depending on the specific needs and hardware setup - mirroring an image sounds simple, but is quite involved if low-level control / accuracy / timing is of any importance. The upcoming Psychtoolbox 3.0.19.0 release will have a few new tricks useful for display mirroring, at least applicable to graphics cards with fully open-source drivers, ie. non-NVidia.

Given this question is kind of off-topic for the title of this discussion thread, please post all followup answers under the following fitting topic to continue the discussion:

My first two questions though would be:

  1. Why not use the dual-link DVI “console monitor” output of your ViewPixx for a simple hardware solution? Assuming you really just need a mirror image of the stimulus, nothing fancy, and can connect a suitable secondary monitor without compromising timing?

  2. If 1 is not an option: do you use Ubuntu 22.04-LTS or later? It ships with X-Server 21, which provides a new trick wrt. display mirroring, contributed by myself specifically for such scenarios, at least for graphics cards with fully open-source graphics/display drivers, ie. non-NVidia. Also, what is the resolution of your “console monitor” for the experimenter?

→ Please answer in the linked topic above, not here.

Hi Mario,
Thanks so much for your quick response and help !

  1. Unfortunately, we have the ViewPixx/EEG model, which doesn’t have a second port to connect to the console monitor. Regarding the hardware solution: I am considering the possibility of a powered DVI splitter, but I am not sure if that would cause more delays than any other software-side solution. If this is something you have tested, I’d be happy to know your experience with that.
  2. I’ve been using Ubuntu 20.04.5 LTS. 1920x1080@120Hz is the resolution of the console monitor.
  3. If you have suggestions on how to change the X.org conf file such that mirroring or extending the screen could be established, I am happy to try that out too.
    Thank you very much for your help! :)

As i said, i want to move this discussion to the other post, so i answered you there:

-mario