For the fun of it: At 1920x1080@60 Hz I can run that demo with 3000 gabors without dropping frames on a MacBook Pro 2017 (about the last Apple MBP where one can run Linux with only modest setup pain; the 2018 models introduced the T2 security chip and other hostilities), with a Radeon Pro 560 of the Polaris GPU family.
On my early-2019-ish 500 Euro middle-class PC with an AMD Ryzen 5 2400G processor (4 physical / 8 logical cores), 8 GB RAM and AMD Raven Ridge onboard graphics, I can do 1500 gabors at 1920x1080@60 Hz without dropping frames. At 120 Hz the same machine can do 530 gabors. In all cases the CPU load is less than 10%, so the limiting factor is GPU performance.
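As an aside, the way one would typically detect dropped frames in such a test with PTB is the 'Missed' return argument of Screen('Flip'). A minimal sketch, with the actual drawing of the gabors elided and the frame count of 600 and variable names chosen just for illustration:

    % Count probably-missed flip deadlines over 600 frames at the native refresh.
    win = Screen('OpenWindow', max(Screen('Screens')), 0);
    ifi = Screen('GetFlipInterval', win);   % duration of one video refresh cycle
    nMissed = 0;
    vbl = Screen('Flip', win);
    for frame = 1:600
        % ... draw the gabors here, e.g. via Screen('DrawTextures', ...) ...
        [vbl, ~, ~, missed] = Screen('Flip', win, vbl + 0.5 * ifi);
        nMissed = nMissed + (missed > 0);   % 'missed' > 0 hints at a skipped frame
    end
    fprintf('Probably dropped %d of 600 frames.\n', nMissed);
    sca;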
The Raven Ridge processor's integrated graphics uses a Vega 11 graphics core, so it is technically more advanced than the Polaris in the MBP.
One can still see the performance advantage of a discrete GPU with its own dedicated VRAM over onboard graphics, which has to use normal system RAM.
In general, AMD is currently ahead of Intel in both absolute performance and price/performance when it comes to processors and integrated graphics. So for small (Mac Mini style) PCs, AMD is currently a much better buy than Intel.
Wrt. AMD GPUs: Yes, the successor of Polaris, named Vega, is expected to work as well as Polaris GPUs do. Nothing I have read in tests suggests otherwise, and my 500 Euro PC has an onboard chip with a Vega graphics core, with no problems at all with PTB so far. There is one difference though between a Vega discrete graphics card (for plugging into a PC) and this integrated Vega chip: the display engine that brings the picture onto the screen is a DCN-1 display engine in my PC, whereas Vega discrete graphics cards have an older generation DCE-12 display engine.

So DCE-12 is untested by myself. I expect it to work perfectly fine, but PTB's special low-level display engine bag of tricks, which has to be adapted to each display engine generation, is there for DCE-12 but untested, so it could have bugs. I'd expect bugs to be unlikely, and easy for me to fix with the cooperation of a user, should the need arise. Also, the low-level code is of much less importance with recent AMD GPUs and modern Linux versions, as the OS itself now does most of what that code was used for. The code is mostly like an airbag in a car: you usually don't need it, but it is a bit of extra assurance against driver bugs that would slip through my own testing.
The latest generations of AMD GPUs, like Raven Ridge integrated graphics and the new RDNA / Navi GPUs, have a completely redesigned display engine called DCN instead of the old DCE, and DCN is not supported at all by PTB's low-level code at the moment. I don't intend to add support for the time being, because there would be little extra benefit compared to the added implementation/maintenance cost. I would do so if the need really arose. The only feature that our DCE low-level tricks currently add on AMD graphics, compared to DCN on the latest generation, is some true 12 bpc high precision color support - at a substantial performance cost and with more involved setup steps for the machine.
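For completeness, that 12 bpc mode on DCE hardware is requested via PsychImaging's native high-precision framebuffer task. A rough sketch from memory - treat the exact task name and setup steps as something to double-check against 'help PsychImaging' on a Linux + AMD machine:

    % Request PTB's native high bit depth framebuffer on Linux + AMD (DCE).
    % Effective output precision on such hardware is up to ~12 bpc.
    PsychDefaultSetup(2);  % unified key names, 0-1 normalized color range
    PsychImaging('PrepareConfiguration');
    PsychImaging('AddTask', 'General', 'EnableNative16BitFramebuffer');
    win = PsychImaging('OpenWindow', max(Screen('Screens')), 0.5);
    % ... draw and Screen('Flip', win) as usual ...
    sca;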
So Vega GPUs should be safe to use, and apart from offering higher performance than Polaris (being the high-end cards), they have an improved FreeSync hardware implementation, which is useful for PTB's VRR support for fine-grained stimulus onset timing (help VRRSupport) on Linux + AMD, with even better precision/robustness. But that doesn't matter for a Display++ display, which is a fixed refresh rate display. You'd need a FreeSync2 compliant monitor for that, or at least a DisplayPort adaptive sync capable monitor.
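On a suitable FreeSync / adaptive sync setup, that fine-grained timing mode is requested at window creation time. Roughly like this, going from memory of 'help VRRSupport', so treat the exact task name as an assumption:

    % Request VRR fine-grained stimulus onset timing on Linux + AMD + FreeSync.
    PsychImaging('PrepareConfiguration');
    PsychImaging('AddTask', 'General', 'UseFineGrainedTiming');
    win = PsychImaging('OpenWindow', max(Screen('Screens')), 0);
    t = Screen('Flip', win);
    % With VRR active, the requested onset time is no longer quantized to a
    % multiple of a fixed refresh duration, e.g. ask for onset ~7.5 msecs later:
    t = Screen('Flip', win, t + 0.0075);
    sca;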
As far as I know, there aren't any Vega GPUs for sale with less than 80W of power consumption though, as these are cards for the performance hungry. That's why I currently don't have a suitable PC for testing one: ~75W is the limit of my 500 Euro PC.
Navi / RDNA is untested and doesn't have PTB's bag of tricks supported. While I assume it would work just fine for the most part, and I haven't heard otherwise so far, the general rule of thumb is to always stay one GPU generation behind the most recent one if one wants a trouble-free experience, with any graphics card vendor or operating system. These GPUs are very complex hardware (== initial models of a new generation will have hardware bugs) that needs very complex device driver software (== initial driver releases will have software bugs, plus lack software workarounds for the early hardware design bugs). The stuff needs time to mature, so unless one wants to be a voluntary beta tester… That said: beta testers are always welcome to give feedback to me; just don't complain if you run into any trouble - someone with enough patience has to be the crash test dummy.
-mario