Alternative to NVIDIA 3D vision?


Until recently we were using the NVIDIA 3D Vision system for binocular rivalry (using Psychtoolbox-3 with NVision3D).

However, our glasses have broken and NVIDIA has stopped manufacturing the system, so I was wondering if anyone else has run into this issue and can suggest alternatives to the NVIDIA system.

Additionally, if anyone has any advice on what to look for in a 3D shutter-glasses system for compatibility with Psychtoolbox, it'd be super appreciated.

Thanks in advance!

I have the same question.
Could anyone help with a reply, please? Thanks.

Consumer binocular-capable displays aren't easy to find at present. The previous consumer-3D fad (when almost every projector and many TVs/displays supported a 3D mode, and NVIDIA et al. targeted gaming) is almost completely dead.

I think DLP-Link capable projectors are the main remaining consumer tech still available (along with active polarising glasses that sync to the projector signal, e.g., DGD5 3D Glasses | BenQ US).
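If it helps anyone getting started: on the software side, PTB's frame-sequential (flip-frame) stereo mode should be all a DLP-Link setup needs, since the glasses sync to the projected image itself rather than to a separate sync cable. A minimal, untested sketch (function names are standard PTB, but I haven't run this against a DLP-Link projector):

```matlab
% Frame-sequential stereo sketch for a DLP-Link projector. Assumption:
% the glasses sync themselves to the alternating video frames, so PTB
% only needs to render in flip-frame stereo (stereomode 1).
PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
stereoMode = 1; % flip-frame stereo: left/right eye on alternating frames
win = PsychImaging('OpenWindow', max(Screen('Screens')), 0, [], [], [], stereoMode);

Screen('SelectStereoDrawBuffer', win, 0); % draw into the left-eye buffer
DrawFormattedText(win, 'Left eye', 'center', 'center', 1);
Screen('SelectStereoDrawBuffer', win, 1); % draw into the right-eye buffer
DrawFormattedText(win, 'Right eye', 'center', 'center', 1);
Screen('Flip', win);
KbStrokeWait;
sca;
```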

We bought a Chinese-made dual projector system (similar to commercial cinema 3D). The potential benefit is you don’t need active glasses, but proper alignment was close to impossible due to poor manufacturing :frowning:

If money is no obstacle, then the ProPixx has to be the best-validated stereoscopic-capable display for vision research: PROPixx - VPixx Technologies. The active shutter sits on the projector side, so the glasses themselves can be passive, and PTB is optimised for it, giving great visual fidelity (contrast / temporal resolution)!

More broadly, there are other technologies that are as good as or better than polarising glasses; in particular, lenticular-based glasses-free displays (remember the Nintendo 3DS?) seem to be making a strong comeback, helped by the fact that head tracking is now trivial.

I know of at least one recent clinical trial for amblyopia using a lenticular display (without head tracking) for children: Phase 2a randomised controlled feasibility trial of a new 'balanced binocular viewing' treatment for unilateral amblyopia in children age 3-8 years: trial protocol - PubMed

And the gold standard of course is a proper dichoptic display, which ensures zero crosstalk because each eye has its own screen. Thanks to VR headsets, there are newer tiny OLED displays with better near-eye optics that can be used for binocular display, though that would require some manual design; alternatively you could see whether a VR headset supports pass-through. Exactly how PTB's stereoscopic display modes would work with these displays remains to be seen (note PTB recently added OpenXR support for compatible displays, but I'm not sure how OpenXR and PTB's stereo modes interact?)
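For the conventional dual-display dichoptic case (two desktop monitors rather than an HMD), PTB already has a dual-window stereo mode that drives one onscreen window per eye. A hedged sketch, assuming the second display shows up as PTB screen 2, and that I recall the PsychImaging task name correctly (check `help PsychImaging` for 'DualWindowStereo'):

```matlab
% Dual-display dichoptic sketch: stereomode 10 uses one physical display
% per eye, so there is zero optical crosstalk. Screen numbers 1 and 2
% are assumptions; adapt them to your setup.
PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
PsychImaging('AddTask', 'General', 'DualWindowStereo', 2); % right eye on screen 2
win = PsychImaging('OpenWindow', 1, 0, [], [], [], 10);    % left eye on screen 1

% All drawing goes into this master window's two stereo buffers via
% Screen('SelectStereoDrawBuffer', win, 0 or 1); PTB routes the right-eye
% image to the second display automatically at Screen('Flip', win).
```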

Thanks for your reply, Ian.

Yep, that hype wave is over and the industry wasn't too pleased with the lack of $$$ streaming in. That's why companies like NVIDIA not only stopped selling their consumer 3D goggles, but also actively removed support for them from their proprietary drivers, so even existing hardware becomes unusable on a graphics-card or, potentially, operating-system upgrade. The danger of proprietary, non-OSS systems…

Also, the ViewPixx 3D, and the old DataPixx for good ol' CRT monitors, should have VESA 3-pin Mini-DIN stereo output connectors to drive suitable shutter goggles, and should be conveniently supported by PTB. There's also some PTB support for CRS FE1 goggles, cf. 'help BitsPlusPlus', the section about UseFE1StereoGoggles. I wrote the driver code for all this stuff, but don't remember ever having the opportunity to actually test it in practice, due to lack of hardware, so ymmv…

VR HMDs are binocular by nature, and PTB's regular stereo drawing code applies. One option is to set up PTB to use your HMD in 3DVR or Tracked3DVR mode, so our driver will ask OpenXR to set up proper perspective-correct 3D projection (by use of OpenXR projection layers that are configured in field of view, view frustum, focal length, etc., to be optimal for the given HMD/optics/viewer, for some definition of optimal). PTB will also provide suitable projection and modelview matrices to set up OpenGL perspective-correct 3D rendering compatible with this viewing model, updated by head tracking. The OpenXR runtime decides on all specific properties of the projection, and may take things like optics, field of view, focal length, eye-lens/screen distance (eye relief), lens separation, or potentially the actual IPD into account, depending on how fancy the HMD hardware is and which settings can be adjusted or measured by the hardware.
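The skeleton of that path, following the pattern of VRHMDDemo1.m, looks roughly like this. The function names are real PTB calls, but treat the details as a sketch and check 'help PsychVRHMD' for exact usage:

```matlab
% Sketch of PTB's Tracked3DVR path: OpenXR supplies per-eye projection
% matrices, and PrepareRender supplies head-tracked modelview matrices.
global GL;
PsychDefaultSetup(2);
InitializeMatlabOpenGL; % enable low-level OpenGL 3D rendering support
PsychImaging('PrepareConfiguration');
hmd = PsychVRHMD('AutoSetupHMD', 'Tracked3DVR');
win = PsychImaging('OpenWindow', max(Screen('Screens')));

% Per-eye projection matrices, computed by the OpenXR runtime for this
% HMD's optics / field of view:
[projL, projR] = PsychVRHMD('GetStaticRenderParameters', hmd);
proj = {projL, projR};

while ~KbCheck
    state = PsychVRHMD('PrepareRender', hmd); % head-tracked render state
    for eye = 0:1
        Screen('SelectStereoDrawBuffer', win, eye);
        Screen('BeginOpenGL', win);
        glMatrixMode(GL.PROJECTION);
        glLoadMatrixd(proj{eye + 1});
        glMatrixMode(GL.MODELVIEW);
        glLoadMatrixd(state.modelView{eye + 1}); % per-eye modelview
        % ... render your 3D scene here ...
        Screen('EndOpenGL', win);
    end
    Screen('Flip', win);
end
sca;
```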

Cfe. VRHMDDemo1.m, VRInputStuffTest.m, SuperShapeDemo.m, MorphDemo.m.

Or instead you add one line of code to an existing stereoscopic script to request presentation on an HMD. This lets you turn any existing stereo presentation script into something HMD-compatible, in principle; e.g., VRHMDDemo.m, ImagingStereoDemo(103), and some others. In this case OpenXR quad-view layers are used to present the images. You can think of these as big rectangular viewscreens floating in front of the viewer's eyes, at a fixed location and orientation relative to the eyes. Their size, location, and orientation relative to the viewer's eyes are set up by a heuristic of mine to look ok for hopefully many use cases, on hopefully many HMDs, on hopefully many OpenXR runtimes. But a heuristic it is, my test set of HMDs is currently n=2, and due to the severe lack of funding for PTB I didn't have much time to fine-tune it. Therefore there are functions that let you change those default per-eye locations/sizes/orientations, to adapt to specific needs of the experiment or specific properties of the HMD or subject, e.g., IPD, eye relief, etc.
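In code, this path amounts to the following sketch; the AutoSetupHMD call is the single added line, everything else is what a conventional stereo script would already contain (untested here, cf. ImagingStereoDemo for a working reference):

```matlab
% Making an existing stereo script HMD-capable via quad-view layers.
PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
PsychVRHMD('AutoSetupHMD', 'Stereoscopic'); % the one added line
win = PsychImaging('OpenWindow', max(Screen('Screens')));

% The rest of the stereo script runs unchanged:
Screen('SelectStereoDrawBuffer', win, 0);
DrawFormattedText(win, 'Left eye', 'center', 'center', 1);
Screen('SelectStereoDrawBuffer', win, 1);
DrawFormattedText(win, 'Right eye', 'center', 'center', 1);
Screen('Flip', win);
KbStrokeWait;
sca;
```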

I guess VR HMDs are the new hot thing for binocular stimulation after the failure of 3D TVs, with gradually improving quality of the technology. At least as long as industry thinks there's money to be made in this area.

The downside of using VR HMDs for research right now is the relative lack of control over how and when stimuli are presented to the subject. OpenXR standardizes some aspects of this and, for the first time, provides an API and open standards across hardware vendors and operating systems. But there is enough wiggle room and variability in the details of different hardware and software implementations; details that may not matter at all for consumer applications, but matter very much for vision science.

My hope on the software side here is Monado, a free and open-source OpenXR runtime implementation, so capable people can look at and understand how the software side of the VR stack works, and can potentially customize and improve it for their needs. In my work on PTB's new OpenXR driver I've already contributed small fixes and improvements, with hopefully more substantial stuff to come. On the hardware side, I root for SimulaVR, a consumer-oriented, pro-class HMD which uses a fully open-source Linux+Monado software stack. They are still in the pre-mass-production stage, a small startup, so given how difficult the hardware business is, they might go under before they reach a good stage. But as far as suitability for research and openness go, that would probably be a pretty splendid device. There's also the ILLIXR project for hacker types who are not afraid of do-it-yourself hardware hacking.