VR/XR experiments

Thanks to the newly introduced VR features, I have started to think seriously about VR/XR experiments. I would be excited if I could use VR for rigorous presentation of stimuli in psychophysics experiments.

But, first of all, is it possible to show stimuli fixed to a location in the view in any VR system? For example, can I show a fixation point at the center of the view irrespective of the user's head direction?

Based on the 'Stereoscopic'/'Monoscopic' feature of the PsychOpenXR driver, I assumed that stimuli could be presented anywhere in the user's view, rather than in 3D space. But I wanted to confirm this quickly before trying out unfamiliar VR development, not to mention buying expensive VR headsets and PCs.

Thank you in advance,

I haven't looked at PTB's new interface yet, though we are also building a custom headset that we want to be OpenXR compatible, and I'm pretty sure we should be able to deal with head/eye position relative to world-centric coordinates (what is the point of VR otherwise?). I assume regular PTB commands draw to a virtual screen, and while they are 2D, the virtual screen is referenced in 3D relative to the user?

Thank you for confirming that we should be able to present stimuli not only in world-centric but also in user-centric or eye-centric coordinates. This helps a lot, because I could not find any VR demos that display objects fixed to the head or eyes.

It is great that you are building a custom headset! VR has been on the market for a while, but I think it is at a turning point with the rise of AI technologies. I will catch up with these new fields to see how we can utilize XR for psychophysics and social experiments. Thank you!

Wrt. your custom headset, any more details? Wrt. OpenXR compatibility, the best way to achieve this is to contribute code to the open-source Monado OpenXR implementation, just like the SimulaVR Linux headset, project NorthStar https://docs.projectnorthstar.org/, the ILLIXR project and various others.

Psychtoolbox can already take advantage of some special Monado features for more reliable/precise timing/timestamping, contributed by myself, and I intend to contribute more neuroscience-relevant improvements to Monado, e.g., further enhanced timestamping - assuming time and/or funding.

Yes, you draw as usual into an onscreen window, just like you'd do for a standard mono or stereo/binocular display, and that window's mono/stereo content gets displayed on a virtual screen (or two screens for stereoscopic/binocular display), which float(s) in 3D space somewhere relative to the observer's eyes.
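For the original question about a view-fixed fixation point, this means it comes essentially for free in the 2D modes, since the virtual screen itself is head-locked. A minimal sketch, assuming a working OpenXR runtime and following the setup pattern of the PTB VR demos (subfunction names as per the PsychVRHMD help):

PsychDefaultSetup(2);
PsychImaging('PrepareConfiguration');
hmd = PsychVRHMD('AutoSetupHMD', 'Monoscopic'); % head-locked 2D virtual screen
[win, rect] = PsychImaging('OpenWindow', max(Screen('Screens')));
% A dot drawn at the window center stays at the center of the view,
% irrespective of the user's head direction:
[cx, cy] = RectCenter(rect);
Screen('DrawDots', win, [cx; cy], 10, [255 255 255], [], 1);
Screen('Flip', win);
KbStrokeWait;
sca;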

For submission of these images to the XR compositor for actual display, there are different modes: Monoscopic, Stereoscopic, 3DVR and Tracked3DVR. In Monoscopic/Stereoscopic mode, OpenXR quad view layers are used, which are rectangular flat screens floating at a fixed location relative to the observer's eyes. The size, orientation and position of these layers can be set by the user's script, e.g.,

PsychVRHMD('View2DParameters', hmd, 0, [-0.098726, 0.000000, -1.000000]); % left-eye layer position [x, y, z]
PsychVRHMD('View2DParameters', hmd, 1, [+0.098726, 0.000000, -1.000000]); % right-eye layer position [x, y, z]

to place the left/right screen at specific 3D locations relative to a head-centered origin. By default, the driver tries to select a reasonable size and position for these layers, for some definition of reasonable. This is what one uses to convert existing mono/stereo stimulus scripts for an HMD.
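As a side note, iirc 'View2DParameters' also returns and accepts a size ([width, height], in meters) and orientation for these layers, so a script can query the driver's defaults and tweak them. Treat the exact signature as an assumption and consult the PsychVRHMD help:

[pos, siz] = PsychVRHMD('View2DParameters', hmd, 0);     % query left-eye layer defaults
PsychVRHMD('View2DParameters', hmd, 0, pos, siz * 1.5);  % enlarge the virtual screen by 50%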

The 3D modes use an OpenXR projection layer instead, whose location relative to the viewer's eyes is driven by head tracking data and set by the OpenXR runtime, i.e., as suggested as optimal by the runtime for perspective-correct projection. Our driver also returns suitable OpenGL projection and modelview matrices to render 3D content correctly in this mode of operation.
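A rough sketch of the resulting 3D render loop, loosely following PTB's VRHMDDemo1.m (field and subfunction names from memory, so treat this as a sketch rather than verified code):

global GL;
InitializeMatlabOpenGL; % needed for low-level OpenGL calls, before opening the window
PsychImaging('PrepareConfiguration');
hmd = PsychVRHMD('AutoSetupHMD', 'Tracked3DVR'); % head-tracked 3D mode
win = PsychImaging('OpenWindow', max(Screen('Screens')));
[projMatrix{1}, projMatrix{2}] = PsychVRHMD('GetStaticRenderParameters', hmd);
while ~KbCheck
  state = PsychVRHMD('PrepareRender', hmd); % fetch latest head tracking data
  for eye = 0:1
    Screen('SelectStereoDrawBuffer', win, eye);
    Screen('BeginOpenGL', win);
    glMatrixMode(GL.PROJECTION);
    glLoadMatrixd(projMatrix{eye + 1});      % per-eye perspective projection
    glMatrixMode(GL.MODELVIEW);
    glLoadMatrixd(state.modelView{eye + 1}); % head-tracked eye pose
    % ... draw 3D content with regular OpenGL calls here ...
    Screen('EndOpenGL', win);
  end
  Screen('Flip', win);
end
sca;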

The 3D reference space for tracking/rendering can be selected via PsychVRHMD('ReferenceSpaceType', hmd, refSpace) from a set of supported spaces, e.g., head-locked/fixed or fixed to the world - OpenXR supports different spaces, depending on the OpenXR runtime, hardware system setup and user choice.
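If memory serves, the refSpace codes follow the OpenXR XrReferenceSpaceType enumeration (1 = VIEW, i.e., head-locked; 2 = LOCAL; 3 = STAGE), but check 'help PsychOpenXR' for the values your runtime actually supports:

PsychVRHMD('ReferenceSpaceType', hmd, 2); % track/render relative to a local, world-fixed space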

Thanks Mario for all the pointers. I only have one student working on this. He has built the headset using a nice pancake optical system from a Chinese manufacturer called SeeYa, integrated the driver board and 6DOF IMU himself, and designed a custom-printed helmet box for non-human primates (35-60 mm inter-pupil distance); so far he has the displays and IMU all working via OpenXR on Windows. He also added a SLAM camera for integrating world and hand tracking. I think he will try to move over to, e.g., Monado and Linux; I've forwarded him your information and links. We hope to make this open hardware once we get to a working prototype. The cost should be a few hundred euros hardware-wise. Of course, if we can get this all built on an open-source stack, it would be great. We'd like to use it for VR and also just as a dichoptic display compatible with PTB. That is a lot to do for just one student, so let's see how far we get :upside_down_face:

Sounds like a lot of work indeed. But OpenXR is an API and spec; you need an OpenXR-compliant runtime for a given collection of hardware, so which OpenXR runtime do you use on Windows? SteamVR is the only one that comes to my mind as having some restricted openness and some info on how to write drivers for new hardware. I did contribute some improvements to Monado on Linux for Monado's SteamVR driver plugin wrt. input controllers. I use(d) SteamVR for testing PTB's OpenXR support on both Linux and Windows with both the Oculus Rift CV-1 and HTC Vive Pro Eye, in addition to OculusVR for the Rift CV-1 on Windows and Monado on Linux.

The main downsides of SteamVR when writing PTB's OpenXR driver were a whole bunch of bugs and quirks, on both Windows and Linux, sometimes reported by others and unfixed for many years, which made development of the PTB OpenXR driver much harder and much more time-intensive. And then there is the problem shared with all proprietary runtimes and drivers: the lack of control over the visual part, and especially over presentation timing. The latter is because the spec doesn't cover that sufficiently, and although I had the opportunity to give input on those aspects when OpenXR was still in the making, I couldn't devote enough time at the right time. I had/have to spend most of my time not on the things I consider really important for PTB and other toolkits in the mid- to long-term, but instead on taking on whatever work brings in the money to keep PTB from complete failure, due to the severe lack of reasonable funding.

All the proprietary runtimes are currently not at all targeted at research use, afaict.

That's why I think Monado is the way to go for research or special-purpose equipment. A Monado port to Windows has also been started, although it is in early stages and not yet really usable productively. Monado has an open, pleasant developer community, and offers the ability to contribute improvements if you have the skills - and hopefully, indirectly, the ability to also drive useful spec extensions forward in the long run, although the latter might be difficult. The main limiting factor for me is again lack of funding. Right now, the combo of Monado on Linux and PTB has some limited, hacky, not very plug-and-play timing support - not really great yet, but better than anything else out there. I hope to find the time for a proper timing implementation, at least for Monado on Linux, at least as a draft, later this year. Ofc. lack of funding is the biggest threat to this again, and might kill good things just as we have reached the point of having the basics in place in PTB. As things stand now, we lost a lot of money by writing this new driver, instead of making any money, so it is an investment into a future that might never come if PTB's business continues to be the failure it is and has been.

Going for a solution with a proprietary runtime would be quite a loss for general research purposes.

Anyhow, need to continue packing…
-mario