Psychtoolbox Demos / Tutorials

The core code I use is here, lines 267-282: opticka/movieStimulus.m at master · iandol/opticka · GitHub – the important detail is to check whether a new texture is available: if it is, delete the old one and draw the new one; otherwise just redraw the old one. You need to be careful when closing textures, as it is easy to close the wrong one. I never got round to contributing that; I need to conform to Peter’s coding style, but I will put it on my todo…
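
A minimal sketch of that idiom (not the opticka code itself), polling for new movie frames with Screen('GetMovieImage') in non-blocking mode:

curTex = 0;
while ~KbCheck
    % waitForImage = 0 polls without blocking: returns 0 if no new frame
    % is ready yet, and -1 at the end of the movie.
    newTex = Screen('GetMovieImage', window, movie, 0);
    if newTex > 0
        % A new frame arrived: close the old texture before swapping.
        if curTex > 0
            Screen('Close', curTex);
        end
        curTex = newTex;
    end
    if curTex > 0
        Screen('DrawTexture', window, curTex); % Redraw the newest frame.
    end
    Screen('Flip', window);
end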

A simple test (using my PTB toolbox):

m=metaStimulus; %meta stimulus handles multiple stimuli
m{1}=movieStimulus('xPosition',-5); %position is degrees from center
m{2}=dotsStimulus('xPosition',5);
s=screenManager;
open(s);
setup(m, s); %tell our stimuli the screen parameters
for i = 1:600; draw(m); animate(m); flip(s); end
close(s); reset(m);

You should see a dancing monkey and a random-dot stimulus running together. As an aside, my default dancing-monkey video also shows off PTB’s support for alpha-transparent video…

Thanks so much for sharing it! We will give it a try! :wink:

Latest Demo: “Coolness slider”.

Someone asked for a demo of an interactive slider which allows participants to report a continuous response on a scale. Here is the demo. You can move the slider with the mouse (click on the toggle and drag to move it). The current “coolness %” is reported and this value is used to colour some text on the screen.

https://www.peterscarfe.com/coolnessSlider.html
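
The core interaction can be sketched like this (not Peter’s actual demo code; window, xCenter, yCenter and white are assumed to come from the usual PsychImaging boilerplate):

toggleX = xCenter;                        % Toggle starts at the centre.
sliderLength = 600;                       % Slider length in pixels.
sliderX = [xCenter - sliderLength/2, xCenter + sliderLength/2];

% Per frame: track the mouse while the button is held, clamp the toggle
% to the slider line, and convert its position to a percentage.
[mx, ~, buttons] = GetMouse(window);
if buttons(1)
    toggleX = min(max(mx, sliderX(1)), sliderX(2)); % Clamp to the line.
end
coolness = 100 * (toggleX - sliderX(1)) / sliderLength; % 0-100 "coolness %".

Screen('DrawLine', window, white, sliderX(1), yCenter, sliderX(2), yCenter, 4);
Screen('DrawDots', window, [toggleX; yCenter], 20, white, [], 2);
DrawFormattedText(window, sprintf('%.0f%% cool', coolness), 'center', yCenter - 100, white);
Screen('Flip', window);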

Enjoy

P


Nice work, man! That’ll be of good use for many, I think.


A couple more demos…

First, one demonstrating how to get the bounding boxes of text in order to position text centred on a given point on the screen. This is contrasted with the standard way in which text is positioned.

https://www.peterscarfe.com/boundingBox.html
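
The gist of the technique, sketched (with illustrative coordinates): measure the text’s bounding box, then offset the draw position so the box is centred on the target point, rather than drawing from the default pen position.

myText = 'Centre me';
textBounds = Screen('TextBounds', window, myText); % [0 0 width height]
tw = textBounds(3) - textBounds(1);
th = textBounds(4) - textBounds(2);
targetX = 300; targetY = 200; % Hypothetical target point.
% Depending on the text y-position preference, a baseline correction
% may also be needed here.
Screen('DrawText', window, myText, targetX - tw/2, targetY - th/2, white);
Screen('Flip', window);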

This method was used in the coolness slider demo and it seemed sensible to make a demo showing this point alone.

https://www.peterscarfe.com/coolnessSlider.html

Finally, an interactive Likert scale demo. This is like the coolness slider, but with distinct points on a rating scale. This is another very standard way to collect ratings. It again uses the bounding-box technique.

https://www.peterscarfe.com/likertScale.html
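
The only real change from the slider sketch above is the clamping step: snap the toggle to the nearest of N discrete scale points.

nPoints = 7;
points = linspace(sliderX(1), sliderX(2), nPoints);
[~, idx] = min(abs(mx - points)); % Nearest scale point to the mouse.
toggleX = points(idx);
rating = idx; % Rating on a 1-7 scale.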

Enjoy

P


Another day another demo.

This one shows how to load a .obj 3D mesh and render it using a vertex buffer object (VBO). This allows very fast drawing of complex objects loaded from file.

https://www.peterscarfe.com/rotatingMesh.html
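
A rough sketch of the VBO idea (not the demo code itself; it assumes InitializeMatlabOpenGL was called before opening the window, and that verts is a 3-by-N single precision matrix of triangle vertices): upload the vertex data to GPU memory once, then draw it every frame without re-sending it.

global GL;
Screen('BeginOpenGL', window);

% One-time setup: create the buffer and upload the vertex data.
vbo = glGenBuffers(1);
glBindBuffer(GL.ARRAY_BUFFER, vbo);
glBufferData(GL.ARRAY_BUFFER, numel(verts) * 4, single(verts), GL.STATIC_DRAW);

% Per frame: point OpenGL at the buffer and draw all the vertices.
glEnableClientState(GL.VERTEX_ARRAY);
glVertexPointer(3, GL.FLOAT, 0, 0); % 0 = offset into the bound VBO.
glDrawArrays(GL.TRIANGLES, 0, size(verts, 2));
glDisableClientState(GL.VERTEX_ARRAY);
glBindBuffer(GL.ARRAY_BUFFER, 0);

Screen('EndOpenGL', window);
Screen('Flip', window);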

It is used in the code associated with our (coming soon) SolidSight Database.

https://www.peterscarfe.com/resources.html

We have also started the long job of making a comparable resource to the PTB demos for PsychoPy.

https://www.psychopy.org

We tend not to use PsychoPy in the lab due to the superiority of Psychtoolbox; however, it is becoming increasingly popular as Python itself grows in popularity. These demos will slowly start appearing.

https://www.peterscarfe.com/psychopytutorials.html

More to come…

Peter


I wonder how the sample code using VBOs compares to the mesh-rendering abilities of our moglmorpher() function, as demonstrated in MorphDemo.m and MorphTextureDemo.m? While moglmorpher() is mostly intended for fast morphing between different meshes of identical topology, and that is what those demos show, it can also render sets of single meshes via VBOs, similar to what this demo shows at a lower level. It has subfunctions for good performance (‘renderMesh’, ‘render’, ‘renderToDisplayList’, ‘renderRange’, etc.) and various other goodies, e.g., ‘renderNormals’ and ‘getVertexPositions’. In principle moglmorpher() should do this in a convenient, higher-level way, although I think our demos only show the morphing functionality, not the general mesh rendering, which was only used by lab-internal code.
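
Going by the subfunction names above, the single-mesh path might look roughly like this (a sketch; see help moglmorpher for the authoritative interface):

objs = LoadOBJFile('mymesh.obj');         % Returns a cell array of meshes.
meshid = moglmorpher('addMesh', objs{1}); % Upload the mesh once.

Screen('BeginOpenGL', window);
moglmorpher('renderMesh', meshid); % Render that specific mesh...
% ...or moglmorpher('render') to render the current (morphed) shape.
Screen('EndOpenGL', window);
Screen('Flip', window);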

Another thing to point out wrt. complex scenery are our bindings to the Horde3D rendering engine. I think they were never heavily advertised, and are even only stored in my GitHub repo.

They allow far more convenient rendering of complex 3D scenes, though with less low-level control, than raw OpenGL or the helpers PTB ships. The nice thing about Horde3D is that it is a lightweight rendering engine used for video games and the like. Not at the level of Godot, Unity, Unreal, etc., but still quite capable.

Your demo code also mentions that you use readOBJ() from gptoolbox, because “…the LoadOBJFile included with Psychtoolbox fails to load many common .obj files.” – Is this also your experience if you set the optional preparse parameter of LoadOBJFile(filename, debug, preparse) to 0? preparse is 1 by default, which speeds up loading of suitable large OBJ files, but will fail on OBJ files which contain anything other than 3-component vertices, faces and texture coordinates. So preparse 0 should be slower but more robust against various OBJs.
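
I.e. (with a hypothetical file name):

objs = LoadOBJFile('mymesh.obj', 0, 0); % debug = 0, preparse = 0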

In principle one could include readOBJ.m from gptoolbox and use it as a fallback for LoadOBJFile, given that gptoolbox is under a compatible MIT license which easily allows this, just by giving proper credit. LoadOBJFile, btw., is an improved version of code originally contributed by William Harwin from the University of Reading. It is a small world…

Thanks for the comments Mario.

Yes, many years ago I remember looking at moglmorpher et al. My memory corresponds with what you say: basically, they did far more than I needed. I needed a much simpler use case, i.e., load and render a mesh. Plus, I wanted to understand things a bit better myself.

I remember being impressed by the Horde functionality. I think it is very much overlooked / not known.

With reference to LoadOBJFile: yes, it fails with preparse 0 (see attached screenshots). Basically, my understanding is that the .obj file format is a bit of a mess, with many variants (if that is the correct word). I have never been able to get it to work with any of the .obj files I have created via our scanners.

I actually use a modified version of some C++ code (mexed) from

https://libigl.github.io

This makes the loading dramatically faster. I reference readOBJ() in the demo so the user does not need to worry about mex stuff. Though I hope to provide something when we publish the SolidSight Database and Code.

I actually collaborate lots with William. It was funny to see the origin of LoadOBJFile().

P

Another Day another demo…

Experimental code: a Posner cueing task. It demonstrates all the key elements of this basic, widely used task.

Background here: https://en.wikipedia.org/wiki/Posner_cueing_task

Code here: https://www.peterscarfe.com/poserCuingExperiment.html

P

For learning / understanding, your demo is certainly useful. For many use cases though, moglmorpher() might be the better choice due to its high performance, broad backwards compatibility with older systems and low-end hardware, and lots of useful built-in functionality. moglmorpher() was basically one important component of a high-performance facial tracking + manipulation + animation system, developed by myself and used by various people at the MPI. It was wrapped by higher-level code for our use cases, though.

Yep, it is a fun thing for more complex environments. It was used for research on spatial 3D navigation and, I think, self-motion perception in labs in Tuebingen, and I used it mostly to have less boring demos when I implemented initial VR HMD support in PTB. Unfortunately I don’t have any artistic skills, patience or time for 3D modelling, but I did spend some time building a 3D rendered model of the original NCC 1701 Enterprise for VR, by assembling sections of the ship from freely available models made by others on Google Sketchup. It was a bit too taxing for the graphics cards of ~2014, and I didn’t have the time to implement proper culling optimizations etc. Unfortunately the resulting model was something like 3 GB of data on disc, and not easily redistributable given its license, so not suitable for a public demo. It’s also sad that the owners of the Star Trek franchise seem to be very keen on shutting down any, even non-commercial / hobbyist, projects recreating those beautiful ships in great detail. I’d have loved to walk the corridors of that ship in VR…

But it certainly showed what one can do with Horde (could do, if one had actual 3D modelling skills), and was a fun few days of geeking out.

It’s surprising, as our pimped LoadOBJFile() is technically more capable than readOBJ(), so one would expect it to work in more cases than the other, including with sub-meshes and material definitions. But it hasn’t been updated much in a decade, and the OBJ file format is certainly rather complex for something that is supposed to be a “simple” data exchange format between applications, so there are various more exotic cases that neither our reader nor readOBJ will handle. It’s probably something simple that makes ours fail where readOBJ works…

Ja, fast they are not. This is one of the cases where compiled code can make a drastic difference. The ‘preparsing’ in my code is just a hack to speed up the cases I needed for my own and others’ research, for the type of .obj’s we used – mostly 3D meshes of faces and facial expressions for high-performance facial animation. I also used caching in my own code: read the .obj slowly once, then save it as a .mat file for fast loading…
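
That caching idea in a nutshell (hypothetical file names):

if exist('mymesh.mat', 'file')
    load('mymesh.mat', 'obj');        % Fast path: previously cached mesh.
else
    objs = LoadOBJFile('mymesh.obj'); % Slow path: parse the OBJ once.
    obj = objs{1};
    save('mymesh.mat', 'obj');        % Cache for next time.
end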

-mario


Another day another demo…

This one loads a movie and grabs the frames and pastes them onto a 3D plane rendered with perspective projection.

It is a simplified version of “SpinningMovieCubeDemo”, which is included in the PTB distro. The aim is to set out the key coding steps.
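
A sketch of the key loop (not the demo code; it assumes an OpenGL-enabled window with a perspective projection already set up, and a movie opened and started with Screen('OpenMovie') / Screen('PlayMovie')):

global GL;
while ~KbCheck
    % Grab the newest movie frame as a PTB texture.
    tex = Screen('GetMovieImage', window, movie);
    if tex <= 0
        break; % End of movie.
    end

    % Get the underlying OpenGL texture handle and map it onto a quad.
    [gltex, gltextarget] = Screen('GetOpenGLTexture', window, tex);
    [w, h] = Screen('WindowSize', tex);

    Screen('BeginOpenGL', window);
    glEnable(gltextarget);
    glBindTexture(gltextarget, gltex);

    % Rectangle textures use pixel texture coordinates; depending on the
    % texture orientation, the frame may also need a vertical flip.
    glBegin(GL.QUADS);
    glTexCoord2f(0, 0); glVertex3f(-1,  1, -3);
    glTexCoord2f(w, 0); glVertex3f( 1,  1, -3);
    glTexCoord2f(w, h); glVertex3f( 1, -1, -3);
    glTexCoord2f(0, h); glVertex3f(-1, -1, -3);
    glEnd;

    glDisable(gltextarget);
    Screen('EndOpenGL', window);
    Screen('Flip', window);
    Screen('Close', tex); % Release this frame's texture.
end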

Lots of work going on here on new demos. Primarily focused on VR. These will be released in a big chunk sometime before Christmas.

Also, work progresses on a Psychtoolbox Demos paper.

P


Another day another demo…

This one loads in an image of the earth from NASA, converts it into an OpenGL texture, and maps it onto the surface of a sphere. It is based on the famous MinimalisticOpenGLDemo from the Psychtoolbox distribution, but makes the key steps clearer for this specific task (the PTB demo is more complicated and contains other elements).

https://www.peterscarfe.com/earthSphere.html
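
The key steps, sketched (not the demo code; 'earth.jpg' is a placeholder file name, and InitializeMatlabOpenGL is assumed to have been called before opening the window):

global GL;
% Wrap the image in a GL_TEXTURE_2D texture: specialFlags = 1 requests
% GL_TEXTURE_2D, which gluSphere's normalised texture coordinates need.
img = imread('earth.jpg');
ptbTex = Screen('MakeTexture', window, img, [], 1);
[gltex, gltextarget] = Screen('GetOpenGLTexture', window, ptbTex);

Screen('BeginOpenGL', window);
glEnable(gltextarget);
glBindTexture(gltextarget, gltex);
glTexParameteri(gltextarget, GL.TEXTURE_MIN_FILTER, GL.LINEAR);

% Build a quadric sphere with texture coordinates and render it.
quadric = gluNewQuadric;
gluQuadricTexture(quadric, GL.TRUE);
gluSphere(quadric, 1, 64, 32); % Radius 1, 64 x 32 tessellation.

Screen('EndOpenGL', window);
Screen('Flip', window);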

Same day another demo…

This one is the same as the static earth sphere demo, but animates the earth rotating over time.

https://www.peterscarfe.com/rotatingEarth.html
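
Animation then just increments a rotation angle each frame before drawing the sphere (a sketch, continuing from the setup above):

angle = 0;
while ~KbCheck
    Screen('BeginOpenGL', window);
    glPushMatrix;
    glRotatef(angle, 0, 0, 1); % Spin about the sphere's polar axis.
    gluSphere(quadric, 1, 64, 32);
    glPopMatrix;
    Screen('EndOpenGL', window);
    Screen('Flip', window);
    angle = angle + 0.2; % Degrees per frame.
end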

P

Nice!

A few suggestions though. Your scripts now use …
PsychImaging('AddTask', 'General', 'FloatingPoint32Bit');
… but I don’t quite see why. This is useful if you need more than 8 bits per color channel (8 bpc) of color precision, or in fact maximum precision: e.g., HDR displays, fine contrast control, more than 16 million colors, super-precise alpha-blending, or various image post-processing operations by the imaging pipeline.

These demos seem not to require any of that. There is a cost associated with using these high-precision buffers, so one shouldn’t use them by default for no reason:

  • 32 bit float framebuffers consume twice the amount of video memory, and also memory/processing bandwidth, so they slow down drawing/post-processing/flipping. Depending on display resolution, desired framerate and performance of the used graphics-card this can be significant.
  • They are not supported on old graphics cards (before ~year 2007) or on low end systems like RaspberryPi 1/2/3.
  • On older, or new but cheap/lower end systems, alpha blending may not be supported at all, or only at very slow performance, for such float 32 framebuffers. E.g., the RaspberryPi 4/400 won’t alpha-blend and only slowly filter textures of that precision.

For this reason, there is PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible'); to allow PTB to downgrade precision to 16 bit floating point if 32 bit is not possible or not possible with good performance. 16 bpc float still gives enough precision for ~11 bpc SDR framebuffers or supposedly for typical HDR rendering.

But in the end, high precision framebuffers should not be used needlessly, because there is always a performance hit.

The other thing is
PsychImaging('OpenWindow', screenid, black, [], 32, 2, [], multiSample);

That 32, 2 sequence I’d recommend replacing with [], [], because there is literally no other setting there that makes sense in any use case. Those Screen parameters only exist for backwards compatibility with old scripts written against Psychtoolbox-2 almost 20 years ago. E.g., single buffering (1) is theoretically possible, but useless, untested for many years, probably broken, and ignored with any PsychImaging functionality. Technically there are other magic values for pixelSize than 32, but only PsychImaging('OpenWindow', ...) knows how to map certain requirements and task specifications to suitable Screen('OpenWindow', ..., pixelSize, ...) values that would make any sense.

Iow. 32, 2 is useless/meaningless at best, so leaving it out / at [],[] default is better.
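
I.e., simply:

PsychImaging('OpenWindow', screenid, black, [], [], [], [], multiSample);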

The snippet

% Fill the screen black
Screen('FillRect', window, black);
Screen('Flip', window);

is redundant if it follows almost directly after ‘OpenWindow’, because OpenWindow will already have done that. The reason some PTB demos still contain that is because it is a left-over from very early versions of PTB-3 from before the year 2005, which had a bug preventing the clear color in ‘OpenWindow’ from taking effect.

% Set the black and white index
black = BlackIndex(screenid);

Nothing wrong with that in principle, but if you use PsychDefaultSetup(2); in modern code, which requests color representation in normalized 0.0 - 1.0 range for 0% to 100% intensity, by definition BlackIndex() will always be 0.0, WhiteIndex() will always be 1.0 and GrayIndex() will always be 0.5 for 50% intensity. So the statements are basically redundant. With old style code that has 0-255 color range or has set a custom color range, e.g., 0-1023 for 10 bit framebuffers, BlackIndex() will always be 0 and WhiteIndex() will be 255, 1023, … etc. But the point of the modern normalized color range mode is that one doesn’t have to think about this anymore and always can use 0 - 1 range for black to white. The exception are HDR display modes where the color is usually represented in Nits or other “HDR units”, but then those Black/Gray/WhiteIndex() functions become meaningless anyway.

% Unify the keyboard names for mac and windows computers
KbName('UnifyKeyNames');

applies also to Linux, or any other operating system to which PTB could theoretically be ported. But this function call is redundant, because it is implied by PsychDefaultSetup(n) for any n > 0, iow. it is already done by the PsychDefaultSetup(2) statement at the beginning of your demo scripts.

PsychDefaultSetup is meant as a catch-all replacement for typical boilerplate code. Currently it makes AssertOpenGL, KbName('UnifyKeyNames') and the color range setup redundant.

Some (many?) of our PsychDemos/ will demonstrate similar anachronisms / redundant calls, etc., because nobody ever found the time or energy to update/rewrite them for the latest best practices. It is good though to stop using them in new sample code.

-mario


Hi Mario,

Thanks for the comments. Yes, being an old person now (sigh), I do tend to have these elements of code which were relevant when I learnt things many years ago, but are less relevant / not at all relevant now (albeit not breaking anything). I had been thinking about going over the demos to chop redundant things like that out, but time…

Same with the efficiency things. I have been lucky enough to work with kit on which these things have no noticeable overhead, but they could indeed cause noticeable effects / problems in more restricted circumstances. I think I got into repurposing code where stuff like that was relevant for different things where it is now not relevant. The code just stayed, as on my systems there was no noticeable negative effect (it just made the computer work harder).

Another example I can think of off the top of my head is multi-sampling. I tend to use this approach generally, even in some cases where arguably a less processor-intensive route could achieve the same goal.

Another is drawing in animation loops rather than using a waitframes value greater than 1 when flipping. This I have done consciously, but again, it is not needed in some cases, e.g., static stimuli.

I will keep all this in mind going forward. It would be good to show “best practice”.

r.e. the demos in PTB: I think you are correct. They likely contain some of these things which are no longer needed. I was also going to volunteer my time to go over all of the PTB demos in the distribution r.e. formatting and adding explanation. The demos are great, but I remember that when I started learning PTB it was still quite difficult for me to understand what was going on and why. If you think that would be useful, I would be happy to do it. I know PTB is open source, but I have never been sure whether that would be welcomed or not.

I have a bit of time up until Christmas to work on stuff like this. In the new year my time will become much more constrained.

I also want to finish a “PTB Demos” paper which I started years ago. People have been asking for it forever, but again, time… I mentioned it to a friend a while back and they said “well…, that’s 10 years too late”. Hopefully still worthwhile though. Albeit very late.

P

Another day another demo…

This time the famous Cafe Wall Illusion.

https://en.wikipedia.org/wiki/Café_wall_illusion

https://www.peterscarfe.com/cafeWall.html
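
The core construction, sketched (not the demo code): rows of alternating black and white tiles, each row offset horizontally, with thin grey “mortar” lines between the rows.

tileSize = 60; mortar = 4; grey = 0.5;
[w, h] = Screen('WindowSize', window);
rows = floor(h / (tileSize + mortar));
cols = ceil(w / tileSize) + 1;
Screen('FillRect', window, grey); % Grey background forms the mortar lines.
for row = 0:rows-1
    y = row * (tileSize + mortar);
    offset = mod(row, 2) * tileSize / 2; % Shift alternate rows by half a tile.
    for col = 0:cols-1
        x = col * tileSize - offset;
        colour = mod(col, 2); % Alternate black (0) and white (1) tiles.
        Screen('FillRect', window, colour, [x, y, x + tileSize, y + tileSize]);
    end
end
Screen('Flip', window);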

Dynamic version to come tomorrow!

P

Another day another demo…

This time an animated version of the Cafe Wall Illusion. It shows how to build the illusion out of simple graphics primitives and animate it over time (see the sketch after the link below). There is a little bit of math shown in the setup.

https://www.peterscarfe.com/dynamicCafeWall.html
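
The animated variant just adds a time-varying phase to the row offsets each frame. A sketch, assuming the drawing code from the static post is wrapped into a hypothetical helper drawCafeWall(window, phase) that adds phase to each row’s offset:

phase = 0;
while ~KbCheck
    drawCafeWall(window, phase); % Hypothetical helper, see the static sketch.
    Screen('Flip', window);
    phase = phase + 1; % Extra pixels of row shift per frame.
end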