BlurredMipMap - but for Procedural textures?

BlurredMipmapDemo uses a very neat trick to generate a "foveated rendering"-like effect, where the image is sharp at the mouse position and increasingly blurred with distance from it. The demo depends on movie frames, and running it on a procedural texture unsurprisingly gives an error:

You asked me to use mip-mapped texture filtering on a texture that is 
not of GL_TEXTURE_2D type! Unsupported.

If we wanted to selectively blur a procedural texture, what would be the best way to do this? I've seen lots of one-pass GLSL blur shaders, but they cannot blur selectively, so we need a first pass into a buffer, then blur from that. I will explore drawing the procedural texture into an offscreen window and then see if the shader works on that. But does anyone have code or experience with spatially selective blur, or clues as to the optimal way to do this?

Hi Ian,

What I have done before is just have two textures, a blurred and a non-blurred one, and blend between them with a simple shader. I'm not near this code now but can send it later in the week if you want. Blending between two textures drawn into offscreen windows should work just as well.

I guess it's a poor man's version of selective blurring (or is it? My math isn't good enough to figure that out, but some simulations could), but perceptually it worked fine, which is all I needed. Let me know if you want it.
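From memory it boils down to something like this (a sketch only, not the actual code; 'sharpTex', 'blurTex' and 'mixShader' are placeholder names):

```matlab
% Two-texture blend sketch (placeholders, not the actual code):
% sharpTex and blurTex hold the same stimulus, one pre-blurred;
% mixShader outputs the blurred texel with an alpha that grows with
% distance from the gaze position, so alpha blending reveals the
% sharp image underneath near fixation.
Screen('BlendFunction', win, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA');
Screen('DrawTexture', win, sharpTex);

% Tell the shader where the gaze currently is:
glUseProgram(mixShader);
glUniform2f(glGetUniformLocation(mixShader, 'windowPos'), gazeX, gazeY);
glUseProgram(0);

% Draw the blurred image on top, with the mixing shader attached:
Screen('DrawTexture', win, blurTex, [], [], [], [], [], [], mixShader);
Screen('Flip', win);
```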



My problem is that I am using a dynamically animated procedural checkerboard in anaglyph stereomode. I could pre-render the animation, but we tweak some of the animation parameters for each subject before a scan, so we'd need to pre-render every possible variant. Share the code whenever you have time, that would still be appreciated, and I'll explore the offscreen-window buffering and blending of the dynamic animation (hoping performance is OK; this is in a clinical hospital scanner with all sorts of absurd rules on their computers, as usual...)

So the offscreen rendering trick works:

function blurtest()

Screen('Preference', 'SkipSyncTests', 2);
Screen('Preference', 'VisualDebugLevel', 3);

try
	PsychImaging('PrepareConfiguration');
	PsychImaging('AddTask', 'General', 'FloatingPoint32BitIfPossible');
	[win, winRect] = PsychImaging('OpenWindow', max(Screen('Screens')), [0 0 0]);
	Screen('BlendFunction', win, 'GL_SRC_ALPHA', 'GL_ONE_MINUS_SRC_ALPHA');

	% Offscreen window; specialFlags == 1 requests a GL_TEXTURE_2D
	% texture, which is what mipmapping needs:
	[owin, ~] = Screen('OpenOffscreenWindow', win, [0.5 0.5 0.5], [], [], 1);

	gazeRadius = 15;

	% Load & create a GLSL shader for adaptive mipmap lookup:
	shaderpath = [PsychtoolboxRoot 'PsychDemos'];
	shader = LoadGLSLProgramFromFiles([shaderpath filesep 'BlurredMipmapDemoShader'], 1);
	% Bind texture unit 0 to the shader as input source for the mip-mapped image:
	glUseProgram(shader);
	glUniform1i(glGetUniformLocation(shader, 'Image'), 0);
	glUseProgram(0);

	% Build a procedural texture:
	texture = CreateProceduralColorGrating(win, 1024, 1024, [1 0 0], [0 1 0], 512);

	vbl = Screen('Flip', win);
	startT = vbl;
	phase = 0;

	% Repeat until keypress or timeout of 10 minutes:
	while ((vbl - startT) < 600) && ~KbCheck
		% Get current "center of gaze" as simulated by the current
		% mouse cursor position:
		[gazeX, gazeY] = GetMouse(win);
		% Flip y-axis direction -- shader has origin bottom-left, not
		% top-left as GetMouse():
		gazeY = winRect(4) - gazeY;

		% Draw the procedural texture into the offscreen window:
		Screen('DrawTexture', owin, texture, [], [], ...
			45, [], [], [0.5 0.5 0.5 1], [], [], ...
			[phase, 0.03, 0.5, 0.1]);

		% Draw the offscreen texture and apply GLSL 'shader' during
		% drawing. filterMode 3 automatically generates the mipmap
		% image resolution pyramid, then uses the 'shader' to
		% adaptively look up lowpass-filtered pixels from the
		% different blur levels of the image pyramid, to simulate the
		% typical foveation effect of decreasing resolution with
		% increasing distance to the center of fixation. We pass gaze
		% center (gazeX, gazeY) and radius of foveation 'gazeRadius'
		% as auxParameters. auxParameters must always have a multiple
		% of 4 components, so we add a zero value to pad to length 4:
		Screen('DrawTexture', win, owin, [], [], [], 3, [], [], shader, [], [gazeX, gazeY, gazeRadius, 0]);

		% Show it at next video refresh:
		vbl = Screen('Flip', win);

		phase = phase - 5;
	end

	% Close down everything:
	sca;
catch %#ok<*CTCH>
	% Error handling, emergency shutdown:
	sca;
	psychrethrow(psychlasterror);
end
Now the mipmap shader is not ideal when applied on top of a moving grating, as the filtering causes quite a lot of aliasing, but it is better than nothing.

I next need to invert the shader itself as I want to blur the fixation point and fall off to no blur to the edges…

This is probably the best approach you can take, in terms of efficiency vs. control without large effort: drawing into an offscreen window, then drawing that with a suitable shader applied.

We have a bunch of demos wrt. filtering/blurring:

BlurredMipMapDemo.m is the most computationally efficient, by use of the image pyramid. You can also try the path using CreateResolutionPyramid() for manual - and less efficient - mipmap generation, but with a custom shader that gives more control over the kind of filtering/downsampling used for building the image resolution pyramid. Maybe that would allow better quality in your case? That shader uses a small 3x3 Gaussian instead of the (I think) commonly implemented box filter. Note that the shader also converts images to grayscale for no sane reason - easily fixed in the shader, and fixed in an upcoming PTB release. I don't know why I thought that was a good idea, but I was young and foolish.
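From memory, the manual path looks roughly like this (a sketch only; check BlurredMipMapDemo.m and the CreateResolutionPyramid.m help text for the exact parameters and the right filterMode):

```matlab
% Manual mipmap pyramid generation with a custom downsampling shader
% (sketch from memory; see BlurredMipMapDemo.m for the real thing):
mipmapshader = LoadGLSLProgramFromFiles('MipMapDownsamplingShader', 1);

% Each frame, after updating the texture or offscreen window 'tex':
CreateResolutionPyramid(tex, mipmapshader, 0);

% Then draw using the prebuilt pyramid, i.e. a mipmap filterMode that
% does not auto-regenerate the pyramid (filterMode 2, if I remember
% right), with the foveation 'shader' attached:
Screen('DrawTexture', win, tex, [], [], [], 2, [], [], shader, [], auxParams);
```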

There's also ImagingVideoCaptureDemo.m, which shows a method that may give even more control, but is far more brute-force and less computationally efficient: it convolves at each pixel location with a controllable box filter. The demo could do with a rewrite, e.g., it assumes a hard-coded 640x480 video stream, from when that was all the rage with webcams…

Diederick's method is the one from GazeContingentDemo.m. We also have a whole bunch of demos for more or less efficient GPU-accelerated convolution, cf. ConvolutionKernelTest.m - not updated in at least a decade.

But my assumption would be that the methods in BlurredMipMapDemo.m, or possibly even more so in ImagingVideoCaptureDemo.m, might be the most efficient ones, especially given that you want blur in the foveated area and less blur in the periphery. ImagingVideoCaptureDemo.m runs a box filter kernel of per-pixel controllable width at every output pixel, which gives good control, but is more expensive for large areas with large blur. It might be favorable for a small foveated area with large blur and a large periphery with little to no blur…

Looking at all these demos, they could benefit from some rewrite/cleanup - they may not always show best practices anymore - but who has unpaid time for that?


Thanks Mario! I rewrote the basic mipmap shader to blur from the centre out[1], and for both the automatic processing and the CreateResolutionPyramid override, the aliasing artifacts are quite apparent with a moving procedural checkerboard. I will have a go at modifying that additional shader to do more complex filtering to see if that helps, though mipmapping is pretty much voodoo to me, so I am just twiddling parameters to see what I get.

I’ll also look at ImagingVideoCaptureDemo to see what it does.

The amazing part is that I am using this with anaglyph stereo, so I have to (1) draw offscreen, (2) run the pyramid, then (3) draw onscreen applying the shader, for BOTH the left and the right channels - and no frames are dropped. Great stuff! The imaging pipeline rocks!

Your numerous demos, even if they haven't been refreshed to current coding styles, remain totally amazing, so thank you for all of these examples for us to work with!

If I get something decent I will submit it as a new demo via a pull request…

[1] Currently using very basic code that falls off linearly; I will probably need to do something better, like smoothstep over the LOD values from most to least blurred.
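For the record, the inversion I have in mind looks something like this (a rough sketch only, not the actual BlurredMipmapDemoShader: the uniform names are invented, and explicit LOD lookup in a fragment shader needs the GL_ARB_shader_texture_lod extension):

```glsl
#extension GL_ARB_shader_texture_lod : enable

uniform sampler2D Image;
uniform vec2  gazePosition; /* Gaze center in pixels (invented name). */
uniform float gazeRadius;   /* Radius over which the blur falls off. */
uniform float maxLod;       /* Coarsest mipmap level, e.g. log2(1024) = 10. */

void main()
{
    /* Distance of this fragment from the center of gaze, in pixels: */
    float dist = distance(gl_FragCoord.xy, gazePosition);

    /* Inverted foveation: maximum blur (highest LOD) at the gaze
     * position, smoothstep falloff to sharp at gazeRadius and beyond: */
    float lod = maxLod * (1.0 - smoothstep(0.0, gazeRadius, dist));

    gl_FragColor = texture2DLod(Image, gl_TexCoord[0].st, lod);
}
```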

Yep. If you want to understand it, search for "OpenGL mipmapping" or "mipmap textures", or look into the free edition of the OpenGL Programming Guide - old, but still relevant. The general technique is that of "image resolution pyramids", as it would be called in standard digital image processing literature.

It is efficient for large-scale filtering, blurring, or anti-aliasing. But the built-in filtering for creating the downsampled pyramid, while efficient, is vendor- and driver-specific if I remember correctly, so results may vary across GPU model/vendor, OS, and driver (version) - although my guess would be that they mostly use box filters, or maybe a very simple Gaussian. So CreateResolutionPyramid.m and its associated filter shader give more control. The provided shader is just a proof of concept, the simplest thing I could mash together to show the principle - certainly room for improvement…

Do so before you spend too much time on optimizing the other one. It just does the filtering/blurring for each drawn output pixel in the shader, and might give more control, or at least easier-to-understand control. The downside is that it is computationally far less efficient for large image areas to blur/filter, or for large filter widths (i.e., strong lowpass filtering) - it is the brute-force approach. But given that you want the strongest blur in a small (foveated?) area and not much blur in the much larger periphery, and given a modern GPU, it may be totally feasible, as modern GPUs are very fast.
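Conceptually, the per-pixel brute-force filter is just this (a simplified sketch, not the actual shader from the demo; a fixed maximum half-width of 8 pixels is assumed so the loops can unroll on old GLSL):

```glsl
/* Simplified sketch of a per-pixel variable-width box filter, in the
 * spirit of ImagingVideoCaptureDemo.m (not the actual demo shader).
 * 'radius' could itself be computed per fragment, e.g. from distance
 * to a gaze position, to make the blur spatially selective. */
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect Image;
uniform float radius;   /* Half-width of the box filter in pixels. */

void main()
{
    vec3  acc = vec3(0.0);
    float n   = 0.0;

    /* Fixed loop bounds, restricted at runtime by 'radius': */
    for (float dy = -8.0; dy <= 8.0; dy += 1.0) {
        for (float dx = -8.0; dx <= 8.0; dx += 1.0) {
            if (abs(dx) <= radius && abs(dy) <= radius) {
                acc += texture2DRect(Image, gl_TexCoord[0].st + vec2(dx, dy)).rgb;
                n   += 1.0;
            }
        }
    }

    /* Output the box-filtered average: */
    gl_FragColor = vec4(acc / n, 1.0);
}
```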

I should mention we also have a demo of an approach based on FFT + point-wise multiplication in frequency space + inverse FFT, which for large-area filtering and large filter sizes is the most efficient one - a well-known technique in image processing. Unfortunately it depends on GPUMat, a great third-party open-source toolbox which hasn't seen maintenance in a decade and was Matlab-only - no Octave - and, worse, it requires CUDA and a CUDA-capable NVidia GPU. In an ideal world that project would be alive and well and would have been ported from CUDA to cross-vendor OpenCL, but it wasn't. GPUMat is essentially an open-source implementation of what Matlab has in its GPU-accelerated Parallel Computing Toolbox, done long before Mathworks had that idea - except Matlab's toolbox is also locked to proprietary NVidia for no good technical reason, and we never found the time or funding to implement an interface between PTB and Matlab's toolbox. Anyhow, it might still work on some old NVidia GPUs with proprietary drivers under Linux (high performance) and Windows (medium performance), and it impressively shows the advantage of brain over brawn when applied to digital image filtering. Cf. the GPU*.m demos.

Sounds good.



Hi Ian,

What I am wondering is: instead of relying on some shader to downsample and create the mipmap levels for you, can't you somehow procedurally generate each mipmap level? If it's the filtering that creates the aliasing, this should work well.

The naive way (and the only way I can think of) to do that would be to have a bunch of offscreen textures, each half as big as the previous; have an array of procedural textures with size parameters halved each time; and then DrawTexture each level with the right procedural texture, also halving the size-related input parameters to that call. You then have all the resolution levels and can upload them to the right mipmap level à la the calls in CreateResolutionPyramid (details of the "upload" left out, as I don't know them off the top of my head).
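Roughly like this, I imagine (a hypothetical sketch with made-up parameters; the final step of attaching each level as a mipmap level of one texture is the part I'd crib from CreateResolutionPyramid.m):

```matlab
% Hypothetical per-level procedural rendering (parameters made up):
sz = 1024;
nLevels = log2(sz) + 1;
phase = 0;
for i = 1:nLevels
    levelSize = sz / 2^(i - 1);
    % Offscreen window and procedural grating sized for this level:
    level(i) = Screen('OpenOffscreenWindow', win, [0.5 0.5 0.5], ...
                      [0 0 levelSize levelSize], [], 1);
    tex(i) = CreateProceduralColorGrating(win, levelSize, levelSize, ...
                                          [1 0 0], [0 1 0], levelSize / 2);
    % Draw with size-related parameters scaled per level, e.g. spatial
    % frequency doubled each halving so the pattern stays registered:
    Screen('DrawTexture', level(i), tex(i), [], [], 45, [], [], ...
           [0.5 0.5 0.5 1], [], [], [phase, 0.03 * 2^(i - 1), 0.5, 0.1]);
end
% Each level(i) now holds one resolution level of the pyramid.
```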

As for my blur-mixing shader, I understand you've already figured it out. What may still be nice about it, and what you can adapt, is that it doesn't use linear distance from some gaze point but a raised-cosine window. So what I do is draw the normal image first, and then the blurred image on top with a shader attached that sets the alpha of the latter image.

vertex shader:

/*
 * Standard passthrough vertex shader.
 * (c) 2009 by Mario Kleiner, licensed under MIT license.
 */
void main()
{
    /* Apply standard geometric transformations to patch: */
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

fragment shader:

/*
 * File: ImageMixingShader.frag.txt
 * Apply a window function to the texture such that how the input
 * texture is drawn depends on distance to a user-provided position.
 * (c) 2009 by Mario Kleiner, licensed under MIT license.
 */
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect Image;
uniform vec2 windowPos;

/* Declare the function that computes the window; it will be linked in
 * from a separate translation unit. */
float windowFunc(vec2 pos, vec2 windowPos);

void main()
{
    /* Read RGB color values from first texture coordinate set: */
    vec3 rgb = texture2DRect(Image, gl_TexCoord[0].st).rgb;

    float alpha = windowFunc(gl_TexCoord[0].st, windowPos);

    /* ... and output to framebuffer, with proper alpha set for this
     * pixel: */
    gl_FragColor = vec4(rgb, alpha);
}

example window function 1, a circle:

uniform float diam;
float windowFunc(vec2 pos, vec2 windowPos)
{
    float dist = distance(pos, windowPos);
    float fw   = fwidth(dist);

    return mix(0.0, 1.0, smoothstep(diam - fw, diam, dist));
}

example window function 2, Gaussian:

uniform float gaussSD;
float windowFunc(vec2 pos, vec2 windowPos)
{
    vec2  dist   = pos - windowPos;
    float distSq = dot(dist, dist);

    return 1.0 - exp(-distSq / (2.0 * gaussSD * gaussSD));
}

example window function 3, raised cosine:

uniform float diam;
uniform float edgeWidth;
#define M_PI 3.1415926535897932384626433832795

float windowFunc(vec2 pos, vec2 windowPos)
{
    float dist = distance(pos, windowPos);

    return cos(clamp(1.0 - (dist - diam) / edgeWidth, 0.0, 1.0) * M_PI) / 2.0 + 0.5;
}

to load and link this all together:

windowShader = LoadShaderFromFile('windowMixinRaisedCosine.frag', [], 1);
shader       = LoadGLSLProgramFromFiles('ImageMixingShader', 2, windowShader);
% Then use this shader in the Screen('DrawTexture') call when drawing
% the second (blurred) image.

You achieve blurred-at-fixation, non-blurred-outside by flipping which image you draw first and which you draw with the shader attached.

As said, I think you're on a better path, but the raised cosine, or whatever window function you want, could be adapted to a LOD manipulation instead.


Wrt. Diederick's mixing approach, ImageMixingTutorial.m also comes to mind, which shows/visualizes exactly how the trick is done behind the scenes. And the simpler SimpleImageMixingDemo.m sits somewhere between the good old GazeContingentDemo.m approach, which is shaderless and runs on ancient hardware, and Dee's full shader approach. So many tradeoffs between complexity/generality/quality/performance to choose from…


Thank you both for the ample pointers, I’ll have a play this week when I get time…