Hi Ian,
What I am wondering is: instead of relying on some shader to downsample and create the mipmap levels for you, can't you procedurally generate each mipmap level directly? If it is the filtering that creates the aliasing, this should work well.
The naive way (and the only way I can think of) to do that would be to have a set of offscreen textures, each half as big as the previous one, and an array of procedural textures with their size parameters halved each time. You then DrawTexture each level with the matching procedural texture, also halving the size-related input parameters to that call. That gives you all the resolution levels, which you can upload to the right mipmap level, à la the calls in CreateResolutionPyramid
(details of the "upload" step left out, as I don't know them off the top of my head).
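The loop structure I mean can be sketched in Python (just a sketch: make_procedural_texture is a placeholder for whatever draws one level, and the actual offscreen-texture / upload calls are Psychtoolbox-specific and omitted):

```python
def mip_level_sizes(base_size):
    """Sizes of a full mipmap pyramid for a square texture of base_size."""
    sizes = []
    s = base_size
    while s >= 1:
        sizes.append(s)
        s //= 2
    return sizes

def render_pyramid(base_size, make_procedural_texture):
    """Render each mip level directly from the procedural definition,
    halving every size-related parameter per level, instead of
    filtering level 0 down."""
    levels = []
    for level, size in enumerate(mip_level_sizes(base_size)):
        # All size-dependent parameters shrink by 1 / 2**level.
        levels.append(make_procedural_texture(size, scale=1.0 / (1 << level)))
    return levels
```

Each entry of the returned list would then be uploaded as the corresponding mipmap level.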
As for my blur-mixing shader, I understand you've already figured it out. What may still be nice about it, and what you could adapt, is that it doesn't use linear distance from some gaze point but a raised-cosine window. So what I do is draw the normal image first, and then the blurred image on top, with a shader attached that sets the alpha of the latter image.
Shaders:
vertex shader:
/*
* Standard passthrough vertex shader.
*
* (c) 2009 by Mario Kleiner, licensed under MIT license.
*/
void main()
{
    /* Apply standard geometric transformations to patch: */
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
fragment shader:
/*
* File: ImageMixingShader.frag.txt
* Apply window function to texture such that how the input texture is
* drawn depends on distance to user provided position.
*
* (c) 2009 by Mario Kleiner, licensed under MIT license.
*/
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect Image;
uniform vec2 windowPos;
/* declare function that computes the window, will be linked in from a
* separate translation unit. */
float windowFunc(vec2,vec2);
void main()
{
    /* Read RGB color values from first texture coordinate set: */
    vec3 rgb = texture2DRect(Image, gl_TexCoord[0].st).rgb;
    float alpha = windowFunc(gl_TexCoord[0].st, windowPos);

    /* ... and output to framebuffer, with proper alpha set for this
     * pixel: */
    gl_FragColor = vec4(rgb, alpha);
}
example window function 1, a circle:
uniform float diam;
float windowFunc(vec2 pos, vec2 windowPos)
{
    float dist = distance(pos, windowPos);
    float fw = fwidth(dist);
    return smoothstep(diam - fw, diam, dist);
}
example window function 2, Gaussian:
uniform float gaussSD;
float windowFunc(vec2 pos, vec2 windowPos)
{
    vec2 dist = pos - windowPos;
    float distSq = dot(dist, dist);
    return 1.0 - exp(-distSq / (2.0 * gaussSD * gaussSD));
}
example window function 3, raised cosine:
uniform float diam;
uniform float edgeWidth;
#define M_PI 3.1415926535897932384626433832795
float windowFunc(vec2 pos, vec2 windowPos)
{
    float dist = distance(pos, windowPos);
    return cos(clamp(1.0 - (dist - diam) / edgeWidth, 0.0, 1.0) * M_PI) / 2.0 + 0.5;
}
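To sanity-check these window functions outside GLSL, here are minimal Python translations (the function and parameter names are mine, a scalar distance replaces the vec2 arithmetic, and the circle version takes the anti-aliasing width fw as an explicit argument since fwidth() only exists in the shader). Note they all return 0 at the window center and ramp up to 1 outside, i.e. the blurred overlay is transparent at fixation:

```python
import math

def window_circle(dist, diam, fw):
    # Hard circle with an anti-aliasing ramp of width fw (cf. fwidth()),
    # using the smoothstep formula: 3t^2 - 2t^3 on the clamped ramp.
    t = min(max((dist - (diam - fw)) / fw, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def window_gauss(dist_sq, gauss_sd):
    # Inverted Gaussian: 0 at the center, approaching 1 far away.
    return 1.0 - math.exp(-dist_sq / (2.0 * gauss_sd * gauss_sd))

def window_raised_cosine(dist, diam, edge_width):
    # 0 for dist <= diam, raised-cosine ramp over edge_width, then 1.
    t = min(max(1.0 - (dist - diam) / edge_width, 0.0), 1.0)
    return math.cos(t * math.pi) / 2.0 + 0.5
```

For the raised cosine, alpha is 0 at dist == diam, 0.5 halfway through the edge, and 1 at dist == diam + edge_width.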
to load and link this all together:
windowShader = LoadShaderFromFile('windowMixinRaisedCosine.frag',[],1);
shader = LoadGLSLProgramFromFiles('ImageMixingShader',2,windowShader);
% then use this shader in the Screen('DrawTexture') call when drawing the second (blurred) image
You get the reverse effect (blurred at fixation, sharp outside) by flipping which image you draw first and which one gets the shader attached.
As I said, I think you're on a better path, but the raised cosine (or whatever window function you want) could be adapted to drive an LOD manipulation instead.