I'm working on an OpenGL implementation of the Oculus Rift distortion shader. The shader works by taking the input texture coordinate (into a texture containing a previously rendered scene), transforming it using the distortion coefficients, and then sampling the scene with the transformed coordinate to determine the fragment color.
I'd hoped to improve performance by pre-computing the distortion and storing it in a second texture, but the result is actually slower than the direct computation.
The direct calculation version looks basically like this:
float distortionFactor(vec2 point) {
    // Radial distortion: scale the coordinate by a polynomial in the
    // squared distance from the lens center
    float rSq = lengthSquared(point);
    float factor = (K[0] + K[1] * rSq + K[2] * rSq * rSq + K[3] * rSq * rSq * rSq);
    return factor;
}

void main()
{
    // Distort in lens space, then map back to screen and texture space
    vec2 distorted = vRiftTexCoord * distortionFactor(vRiftTexCoord);
    vec2 screenCentered = lensToScreen(distorted);
    vec2 texCoord = screenToTexture(screenCentered);
    // Anything that falls outside the source texture gets a solid color
    vec2 clamped = clamp(texCoord, ZERO, ONE);
    if (!all(equal(texCoord, clamped))) {
        vFragColor = vec4(0.5, 0.0, 0.0, 1.0);
        return;
    }
    vFragColor = texture(Scene, texCoord);
}
where K is a vec4 that's passed in as a uniform.
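For reference, the helpers boil down to roughly this (simplified sketches; LensOffset and Aspect here are just placeholder uniform names):

uniform float LensOffset; // placeholder: horizontal lens offset
uniform float Aspect;     // placeholder: aspect-ratio correction factor

float lengthSquared(vec2 point) {
    // squared distance from the lens center
    return dot(point, point);
}

vec2 lensToScreen(vec2 lens) {
    // shift by the lens offset and undo the aspect-ratio scaling
    return vec2(lens.x + LensOffset, lens.y * Aspect);
}

vec2 screenToTexture(vec2 screen) {
    // map [-1, 1] screen space into [0, 1] texture space
    return (screen + 1.0) / 2.0;
}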
On the other hand, the displacement map lookup looks like this:
void main() {
    vec2 texCoord = vTexCoord;
    // Mirror the lookup coordinate horizontally when requested
    if (Mirror) {
        texCoord.x = 1.0 - texCoord.x;
    }
    // Fetch the precomputed (distorted) scene texture coordinate
    texCoord = texture(OffsetMap, texCoord).rg;
    // Discard fragments whose target falls outside the source texture
    vec2 clamped = clamp(texCoord, ZERO, ONE);
    if (!all(equal(texCoord, clamped))) {
        discard;
    }
    // Undo the mirroring before sampling the scene
    if (Mirror) {
        texCoord.x = 1.0 - texCoord.x;
    }
    FragColor = texture(Scene, texCoord);
}
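Conceptually, each texel of OffsetMap just stores the scene texture coordinate that the direct calculation would produce at that position, so it's filled once with something equivalent to this (a sketch reusing the helpers from the direct version, not my exact generation code):

// One-time pass that bakes the distortion into OffsetMap: each texel
// ends up holding the scene texture coordinate computed by the direct math
void main() {
    vec2 distorted = vRiftTexCoord * distortionFactor(vRiftTexCoord);
    vec2 texCoord = screenToTexture(lensToScreen(distorted));
    FragColor = vec4(texCoord, 0.0, 1.0);
}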
There are a couple of other operations for correcting the aspect ratio and accounting for the lens offset (roughly what the lensToScreen/screenToTexture sketches above cover), but they're pretty simple. Is it really reasonable to expect the direct calculation to outperform a simple texture lookup?