I've studied all the shader docs I could find and I'm still almost clueless. I ran into a problem where bigger images needed a higher pixelation amplitude to look the same as smaller ones. That makes sense to me, but it sounded like it could get complicated later on. I experimented for a day and found that setting the amplification value to the quotient (is that the word?) of the texture's longer dimension (x or y) divided by 100 fixes it. I really don't know why; I had the idea and it made sense at the moment for a reason I don't remember. I'll post exactly what equations are being used.
GDScript:
pixel_amplification_value = max(texture.get_size().x, texture.get_size().y) / 100.0
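If I had to guess at why this works (after-the-fact reasoning, so take it with a grain of salt): floor(UV * amp) / amp snaps the UVs into amp steps per axis, so along the longer axis a 500×300 texture (amp = 5) gets blocks of 500 / 5 = 100 source pixels, and a 1000×600 texture (amp = 10) still gets blocks of 1000 / 10 = 100 source pixels. Same block size in source pixels, so the pixelation looks the same at any texture size.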
Godot shader language (amp is a uniform that receives pixel_amplification_value from GDScript):
vec2 pixel_uv = (floor(UV * amp) / amp) * ratio; // snap UV into an amp-by-amp grid, then apply the aspect ratio
COLOR = texture(TEXTURE, pixel_uv); // sample at the snapped coordinate
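For context, here's the whole shader written out the way I'd reconstruct it. This is a minimal sketch assuming a canvas_item shader and that both values arrive as uniforms (the defaults are just placeholders):

shader_type canvas_item;

// Number of UV steps per axis; set from GDScript.
uniform float amp = 5.0;
// Texture aspect ratio as a vec2; set from GDScript.
uniform vec2 ratio = vec2(1.0);

void fragment() {
    // Snap UV into an amp-by-amp grid, then apply the aspect ratio.
    vec2 pixel_uv = (floor(UV * amp) / amp) * ratio;
    COLOR = texture(TEXTURE, pixel_uv);
}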
ratio is just the texture's aspect ratio, computed in GDScript; it works how you'd probably imagine.
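In case it helps, here's roughly how the GDScript side wires both values into the shader. This is a sketch assuming Godot 4 (set_shader_parameter), a Sprite2D with a ShaderMaterial, and my guess at the ratio formula; the function name _update_pixelation is made up:

GDScript:
func _update_pixelation() -> void:
    var mat := material as ShaderMaterial
    var size := texture.get_size()
    # Longer dimension divided by 100 gives the amplification value.
    var pixel_amplification_value := max(size.x, size.y) / 100.0
    # My guess at the aspect ratio: each axis relative to the longer one, as a vec2.
    var ratio := size / max(size.x, size.y)
    mat.set_shader_parameter("amp", pixel_amplification_value)
    mat.set_shader_parameter("ratio", ratio)

Call it once the texture is assigned (and again whenever it changes) so the block size stays locked to the new dimensions.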
I need to keep a notepad; I have these strokes of genius and then forget how they work.