So I recently watched a video on Z-fighting and learned of a simple way to take care of it--mostly. The solution given was to skew the projection so that closer objects get more of the depth range for more accurate depth testing (since floats are only so precise), while farther-away objects are crammed into a small slice of it. Now, I'm quite new to OpenGL and graphics programming (just working through it slowly), and I haven't actually made anything complex enough for this to be a problem for me, but I'll probably need to know this in the future. Anyway, the new problem posed by that solution is even worse Z-fighting in the distance (e.g. the mountains in Skyrim, Rust, etc.). Is there a better workaround that doesn't involve graphical compromises, even if it costs performance? Speaking hypothetically (since I'm not totally cozy with the OpenGL pipeline yet), could Z-values in a program be cast to doubles just before being mapped into the depth range for depth testing?
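To show what I mean by "closer objects get more of the depth range", here's a little sketch I put together (plain C; the near/far planes of 0.1 and 1000 are just made-up example values) of the standard glFrustum/gluPerspective depth mapping:

```c
/* Sketch of how a standard OpenGL perspective projection distributes
 * depth. The near/far planes are hypothetical example values, not
 * taken from any particular engine. */
#include <stdio.h>

/* NDC z for an object at eye-space distance z (z > 0), using the
 * classic glFrustum/gluPerspective depth mapping. */
static double ndc_depth(double z, double n, double f)
{
    return (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * z);
}

int main(void)
{
    const double n = 0.1, f = 1000.0; /* hypothetical near/far planes */
    double z;
    for (z = 0.1; z <= 1000.0; z *= 10.0)
        printf("eye z = %8.1f  ->  NDC z = %+.7f\n", z, ndc_depth(z, n, f));
    return 0;
}
```

If I've done the math right, an object just 1 unit from the camera already lands at NDC z of roughly 0.8, so about 90% of the [-1, 1] depth range is spent on the first unit of the scene--which is the "skew" the video was describing.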
Let me clarify. Think of Skyrim. Notice how the mountains sometimes flicker? When a scene is rendered with OpenGL, all the geometry gets mapped into a small normalized coordinate space with Z-values from -1.0 to 1.0. Then depth testing is performed on everything--trees, snow, mountains, animals, houses, you name it--so that things aren't drawn when they're covered by something else. However, floating-point numbers only have so much precision, so squeezing hundreds of objects into that tiny range inevitably leaves some of them with the exact same Z-coordinates, and those objects flicker together on screen in a phenomenon known as "Z-fighting". I'm asking whether every object's depth (Z-) coordinates could be cast to doubles, so that they have sufficient precision (worth the negligible extra memory used for a negligible period of time) to draw objects in the right order, without them flickering into each other.
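To make the precision claim concrete, here's the same sketch extended to quantize window-space depth the way a 24-bit depth buffer would (24-bit fixed point is, as I understand it, a common default; the 900 / 900.05 distances and the use of floor instead of proper rounding are just my simplifications):

```c
/* Sketch: quantize window-space depth to a 24-bit integer depth buffer
 * and check whether two distant surfaces still get distinct values.
 * near/far are the same made-up example values as above. */
#include <stdio.h>
#include <math.h>

/* NDC z in [-1, 1], remapped to window-space depth in [0, 1]. */
static double window_depth(double z, double n, double f)
{
    double ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * z);
    return 0.5 * ndc + 0.5;
}

int main(void)
{
    const double n = 0.1, f = 1000.0;      /* hypothetical near/far planes */
    const double ticks = 16777215.0;       /* 2^24 - 1 levels in a 24-bit buffer */
    double a = window_depth(900.0, n, f);  /* e.g. a distant mountain face */
    double b = window_depth(900.05, n, f); /* a surface 5 cm behind it */
    printf("surface A: depth %.9f -> bucket %.0f\n", a, floor(a * ticks));
    printf("surface B: depth %.9f -> bucket %.0f\n", b, floor(b * ticks));
    return 0;
}
```

Both surfaces land in the same depth bucket, which as far as I can tell is exactly the flicker I'm describing. It's also why I'm unsure whether casting to doubles on my end would even help--presumably the values still get converted to whatever format the depth buffer actually uses before the test happens.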