Not all races are equally bad.
The worst kind of race you can get is reading partial results. This is what Herb Sutter referred to as 'seeing pink elephants': your program can observe an intermediate state that violates all of its invariants.
The typical example here is a concurrent non-atomic write. If one thread reads from a variable that is concurrently written by another thread, the reader might obtain complete garbage. Not only can you not tell whether the reader will see the old value or the new value, it might actually see a value that was never written by anyone. These kinds of data races need to be avoided at all costs, as it's pretty much impossible to reason about the observed values in any way. C++, for example, sends you straight to undefined-behavior-land in this case.
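To make this concrete, here is a minimal C++ sketch of the problematic pattern (the struct and the thread bodies are illustrative, not taken from any real code base). `Point` is meant to satisfy the invariant `x == y`, but because the shared variable is not atomic, the concurrent read and write form a data race: the program has undefined behavior, and the reader may observe a torn value with `x != y` that no writer ever produced.

```cpp
// Deliberately broken code: a plain (non-atomic) variable written by one
// thread and read concurrently by another. In C++ this is a data race and
// therefore undefined behavior.
#include <cstdint>
#include <cstdio>
#include <thread>

struct Point { std::int64_t x = 0, y = 0; };  // intended invariant: x == y

Point shared;  // plain, non-atomic shared state

int main() {
    std::thread writer([] {
        for (std::int64_t i = 1; i <= 1'000'000; ++i) {
            shared = Point{i, i};  // non-atomic write: racy
        }
    });
    std::thread reader([] {
        for (int i = 0; i < 1'000'000; ++i) {
            Point p = shared;  // non-atomic read: racy, may be torn
            if (p.x != p.y) {  // a 'pink elephant': a state nobody ever wrote
                std::printf("pink elephant: x=%lld y=%lld\n",
                            static_cast<long long>(p.x),
                            static_cast<long long>(p.y));
            }
        }
    });
    writer.join();
    reader.join();
}
```

The check in the reader is only there to illustrate what 'seeing pink elephants' could look like; since the program has undefined behavior, the compiler is free to do anything at all, including eliding that check entirely.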
A less critical kind of race is when all data accesses are atomic, so you know that readers will only ever observe fully written values, but the order is unspecified. You don't know whether the value you read is actually the newest, or whether two values you read together were ever in memory at the same time. It is often useful to accept this for performance reasons. A prominent example is distributed applications: synchronizing data over the network is particularly slow, so it is often accepted that certain nodes may have an outdated view of the world but are still able to perform work based on that state. Think of a search engine cache: it's better to give a fast result based on yesterday's cache than to give a slow result or no result at all.
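As a minimal sketch of this weaker situation, consider two counters that are both `std::atomic` and updated with relaxed memory ordering (the variable names are made up for illustration). Every load returns a value that some thread actually wrote, so there is no garbage, but the reader may combine a fresh value of one counter with a stale value of the other.

```cpp
// Atomic accesses rule out torn reads, but relaxed ordering gives no
// guarantee that the two loads observe a consistent point in time.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> config_version{0};
std::atomic<int> cache_entries{0};

int main() {
    std::thread writer([] {
        for (int i = 1; i <= 100'000; ++i) {
            config_version.store(i, std::memory_order_relaxed);
            cache_entries.store(i, std::memory_order_relaxed);
        }
    });
    std::thread reader([] {
        for (int i = 0; i < 100'000; ++i) {
            // Each load is atomic, so both values were genuinely written...
            int v = config_version.load(std::memory_order_relaxed);
            int e = cache_entries.load(std::memory_order_relaxed);
            // ...but they may come from different points in time.
            if (v != e) {
                std::printf("mismatched snapshot: version=%d entries=%d\n", v, e);
            }
        }
    });
    writer.join();
    reader.join();
}
```

Whether such an inconsistent snapshot is acceptable is a design decision, exactly as in the distributed case: you trade a coherent view for not paying the cost of stronger synchronization.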
Similar examples occur in non-distributed environments. Consider a lock-free queue: you usually don't care about the exact order in which items end up in the queue. All producers 'race' to insert items at the back of the queue, and similarly all consumers 'race' to consume the item at the front. However, as long as you can guarantee that no items are accidentally lost or corrupted, this reduced level of control is acceptable.
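The following sketch captures just the 'producers race to the back' aspect rather than implementing a full lock-free queue (the slot-claiming scheme and all names are assumptions for illustration): each producer atomically claims the next free index with `fetch_add`, so every item lands in exactly one slot and nothing is lost, but the interleaving, and therefore the final order of the items, is unspecified.

```cpp
// Producers race to claim slots at the 'back'. fetch_add hands out each
// index exactly once, so no two producers write the same slot and no item
// is lost; only the resulting order is left to chance.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kItemsPerProducer = 1000;
constexpr int kProducers = 4;

std::vector<int> slots(kProducers * kItemsPerProducer);
std::atomic<int> next_slot{0};

int main() {
    std::vector<std::thread> producers;
    for (int p = 0; p < kProducers; ++p) {
        producers.emplace_back([p] {
            for (int i = 0; i < kItemsPerProducer; ++i) {
                int slot = next_slot.fetch_add(1, std::memory_order_relaxed);
                slots[slot] = p * kItemsPerProducer + i;  // exclusive owner of this slot
            }
        });
    }
    for (auto& t : producers) t.join();
    // Every item ends up in exactly one slot; only the order is unknown.
    std::printf("filled %d slots\n", next_slot.load());
}
```

A real multi-producer queue additionally has to publish each slot to concurrent consumers and deal with wrap-around, but the essential trade-off is the same: completeness is guaranteed, ordering is not.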