(For the purposes of this answer, I'm assuming you're talking about the new A/B testing framework we launched last week.)
Right now, you can't guarantee mutually exclusive experiment groups with the new A/B testing framework, because each experiment assigns users independently. If you specify that 10% of your users are in experiment A and 10% are in experiment B, then about 10% of the users in experiment B will also be in experiment A.
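You can see why with a quick simulation. This sketch assumes the framework buckets users by hashing the user ID with a per-experiment salt, which is a common approach, not necessarily what our implementation does internally:

```python
import hashlib
from collections import Counter

def in_experiment(user_id, experiment, fraction):
    """Hash the user ID with a per-experiment salt; the user is enrolled
    if the hash lands in the experiment's traffic fraction (assumed scheme)."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < fraction * 10_000

def variant(user_id, experiment, variants):
    """Pick a variant for an enrolled user with a second salted hash."""
    digest = hashlib.md5(f"{experiment}/variant:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

users = [str(i) for i in range(100_000)]
in_a = {u for u in users if in_experiment(u, "experiment_a", 0.10)}
in_b = {u for u in users if in_experiment(u, "experiment_b", 0.10)}

overlap = in_a & in_b
print(f"{len(overlap) / len(in_b):.1%} of experiment B is also in A")  # ~10%

# How the overlapping users split across B's variants -- roughly 50/50:
print(Counter(variant(u, "experiment_b", ["control", "treatment"])
              for u in overlap))
```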
The good news is that, as the tally in the sketch above suggests, those overlapping users from experiment A should be evenly distributed among your variants in experiment B, so their effect usually washes out. But if you suspect that being in experiment A will push those users toward one variant over another (and thereby skew your results), you have two options:
1. Run your A/B tests serially instead of in parallel: wait until you've stopped your first experiment before starting your second.
2. If it makes sense, combine them into a single multi-variant experiment. For example, say experiment A tests a faster sign-in flow, and experiment B tests deferring sign-in until later in the process. You could create a multi-variant experiment like this:
+---------------------+---------------+----------------+
| Group               | Sign-in speed | Sign-in timing |
+---------------------+---------------+----------------+
| Control             | (default)     | (default)      |
| Speedy              | Speedy        | (default)      |
| Deferred            | (default)     | Deferred       |
| Speedy and Deferred | Speedy        | Deferred       |
+---------------------+---------------+----------------+
The benefit here is that you get direct insight into whether being in both experiments really does affect your users in the way you suspect: the Speedy and Deferred group isolates exactly that interaction.
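For concreteness, here's a rough sketch of how that combined experiment might be wired up. The assign_group helper and the group names are illustrative, not part of the framework's API:

```python
import hashlib

# The four groups from the table above; names are illustrative.
GROUPS = ["control", "speedy", "deferred", "speedy_and_deferred"]

def assign_group(user_id, experiment="signin_flow", fraction=0.10):
    """Enroll a user with one salted hash, then pick one of the four
    mutually exclusive groups with a second hash (hypothetical scheme)."""
    enroll = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    if int(enroll, 16) % 10_000 >= fraction * 10_000:
        return None  # not enrolled in the experiment
    pick = hashlib.md5(f"{experiment}/group:{user_id}".encode()).hexdigest()
    return GROUPS[int(pick, 16) % len(GROUPS)]

group = assign_group("user-12345")
use_speedy_signin = group in ("speedy", "speedy_and_deferred")
defer_signin = group in ("deferred", "speedy_and_deferred")
```

Because group assignment is a single draw, the four cells are mutually exclusive by construction, which is exactly the property the two parallel experiments lacked. One sizing note: a single 10% experiment split four ways gives each group only 2.5% of traffic, so you may want to enroll a larger fraction to keep per-group sample sizes comparable to your original experiments.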