What you need is a relation between each test and the code it exercises.
This can be computed statically, but it is hard, and I don't know of any tools that do it. Worse, if you had such a tool, the static analysis to decide what code a test affects might actually take longer than just running the test itself, so this doesn't look like an attractive direction.
However, this relation can be computed dynamically using a test coverage tool. For each individual test,
run that test (we presume it passes) and collect the test coverage data. We now have
a set of pairs (t_i, c_i) meaning "test t_i has coverage c_i".
When the code base changes, these test coverage sets can be consulted. A simple check: for any pair (t_i, c_i), if c_i mentions file F and F has changed, you need to run t_i again.
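In code, that check is little more than a set intersection. Here is a minimal sketch in Java; the coverage map and the set of changed files are hypothetical inputs you would fill in from your coverage tool's output and your version-control diff:

```java
import java.util.*;

// File-level test selection: rerun every test whose coverage mentions a changed file.
// coverageByTest and changedFiles are hypothetical inputs; how you populate them
// depends on your coverage tool and your version control system.
public class TestSelector {

    /** Returns the tests whose coverage set contains at least one changed file. */
    static Set<String> testsToRerun(Map<String, Set<String>> coverageByTest,
                                    Set<String> changedFiles) {
        Set<String> toRerun = new TreeSet<>();
        for (Map.Entry<String, Set<String>> entry : coverageByTest.entrySet()) {
            Set<String> coveredFiles = entry.getValue();      // c_i
            for (String file : changedFiles) {
                if (coveredFiles.contains(file)) {            // c_i mentions F, and F changed
                    toRerun.add(entry.getKey());              // so t_i must run again
                    break;
                }
            }
        }
        return toRerun;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverageByTest = Map.of(
            "OrderTest",    Set.of("Order.java", "Money.java"),
            "CustomerTest", Set.of("Customer.java"));
        Set<String> changedFiles = Set.of("Money.java");
        System.out.println(testsToRerun(coverageByTest, changedFiles)); // [OrderTest]
    }
}
```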
Given test coverage data in almost any representation, this check is easy in the abstract. In practice it is harder than it looks, because most test coverage tools don't document how they store their coverage data.
Ideally, what you want is a finer check: if c_i mentions any programmatic element of F, and that programmatic element has changed, you need to run t_i again.
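One way to get the element-level version of the check, sketched under the assumption that you can already produce a map from method signature to a hash of the method body (for example with a Java parser; that extraction step is not shown):

```java
import java.util.*;

// Detect which methods changed between two snapshots of the code base by
// comparing per-method body hashes. The maps are assumed inputs: fully
// qualified method signature -> hash of the method body text.
public class MethodDiff {

    /** Methods whose body hash differs, or that were added or removed. */
    static Set<String> changedMethods(Map<String, String> before,
                                      Map<String, String> after) {
        Set<String> changed = new TreeSet<>();
        for (String sig : before.keySet()) {
            if (!Objects.equals(before.get(sig), after.get(sig))) {
                changed.add(sig);   // modified or deleted
            }
        }
        for (String sig : after.keySet()) {
            if (!before.containsKey(sig)) {
                changed.add(sig);   // newly added
            }
        }
        return changed;
    }
}
```

The result of `changedMethods` feeds into the same selection logic as the file-level sketch above, just keyed by method instead of by file.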
Our SD Test Coverage tools provide this capability for Java and C#, at the level of methods.
You need to set up some scripting to associate the actual test, however you have packaged it, with collected test coverage vectors. As a practical matter this tends to be pretty easy.
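For example, one way to do that scripting in Java is to run each test class on its own and write its coverage to a per-test file, so each coverage vector is tied to the test that produced it. This sketch assumes a Maven build with the Surefire and JaCoCo plugins; the -Dtest and -Djacoco.destFile properties are what those plugins accept in a default setup, but verify them against your own build configuration:

```java
import java.io.IOException;
import java.util.List;

// Run each test class separately and collect its coverage into its own file,
// so that test name and coverage vector stay paired. Assumes Maven with the
// Surefire and JaCoCo plugins on the path; adapt the command to your build.
public class PerTestCoverage {

    public static void main(String[] args) throws IOException, InterruptedException {
        List<String> testClasses = List.of("OrderTest", "CustomerTest"); // however you enumerate tests
        for (String testClass : testClasses) {
            ProcessBuilder pb = new ProcessBuilder(
                "mvn", "test",
                "-Dtest=" + testClass,
                "-Djacoco.destFile=target/coverage/" + testClass + ".exec");
            pb.inheritIO();
            int exitCode = pb.start().waitFor();
            if (exitCode != 0) {
                System.err.println("Test failed (coverage may be incomplete): " + testClass);
            }
        }
    }
}
```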
Unit tests are called that because they test one unit separately from the others. So generally a class should be tested in such a way that no other class's implementation will affect the test. It also means that when you change anything in a class, you should run only the unit tests for that class, because other tests should use mocks instead of the real class. – Galyak
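A minimal JUnit 5 / Mockito sketch of the point in that comment; the class and method names are made up for illustration. The collaborator is mocked, so changes to its real implementation cannot affect this test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// OrderService is the class under test; PriceCatalog is mocked, so only
// OrderService's own code can make this test pass or fail.
class OrderServiceTest {

    interface PriceCatalog {
        int priceOf(String item);
    }

    static class OrderService {
        private final PriceCatalog catalog;
        OrderService(PriceCatalog catalog) { this.catalog = catalog; }
        int total(String item, int quantity) { return catalog.priceOf(item) * quantity; }
    }

    @Test
    void totalUsesOnlyTheMockedCatalog() {
        PriceCatalog catalog = mock(PriceCatalog.class);
        when(catalog.priceOf("widget")).thenReturn(5);

        OrderService service = new OrderService(catalog);

        assertEquals(15, service.total("widget", 3));
    }
}
```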