No, I don't think there is any generally applicable way to determine the minimum requirements that does not involve testing on some specified reference hardware.
You may be able to find some of the limitations by using virtual machines of some kind - it is easier to change the parameters of a VM than to swap physical hardware. But the interaction between host and guest can introduce artifacts that influence your results.
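As a small aid when testing that way - assuming the program runs on the JVM, and with a class name made up purely for illustration - you can at least log what each test configuration actually exposes to the process:

```java
// Prints the resources the JVM actually sees. These are the values that
// change when you vary the VM configuration (vCPU count, -Xmx heap limit, ...).
public class ReportResources {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("CPU cores visible : " + rt.availableProcessors());
        System.out.println("Max heap (bytes)  : " + rt.maxMemory());
    }
}
```

Running this under each VM or `-Xmx` setting is a sanity check that the test environment really has the constraints you intended, before you start measuring the application itself.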
It is also difficult to define the criteria for "acceptable performance" in general without knowing a lot about use cases.
Many programs will use more resources if they are available, but can also make do with less.
For example, consider a program using a thread pool whose size is based on the number of CPU cores. When running on a CPU with more cores, more work can be done in parallel, but at the same time the overhead of thread creation, synchronisation and aggregation of results increases. The effects are non-linear in the number of CPUs and depend a lot on the actual program and data. Similarly, the effects of decreasing available memory range from throwing OutOfMemoryErrors for some inputs (but possibly not for others) to just running GC a bit more frequently - and the effects of that depend on the GC strategy, ranging from noticeable freezes to just a bit more CPU load.
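To make the thread-pool example concrete, here is a minimal Java sketch (the class and the dummy task are invented for illustration) of a pool sized from the core count the machine reports:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoreScaledPool {
    public static void main(String[] args) throws Exception {
        // Pool size follows whatever the current machine (or VM) reports.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // One dummy task per core: on a bigger machine more tasks run in
        // parallel, but there are also more results to collect afterwards.
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            results.add(pool.submit(() -> {
                long sum = 0;
                for (long n = 0; n < 10_000_000L; n++) sum += n;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> f : results) total += f.get(); // aggregation step
        System.out.println("cores=" + cores + " total=" + total);
        pool.shutdown();
    }
}
```

Run the same class on machines (or VMs) with different core counts and the thread count, the degree of parallelism and the aggregation overhead all change with it - which is exactly why the "minimum requirement" is hard to pin down without measuring.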
All that is without even considering that programs don't usually live in isolation - they run on an operating system in parallel with other tasks that also consume resources.