TL;DR
If your data needs to be correct, you need to enforce the constraints, and if you need to enforce the constraints, letting the database do it for you will be faster than anything else (and likely more correct too).
Example
Attempting to enforce something like key uniqueness at the application level can be done either correctly or quickly, but not both. For example, let's say you want to insert a new row. A naive application-level algorithm could look something like this:
1. Search the table for the (key fields of the) new row.
2. If not found, insert the new row.
And that would actually work in a single-client, single-threaded environment. However, in a concurrent environment, another client could insert that same key value between your steps 1 and 2, and presto: you have a duplicate in your data without even knowing it!
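To make the race concrete, here is a minimal sketch of that naive algorithm in Python, using an in-memory SQLite table (the `users` table and `email` column are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Deliberately NO unique constraint: the application tries to enforce it.
conn.execute("CREATE TABLE users (email TEXT, name TEXT)")

def insert_if_absent(conn, email, name):
    # Step 1: search the table for the key.
    found = conn.execute(
        "SELECT 1 FROM users WHERE email = ?", (email,)
    ).fetchone()
    # <-- race window: another client can insert the same email right here
    # Step 2: if not found, insert the new row.
    if found is None:
        conn.execute("INSERT INTO users (email, name) VALUES (?, ?)", (email, name))
        return True
    return False

insert_if_absent(conn, "a@example.com", "Alice")   # inserted
insert_if_absent(conn, "a@example.com", "Alicia")  # skipped, but only because we're single-threaded
```

Single-threaded, this behaves correctly; with two concurrent clients, both can pass step 1 before either performs step 2, and the duplicate slips in.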
To prevent such a race condition, you'd have to use some form of locking, and since you are inserting a new row, there is no row to lock yet - you'll likely end up locking the entire table, destroying scalability in the process.
OTOH, if you let the DBMS do it for you, it can enforce the constraint in a specialized way (typically via the unique index itself) with far less locking, a way that has been tested and re-tested for correctness in all the tricky concurrent edge cases, and whose performance has been optimized over the years the DBMS has been on the market.
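For contrast, a sketch of the database-enforced version with the same illustrative names, again using SQLite: declare the key once, and the DBMS rejects duplicates atomically, which Python surfaces as `sqlite3.IntegrityError`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The key is declared once; the DBMS enforces it atomically on every insert.
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES ('a@example.com', 'Alice')")

rejected = False
try:
    # No check-then-insert window: the insert itself either succeeds or fails.
    conn.execute("INSERT INTO users VALUES ('a@example.com', 'Alicia')")
except sqlite3.IntegrityError:
    rejected = True
```

The application no longer needs its own search step; it just attempts the insert and handles the constraint violation.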
Similar concerns apply to foreign keys.
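The same pattern can be sketched for foreign keys, again with SQLite and invented table names (note that SQLite requires enabling foreign-key enforcement per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in per connection
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE orders ("
    " id INTEGER PRIMARY KEY,"
    " customer_id INTEGER NOT NULL REFERENCES customers(id))"
)
conn.execute("INSERT INTO customers (id) VALUES (1)")
conn.execute("INSERT INTO orders VALUES (1, 1)")  # fine: customer 1 exists

orphan_rejected = False
try:
    conn.execute("INSERT INTO orders VALUES (2, 99)")  # no such customer
except sqlite3.IntegrityError:
    orphan_rejected = True
```

Without the declared foreign key, that orphan order would silently go in, and the application would have to replicate the same racy search-then-insert dance across two tables.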
So yeah, if your application is the only one accessing the database (e.g. when using an embedded database), you may get away with application-level enforcement, although why would you if the DBMS can do it for you?
But in a concurrent environment, leave keys and foreign keys to the database - you'll have plenty of work anyway, enforcing your custom "business logic" (that is not directly "declarable" in the DBMS) in a way that is both correct and performant...
That being said, feel free to perform any application-level "pre-checks" that benefit your user experience. But do them in addition to database-level constraints, not instead of them.
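One way to combine the two, sketched in the same illustrative SQLite setup (the `signup` function and its messages are invented): the pre-check produces a fast, friendly message, while the constraint remains the actual guarantee.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")

def signup(conn, email):
    # Pre-check: purely for user experience (early, friendly feedback).
    if conn.execute("SELECT 1 FROM users WHERE email = ?", (email,)).fetchone():
        return "that email is already taken"
    try:
        conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return "welcome!"
    except sqlite3.IntegrityError:
        # Backstop: the constraint catches the race the pre-check can miss.
        return "that email is already taken"
```

Even if a concurrent client sneaks in between the pre-check and the insert, the `IntegrityError` branch keeps the data correct.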