Some people try to avoid `NULL` values, claiming the logic would be confusing. I am not one of them. `NULL` values are just fine for columns with no data. They are certainly the cheapest way to store "empty" columns - for disk space as well as performance (the main effect being smaller tables and indices).
Once you understand the nature of `NULL` values, there is no reason to avoid them. Postgres offers a variety of functions to deal with NULLs: `coalesce()`, `nullif()`, `concat()`, `concat_ws()`, ...
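A minimal sketch of how these behave (values are made up for illustration):

```sql
-- coalesce() returns the first non-NULL argument
SELECT coalesce(NULL, NULL, 'fallback');        -- → 'fallback'

-- nullif() returns NULL when both arguments are equal
SELECT nullif('n/a', 'n/a');                    -- → NULL

-- concat_ws() takes the separator first and skips NULL arguments entirely
SELECT concat_ws(', ', 'Smith', NULL, 'John');  -- → 'Smith, John'
```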
Generally, as far as performance is concerned, a `NOT NULL` constraint beats a `CHECK` constraint, and both beat triggers by a long shot. But even simple triggers are cheap, and the cost of a `NOT NULL` constraint is next to nothing. Also, all of these only affect write operations, while in most applications read operations dominate.
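To make the comparison concrete, here is a sketch of the three options for rejecting NULLs in a column, cheapest first (table and column names are hypothetical):

```sql
-- 1. NOT NULL constraint: cheapest, checked inline during the write
CREATE TABLE t1 (val text NOT NULL);

-- 2. CHECK constraint: same effect, slightly more overhead per write
CREATE TABLE t2 (val text CHECK (val IS NOT NULL));

-- 3. A trigger enforcing the same rule would be the most expensive option,
--    though still cheap in absolute terms for a simple check.
```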
The most relevant impact on performance (sub-optimal indices and queries aside) is therefore the size of tables and indices or, more importantly, the number of tuples per data page. Bigger tuples lead to slower performance for most use cases: the number of data pages that have to be read to satisfy a query increases accordingly, and available cache memory is saturated earlier.
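One quick way to estimate tuples per data page is from the planner statistics in `pg_class` (assuming reasonably fresh statistics, e.g. after `ANALYZE`; `my_table` is a placeholder):

```sql
SELECT relname
     , pg_size_pretty(pg_relation_size(oid))      AS table_size
     , reltuples::bigint                          AS row_estimate
     , round(reltuples / greatest(relpages, 1))   AS tuples_per_page
FROM   pg_class
WHERE  relname = 'my_table';
```

More tuples per page means fewer page reads per query, which is where smaller (NULL-friendly) tuples pay off.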
I don't have a benchmark ready, but it's best to test for your particular environment anyway. These are just simple rules of thumb. Reality is a lot more complex.