One almost never discusses the best case. It is simply not that interesting. An algorithm can always be modified to have the smallest theoretically possible best case, which is O(max(size-of-input, size-of-output)), simply by recognising one particular input and producing output precomputed for that input. In the benchmarking business this is known as cheating.
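As an illustration of that kind of cheating (the wrapped routine and the hard-coded input below are just placeholders), here is a minimal sketch for sorting:

```python
# "Cheating" for a good best case: recognise one specific input and return
# a precomputed answer for it, falling back to a real sort otherwise.
# MAGIC_INPUT and PRECOMPUTED_OUTPUT are arbitrary illustrative values.

MAGIC_INPUT = [3, 1, 2]
PRECOMPUTED_OUTPUT = [1, 2, 3]

def cheating_sort(items):
    if items == MAGIC_INPUT:            # best case: O(n) recognition only
        return list(PRECOMPUTED_OUTPUT)
    return sorted(items)                # every other input pays the full price
```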
The term bound here has the same meaning as in the rest of mathematics: a lower bound is a value that is no larger than any element of a given set, and an upper bound is a value that is no smaller than any element of it.
For example, when discussing the set of sorting algorithms, we can prove that no comparison-based sorting algorithm has better asymptotic efficiency than O(n log n) in the worst case (and also in the average case); the classic argument is that a comparison sort must distinguish between all n! possible orderings of its input, and a binary decision tree needs depth of at least log₂(n!), which grows as n log n. Thus O(n log n) is a lower bound on the efficiency of all possible comparison-based sorting algorithms in the worst case (and also in the average case). O(n) is another lower bound, but a weaker one: O(n log n) is a better lower bound than O(n). And it just happens that O(n log n) is the tight lower bound, because there are in fact sorting algorithms with this complexity, such as merge sort.
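As a quick numerical sanity check on that argument (an illustration, not a proof), one can compare log₂(n!) with n·log₂(n) and watch them grow in lockstep:

```python
# log2(n!) is the minimum depth of a binary decision tree that can
# distinguish all n! input orderings; it tracks n*log2(n) closely.
import math

for n in (10, 100, 1000, 10000):
    print(f"n={n:>5}  log2(n!)={math.log2(math.factorial(n)):>11.0f}  "
          f"n*log2(n)={n * math.log2(n):>11.0f}")
```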
There is no finite upper bound on the complexity of the set of sorting algorithms because an arbitrarily bad sorting algorithm can be created.
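To make that concrete, here is a minimal sketch of such an algorithm: it pads a correct sort with deliberately wasted work, and by cranking up the exponent k you can make its complexity as bad as you like (the function name and padding scheme are of course just illustrative):

```python
# A correct but arbitrarily bad sorting algorithm: pad a real sort with
# Θ(n^k) deliberately wasted steps. Increasing k makes the complexity grow
# without limit, so no finite upper bound covers the whole class.

def arbitrarily_bad_sort(items, k=3):
    n = len(items)
    for _ in range(n ** k):   # wasted work, contributes nothing to the result
        pass
    return sorted(items)      # the output is still correctly sorted

print(arbitrarily_bad_sort([3, 1, 2]))  # [1, 2, 3], just very slowly
```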
On the other hand, we can discuss a particular sorting algorithm, and prove that it never exceeds a certain number of operations, which would be the upper bound on its complexity. For example, the quicksort algorithm has an upper bound of O(n²). It also has an upper bound of O(n³). It does not have an upper bound of O(n log n) because there are inputs that make it exceed this number of operations. The bound of O(n²) is tight, because it is attained for some inputs.
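The quadratic worst case is easy to trigger. Below is a minimal sketch of the textbook quicksort variant (first element as pivot) with instrumentation that counts partition comparisons; feeding it already-sorted input attains the O(n²) bound:

```python
# A minimal quicksort with first-element pivot, instrumented to count the
# comparisons made while partitioning. On an already-sorted input every
# partition is maximally unbalanced, so the count reaches n(n-1)/2 and
# the O(n²) upper bound is attained.

def quicksort_with_count(items):
    comparisons = 0

    def qs(seq):
        nonlocal comparisons
        if len(seq) <= 1:
            return seq
        pivot, rest = seq[0], seq[1:]
        comparisons += len(rest)                 # one comparison per element
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)

    return qs(list(items)), comparisons

for n in (100, 200, 400):
    _, count = quicksort_with_count(range(n))    # sorted input: the worst case
    print(f"n={n:>4}  comparisons={count:>6}  n(n-1)/2={n*(n-1)//2:>6}")
```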
One could, in theory, discuss a lower bound in the same sense as above, but this is almost never done (it would amount to discussing the complexity of the best case, which we are normally not interested in).
We can also discuss the difficulty of a particular problem, and place both upper and lower bounds on it: how efficient is the most efficient algorithm (in the worst or average case) that solves it? (We don't discuss the best case because the answer is not interesting; see above.) For the comparison-based sorting problem, we know that both the tight upper bound and the tight lower bound are O(n log n), because O(n log n) sorting algorithms do exist and it can be shown that no better algorithm is possible. This is not a very interesting case, because we have found the most efficient algorithm possible.

For the knapsack problem, for example, the situation is more interesting. We only know that an upper bound of O(2ⁿ) exists, because an algorithm with this complexity trivially exists (the brute-force one, which tries every subset of the items). We cannot prove that this bound is tight. Neither can we provide any good lower bound: we suspect that no algorithm solves the problem with polynomial complexity, but cannot prove it.
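For concreteness, here is a minimal sketch of that trivial O(2ⁿ) upper bound in action (the weights, values, and capacity are made-up example data):

```python
# Brute-force 0/1 knapsack: enumerate all 2ⁿ subsets of the n items and
# keep the most valuable one that fits. This trivially establishes the
# O(2ⁿ) upper bound on the problem's difficulty.

def knapsack_bruteforce(weights, values, capacity):
    n = len(weights)
    best = 0
    for mask in range(1 << n):            # all 2ⁿ subsets, one bit per item
        w = v = 0
        for i in range(n):
            if mask & (1 << i):
                w += weights[i]
                v += values[i]
        if w <= capacity:
            best = max(best, v)
    return best

print(knapsack_bruteforce([2, 3, 4, 5], [3, 4, 5, 6], capacity=5))  # -> 7
```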