A hash looks more or less random, but it's deterministic -- that is, a particular input always produces the same hash value.
Building on that, when you want to insert an item into a hash table, you start by generating the hash of its key. You then use that hash to index into the table, and insert your item at that spot. In a typical case, one part of the item is treated as the key, and some more information is associated with it (e.g., you might look people up by name, and with each name store information about that person).
Later, when you want to look up the information associated with a particular key (in this case, the person's name), you hash that key again to find the right spot in the hash table to look for that information.
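The insert-then-look-up idea can be sketched in a few lines of Python. This is a deliberately minimal illustration with made-up names and a tiny table; collision handling is ignored here (it's discussed next):

```python
# Minimal sketch: hash the key, reduce it to a table index, and store/read
# the (key, value) pair at that slot. No collision handling yet.

TABLE_SIZE = 8
table = [None] * TABLE_SIZE

def index_for(key):
    # Deterministic: the same key always lands on the same slot.
    return hash(key) % TABLE_SIZE

def insert(key, value):
    table[index_for(key)] = (key, value)

def lookup(key):
    entry = table[index_for(key)]
    if entry is not None and entry[0] == key:
        return entry[1]
    return None

insert("Coffin", {"phone": "555-0100"})
print(lookup("Coffin"))
```

Note that `lookup` re-derives the same index `insert` used, which is exactly why determinism of the hash matters.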
That does skip over a few crucial details, such as how you handle two or more inputs that happen to produce the same hash value (which is unavoidable unless you place some limits on the allowable inputs). There are various ways to handle this, such as just looking through the table sequentially to find the next free spot, re-hashing to find another spot in the table, or building something like a linked list of items that hashed to the same value.
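The last of those options (a chain of items per slot) is easy to show in a short sketch. The tiny table size here is deliberate, to force collisions; all names and values are invented:

```python
# Collision handling via chaining: each slot holds a list of all items
# whose keys hashed there.

TABLE_SIZE = 4            # tiny on purpose, so collisions are guaranteed
table = [[] for _ in range(TABLE_SIZE)]

def insert(key, value):
    bucket = table[hash(key) % TABLE_SIZE]
    for i, (k, _) in enumerate(bucket):
        if k == key:              # key already present: overwrite in place
            bucket[i] = (key, value)
            return
    bucket.append((key, value))   # otherwise chain a new entry onto the list

def lookup(key):
    for k, v in table[hash(key) % TABLE_SIZE]:
        if k == key:
            return v
    return None

for name in ["Coffin", "Demming", "Smith", "Jones", "Brown"]:
    insert(name, len(name))
print(lookup("Coffin"), lookup("Nobody"))   # -> 6 None
```

Lookup now has to walk the chain at one slot, but only that slot's chain, so it stays fast as long as the chains stay short.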
In any case, it should probably be added that there are use cases for which a hash table does end up a bit like you've surmised. Just for one example, when you want to see all the contents of a hash table (rather than just looking up one item at a time), you really do normally scan through the whole table. Even if your hash table is nearly empty, you normally have to scan from one end to the other, looking for every entry that's actually in use. When you do that, you get items in an order that appears fairly random.
This points to another shortcoming of hash tables -- you generally need a precise match with a single original record for them to work well. For example, let's consider some queries based on my last name. Assuming you indexed on the entire last name, it would be trivial to find "Coffin" -- but at least with most normal hash functions, searching for something starting with "Cof" would be dramatically slower, as would finding all the names between "Coffin" and "Demming".
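To make that concrete, here's a small comparison using invented data: a prefix query against a hash table (a Python `dict`) can't use the hash at all and must scan every entry, while a sorted list can jump straight to the right region with binary search:

```python
import bisect

people = {"Coffin": 1, "Cofield": 2, "Demming": 3, "Adams": 4}

# Hash table: a prefix query degenerates into a full scan, O(n).
starts_with_cof = [name for name in people if name.startswith("Cof")]

# Sorted list: binary search locates the range in O(log n).
names = sorted(people)
lo = bisect.bisect_left(names, "Cof")
hi = bisect.bisect_left(names, "Cog")   # first prefix lexically after "Cof"
print(sorted(starts_with_cof), names[lo:hi])
```

Both approaches find the same names; the difference is that the sorted structure never looks at "Adams" or "Demming" at all.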
As such, you're sort of half correct -- while hash tables are typically very fast for a few specific cases (primarily searching for an exact match), the approach you've outlined (scanning through the entire table to find the data) is nearly the only choice available for most other kinds of queries, so if you want to support anything but exact matches, a different data structure may be preferable.
That deals primarily with the most typical uses/types of hash tables. It's possible to create hash functions that at least bend (if not outright break) those rules to varying degrees. In most cases these involve some compromises though. For example, given geographic information as input, you could create a hash (of sorts) by simply truncating the coordinates (or one of them, anyway) to get the same information at lower precision. This organizes the information to at least a degree, so things that are close together end up with similar hash values, making it easier to find neighboring data. This, however, will often lead to more collisions (e.g., you'll get a lot of items hashing to the same value for the downtown of a large city).
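The coordinate-truncation idea looks something like this in Python. The coordinates and the chosen precision are purely illustrative:

```python
# Truncation "hash": drop precision so that nearby points deliberately
# collide, grouping neighbors into the same cell.

def geo_hash(lat, lon, precision=1):
    # Keep `precision` decimal places; everything inside the same
    # ~0.1-degree cell gets the same hash value.
    scale = 10 ** precision
    return (int(lat * scale), int(lon * scale))

downtown_a = geo_hash(47.6062, -122.3321)   # two points in the same
downtown_b = geo_hash(47.6097, -122.3331)   # downtown area...
suburb     = geo_hash(47.4502, -122.3088)   # ...and one farther away

print(downtown_a == downtown_b, downtown_a == suburb)   # -> True False
```

The collisions here are the feature, not a bug -- but as noted above, a dense area will pile a great many items into one cell, so whatever collision handling you use had better cope with long chains.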
Looking specifically at universal hashing, this adds one extra element to the puzzle: instead of a single hash function, you have a family of hash functions from which you choose "randomly". When universal hashing is used to implement a hash table (which isn't always -- it's also often used for things like message authentication codes), you typically do not choose a hash function randomly every time you insert an item. Rather, you choose one function, and continue to use it until you encounter some fixed number of collisions. Then you randomly choose another function from the family.
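One classic family (in the Carter-Wegman style) is h(x) = ((a*x + b) mod p) mod m, with p a prime larger than any key and a, b chosen at random -- picking new values of a and b is what "choosing another function from the family" means. A sketch, with arbitrary constants:

```python
import random

P = 2_147_483_647            # a Mersenne prime, larger than any key we use
M = 16                       # table size

def pick_hash():
    # Drawing fresh (a, b) selects one member of the family at random.
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % M

h = pick_hash()              # use this one function...
print(h(42) == h(42))        # ...deterministically, while it's in use
h2 = pick_hash()             # after too many collisions, re-draw
```

While a given function is in use it behaves like any ordinary hash; the randomness only enters when you switch functions.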
For example, in cuckoo hashing (probably the best-known scheme built on universal hashing), you hash your key to find a location. If it's already occupied, you "kick out" the existing item there, and re-hash it to find an alternate location for it. It gets inserted there. If that slot is already occupied, it "kicks out" the item already in that slot, and the pattern repeats.
When you're searching for an item, you hash it and look at that location. If that's empty, you immediately know your item isn't present. If that slot's occupied, but doesn't contain your item, you re-hash to find the alternate location. Continue this pattern for as many hash functions as you're using (normally only two in the case of cuckoo hashing, but you could obviously use an otherwise similar algorithm with more functions).
It is possible for this to fail--to enter an infinite loop, or (almost equivalently) to build a chain that exceeds some pre-set length. In this case, you start over, re-building the table with a different pair of hash functions.
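The whole scheme -- two hash functions, kick-out insertion, a bounded chain length, and a rebuild with fresh functions on failure -- fits in a compact sketch. The table size, kick limit, and hashing details here are arbitrary choices, not any particular library's:

```python
import random

class CuckooTable:
    def __init__(self, size=16):
        self.size = size
        self._new_functions()
        self.slots = [None] * size           # each slot: (key, value) or None

    def _new_functions(self):
        # Two independent hash functions, re-randomized on every rebuild.
        s1, s2 = random.randrange(1 << 30), random.randrange(1 << 30)
        self.h = (lambda k: hash((k, s1)) % self.size,
                  lambda k: hash((k, s2)) % self.size)

    def insert(self, key, value, max_kicks=32):
        for h in self.h:                     # update in place if present
            i = h(key)
            if self.slots[i] is not None and self.slots[i][0] == key:
                self.slots[i] = (key, value)
                return
        item = (key, value)
        for _ in range(max_kicks):
            for h in self.h:                 # try either candidate slot
                i = h(item[0])
                if self.slots[i] is None:
                    self.slots[i] = item
                    return
            i = self.h[0](item[0])           # both full: kick out an occupant
            item, self.slots[i] = self.slots[i], item
        self._rehash(item)                   # chain too long: start over

    def _rehash(self, pending):
        old = [e for e in self.slots if e is not None] + [pending]
        self._new_functions()                # a different pair of functions
        self.slots = [None] * self.size
        for k, v in old:
            self.insert(k, v)

    def lookup(self, key):
        for h in self.h:                     # at most two slots to check
            e = self.slots[h(key)]
            if e is not None and e[0] == key:
                return e[1]
        return None                          # neither slot holds it
```

Note the payoff on the lookup side: no matter how the insertions went, a search examines at most two slots before giving a definitive answer.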
When using open addressing (of which cuckoo hashing is one form), deletion tends to be non-trivial. In particular, we have to ensure that when we remove an item at one location, it wasn't the beginning of a chain of items that collided at that location. In many cases, it's most efficient to simply give each slot a third state: if it's never been occupied, it's empty. If it's currently occupied, it's in use. If an item has been deleted there, it's deleted. This way, when you're searching for an item and you encounter a "deleted" slot, you continue searching (whereas if you get to a slot that's never been used, you can stop searching immediately -- your item clearly was never inserted).
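Here's that three-state idea sketched with linear probing (simpler to show than cuckoo hashing, but the "tombstone" logic is the same). The sentinels, names, and table size are illustrative:

```python
EMPTY, DELETED = object(), object()   # sentinels for the two non-occupied states
SIZE = 8
table = [EMPTY] * SIZE

def _probe(key):
    # Linear probing: start at the hashed slot, then walk forward, wrapping.
    i = hash(key) % SIZE
    for _ in range(SIZE):
        yield i
        i = (i + 1) % SIZE

def insert(key, value):
    for i in _probe(key):
        slot = table[i]
        if slot is EMPTY or slot is DELETED or slot[0] == key:
            table[i] = (key, value)   # reuse empty, tombstoned, or own slot
            return

def lookup(key):
    for i in _probe(key):
        slot = table[i]
        if slot is EMPTY:             # never used: the key was never inserted
            return None
        if slot is not DELETED and slot[0] == key:
            return slot[1]            # DELETED slots are skipped, not stopped at
    return None

def delete(key):
    for i in _probe(key):
        slot = table[i]
        if slot is EMPTY:
            return
        if slot is not DELETED and slot[0] == key:
            table[i] = DELETED        # tombstone -- NOT reset to EMPTY
            return
```

The crucial line is the tombstone in `delete`: if it reset the slot to `EMPTY` instead, a later `lookup` for any item that had probed past that slot would stop early and wrongly report the item missing.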