First of all, I want to assure you that I'm aware rehashing is a sensitive topic. Still, I'd like to hear some of your opinions on what approach you would take here.
I'm building a distributed application in which nodes remotely create entities identified by a UUID. Eventually, all entities should be gathered at a dedicated drain node, which stores them under these UUIDs.
Now I want to create additional identifiers that are handier for human users. Base64-encoding the UUIDs would still produce 22-character IDs, which is not appropriate for human use. So I need something like what URL-shortening services do. Applying a bijective function won't help, because it doesn't reduce the amount of information. Of course, I'm aware that I need to lose information in order to shorten the ID, and that any reduction of a hash's information increases the probability of collision. I'm stuck on what the most appropriate way is to reduce information in order to create shorter IDs for humans.
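For reference, this is where the 22 characters come from (a quick check in Python):

```python
import base64
import uuid

u = uuid.uuid4()
# A UUID is 16 bytes; URL-safe Base64 of 16 bytes is 24 characters,
# or 22 after stripping the "==" padding.
short = base64.urlsafe_b64encode(u.bytes).rstrip(b"=").decode("ascii")
print(len(short))  # 22 -- still too long for humans
```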
Here are some prerequisites: I will provide the ability to map {UUID, shortened ID} via my data storage. I'd still prefer a non-centralized solution. And I will probably never need more than about a million IDs (~2^20) in total.
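To get a feel for how much information I can afford to lose, here is a back-of-the-envelope birthday-bound calculation for my ~2^20 IDs (the helper name and the bit lengths are just examples of mine):

```python
import math

def collision_probability(n_ids: int, bits: int) -> float:
    """Birthday-bound approximation: probability of at least one
    collision among n_ids values drawn uniformly from 2**bits."""
    return 1.0 - math.exp(-n_ids * (n_ids - 1) / 2.0 ** (bits + 1))

n = 2 ** 20  # my stated upper bound (~1 million IDs)
for bits in (40, 48, 56, 64):
    print(f"{bits} bits -> p(collision) ~ {collision_probability(n, bits):.2e}")
```

So roughly 64 bits (11 Base64 characters) would make collisions negligible at my scale, while 40 bits would already collide with probability around 0.4.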
Here are the thoughts I came up with so far:
- Auto-incremented IDs: If I used some kind of auto-incremented ID, I could convert it to an obfuscated string and pass that around. This would be the easiest approach, and as long as there are few keys around, the keys would not be very long. However, I'd have to introduce a centralized entity, which I don't really want.
- Shorten the UUID: I could just take some of the bits of the original 128-bit UUID (see the first sketch after this list). I should then at least take the version of the UUID into account. Or is there anything else wrong with this?
- Rehashing the UUID: I could apply a second hashing algorithm to my initial UUID and store the mapping (see the second sketch after this list).
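To make the truncation idea concrete, here is a minimal sketch, assuming version-4 (random) UUIDs; shorten_uuid and the 6-byte length are just my examples:

```python
import base64
import uuid

def shorten_uuid(u: uuid.UUID, nbytes: int = 6) -> str:
    """Keep the first nbytes of the UUID and Base32-encode them.
    In a version-4 UUID the fixed version/variant bits live in
    bytes 6 and 8, so the first 6 bytes are fully random: that
    keeps 48 bits of entropy in a 10-character ID."""
    raw = u.bytes[:nbytes]
    return base64.b32encode(raw).decode("ascii").rstrip("=").lower()

print(shorten_uuid(uuid.uuid4()))  # e.g. a 10-character lowercase string
```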
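And the rehashing variant; as far as I can tell, for v4 UUIDs it is equivalent to plain truncation, but it also makes truncation safe for non-random UUID versions (e.g. v1 with timestamp and MAC address), because the hash mixes all 128 bits first:

```python
import hashlib
import uuid

def rehash_uuid(u: uuid.UUID, nbytes: int = 6) -> str:
    """Run the UUID through a second hash, then truncate.
    The digest mixes all 128 input bits, so the first nbytes
    are uniform regardless of the UUID version."""
    digest = hashlib.sha256(u.bytes).digest()
    return digest[:nbytes].hex()  # 12 hex characters for 6 bytes

print(rehash_uuid(uuid.uuid4()))
```

Either way, the birthday numbers above apply to whatever bit length I keep, and since I store the {UUID, shortened ID} mapping anyway, the drain node can at least detect collisions on insert.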
Are there any other approaches? Which one is preferable?
Thanks in advance!