Difference in performance between map and unordered_map in C++
I have a simple requirement: I need a map of type map<int,int> (as in the code below). However, I need the fastest theoretically possible retrieval time.

I used both map and the newly proposed unordered_map from TR1, and I found that, at least while parsing a file and creating the map by inserting one element at a time,

map took only 2 minutes while unordered_map took 5 minutes.

As it is going to be part of code executed on a Hadoop cluster and will contain ~100 million entries, I need the smallest possible retrieval time.

Another helpful piece of information: currently the data (keys) being inserted is a range of integers from 1, 2, ... to ~10 million.

I can also require the user to specify the maximum value and to insert in increasing order as above; will that significantly affect my implementation? (I heard map is based on red-black trees, and inserting in increasing order leads to better performance, or worse?)

Here is the code:

#include <cstdlib>
#include <fstream>
#include <map>
#include <string>
using namespace std;

map<int,int> Label; // this is being changed to unordered_map
fstream LabelFile("Labels.txt");
string inputLine, curnode, nodelabel;
bool failed = false;

// Creating the map from Labels.txt
if (LabelFile.is_open())
{
    while (! LabelFile.eof() )
    {
        getline (LabelFile, inputLine);
        try
        {
            curnode = inputLine.substr(0, inputLine.find_first_of("\t"));
            nodelabel = inputLine.substr(inputLine.find_first_of("\t") + 1, inputLine.size() - 1);
            Label[atoi(curnode.c_str())] = atoi(nodelabel.c_str());
        }
        catch (const std::exception&) // substr throws std::out_of_range, not char*
        {
            failed = true;
            break;
        }
    }
    LabelFile.close();
}

Tentative solution: After reviewing the comments and answers, I believe a dynamic C++ array would be the best option, since the implementation will use dense keys. Thanks

Noyade answered 28/2, 2010 at 6:2 Comment(0)

Insertion into an unordered_map should be O(1) on average, and retrieval should be roughly O(1); it is essentially a hash table.

Your timings are therefore way off, or there is something wrong with your implementation or usage of unordered_map.

You need to provide some more information, and possibly show how you are using the container.

As per section 6.3 of n1836, the complexities for insertion/retrieval are given: average case O(1), worst case O(n) for both operations.

One issue you should consider is that your implementation may need to continually rehash the structure, as you say you have 100M+ items. In that case, if you have a rough idea of how many "unique" elements will be inserted into the container, you can pass that as a parameter to the constructor when instantiating it, and the container will be built with a bucket table of appropriate size.
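For example, a minimal sketch of that (the entry count is just the rough figure from the question; both TR1 and C++11 unordered_map accept an initial bucket count as the first constructor argument):

#include <cstddef>
#include <unordered_map>

int main() {
    // Rough figure taken from the question; adjust to your real data.
    const std::size_t expected_entries = 100000000;

    // Build the bucket table once, up front, instead of letting the
    // container grow from its small default size and rehash repeatedly.
    std::unordered_map<int, int> Label(expected_entries);

    Label[1] = 42; // inserts now proceed without intermediate rehashes
    return 0;
}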

Dangle answered 28/2, 2010 at 6:10 Comment(3)
Yes, from my dict experience in Python a hash table should always be faster than a binary-tree-based map, yet at least for insertion I am finding map to be faster than unordered_map. - Noyade
Yeah, it's possible that rehashing is leading to a significant increase in insertion time, since I am not providing any hint about the likely number of elements. - Noyade
So is it guaranteed to be O(1) on insert or not? I can't tell. What did the guy do wrong? - Panatella

The extra time loading the unordered_map is due to dynamic array resizing. The resizing schedule doubles the number of cells each time the table exceeds its load factor, so starting from an empty table, expect O(lg n) copies of the entire data table. You can eliminate these extra copies by sizing the hash table up front. Specifically:

Label.rehash(expected_number_of_entries / Label.max_load_factor());

Dividing by the max_load_factor accounts for the empty cells that are necessary for the hash table to operate (rehash takes a bucket count, not an element count).
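To see the effect, a small sketch (the element count is an arbitrary demo size) comparing a presized table against one that grows on demand:

#include <cstddef>
#include <iostream>
#include <unordered_map>

int main() {
    const std::size_t n = 1000000; // arbitrary demo size

    std::unordered_map<int, int> grown;    // grows (and rehashes) on demand
    std::unordered_map<int, int> presized;
    presized.rehash(static_cast<std::size_t>(n / presized.max_load_factor()));

    for (int i = 0; i < static_cast<int>(n); ++i) {
        grown[i] = i;    // incurs a series of rehashes as the table doubles
        presized[i] = i; // bucket table is already big enough; no rehash
    }

    std::cout << "grown buckets:    " << grown.bucket_count() << '\n'
              << "presized buckets: " << presized.bucket_count() << '\n';
    return 0;
}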

Onaonager answered 21/6, 2012 at 17:49 Comment(0)

unordered_map (at least in most implementations) gives fast retrieval, but relatively poor insertion speed compared to map. A tree is generally at its best when the data is randomly ordered, and at its worst when the data is ordered (you constantly insert at one end of the tree, increasing the frequency of re-balancing).
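If you want to check that claim on your own platform rather than take it on faith, here is a sketch that times ordered versus shuffled insertion into a std::map (the key count and seed are arbitrary; the code measures, it does not assert the outcome):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <map>
#include <numeric>
#include <random>
#include <vector>

// Insert all keys into a fresh std::map and return the elapsed seconds.
static double insert_seconds(const std::vector<int>& keys) {
    std::map<int, int> m;
    const auto start = std::chrono::steady_clock::now();
    for (int k : keys) m[k] = k;
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
}

int main() {
    std::vector<int> keys(10000000);        // arbitrary demo size
    std::iota(keys.begin(), keys.end(), 1); // 1, 2, ..., N: fully ordered

    const double ordered = insert_seconds(keys);

    std::shuffle(keys.begin(), keys.end(), std::mt19937{42});
    const double shuffled = insert_seconds(keys);

    std::cout << "ordered:  " << ordered << " s\n"
              << "shuffled: " << shuffled << " s\n";
    return 0;
}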

Given that it's ~10 million total entries, you could just allocate a large enough array and get really fast lookups -- assuming enough physical memory that it doesn't cause thrashing, but that's not a huge amount of memory by modern standards.
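A minimal sketch of that idea, assuming dense integer keys up to a known maximum (the sentinel value for "no entry" is my own choice, not part of the question):

#include <vector>

int main() {
    const int max_key = 10000000; // ~10 million dense keys, per the question
    const int no_label = -1;      // assumed sentinel meaning "no entry"

    // One slot per possible key: about 40 MB for 10 million ints.
    std::vector<int> label(max_key + 1, no_label);

    label[12345] = 7;     // "insert": a single indexed store
    int l = label[12345]; // "retrieve": a single indexed load, O(1)
    (void)l;
    return 0;
}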

Edit: yes, a vector is basically a dynamic array.

Edit 2: The code you've added has some problems. Your while (! LabelFile.eof() ) is broken; you normally want to do something like while (LabelFile >> inputdata) instead. You're also reading the data somewhat inefficiently -- what you're apparently expecting is two numbers separated by a tab. That being the case, I'd write the loop something like:

while (LabelFile >> node >> label)
    Label[node] = label;
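For completeness, a self-contained version of that loop under the same assumptions as the question (whitespace-separated integer pairs in Labels.txt):

#include <fstream>
#include <map>

int main() {
    std::map<int, int> Label; // or std::unordered_map<int, int>
    std::ifstream LabelFile("Labels.txt");

    int node, label;
    // operator>> skips whitespace (tabs and newlines alike), and the loop
    // ends cleanly on EOF or on malformed input -- no eof() test needed.
    while (LabelFile >> node >> label)
        Label[node] = label;
    return 0;
}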
Catarrhine answered 28/2, 2010 at 6:12 Comment(3)
The problem is that I am hoping to extend the implementation to handle possibly around a billion entries. - Noyade
It is going to handle networks with a billion+ nodes. The map contains a label for each node in the network; the code will run on Hadoop in streaming mode. - Noyade
@Mitch: yes, that's exactly what I said. @akshayubha: the question isn't really the number of entries, but the density of the keys. If it's a billion keys running from 1 to 1 billion, an array will be fine. If it's a billion keys that are (say) 128 bits apiece, an array won't work at all. - Catarrhine
