Is Dijkstra faster when using Fibonacci Heap?
Is Dijkstra faster when using Fibonacci heap than with the Binary heap?

I did some experiments: I implemented a Fibonacci heap on my own and used it in Dijkstra, and I also tried the ready-to-use Fibonacci heap from the fibheap library, but neither implementation turned out to be faster at finding shortest paths than Dijkstra with a binary heap.

Is it possible that I made a mistake at some point, or is it possible that a Fibonacci heap is actually slower than a binary heap in the context of shortest-path finding with Dijkstra?

Casandra answered 6/7, 2022 at 18:41 Comment(0)

Using a Fibonacci Heap improves the asymptotic runtime of the algorithm. In other words, as your graph grows larger, there will eventually come a point where using a Fibonacci Heap is faster than using a binary heap.

However, the conventional wisdom I've heard is that the graph size required before that happens is so large that, for practical purposes, a binary heap will always be faster.

Porush answered 6/7, 2022 at 21:21 Comment(1)
Indeed. As the Wikipedia article says under Practical Considerations: "Fibonacci heaps have a reputation for being slow in practice due to large memory consumption per node and high constant factors on all operations. Recent experimental results suggest that Fibonacci heaps are more efficient in practice than most of its later derivatives, including quake heaps, violation heaps, strict Fibonacci heaps, rank pairing heaps, but less efficient than either pairing heaps or array-based heaps."Opening

This is a rather long reply, but hopefully it answers the various questions raised in this thread. I also include an efficient (and correct) C++ implementation of a Fibonacci heap below. This has been used to produce all of the results discussed here.

To summarise, Dijkstra's algorithm is an efficient, exact method for finding shortest paths between vertices in a graph. It is particularly useful in transportation problems where we want to determine the shortest route between two geographic locations on a road network. Here, I will consider a C++ implementation of Dijkstra’s algorithm and examine the differences in computing time when the priority queue is implemented as a self-balancing binary tree versus a Fibonacci heap. Most programming languages include some sort of self-balancing binary tree (or binary heap) structure as part of their standard libraries, but implementations of Fibonacci heaps are less common. Many of the existing online C++ implementations of Fibonacci heaps are also buggy, inefficient, and/or difficult to use. This is not the case for the implementation below, which has been fully tested and evaluated.

I'll use the following notation. Let G = (V, E) be an arc-weighted, directed graph where V is a set of n vertices and E is a set of m arcs (directed edges). Also, let N(u) denote the set of vertices that are neighbours of a vertex u. The weight of an arc travelling from a vertex u to a vertex v is denoted by w(u, v).

Here, I will be using Dijkstra’s algorithm to find the shortest path between a source vertex s and all reachable vertices of the graph. In this sense, we are using the algorithm to generate a "shortest path tree rooted at vertex s". An example is illustrated in the figure below. Note that Dijkstra's algorithm is only suitable for graphs in which all arc weights are nonnegative. If this is not the case, then more expensive techniques such as the Bellman-Ford algorithm need to be applied.

Figure 1. An example graph with n = 100 vertices (the black circles). In this case, arcs are drawn with straight lines, and arc weights correspond to the lengths of these lines. This particular graph is also symmetric in that (u, v) ∈ E if and only if (v, u) ∈ E, with w(u, v) = w(v, u). The highlighted arcs show a shortest path tree rooted at the vertex s. Because this graph is strongly connected, the shortest path tree is also a spanning tree.

Dijkstra's algorithm operates by maintaining a set D of so-called "distinguished vertices". Initially, only the source vertex s is considered distinguished. During a run, further vertices are then added to the set D, one at a time. A "label" L(u) is also stored for each vertex u in the graph. During execution, L(u) stores the length of the shortest s-u-path that uses distinguished vertices only. On termination of a run, L(u) will then store the length of the shortest s-u-path in the graph. If a vertex u has a label L(u) of infinity, then no s-u-path is possible in the graph.

In its most basic form, Dijkstra’s algorithm can be described in just three simple steps:

  1. Set L(u) = ∞ for all u ∈ V. Also, let D be an empty set and let L(s) = 0.
  2. Choose a vertex u ∈ V such that: (a) its value for L(u) is minimal; (b) L(u) is less than ∞; and (c) u is not in D. If no such vertex exists, then end; otherwise insert u into D and go to Step 3.
  3. For all neighbours v ∈ N(u) that are not in D, if L(u) + w(u, v) < L(v) then set L(v) = L(u) + w(u, v). Now return to Step 2.

In these steps, one vertex is added to D at each iteration, giving O(n) iterations in total. Within each iteration, we then need to identify the minimum label amongst vertices not in D (an O(n) operation) and then update O(n) vertex labels. This leads to an overall complexity of O(n²).
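
To make these three steps concrete, here is a minimal, self-contained sketch of this basic O(n²) version. (This is my own illustrative code based on the description above; it uses a simple n × n weight matrix rather than the adjacency-list representation developed later in this answer.)

//A minimal sketch of the basic O(n^2) algorithm described in Steps 1-3 above.
//The graph is given as an n x n weight matrix W, where W[u][v] = infty means
//that there is no arc from u to v.
#include <iostream>
#include <vector>
#include <climits>
using namespace std;

const int infty = INT_MAX;

vector<int> basicDijkstra(const vector<vector<int> >& W, int s) {
    int n = W.size();
    vector<int> L(n, infty);  //Labels (Step 1)
    vector<bool> D(n, false); //Marks the distinguished vertices (Step 1)
    L[s] = 0;
    while (true) {
        //Step 2: choose an undistinguished vertex u with a minimal, finite label
        int u = -1;
        for (int i = 0; i < n; i++)
            if (!D[i] && L[i] < infty && (u == -1 || L[i] < L[u])) u = i;
        if (u == -1) break; //No such vertex exists, so end
        D[u] = true;
        //Step 3: update the labels of undistinguished neighbours of u
        for (int v = 0; v < n; v++)
            if (!D[v] && W[u][v] < infty && L[u] + W[u][v] < L[v])
                L[v] = L[u] + W[u][v];
    }
    return L;
}

int main() {
    //Small example: a directed path 0 -> 1 -> 2 with weights 5 and 7
    vector<vector<int> > W(3, vector<int>(3, infty));
    W[0][1] = 5;
    W[1][2] = 7;
    vector<int> L = basicDijkstra(W, 0);
    for (int u = 0; u < 3; u++) cout << "L(" << u << ") = " << L[u] << "\n";
}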

Using Priority Queues

For sparse graphs, run times of Dijkstra’s algorithm can be significantly improved by making use of a priority queue Q. During execution, this priority queue is used to hold the label values of all vertices that have been considered by the algorithm but that are not yet marked as distinguished. It should also allow us to quickly identify the undistinguished vertex that has the minimum label value. In addition to this, it is also desirable for the algorithm to maintain a predecessor array P. On completion of the algorithm, P will allow the construction of shortest paths (vertex sequences) between s and all reachable vertices.

This improved version of Dijkstra’s algorithm can be expressed by the following pseudocode.

DIJKSTRA (s ∈ V) 
1: For all u ∈ V, set L(u) = ∞, set D(u) = false, and set P(u) = NULL
2: Set L(s) = 0 and insert the ordered pair (s, L(s)) into the priority queue Q
3: While Q is not empty do:
4:     Let (u, x) be the element in Q with the minimum value for x
5:     Remove (u, x) from Q and set D(u) = true
6:     For all v ∈ N(u) such that D(v) = false do:
7:         If L(u) + w(u, v) < L(v) then:
8:             If L(v) < ∞ then: 
9:                 Replace (v, L(v)) in Q by (v, L(u) + w(u, v))
10:            Else:
11:                Insert (v, L(u) + w(u, v)) into Q
12:            Set L(v) = L(u) + w(u, v) and set P(v) = u

As shown, the DIJKSTRA procedure uses four data structures, D, L, P and Q. The first three of these contain n elements and should allow direct access (e.g., by using arrays). D is used to mark the distinguished vertices, L is used to hold the labels, and P holds the predecessor of each vertex in the shortest path tree. The priority queue is denoted by Q. At each iteration, Q is used to identify the undistinguished vertex u with the minimal label value. In the remaining instructions, u is then removed from Q and marked as distinguished, and adjustments are made to the labels of undistinguished neighbours of u (and corresponding entries in Q) as applicable.

The asymptotic running time of DIJKSTRA now depends on the data structure used for Q. The first option is to use a self-balancing binary tree (or binary heap) to store Q; the second option is to use a Fibonacci heap. The following table uses Big-O notation to summarise the worst-case complexities of the relevant operations with these data structures. Further information on how these data structures work "under the hood" can be found in Reference 2 below.

Data structure                             Delete-Minimum   Insertion   Deletion   Replacement
Self-balancing binary tree / binary heap   O(log n)         O(log n)    O(log n)   O(log n)
Fibonacci heap                             O(log n)         O(1)        O(log n)   O(1)

In this table, n refers to the number of elements currently in the tree/heap.

For self-balancing trees and binary heaps, we see that all of the above operations take logarithmic time in the worst case. (The replacement of an element involves deleting the element followed by an insertion operation.) With Fibonacci heaps, on the other hand, insertions and replacements take constant time. A replacement is performed using the constant-time Decrease-Key(x, y, z) operation, which operates by identifying a specific element (x, y) in the heap, and then modifying y to a new value z, where z < y. Using Fibonacci heaps, Line 9 of the DIJKSTRA procedure above is therefore rewritten as Decrease-Key(v, L(v), L(u) + w(u, v)). (Although not used in this procedure, the deletion of an arbitrary element (x, y) in a Fibonacci heap is carried out by performing Decrease-Key(x, y, -∞), followed by a Delete-Minimum operation.)
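
To illustrate that last point, an arbitrary deletion could be sketched as follows using the FibonacciHeap class given later in this answer. (The deleteArbitrary helper is my own addition for illustration and is not part of the listing; INT_MIN stands in for -∞ here.)

//Illustrative sketch only (not part of the listing below): delete the element with
//index u from the heap by decreasing its key to the smallest possible value and
//then removing the heap's minimum. INT_MIN plays the role of -infinity.
void deleteArbitrary(FibonacciHeap& Q, int u) {
    Q.decreaseKey(u, INT_MIN); //Decrease-Key(u, L(u), -infinity): u becomes the minimum
    delete Q.extractMin();     //Delete-Minimum: removes and frees the node for u
}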

Recall that n and m denote the numbers of vertices and arcs in the graph, respectively. The use of a self-balancing tree or binary heap for the priority queue Q therefore leads to an overall run time of O((n + m) log n). Using a Fibonacci heap for Q, this complexity is reduced further to just O(m + n log n). Despite this improved complexity, however, Fibonacci heaps are often viewed as slow in practice due to their larger memory consumption and the high constant factors contained in their operations. Indeed, it is noted in Reference 2 that:

the constant factors and programming complexity of Fibonacci heaps make them less desirable than ordinary binary (or k-ary) heaps for most applications. Thus Fibonacci heaps are predominantly of theoretical interest.
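
To see where the asymptotic bounds quoted above come from, note that one run of the DIJKSTRA procedure performs at most n insertions, at most n Delete-Minimum operations, and at most m replacements (at most one per arc). Combining these counts with the per-operation costs in the table gives:

Binary tree / binary heap: n x O(log n) + n x O(log n) + m x O(log n) = O((n + m) log n)
Fibonacci heap:            n x O(1) + n x O(log n) + m x O(1) = O(m + n log n)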

The C++ code below can be used to explore these issues further. Here, two versions of Dijkstra’s algorithm are presented, one that uses a self-balancing tree for Q, and one that uses a Fibonacci heap. In this case, the set container from C++’s standard library is used for our self-balancing binary trees. The Fibonacci heap is defined by the custom FibonacciHeap class.

#include <iostream>
#include <string>
#include <climits>
#include <algorithm>
#include <vector>
#include <tuple>
#include <set>
#include <time.h>

using namespace std;

const int infty = INT_MAX;

//Code for printing out a vector
template<typename T>
ostream& operator<<(ostream& s, vector<T> t) {
    s << "[";
    for (size_t i = 0; i < t.size(); i++) {
        s << t[i] << (i == t.size() - 1 ? "" : ",");
    }
    return s << "] ";
}

//Structs used with the adjacency list. 
struct Neighbour {
    int vertex; //Index of neighbouring vertex
    int weight;    //weight of the associated arc
};

//Graph Class (uses an adjacency list)
class Graph {
public:
    int n; //Num vertices
    int m; //Num arcs
    vector<vector<Neighbour> > adj;
    Graph(int n) {
        this->n = n;
        this->m = 0;
        this->adj.resize(n, vector<Neighbour>());
    }
    ~Graph() {
        this->n = 0;
        this->m = 0;
        this->adj.clear();
    }
    void addArc(int u, int v, int w) {
        this->adj[u].push_back(Neighbour{ v, w });
        this->m++;
    }
};

//Struct and ordering/comparison operator used with self-balancing tree (c++ set)
struct HeapItem {
    int vertex;
    int label;
};
struct minHeapItem {
    bool operator() (const HeapItem& lhs, const HeapItem& rhs) const {
        return tie(lhs.label, lhs.vertex) < tie(rhs.label, rhs.vertex);
    }
};

//Struct for a Fibonacci heap node
struct FibonacciNode {
    int degree; //Number of children.
    FibonacciNode* parent; //Pointer to parent
    FibonacciNode* child; //Pointer to first child
    FibonacciNode* left; //Pointer to left sibling.
    FibonacciNode* right; //Pointer to right sibling.
    bool mark; //Is the node marked?
    int key; //Node's key value.
    int nodeIndex; //The nodes index value (referring to the vertex in the problem graph)
};

//Fibonacci heap class
class FibonacciHeap {
private:
    FibonacciNode* minNode;
    int numNodes;
    vector<FibonacciNode*> degTable;
    vector<FibonacciNode*> nodePtrs;
public:
    FibonacciHeap(int n) {
        //Constructor function
        this->numNodes = 0;
        this->minNode = NULL;
        this->degTable = {};
        this->nodePtrs.resize(n);
    }
    ~FibonacciHeap() {
        //Destructor function
        this->numNodes = 0;
        this->minNode = NULL;
        this->degTable.clear();
        this->nodePtrs.clear();
    }
    int size() {
        //Number of nodes in the heap
        return this->numNodes;
    }
    bool empty() {
        //Is the heap empty?
        if (this->numNodes > 0) return false;
        else return true;
    }
    void insert(int u, int key) {
        //Insert the vertex u with the specified key (value for L(u)) into the Fibonacci heap. O(1) operation
        this->nodePtrs[u] = new FibonacciNode;
        this->nodePtrs[u]->nodeIndex = u;
        FibonacciNode* node = this->nodePtrs[u];
        node->key = key;
        node->degree = 0;
        node->parent = NULL;
        node->child = NULL;
        node->left = node;
        node->right = node;
        node->mark = false;
        FibonacciNode* minN = this->minNode;
        if (minN != NULL) {
            FibonacciNode* minLeft = minN->left;
            minN->left = node;
            node->right = minN;
            node->left = minLeft;
            minLeft->right = node;
        }
        if (minN == NULL || minN->key > node->key) {
            this->minNode = node;
        }
        this->numNodes++;
    }
    FibonacciNode* extractMin() {
        //Extract the node with the minimum key from the heap. O(log n) operation, where n is the number of nodes in the heap
        FibonacciNode* minN = this->minNode;
        if (minN != NULL) {
            int deg = minN->degree;
            FibonacciNode* currChild = minN->child;
            FibonacciNode* remChild;
            for (int i = 0; i < deg; i++) {
                remChild = currChild;
                currChild = currChild->right;
                _existingToRoot(remChild);
            }
            _removeNodeFromRoot(minN);
            this->numNodes--;
            if (this->numNodes == 0) {
                this->minNode = NULL;
            }
            else {
                this->minNode = minN->right;
                FibonacciNode* minNLeft = minN->left;
                this->minNode->left = minNLeft;
                minNLeft->right = this->minNode;
                _consolidate();
            }
        }
        return minN;
    }
    void decreaseKey(int u, int newKey) {
        //Decrease the key of the node in the Fibonacci heap that has index u. O(1) operation
        FibonacciNode* node = this->nodePtrs[u];
        if (newKey > node->key) return;
        node->key = newKey;
        if (node->parent != NULL) {
            if (node->key < node->parent->key) {
                FibonacciNode* parentNode = node->parent;
                _cut(node);
                _cascadingCut(parentNode);
            }
        }
        if (node->key < this->minNode->key) {
            this->minNode = node;
        }
    }
private:
    //The following are private functions used by the public methods above
    void _existingToRoot(FibonacciNode* newNode) {
        FibonacciNode* minN = this->minNode;
        newNode->parent = NULL;
        newNode->mark = false;
        if (minN != NULL) {
            FibonacciNode* minLeft = minN->left;
            minN->left = newNode;
            newNode->right = minN;
            newNode->left = minLeft;
            minLeft->right = newNode;
            if (minN->key > newNode->key) {
                this->minNode = newNode;
            }
        }
        else {
            this->minNode = newNode;
            newNode->right = newNode;
            newNode->left = newNode;
        }
    }
    void _removeNodeFromRoot(FibonacciNode* node) {
        if (node->right != node) {
            node->right->left = node->left;
            node->left->right = node->right;
        }
        if (node->parent != NULL) {
            if (node->parent->degree == 1) {
                node->parent->child = NULL;
            }
            else {
                node->parent->child = node->right;
            }
            node->parent->degree--;
        }
    }
    void _cut(FibonacciNode* node) {
        _removeNodeFromRoot(node);
        _existingToRoot(node);
    }
    void _addChild(FibonacciNode* parentNode, FibonacciNode* newChildNode) {
        if (parentNode->degree == 0) {
            parentNode->child = newChildNode;
            newChildNode->right = newChildNode;
            newChildNode->left = newChildNode;
            newChildNode->parent = parentNode;
        }
        else {
            FibonacciNode* child1 = parentNode->child;
            FibonacciNode* child1Left = child1->left;
            child1->left = newChildNode;
            newChildNode->right = child1;
            newChildNode->left = child1Left;
            child1Left->right = newChildNode;
        }
        newChildNode->parent = parentNode;
        parentNode->degree++;
    }
    void _cascadingCut(FibonacciNode* node) {
        FibonacciNode* parentNode = node->parent;
        if (parentNode != NULL) {
            if (node->mark == false) {
                node->mark = true;
            }
            else {
                _cut(node);
                _cascadingCut(parentNode);
            }
        }
    }
    void _link(FibonacciNode* highNode, FibonacciNode* lowNode) {
        _removeNodeFromRoot(highNode);
        _addChild(lowNode, highNode);
        highNode->mark = false;
    }
    void _consolidate() {
        int deg, rootCnt = 0;
        if (this->numNodes > 1) {
            this->degTable.clear();
            FibonacciNode* currNode = this->minNode;
            FibonacciNode* currDeg, * currConsolNode;
            FibonacciNode* temp = this->minNode, * itNode = this->minNode;
            do {
                rootCnt++;
                itNode = itNode->right;
            } while (itNode != temp);
            for (int cnt = 0; cnt < rootCnt; cnt++) {
                currConsolNode = currNode;
                currNode = currNode->right;
                deg = currConsolNode->degree;
                while (true) {
                    while (deg >= int(this->degTable.size())) {
                        this->degTable.push_back(NULL);
                    }
                    if (this->degTable[deg] == NULL) {
                        this->degTable[deg] = currConsolNode;
                        break;
                    }
                    else {
                        currDeg = this->degTable[deg];
                        if (currConsolNode->key > currDeg->key) {
                            swap(currConsolNode, currDeg);
                        }
                        if (currDeg == currConsolNode) break;
                        _link(currDeg, currConsolNode);
                        this->degTable[deg] = NULL;
                        deg++;
                    }
                }
            }
            this->minNode = NULL;
            for (size_t i = 0; i < this->degTable.size(); i++) {
                if (this->degTable[i] != NULL) {
                    _existingToRoot(this->degTable[i]);
                }
            }
        }
    }
};
//End of FibonacciHeap class

tuple<vector<int>, vector<int>> dijkstraFibonacci(Graph& G, int s) {
    int u, v, w;
    FibonacciHeap Q(G.n);
    vector<int> L(G.n), P(G.n);
    vector<bool> D(G.n);
    //Initialise the data structures
    for (int u = 0; u < G.n; u++) {
        D[u] = false;
        L[u] = infty;
        P[u] = -1;
    }
    //Main Dijkstra algorithm
    L[s] = 0;
    Q.insert(s, 0);
    while (!Q.empty()) {
        FibonacciNode* minN = Q.extractMin();
        u = minN->nodeIndex;
        delete minN; //Free the extracted node; extractMin removes it from the heap but does not delete it
        D[u] = true;
        for (auto& neighbour : G.adj[u]) {
            v = neighbour.vertex;
            w = neighbour.weight;
            if (D[v] == false) {
                if (L[u] + w < L[v]) {
                    if (L[v] == infty) {
                        Q.insert(v, L[u] + w);
                    }
                    else {
                        Q.decreaseKey(v, L[u] + w);
                    }
                    L[v] = L[u] + w;
                    P[v] = u;
                }
            }
        }
    }
    return make_tuple(L, P);
}

tuple<vector<int>, vector<int>> dijkstraSelfBalancingTree(Graph& G, int s) {
    int u, v, w;
    set<HeapItem, minHeapItem> Q;
    vector<int> L(G.n), P(G.n);
    vector<bool> D(G.n);
    //Initialise the data structures
    for (u = 0; u < G.n; u++) {
        D[u] = false;
        L[u] = infty;
        P[u] = -1;
    }
    //Main Dijkstra algorithm
    L[s] = 0;
    Q.emplace(HeapItem{ s, 0 });
    while (!Q.empty()) {
        u = (*Q.begin()).vertex;
        Q.erase(*Q.begin());
        D[u] = true;
        for (auto& neighbour : G.adj[u]) {
            v = neighbour.vertex;
            w = neighbour.weight;
            if (D[v] == false) {
                if (L[u] + w < L[v]) {
                    if (L[v] == infty) {
                        Q.emplace(HeapItem{ v, L[u] + w });
                    }
                    else {
                        Q.erase({ v, L[v] });
                        Q.emplace(HeapItem{ v, L[u] + w });
                    }
                    L[v] = L[u] + w;
                    P[v] = u;
                }
            }
        }
    }
    return make_tuple(L, P);
}

vector<int> getPath(int u, int v, vector<int>& P) {
    //Get the u-v-path specified by the P vector
    vector<int> path;
    int x = v;
    if (P[x] == -1) return path;
    while (x != u) {
        path.push_back(x);
        x = P[x];
    }
    path.push_back(u);
    reverse(path.begin(), path.end());
    return path;
}

int main(int argc, char* argv[]) {
    //Construct an example graph (a directed cycle on 5 vertices here)
    Graph G(5);
    G.addArc(0, 1, 10);
    G.addArc(1, 2, 10);
    G.addArc(2, 3, 10);
    G.addArc(3, 4, 10);
    G.addArc(4, 0, 10);

    //Set the source vertex
    int s = 0;

    //Declare some variables
    vector<int> L, P;
    double duration1, duration2;
    clock_t start;

    //Execute Dijkstra's algorithm using a Fibonacci heap    
    start = clock();
    tie(L, P) = dijkstraFibonacci(G, s);
    duration1 = ((double)clock() - start) / CLOCKS_PER_SEC;

    //Execute Dijkstra's algorithm using a self-balancing tree
    start = clock();
    tie(L, P) = dijkstraSelfBalancingTree(G, s);
    duration2 = ((double)clock() - start) / CLOCKS_PER_SEC;

    //Output some information
    cout << "Input graph has " << G.n << " vertices and " << G.m << " arcs\n";
    cout << "Dijkstra with Fibonacci heap took " << duration1 << " seconds\n";
    cout << "Dijkstra with self-balancing tree took " << duration2 << " seconds\n";
    cout << "Shortest paths from source to each vertex are as follows:\n";
    for (int u = 0; u < G.n; u++) {
        cout << "v-" << s << " to v-" << u << ",\t";
        if (L[u] == infty)
            cout << "length = infinity. No path exists\n";
        else
            cout << "length = " << L[u] << ",\tpath = " << getPath(s, u, P) << "\n";
    }
}

Running this produces the following output:

Input graph has 5 vertices and 5 arcs
Dijkstra with Fibonacci heap took 1.2e-05 seconds
Dijkstra with self-balancing tree took 4e-06 seconds
Shortest paths from source to each vertex are as follows:
v-0 to v-0, length = 0,     path = [] 
v-0 to v-1, length = 10,    path = [0,1] 
v-0 to v-2, length = 20,    path = [0,1,2] 
v-0 to v-3, length = 30,    path = [0,1,2,3] 
v-0 to v-4, length = 40,    path = [0,1,2,3,4]

The following features should be noted in this C++ code:

  • A small directed graph is included in the code as an example input.
  • Graphs are defined using the Graph class. Graph objects are directed and are stored using the adjacency list representation for weighted graphs. It is assumed that the vertices are labelled from 0 to n – 1, where n is the number of vertices, and that all arc weights are nonnegative integers. (The algorithm will also work perfectly well on undirected graphs.)
  • The code uses the C++ constant INT_MAX for infinity values. Exceeding this value will result in overflow and incorrect behaviour.
  • Both versions of Dijkstra’s algorithm return two vectors: the label vector L and the predecessor vector P.
  • The code also contains a function for extracting a path from the predecessor array P. This is used to produce some of the output as shown.

Evaluation

To assess the performance of these two algorithm variants, two graph topologies are now considered: dense planar graphs and random graphs. Planar graphs are a type of graph that can be drawn on a plane so that no arcs cross. Here, they were formed by placing n vertices into a (10000 x 10000) unit square before generating a Delaunay triangulation to give a graph with approximately (but not exceeding) 6n – 12 arcs. The weight of each arc was then set to the Euclidean distance between its two endpoints. These planar graphs can be considered as similar to road networks which, as noted, are an important application area of shortest path algorithms.
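
For reference, instances of this kind could be generated along the following lines. This is only a sketch: the delaunayEdges() function is a hypothetical helper (it is not implemented here and would typically wrap a computational geometry library), and the code otherwise assumes the Graph class and headers from the listing above, plus <random>, <cmath> and <utility>.

//Sketch: generate a planar instance as described above. The delaunayEdges() helper
//is hypothetical; it is assumed to return the edges {u, v} of a Delaunay
//triangulation of the given points.
struct Point { double x, y; };

vector<pair<int, int> > delaunayEdges(const vector<Point>& points); //Assumed helper (not shown)

Graph makePlanarInstance(int n, unsigned seed) {
    mt19937 gen(seed);
    uniform_real_distribution<double> coord(0.0, 10000.0);
    vector<Point> points(n);
    for (int i = 0; i < n; i++) points[i] = { coord(gen), coord(gen) };
    Graph G(n);
    for (auto& e : delaunayEdges(points)) {
        int u = e.first, v = e.second;
        //Weight = Euclidean distance between the endpoints, rounded to an integer
        int w = (int)lround(hypot(points[u].x - points[v].x, points[u].y - points[v].y));
        G.addArc(u, v, w); //Each undirected edge becomes two arcs (a symmetric graph)
        G.addArc(v, u, w);
    }
    return G;
}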

The second type of graph considered here is the random directed graph. These were generated by creating n vertices and then, for each ordered pair of vertices (u, v), adding the arc (u, v) with a probability p. This generation process leads to graphs with approximately p x n x (n – 1) arcs. In this case, each arc was assigned a weight between 1 and 10000, selected at random. In the following, all trials were performed on a 64-bit Windows 10 machine with a 3.3 GHz Intel Core i5-4590 CPU and 8 GB of RAM.
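
The random instances are easier to reproduce. The sketch below (again my own illustrative code, reusing the Graph class from the listing above and requiring <random>) adds each possible arc (u, v) with u ≠ v with probability p and assigns it a uniformly random weight between 1 and 10000.

//Sketch: generate a random directed graph with arc probability p and random
//integer weights in [1, 10000]. Uses the Graph class defined in the listing above.
Graph makeRandomInstance(int n, double p, unsigned seed) {
    mt19937 gen(seed);
    uniform_real_distribution<double> prob(0.0, 1.0);
    uniform_int_distribution<int> weight(1, 10000);
    Graph G(n);
    for (int u = 0; u < n; u++) {
        for (int v = 0; v < n; v++) {
            if (u != v && prob(gen) < p) {
                G.addArc(u, v, weight(gen)); //Arc (u, v) is included with probability p
            }
        }
    }
    return G;
}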

Execution times of the two algorithm variants with our dense planar graphs are summarised in the following chart for various values of n. Each point on the graph is the mean taken from 100 runs. Error bars show one standard deviation on either side of the mean.

[Chart: Planar Graph Results]

The chart above shows that, with these planar graphs, the use of a self-balancing tree gives faster run times, though the differences are very marginal. Indeed, once the graphs have been read into memory, both variants can compute shortest path trees in graphs of a million vertices (and approximately six million arcs) in well under a second.

The following three charts show the results of the same experiments with random graphs, using p = 0.1, 0.5 and 0.9 respectively. For p = 0.1, where the graphs are quite sparse, the results are similar to those for the planar graphs, with self-balancing trees producing slightly shorter run times. In contrast, with the densest graphs (p = 0.9), the opposite is true, with Fibonacci heaps bringing slight improvements. The reason is that, in these dense graphs, each vertex has many more neighbours, so the number of Decrease-Key() and Insertion operations is large relative to the number of the remaining, more expensive, operations.

[Charts: Random graphs, p = 0.1; Random graphs, p = 0.5; Random graphs, p = 0.9]

Finally, note that the largest instances considered here involve n = 5000 vertices, density p = 0.9, and therefore approximately 22.5 million arcs. In our runs, such graphs were seen to occupy approximately 225 MB of memory. The times taken to load these graphs into RAM are not included in the above timings.

Further Reading and References

Reference 1: https://www.mdpi.com/1999-4893/13/11/269/pdf (General information and practical applications of shortest path algorithms)

Reference 2: Cormen, T., C. Leiserson, and R. Rivest (1991) Introduction to Algorithms (Second ed.) ISBN: 9780262031417. (A detailed description of Fibonacci heaps and their operations)

Trephine answered 27/2, 2023 at 12:3 Comment(2)
An even fuller answer to this (plus source code) can be found in the paper that I recently wrote: arxiv.org/abs/2303.10034Trephine
The paper is useful. What do you think about boost?Ferial
