I'm preparing for a coding interview and was refreshing my memory on graphs. I was wondering the following: everywhere I've looked, adjacency lists are presented as more memory efficient than adjacency matrices for large sparse graphs, and should thus be preferred in that case. In addition, computing the number of outgoing edges of a node takes O(N) with a matrix but O(1) with a list, and listing the adjacent nodes takes O(number of adjacent nodes) with a list instead of O(N) with a matrix.
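To make the comparison concrete, here is a minimal Python sketch of the two classic representations (variable names like `adj_list` are just illustrative):

```python
# Directed graph with N = 4 nodes and edges 0->1, 0->2, 2->3.
N = 4
edges = [(0, 1), (0, 2), (2, 3)]

# Adjacency matrix: O(N^2) memory no matter how sparse the graph is.
matrix = [[0] * N for _ in range(N)]
for u, v in edges:
    matrix[u][v] = 1

# Adjacency list: O(N + E) memory.
adj_list = [[] for _ in range(N)]
for u, v in edges:
    adj_list[u].append(v)

# Out-degree of node 0: O(N) scan with the matrix, O(1) with the list
# (a Python list stores its own length).
assert sum(matrix[0]) == 2        # O(N)
assert len(adj_list[0]) == 2      # O(1)

# Neighbors of node 0: O(N) with the matrix, O(out-degree) with the list.
assert [v for v in range(N) if matrix[0][v]] == adj_list[0] == [1, 2]
```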
Such places include Cormen et al.'s book, the Stack Overflow question "Size of a graph using adjacency list versus adjacency matrix?", and Wikipedia.
However, using a sparse matrix format such as Compressed Sparse Row (CSR, also called Compressed Row Storage), the memory requirement is only O(N + E): one offset per row plus one entry per non-zero, i.e. per edge, which is the same as for lists. The number of outgoing edges of a node is O(1) (it is the difference of two consecutive entries in CSR's row-pointer array), and the adjacent nodes can be listed in O(number of adjacent nodes).
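For comparison, here is a minimal CSR sketch of the same graph as above (the array names `row_ptr` / `col_idx` are conventional but not from any particular library):

```python
# Same directed graph: N = 4 nodes, edges 0->1, 0->2, 2->3.
N = 4
edges = sorted([(0, 1), (0, 2), (2, 3)])  # CSR stores edges grouped by source row

# Build CSR: col_idx holds one entry per edge (O(E));
# row_ptr holds N + 1 offsets into col_idx (O(N)).
col_idx = [v for _, v in edges]
row_ptr = [0] * (N + 1)
for u, _ in edges:
    row_ptr[u + 1] += 1
for i in range(N):                  # prefix sum turns counts into offsets
    row_ptr[i + 1] += row_ptr[i]

def out_degree(u):
    # O(1): difference of two consecutive row pointers.
    return row_ptr[u + 1] - row_ptr[u]

def neighbors(u):
    # O(out-degree): a contiguous slice of col_idx.
    return col_idx[row_ptr[u]:row_ptr[u + 1]]

assert out_degree(0) == 2
assert neighbors(0) == [1, 2]
assert out_degree(1) == 0
assert neighbors(2) == [3]
```

Note that `row_ptr` and `col_idx` are essentially all the adjacency lists concatenated into one contiguous array plus an index of where each node's list starts, which is what prompts my question below.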
Why isn't this discussed? Should I think of CSR as a kind of adjacency list representation of the graph the matrix represents? Or is the argument that matrices are memory intensive flawed because it doesn't consider sparse matrix representations?
Thanks!