This builds upon a previous question, Solve a simple packing combination with dependencies, although there is no need to read that question to understand this one.
This question asks about the fastest way to count the number of linear extensions of a partially ordered set.
Such a partially ordered set can be visualized as the feasibility of a packing configuration under an arbitrary ordering of the objects. The question is then equivalent to: what are the possible orderings in which all the blocks can be handed to you so that you achieve the shown packing configuration, if the blocks cannot be moved once placed?
In this example, all blocks A, B, C, D, E, F, G need to be laid down. The dependencies are: A, B, C, D: none; E: {A, B}; F: {B, C}; G: {C, D}.
You can get all possibilities of completing the partially ordered set by checking all permutations of the 7 blocks and counting those that meet the dependencies. A faster alternative was provided by גלעד ברקן in the previous post: he uses a graph structure that terminates further search down a branch as soon as a dependency is not satisfied by the nodes already explored.
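For reference, the brute-force baseline could be sketched roughly as follows (my own sketch, not code from the previous post; the deps dict simply restates the dependencies listed above):

from itertools import permutations

deps = {
    'A': set(), 'B': set(), 'C': set(), 'D': set(),
    'E': {'A', 'B'}, 'F': {'B', 'C'}, 'G': {'C', 'D'},
}

def brute_force(deps):
    count = 0
    for order in permutations(deps):       # all 7! orderings of the blocks
        placed = set()
        valid = True
        for block in order:
            if not deps[block] <= placed:  # a dependency has not been placed yet
                valid = False
                break
            placed.add(block)
        if valid:
            count += 1
    return count

print(brute_force(deps))

גלעד ברקן's graph-based version is reproduced below.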
# 'neighbours' lists the blocks that may be placed immediately after this one
# (every block except the node itself and its own dependencies);
# 'dependency' lists the blocks that must already be placed before it.
nodes = {
    'A': {
        'neighbours': ['B', 'C', 'D', 'E', 'F', 'G'], 'dependency': set()},
    'B': {
        'neighbours': ['A', 'C', 'D', 'E', 'F', 'G'], 'dependency': set()},
    'C': {
        'neighbours': ['A', 'B', 'D', 'E', 'F', 'G'], 'dependency': set()},
    'D': {
        'neighbours': ['A', 'B', 'C', 'E', 'F', 'G'], 'dependency': set()},
    'E': {
        'neighbours': ['C', 'D', 'F', 'G'], 'dependency': set(['A', 'B'])},
    'F': {
        'neighbours': ['A', 'D', 'E', 'G'], 'dependency': set(['B', 'C'])},
    'G': {
        'neighbours': ['A', 'B', 'E', 'F'], 'dependency': set(['C', 'D'])},
}

def f(key, visited):
    # 'key' is the block being placed now; 'visited' holds the blocks placed before it
    if len(visited) + 1 == len(nodes):
        return 1
    if nodes[key]['dependency'] and not nodes[key]['dependency'].issubset(visited):
        return 0  # a dependency of 'key' is still missing: prune this branch
    result = 0
    for neighbour in nodes[key]['neighbours']:
        if neighbour not in visited:
            _visited = visited.copy()
            _visited.add(key)
            result += f(neighbour, _visited)
    return result

# exploiting symmetry: A/D and B/C play interchangeable roles,
# and no valid ordering can start with E, F or G
print(2 * f('A', set()) + 2 * f('B', set()))
My question is whether there is any way to further optimize גלעד ברקן's algorithm without losing generality. I could probably make it faster if the dependencies were sparse and the set could be broken down into independent sub-posets, but that is not the case for this particular example.
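For completeness, the decomposition idea would combine independent components roughly like this (a minimal sketch with a hypothetical combine helper; math.comb requires Python 3.8+): the counts of the components multiply, times the number of ways to interleave their elements.

from math import comb

def combine(c1, n1, c2, n2):
    # Hypothetical helper: if the poset splits into two independent components
    # with n1 and n2 elements and c1 and c2 linear extensions respectively,
    # their elements can be interleaved freely, so the total count is
    # c1 * c2 * C(n1 + n2, n1).
    return c1 * c2 * comb(n1 + n2, n1)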