Why is My Minimax Not Expanding and Making Moves Correctly?
I am implementing minimax in Python 2.7.11 for a basic game of Pacman. Pacman is the maximizing agent, and one or more ghosts (depending on the test layout) are the minimizing agents.

I must implement minimax so that it can handle potentially more than one minimizing agent and can build a tree of n plies (depth). In ply 1, for example, each ghost takes a turn minimizing the terminal-state utilities of its possible moves, and Pacman takes his turn maximizing what the ghosts have already minimized. Graphically, ply 1 looks like this:

Ply 1 depth of minimax

If we had the following arbitrary utilities assigned to the green terminal states (left to right):

-10, 5, 8, 4, -4, 20, -7, 17

Pacman should return -4 and then move in that direction, creating an entirely new minimax tree based on that decision. First, a list of variables and functions needed for my implementation to make sense:

# Stores everything about the current state of the game
gameState

# A globally defined depth that varies depending on the test cases.
#     It could be as little as 1 or arbitrarily large
self.depth

# A locally defined depth that keeps track of how many plies deep I've gone in the tree
self.myDepth

# A function that assigns a numeric value as a utility for the current state
#     How this is calculated is moot
self.evaluationFunction(gameState)

# Returns a list of legal actions for an agent
#     agentIndex = 0 means Pacman, ghosts are >= 1
gameState.getLegalActions(agentIndex)

# Returns the successor game state after an agent takes an action
gameState.generateSuccessor(agentIndex, action)

# Returns the total number of agents in the game
gameState.getNumAgents()

# Returns whether or not the game state is a winning (terminal) state
gameState.isWin()

# Returns whether or not the game state is a losing (terminal) state
gameState.isLose()
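
To make the interface concrete, here is a minimal sketch of how one agent's legal moves expand into successor states; the helper name expandAgent is made up purely for illustration:

# Illustration only (not part of the project): expanding one agent's turn
# into successor states using the functions described above.
def expandAgent(gameState, agentIndex):
    successors = []
    for action in gameState.getLegalActions(agentIndex):
        successors.append(gameState.generateSuccessor(agentIndex, action))
    return successors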

This is my implementation:

""" 
getAction takes a gameState and returns the optimal move for pacman,
assuming that the ghosts are optimal at minimizing his possibilities
"""
def getAction(self, gameState):
    self.myDepth = 0

    def miniMax(gameState):
        if gameState.isWin() or gameState.isLose() or self.myDepth == self.depth:
            return self.evaluationFunction(gameState)

        numAgents = gameState.getNumAgents()
        for i in range(0, numAgents, 1):
            legalMoves = gameState.getLegalActions(i)
            successors = [gameState.generateSuccessor(j, legalMoves[j]) for j, move 
                                                           in enumerate(legalMoves)]
            for successor in successors:
                if i == 0:
                    return maxValue(successor, i)
                else:
                    return minValue(successor, i)

    def minValue(gameState, agentIndex):
        minUtility = float('inf')
        legalMoves = gameState.getLegalActions(agentIndex)
        successors = [gameState.generateSuccessor(i, legalMoves[i]) for i, move
                                                      in enumerate(legalMoves)]
        for successor in successors:
            minUtility = min(minUtility, miniMax(successor))

        return minUtility

    def maxValue(gameState, agentIndex):
        self.myDepth += 1
        maxUtility = float('-inf')
        legalMoves = gameState.getLegalActions(agentIndex)
        successors = [gameState.generateSuccessor(i, legalMoves[i]) for i, move
                                                       in enumerate(legalMoves)]
        for successor in successors:
            maxUtility = max(maxUtility, miniMax(successor))

        return maxUtility

    return miniMax(gameState)

Does anyone have any ideas why my minimax is not expanding and choosing moves correctly? I'm hoping there are a few minimax/artificial intelligence experts out there who can identify my issues. Thanks in advance.

UPDATE: by initializing self.myDepth to 0 instead of 1, I have eradicated the exception-throwing issue. However, my implementation is still incorrect overall.

Allegro asked 15/3/2016 at 21:53. Comments (10):
I don't see the function getScore() defined anywhere in your code. Where is it supposed to be? - Hanseatic
It is within an entirely separate testCases class. I'll add it in, but out of context it may seem a bit bizarre. The more important fact is that this is happening because my self.evaluationFunction(gameState) is being called before terminal states or the maximum depth have been reached. - Allegro
The if i == 0 line in miniMax seems very sketchy to me. That condition is always going to be True on the first pass of the outer loop, so the following line will return and the other agents will never be considered. Is the indentation in your code correct? - Mica
I made sure the code here matches exactly what I have, but I see your point. The pseudocode I am working with for def miniMax(gameState) says: "1) If state is win/lose/terminal then return utility. 2) If next agent is pacman then return maxValue(gameState). 3) If next agent is ghost then return minValue(gameState)". I saw no other way to keep track of the agents than to use for i in range(0, numAgents, 1): - Allegro
Well, you can only return once per run of the function, so having an unconditional return inside a loop makes the later iterations never occur. Perhaps the next agent to act should be part of the gameState? - Mica
I changed my initialization of self.myDepth to 0 instead of 1, which eliminates the exception-throwing issue at least. - Allegro
self.myDepth == self.depth: what is going on here? Is this the max-depth check? Where are you incrementing/decrementing self.depth? - Labourite
self.depth is globally defined and varies based on the test cases I use to check whether my implementation of minimax is generalized enough that it can be passed any number of agents with any number of plies. self.myDepth is a local variable I initialize that gets incremented within the def maxValue(gameAgent, agentIndex, depth) function, since maxValue can only be called once per depth (for Pacman). The line self.myDepth == self.depth checks whether I have reached the maximum depth defined in my test case. - Allegro
1) What is self? self is usually what you use for the self-reference in a class, but it doesn't look like you're doing that here. Is getAction part of a class? 2) depth += 1 in maxValue isn't doing anything; you're updating a function parameter, and that's not going to be reflected elsewhere. - Labourite
1) Again, this is only a fragment of the entire project, which consists of over 8000 lines of code. The self in self.depth and self.evaluationFunction(gameState) is part of a class that extends a long hierarchy of agents and objects. They are in the same class as def getAction(gameState), but they are correct. The program is written so that my implementation of minimax is all that matters for my test cases succeeding. 2) You are right, I meant to change that part of the code, as I caught it earlier. - Allegro
Answer:

I have finally figured out a solution to my issue. The main problem was that I was not tracking the depth correctly to keep count of the plies. Instead of incrementing the depth inside the maxValue method, the depth should be passed as a parameter to each function and incremented only when being passed into maxValue. There were several other logical errors, such as not referencing numAgents correctly, and the fact that my miniMax method was not returning an action. Here is my solution, which turned out to work:

def getAction(self, gameState):

    self.numAgents = gameState.getNumAgents()
    self.myDepth = 0
    self.action = Directions.STOP # Imported from a class that defines the 5 possible directions

    # Top level of the search: expands Pacman's moves and returns the best action found
    def miniMax(gameState, index, depth, action):
        maxU = float('-inf')
        legalMoves = gameState.getLegalActions(index)
        for move in legalMoves:
            tempU = maxU
            successor = gameState.generateSuccessor(index, move)
            maxU = minValue(successor, index + 1, depth)
            if maxU > tempU:
                action = move
        return action

    # Pacman's (maximizing) turn at the current depth
    def maxValue(gameState, index, depth):
        if gameState.isWin() or gameState.isLose() or depth == self.depth:
            return self.evaluationFunction(gameState)

        index %= (self.numAgents - 1)
        maxU = float('-inf')
        legalMoves = gameState.getLegalActions(index)
        for move in legalMoves:
            successor = gameState.generateSuccessor(index, move)
            maxU = max(maxU, minValue(successor, index + 1, depth))
        return maxU

    # A ghost's (minimizing) turn; hands off to the next ghost, or back to Pacman one ply deeper
    def minValue(gameState, index, depth):
        if gameState.isWin() or gameState.isLose() or depth == self.depth:
            return self.evaluationFunction(gameState)

        minU = float('inf')
        legalMoves = gameState.getLegalActions(index)
        if index + 1 == self.numAgents:
            for move in legalMoves:
                successor = gameState.generateSuccessor(index, move)
                # Where depth is increased
                minU = min(minU, maxValue(successor, index, depth + 1))
        else:
            for move in legalMoves:
                successor = gameState.generateSuccessor(index, move)
                minU = min(minU, minValue(successor, index + 1, depth))
        return minU

    return miniMax(gameState, self.index, self.myDepth, self.action)

And presto! Our final working multi-agent minimax implementation.
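
For anyone who wants to experiment with the same pattern outside of the Pacman framework, here is a self-contained toy sketch; the nested-list tree, the evaluate stand-in, and the numbers are all made up for illustration, but the agent cycling and depth counting mirror the solution above:

# Toy illustration: agent 0 maximizes, agents 1..numAgents-1 minimize,
# and the depth increases only when the turn wraps back around to agent 0.
def minimax(node, agentIndex, depth, numAgents, maxDepth):
    # Leaves (plain numbers) and the depth limit are the terminal cases.
    if isinstance(node, (int, float)) or depth == maxDepth:
        return evaluate(node)
    nextAgent = (agentIndex + 1) % numAgents
    nextDepth = depth + 1 if nextAgent == 0 else depth
    values = [minimax(child, nextAgent, nextDepth, numAgents, maxDepth)
              for child in node]
    return max(values) if agentIndex == 0 else min(values)

def evaluate(node):
    # Stand-in for an evaluation function: a leaf is its own utility,
    # and an unfinished subtree is crudely scored by its first leaf.
    while not isinstance(node, (int, float)):
        node = node[0]
    return node

# One ply: Pacman (max) picks a branch, a single ghost (min) replies in each.
# min(3, 5) = 3 and min(2, 9) = 2, so the root value is max(3, 2) = 3.
tree = [[3, 5], [2, 9]]
print(minimax(tree, 0, 0, numAgents=2, maxDepth=1))  # prints 3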

Allegro answered 2/4/2016 at 6:47.
