AI algorithm for "RaceTrack" game

Does anyone know (or can anyone suggest) a good algorithm for an AI for the RaceTrack pencil-and-paper game?

Since you have 9 possible choices at each step and need to look at least 6-10 steps ahead to decide on a good strategy, brute force gets very expensive, even if you can rule out some choices because they intersect the boundary.

Currently I'm trying to assign each choice a quality value in order to decide which choices to rule out - but I don't yet know good rules for assigning such a value.

Upgrowth answered 5/7, 2011 at 21:25 Comment(5)
Presumably the answer would be to construct some sort of racing line, then look two or three steps ahead to evaluate speed and position versus the racing line? Or is that just restating the question? I can think of some ad hoc methods of constructing a racing line from first principles, but I suspect a proper algorithm would be a smarter move.Llama
Vector Racer Online: select AI Level 3 to get the optimal moves for a given track.Concurrence
One way to get great answers would be to organize a challenge on the code golf stack exchange website. Define a standard way to represent the tracks and answers, and award a bounty to the best solution. I think it would be a lot of fun!Letendre
@Jarrod: Vector Racer Online only lets you use pre-defined tracks (probably a lot of pre-computation has gone into them), and from what I can tell it doesn't give you any code.Diaz
@Diaz I never claimed it gave any code, and it isn't pre-computed; it calculates different paths at the same difficulty level. I only posted the link so everyone could play the game and understand the rules.Concurrence
14

I have made a C++ solver that's a bit too long (187 lines) to fit comfortably here, so I have put it on pastebin instead: http://pastebin.com/3G4dfTjR. The program either computes an optimal (minimum possible number of moves) solution, or reports that none is possible.

Usage

Run the program as racetrack startX startY goalX goalY [circleX circleY radius].

The program assumes a 100x100 grid which may optionally contain a single circular obstacle whose centre and radius you specify. You must additionally specify the car's initial location, and a single goal location. Although these constraints are somewhat restrictive, a look at the code should make it obvious that they don't limit the algorithm in general -- all the relevant logic is encapsulated in the isMoveValid() and isGoalState() routines, so if someone can be bothered implementing more general versions of these routines (e.g. allowing the user to specify a bitmap of grid locations, and/or allowing multiple goal locations) then this can be incorporated without difficulty.
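Roughly, such generalised routines might look like the sketch below. This is only an illustration, not the code in the pastebin link: the Track struct, the bitmap/goal-set representation and the exact signatures are assumptions.

#include <set>
#include <utility>
#include <vector>

// Hypothetical generalisation of the two track-specific routines: a bitmap
// of blocked cells plus any number of goal cells. For brevity this only
// checks the destination cell; a full version would also test the straight
// line segment between the old and new positions against the bitmap.
struct Track {
    int w, h;
    std::vector<std::vector<bool>> blocked;   // blocked[y][x] == true -> obstacle
    std::set<std::pair<int, int>> goals;      // any number of goal locations

    bool isMoveValid(int fromX, int fromY, int toX, int toY) const {
        (void)fromX; (void)fromY;             // segment test omitted in this sketch
        return toX >= 0 && toX < w && toY >= 0 && toY < h && !blocked[toY][toX];
    }
    bool isGoalState(int x, int y) const {
        return goals.count({x, y}) > 0;
    }
};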

The only slight complication would be in getting the goal location to be the same as (or near, but "on the other side of") the starting location, which is what you need if you want your track to be a circuit. In this case, in order to avoid the solver simply turning the car around or stopping immediately, you would need to specify an invisible "starting line", and alter isMoveValid() to forbid "backwards" movements across this line.
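One hedged way to implement that check, assuming a vertical start line (again just a sketch, not part of the posted code): treat the line as one-way and have isMoveValid() reject any move whose straight-line path crosses it in the wrong direction.

// Returns true if the straight move from (ax, ay) to (bx, by) crosses the
// vertical start line x == lineX (spanning lineY0..lineY1) right-to-left,
// i.e. "backwards". isMoveValid() would reject such a move.
bool crossesStartLineBackwards(double ax, double ay, double bx, double by,
                               double lineX, double lineY0, double lineY1) {
    if (!(ax >= lineX && bx < lineX)) return false;   // not a right-to-left crossing
    double t = (lineX - ax) / (bx - ax);              // fraction of the move at the line
    double y = ay + t * (by - ay);                    // y-coordinate of the crossing
    return y >= lineY0 && y <= lineY1;
}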

How it works

Because each move costs exactly 1, it's possible to use a breadth first search through the 4D state space to find an optimal solution. Whenever we visit a given state s, which consists of a 4-tuple (x, y, dx, dy) with dx and dy being the velocity vector we used to get to (x, y), we consider all 9 states that we can reach from s with a single move. For any such state t which has not already been seen, this path to t (i.e. via s) is guaranteed to be optimal, since BFS always visits nodes in order of their minimum distance from the root. Whenever we determine an optimal path for a state, we record the predecessor state, enabling a traceback of the full path to be produced at the end.
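The sketch below shows the shape of that search in C++. It is not the pastebin code itself: the grid size, the example goal cell and the simplified isMoveValid() are placeholder assumptions.

#include <map>
#include <queue>
#include <tuple>

// Minimal sketch of the 4D breadth-first search described above.
struct State {
    int x, y, dx, dy;
    bool operator<(const State& o) const {
        return std::tie(x, y, dx, dy) < std::tie(o.x, o.y, o.dx, o.dy);
    }
};

const int W = 100, H = 100;    // grid size as in this answer
const int GX = 90, GY = 10;    // example goal cell

// Simplified: only keeps the car on the grid. The real solver also rejects
// moves that pass through the circular obstacle.
bool isMoveValid(const State& from, const State& to) {
    (void)from;
    return to.x >= 0 && to.x < W && to.y >= 0 && to.y < H;
}
bool isGoalState(const State& s) { return s.x == GX && s.y == GY; }

std::map<State, State> bfs(const State& start) {
    std::map<State, State> pred;            // doubles as the "seen" set
    std::queue<State> q;
    pred[start] = start;
    q.push(start);
    while (!q.empty()) {
        State s = q.front(); q.pop();
        if (isGoalState(s)) break;          // first goal reached is optimal
        for (int ax = -1; ax <= 1; ++ax)    // the 9 possible accelerations
            for (int ay = -1; ay <= 1; ++ay) {
                State t{s.x + s.dx + ax, s.y + s.dy + ay, s.dx + ax, s.dy + ay};
                if (pred.count(t) == 0 && isMoveValid(s, t)) {
                    pred[t] = s;            // records the optimal predecessor
                    q.push(t);
                }
            }
    }
    return pred;    // follow pred[] back from the goal to recover the path
}

Defining State's ordering with std::tie lets one std::map serve as both the visited set and the predecessor table used for the traceback.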

BFS is simpler, and thus probably faster, than Dijkstra's Algorithm or A* search, which are more general algorithms that allow moves to have various costs -- flexibility that we don't need here. A* may be faster if there are few obstacles to confound its heuristic, but at each step it needs to look up the minimum-cost node, which is usually done using a heap, whereas for BFS a minimum-cost node is always available at the front of the queue.

Examples

stopwatch racetrack 30 3 90 10

Starting at (30, 3).
Goal is (90, 10).
Grid size is 100*100 (W*H).
No obstacle.
11-step solution:
(90, 10) (dx=10, dy=4)
(80, 6) (dx=9, dy=3)
(71, 3) (dx=8, dy=2)
(63, 1) (dx=7, dy=1)
(56, 0) (dx=6, dy=0)
(50, 0) (dx=5, dy=0)
(45, 0) (dx=5, dy=0)
(40, 0) (dx=4, dy=0)
(36, 0) (dx=3, dy=-1)
(33, 1) (dx=2, dy=-1)
(31, 2) (dx=1, dy=-1)
(30, 3) (dx=0, dy=0)
128113 states were examined in the process.
stopwatch: Terminated. Elapsed time: 343ms
stopwatch: Process completed with exit code 0.

stopwatch racetrack 30 3 90 10 50 20 25

Starting at (30, 3).
Goal is (90, 10).
Grid size is 100*100 (W*H).
A circular obstacle of radius 25 is centred at (50, 20).
22-step solution:
(90, 10) (dx=5, dy=-8)
(85, 18) (dx=5, dy=-7)
(80, 25) (dx=4, dy=-6)
(76, 31) (dx=4, dy=-5)
(72, 36) (dx=5, dy=-4)
(67, 40) (dx=6, dy=-3)
(61, 43) (dx=7, dy=-2)
(54, 45) (dx=8, dy=-1)
(46, 46) (dx=7, dy=0)
(39, 46) (dx=6, dy=1)
(33, 45) (dx=5, dy=2)
(28, 43) (dx=4, dy=3)
(24, 40) (dx=3, dy=4)
(21, 36) (dx=2, dy=5)
(19, 31) (dx=1, dy=6)
(18, 25) (dx=0, dy=6)
(18, 19) (dx=-1, dy=5)
(19, 14) (dx=-2, dy=4)
(21, 10) (dx=-3, dy=3)
(24, 7) (dx=-3, dy=2)
(27, 5) (dx=-2, dy=1)
(29, 4) (dx=-1, dy=1)
(30, 3) (dx=0, dy=0)
949565 states were examined in the process.
stopwatch: Terminated. Elapsed time: 3076ms
stopwatch: Process completed with exit code 0.

Notice how the optimal solution here first has to "double back", go up and around and then down again, since the circular obstacle extends all the way past the bottom of the grid.

Small bug: the code as posted will give a short (but nonzero-length!) answer if you set the goal location equal to the initial location. Obviously this could be checked for as a special case, but I'd already put the code on pastebin when I realised this... :)

Peacetime answered 6/7, 2011 at 14:43 Comment(5)
Great! I'll go through your code tonight to see whether I understand it and whether it serves my purposes. Thank you already!Upgrowth
@Upgrowth BFS may fail with larger, more complicated tracks since it requires an exponential amount of memory. If you run into this problem, an iterative-deepening depth-first search or perhaps IDA* would be a better fit.Carleycarli
@Aaron, my BFS needs memory at most linear in the size of the state space. Here, for an n by n grid, that would be at most O(n^4) bytes, which may be a lot, but it's not "exponential". This bounds the memory for storing the traceback path for any state, and it also bounds the size of the queue, since we only ever add a state to the queue if it has not been seen yet. (In fact I suspect that dx and dy are actually O(sqrt(n)) rather than O(n), which would make the total bound O(n^3) bytes.)Peacetime
@Peacetime Ah, good point. BFS is exponential in the solution depth, but for this problem the solution depth is constrained by the size of the grid. The effective branching factor is also low (it seems somewhat high, until you consider all the states that are cut off by the boundaries).Carleycarli
@Aaron, sorry to go on, but "BFS is exponential in the solution depth" is only true when the number of distinct states that can be reached in n steps is exponential in n, which frequently is not the case. Many search problems (like RaceTrack) enable you to get to a given state in more than 1 way, and if that happens "often enough" then it reduces the time and space complexity. OTOH if the state space was e.g. a binary tree, then there is only 1 way to get from the root to each state, and using BFS to find a minimal solution at depth n would indeed need O(2^n) time and space.Peacetime
5

Others recommend A*, which is probably the way to go, but there is a problem with that approach. Let me first say that the 'cost' of going from one node to another is always 1: since you want to minimize the number of steps, there is simply no other cost involved.

But the important point I want to make is that a location (x, y) is not a unique node in the search graph of A*! The node is characterized by x and y, but also by the x and y coordinates of the node the car is coming from (or by the velocity components vx and vy, if you will). So you cannot just run A* over a 2-dimensional grid; the search space is actually 4-dimensional. That said, A* is probably still the way to go.

As for the heuristic, you could get really creative about that one, but I suggest something like distance to the finish minus current velocity, where the distance is precalculated for each point in the regular 2D grid (use Dijkstra's algorithm for that). This makes the A* search head towards the finish line first, and preferably as fast as possible. I believe such an algorithm would do very well at calculating the entire route immediately.
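A hedged sketch of that precomputation follows; the names are illustrative, and because every step costs 1 a plain breadth-first flood fill from the finish is equivalent to Dijkstra here.

#include <queue>
#include <utility>
#include <vector>

// dist[y][x] = length of the shortest on-track path from (x, y) to the
// finish cell (fx, fy), or -1 if the finish is unreachable from there.
std::vector<std::vector<int>> distanceToFinish(
        const std::vector<std::vector<bool>>& blocked, int fx, int fy) {
    int h = blocked.size(), w = blocked[0].size();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
    std::queue<std::pair<int, int>> q;
    dist[fy][fx] = 0;
    q.push({fx, fy});
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h
                    && !blocked[ny][nx] && dist[ny][nx] == -1) {
                dist[ny][nx] = dist[y][x] + 1;   // one more step than the neighbour
                q.push({nx, ny});
            }
        }
    }
    return dist;
}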

One problem, though, is that A* will always yield the optimal route, so an AI using such an algorithm wouldn't be fun to play against, as it would always win (assuming the starting positions are fair).

Delius answered 5/7, 2011 at 23:42 Comment(4)
I assume you mean "distance to finish divided by current velocity", since this would give you a lower bound on the number of moves required to reach the finish at the current velocity. This heuristic isn't admissible, though. A better heuristic would count the number of moves required to travel the distance to the finish, assuming maximum acceleration.Carleycarli
@Aaron, you're right, I had completely forgotten about admissibility. I haven't come up with an admissible solution yet, but dividing by the current velocity doesn't guarantee admissibility either, since the car can accelerate.Delius
Right, but assuming maximum acceleration does guarantee admissibility.Carleycarli
True. It wouldn't be too hard to compute a maximum velocity for each point on the track, which would help quite a bit.Carleycarli
5

So far, I don't think anyone has addressed a key point of your question: how do you come up with a good "quality value"? In AI, the quality value you refer to is usually called a "heuristic". Ideally, your heuristic would tell you exactly the minimum number of moves required to reach the finish, given the current position/velocity. In reality, we have to settle for something that's easier to compute.

One important guideline is that a good heuristic should be admissible; that is, it should never overestimate the cost of reaching the goal (in your case, the number of moves to reach the finish). The A* algorithm depends on having an admissible heuristic.

A common technique for coming up with an admissible heuristic is to relax the original problem. In games, you can often do this by changing the game so that it becomes easier (e.g. by dropping rules). In RaceTrack, for example, you could straighten out the track to make it an easier game. With a straight track, the best strategy is clearly to just accelerate continuously. Thus, an admissible heuristic is to compute the distance from the current position to the finish (i.e. the length of the straightened-out track) and then compute the number of moves required to travel that distance assuming constant acceleration.
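As a concrete sketch of that last step (an assumption on my part: the car is taken to gain at most one unit of speed per move, as in the standard rules), the minimum number of moves to cover a straight-line distance d from speed v can be computed greedily; since it never overestimates, it keeps the heuristic admissible.

// Lower bound on the moves needed to cover distance d starting at speed v,
// accelerating by at most 1 per move (never overestimates).
int minMovesToCover(int d, int v) {
    int moves = 0;
    while (d > 0) {
        ++v;        // accelerate as hard as possible
        d -= v;     // cover v cells this move
        ++moves;
    }
    return moves;
}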

You can come up with other heuristics by relaxing different rules, but there is often a trade-off between the accuracy of the heuristic and the amount of computation required.

Carleycarli answered 6/7, 2011 at 0:10 Comment(1)
I think you could compute an approximation of how far along the track you are using fast marching. It wouldn't be too hard to implement, and should be largely good enough to guide the algorithm.Letendre
1

While it won't be immediately applicable to RaceTrack, you may be able to learn something from the A* path-finding algorithm. It's used in a lot of games to help the AI figure out how to get from point A to point B as quickly as possible.

Landahl answered 5/7, 2011 at 22:57 Comment(0)
1

You mention the idea of "assigning each choice some quality value" - this is called a heuristic function. Many AI algorithms (such as A* and alpha-beta pruning, mentioned by others) are only as good as the heuristic function you plug into them.

However, if you do manage to create a good heuristic function, then these algorithms will automatically perform much better "for free" - so it is very much worth your while to invest some time in developing a good one.

Another angle is to try to pre-compute your entire race from the start. Then it becomes a question of minimizing the number of turns needed to cross the finish line. One handy minimum-finding algorithm is simulated annealing.

That aside, it would also be cool to see some genetic algorithm solution to a game like this. Not sure if it's quite the right fit, but I could imagine creating a 'brain' that takes various inputs - expected distance from wall at a few turns in the future, speed, distance to other racers, etc - and evolving the logic of that brain with a genetic algorithm. The trick is to break the problem down into parts that can be mutated meaningfully.

Actually, you could even combine them, and use a genetic algorithm to develop a heuristic function which is plugged into a standard AI search algorithm.

Honestly though, brute force would probably work fine on a constrained track, since you can toss out a subtree if you crash (and crashes are common for bad paths).

Diaz answered 5/7, 2011 at 23:11 Comment(0)
1

I'd propose that you start by reversing the problem. Use retrograde analysis (like they do in chess endgames, http://en.wikipedia.org/wiki/Retrograde_analysis) to calculate backwards from the end, assuming you are the only player, to see how many steps are needed to cross the finish line, given a position and velocity. If my thinking is correct, the time to calculate this should be linear in the number of positions (i.e. position and velocity combinations). It should be very quick.
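A hedged sketch of what that backwards pass could look like (all names are illustrative, and a track-specific move-validity check is passed in rather than defined here): seed a queue with every state lying on the finish line and expand predecessor states, so each state ends up labelled with its exact single-player distance to the finish.

#include <map>
#include <queue>
#include <tuple>
#include <vector>

struct State {
    int x, y, dx, dy;
    bool operator<(const State& o) const {
        return std::tie(x, y, dx, dy) < std::tie(o.x, o.y, o.dx, o.dy);
    }
};

// steps[s] = minimum number of moves from state s to the finish, ignoring
// opponents; unreachable states simply never appear in the map.
std::map<State, int> retrograde(const std::vector<State>& finishStates,
                                bool (*isMoveValid)(const State&, const State&)) {
    std::map<State, int> steps;
    std::queue<State> q;
    for (const State& f : finishStates) { steps[f] = 0; q.push(f); }
    while (!q.empty()) {
        State s = q.front(); q.pop();
        for (int ax = -1; ax <= 1; ++ax)
            for (int ay = -1; ay <= 1; ++ay) {
                // p is the state that reaches s when accelerating by (ax, ay).
                State p{s.x - s.dx, s.y - s.dy, s.dx - ax, s.dy - ay};
                if (steps.count(p) == 0 && isMoveValid(p, s)) {
                    steps[p] = steps[s] + 1;
                    q.push(p);
                }
            }
    }
    return steps;   // use steps[state] directly as the search heuristic
}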

It will not be the absolute truth, since you have competitors disturbing your path, but it will give you a very good heuristic for your search algorithm.

Conversable answered 6/7, 2011 at 9:6 Comment(0)
0

There are a few algorithms known from chess, like alpha-beta pruning, move ordering, etc. Maybe you will have better luck if you search in a chess context.


Related: alpha-beta pruning, strategy.


Edit: those path-finding algorithms only work if you don't have additional rules and conditions. For example, if you also have velocity and centripetal forces, you're out of luck. If you don't have those advanced conditions, you're better off with a simple path-finding algorithm, as stated in other answers.

Inhere answered 5/7, 2011 at 22:46 Comment(2)
You can use A* in state space instead of the two dimensional (x,y) space, it should work as well.Letendre
If you say so :) I can't imagine it would work if you have dynamic weights between two nodes depending on the velocity, direction, etc.Inhere
