Pacumen – Exploring Testing and AI

In this post I want to set the stage for some future posts about how you might work, as a specialist tester, in an environment that uses machine learning and various artificial intelligence techniques. This is an area that I’m finding many testers are not ready for. To that end, I’m going to show you how to get my Pacumen code repository up and working. Then I’ll take you through a few exercises to put it through its paces.

You might want to check out my previous post on Testing and AI for the context. My goal is to introduce testers to the thinking behind data science, machine learning and some aspects of artificial intelligence. These are domains testers find themselves working in more and more and I think it’s important to be ready for the kind of thinking these environments tend to require.

This post will make sure you can use the project and introduce you to the basic mechanics. Further posts will get more into the basis of the project, including some of the theory, and then ask you to consider what that means for developing and testing. You might wonder if this order isn’t reversed. Shouldn’t some of that theory come first and then the application of it? Well, that’s often not how it works in practice. We often have to act with incomplete knowledge and, ironically, that’s one of the very situations that Pacumen was designed to model.

You can clone Pacumen or just download a zipped version. You will need either Python 2 or 3. It doesn’t really matter which although I tend to recommend Python 3 on general principles. This is still a work in progress as I’m putting in place some significant updates from the university-based versions of the code. That said, everything should work just fine for purposes of these posts.

Play the Game

After you’ve downloaded and unpacked the code, the easiest way to work with the project is to go to your terminal or command line and make sure you are in the root project directory. You can test the game by typing the following (I’m assuming the entry-point script is pacumen.py; check the project README for the exact name):

python pacumen.py

This is an approximation of the actual game Pac-Man. I’ll presume most people are familiar with the game but, if not, feel free to read up on the Pac-Man wiki. You can move the Pac-Man character around using the arrow keys or the traditional gamer WASD setup: W for up, A for left, S for down, and D for right. One thing you might notice is that doing nothing in the game causes your score to go down. The same applies if you are moving through the board but not eating dots.

If you find the game moves a bit too fast, you can try to slow down the frame time, like this:

python pacumen.py --frameTime 0.2

Understand the Layouts

Pacumen generates its game board from layout files. Let’s try one of those:

python pacumen.py --layout test_maze

Here I’m using a “layout” switch to specify the name of a particular layout. Those layout files are located in the layouts directory of the project, all with a “.lay” extension. So “test_maze” corresponds to test_maze.lay.
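
To make that a little more concrete, here is what a very small layout file might look like. The character conventions shown are an assumption based on the classic Berkeley Pac-Man format that projects like this tend to derive from, with % for walls, . for food dots, and P for pacman’s starting position; open any file in the layouts directory to confirm the exact conventions Pacumen uses.

```
%%%%%%
%   P%
%    %
%.   %
%%%%%%
```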

You can have pacman eat the dot to win. Let’s try another layout:

python pacumen.py --layout tiny_maze

Again, pretty easy to win this one. And one more just to show some variation:

python pacumen.py --layout medium_maze

Incidentally, if you want to see the maze a little smaller you can do this:

python pacumen.py --layout tiny_maze --zoom 0.5

Or a little bigger:

python pacumen.py --layout tiny_maze --zoom 2

Depending on your resolution you may need to do that with certain mazes. For example, try out big_maze.

These specific layouts I’m showing you are a particular implementation of a Pac-Man environment called a “single agent environment.” The agent here being the Pac-Man character (hereafter just “pacman”). In these particular environments, pacman always starts at the top right corner. The game ends when pacman eats the last food dot. There can be food dots anywhere on the board. In all these mazes, at the bottom left corner is a single food dot. When you eat it, you win.

Obviously these are very simplified boards, and that’s important in a machine learning or artificial intelligence environment where you want to start off with simpler approximations. But thinking up more complicated variations can be interesting and is part of testing in such environments.

There’s really no way to lose on these boards because pacman can’t be killed. There are no ghosts yet. But, as mentioned, do notice that just sitting doing nothing does cause your score to go down. This is how learning systems and learning algorithms tend to work: there is a negative “reward” for actions that are not immediately useful towards reaching a goal.

Creating Agents

What I want to do now is give you a taste of how certain features are added to the project. Those features are capable of being tested, of course, and we’ll get into that in future posts. For now I just want you to get a feel for the basic operation. That basic operation requires the use of agents.

Pacumen provides an Agent class. The Agent class is very simple. It’s the class you will subclass to create your pacman agent.

But what is an agent? For now, let’s not get technical about it and just settle for saying that an agent is something that acts within a world, according to a finite set of actions. Further, this agent is pursuing a goal. Here that world is the Pac-Man grid and the finite actions correspond to the four directions that pacman can go: up, down, left, right. Or north, south, east, west, if you prefer. The goal is to complete a board by eating all of the dots on it. And, if there were ghosts, to avoid being killed by them, of course.

Let’s create a very simple, dumb agent. Create a new file in the main project folder and add the following code to it:

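A minimal DumbAgent looks something like the sketch below. The import path is an assumption, so adjust it to wherever the Agent and Direction classes actually live in the repository; the fallback definitions just let the sketch stand on its own.

```python
# Assumed import path; adjust to match the Pacumen repository layout.
try:
    from pacumen.agents import Agent, Direction
except ImportError:
    # Minimal stand-ins so this sketch can run outside the project.
    class Agent:
        def get_action(self, state):
            raise NotImplementedError

    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


class DumbAgent(Agent):
    """An agent that goes west at every time step, no matter what."""

    def get_action(self, state):
        # Ignore the state entirely and always head west.
        return Direction.WEST
```
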
You’ll note that we’re importing the classes Agent and Direction. The Direction class is very simple, essentially just acting as a placeholder for a series of constants.

Every subclass of Agent, like DumbAgent, is required to implement a get_action() method. This is the method called at each time step of the session. In the case of a Pac-Man game, the session lasts until an individual game is won or lost. The time steps could be measured only when pacman moves but, in fact, time keeps running continuously.

Going back to the get_action method, it should return a valid action for pacman to carry out. The only possible actions are to go in a specific direction. Notice that the get_action method is supplied a parameter, state, which the method can use to find out about the current configuration of the game. We’ll come back to the state in a bit.

With your new file and code in place, try this command:

python pacumen.py --layout test_maze --pacman DumbAgent

The command above specifies to run the Pac-Man program, using the test_maze environment, and the agent (pacman) will be controlled by the DumbAgent. That DumbAgent has one action it will apply: going west.

And look at that — pacman wins! You should see something like this in your console:

Pac-Man emerges victorious! Score: 503
Average Score: 503.0
Scores:        503.0
Win Rate:      1/1 (1.00)
Record:        Win

You might wonder about that “win rate.” Part of using intelligent agents in these contexts is having them try multiple sessions. This makes a lot more sense when you are applying learning algorithms. But to see how this would work, try this slight variation on the above command:

python pacumen.py --layout test_maze --pacman DumbAgent --numGames 3

And three times in a row, pacman wins. Congratulations, you’ve made an intelligent agent. Well, not really. Try the same thing but with tiny maze instead:

python pacumen.py --layout tiny_maze --pacman DumbAgent

You’ll actually find that the game crashes with a message about “Illegal action West.” Now let’s try it out with another layout:

python pacumen.py --layout medium_maze --pacman DumbAgent

Same thing.

The situation here is that, in the Pac-Man game, if pacman tries to move into a part of the grid that is blocked, an “illegal action” is generated. This is okay; we’re catching an exception. It’s not quite okay that the agent is doing this, but it is expected. After all, it is a dumb agent. We’ll fix that next. But for now, just start thinking about what it means to create an agent that acts a certain way. You might want to consider each agent a kind of test case. The code in the get_action method would be the equivalent of a data condition.

Use the State

Let’s try to use the information present in the state parameter. This is an object of type GameState. You might want to take a look at the GameState class and check out some of the methods defined there. Using these methods, you can find out all kinds of information about the current state of the game. Then you can base your agent’s actions accordingly.

This is how the algorithms for these kinds of systems would be developed and, as a tester, this information will be important to you.

For now understand that a GameState specifies the full game state, including the food, agent configurations (such as position and direction of motion) and score changes. GameStates are used by an instance of the Game class to capture the actual state of the game and, most importantly, can be used by agents to reason about the game and decide what actions to take to achieve a goal.

From an implementation perspective, much of the information in a GameState is stored in an instance of the GameStateData class. However, GameState provides that data via its own accessor methods, which should be used rather than referring to the GameStateData object directly. Again, that’s an important thing to be aware of from a testing standpoint. It’s critical to test these kinds of systems by the interface they will have to use, even when a lower-level representation is available to them.

Keep in mind that the Pac-Man game has a number of different agents, but they are basically either pacman or the ghosts. Each agent in the game has a unique index; pacman is always index 0, with ghosts starting at index 1. For purposes of this post, however, we won’t have any ghosts. But testing such systems does require you to consider, as part of your test and data conditions, what agents are in place and how they will act.

Let’s modify our DumbAgent a bit to get some of this state information:

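A sketch of that modification follows, assuming the GameState accessors are named get_pacman_position() and get_legal_actions(); verify those names against the actual GameState class.

```python
# Same assumed import as before, with stand-ins so the sketch runs alone.
try:
    from pacumen.agents import Agent, Direction
except ImportError:
    class Agent:
        pass

    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


class DumbAgent(Agent):
    """Goes west at every time step, but reports what it sees first."""

    def get_action(self, state):
        # Ask the state where pacman is and what moves are currently legal.
        print("Location: ", state.get_pacman_position())
        print("Actions Available: ", state.get_legal_actions())
        return Direction.WEST
```
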
Let’s run this with test_maze:

python pacumen.py --layout test_maze --pacman DumbAgent

The result of this will be shown in your console:

Location:  (8, 1)
Actions Available:  ['West', 'Stop']
Location:  (7, 1)
Actions Available:  ['East', 'West', 'Stop']
Location:  (6, 1)
Actions Available:  ['East', 'West', 'Stop']
Location:  (5, 1)
Actions Available:  ['East', 'West', 'Stop']
Location:  (4, 1)
Actions Available:  ['East', 'West', 'Stop']
Location:  (3, 1)
Actions Available:  ['East', 'West', 'Stop']
Location:  (2, 1)
Actions Available:  ['East', 'West', 'Stop']
Pac-Man emerges victorious! Score: 503
Average Score: 503.0
Scores:        503.0
Win Rate:      1/1 (1.00)
Record:        Win

So we see that pacman is simply doing the one action (“West”) any time it can do so. Given that the sole food pellet happens to be west of its current position in the test maze, this guarantees a win. But notice how that’s just a happenstance of the maze. As we’ve seen, this same exact strategy would fail entirely with the tiny maze.

So now let’s add another class to this same file, called GoWestAgent, and utilize information from the state:

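A minimal version of that agent, under the same assumptions about import path and accessor names as before, might look like this:

```python
# Assumed import path, with stand-ins so the sketch runs alone.
try:
    from pacumen.agents import Agent, Direction
except ImportError:
    class Agent:
        pass

    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


class GoWestAgent(Agent):
    """Goes west whenever that is legal; otherwise stops."""

    def get_action(self, state):
        # Only head west if the state says that move is currently legal.
        if Direction.WEST in state.get_legal_actions():
            return Direction.WEST
        return Direction.STOP
```
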
Let’s try it with test maze again, noting that we change the name of the agent passed to the “pacman” option:

python pacumen.py --layout test_maze --pacman GoWestAgent

Great. Let’s try it with tiny_maze:

python pacumen.py --layout tiny_maze --pacman GoWestAgent

Better. The game doesn’t crash. Although pacman does appear to still be pretty dumb.

Let’s create yet another agent. As before, you don’t have to replace your existing one. Add the following RandomAgent:

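A sketch of that agent, with the same assumed import path and accessor name as before:

```python
import random

# Assumed import path, with stand-ins so the sketch runs alone.
try:
    from pacumen.agents import Agent, Direction
except ImportError:
    class Agent:
        pass

    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


class RandomAgent(Agent):
    """Picks uniformly among whatever actions are legal right now."""

    def get_action(self, state):
        # Any legal action, including 'Stop', can be chosen.
        return random.choice(state.get_legal_actions())
```
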
Notice that we’re now importing the “random” module. You can actually put that import at the top of the file if you want. Now try it with our two mazes:

python pacumen.py --layout test_maze --pacman RandomAgent
python pacumen.py --layout tiny_maze --pacman RandomAgent

Pay attention to what you observe there. Specifically, what you might notice is that now pacman can do really badly on the test_maze. This is because the action choice is now random. However, this does mean that pacman can now win on the tiny maze, although it may take some time. Still, that’s better than before. So you can see that even with this simple environment and with slight modifications to the agent (Dumb, GoWest, Random), different results can be obtained.

This is a really important sensitivity that testers and developers have to be aware of as they apply different agents to different environments.

I should note also that RandomAgent always includes a choice for the ‘Stop’ action. This tends to slow the agent down. Stopping is needed in situations where you need to evade ghosts but, for now, in environments without any ghosts, you can choose not to pick the ‘Stop’ action.
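
One way to act on that, sketched under the same assumptions as the earlier agents (the RestlessRandomAgent name is my own invention, not something in the code base), is to filter ‘Stop’ out of the legal actions before choosing:

```python
import random

# Assumed import path, with stand-ins so the sketch runs alone.
try:
    from pacumen.agents import Agent, Direction
except ImportError:
    class Agent:
        pass

    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


class RestlessRandomAgent(Agent):
    """Like RandomAgent, but never chooses to stand still."""

    def get_action(self, state):
        # Drop 'Stop' from the legal actions before picking randomly.
        moves = [a for a in state.get_legal_actions() if a != Direction.STOP]
        return random.choice(moves)
```
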

Again I’ll note that there are many layouts that you can try, simply by looking in the layouts folder and choosing one. You can also create your own simply by following the conventions.

What the pacman agent can perceive is based on the methods of the GameState class, which means that the agent can perceive:

  • Its own position.
  • Its current score in the game.
  • The position of all of the ghosts (if there are any).
  • The locations of the walls.
  • The positions of any power pellets (if there are any).
  • The positions of each food dot.
  • The total number of food dots remaining.
  • Whether the game has been won or lost.

In addition, given the action chosen, pacman can determine what the next state of the environment will be by using the generate_pacman_successor() method.

Now, as a tester — and as a developer, I would argue — you can figure out some things you would like to test. You could even try to code those things given that the API has essentially been provided to you. For example, you could think about testing something called a SimpleReflexAgent. This agent would look at the possible legal actions and, if one of those actions would cause a food dot to be eaten, it will definitely choose that action. But if none of the immediate actions lead to eating a food dot, then the agent can choose randomly from the possibilities.
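
As a sketch of that idea, and assuming accessor names like get_num_food() and generate_pacman_successor() (verify both against the GameState class), a SimpleReflexAgent might look like:

```python
import random

# Assumed import path, with stand-ins so the sketch runs alone.
try:
    from pacumen.agents import Agent, Direction
except ImportError:
    class Agent:
        pass

    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


class SimpleReflexAgent(Agent):
    """Eats a dot whenever one move away; otherwise moves randomly."""

    def get_action(self, state):
        moves = [a for a in state.get_legal_actions() if a != Direction.STOP]
        for action in moves:
            # If taking this action lowers the food count, a dot gets eaten.
            successor = state.generate_pacman_successor(action)
            if successor.get_num_food() < state.get_num_food():
                return action
        # No adjacent dot, so fall back to a random legal move.
        return random.choice(moves)
```
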

You might say: “But how would I have known to create something called a ‘SimpleReflexAgent’? What even is that?” Good point. As a tester — and, again, as a developer — you would have to learn the domain. But even if you didn’t know that specific term, you could have thought up that kind of test based on the information I gave you in this post.

Different Representations

You’ll notice that everything has been done using the standard, graphical Pac-Man display. However, in many machine learning and AI settings you’ll want to deal with different representations. You can use a purely textual display for these. For example, try this:

python pacumen.py --layout test_maze --pacman RandomAgent --textGraphics

This will show you one of the same scenarios we looked at above, but you’ll see it’s being played out on a textual graphic display. I won’t go into that too much here but this notion of different representations for a common layout, or environment, is an important part of the sensitivities you would be testing for in these kinds of applications.

Search Agents

There is a SearchAgent class in the code base and this is actually a critical component of how many of these types of systems work. The SearchAgent implementation is a very general search agent that finds a path using a supplied search algorithm for a supplied search problem. The instance of the class then returns actions to follow that path.

Two key terms are introduced there: search algorithm and search problem. There’s a lot to unpack with that statement and I’ll save that for a future post. For now just think about using a SearchAgent for the pacman agent and supplying a particular strategy.

A search algorithm is a function that takes an instance of SearchProblem as a parameter and returns a sequence of actions that lead to a goal. Let’s keep things really simple. In the same file, add the following function:

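The function is roughly the following. The exact move sequence is an assumption since it depends on the actual tiny_maze layout, so adjust it if your pacman walks into a wall; the point is simply that the “algorithm” returns a hard-coded path.

```python
# Assumed import path for the direction constants; adjust to the repo.
try:
    from pacumen.agents import Direction
except ImportError:
    class Direction:
        NORTH, SOUTH, EAST, WEST, STOP = "North", "South", "East", "West", "Stop"


def tiny_maze_search(problem):
    """Return a hard-coded action sequence intended to solve tiny_maze."""
    s = Direction.SOUTH
    w = Direction.WEST
    return [s, s, w, s, w, w, s, w]
```
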
Now let’s try to run that with the following:

python pacumen.py --layout tiny_maze --pacman SearchAgent -a fn=tiny_maze_search

This command tells the SearchAgent to use tiny_maze_search as its search algorithm. You should find that the pacman agent navigates the maze successfully. And why wouldn’t it? We gave it the very strategy it had to follow, in terms of the exact sequence of moves. In other words, this strategy works correctly only on tiny_maze.

However, that’s not how things would work in actual systems. The goal of machine learning and many AI projects is to have the system find the best approach to solving a problem — like navigating a maze — or learning how to reach a goal in the most efficient manner. Further, we want those algorithms to be general. We don’t want to have to make one algorithm for each maze.

Actual examples of search algorithms would be concepts like a depth first search, a breadth first search, a uniform cost search, and so on. And I bring these up because that’s exactly what you should be thinking about now as we close this post out. If you were told what I just told you — that we want to think about implementing search algorithms — what is your test approach going to be? How do you work with developers and the business to reason about such a system?

What Next?

Well, I hope this was at least an interesting introduction that whetted your appetite for some more things to come. In the next post I’ll get into some of how, as a tester, you have to start conceptualizing these kinds of environments. I’ll also show that while Pac-Man is clearly a contrived example, it’s actually very indicative of the type of work you are likely going to be doing in these contexts.


About Jeff Nyman

Anything I put here is an approximation of the truth. You're getting a particular view of myself ... and it's the view I'm choosing to present to you. If you've never met me before in person, please realize I'm not the same in person as I am in writing. That's because I can only put part of myself down into words. If you have met me before in person then I'd ask you to consider that the view you've formed that way and the view you come to by reading what I say here may, in fact, both be true. I'd advise that you not automatically discard either viewpoint when they conflict or accept either as truth when they agree.
This entry was posted in AI, Exploration.
