Testing Algorithmic Searching, Part 1

The next two posts follow on from the previous one. The goal here is to get you set up to think as a tester in a machine learning environment, specifically in the context of search problems. This post and the next will focus on making sure you have the basics of the algorithms.

Two disclaimers at the start. First, one of attribution: this post contains some videos that I created based on a mash-up of images used in the “Intro to AI” course provided by Dan Klein and Pieter Abbeel.

The second disclaimer is that the contents of this post can be rough going if you have not been exposed to these ideas before. I urge you to work through it anyway. I say that not just because I wrote it and hope it provides value, but because facing cognitive friction is something testers routinely have to do. I will periodically call out aspects of specific relevance to specialist testers.

When you start getting into algorithms, there is a crucial distinction to draw between a state space graph and a search tree.

State Space Graph

A state space graph is a mathematical representation of a search problem. This graph encodes the connectivity between the states, and any given state can appear only once. Each node is a particular state space configuration; the arcs represent successor functions. Arcs, in other words, are the result of an action being taken. The goal test corresponds to a set of goal nodes, so the test checks whether the agent is in that subset of nodes.
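
To make that concrete, here is a minimal sketch in Python of what a search problem’s interface might look like. The names (SearchProblem, start_state, and so on) are my own hypothetical ones, chosen just to show the pieces: a start state, a successor function, and a goal test.

    class SearchProblem:
        """Hypothetical interface for a search problem. Note that the
        state space graph this implies is never built explicitly."""

        def start_state(self):
            """The state the agent begins in."""
            raise NotImplementedError

        def is_goal(self, state):
            """The goal test: is this state in the set of goal states?"""
            raise NotImplementedError

        def successors(self, state):
            """The successor function: the arcs out of a state, returned
            as (next_state, action, cost) triples."""
            raise NotImplementedError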

Once you specify a search problem, then, by definition, this graph exists. But, as developers and testers, you’re certainly not going to try to build it. Except for the simplest of systems, the state space graph is far too large to construct in full. Instead, what you’ll do is use an abstraction of this graph in your algorithms. And that brings us to search trees.

Search Tree

Search trees are not part of the problem specification, unlike the state space graph, which is. In other words, where a state space graph might be part of the requirements (and thus the intent), the search tree is part of the implementation. A search tree is something the algorithms actually build up as they run. That’s an important point: search trees are used in algorithms; state space graphs are not. The former is an actual representation, the latter a mathematical one.

So, throwing on the tester hat, where am I going to put my focus? That’s probably somewhat obvious. But does the state space graph then have any relevance whatsoever? I’ll leave that for you to ponder. You’ll have more chances to ponder that as this post goes on. For now just note that the start state is the root of the search tree and child nodes correspond to successors.

The search tree shows the possible futures you might encounter starting from the start state, following a particular sequence of actions corresponding to each of the paths in the search tree. With the above image we have two possible futures from the start state.

An important point here is that each node is not just a state. In fact, each node encodes an entire path and thus corresponds to a plan to achieve that particular state. It’s also possible for a specific state to appear multiple times in the search tree, because different plans can achieve that same state. In other words, the same state can be reached by different sequences of actions. This is different from the state space graph, wherein a given state can appear only once in the entire graph.
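
That “node encodes a path” idea is easy to see in code. Here is a hedged sketch of a search tree node; the class and field names are hypothetical, but the parent-pointer technique is the standard one:

    class Node:
        """A search tree node: not just a state, but an entire path."""

        def __init__(self, state, parent=None, action=None, path_cost=0):
            self.state = state          # the state this node represents
            self.parent = parent        # the node it was expanded from
            self.action = action        # the action that produced it
            self.path_cost = path_cost  # total cost of the path so far

        def path(self):
            """Walk the parent pointers back to the root to recover the
            plan (the sequence of actions) this node encodes."""
            node, actions = self, []
            while node.parent is not None:
                actions.append(node.action)
                node = node.parent
            return list(reversed(actions))

Two distinct Node objects can hold the same state with different parents, which is exactly the “same state, different plans” situation just described.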

Does that refine your ideas of the relevance of the state space graph? Keep pondering.

Search trees tend to be larger than the state space graph, so you don’t want your algorithms to build them in their entirety any more than you, as a human being, want to build the state space graph in its entirety. This means developers only want the algorithm to build the part of the tree that matters to find a solution to the problem that we’re all trying to solve. As a tester, that gives you a key sensitivity to be considering as well as a warning about the potential observer effect.

Does that make sense to you? I’ll revisit it as we go on.

The Fringe

In the context of building software agents that solve problems, your agent has to expand out potential plans — nodes in the tree. You maintain a fringe of partial plans under consideration:

A fringe is the furthest part of the tree that has been constructed so far. The entire fringe is sometimes referred to as the frontier.

So the idea is that your agent picks one of the nodes in the fringe to expand; the expansion itself is what the successor function does. However, you want to expand as few tree nodes as possible because the more you expand, the more time you are spending solving the problem. That translates to more cost. So we want a least cost solution, which means an effective way to traverse the tree.

But how do we tell the agent how to prioritize which node to pick? Whatever our choice ends up being, that becomes our exploration strategy. Ah! Exploration! I bet your tester senses perked up at that, huh? Indeed, keep that in mind.

Generic Strategy

So let’s just take a moment here and make sure we’re all on the same page.

(1) You have an overall state space graph.

(2) You have a search tree that will be built up.

(3) We build up a tree from the successor function.

(4) And we keep track of a fringe.

(5) The fringe is the furthest part of the tree that we have constructed so far.

(6) You pick a node on the fringe, call the successor function, and grow the tree.

The important ideas here are the fringe, expansion of nodes, and an exploration strategy that guides the expansion and thus the creation of the fringe. That strategy is basically having your algorithm allow the agent to ask the following: which fringe nodes should I explore? Which nodes do I pick first? That strategy determines which part of the tree your agent will explore. And you want the agent to explore as little as possible of the search tree and yet still find a solution.

Tree Search

So what we want is a tree search algorithm, and a visualization of what that means looks like this:

Notice that we pass two things in to TREE-SEARCH: a search problem and a strategy. As a tester, you should be thinking of TREE-SEARCH as a test condition and the problem and the strategy as data conditions applied to that test condition.

The search problem is something that allows us to call successor functions and do goal tests. That’s what we talked about in the last post. So that problem might be “eat all the dots in a given maze in the least time possible.” What that visual is showing you is that we get the initial state or start state. At that point, the fringe initially consists of just that initial state. Now consider the loop in that visual.

  1. If the fringe is empty, then we’ve been growing the search tree but we have nothing left from which we can grow more; every node is a dead end. If we haven’t found a solution by now, we won’t.
  2. If the fringe is not empty, we choose a node according to our strategy.
  3. If the node contains a goal state, we return that node which encodes the path from the root to that goal state. We have a solution!
  4. If the node is not a goal state, then we expand the node. Meaning, we call the successor function, look at all the successor states we get back from that, and add them to the fringe. (A sketch of this loop in code follows this list.)
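
Here is what that loop might look like in Python, building on the hypothetical SearchProblem and Node sketches from earlier. The choose parameter stands in for the strategy, and its signature is my own invention. Notice where the goal test happens: when a node is removed from the fringe, not when it is added.

    def tree_search(problem, choose):
        """Generic tree search. `choose(fringe)` is the strategy: it
        returns the index of the fringe node to expand next."""
        fringe = [Node(problem.start_state())]
        while fringe:                          # empty fringe: no solution
            node = fringe.pop(choose(fringe))  # the strategy picks a node
            if problem.is_goal(node.state):    # goal test on *removal*
                return node.path()             # a solution: a plan
            for state, action, cost in problem.successors(node.state):
                fringe.append(Node(state, node, action,
                                   node.path_cost + cost))
        return None                            # failure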

Tree search is a general algorithm. You can make it more specific by passing in different strategies. This gets into two broad areas of search:

  • Uninformed search: Here the agent has no idea if it’s getting closer to the goal or not. It just keeps exploring and exploring. Once it finds the goal, it will know it. But until that happens, the agent has no idea how close it is. The most common algorithms in this context are depth first search, breadth first search, and uniform cost search.
  • Informed search: Here the agent has some idea of whether it’s moving closer to or further from the goal, via some rule of thumb, or heuristic, which helps the agent make that determination. The most common algorithms here are greedy search and A* (pronounced “ay-star”), and either can be combined with graph search, a variant that avoids expanding the same state more than once.

So a general tree search can become a depth first search, or a breadth first search, or an A* search, and so on if you give it a specific strategy. That’s a pretty key thing. As a developer that’s giving you hints as to how to optimize the creation of these algorithms but, as a tester, that’s also helping you work with the developers to put pressure on the design.
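
To illustrate that key thing, here is how the tree_search sketch above could be specialized purely by swapping in a different strategy. These one-liners are illustrative stand-ins, not efficient implementations; the sections below show the more idiomatic stack, queue, and priority queue versions.

    # LIFO: expand the most recently added node (depth first).
    dfs_strategy = lambda fringe: len(fringe) - 1
    # FIFO: expand the oldest, shallowest node (breadth first).
    bfs_strategy = lambda fringe: 0
    # Cheapest partial plan so far (uniform cost).
    ucs_strategy = lambda fringe: min(range(len(fringe)),
                                      key=lambda i: fringe[i].path_cost)

    # Usage (with any concrete SearchProblem implementation):
    #     solution = tree_search(problem, dfs_strategy)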

For the remainder of this post, we’ll focus on uninformed search. The next post, which will serve as the second part to this one, will get into informed search.

Uninformed Search

General Tree Search

I’m going to spend more time on this general approach to get you up to speed. This should let me take on the next sections a bit more rapidly. That “rough going” I mentioned at the start? Yeah, this is where that can kick in for some folks. First, we have a search graph:

We’re going to start at S and the goal is to get to G. We also have a search tree:

It’s easiest to show this with a video.

That video shows you a progression of how this plays out. Basically what’s happening there is the agent is starting in state S, which is the start state. So the fringe, on the right, just has S at the start. The agent calls the successor function. This means it checks which states can be reached from S. That leads to three possibilities, all of which are put on the fringe:

  • S -> d
  • S -> e
  • S -> p

At this point, our algorithm takes S off the fringe (strikes it out in red in the video) because it’s been explored. We now have three nodes in the fringe. The agent has to pick one. But which one? It doesn’t really matter in this case; let’s say our strategy is arbitrary. Let’s pick d. So then that node is expanded to create:

  • S -> d -> b
  • S -> d -> c
  • S -> d -> e

So note that we have now taken S -> d off the fringe and our total fringe is the paths we haven’t yet explored:

  • S -> e
  • S -> p
  • S -> d -> b
  • S -> d -> c
  • S -> d -> e

Let’s expand S -> d -> e. We take that off the fringe and so on. Eventually the agent ends up at G and realizes it has reached the goal state. So the solution is S -> d -> e -> r -> f -> G.

Note that, as the last part of the video shows, this solution isn’t found until the segment with G is removed from the fringe. The solution is not found merely when G is added as a node. That’s a very important point in search algorithms. So here’s the final state:

This was a very generic tree search where we basically just eye-balled it and made some arbitrary decisions.

Depth First Search

This is the same search graph and search tree shown above but with a different algorithm applied: a depth first search. Let’s use the video for demonstration purposes.

When we start with S and pull it from the fringe, we have S -> d, S -> e and S -> p just as we did before.

All those nodes are of equal depth. That means we have just as good a reason for expanding one node as the others. So we break the tie by choosing a strategy. In this case, let’s say our strategy is alphabetical. In other words, given nodes at equal depth, choose the one that comes first alphabetically. That would be d. So we expand S -> d -> b -> a. Again, we keep going down in depth. That’s why this is depth first.

A reminder here is that the algorithm has not built the search tree ahead of time. So our strategy can hit dead ends. We can see those dead ends here because we’re using a fully built out tree, but the algorithm would not. This is a key point! You can’t let your own understanding of the problem space infect how the solutions are found. You can’t write solutions that take into account what we know. It all has to be based on what the agent can figure out. This kind of observer effect is incredibly important for testers to be aware of.

Eventually a path is found but notice that a lot of exploring was needed to get there.
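
For reference, here is a hedged sketch of depth first search as a standalone function, with the fringe as a LIFO stack. The reverse sort is one way to get the alphabetical tie-break from the walkthrough, assuming the states sort alphabetically; it’s illustrative rather than essential to DFS.

    def depth_first_search(problem):
        """DFS: the fringe is a LIFO stack, so the deepest node is
        always expanded next."""
        fringe = [Node(problem.start_state())]
        while fringe:
            node = fringe.pop()  # take the most recently added node
            if problem.is_goal(node.state):
                return node.path()
            # Push successors in reverse order so the alphabetically
            # first one ends up on top of the stack.
            for state, action, cost in sorted(
                    problem.successors(node.state), reverse=True):
                fringe.append(Node(state, node, action,
                                   node.path_cost + cost))
        return None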

Tester Hat On?

As a tester, I urge you to look at that visual:

Consider what you are seeing there. Look at possible solutions. Does something jump out at you? Hint: testers often think in terms of equivalence partitions or classes. And that’s sometimes looking for things that are the same or would lead to the same effect. Take some time. Study the graph. Think it through.

Breadth First Search

This is a different strategy but acting on the same graph and tree. Let’s consider the video.

So when we start with S and pull it from the fringe, we have S -> d, S -> e and S -> p just as we did before. All the nodes are of equal depth. So, again, we break the tie by choosing a strategy. This time, however, we pick the shallowest node first. But initially the nodes are all equally shallow, so we go alphabetically. We start with d, which puts its three successors on the fringe. That leaves us with e and p, which are equally shallow. So we pick e (due to using an alphabetical “tie breaker”).

So now we have, on the fringe, the following:

  • S -> d -> b
  • S -> d -> c
  • S -> d -> e
  • S -> e -> h
  • S -> e -> r
  • S -> p

So p is the most shallow. No tie-breaking needed. See if you can follow how this works based on what I’ve told you. You’ll note that this is the same general algorithm but just choosing the shallowest node. And you should be able to see the difference in terms of how the solution is found by looking at those colored search tiers.
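
In code, the only change from the DFS sketch is the fringe discipline: a FIFO queue rather than a LIFO stack. Again, this is a sketch built on the same hypothetical classes as before.

    from collections import deque

    def breadth_first_search(problem):
        """BFS: the fringe is a FIFO queue, so the shallowest node is
        always expanded next."""
        fringe = deque([Node(problem.start_state())])
        while fringe:
            node = fringe.popleft()  # take the oldest (shallowest) node
            if problem.is_goal(node.state):
                return node.path()
            # Sorting keeps the alphabetical tie-break among nodes
            # of equal depth.
            for state, action, cost in sorted(problem.successors(node.state)):
                fringe.append(Node(state, node, action,
                                   node.path_cost + cost))
        return None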

Think about what you are seeing here versus what you saw with depth first search. I’m purposely giving fewer details here just so I force you to think about it rather than spell it all out. As a tester, if you were asked for your test strategy in terms of what algorithm you want to use and/or construct, what would be your thoughts?

Uniform Cost Search

Here, again, we have the same search graph and search tree although this time the search graph is annotated with numbers that indicate costs.

That number is the cost of going from one state to another by following a particular path. The cost from S to d, for example, is 3. Let’s consider the video.

Again we start with S. We end up with S -> d, S -> e, and S -> p. And we can see the following:

  • S -> e is the most expensive.
  • S -> p is the cheapest.

So we pick the cheapest. That’s what this kind of algorithm does: picks the cheapest path it can. We keep expanding the cheapest node on the fringe.
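
A sketch of that in code: the fringe becomes a priority queue keyed on the cost of the path so far (Python’s heapq here; the counter is just a tie-breaker so the heap never has to compare Node objects).

    import heapq
    from itertools import count

    def uniform_cost_search(problem):
        """UCS: the fringe is a priority queue, so the cheapest
        partial plan is always expanded next."""
        tie = count()
        fringe = [(0, next(tie), Node(problem.start_state()))]
        while fringe:
            cost, _, node = heapq.heappop(fringe)  # cheapest path first
            if problem.is_goal(node.state):        # tested on removal!
                return node.path()
            for state, action, step in problem.successors(node.state):
                child = Node(state, node, action, cost + step)
                heapq.heappush(fringe, (child.path_cost, next(tie), child))
        return None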

Follow what the video is doing and see if this is getting easier to understand. Consider that we eventually get this path: S -> d -> e -> r -> f -> G. There we have a node on the fringe that achieves the goal. But we don’t pick it next! That’s perhaps hard to see in the video, but keep in mind the algorithm always picks the cheapest one. Here’s the situation I’m talking about:

That next cost, to pick G, would be 10. But we have S -> e on our fringe, with a cost of 9. So the algorithm picks that instead. Expanding that path gives next costs of 11 (the cheaper) and 17. Both of those are more expensive than picking G, with its cost of 10, so G is picked next and the solution is returned.
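
To make that arithmetic concrete, here is that moment simulated with a toy priority queue. The costs come from the walkthrough; the path labels are only for display and the exact successor labels are elided.

    import heapq

    # The fringe at the moment the goal path first appears.
    fringe = [(10, "S->d->e->r->f->G"), (9, "S->e")]
    heapq.heapify(fringe)

    heapq.heappop(fringe)  # (9, "S->e"): cheaper than G, expanded first
    heapq.heappush(fringe, (11, "S->e->..."))  # its successors cost 11...
    heapq.heappush(fringe, (17, "S->e->..."))  # ...and 17
    heapq.heappop(fringe)  # (10, "S->...->G"): now cheapest; goal returned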

So you can see that this approach is somewhat like breadth first, but with costs. The contours are there just to provide a visual representation of equal cost. This algorithm keeps searching until it has excluded everything that’s cheaper than reaching the goal, so it knows for sure that it hasn’t missed a cheaper path that might have achieved the goal.

Go, Tester, Go!

So … what did you think? Assuming you made it to the end, as a tester take stock of your thoughts right now. Are you utterly confused? Or did it all start to make sense? Were your tester instincts kicking in or were you finding them stifled? Did you find the notion of what “we” were doing versus what the “agent” was doing a bit confusing, particularly if you were considering this in terms of requirements that would guide your test design?

Well, hold on to your hats. If you stick with me to the next post, we’ll cover informed search. And then we’re going to get into running my Pacumen project with these algorithms against actual Pac-Man implementations. At that point I’ll be asking you to make some hypotheses about what you expect to find as a result of your testing. You’ll then be able to confirm whether what you actually find matches that.
