AI is a huge field, and it’s often hard to know where to start learning. If you’ve done some programming, you’ve probably learned something about ‘classical search’ – things like breadth-first or depth-first search of trees and graphs. This serves as a pretty decent introduction to artificial intelligence – searching through various states, looking for the best one, is at the core of building any sort of intelligence. But these basic searches make plenty of assumptions that don’t hold in the real world – for that, you’ll need better algorithms:

In the basic search algorithms mentioned above, the explored part of the search space is kept in memory – when a goal is found, the solution is the path from the initial state to the goal state. But in many cases the path doesn’t matter – only the resulting state does. This is illustrated well by the familiar ‘8 Queens’ problem –
[quote]Given a standard chess board and 8 queens, position them in such a way as to have no two queens ‘attacking’ each other – thus, no queen may be on the same row, column, or diagonal as another[/quote]

It’s a fairly simple problem, but one that illustrates the importance of viable algorithms well. On a small board like the standard 8×8, it would be fairly simple to brute force the problem – simply test every set of queen positions. If you build the brute force algorithm under the fair assumption that there can be only one queen per row, then the search space is very small – 8! (40,320) positions or fewer. However, this search space grows rapidly. If you extend the problem to ’12 Queens’, you’re now dealing with 12! (479,001,600) states. On a modern computer, that amount is still fairly testable – but move to 15 or 20 queens (1,307,674,368,000 or 2,432,902,008,176,640,000 states) and the problem rapidly goes from taking seconds to taking years.
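To make that concrete, here’s a minimal brute-force sketch in Python (the function name and structure are my own, not a canonical implementation). It treats the column assignments as a permutation of the rows – which enforces one queen per row *and* per column – and rejects any arrangement where two queens share a diagonal:

```python
from itertools import permutations

def count_queen_solutions(n):
    """Brute force: one queen per row, columns as a permutation of
    0..n-1, rejecting any placement where two queens share a diagonal."""
    count = 0
    for cols in permutations(range(n)):
        # queens i and j share a diagonal exactly when their column
        # distance equals their row distance
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count

print(count_queen_solutions(8))  # 92 distinct solutions exist for n = 8
```

Even with that pruning, the loop inspects all 40,320 permutations for n = 8 – and nearly half a billion at n = 12, which is exactly the blow-up described above.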

Luckily, however, local search algorithms handle this problem. At their heart, local search algorithms hold only the current state. At each step, the ‘surrounding’ states are evaluated based on some chosen heuristic, the best one is chosen, and the process repeats. Simple, right?

Not quite. This search algorithm is known as ‘hill-climbing‘, and it has one glaring problem – it’s not always guaranteed to find the solution. To understand why, consider the following concept:
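To see both the loop and the failure mode, here’s a sketch of steepest-ascent hill climbing on n-queens (the helper names and the ‘number of attacking pairs’ heuristic are my own choices, not a canonical implementation). Since we’re minimising conflicts here, ‘climbing’ means moving to the neighbour with the fewest attacking pairs:

```python
import random

def conflicts(cols):
    """Heuristic: number of attacking queen pairs (0 means solved)."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def hill_climb(n=8, rng=random):
    """From a random placement (one queen per row), repeatedly move to
    the best neighbouring placement until no neighbour improves.  The
    result is always a local optimum -- but not necessarily a solution."""
    cols = [rng.randrange(n) for _ in range(n)]
    while True:
        best, best_cols = conflicts(cols), None
        if best == 0:
            return cols                       # reached the global optimum
        # neighbours: move a single queen anywhere within its own row
        for row in range(n):
            for col in range(n):
                if col != cols[row]:
                    cand = cols[:row] + [col] + cols[row + 1:]
                    if conflicts(cand) < best:
                        best, best_cols = conflicts(cand), cand
        if best_cols is None:
            return cols                       # stuck at a local optimum
        cols = best_cols
```

Run it a few times and it will sometimes return a true solution (zero conflicts), and sometimes a placement where no single-queen move helps – a local maximum.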

### The State-Space Landscape

The ‘state-space landscape’ is a way to describe the overall structure of the states – the landscape’s domain (x-axis) is the set of possible states, and its range (y-axis) is the value of each state according to whichever heuristic we use to evaluate it. This two-dimensional graph is a simplification – depending on the heuristic and branching factor, the real landscape may have many more dimensions.

On the chart are several points of interest. (a) is the global maximum – our ‘goal’ state, the one for which the heuristic gives the highest value. At (b) and (c), however, we run into problems. Hill-climbing, obviously, cannot hop from hill to hill. If it hits the local maximum at (b) or the ‘flat’ local maximum at (c), it has no way of moving further – every state in its vicinity returns a value less than (or, on the flat section, equal to) the current one. So how can we solve this problem?

### Variants on Hill Climbing

The simplest way to solve this, and definitely the worst, is **random restart hill climbing**. This simply means that when the hill-climbing algorithm hits a local maximum, the process is restarted from another random state. In this way, the algorithm is guaranteed to eventually return the global maximum – given enough restarts, it will land in a spot from which hill climbing doesn’t get trapped at a local maximum or a flat point.
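In sketch form (using a made-up one-dimensional ‘landscape’ over the integers 0–99 rather than the queens board, with helper names of my own), random restart is just a loop around plain hill climbing that remembers the best local maximum found so far:

```python
import math, random

def value(x):
    """A bumpy illustrative 'landscape' on 0..99 with several local maxima."""
    return 10 * math.sin(x / 5) - abs(x - 40) / 10

def neighbors(x):
    """Neighbouring states: one step left or right, staying in 0..99."""
    return [n for n in (x - 1, x + 1) if 0 <= n <= 99]

def hill_climb(start):
    """Plain hill climbing: move to the better neighbour until stuck."""
    current = start
    while True:
        nxt = max(neighbors(current), key=value)
        if value(nxt) <= value(current):
            return current                    # a local maximum
        current = nxt

def random_restart(restarts=25, rng=random):
    """Restart from random states; keep the best local maximum found."""
    results = [hill_climb(rng.randrange(100)) for _ in range(restarts)]
    return max(results, key=value)
```

Each restart is independent, so the odds of every single run landing on the same bad hill shrink exponentially with the number of restarts.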

There are better solutions, however. **Simulated annealing** is another hill-climbing variant. The method is based on annealing in metallurgy – heating a metal and letting it cool slowly so that it settles into a strong, low-energy structure. In the algorithm, a variable *T* starts at a high value and slowly decreases. At each step, a successor is picked at random from the current state’s neighbours; a better successor is always accepted, while a worse one is accepted with a probability that depends on *T* and on how much worse it is.

At the start of the algorithm, when *T* is high, worse moves are accepted often, so the algorithm can ‘bounce’ out of any local maxima. As *T* lowers, however, the process resembles plain hill-climbing more and more. If *T* is lowered slowly enough, the algorithm will return the global maximum with probability approaching 1.
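Here’s a minimal sketch (the geometric cooling schedule, parameter values, and names are my own choices rather than any canonical version). A worse neighbour is accepted with probability e^(Δ/T), so a high *T* means nearly random movement and a tiny *T* means nearly pure hill climbing:

```python
import math, random

def simulated_annealing(value, start, neighbors, t0=10.0, cooling=0.99,
                        steps=2000, rng=random):
    """Accept a better neighbour always; accept a worse one with
    probability exp(delta / T).  T decays geometrically, so the walk
    gradually turns into ordinary hill climbing.  Returns the best
    state visited along the way."""
    current = best = start
    t = t0
    for _ in range(steps):
        nxt = rng.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / t):
            current = nxt
            if value(current) > value(best):
                best = current
        t *= cooling
    return best
```

Pass in whatever `value` and `neighbors` functions describe your landscape – for n-queens, for instance, `value` could be the negated number of attacking pairs.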

Then there is **local beam search**. This method is very similar to hill-climbing, with one exception: instead of a single current state, it keeps *k* of them. At each step, the neighbors of every current state are pooled together and evaluated, and the *k* best are kept. Because it starts from several random states at once, this algorithm is much more likely to return the global maximum than hill-climbing alone.
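As a sketch (again on an illustrative one-dimensional integer landscape; the helper names are mine), keeping the *k* best states from the pooled neighbours at each step looks like this:

```python
import heapq

def local_beam_search(value, starts, neighbors, steps=200):
    """Keep k states; each step, pool every neighbour of every current
    state and keep the k best overall.  Stops early once the beam
    stops changing."""
    k = len(starts)
    states = list(starts)
    while steps > 0:
        steps -= 1
        pool = set(states)
        for s in states:
            pool.update(neighbors(s))
        best_k = heapq.nlargest(k, pool, key=value)
        if set(best_k) == set(states):
            break              # no state anywhere in the pool improves
        states = best_k
    return max(states, key=value)
```

Note the key difference from *k* independent hill climbs: the *k* best successors are chosen from the *combined* pool, so the states share information about where the landscape looks promising.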

Even local beam search is not without its problems, though. Because the best states are chosen from the entire pool of next states, this method tends to end up ‘bunching up’; though the states start randomly along the landscape, if certain areas are significantly higher valued than others, the states will all ‘hop’ across to the better areas. This means that while the probability of returning the global maximum is higher, it still isn’t guaranteed.

This is where **stochastic local beam search** comes in. Just like local beam search, this algorithm keeps many states at once. However, when the pool of successor states is formed, each is chosen only with a certain probability – higher-valued states are the most likely to be kept, but lower-valued ones still get chosen reasonably often. This keeps the population spread across the landscape, making it far less likely that the search gets stuck at a local maximum.
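A sketch of the stochastic variant (the sampling scheme and names are my own; one common alternative weights successors by e^(value/T) instead): successors are sampled in proportion to their shifted values rather than deterministically taking the *k* best:

```python
import random

def stochastic_beam_search(value, starts, neighbors, steps=100, rng=random):
    """Keep k states, but sample the next generation from the pooled
    neighbours with probability proportional to (shifted) value, so
    lower-valued states still survive sometimes.  Tracks the best
    state ever seen."""
    k = len(starts)
    states = list(starts)
    best = max(states, key=value)
    for _ in range(steps):
        pool = []
        for s in states:
            pool.extend(neighbors(s))
        lo = min(value(s) for s in pool)
        weights = [value(s) - lo + 1e-9 for s in pool]  # keep weights > 0
        states = rng.choices(pool, weights=weights, k=k)
        cand = max(states, key=value)
        if value(cand) > value(best):
            best = cand
    return best
```

The shift by the pool’s minimum is just to make the sampling weights positive; the relative preference for higher-valued states is what keeps the search moving uphill on average.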

### Other Algorithms, and Moving Further

The few local search algorithms presented here are only a small sample of those created over the years. Not only that, but local search is only one small part of building intelligent agents, and an even smaller part of AI at large. It is a good place to start, however.

*If you’re interested in learning more about AI or the algorithms that are a part of it, leave some feedback in the comments below – we’d love to learn more about what our readers are interested in, and it helps us tailor our articles better to you.*