Search algorithm

Chess programs divide the task of deciding which move to play in two parts: a static evaluation function that is applied to the current position, and a search algorithm that recursively tries all moves in the current position and returns the static score after a certain number of moves have been played. The best move is simply the move that leads to the best position. These two topics, search and static evaluation, are normally discussed separately.

Basic search

I use a standard fail-soft alpha-beta search algorithm, the workings of which are discussed in detail on several sites (try Wikipedia). There are a few details worth mentioning:
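Before getting to those details, here is a minimal fail-soft negamax alpha-beta over an abstract game tree, as a reference point. This is a generic sketch, not the engine's actual code: the node structure and demo helper are stand-ins for a real move generator and evaluation function.

```c
#include <limits.h>

#define MAX_CHILDREN 8

/* Abstract game-tree node: a stand-in for a position plus move generation. */
typedef struct node {
    int n_children;
    int score;                        /* static score, from the mover's view */
    struct node *child[MAX_CHILDREN];
} node_t;

/* Fail-soft negamax alpha-beta: returns the best score found, which may
   lie outside the [alpha, beta] window (unlike fail-hard, which clamps). */
static int alpha_beta(const node_t *n, int depth, int alpha, int beta)
{
    int best = INT_MIN;

    if (depth == 0 || n->n_children == 0)
        return n->score;              /* a real engine calls quiescence here */

    for (int i = 0; i < n->n_children; i++) {
        int score = -alpha_beta(n->child[i], depth - 1, -beta, -alpha);
        if (score > best) {
            best = score;
            if (best > alpha)
                alpha = best;
        }
        if (alpha >= beta)
            break;                    /* beta cutoff: opponent avoids this line */
    }
    return best;
}

/* Demo: a root with two leaves scored -3 and 5 from the respective mover's
   point of view; the root value is max(-(-3), -(5)) = 3. */
static int demo_root_value(void)
{
    node_t a = {0, -3, {0}};
    node_t b = {0, 5, {0}};
    node_t root = {2, 0, {&a, &b}};
    return alpha_beta(&root, 2, -1000000, 1000000);
}
```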
Quiescence search

Before calling the static evaluation, we call the quiescence search: this resolves all winning captures and promotions. I don't consider losing captures (even if there might be something to gain from them, that's not likely to be picked up by the quiescence search) or checks (mainly because I don't have a check-generating move generator).

Move ordering

Move ordering is very important. It can be shown quite easily that the alpha-beta algorithm operates most efficiently if we always examine the best move in each position first. That's a problem, because if we knew what the best move was, we wouldn't have to do the search at all. Although we don't know the best move in the current position, we can nevertheless try to make a good guess for it. That way, we can sort moves that are likely to cause a cutoff near the top of the move list. In a sense, we're trying to combine the strength of Shannon type B programs (look only at plausible moves) with the strength of type A programs (it's hard or impossible to give general guidelines for what good moves are, so we really need to look at all of them). There are various ways to guess what a good move might be:
The history heuristic, the counter move heuristic and the combo move heuristic can be thought of as generalisations of the killer move heuristic that work at different levels of the search tree. In a sense, they're a way for killer moves to filter through to other parts of the tree. Note that the idea behind many of these heuristics is not that we look for moves that are good for us per se, but rather for moves that will quickly weed out obvious blunders by the opponent one ply earlier. My move ordering scheme looks like this:
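As an illustration of how such a layered scoring scheme typically looks: the sketch below is generic, and the names, values and exact ordering are invented here, not taken from Jazz or Sjaak.

```c
#include <stdbool.h>

/* Illustrative move-ordering scores; higher scores are searched first.
   All names and values are invented for illustration -- the exact
   scheme in Jazz/Sjaak may differ. */

enum {
    SCORE_HASH_MOVE   = 1000000,  /* best move from the transposition table */
    SCORE_CAPTURE     = 100000,   /* base for captures, refined by MVV/LVA  */
    SCORE_KILLER      = 90000,    /* quiet move that caused a cutoff here   */
    SCORE_HISTORY_MAX = 80000     /* history scores capped below killers    */
};

/* Most Valuable Victim / Least Valuable Attacker: prefer capturing big
   pieces with small ones.  Arguments are piece values (e.g. pawn = 100). */
static int mvv_lva(int victim_value, int attacker_value)
{
    return SCORE_CAPTURE + 10 * victim_value - attacker_value;
}

static int score_move(bool is_hash_move, bool is_capture, bool is_killer,
                      int victim_value, int attacker_value, int history)
{
    if (is_hash_move) return SCORE_HASH_MOVE;
    if (is_capture)   return mvv_lva(victim_value, attacker_value);
    if (is_killer)    return SCORE_KILLER;
    return history < SCORE_HISTORY_MAX ? history : SCORE_HISTORY_MAX;
}
```

The move list is then sorted on these scores before the search loop runs over it.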
The history heuristic is a curious beast. There are a lot of articles saying how great the history heuristic is, and there are also several claims that it doesn't work at all. In my experience, it certainly doesn't help one bit in its original form (any move causing a cut-off gets registered in the history table), and in fact seems to do more harm than good. It appears to perform very badly alongside extensions and reductions: situations where it might have helped don't arise as often, because the moves where it would have helped have been reduced anyway. Jazz has code for the history heuristic (disabled), Sjaak does not.

So far so good, but how do we actually sort the moves? One possibility is to build up a sorted list by insertion sort while scoring the moves. I was too lazy to do something like that, so I just use the standard quicksort to sort the move list after scoring all the moves. This can actually be quite expensive and I'd like to get rid of it if possible. I only call quicksort after trying the hash move, so there is a chance that I never need to call it at all (if the hash move causes a cut-off), but I'd still like to eliminate it entirely. For now, it results in code that works and is tolerably fast, so eliminating it is not a big priority.

Extensions/reductions

As explained above, I like to think of looking for a cutoff as refuting bad moves by the opponent (as opposed to looking for good moves for the player who is to move). In this sense it is important to recognise "delaying tactics" that try to push the refutation across the search horizon. For this reason, some moves need to be searched to a greater depth: the search is extended rather than reduced. However, this needs to be done with some care. Checks are an obvious choice for extension, since they can very easily be used as a delaying tactic. Alternatively, we might extend checks in case we have something to gain from them, but this is less common.
I extend the search depth by one ply if the side to move is in check. This could perhaps be limited in some way, e.g. by not extending checks unless we're close to the search horizon.

Null move

The idea behind trying a null move is to let the side who has just played a move move again. If doing so does not improve their position, then our own position is very strong, no matter what we do. Any reasonable move will fail high, so we may as well cut this branch off before spending any more time on it. There are situations where a null move should be avoided, however. Being in check is an obvious one. Being in a position with zugzwang is another: in such a position, there is no "reasonable move" that will fail high. The first of these conditions is easy to detect, the latter not so much. In Jazz, I skip the null move if there are very few pieces on the board and there are no sliders (which can make a "waiting" or "tempo" move). In Sjaak, I skip the null move if there are fewer than 4 piece types for the side to move. If the null move does not fail high but is refuted, that means the other player has some sort of threat, and we can use this information to improve the search: for example, we can withdraw a piece that is en prise, or avoid a threatened mate.
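The conditions above can be collected into a small guard. This is a hedged sketch: the reduction constant and the helper names in the usage comment are common conventions, not taken from Jazz or Sjaak.

```c
#include <stdbool.h>

/* Null-move pruning guard, following the conditions described above.
   NULL_MOVE_R is the usual depth reduction (commonly 2 or 3). */

#define NULL_MOVE_R 2

static bool null_move_allowed(bool in_check, int piece_types_for_mover)
{
    if (in_check)
        return false;                 /* passing while in check is illegal */
    if (piece_types_for_mover < 4)
        return false;                 /* zugzwang guard, as in Sjaak */
    return true;
}

/* Typical use inside the search (pseudocode; make_null_move, search and
 * unmake_null_move are hypothetical helpers):
 *
 *   if (null_move_allowed(in_check, piece_types)) {
 *       make_null_move();
 *       score = -search(depth - 1 - NULL_MOVE_R, -beta, -beta + 1);
 *       unmake_null_move();
 *       if (score >= beta)
 *           return score;            // fail high: prune this branch
 *   }
 */
```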