next up previous contents index
Next: 14.3.1 Sequential Computer Chess Up: 14 Asynchronous Applications Previous: 14.2.4 Performance Analysis

14.3 Computer Chess

 

As this book shows, distributed-memory, multiple-instruction, multiple-data (MIMD) computers are successful at a large class of scientific computations. As discussed in Section 14.1 and the earlier chapters, these synchronous and loosely synchronous problems tend to have regular, homogeneous data sets, and their algorithms are usually ``crystalline'' in nature. Recognizing this, CP explored a set of algorithms with irregular structure (as in Chapter 12) and asynchronous execution. At the start of this study, it was far from clear what parallel performance to expect; in fact, we achieved good speedups even on these hard problems.

Thus, as an attempt to explore part of this interesting, poorly understood region of algorithm space, we implemented chess on an nCUBE-1 hypercube. Besides being a fascinating field of study in its own right, computer chess is an interesting challenge for parallel computers.

One might also ask, ``Why study computer chess at all?'' We think the answer lies in the unusual position of computer chess within the artificial intelligence world. Like most AI problems, chess requires a program that displays seemingly intelligent behavior in a limited, artificial world. Unlike most AI problems, however, the programmers do not get to make up the rules of this world. In addition, there is a very rigorous procedure for testing the intelligence of a chess program: playing games against humans. Computer chess is thus one area where the usually disparate worlds of AI and high-performance computing meet.

Before going on, let us state that our approach to parallelism (and hence speed) in computer chess is not the only one. Belle, Cray Blitz, Hitech, and the current champion, Deep Thought, have shown in spectacular fashion that fine-grained parallelism (pipelining, specialized hardware) leads to impressive speeds (see [Hsu:90a], [Frey:83a], [Marsland:87a], [Ebeling:85a], [Welsh:85a]). Our coarse-grained approach should be viewed as a complementary, not a conflicting, method. Clearly, the two can be combined.
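To make the contrast concrete, the following is a minimal sketch (not the program described in this chapter) of what coarse-grained parallelism in game-tree search can mean: each legal move at the root is scored by a separate worker, and the results are combined at the end. For simplicity the game is a toy subtraction game rather than chess, the search is plain negamax rather than alpha-beta, and Python threads stand in for hypercube nodes; all names here are illustrative assumptions, not the book's code.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy game: players alternately remove 1-3 stones; whoever takes
# the last stone wins.  Chess stands in the same relation to this
# game as a real search engine does to this sketch.
def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def negamax(stones):
    # Value of the position for the player to move: +1 win, -1 loss.
    if stones == 0:
        return -1  # the previous player took the last stone
    return max(-negamax(stones - m) for m in moves(stones))

def best_move_parallel(stones, workers=4):
    # Coarse-grained decomposition: each root move is searched by
    # its own worker; in a real engine each worker would be a
    # processor searching a subtree.  (Python threads do not give
    # true CPU parallelism; processes or real nodes would.)
    ms = moves(stones)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda m: -negamax(stones - m), ms))
    best = max(range(len(ms)), key=lambda i: scores[i])
    return ms[best], scores[best]
```

A position with seven stones is won by taking three (leaving a multiple of four), so `best_move_parallel(7)` returns `(3, 1)`. The sketch also shows why coarse-grained splitting is imperfect: without shared alpha-beta bounds, each worker searches its subtree in full, which is one source of the search overheads discussed later in this section.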








Guy Robinson
Wed Mar 1 10:19:35 EST 1995