Seven Games.

The title of Oliver Roeder’s book Seven Games: A Human History is a misnomer in two ways: It’s not really a book about games, and it’s far more a history of computers than of humans. It is, instead, a history of attempts to use what is now unfortunately referred to as “AI” to tackle the myriad problems posed by seven popular board and card games from human history, from chess to bridge. Each of these games presents programmers with specific, novel issues, and while machine-learning techniques have succeeded in solving some games (like checkers), others have so far proven inscrutable (like bridge) and may remain so forever.

Roeder is a journalist for the Financial Times and clearly a gamer, someone who loves these games for what they are beyond their competitive aspect (although it becomes clear he is a fierce competitor as well). He writes as an experienced player of all seven games in the book, even though his skill level must vary across them – I’d be shocked if he were much of a checkers player, because who on earth in the year of our lord 2024 is a great checkers player? His experience with the games helps infuse a book that could have been a rather dry and grim affair with more than a touch of life, especially as he enters tournaments or otherwise competes against experts in games like poker, Scrabble, and backgammon.

What Roeder is really getting at here, however, is the symbiotic relationship between games and machine learning, which is what everyone now calls AI. (AI is itself a misnomer, and there are many philosophers who argue that there can be no intelligence, artificial or otherwise, without culture.) Games are perfect fodder for training AI models because they tend to present short sets of rules and clear goals, thus giving the code and its coder targets for whatever optimization algorithm(s) they choose. In the case of checkers, this proved simple once the computing power was available; checkers is considered “weakly solved,” with a draw inevitable if both players play perfectly. (Connect 4 is strongly solved; the first player can always win with perfect play.) In the case of bridge, on the other hand, the game may never be solved, both because of its computational complexity and because of the substantial human element involved in its play.
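
If you want to see what “solving” a game actually looks like, here’s a toy sketch in Python – mine, not anything from the book – using a one-heap subtraction game: players alternate taking one to three stones, and whoever takes the last stone wins. Brute force just labels every position a win or a loss for the player to move, which is the same basic idea behind weakly solving checkers, at a microscopic scale.

```python
# Toy illustration of brute-force "solving": a one-heap subtraction game where
# players alternate removing 1-3 stones and the player who takes the last
# stone wins. Every position gets labeled as a win or loss for the player to
# move, assuming both sides play perfectly.

from functools import lru_cache

@lru_cache(maxsize=None)
def player_to_move_wins(stones: int) -> bool:
    """True if the player about to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone, so we lost
    # We win if any legal move leaves the opponent in a losing position.
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    for n in range(1, 11):
        winner = "first player" if player_to_move_wins(n) else "second player"
        print(f"{n} stones: {winner} wins with perfect play")
```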

In one of the later chapters, Roeder mentions P=NP in a footnote, which put an entirely different spin on the book for me. P=NP, also called the P versus NP problem, is one of the six unsolved Millennium Prize Problems* in mathematics; it asks whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time. The answer would have enormous ramifications for computational theory, and could indeed impact human life in substantial ways, but the odds seem to be that P does not equal NP – that the time required to solve these problems grows far faster, as the problems get larger, than the time required to verify their solutions. (For more on this subject, I recommend Lance Fortnow’s book The Golden Ticket, which I reviewed here in 2015.)
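
A toy way to see that asymmetry – my own example, not Roeder’s or Fortnow’s – is subset sum: given a list of numbers and a target, is there a subset that adds up to the target? Checking a proposed answer takes one quick pass through the numbers, while the obvious way to find an answer is to try every one of the 2^n subsets.

```python
# Toy contrast between verifying and solving an NP problem (subset sum).
# Verification of a candidate answer is fast; the naive search is exponential
# in the number of inputs.

from itertools import combinations

def verify(numbers, target, candidate):
    """Quick check: is `candidate` a subset of `numbers` that sums to `target`?"""
    remaining = list(numbers)
    for x in candidate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(candidate) == target

def solve_brute_force(numbers, target):
    """Slow search: try every subset until one works (or none does)."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

if __name__ == "__main__":
    nums, goal = [3, 34, 4, 12, 5, 2], 9
    witness = solve_brute_force(nums, goal)       # exponential in general
    print(witness, verify(nums, goal, witness))   # fast to check: (4, 5) True
```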

*A seventh, the Poincaré Conjecture, is the only one that has been solved to date.

You can see a thread through the seven chapters where the machine-learning techniques adjust and improve as the games become more complex. From there, it isn’t hard to see this as a narrow retelling of the ongoing history of machine learning itself. The early efforts to solve games like checkers employed brute-force methods – examining all possible outcomes and valuing them to guide optimal choices. More complex games present larger decision trees and more possible outcomes; searching them exhaustively would require more processing power and time than we have, often more time than remains in the expected life of the universe (and certainly more than remains in the expected life of our suicidal species), and thus they required new approaches. Some of the attacks on games later in the book allow the algorithm to prune the tree itself and avoid less-promising branches, reducing processing time and power and leading to a less complete but more efficient search.
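
Here’s a minimal sketch of that pruning idea – mine, not code from the book or from any real engine: plain minimax looks at every leaf of a small hand-built game tree, while alpha-beta pruning reaches the same answer after skipping branches that provably can’t change the result.

```python
# Minimax vs. alpha-beta pruning on a tiny hand-built game tree. A "node" is
# either a leaf value (a number) or a list of child nodes. Both searches
# return the same game value; alpha-beta just visits fewer leaves.

def minimax(node, maximizing, visited):
    """Exhaustive search: evaluate every leaf below `node`."""
    if isinstance(node, (int, float)):            # leaf
        visited.append(node)
        return node
    values = [minimax(child, not maximizing, visited) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, alpha, beta, visited):
    """Same result as minimax, but skips branches that cannot affect it."""
    if isinstance(node, (int, float)):            # leaf
        visited.append(node)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta, visited))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                             # prune: opponent avoids this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta, visited))
            beta = min(beta, value)
            if alpha >= beta:
                break                             # prune
        return value

if __name__ == "__main__":
    tree = [[3, 5], [6, 9], [1, 2]]               # depth-2 tree, maximizer to move
    full, pruned = [], []
    print(minimax(tree, True, full), len(full), "leaves visited")    # 6, 6 leaves
    print(alphabeta(tree, True, float("-inf"), float("inf"), pruned),
          len(pruned), "leaves visited")                              # 6, 5 leaves
```

Real engines layer evaluation functions, move ordering, and learned heuristics on top of this, but the basic trade Roeder describes, completeness for speed, is right there in the break statement.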

Roeder does briefly acknowledge that these endeavors also have a hidden cost in energy. His anecdotes include Deep Blue versus Kasparov and similar matches in poker and go, some of which gained wide press coverage for their results … but not for the energy consumed by the computers that competed in these contests. We’re overdue for a reckoning on the actual costs of ChatGPT and OpenAI and their myriad brethren in silicon, because as far as I can tell, they’re just the new crypto when it comes to accelerating climate change. It’s nice that you can get a machine to write your English 102 final paper for you, or lay off a bunch of actual humans and let AI do some things, but I’d like to see you pay the full cost of the electricity you’re using to do it.

I’ve focused primarily on one aspect of Seven Games because that’s what resonated with me, but I may have undersold the book a little in the process. It’s a fun read in many ways because Roeder tells good stories for just about all seven of the games in the book – I might have done without the checkers chapter, because that’s just a terrible game, but it is an important rung in the ladder he’s constructing – and puts himself in the action in several of them, notably in poker tournaments in Vegas. There’s also a warning within the book about the power of so-called AI, and I think inherent in that is a call for caution, although Roeder doesn’t make this explicit. It seemed a very timely read even though I picked it up on a friend’s recommendation because it’s about games. Games, as it turns out, explain quite a bit of life. We wouldn’t be human without them.

Next up: Dark Matter of the Mind: The Culturally Articulated Unconscious, a book by Daniel Everett, a former evangelical Christian missionary who became an atheist and turned to linguistics after his time trying to convert the Amazonian Pirahã tribe. He appeared at length in last year’s outstanding documentary The Mission.
