The Emperor of All Maladies.

Siddhartha Mukherjee, an Indian-born American oncologist who trained at the Dana-Farber Cancer Institute in Boston, won the Pulitzer Prize for General Nonfiction for his 2010 tome The Emperor of All Maladies: A Biography of Cancer, his first book and an enormous undertaking – an exhaustive attempt to chronicle the history of the disease itself and the ongoing scientific fight to cure it. Interspersed with anecdotes from his own oncology work, including several patients he treated – some who survived the disease, and many who did not – Emperor covers a truly incredible amount of ground, often in more detail than I needed to understand the story, and presents a sobering picture of how endless the efforts to treat and cure cancer will be, given the disease’s nature and its ability to defeat our best weapons against it.

Mukherjee goes back to ancient Egypt and Greece to give us the earliest known examples of the disease’s appearance and explain how it got its name – it’s from the Latin word meaning ‘crab,’ and the word carcinoma comes from the Greek word for the same – but the bulk of the history in this book starts in the mid-19th century with the first real identification of a specific cancer, leukemia. The story wends its way through the late part of that century with the advent of radical mastectomies to remove breast cancer, disfiguring surgeries that would remove many muscles beyond the breasts and that were the brainchild of the coke-addicted surgeon William Halsted, who also conceived the modern residency program for new doctors that forces them to operate without sleep. We get the discovery that radiation causes cancer, and the related discovery that it might treat cancer as well, as would certain drugs that we now put under the umbrella of chemotherapy. Mukherjee takes the science thread all the way through what were, at the time, the latest developments in oncology treatment and research, including the ongoing identification of oncogenes (genes that, when switched ‘on,’ can produce cancer), proto-oncogenes (genes that become oncogenes with mutations), and anti-oncogenes (tumor-suppressing genes); and therapies that target specific cancer subtypes based on their genotypes – such as Herceptin, which has proven exceptionally effective against breast and other cancers with the HER2 oncogene.

The science bits – my favorite, of course – are interspersed with much of the story of the American public policy fight over cancer, which led to a so-called “War on Cancer,” the passage of the 1971 National Cancer Act to boost the National Cancer Institute, and many breathless pronouncements that we were mere years away from finding a cure. The narrative lags at several points here – the origin story of the Jimmy Fund’s “Jimmy,” real name Einar Gustafson, is the big exception – although it serves as a reminder of how credulous the world was, including early researchers into oncology, about our ability to ‘beat’ or cure cancer. Cancer is not just one disease; it is many, probably hundreds, of diseases that all share the common characteristic of abnormal cell growth, but that can differ substantially by their origin in the body, and even for a specific source or organ can come in vastly diverse forms that require different, targeted treatments. The above-mentioned Herceptin works on HER2+ cancers, mostly breast cancer but sometimes appearing in gastric or ovarian cancers; it will be ineffective against HER2-negative cancers. Someone with ‘breast cancer’ can have any of several forms of the disease – each of which will respond in totally different ways to treatments. This is good news and bad news; the more we know about specific forms of cancer, the better that scientists can come up with targeted treatments to attack them, but there are also far more forms of cancer than we’d ever realized in the history of our fight against the disease. The single ‘cure for cancer’ is probably a chimera, because cancer is not just one thing, but a common attribute of many diseases, and stopping that attribute – rampant cell division – would kill regular cells too.

The Emperor of All Maladies is kind of a depressing read, both because of the awful outcomes for some of the patients described and because the outlook for the future of the disease is not that great. Yes, the medical world continues to search for and find treatments for specific cancers, some of which are the most effective drugs in the history of oncology, but it’s also clear that if your specific cancer isn’t one of those, the medical response is the same drug cocktail approach that has been the norm for decades – better than it was, and with the benefit of drugs to help combat nausea, but still an ordeal for the patient with modest success rates. And finding Herceptin-like advances for all cancers will take many years and billions of dollars that may not be available without a massive public investment. Dr. Mukherjee has put together a remarkable work of research and insight, written with great feeling for the individual patients fighting their cancers, but I left this book feeling worse about the war on cancer than I ever had before.

Next up: Dan Simmons’ The Fall of Hyperion.

The Tyranny of Metrics.

A scout I’ve seen a few times already this spring on the amateur trail recommended Jerry Muller’s brief polemic The Tyranny of Metrics, a quick and enlightening read on how the business world’s obsession with measuring everything creates misaligned incentives in arenas as disparate as health care, education, foreign aid, and the military, and can lead to undesirable or even counterproductive outcomes. With the recent MLB study headed by physicist Prof. Alan Nathan that found, among other things, that players’ attempts to optimize their launch angles haven’t contributed to rising home run rates, the book is even somewhat applicable to baseball – although I think professional sports, especially our favorite pastime, do offer a good contrast to fields where the focus on metrics leads people to measure and reward the wrong things.

The encroachment of metrics on education is probably the best known of the examples that Muller provides in the book, which is strident in tone but measured (pun intended) in the way he supports his arguments. Any reader who has children in grade school now is familiar with the heavy use of standardized testing to measure student progress, which is then in turn used to grade teacher performance and track outcomes by schools as well, which can alter funding decisions or even lead to school takeovers and closings. Of course, I think it’s common knowledge at this point that grading teachers on the test performance of their students leads teachers to “teach to the test,” eschewing regular material, which may be important but more abstract, in favor of the specific material and question types to be found on these tests. My daughter is in a charter school in Delaware, and loses more than a week of schooldays each year to these statewide tests, which, as far as I can tell, are the primary way the state tracks charter school performance – even though charters nationwide are rife with fraud and probably require more direct observation and evaluation. That would be expensive and subjective, however, so the tests become a weak proxy for the ostensible goal in measurement, allowing the state to point and say that these charters are doing their jobs because the student test scores are above the given threshold.

The medical world isn’t immune to this encroachment, and Muller details more pernicious outcomes that result from grading physicians on seemingly sensible statistics like success or mortality rates from surgeries. If a surgeon at a busy hospital knows that any death on the operating table during a surgery s/he performs will count, so to speak, against his/her permanent record, the surgeon may choose to avoid the most difficult surgeries, whether due to the complexity of the operations or risk factors in the patients themselves, to avoid taking the hit to his/her surgical batting average. Imagine if you’re an everyday player in the majors, entering arbitration or even free agency, and get to pick the fifteen games you’re going to skip to rest over the course of the season. If your sole goal is maximizing your own statistics to thus increase your compensation, are you skipping Clayton Kershaw and Max Scherzer, or skipping Homer Bailey and some non-prospect spot starter?

Muller mentions sports in passing in The Tyranny of Metrics but focuses on other industries more important to society and the economy as a whole; that’s probably a wise choice, as the increased use of metrics in sports is less apt than the other examples he chooses in his book. However, there are some areas where his premise holds true, with launch angle a good one to choose because it’s been in the news lately. Hitters at all levels are now working with coaches, both with teams and private coaches, to optimize their swings to maximize their power output. For a select few hitters, it has helped, unlocking latent power they couldn’t get to because their swings were too flat; for others, it may help reduce flyouts and popups and get some of those balls the hitter already puts in the air to fall in for hits or go over the fence. But for many hitters, this emphasis on launch angle hasn’t produced results, and there are even players in this year’s draft class who’ve hurt themselves by focusing on launch angle – knowing that teams measure it and grade players in the draft class on it – to the exclusion of other areas of their game, like just plain hitting. Mike Siani of William Penn Charter has cost himself a little money this spring for this exact reason; after working with a coach this offseason to improve his launch angle, he’s performed worse for scouts, becoming more pull-conscious and trying to hit for power he doesn’t naturally possess. He’s a plus runner who can field, but more of an all-fields hitter who would benefit from just putting the ball in play and letting his speed boost him on the bases.
Because many teams now weigh such Trackman data as launch angle, spin rate, and extension heavily in their draft process, either boosting players who score well in those areas or excluding those who don’t, we now see coaches trying to ‘teach to the test,’ and that approach will help only a portion of the draft class while actively harming the prospects of many others.

At barely 220 pages, The Tyranny of Metrics feels like a pamphlet version of what could easily be a heavy 500-page academic tome, recounting all of the ways in which the obsession with metrics produces less than ideal results while also explaining the behavioral economics principles that underlie such behavior. If you have some of that background, or just don’t want it (understandable), then Muller’s book is perfect – a concise argument that should lead policymakers and business leaders to at least reconsider their reliance on the specific metrics they’ve chosen to measure employee performance. Using metrics may be the right strategy, but be sure they measure what you want to measure, and that they’re not skewing behavior as a result.

Next up: I’m currently reading Ray Bradbury’s short story collection I Sing the Body Electric!.

Whistling Vivaldi.

In this era of increased awareness of cognitive biases and how they affect human behavior, stereotype threat seems to be lagging behind similar phenomena in its prevalence in policy discussions. Stereotype threat refers to how common stereotypes about demographic groups can then affect how members of those groups perform in tasks that are covered by the stereotypes. For example, women fare worse on math tests than men because there’s a pervasive stereotype about women being inferior at math. African-American students perform worse on tests that purport to measure ‘intelligence’ for a similar reason. The effect is real, with about two decades of research testifying to its existence, although there’s still disagreement over how strong the effect is in the real world (versus structured experiments).

Stanford psychology professor Claude Steele, a former provost at Columbia University and himself African-American, wrote a highly personal account of what we know about stereotype threat and its presence in and effects on higher education in the United States in Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. Steele blends personal anecdotes – his own and those of others – with the research, mostly in lab settings, that we have to date on stereotype threat, which, again, has largely focused on demonstrating its existence and the pernicious ways in which it can affect not just performance on tests but decisions by students on what to study or even where to do so. The resulting book, which runs a scant 200 pages, is less academic in nature than Thinking, Fast and Slow and its ilk, and thus a little less intellectually satisfying, but it’s also an easier read and I think the sort of book anyone can read regardless of their background in psychology or in reading other books on human behavior.

The best-known proofs of stereotype threat, which Steele recounts throughout the first two thirds of the book, come from experiments where two groups are asked to take a specific test that encompasses a stereotype of one of the groups – for example, men and women are given a math test, especially one where they are told the test itself measures their math skills. In one iteration, the test-takers are told beforehand that women tend to fare worse than men on tests of mathematical abilities; in another iteration, they’re told no such thing, or something irrelevant. Whether it’s women and math, blacks and intelligence, or another stereotype, the results are consistent – the ‘threatened’ group performs worse than expected (based on predetermined criteria like grades in math classes or scores on standardized math tests) when they’re reminded of the stereotype before the test. Steele recounts several such experiments, even some that don’t involve academic goals (e.g., whites underperforming in tests of athleticism), and shows that not only do the threatened groups perform worse, they often perform less – answering fewer questions or avoiding certain tasks.

Worse for our academic world is that stereotype threat appears to lead to increased segregation in the classroom and deters threatened groups from pursuing classes or majors that fall into the stereotyped category. If stereotype threat is directly* or indirectly convincing women not to choose STEM majors, or steering African-American students away from more academically rigorous majors or schools, then we need policy changes to try to address the threat and either throttle it before it starts or counteract it once it has begun. And Steele argues, with evidence, that stereotype threat begins much earlier than most people aware of the phenomenon would guess. Stereotype threat can be found, again through experiment, in kids as young as six years old. Marge and Homer may not have taken Lisa’s concerns about Malibu Stacy seriously, but she was more right than even the Simpsons writers of the time (who were probably almost all white men) realized.

* For example, do guidance counselors or academic advisors tell female students not to major in math or engineering? Do they discourage black students from applying to the best possible colleges to which they might gain admission?

To keep Whistling Vivaldi readable, Steele intersperses his recounting of academic studies with personal anecdotes of his own or of students and professors he’s met throughout his academic career. The anecdote of the title is almost painful to read – it’s from a young black man who noticed how differently white pedestrians would treat him on the street, avoiding eye contact or even crossing to the other side, so he adopted certain behaviors, not entirely consciously, to make himself seem less threatening. One of them was whistling classical music, like that of Vivaldi. Other stories demonstrate subtle changes in behavior in class that also result from stereotype threat, and show how students in threatened groups perform better in environments where the threat is diminished by policies, positive environments, or sheer numbers.

Stereotype threat is a major and almost entirely unaddressed policy issue for teachers, principals, and local politicians, at the very least. Avoiding our own use, even in jest, of such stereotypes can help start the process of ending how they affect the next generation of students, but the findings Steele recounts in Whistling Vivaldi call for much broader action. It’s essential reading for anyone who works in or wishes to work in education at any level.

Next up: Michael Ondaatje’s The English Patient.

A Brief History of Infinity.

Infinity is a big topic, to put it mildly. The mere concept of a limitless quantity has vexed mathematicians, philosophers, and theologians for over two millennia. The Greeks developed some of the first infinite series, some divergent (they grow without bound) and some convergent (they approach a finite limit), with Zeno making use of these concepts in some of his famous paradoxes. Galileo is better known for his observations in astronomy and work in optics, but he developed an early paradox that he argued meant that we couldn’t compare the sizes of infinite sets in a meaningful way, showing that, although we know intuitively that there are more integers in total than there are integers that are perfect squares, you can map the integers to the perfect squares in a 1:1 correspondence that appears to show that the two sets are the same size. Georg Cantor later explained this paradox in his development of set theory, coining the aleph terminology for infinite sets, and then went mad trying to further his theories of infinity, a math-induced insanity that later afflicted Kurt Gödel in his work on incompleteness. There remain numerous – dare I say infinite? – unsolved problems in mathematics that revolve around infinity itself or whether there are an infinite number of some entity, such as twin primes or perfect numbers, in the infinite set of whole numbers or integers.
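Galileo’s pairing is easy to sketch in a few lines of code – this little snippet is my own illustration (the function name is mine, not anything from Clegg’s book): every positive integer n gets matched with exactly one perfect square, n², so neither set ever “runs out” before the other, even though the squares look far sparser on the number line.

```python
# Galileo's paradox: the perfect squares seem sparser than the integers,
# yet the map n -> n*n pairs the two infinite sets off one-to-one,
# with nothing left over on either side.
def galileo_pairing(n_terms):
    """Return the first n_terms of the one-to-one correspondence n <-> n**2."""
    return [(n, n * n) for n in range(1, n_terms + 1)]

print(galileo_pairing(5))  # [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
```

Cantor’s insight was to turn this kind of pairing into the very definition of “same size” for infinite sets: two sets have the same cardinality exactly when such a one-to-one correspondence exists.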

Science writer Brian Clegg attempts to make these topics accessible to the lay reader in his book A Brief History of Infinity, part of the Brief History series from the imprint Constable & Robinson. Rather than delving too far into the mathematics of the infinite, which would require more than passing introductions to set theory, the transfinite numbers, and integral calculus, Clegg focuses on the history of infinity as a concept in math and philosophy, going back to the ancient Greeks, walking through western scholars’ troubles with infinity (and objections from the Church), telling the well-known story of Newton and Leibniz’s fight over “the” calculus, and bringing the reader up through the works of Cantor, Gödel, and other modern mathematicians in illuminating the infinite both large and small. (It’s $6 for the Kindle and $5 for the paperback as I write this.)

Infinity can be inconvenient, but we couldn’t have modern calculus without it, and it comes up repeatedly in other fields including fractal mathematics and quantum physics. Sometimes it’s the infinitely small – the “ghosts of departed quantities” called infinitesimals that Newton and Leibniz required for integration – and sometimes it’s infinitely large, but despite several millennia of attempts to argue infinity out of mathematics, there’s no avoiding its existence and even the necessity of using it. Clegg excels when recounting great controversies over infinity from the history of math, such as the battle between Newton and Leibniz over who invented the calculus, or the battle between Cantor and his former teacher Leopold Kronecker, who disdained not just infinity but even the transcendental numbers (like π, e, or the Hilbert number) and actively worked to prevent Cantor from publishing his seminal papers on set theory.

Clegg’s book won’t likely satisfy the more math-inclined readers who want a crunchier treatment of this topic, especially the recent history of infinity from Cantor forward. Cantor developed modern set theory and published numerous proofs about infinity, proving that there are at least two distinct sizes of infinity (the integers, with cardinality aleph-null, are infinite but strictly less numerous than the real numbers; the claim that the reals’ cardinality is the next aleph, aleph-one, is the famous continuum hypothesis). Aleph notation measures the cardinality of infinite sets – how numerous they are – rather than treating infinity as a single quantity. I also found Clegg’s discussion of Gödel’s incompleteness theorems rather … um … incomplete, which is understandable given the theorems’ abstract nature, but also meant Gödel earned very little screen time in the book other than the overemphasized parallel between his own descent into insanity and Cantor’s. I was disappointed that he didn’t get into Russell’s paradox*, which is a critical link between Cantor’s work (and Hilbert’s hope for a resolution in favor of completeness) and Gödel’s finding that completeness was impossible.

* Let R be the set of all sets that are not members of themselves. If R is a member of itself, then by its own definition it is not a member of itself; if R is not a member of itself, then it qualifies for membership in R and must be a member of itself – a contradiction either way.
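For those who prefer it compact, the paradox can be written symbolically (my notation here, not something from Clegg’s book):

```latex
R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R
```

Each side of the biconditional forces the other, so no consistent set theory can admit R as a set – which is why modern axiomatic set theories restrict the ways in which sets may be formed.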

Clegg does a much better job than David Foster Wallace did in his own book on infinity, Everything and More: A Compact History of Infinity, which tried to get into the mathier stuff but ultimately failed to make the material accessible enough to the reader (and perhaps exposed the limits of Wallace’s knowledge of the topic too). This is a book just about anyone who took one calculus class can follow, and it has enough personal intrigue to hold the reader’s attention. My personal taste in history of science/math books leans towards the more technical or granular, but I wouldn’t use that as an indictment of Clegg’s approach here.

Next up: I’m reading another Nero Wolfe mystery, after which I’ll tackle Michael Ondaatje’s Booker Prize-winning novel The English Patient.

Thank You for Being Late.

Thomas Friedman’s Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations is a solid book about the fast-moving present and immediate future written by a man whose prose is firmly, almost embarrassingly stuck in the past. Friedman has obviously thought deeply about the topics in this collection of connected essays, and talked to many experts, and there are many insights here that would be useful to almost anyone in or soon to enter the American workforce, as well as to the people who are attempting to manage and regulate this fast-moving economy. It was just hard to get through the clunky writing and jokes that don’t even rise to dad level.

Friedman’s main thesis here is that the world is accelerating, and many people – I think his main audience is Americans, although it’s not limited to them – are unprepared for it. Technology has substantially increased the pace of change since the Industrial Revolution, and 100-plus years of acceleration now have the developed world changing so quickly that it no longer takes a full human generation to churn through more than one generation of tech. These technologies also collapse borders, threaten the sovereignty of states, and increase economic inequality. Everyone reading this likely knows about the debate over automation and machine learning (please, stop calling it AI; they are not the same thing), but Friedman is arguing that we need policy makers at all levels to accept it as a given and respond with policies that produce a populace better equipped to cope with it – and that people themselves accept that continuous learning is likely to be a part of their entire working lives.

Friedman refers to the cloud – a term I’m not 100% sure he even understands – as “the supernova,” a pointless and confusing substitution of a fabricated term for a more commonly accepted one, and then refers back to it frequently throughout the book as the source of much of this technological change. He’s certainly correct that the power of distributed computing has allowed us to solve more problems than we were ever able to solve previously, no matter how many chips you were able to cram into one box; he also gives the sense that he thinks P = NP, that this accelerating rate of growth in computing firepower will eventually be able to solve problems that, in nonmathematical terms, probably can’t be solved in a reasonable time frame. And Moore’s law, which he cites often, has changed in the last few years, as the doubling time for the number of transistors Intel et al. can put on a chip has stretched from 18-24 months to more like 30, and with Intel projecting to reach its 10 nm process node this year, we’re probably butting up against the limits of particle physics.

The strongest aspects of Thank You for Being Late are Friedman’s exhortations to readers to accept that the old idea of learning one job and then doing it for 40 years is probably dead. Most jobs, even those we might once have spoken of dismissively as blue-collar or low-skilled, now require a greater knowledge of and comfort with technology. (There’s an effective CG commercial out now for University of Phoenix, where we see a mom working in a factory where all of the workers are slowly replaced by machines until one day the supervisor comes for her. She eventually pursues some sort of IT degree through the for-profit school, and the commercial ends with her walking through stacks of servers.) He lauds companies like AT&T that have already set up programs for employees to take new courses and then make it easier for those employees to identify new jobs within the company for which they qualify – or could try to qualify with further learning. He also discusses municipal and NGO efforts to build job sites that connect people and their existing skills with learning and employment opportunities.

There is, however, a bit of a Pollyanna vibe about Friedman, who refers to himself repeatedly as an optimist, and who seems to think that more people in the American working class have the time to take classes after hours – or that they have sufficient background to go get, say, a certificate in data science. I looked up some of the programs he mentions in the book; the one related to data science expected students to come in with significant knowledge of programming or scripting languages. He supports government efforts to support lifelong learning and to improve diversity in the workplace and in our communities, but doesn’t even acknowledge the potential government role in ensuring equal access to health care (essential to a functioning economy) or the mere idea of universal basic income, even if just to explain why he thinks it wouldn’t work.

And then there’s Friedman’s overuse of hackneyed quips that felt dated twenty years ago. “Attention K-Mart shoppers!” didn’t resonate with me in the 1980s, since there wasn’t a K-Mart anywhere near where I grew up; the chain has since been obliterated by competition from Wal-Mart and Target, and K-Mart operates 75% fewer stores today than it did at its peak, fewer than 500 nationwide. “This isn’t your grandpa’s X” is just lazy writing at this point; besides, if my daughter read that, she’d likely point out that her grandpa is a retired electrical engineer with two master’s degrees who already did a lot of the lifelong learning that Friedman describes.

Friedman’s writing is also dense, which I find surprising given his background as a newspaper columnist; perhaps he feels like he’s finally been set free to prattle on as long as he wants, without anyone to stop him. There’s an excess of detail in some parts of the story, such as his overlong descriptions of the halcyon days of the Minnesota town where he grew up – a town I’m sure was very nice, but probably not quite the Mayberry he describes.

There’s value in here, certainly, but I found it a grind to get through. This could have easily been a series of a dozen or so columns in the New York Times — that they wouldn’t run today because they’re too busy running columns denying climate change or explaining how so-called ‘incels’ need sex robots — rather than a 500-page book. He’s right about his core premise, though: Expect to learn throughout your working life and to see your job, whatever it is, change regularly over the course of your career.

Next up: Roddy Doyle’s Man Booker Prize-winning novel Paddy Clarke Ha Ha Ha.

Locking Up Our Own.

James Forman, Jr., was a public defender in DC for six years, right after he clerked for Sandra Day O’Connor, and encountered the results of two decades of disastrous policies in the criminal justice system of the nation’s capital, many of which led to differential policing and mass incarceration of the city’s black residents. He discussed the history and causes of this system in his 2017 book Locking Up Our Own: Crime and Punishment in Black America, which lays much of the blame for the high incarceration rates on policies embraced and advocated by black community leaders themselves. The book won the 2018 Pulitzer Prize for General Nonfiction this past April.

Forman’s parents met while working for the Student Nonviolent Coordinating Committee (known colloquially as “snick”) during the civil rights movement, which he says spurred his decision to move off the career track into the public defender’s office, eventually moving from there into teaching at Georgetown’s and now Yale’s law schools. Where the 2016 documentary 13th laid all of the blame for the high rates of black incarceration in the United States on two-plus centuries of racism and white domination – a view that is largely justified – Forman’s book lays bare the role that leaders in black communities played in supporting those policies. Foremost among them: fighting early progressive efforts to decriminalize possession and personal use of small amounts of marijuana.

Washington DC didn’t achieve some semblance of home rule until 1973, and Congress still holds the power to overturn some laws passed by the DC council and could even, in theory, dismiss the city’s council at will. This gives the city’s residents a status not too much greater than those of territories like Puerto Rico or the U.S. Virgin Islands, although I suppose if two hurricanes knocked out power to DC for several months the federal government would be a little quicker to address the problem. DC’s population is nearly half African-American, and the high rates of incarceration and different policing strategies in its neighborhoods with higher black populations have had a severe effect on the city’s economy, including continuing high crime rates. Forman explains how DC got into this mess, going back to the end of the civil rights movement and explaining how it was actually a white progressive council member who tried to decriminalize marijuana possession, but found himself opposed by black church leaders, Nation of Islam leaders, and even some black city council members, all of whom ended up working together to scotch the proposal (which may not have passed muster with Congress anyway). When a similar proposal arose a few years later to create mandatory minimum sentencing to fight rising crime rates in DC – themselves at least in part the result of the crack cocaine epidemic – black community leaders were all for the new law, responding to residents’ concerns about violent street crime and home invasions, but also enforcing a longstanding moral viewpoint that African-Americans could defeat stereotypes about them by, in essence, behaving better. If DC cracked down on even trivial crimes, even misdemeanors, the theory went, it would improve the quality of life for all DC residents while also working against white politicians and community leaders who worked to disenfranchise and/or limit the economic mobility of people of color.

None of this worked, as Forman writes, and instead helped fuel a new DC underclass – as it did in other cities, including Detroit, the US city with the highest proportion of residents who are African-American – of blacks, mostly men, who were now de facto unemployable because they had criminal records. Such ex-convicts also could find themselves ineligible for certain government assistance programs, turned down for housing, and even unable to vote. Forman, as a public defender, worked with many such clients, but, in his own telling, he was swimming upstream against a system that simultaneously limited the advancement of African-Americans in its police force and judiciary and also aggressively pursued policies that further hindered the black community. He touches on greater arrest rates in black wards of DC versus white, the long-term harm of “stop and frisk” policies (formally known as a Terry stop, and of dubious constitutionality, especially when opponents can show disparate impact by race of police targets), and the formal and informal obstacles that efforts at community improvement can face from municipal police forces – even when officers and administrators are themselves African-American.

Locking Up Our Own is a sobering look at how we got here, but perhaps short on prescriptions for undoing forty years of damage. Marijuana decriminalization is finally happening, although it’s driven by white stoners and libertarians rather than black citizens and provides no procedure for vacating past convictions for trivial possession cases. Stop and frisk was ruled unconstitutional in NYC in 2013, but our current President and Attorney General have both explicitly endorsed the practice. Mandatory minimums remain popular, in large part because they serve “tough on crime” candidates well – and who would dare to stand up and say that criminals deserve shorter sentences? A path to greater African-American enfranchisement and sovereignty in majority black neighborhoods would likely be impossible in any system where higher level, white-dominated government bodies can invalidate city or state policies. Any change that starts at the bottom will fail without a change at the top.

Next up: Claude M. Steele’s Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do.

Why We Sleep.

Why do we sleep? If sleep doesn’t serve some essential function, then it is evolution’s biggest mistake, according to one evolutionary scientist quoted in Matthew Walker’s book Why We Sleep: Unlocking the Power of Sleep and Dreams, which explains what sleep seems to do for us, what sleep deprivation does to us, and why we should all be getting more sleep and encouraging our kids and our employees to do the same.

Walker, a sleep researcher and Professor of Neuroscience and Psychology at Cal-Berkeley, begins by delving into what we know about the history of sleep in humans, and how sleep itself is structured. Humans were, for most of our history as a species, biphasic sleepers – we slept twice in each 24 hour period. We retain vestiges of this practice, which only ended in the 19th century in the developed world with the Industrial Revolution, in our circadian rhythms, which still give us that post-prandial ‘slump’ that led to customs like the siesta. (It had never occurred to me that the word “circadian” itself came from the Latin words for “almost a day,” because that rhythm in our bodies isn’t quite 24 hours long.)

Sleep is, itself, two different processes that occur sequentially, alternating through a night of full sleep. Most people are familiar with REM sleep, referring to the rapid eye movements visible to an observer standing not at all creepily over you while you slumber. The remaining periods of sleep are, creatively, called NREM or non-REM sleep, and themselves comprise three different substages. Both phases of sleep are important; REM sleep is when dreaming occurs, which itself seems to serve the purposes of helping the brain process various events and the associated emotions from the previous day(s), as well as allowing the brain to form connections between seemingly unrelated memories or facts that can seem like bursts of creativity the next day. Your body becomes mostly paralyzed during REM sleep, or else you’d start moving around while you dream, perhaps kicking, flailing, or even acting out events in your dreams – which can happen in people with certain rare sleep disorders. NREM sleep allows the body to repair itself, helps cement new information into memories in the brain’s storage, boosts the immune system, and contributes to feelings of wakefulness the next day. The part of NREM sleep that accomplishes the most, called deep or N3 sleep, decreases as you age, which is why older people may find it hard to sleep longer during the night and then feel less refreshed the next morning.

The bulk of Why We Sleep, however, is a giant warning call to the world about the hazards of short- and long-term sleep deprivation, which Walker never clearly defines but seems to think of as sleeping for a period of less than six hours. (He calls bullshit on people, like our current President and I believe his predecessor too, who claim they can function well on just four or five hours of sleep a night.) Sleep deprivation affects cognition and memory, and long-term deprivation contributes to cancer, diabetes, mental illnesses, Alzheimer’s, and more. Rats deprived of sleep for several days eventually die of infections from bacteria that would normally live harmlessly in the rats’ intestinal tracts.

We don’t sleep enough anymore as a society, and there are real costs to this. Drowsy driving kills more people annually than drunk driving, and if you think you’ve never done this, you’re probably wrong: People suffering from insufficient sleep can fall into “micro-sleeps” that are enough to cause a fatal accident if one occurs while you’re at the wheel. Sleep deprivation in adolescents seems to lead to increased risks of various mental illnesses that tend to first manifest at that age, while also contributing to behavioral problems and reducing the brain’s ability to retain new information. Walker even ends the book with arguments that corporations should encourage better sleep hygiene as a productivity tool and a way to reduce health care costs, and that high schools should start their days later to accommodate the naturally later sleep cycles of teenagers, whose circadian rhythms operate somewhat later than those of preteens or adults.

One major culprit in our national sleep deficit — which, by the way, isn’t one you can pay; you can’t ‘catch up’ on lost sleep — is artificial light, especially blue light, which is particularly prevalent in LED light sources like the one in this iPad on which I’m typing and the phone on which you’re probably reading this post. Blue light sources are everywhere, including the LED bulbs the environmentally responsible among us are now using in our homes to replace inefficient incandescent bulbs or mercury-laden CFLs. Blue light confuses the body’s natural melatonin cycle, which is distinct from the circadian rhythm, and delays the normal release of melatonin in the evenings, which thus further delays the onset of sleep.

Sleep confers enormous benefits on those who choose to get enough of it, benefits that, if more people knew and understood them, would encourage better sleep hygiene in people who at least have the discretion to sleep more. Sleep helps cement new information in your memory; if you learn new information, such as vocabulary in a foreign language, and then nap afterwards, you’re significantly more likely to retain what you learned. Sleep also provides the body with time to repair some types of cell damage and to recover from muscle fatigue – so, yes, ballplayers getting more sleep might be less prone to injuries related to fatigue, although sleep can’t repair a frayed labrum or a torn UCL.

Walker says he gives himself a non-negotiable eight-hour sleep window every night. I am not sure how he can reconcile that with, say, his trans-Atlantic travel, but he does point out that changing time zones can wreak havoc on our sleep cycles. He suggests avoiding alcohol or caffeine within eight hours of bedtime — so, yes, he even says if you want that pint of beer, have it with breakfast — and offers numerous suggestions for preparing the body for sleep as you approach bedtime, including turning off LED light sources, using blue light filters on devices if you just can’t put them down, and even using blackout shades for total darkness into the morning.

There are some chapters in the middle of Why We Sleep that would stand well on their own, even if they’re not necessarily as relevant to most readers as the rest of it. The chapter on sleep disorders, including narcolepsy and fatal familial insomnia (about as awful a way to die as I could imagine), is fascinating in its own right. Walker also delivers a damning rant on sleeping pills, which produce unconsciousness but not actual sleep, not in a way that will help the body perform the essential functions of sleep. He does say melatonin may help some people, although I think he believes its placebo effect is more reliable, and he questions whether over the counter melatonin supplements deliver as much of the hormone as they claim they do.

Why We Sleep was both illuminating and life-altering in the most literal sense: Since reading it, I’ve set Night Shift modes on my devices, set alarms to remind me to get to bed eight hours before the morning alarm, stopped trying to make myself warmer at night (cold prepares the body for sleep, and you sleep best in temperatures around 57 degrees), and so on. I had already been in the habit of pulling over to nap if I became drowsy on a long drive, but now I build more time into drives to accommodate that, and to give myself more time to wake up afterwards – Walker suggests 20 minutes are required for full cognitive function after even a brief nap. Hearing the health benefits of sleeping more and risks of insufficient sleep, including higher rates of heart disease, cancer, and Alzheimer’s, was more than enough to scare me straight.

Next up: I’m halfway through Brian Clegg’s A Brief History of Infinity: The Quest to Think the Unthinkable.

Not Dead Yet.

I came of age as a music fan right around 1980, thanks in part to some of those old K-Tel pop hits collections (on vinyl!) that my parents bought me as gifts, one of which included Genesis’ hit “Abacab.” I loved the song right away, despite having no idea what it was about (still don’t), and it made me a quick fan of Genesis, and, by extension, Phil Collins’ solo material, which at that point already included “In the Air Tonight.” I’d say I continued as a fan of both until the early 1990s, when Genesis released their self-immolating We Can’t Dance (an atrocious, boring pop record) and Collins’ own solo work became similarly formulaic and dull. It was only well after the fact that I heard any of the first phase of Genesis, where Peter Gabriel was still in the band and their music was progressive art rock that featured adventurous writing and technical proficiency.

Collins’ memoir, Not Dead Yet, details the history of the band through his eyes as well as a look at his solo career and his tangled personal life, some of which made tabloid headlines, leading up to his inadvertent effort at drinking himself to death just a few years ago. The book seems open about many aspects of Collins’ life, including mistreatment of his three wives and his children (mostly by choosing work over his familial duties) and his refusal to accept that he had a substance-abuse problem, but there’s also a strain of self-justification for much of his behavior that I found off-putting.

In a narrative sense, the book’s high point comes too close to the beginning: When Collins was just starting out in the English music scene, his path intersected with numerous musicians who’d later become superstars and some of whom would be his friends and/or writing partners later in life, including Eric Clapton, Robert Plant, and George Harrison. The Sing Street-ish feel to those chapters is so charming I wondered how much was really accurate, but Collins does at least depict himself as a starstruck kid encountering some of his heroes while he’s still learning his craft as a drummer. I also didn’t know Collins was a child actor, even taking a few significant stage roles in London, before his voice broke and he switched to music as a full-time vocation.

The Genesis chapters feel a little Behind the Music, but they’re fairly cordial overall – Collins doesn’t dish on his ex-mates and if anything seems at pains to depict Gabriel as a good bandmate and friend whose vision happened to grow beyond what the band was willing or able to achieve. It’s the stuff on Collins’ personal life that really starts to grate: He talks about being a terrible husband and father, but there’s enough equivocation in his writing (often quite erudite, even though he didn’t finish high school) to suggest that he isn’t taking full responsibility for his actions. He cheated on two wives, he ignored their wishes that he devote more time to his family, and he seems to have harassed the woman half his age (he was 44, she 22) who became his third wife and mother of the last two of his five kids.

It’s also hard to reconcile Collins’ comments on his own songwriting, both on solo records and in later work for Disney films and Broadway shows, with the inferior quality of most of his lyrics. Collins’ strengths were his voice, his sense of melody, and of course his work on the drums. His lyrics often left a lot to be desired, and their quality, never high to begin with, only declined as he became more popular. Even his last #1 song in the U.S., “Another Day in Paradise,” is a mawkish take on the same subject covered more sensitively in “The Way It Is” and a dozen other songs on visible poverty in a developed, wealthy economy.

Since that’s all I have to say on the book, I’ll tell one random Collins-related story. When I was in high school, MTV briefly had an afternoon show called the Heavy Metal Half-Hour, which they later retitled the Hard 30. It was hair metal, so not really very heavy by an objective standard, but harder rock than what they played the rest of the time. One day during the Hard 30 run, they played … Phil Collins’ cover of “You Can’t Hurry Love.” I’m convinced this wasn’t an accident, but a test to see if anyone was watching. The show was cancelled a few weeks later.

Next up: I’m about halfway through Peter Carey’s Booker Prize-winning novel Oscar and Lucinda, later turned into a movie with a very young Voldemort and Queen Elizabeth.

Killers of the Flower Moon.

David Grann’s Killers of the Flower Moon: The Osage Murders and the Birth of the FBI is a non-fiction ‘novel’ that manages to combine a real-world mystery with noir and organized crime elements while also elucidating historical racism against a population seldom considered in modern reevaluations of our own history of oppressing minorities. Drawing on what appears to be a wealth of notes from the initial investigation as well as private correspondence, Grann gives the reader a murder story with a proper resolution, but enough loose ends to set up a final section to the book where he continues exploring unsolved crimes, revealing even further how little the government did to protect the Osage against pitiless enemies. It’s among the leading candidates to win the Pulitzer Prize for Non-Fiction on Monday.

The Osage were one of the Native American tribes banished to present-day Oklahoma when that area was known as “Indian Territory,” marked as such on many maps of the late 19th century; Oklahoma as we know it didn’t exist until 1907, when it became the 46th state. (It always amused me to think of the ‘hole’ in the map of the U.S. as late as 1906, before Oklahoma, Arizona, and New Mexico attained statehood.) By a fortunate accident, the plot of apparently useless land to which the federal government exiled the Osage sat on top of one of the largest petroleum deposits in the continental U.S., which made the Osage mineral millionaires. The government couldn’t quite revoke their rights, but instead ruled that the Osage, as ‘savages,’ were incompetent to run their own affairs, and that Osage adults required white ‘guardians’ to oversee their financial decisions, which, of course, led to much thievery and embezzlement and, in time, foul play, such as white citizens marrying Osage members and then poisoning their spouses to gain legal control of their headrights and the income they provided.

Two murders in particular attracted the attention of authorities outside of the county, however, as both Osage victims were shot in the head at close range, so there was no question of claiming natural causes, as was often the case when victims were poisoned (often in whiskey, so alcohol could be blamed). These murders were part of a spate of dozens of killings, many of which didn’t appear at first to be connected other than that the victims were either Osage themselves or were in some way investigating the crimes; the sheer scope of this and some media coverage drew the attention of a young, ambitious bureaucrat named J. Edgar Hoover, who decided to put one of his top agents at the nascent Bureau of Investigation (no ‘federal’ in its title) on the case. The subsequent unraveling of the deceptions – and the revelation that the mastermind of the plot was someone closer to the Osage than anyone expected – involved both early forensic science and dogged investigative work, leading eventually to one confession that toppled the criminal enterprise, only to have the trial twist and turn more than once before the final verdict.

Grann couldn’t have picked a better subject for the book, because these characters often seem plucked from Twin Peaks, from the Osage woman Mollie, a survivor of a poisoning attempt whose sister was one of the victims killed by gunshot and who had several other family members die in suspicious circumstances, on up to the head of the scheme, a man whose greed and malice lay hidden behind a façade of benevolence toward his Osage neighbors. Killers of the Flower Moon would make an excellent dramatic film if told straight, but it would take just a little artistic license to turn it into the sort of crime tapestry in which HBO has excelled for years by sharpening or exaggerating some of the individuals’ personalities.

The story of the murders and the federal agents’ work to convict the killers is, in itself, more than enough to stand alone as a compelling narrative work, but Grann explains how the federal, state, and county authorities regularly worked to strip the Osage of their rights, fueled by outright racism and by jealousy of the tribe’s good fortune (with, it appears, no consideration of how racism and avarice drove the tribe to Oklahoma in the first place). After the verdict and what might normally stand as an epilogue, Grann himself appears, writing in the first person about his experiences researching the book and how he found evidence that the Bureau didn’t solve all of the murders, or even most of them, but assumed that they’d gotten the Big Foozle and had thus closed the case. Grann may have solved one more murder himself, but as he interviews more surviving relatives of the victims – many of whom ask him to find out who killed their fathers or uncles or sisters – it becomes clear that the majority of these killings will remain unsolved, a sort of ultimate insult on top of the lifetime of indignities to which these Osage victims were subjected.

It’s hard to escape the conclusion, although Grann never makes it explicit, that this would never have happened if any of the governing (white) authorities viewed the Osage tribe members as actual people. Dozens of killings went unsolved and unaddressed for several years before Hoover’s men arrived, and some unknown but large percentage of the killings will never be solved. What white officials didn’t do for the Osage in the 1920s continues in what mostly (but not always) white officials don’t do today to address violence in urban, mostly African-American communities, including right near me in the majority-black city of Wilmington, nicknamed “Murder Town” for its disproportionately high rate of deaths by gun. If the governments responsible for the safety of these citizens don’t see those citizens’ deaths as important, or as equal to the deaths of white citizens, then it is unlikely that anything of substance will be done to stop it.

I listened to the audio version of Grann’s book, which has three narrators, one of whom, actor Will Patton, does an unbelievable job of bringing the various characters, especially the conspirators, to life. The other narrators were fine, but Patton’s voice and intonations made this one of the most memorable audiobooks I’ve listened to.

Next up: I just finished George Saunders’ Lincoln in the Bardo, which won the Man Booker Prize in 2017 and is among the favorites to win the Pulitzer Prize for Fiction next week; and have begun Joan Silber’s Improvement, also from 2017.

The Origins of Totalitarianism.

I spent my first year in college as a Government major, with some vague idea of studying law and/or working in politics after graduation, but abandoned the major completely by the middle of my sophomore year because the reading absolutely killed me. I like to read – I would hope that was evident to regulars here – but the kind of writing we were assigned in those classes was just dreadful. There was a book by Samuel Huntington (The Clash of Civilizations and the Remaking of World Order) that ended any interest I might have had in the subject because it was such an arduous, opaque read, and I eventually switched to a joint sociology/economics major, which brought me back to my comfort zone, a blend of math and theory.

Hannah Arendt’s The Origins of Totalitarianism reminded me tremendously of Huntington and John Stuart Mill and other books I was assigned in Gov 1040 but never actually finished, both in prose style and in tone. I understand that this book is considered extremely influential and an important work in our comprehension of how movements like the Nazi Party arise and even gain a modicum of popular support. The arguments herein, however, are almost exclusively assertions, with anecdotal evidence or no evidence at all, and the circumlocutory writing style meant that even though I retain a lot of what I read in most cases, I found I wasn’t even retaining what I read here from one page to the next.

Arendt’s main thrust here is that totalitarian governments, which she distinguishes from mere autocracies, arise when their leaders follow a rough playbook that sets up specific groups as enemies of the state, rallies disaffected followers against those groups, and often turns their supporters into unwitting advocates of their own eventual oppression. Such governments then retain power by eliminating the possibility of what Arendt refers to as human spontaneity through an Orwellian system of truth-denial and unpredictable favoritism that puts subjects on ever-shifting ground, preventing them from mounting any effective system of dissent or resistance.

At least, I think that’s what she was arguing, but she used a lot of extraneous words to get there – and some of what she described in the early going, where she addresses the history of the so-called “Jewish question,” sounded a lot like victim blaming. She certainly says the Jews of Europe did not adequately understand how they were being used by European elites or how their connections to unpopular leaders like the Hapsburgs thus put them in the crosshairs of populist movements that aimed at overthrowing the monarchical or despotic status quo. She also seems to credit those movements’ willingness to employ efficient methods of killing for its surprise value – no one expected anything like the Nazis’ system of killing masses of people, itself based on a process of dehumanizing entire classes of the population.

Whether or not I fully grasped the arguments Arendt makes in this book – and I freely acknowledge I probably did not – much of what she does assert seems apposite to our present-day political situation, including the way in which Trump supporters, including his sycophants in the media, have repeatedly handwaved away his distortions of fact or his apparent collusion with a hostile foreign power. I’ll close, therefore, with this selection of quotes from The Origins of Totalitarianism that could just as easily have been written today about our current environment.

In the United States, social antisemitism may one day become the very dangerous nucleus for a political movement.

Politically speaking, tribal nationalism always insists that its own people is surrounded by “a world of enemies,” “one against all,” that a fundamental difference exists between this people and all others. It claims its people to be unique, individual, incompatible with all others, and denies theoretically the very possibility of a common mankind long before it is used to destroy the humanity of man.

The rank and file is not disturbed in the least when it becomes obvious that their policy serves foreign-policy interests of another and even hostile power.

(The Nazis) impressed the population as being very different from the “idle talkers” of other parties.

The mob really believed that truth was whatever respectable society had hypocritically passed over, or covered up with corruption.

Hitler circulated millions of copies of his book in which he stated that to be successful, a lie must be enormous.

The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.