Jeff’s Newsletter #11


1. Improved means to an unimproved end
I’m a sucker for critics of modernity, and Neil Postman was one of the best. On the eve of the 21st century, Postman warned about where our civilization was headed: a cult of technology, instrumentalized education, and a return to tribalism in the absence of any other transcendent values. In Building a Bridge to the 18th Century, he argues that we have to look back to move forward: “If looking ahead means anything, it must mean finding in our past useful and humane ideas with which to fill the future.” His preferred past is the Enlightenment, the source of contemporary ideals like individual liberty and scientific progress but in a context of Deism and moral progress. My sense is that when Postman talks about the Enlightenment he’s really just talking about Rousseau (and subsequent Romantics like Thoreau), whose skepticism about progress without morality actually tended to contradict the prevailing rationalism. But the provenance of his sources aside, Postman offers a compelling criticism of technocratic society. We have too much information, he says, and not enough means for turning it into knowledge and wisdom. Knowledge: information organized to accomplish a goal. Wisdom: the capacity to know what body of knowledge is needed to solve a social problem. Postman is most famous as an education critic, and his best material tends to be his suggestions for radical curriculum overhaul. Here’s what he’d have us teach in school: the art of asking good questions; logic, semantics, and rhetoric; how to think scientifically and the history of science; critical appreciation of technology; and comparative religion.

2. All that’s known, overthrown
NYT science correspondent John Tierney says “The only successful war on science is the one waged by the Left.” After reading his essay, I believe this claim is wrong, but it has more merit than I would have guessed going in. I see three sub-claims here: (1) there is some anti-scientific stuff going on; (2) the people doing that stuff are the Left; and (3) the Right is not doing something worse. Tierney tells several stories of sins against science. First, social scientists have a leftist bias, which entails blanket approval of marginalized groups and hostility toward right-of-center groups like religious Christians. This seems true to me, although I think social science is fundamentally normative and shouldn’t pretend to be “scientific” in the first place. But given its scientific pretensions, this criticism is fair. Second, there is a taboo against studying biological differences of race and gender. Here Tierney touts the book A Troublesome Inheritance, which argues that human evolution has been “recent, copious, and regional.” Some googling shows that the population geneticists whose work the book summarizes do not agree with its conclusions. I acknowledge that any research about genetic differences would face an uphill battle toward acceptance. In light of the history of using these kinds of arguments to justify eugenics and discrimination, I think it’s reasonable to hold a strong prior against such claims. Third, Tierney discusses the history of leftist approaches to remaking society, from “scientific socialism” to population control and compulsory sterilization to dietary fads. I think this kind of putatively scientific social planning is one of the most important failure modes of modernity. But it’s not so easy to pin it on the Left, at least not in the contemporary American context: both Left and Right currently sit on the authoritarian side of the libertarian/authoritarian axis of the political compass.

Tierney’s last example is probably the most consequential for where you land on the politicization of science: climate. He criticizes the “sneer-and-smear” techniques by which liberals disdain those who disagree about the risk of global warming. The much-cited figure that 97% of scientists believe global warming is dangerous is, apparently, misleading: it originated in a poll asking climate scientists whether global warming is man-made, not whether it is dangerous. A more recent poll of over 10 times as many climate scientists found that 52% consider global warming dangerous. I’m obviously unqualified to judge the merits, but I’d like to learn more about both perspectives. I’m sympathetic to a broader point Tierney makes: even if you agree on the climate models, the policies we might pursue in response “vary according to political beliefs, economic assumptions, social priorities, and moral principles.” I should add, though, that there are plenty of examples of politically motivated denial of science from the Right that Tierney glosses over. Congress’s ban on CDC research on gun violence is an obvious one; so too the lack of research on medical marijuana as an alternative to opioid painkillers.

3. I’d like to be an algocrat when I grow up
There is a community of researchers in sociology, history, information studies, and law trying to study algorithms and their role in society. This research is exciting but scattered, in part because we don’t have much consensus on what we should be paying attention to. What, exactly, are we trying to ask about algorithms? What are the variables? In this context I appreciated John Danaher’s blog post on “the logical space of algocracy.” By algocracy he means situations where algorithms exert power over people, perhaps in subtle ways through constraints, incentives, or nudges. The point of the post is to propose a framework for describing an algocracy and comparing it to others. His framework is a two-dimensional grid:

The rows represent four common stages in an algorithmic decision-making procedure: sensing or ingesting some data, processing or analyzing it, acting by emitting a recommendation or decision, and finally learning from the results of that action. Several researchers have come up with a pretty similar list of stages (Zarsky 2013; Citron and Pasquale 2014). Danaher’s move is to connect this typology to a second one: the typology of divisions of labor between humans and algorithms. For each stage (row), we can specify how the labor is divided (column). A unique sequence of four numbers then describes a very specific type of algorithm. I’m going to play with this in my research and see if it’s a useful way to distinguish algorithmic systems from each other.
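Danaher’s coding idea lends itself to a few lines of code, so here’s a minimal sketch of what I have in mind for my research. The four stage names follow the post; the role labels and their 1–4 numbering are placeholder assumptions of mine, not Danaher’s actual column headings.

```python
from enum import IntEnum

class Role(IntEnum):
    """Hypothetical division-of-labor categories for one stage."""
    HUMAN_ONLY = 1          # humans perform the stage unaided
    ALGORITHM_ASSISTED = 2  # humans decide, algorithms assist
    SHARED = 3              # humans and algorithms share the stage
    ALGORITHM_ONLY = 4      # fully automated, no human in the loop

# The four stages of an algorithmic decision-making procedure.
STAGES = ("sensing", "processing", "acting", "learning")

def describe(code):
    """Expand a four-number code into a stage-by-stage description."""
    if len(code) != len(STAGES):
        raise ValueError("expected one role per stage")
    return {stage: Role(n).name for stage, n in zip(STAGES, code)}

# A system that ingests, analyzes, and learns automatically but
# leaves the final decision to an algorithm-assisted human:
print(describe((4, 4, 2, 4)))
```

The payoff is comparison: a (4, 4, 2, 4) system keeps a human in the loop at the acting stage, while a (4, 4, 4, 4) system is a full algocracy, and the two can be distinguished at a glance.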

4. The Springsteen economic recovery agenda
No matter what role you think struggling post-manufacturing towns played in the election, there’s a good case to be made for helping such places improve. Sure, we could focus policy on individuals and encourage everyone to move to thriving cities. But cities are more expensive to live in, and many people simply prefer smaller towns. Adam Ozimek argues that there are diminishing returns to making cities better, and correspondingly low-hanging fruit in depressed rural areas: “When a place goes from 0% of the population having PhDs to 1%, there are a lot of benefits that you don’t get when you go from 30% to 31%. This deserves more attention from economists than it currently receives, so let me state it succinctly in economist-speak: Agglomeration is not the only human capital non-linearity that matters.” Ozimek points out that we often talk about rural America in unproductive all-or-nothing terms; think of the last time you heard “none of those manufacturing jobs are coming back.” He points to places like Lancaster County, PA, where manufacturing employment has indeed halved since 1990 but the population has grown and the unemployment rate is below the national average. We should study places like this and figure out what is going right.

5. Just vouchsafe me some healthcare
Following Trump’s choice of Tom Price to run Health and Human Services, you may have heard talk about “turning Medicare into vouchers.” This topic bubbles up every few years, or whenever Paul Ryan is in the room, but the Price selection finally inspired me to do some reading. For the curious, I highly recommend this 2012 interview with Henry Aaron of Brookings, who originally (in 1995) invented the concept that became bastardized as Medicare vouchers. He presents several reasons why vouchers, or the closely related “premium supports,” no longer make sense. The one that really stood out to me is that to the extent the Obamacare exchanges have failed, vouchers would fail even harder. In Ryan’s world, voucher recipients would buy insurance from one of several private companies competing in a regulated pool. Some might call it an…”exchange.” Here’s Aaron: “It’s close to wacky to repeal the exchanges called for by the Affordable Care Act, which will serve twenty-nine million comparatively healthy people, and then in the next breath propose to create something like them for close to fifty million people who are much sicker and frailer.” This point is one I’ve heard Ezra Klein make several times. If we think Obamacare hasn’t worked (a reasonable view, at least regarding the exchanges), the lesson should be that combined public-private insurance markets don’t work. This cuts against the preferred Republican alternatives, and in favor of universal healthcare.
