What I'm reading this week
DEI at UCLA; AI; Twitter’s context problem; a win for free speech?; the death of libertarianism?

Megan Zahneis at the Chronicle of Higher Education takes a balanced look at the UCLA controversy surrounding Dr. Yoel Inbar. A letter recently emerged wherein graduate students in the psychology department at UCLA expressed concerns over Dr. Inbar’s mild criticisms of DEI initiatives at universities. It’s possible that, as a result of this letter, Dr. Inbar was not offered the position for which he was being considered. The piece covers a lot, but in particular, it touches on the problems ideological rigidity poses to science, the importance of nuance, and the chilling effect of campus authoritarianism.
“What we suspect may be happening here is that because Professor Inbar allegedly did not parrot the correct views on DEI and some students objected to that, he may have been discriminated against because of his views in the hiring process,” Morey said. That’s not allowed at a public university, she said: “They can hold faculty to viewpoint-neutral type of criteria, objective standards, but they can’t say, ‘If you don’t pledge allegiance to our particular view on diversity, you can’t have a job.’”
On Tuesday, during the Very Bad Wizards episode, Inbar said the graduate students who opposed his hiring had missed the nuance in his remarks about diversity statements.
“You can pull out selective quotes that make me sound like I’m a rabid anti-diversity-statement person, which I’m really not,” Inbar said. His main concern is with their effectiveness, he said: “What you want is somebody who’s going to be able to teach and to mentor people from diverse backgrounds. But what you get is somebody writing about what they believe, and perhaps what they’ve done to demonstrate that.”
[...]
In the past, he added, he’s urged faculty members to speak up about potentially controversial topics they believe in. His recent experience has changed his mind.
Read it here.

Freddie deBoer has a wonderful essay on AI hype and doomerism. In it, he argues that our millenarianism—the “tacit but intense desire to escape now”—drives us to hysteria over what will ultimately be just another tool we use while seeking out the next millenarian moment. It’s a long and satisfying read, and I recommend you treat yourself to it.
Talk of AI has developed in two superficially-opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators. Here I can trot out the old cliché that love and hate are not opposites but kissing cousins, that the true opposite of each is indifference. So too with AI debates: the war is not between those predicting deliverance and those predicting doom, but between both of those and the rest of us who would like to see developments in predictive text and image generation as interesting and powerful but ultimately ordinary technologies. Not ordinary as in unimportant or incapable of prompting serious economic change. But ordinary as in remaining within the category of human tool, like the smartphone, like the washing machine, like the broom. Not a technology that transcends technology and declares definitively that now is over.
That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians. In common with all millenarians they yearn for a future in which some vast force sweeps away the ordinary and frees them from the dreadful accumulation of minutes that constitutes human life. The particular valence of whether AI will bring paradise or extermination is ultimately irrelevant; each is a species of escapism, a grasping around for a parachute. Thus the most interesting questions in the study of AI in the 21st century are not matters of technology or cognitive science or economics, but of eschatology.
Read it here.

Aaron Ross Powell looks at why some people (in this case, Elon Musk) believe the dumb stuff we all see online every day. Looking at Twitter specifically, he posits that the small communities we cultivate are vulnerable to “context collapse”: context-dependent conversation from one community spills into another, introducing confusion and controversy. As a result, we assume the worst about anyone in “obvious tension” with what “‘everyone knows.’”
This unrecognized siloing is made worse by what’s known as “context collapse.” This happens when a multitude of audiences and communities occupy a shared space, as they do on social media platforms, each community having its own context for their conversations, and then something from within the context of one community enters into another.
The simplest example is jargon. My community on Twitter might use a particular term in a quite specific way. If a tweet intended for my community and employing that term gets retweeted into an unintended community that doesn’t share our jargon, the resulting confusion in meaning can lead to misinterpretation, anger, and controversy.
But context collapse isn’t limited to misunderstandings of terminology. It can also apply to in-jokes, commonly accepted arguments, assumptions of shared knowledge of data or premises, and so on. This can then clash with the “everyone knows” illusion social media creates. Here, suddenly, is someone saying something in obvious tension with what “everyone” knows, and so that person must be mistaken, uninformed, irrational, or unethical. If they weren’t, they wouldn’t say or believe that. The imagined consensus and invisible intellectual siloes of social media heightens our already existing biases for credulously believing what our in-group believes, and applying asymmetrical incredulity to arguments and data conflicting with it. We take our circle’s fringe beliefs to be mainstream, and take mainstream beliefs to be fringe.
Read the whole thing here.

Over at Reason, Robby Soave doubts the actual impact of a judge’s recent decision to limit the federal government’s ability to collude with social media companies to censor users. He points out that the ruling reads like a win for free speech but is in fact broad enough to allow the government to keep doing its thing.
Will Duffield, a policy analyst at the Cato Institute, is similarly concerned that Doughty's ruling might not make enough of a difference.
The top half of the injunction reads like a "complete and total shutdown of government communication with social media platforms until courts figure out what's going on, but the bottom half includes exceptions wide enough to include many of the most controversial government communications with platforms," he says.
Duffield would like to see federal legislation that mandates disclosure, forcing government actors to be transparent about their communications with social media companies so that they can be held accountable—even sued—if their conduct crosses the line into censorship.
Read it here.

The always-worth-reading Matt Zwolinski has a post at Bleeding Heart Libertarians wondering if we are witnessing the death of libertarianism. Zwolinski notes that, while he and his co-author John Tomasi did not intend for their new book The Individualists to be an obituary for libertarianism, he did find himself “reluctant to identify [himself] as a libertarian.” Boy, don’t we all. The article’s primary value is in its discussion of “strict” and “broad” libertarianism. These categories are key to the future of libertarianism; fully embracing one over the other would surely be the death knell of the movement.
Broad libertarianism . . . is defined in terms of the six core libertarian ideas we identify in our book - private property, free markets, skepticism of authority, negative liberty, individualism, and spontaneous order. The idea is that you can endorse these ideas without accepting the radicalism, absolutism, and (often) rationalism that is characteristic of strict libertarianism.
[...]
Strict libertarianism, on the other hand, strikes me as a philosophically untenable position. I’ve long been critical of the so-called Non-Aggression Principle, partly because it has a notoriously difficult time dealing with problems of externalities.
[...]
But the problem isn’t just the NAP. The problem is the idea that we can construct a plausible political philosophy out of any single, absolute principle. It doesn’t matter whether it’s the NAP or the Law of Equal Freedom or the Principle of Utility. There are a plurality of legitimate moral values; these values sometimes come into conflict; and those conflicts require us to make trade-offs. Strict libertarianism either denies the existence of these conflicts, or insists that liberty always emerges from them the winner. Neither of these positions strikes me as remotely plausible.
Read the whole thing here.