
Greetings!
Here in the high country with the dogs, spring is coming but has not arrived. I was supposed to ski today, a last hurrah given my schedule, but I was feeling puny, and it was too cold and windy to sit alone on the single lift kept running so the resort could say it was open. Weather advisory, wind 80 mph above treeline, temperatures in the 20s. At the house, the snow is gone, except a few patches left over from big drifts. More snow will fall through May. I was actually snowed in a few days ago, but I had provisions and didn’t even try to plow myself out, just let it melt down.
We have no new leaves, but we do have birds. Weirdly, lots of robins. I grew up with robins, and think of them as sort of suburban wimpy birds, dependent on manicured lawns. Not true. We call these extreme robins. And the moose and other big animals are hungry after the winter, some of them gestating, and so moving around quite a lot.
The pictures for this Signal carry no particular message, just recent images I liked for one reason or another.
Quixote
I have been up here working, actually mostly trying to force myself to work, on a book about some of the predicaments of social thought in the first decades of the 21st century. Current title: Quixote’s Dinner Party: Hopes for Social Thought. I’m drawing on work done and conversations held with various friends, in several disciplines, in different countries, since the early 2000s. Earlier drafts, actually a number of efforts over the years, have more or less failed. There are much worse things than having a book fail, but I’ve had substantial surgeries that were easier. Once more unto the breach, dear friends!
So, watch this space.
Did you see how I did that? It looked like I was just trying to sell books, but the Gaza protests will sell this book. Mostly I’m stiffening my spine, lashing myself to the mast, leaving myself no honorable way out.

Law and AI
Below is an informal précis for a forthcoming article, both technical and syncretic; feel free to skim over it. I’m doing quite a lot of reading/thinking/writing on technology and culture, though not much of it is ready for prime time, not even for a semi-formal newsletter. There will probably be quite a bit more in the future, if my computer’s desktop is anything to go by. Many thanks to my buddy Rob Goldman, an AI scientist, for keeping me between the channel markers.
LLMs Are Legal Like Chess Computers Are Playful
In jurisprudence, there is a common problem known as "reification," that is, treating a set of more or less dynamic relations as a thing. To some extent, that is what law does. We want a house title to be a house title, for example, and for many purposes, reification does no harm. But law is also the formalization of social relations which are at least implicitly non-binary (because they implicate the state) and which shift in time, and for which the costs/risks/opportunities also shift. So, the AI project is, jurisprudentially speaking, ontologically primitive. To bring the matter home, maybe: what is "the law" in the OpenAI debacle? In the Silicon Valley Bank case? All of this is before, but related to, understanding law as performative, lawyers as officers of the court, signatories as bound. Consider, in this regard, Ukraine/Russia, or a marriage.
The question of whether law can ever be rendered entirely objective has a long history. Law students are likely to be familiar with the formalist versus realist debate over the nature of contracts in the early 20th century. Similarly, in the middle of the century, courts refused to enforce racially restrictive covenants, that is, to use the power of the state for immoral ends, even though the covenants were, as legal instruments, sound. Much of the excitement over so-called “decentralized finance” is similarly an effort to create a mechanistic law. (I would argue that any law that had no place for equity was not worthy of being called law, but that is a theological position.)
The emergence of large language models (LLMs) like ChatGPT and similar probabilistic programs has changed the structure of this problem. LLMs are probabilistic, not deterministic. Although their outputs may be said to be “objective,” the internal representations of the program as it runs are not knowable, and so the relationship between an LLM’s output and the aspect of the world that we seek to address is not knowable either. This is the so-called opacity problem. At the same time, probabilistic algorithms, including LLMs, are being integrated into all sorts of technology, notably data and document management.
Supporters of contemporary approaches to AI tend to argue that sufficient scale, compute, and perhaps hybrid approaches to programming (LLM + Python) will “solve” these problems. (The term AI, incidentally, is from the ‘50s. See the always interesting Jorgen Veisdal, The Birth of AI (1956).) Critics argue otherwise, and currently seem to be having the better of the argument. For the purposes of this paper, I will assume that the epistemological problems confronting probabilistic approaches to AI are not very tractable, i.e., that this is the technology we will continue to see implemented, or not, for the foreseeable future.
What does this mean for law? A great deal depends on context, of course, but one might start by asking after the authority of LLM outputs, and by thinking about the maximal application: using LLMs to decide. Since the logic of AI outputs cannot be seen, there is no logic of the case. Legal education, at least, assumes that “law” is a rational enterprise. We appeal cases, arguing that they were wrongly decided. But law has often been less than logical, even ritualistic, and one might argue that juries, too, are somewhat inscrutable.
I will argue, however, that the question cannot be finally framed in terms of the veracity of outputs (a legacy of Turing), although veracity matters (is the accused actually guilty?). Law, at least as we understand it in the United States today, is social process, maybe ritualized, and in that sense subjective – what a society does. LLMs simply operate differently. LLMs, then, would be doing law manqué, by creating outputs that mirror human processes. This is like thinking that chess computers are playful.
But law need not be understood as a very human endeavor. That is, just because LLMs are ontologically primitive doesn’t mean that one cannot imagine a society treating them as authoritative. Consider the entrails of sacrificed animals. Taking LLMs seriously as legal authorities, then, amounts to dehumanization of legal process, an understanding of law akin to trial by dice or battle, or for that matter, computerized gaming, key to the emergence of LLMs.
A Poem
I’m going to close with a blast from the past.
Ides of March

At about 3:00 in the morning
she came over the retaining wall
in a heartbeat.
Her fall was cushioned by ditchwater,
so she waded ashore
to meet their anxious jaws.
It was really too perfect for poetry:
a woman from the country
homeless off the one-way bus
killed by the nervous lions of the capital.
I wondered why restless beasts
didn’t escape a cage
into which a crazy woman could stumble
and realized that lions
could never leap out of four feet of water
up the face of our ditch,
their wall to journey out upon the land.
From Wire: Washington Poems 1995-1998. True story. A prefatory remark, written when I put this up on Amazon (I didn’t have Substack!), still seems good:
Things were different then. We won the Cold War. The Twin Towers stood. We understood the global financial structure, and how countries should develop. Davos Man strode the earth like a colossus. Looking back, however, one sees the emergence of present unhappiness. The distance among people, the envious inequality, the alienation, the desperate arrogance of hollow professionals -- it was all there already, in plain view from the DC Metro.
Safe travels, pilgrims.
— David A. Westbrook