Butterflies on Beach; AI Bubble?; Circle of Confusion; Fears & Friendlies
Point of Rescue Dispatch
Greetings, Gentle Readers:
I trust the fall proceeds.
Wow, that came out a little dark. It’s been a while since I posted, but hey, the signal is intermittent. Lots on my plate, more on my mind, or curling around the edge of my mind like smoke, things that can’t quite be thought much less said directly, so let’s sneak up and see what we see. Maybe that’s a way to understand “literature.”
But first, Social Thought From the Ruins: Quixote's Dinner Party collects press releases, kind words (blurbs), reviews (not all so kind!) and podcasts about the book, along with ordering information. Again, the e-book is open access, free to you, so you should download it. Send to your friends. If you want to support me, of course you can buy a hard copy. Or you could take out a paying subscription to Intermittent Signal. But the truth, well, what truths I can offer, are free.
I’m in ATL, again. Old joke, told in Birmingham, that if you go to hell, you have to change planes in Atlanta. But Atlanta is home, if home is where you went to high school, and I have my issues with the place, but I love the vibe, even in the airport. So I head to the lounge, receive the usual pleasantries that are my due, am thanked for my status (we really are becoming neo-feudal), and on a whim decide to take the stairs. Lots of stairs, flights. Marble. White. I’m alone. Stairway to heaven? Should I be naked?
I reach the top, looking down at my feet, appropriately, and a beautiful baritone says, in the warmest and friendliest voice imaginable, “we have to fight the reaper.” It takes me a minute to realize that I am being addressed, and to place the voice: a very large, distinguished-looking, middle-aged Black man, graying beard, in a gray suit, who should be played by a young James Earl Jones. I offer that we do what we can. He congratulates me on my effort, and welcomes me in.
In the last Signal, I wrote about a car wreck that did not happen. Trust in the Dark. Here’s a drowning. A week or so after not crashing, I was swimming with my mother and sister-in-law, a little too close to a rock jetty. I got stoneward and seaward, and said let’s move away a little. It’s only a few feet to them, but with each swell, the distance grows a bit. I’m being pulled out, eighteen inches at a time. I get serious and try to close the gap. I was once a very strong swimmer, but now? I’ve swum in the ocean all my life, and you don’t swim against a rip. Everybody knows that. My mother and sister-in-law, a few yards away, aren’t making much progress, but they aren’t losing ground. Should I throw away this sunshirt? I’m still losing ground, and now getting worried. I get swept out and down the beach past the jetty, and the rip seems a little less strong. I turn to the beach, and get to where I can stand. I urge them to do the same. They don’t, maybe cannot hear. Maybe that was bad advice. They are trying to get in, but not really moving. Not losing ground, though.
So I go back in to help. It’s worse. Or I’m weaker. Or both. Now I’m scared. I’m about to start screaming, no time for pride. I think I’ll make it, I believe I can still tread water for a few hours, but maybe there is the pride, but my mother — am I giving up on my mother by not fighting, here? Is the fear of cowardice making me stupid? Can I summon more strength, damn I’m out of shape and not just old, is it wiser to get in and get help or a surfboard or something? I’m again swept out and down the beach, beyond the jetty, and able to struggle back to the sand. Everything pounding. So I head back up the beach, and into the water from the beach side, but my mother and sister have made it to where they can stand, and they aren’t particularly afraid. We all go in. I say let’s not do that again. I haven’t been scared like that in, well, a week. But can’t remember being that scared in the water, not even in a few whitewater spills. My heart rate is elevated for hours. The local paper headlines the rips; a man drowned. Not mine. Not me.
The “point of rescue” is the location at which the time it would take to reach the victim exceeds the time the victim can survive unaided. Just FYI.
Bad things come in threes, I’ve heard, and I didn’t write this until I’d heard about her scans. They were stable, good, and I knew that, but when I held her again I melted, and for a few moments my chest tightened and I couldn’t breathe, again, though not as bad now as I type. When you hear the shot, the bullet has already missed you, and your fear has to chase after the danger. We have our defenses, must have them to get through the day, but sometimes the walls are down and the world pours in. And I realized that the bad had indeed come in threes, and this time at least, the bad thing was fear. Onward. Fight the reaper.
Still, maybe again, in ATL, riding a long escalator. In cultural anthropology, airports are the paradigmatic “non-places” of modernity, because they are intentionally divorced from, actually never had, place and time. (Marc Augé is the scholar.) Sure, you could say that. But there’s also this. I’m jaded, weary, not really looking at anything. A man, probably he played football in high school, maybe also at a small college, catches my eye and sticks out his fist. I give him a fist bump, and we both smile. Neither of us says anything.
I had never seen so many butterflies on the beach. Except when the wind was really strong, they were everywhere. The first I saw were monarchs, and I thought they were migrating along the beach, and maybe that was some of it, but later I saw lots of other butterflies, and when the wind changed directions, the butterflies did too, up and down the beach. It was wonderful.
I read with an unsettling shock of recognition:
In photography, the circle of confusion (CoC) is used to determine the depth of field, the part of an image that is acceptably sharp. A standard value of CoC is often associated with each image format, but the most appropriate value depends on visual acuity, viewing conditions and the amount of enlargement. Properly, this is the maximum permissible circle of confusion diameter limit, or the circle of confusion criterion, but is simply called circle of confusion.
Real lenses do not focus all rays perfectly, and so, even at best focus, a point is imaged as a spot rather than a point. The smallest such spot that a lens can produce is often referred to as the circle of least confusion.
— Nikon Glossary of Photography
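For the literal-minded, here is roughly how the circle of confusion enters the arithmetic. The numbers below are mine, chosen only for illustration: the conventional full-frame criterion of c ≈ 0.03 mm, a 50 mm lens, f/8. The hyperfocal distance, the nearest focus at which the depth of field runs to infinity, works out to about

```latex
% Illustrative numbers only: c = 0.03 mm (conventional full-frame), f = 50 mm, N = 8.
H \;=\; \frac{f^{2}}{N\,c} + f
  \;=\; \frac{(50\ \mathrm{mm})^{2}}{8 \times 0.03\ \mathrm{mm}} + 50\ \mathrm{mm}
  \;\approx\; 10.5\ \mathrm{m}.
```

Focus there, and everything from roughly half that distance to infinity counts as acceptably sharp. Change what “acceptable” means, and the arithmetic, like the world, shifts.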
Is there a better metaphor for social thought? For the epistemological position of the social thinker, most pressingly, me? Every word is a double entendre, I presume unintentional. How well can I know a social moment, a political idea as embodied, the meaning of the smallest gesture? Not “not at all,” but not “completely” either. Therefore, when we write, that is, represent things with words, we must ask after “the part of the image that is acceptably (!) sharp.” Because things are not words. A question of judgment, of taste, with serious stakes for the honor of thinking.
Having just finished a book on the limitations and possibilities of social thought, I am taken aback. This is so succinct, so cutting. If this analysis by metaphor, with its latent brutality, were intended, I hope someday to meet the genius buried in Nikon, like Kojève in the early days of what is now called the European Union, or maybe T.S. Eliot at the bank, “swallowed” by the bureaucracy like a sand crab, cunning, knowing. I admire this, in the abstract, though I’ve always needed more exposure, vulnerability, glory? Ah, vanity.
Why have I been reading the Nikon Glossary of Photography? For now, let’s say the wolves in my head made me do it. After years, I’m going to have to move beyond even the most advanced phone camera, even though I’ve started shooting in RAW, which is great. Lots of new problems arise. What is photography for, anyway? No doubt there is no single answer to that question, but various answers probably say different things about both “art” and “technology,” what we are trying to accomplish with the equipment and skills available to us. I’m tired already. All for another day.
Beyond the circle of confusion. Unless “acceptable” includes impressionistic, which for me it often does, the camera breaking down and creating a painterly effect.
These pictures are not really about the butterflies. Like my recent bit on gulls, Quixote Sallies Forth, I’m thinking about the space (the temperature, the wind, the smell, the feel, the world) the butterflies inhabit, and the spaces such habitation creates. You might be interested in an older photo essay, Pictures Without Subjects.
There is a growing unease, among those who think about finance and political economy, with the scale of AI investment. It is said that OpenAI has done deals worth over a trillion dollars this year. The US stock market indices — and so recent highs — are overwhelmingly determined by a handful of companies involved with AI. There is much enthusiasm, but it is by its terms speculative. We’ve seen similar movies.
While I started worrying about an AI bubble some time ago, since then the anxieties have become commonplace. I’ve been thinking about audience, and whether this or that needs to be said, by me. Simply piling on to this or that received wisdom is not of interest to me, and in most cases seems wrong. So I hesitate to respond publicly.
An interlocutor sent me a short article on debt financing, which woke me up one recent morning. What follows is an amended version of my response. The organization may be useful, and some of the points not so obvious, especially to readers less familiar with finance, and some of this I’ve not read elsewhere, though my guess is somebody has written much the same. Anyway, here goes.
***
Hope this finds you well. You sent me something on AI and debt, sorry for tardy reply. In some weird combination of nerdiness and faulty manners, I woke up with my head spinning with thoughts. Off the top of my head/chest, and briefly:
Bubbles, even equity bubbles, are generally characterized by leverage. The idea of leverage (when should a business willingly take on debt, and how much?) is one of the harder things I try to teach. But, if you know X (here AI) is a sure thing, why not borrow to buy more of it? Success leads to more borrowing . . . the theorist of this is the late Kindleberger.
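A toy illustration, numbers invented for the purpose: buy a $100 asset with $50 of your own equity and $50 borrowed.

```latex
% Invented numbers: $100 asset, $50 equity, $50 debt.
\text{Asset up } 20\%:\quad \frac{(120 - 50) - 50}{50} = +40\% \text{ on equity}
\\[4pt]
\text{Asset down } 20\%:\quad \frac{(80 - 50) - 50}{50} = -40\% \text{ on equity}
```

A 20 percent move in the asset becomes a 40 percent move in the investor’s stake, which is precisely why a “sure thing” invites borrowing, and why the unwinding is so violent.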
In a classic equity play, leverage is provided by margin, i.e., brokers extend credit. The crash, then, tends to wipe out investor value and brokerage value, but at least initially it is kind of an industry balance sheet operation, and on a good day might be contained, as in the Silicon Valley bubble or the recent Archegos incident. Or not, if the financial institutions in question are also active in the real economy, as in 1929. Glass-Steagall was an effort to quarantine the financial economy, because Main Street banks had gotten involved in financial losses, and so “balance sheet” became a problem for payrolls, households, etc.
Relevant to #2, equity “financing” can lead to some eye-popping numbers. But market capitalization is, again, an accounting construct, and a rather speculative one: the number of shares times the price of the last share sold, as if all the shares could be sold at that price. As I often teach, how much cash can, say, Jeff Bezos generate? A lot, but not THAT much. So when we say OpenAI is worth X dollars, it’s not at all clear what that number means outside the context of startup math, how it translates. At this scale, a dollar is not a dollar, i.e., the dollar as unit of account is not equal to the dollar as a medium of exchange.
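To put crude, invented numbers on the point: a firm with 5 billion shares outstanding whose last trade printed at $100 shows a market capitalization of

```latex
% Invented numbers for illustration.
5 \times 10^{9}\ \text{shares} \times \$100\ \text{per share} \;=\; \$500\ \text{billion},
```

as if all five billion shares could be sold at $100. They cannot; selling any meaningful fraction moves the price, so the headline figure is a unit of account, not a pile of spendable cash.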
Many of the technological innovations of the last generation or two relied on a fair degree of installed base and/or cash flow. By the time the web or social media came along, people had computers or smartphones. Financing was required to build more/better, but the iPhone in particular was a consumer product. People pay for phones, and thus partially fund the expansion and penetration of the tech. Apple was, at some point, the biggest company in the world, but . . .
AI, in contrast, has almost no cash flow (sales are paltry, the real “product” is yet to come, we’re told) and requires a massive expansion of physical infrastructure in order to work at all. That is, somewhat ironically, even if “all you need is scale” as a matter of computer science (I believe this is wrong, fwiw), as a matter of finance, AI does not scale up, in the sense that you cannot start small and grow organically. The first LLMs already required massive amounts of data and compute. OpenAI is talking about trillions.
So AI is a financing problem with huge operational costs from the outset. Like the railroads, it is really only valuable when you’ve got a lot of it. This makes AI very different from earlier tech revolutions. It’s a return to the “big iron” of the IBM days, except worse; anyway, not the WELL of the glory days of the internet.
Equity vs. cash. To go commercial, even the largest tech companies (some of the largest companies ever) are not big enough. We need new data centers with physical computers, lots of them, which have massive requirements for water and especially electricity. (Estimates are that AI will require additional power equal to the power usage of Japan, the world’s fourth-largest and highly electrified economy.) To build real, physical stuff, you can’t just wave numbers around. This isn’t VC math, “two guys in a garage = ? million”; this is a construction company with backhoes and miles of actual metal cables, and computers on trucks. Such people need to be paid, with cash. Silicon Valley talk might get SoftBank’s attention, might even crash financial institutions, but it doesn’t get a power plant built. The railroads faced a similar problem. Even exploited labor costs money, and steel certainly does.
So now “the next big thing,” in contrast to recent tech big things, has a great hunger for cash. The classic answer, an IPO, is effectively verboten for complex reasons. Meanwhile the so-called direct lending industry has sprung up. So we see large pools of capital, many funded by insurance companies, engaged in very lightly regulated debt financing. We also see complex equity exchanges. What could go wrong?
The foregoing all assumes, rather heroically, that the technology works, not in any particular engineering sense, and certainly not AGI, but in the financial sense of being capable of generating a positive return on this massive investment within a reasonable time frame. (And there are many ways to get to positive cash flow, many of them unpleasant, e.g., required usage, energy costs, government bailouts, deregulation, etc.)
What worries me: can such returns be sufficiently evenly distributed to avoid the Scylla of the collapse of systemically significant and increasingly indebted institutions, and the Charybdis of a hugely powerful monopoly? New tech has generally been “winner take all.” Think Microsoft and the desktop, Apple and the phone, Google and search, Facebook and social media. Other companies are bought out, or fall by the wayside. That is, it is very difficult to see all of the players in the AI arms race winning enough to repay their investments. Zuckerberg has spoken of betting the company (Facebook). As with railroads, another successful and game changing technology, massive failures seem likely. In fact, if all goes well (from an enterprise standpoint) we seem likely to hit both the whirlpool and the rock, that is, to see systemically important failures combined with massive monopoly power.
As you of course know, once we are talking about trillions of dollars invested, we are in “too big to fail” territory. We, as a society (pensions, health care, governments, operating funds, usw.), have too much tied up in claims on these enterprises to allow an equity collapse. See #9. I (we) have elsewhere called this “social capitalism.” See, e.g., Westbrook & Westbrook, Progressive Governance Under Social Capitalism.
Is there a way out? Questions arise: whether some of these dollars are merely paper, whether equity holders can slowly take losses, etc.
And perhaps most heroically of all, there seems to be a tacit and unfounded assumption that this cash can be recouped by growth in productivity, and that such growth would be fairly widely distributed — enough to outweigh the reduction of enterprise labor costs and resulting increases in inequality.
I might as well stop here. A related, I think, set of questions surrounds the enterprise commitment to general approaches to ML (scale, AGI, etc.) when all the CS researchers I know, and most that I read, argue for architectures that are more focused on discrete domains, and generally combine ML with more traditional programming, perhaps even with formal attestation. I think the answer lies not in the technology, but in the economics. But these questions are live, for me, and so maybe I’ll write them up another day. Maybe.
Future’s so bright, I gotta wear shades.
I’m at a Mexican food truck. Not only are the food and the personnel Mexican, the truck itself is Mexican, as it proudly announces somewhere amidst the pictures of food, drinks, and Christ triumphant. Serious joint, parked in the lot of a shuttered sports bar. A short woman, voluptuous, sorta red-haired gringa, country gal of a certain age, is holding a bag up to the window. “Habaneros. They are really hot. I can’t eat them all. Can you take them, to make salsa?” The kid in the food truck who takes money and hands out beepers, then food, to the people in the parking lot knows he shouldn’t take supplies from passersby. But I can. So we start talking, about various pepper-related injuries even when you are careful, and wearing a mask when she worked at some salsa maker in Kansas City I think and . . .
That evening I wash some peppers, take out the seeds. My cheeks hurt, and I can feel it in my lungs. These things must have a billion Scoville Heat Units. (Hours later, despite all my hand washing, I’ll feel the heat on various patches of skin, but only as a tingle, not a burn much less an emergency.) I chop the pepper fine. Into the wok with some onion and a couple of chicken breasts. Saute. Add miscellaneous vegetables, soy, lemon, and agave. With beer. Strong beer.
Open to the world. Onward.
— David A. Westbrook