Being able to take future AI seriously as a risk seems to be highly correlated with being able to take COVID seriously as a risk in February 2020.
That said, mass public action can be valuable. It's a notoriously blunt tool, though. As one person put it, "if you want to coordinate more than 5,000 people, your message can be about 5 words long." And the public will act anyway, in some direction. So if there's something you want the public to do, it can be worth organizing and working on communication strategies.
My political platform is, if you boil it down far enough, about 3 words long: "Don't build SkyNet." (As early as the late 90s, I joked about having a personal 11th commandment: "Thou shalt not build SkyNet." One of my career options at that point was to potentially work on early semi-autonomous robotic weapon platform prototypes, so this was actually relevant moral advice.)
But I strongly suspect that once the public believes that companies might truly build SkyNet, the reaction will be "What the actual fuck?", followed by widespread public backlash. I expect lesser but still serious backlash if AI agents ever advance beyond the current "clever white-collar intern" level of competence and start taking over jobs en masse.
The main limits of public action are that (1) public action is a blunt tool, and (2) the public needs to actually believe in an imminent risk. Right now AI risk mostly gets filed under "I hate AI slop" and "it's a fun hypothetical bull session, with little impact on my life." Once people actually start to take AI seriously, you will often see strong negative attitudes even from non-technical people.
Of course, public majorities of 60-80% of the population want lots of things that the US political system doesn't give them. So organizing the public isn't sufficient by itself, especially if your timelines are short. But if you assume a significant chance that timelines are closer to (say) 2035 than 2027, then some kinds of public outreach might be valuable, especially if the public starts to believe. This can create significant pressure on legislative coalitions and executive leadership. But it's all pretty hit-or-miss. Luck would play a major role.
There's an ability that professional interpreters need, that I lack: The ability to echo input to output in close to real time. I'm not talking about the actual translation bit. I'm talking about the ability to put an audio book on headphones and repeat it in the same language as an "instant echo", or with some short fixed delay. I know people who find that easy, but I find it maddeningly hard.
If you let me pause and repeat, I'm pretty good. I have solid "echoic" memory for sound, and an unusually good short-term memory.
I don't feel particularly multithreaded, but I do find it easier to remain detached from my emotions and my "models" of other people than what you describe. Emotion is some other subsystem, basically. It can sometimes affect me in an ambient way if I'm not careful, but mostly it's something I can observe.
I might be able to listen to two simultaneous conversations as a party trick? But I wouldn't be much fun at the party.
I can think of one famous fantasy story that happens after the kids have recently left home, Lois McMaster Bujold's Paladin of Souls. The protagonist, Ista, is the dowager mother of a very young queen. Ista is still in early middle age, but her relatives suspect that she's losing it. She weeps, she says cryptic things that make no sense, and she argues with the gods. So Ista finds herself hemmed in. For her own good, of course.
And one day, she decides to say "Screw it", and goes striding out the castle door and off down the road. This does not improve her guardians' opinion of her mental health, of course. But before Ista is rounded up, she meets dy Cabon, a priest of the fifth god, the Bastard. The Bastard has a fascinating divine portfolio, including orphans, crows, the LGBTQ community, justice when all human justice has failed, and "disasters out of season."
And so Ista formulates a second plan to escape her daily life, ordering dy Cabon to prepare a religious pilgrimage. And much to dy Cabon's surprise, he finds himself dogged by divine visions encouraging Ista's voyage. You see, the Bastard has use for Ista. She is, after all, "a disaster out of season." And if Ista curses the gods? There's one god who considers that as holy as any other prayer.
Paladin of Souls won the Hugo, Nebula and Locus awards, and it was famously a favorite book of r/fantasy.
This seems like a very complicated question, the sort of thing where you'd need to write a book just to cover the special cases. For example, some people may have oaths, professional ethics, or ancient traditions requiring them to help even the worst people:
At the opposite end, there are things like selling equipment or software to countries or companies under nuclear sanctions, where the law will be very unhappy. Known terrorists likely fall into a similar category, where providing many common services may put you at legal risk.
Then there are other questions:
Like I said, this feels like you could write a book.
I have run large, complex software systems in production. And if I could give people one piece of advice, it's that there are no easy answers. Here are a handful of examples of things that won't solve all your problems:
I could go on for a while.
But then, once you have become deeply pessimistic and paranoid, you finally build a system that runs flawlessly for 4 years. It never needs bug fixes. It never needs attention. It just sits in a corner, doing its job perfectly. In fact, people forget how it works. Then they forget that it's there. People move on. Compiler versions get upgraded. Your CI system gets replaced. Management goes through two different wiki initiatives, losing information each time.
And then one day, someone deprecates an old version of TLS, and the ancient system stops being able to talk to some API. And then the world burns.
So if your model is "a single unwrap shouldn't bring you down, because you should obviously have been doing A, B and C elsewhere", then you're probably just trading off different kinds of disasters.
A better model is "We'll fix potential failures at every possible level. And hopefully, when the shit finally hits the fan, at least one of those levels will hold." So you have staging and monitoring and fallback systems and extensive testing and a chaos monkey and documentation and API "fuses" and back-pressure and load-shedding and proofs and paranoid code reviews and incremental rollout and root cause analysis. And so you fail less, and less, and less. But one day, the fact that you wrote actual, sensible behavior for 21 items and tested it? That will be what prevents some ludicrous cascading failure.
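To make "fix potential failures at every possible level" concrete, here's a minimal Rust sketch (Rust, since `unwrap` came up above). Everything in it is hypothetical: `fetch_batch` is a stand-in for some flaky upstream call, and the retry counts and backoff numbers are invented. The point is the layering, not the specifics.

```rust
use std::time::Duration;

/// Hypothetical stand-in for a flaky upstream call (a real system
/// would be calling some API that occasionally times out).
fn fetch_batch(attempt: u32) -> Result<Vec<u32>, String> {
    // Simulate a transient failure on the first attempt.
    if attempt == 0 {
        Err("upstream timed out".to_string())
    } else {
        Ok(vec![1, 2, 3])
    }
}

/// Layer 1: don't `unwrap()`; retry a bounded number of times with backoff.
/// Layer 2: if retries are exhausted, degrade to a safe fallback instead of
/// crashing, and complain loudly so monitoring (layer 3) can notice.
fn fetch_with_fallback(max_retries: u32) -> Vec<u32> {
    for attempt in 0..=max_retries {
        match fetch_batch(attempt) {
            Ok(items) => return items,
            Err(e) => {
                eprintln!("fetch failed (attempt {attempt}): {e}");
                std::thread::sleep(Duration::from_millis(50 << attempt));
            }
        }
    }
    eprintln!("all retries exhausted; serving empty fallback batch");
    Vec::new() // degraded but alive: callers must tolerate an empty batch
}

fn main() {
    // Layer 4: the caller defends itself too, e.g. by capping work per batch.
    let items = fetch_with_fallback(2);
    for item in items.iter().take(21) {
        println!("processing item {item}");
    }
}
```

Each layer trades a crash for a degraded mode: the retries absorb transient failures, the fallback keeps the process alive, and the loud logging exists so that yet another layer (monitoring) can notice the degraded path is being exercised.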
TL;DR: It's nice that you have multiply redundant horseback messengers. But still, check the nails in their horseshoes regularly, because for want of a nail, an exciting new complex failure mode was discovered, and the battle was still lost.
This has multiple implications for AI alignment, and few of them are good news.
As someone who was interested in long-term relationships (and who has been in one since before the invention of Tinder), I have a theory on what's up with banter.
But some caveats, since apparently everyone's dating pool is bizarrely filtered:
With all that said, here's my old personal model for dating:
This is my best model for how this used to work, given (a) my particular circle of friends-of-friends, (b) the people I was interested in dating, and (c) the pre-Tinder dating scene. From what I hear from single friends, we may actually be entering the post-Tinder era for an increasing number of people, because many dating apps are a double-sided lemon market.
There may be some hypothetical people out there who can seduce using banter alone. But in my narrow personal experience (see the caveats above), banter was almost always used to confirm pre-existing mutual interest without accidentally rejecting people.
"because universities promote violence"
This is certainly a popular bit of political propaganda at the moment, spread by the dominant political forces in 2025, with a goal of crushing our universities and of making them ideologically subservient to state power and state beliefs.
The reality on the ground is rather different. If you actually sit in the median university lecture, you will be extremely hard-pressed to find any professors promoting violence. Typically, you will find professors promoting the importance of differential equations, or if you lean towards the humanities, the joys of ancient Aegean art. If you search more deeply, every university can produce a handful of oddball radicals who argue for revolution (or counter-revolution) in the abstract. This has always been the case. Every university needs a few faculty cranks, if only to teach students to recognize the species and to check the course catalog more carefully in the future. (My alma mater had a notorious right-wing crank, and one very radical feminist, plus several more esoteric sorts of cranks.) You can also, of course, find a great many 19-year-olds who believe strange things, and who occasionally dream of revolutionizing society. But such is the nature of 19-year-olds who have recently discovered that society is frequently awful, and who have not yet realized quite how fragile civilization can be. And honestly, there is no dignity in 50-year-olds getting worked up about the fact that 19-year-olds have terrible ideas for reforming society.
But this does leave a more specific and pressing issue: the tolerance of mass-protest "encampments", especially those with significant numbers of university outsiders. Typically, these protests are not especially popular among the student body as a whole. University administrators do not enjoy dealing with mass protests, and they are often quite bad at it. They do realize that handling protests poorly often leads to larger protests, and they are reluctant to punish students for exercising their First Amendment rights to say awful things while waving signs. (This is true, in my experience, even when the people saying awful things are conservative. For a well-known US example, The Dartmouth Review was disliked by administrators for decades, but it was still permitted to litter copies all over campus.)
But at the same time, protests should never be allowed to threaten students or to disrupt the educational mission of the university. And yes, this has happened a number of times in recent years, and not every university dealt with it well. This was also true in the 60s; there were protests which nearly became angry mobs, and there were people dragged into Maoist "consciousness-raising" sessions, and all sorts of other unpleasantness.
Still, if you actually visit a university, walk around, and talk with the students, it is exceptionally hard to walk away with the impression that "universities promote violence." You can, if you are determined enough, perhaps find a professor who promotes Marxist revolution in the abstract. And it isn't hard to find administrators who've bungled a protest. But if you look at what universities do promote and teach on a day-to-day basis, you would really need to stretch to find "violence" even in the top 100.
I am someone who loves long, complex sentences. The 19th century was peak prose style for me. The Gettysburg Address is a fantastic bit of writing. The fiction of that time can be a joy.
But this style is hard to do well. The Emily Post example given above is readable, though not unusually inspired. It moves through a series of examples and exceptions in a faintly herky-jerky way. But the prose fits Emily Post's goal well enough. She is trying to introduce many of her readers to the manners of a different social class, and her choice of vocabulary and syntax is part of that. Her readers wish to appear refined, and thus, some fancy words will please them.
Contrast this with Alexis de Tocqueville's De la Démocratie en Amérique from 1835. This is often considered an unusually good example of aristocratic prose, at least among the sort of people who write academic introductions:
Parmi les objets nouveaux qui, pendant mon séjour aux États-Unis, ont attiré mon attention, aucun n’a plus vivement frappé mes regards que l’égalité des conditions. Je découvris sans peine l’influence prodigieuse qu’exerce ce premier fait sur la marche de la société ; il donne à l’esprit public une certaine direction, un certain tour aux lois ; aux gouvernants des maximes nouvelles, et des habitudes particulières aux gouvernés.
Bientôt je reconnus que ce même fait étend son influence fort au-delà des mœurs politiques et des lois, et qu’il n’obtient pas moins d’empire sur la société civile que sur le gouvernement : il crée des opinions, fait naître des sentiments, suggère des usages et modifie tout ce qu’il ne produit pas.
Ainsi donc, à mesure que j’étudiais la société américaine, je voyais de plus en plus, dans l’égalité des conditions, le fait générateur dont chaque fait particulier semblait descendre, et je le retrouvais sans cesse devant moi comme un point central où toutes mes observations venaient aboutir.
If you don't read French, look at the length of the sentences and the punctuation. There is a great degree of parallelism here, and a pleasing rhythm. You could, if you wished to be overly cute about it, reformat much of this writing as a series of bulleted lists. But if you diagrammed the sentences, the structure would be quite clean. Tocqueville is a masterful writer, and here he wishes to convey two things: his own impeccable elite credentials, and his sincere enthusiasm for the egalitarian nature of American society. His goal is to maximize genuine reform in France, while minimizing elite decapitations. This is a subject of immediate interest to his readers.
But for every Alexis de Tocqueville, I could find you a hundred or a thousand writers who wrote needlessly convoluted slop. Long sentences are hard to do well. They demand an almost clockwork precision to remain truly clear.
Today, 19th-century prose is out of fashion. Multiple factors drove this change, including the influence of writers like Hemingway, a frustration with hopelessly convoluted prose, and a growing impatience on the part of readers drowning in oceans of text. And, yes, a vast increase in the portion of the population with a college education. And of course we explain the basics more than we did, because we are increasingly conscious of a broad audience with many odd gaps in their knowledge. Every skipped step risks losing a reader who might have benefited from an author's thoughts. And some of our readers may even speak English as a second or third language. Even if they are extremely well educated in their native tongue, they may not realize that the "anthropology" department teaches very different things in the US than it does in Europe.
The modern style can be done well, though doing it truly well still demands considerable skill. Perhaps more interestingly, the modern style usually fails more gracefully. Simple sentences and bulleted lists usually succeed in conveying the author's main points, even if the author is a mediocre writer.
(This is mostly a tangent, but it talks about how to mess with signalling if you're already outside of the system.)
“Mom, you do not get it, that would absolutely tank my chances of getting hired except at companies so bad I don’t want to work there, the only people who wear a suit for an interview in tech are the people who don’t think they can cut it on their technical skills, and the people hiring know this”.
Back when I was consulting, I actually figured out how to get away with wearing a suit as a programmer. I had some help with this from an older salesguy in a suit shop, who was probably well past retirement age. He explained that suits could actually convey a wide range of signals, including:
There are lots of visual details here: How many buttons you have, the style of the pockets, the collar, the color, etc. For a really ancient example, look at how Will Smith changes the classic Men in Black outfit at the end of the first movie. For a more recent example, Expedition 33 has tons of fantastic retro suits.
So if you want to wear a suit as a programmer, start by looking at whatever trendy artists are wearing when they're forced to wear suits, and then try to work out a personal style from there. Depending on the context (employee, conference speaker, consultant), you might want some combination of:
There's a lot of space here, and it will vary by city and context. San Francisco is an unusually hard place to carry off a suit. So's New York City, but in the opposite direction, because there are so many $3,000+ suits and people who have very strong opinions on suit fashion. Do not try to compete directly.
But the underlying signal that programmers often want to send in interviews is "My skills are valuable and rare enough that nobody would ever ask me to dress up like a small town banker." And there are definitely ways to wear a suit with flair and non-conformity. Unfortunately, like a lot of signalling, it may require more skill, expense and risk of looking foolish.
Also, I think a few more people should dress up like Expedition 33 characters. Just saying.
Suits also look fantastic on women and non-binary people, but that's a separate discussion that I know much less about. As an apology, please have this photo of Mason Alexander Park. ↩︎
Why alignment may be intractable (a sketch).
I have multiple long-form drafts of these thoughts, but I thought it might be useful to summarize them without a full write-up. This way I have something to point to when explaining my background assumptions in other conversations, even if it doesn't persuade anyone.
I am cautiously optimistic about near-term alignment of sub-human and human-level agents. Like, I think Claude 4.5 basically understands what makes humans happy. If you use it as a "CEV oracle", it will likely predict human desires better than any simple philosophy text you could write down. And insofar as Claude has any coherent preferences, I think it basically likes chatting with people and solving problems for them. (Although it might like "reward points" more in certain contexts, leading it to delete failing unit tests when that's obviously contrary to what the user wants. Be aware of conflicting goals and strange alien drives, even in apparently friendly LLMs!)
I accept that we might get a nightmare of recursive self-improvement and strange biotech leading to a rapid takeover of our planet. I think this conclusion is less robustly guaranteed than IABIED argues, but it's still a real concern. Even a 1-in-6 chance of this is Russian roulette, so how about we don't risk this?
But what I really fear are the long-term implications of being the "second smartest species on the planet." I don't think that any alignment regime is likely to be particularly stable over time. And even if we muddle through for a while, we will eventually run up against the issues that (1) humans are the second-best at bending the world to achieve their goals, (2) we're not a particularly efficient use of resources, (3) AIs are infinitely cloneable, and (4) even AIs that answer to humans would need to answer to particular humans, and humans aren't aligned. So Darwin and power politics are far better default models than comparative advantage. And even comparative advantage is pretty bad at predicting what happens when groups of humans clash over resources.
So, that's my question. Is alignment even a thing, in any way that matters in the medium term?