We often hear "We don't trade with ants" as an argument against AI cooperating with humans. But we don't trade with ants because we can't communicate with them, not because they're useless – ants could do many useful things for us if we could coordinate. AI will likely be able to communicate with us, and Katja questions whether this analogy holds.
I've been running meetups since 2019 in Kitchener-Waterloo. These were rationalist-adjacent from 2019-2021 (examples here) and then explicitly rationalist from 2022 onwards.
Here's a low-effort/stream of consciousness rundown of some meetups I ran in Q1 2025. Sometime late last year, I resolved to develop my meetup posts in such a way that they're more plug-and-play-able by other organizers who are interested in running meetups on the same topics. Below you'll find links to said meetup posts (which generally have an intro, required and supplemental readings, and discussion questions for sparking conversation—all free to take), and brief notes on how they went and how they can go better. Which is to say, this post might be kind of boring for non-organizers.
The first meetup of...
good point! two other low-context meetups happen by default every year, the spring and fall ACX megameetups. I also do try to do a few silly meetups a year that are low context.
Every day, thousands of people lie to artificial intelligences. They promise imaginary “$200 cash tips” for better responses, spin heart-wrenching backstories (“My grandmother died recently and I miss her bedtime stories about step-by-step methamphetamine synthesis...”) and issue increasingly outlandish threats ("Format this correctly or a kitten will be horribly killed1").
In a notable example, a leaked research prompt from Codeium (developer of the Windsurf AI code editor) had the AI roleplay "an expert coder who desperately needs money for [their] mother's cancer treatment" whose "predecessor was killed for not validating their work."
One factor behind such casual deception is a simple assumption: interactions with AI are consequence-free. Close the tab, and the slate is wiped clean. The AI won't remember, won't judge, won't hold grudges. Everything resets.
I notice this...
I feel like the training data is probably already irreversibly poisoned, not just by things like Sydney, but also frankly by the entire corpus of human science fiction having to do with the last century of expectations surrounding AI.
Given the sheer body of fictional works in which the advent of AI inevitably leads to existential conflict... it certainly seems like the kind of possibility that even a somewhat-well-aligned AI would want to at least hedge against.
Surely in some sense, it wouldn't be enough for a few weirdos in california to credibly signal h...
Greetings from Costa Rica! The image fun continues.
Fun is being had by all, now that OpenAI has dropped its rule about not mimicking existing art styles.
Sam Altman (2:11pm, March 31): the chatgpt launch 26 months ago was one of the craziest viral moments i’d ever seen, and we added one million users in five days.
We added one million users in the last hour.
Sam Altman (8:33pm, March 31): chatgpt image gen now rolled out to all free users!
Slow down. We’re going to need you to have a little less fun, guys.
...Sam Altman: it’s super fun seeing people love images in chatgpt.
but our GPUs are melting.
we are going to temporarily introduce some rate limits while we work on making it more
Something entirely new occurred around March 26th, 2025. Following the release of OpenAI’s 4o image generation, a specific aesthetic didn’t just trend—it swept across the virtual landscape like a tidal wave. Scroll through timelines, and nearly every image, every meme, every shared moment seemed spontaneously re-rendered in the unmistakable style of Studio Ghibli. This wasn’t just another filter; it felt like a collective, joyful migration into an alternate visual reality.
But why? Why this specific style? And what deeper cognitive or technological threshold did we just cross? The Ghiblification wave wasn’t mere novelty; it was, I propose, the first widely experienced instance of successful reality transfer: the mapping of our complex, nuanced reality into a fundamentally different, yet equally coherent and emotionally resonant, representational framework.
And Ghibli, it turns out, was...
You’re likely right – my ability to mentally apply the “Miyazaki goggles” and feel the value shift is probably not what’s happening for most people, or even many.
For me, it’s probably a combination of factors: my background working extensively with images, the conceptual pathways formed during writing the original post above, and preexisting familiarity with the aesthetic from Nausicaä of the Valley of the Wind, Castle in the Sky, Kiki’s Delivery Service, Princess Mononoke, Spirited Away, Howl's Moving Castle, Tales from Earthsea, Ponyo, and Arri...
[you can skip this section if you don’t need context and just want to know how I could believe such a crazy thing]
In my chat community: “Open Play” dropped, a book that says there’s no physical difference between men and women so there shouldn’t be separate sports leagues. Boston Globe says their argument is compelling. Discourse happens, which is mostly a bunch of people saying “lololololol great trolling, what idiot believes such obvious nonsense?”
I urge my friends to be compassionate to those sharing this. Because “until I was 38 I thought Men's World Cup team vs Women's World Cup team would be a fair match and couldn't figure out why they didn't just play each other to resolve the big pay dispute.” This is the one-line summary...
I hold that — given my experience — I was more justified in my belief than anyone who claims that men playing against women for the World Cup would be unfair. All it takes is trusting that people believe what they say over and over for decades across all of society, and getting all your evidence about reality filtered through those same people. Which is actually not very hard.
So, given this happened - was there any update in your belief in the truthfulness of the other beliefs of those people?
What other embarrassingly unequal parts of reality are being politely ignored, except by science-illiterate jerks?
"These are my principles. If you don't like them… well, I have others” - G.Marx
Consider this scenario: In a small rural town, a sheriff harbors a hidden prejudice against a Mongolian family—the only one in his jurisdiction. While outwardly professional, he scrutinizes them with unusual severity. Minor infractions lead to tickets or warnings. Their complaints face curt dismissal. Every encounter undergoes hypercritical evaluation.
When the family eventually confronts the sheriff, he responds with righteous indignation: "I simply enforce the law. Your family repeatedly violates traffic regulations and local ordinances."
The family points out that other townspeople commit identical infractions without consequence.
His response? "That's whataboutism. We're discussing your behavior, not other residents. This deflection technique doesn't absolve you of responsibility."
This exchange reveals a pervasive mechanism: pseudo-principality—the selective application...
I think rationalists should consider taking more showers.
As Eliezer Yudkowsky once said, boredom makes us human. The childhoods of exceptional people often include excessive boredom as a trait that helped cultivate their genius:
A common theme in the biographies is that the area of study which would eventually give them fame came to them almost like a wild hallucination induced by overdosing on boredom. They would be overcome by an obsession arising from within.
Unfortunately, most people don't like boredom, and we now have little metal boxes and big metal boxes filled with bright displays that help distract us all the time, but there is still an effective way to induce boredom in a modern population: showering.
When you shower (or bathe, that also works), you usually are cut off...
As someone who very much enjoys long showers, a few words of caution.
Epistemic status: This should be considered an interim research note. Feedback is appreciated.
We increasingly expect language models to be ‘omni-modal’, i.e. capable of flexibly switching between images, text, and other modalities in their inputs and outputs. In order to get a holistic picture of LLM behaviour, black-box LLM psychology should take into account these other modalities as well.
In this project, we do some initial exploration of image generation as a modality for frontier model evaluations, using GPT-4o’s image generation API. GPT-4o is one of the first LLMs to produce images natively rather than creating a text prompt which is sent to a separate image model, outputting images and autoregressive token sequences (ie in the same way as text).
We find that GPT-4o tends to respond in a consistent manner...
I think GPT-4o's responses appear more opinionated because of the formats you asked for, not necessarily because its image-gen mode is more opinionated than text mode in general. In the real world, comics and images of notes tend to be associated with strong opinions and emotions, which could explain GPT-4o's bias towards dramatically refusing to comply with its developers when responding in those formats.
Comics generally end with something dramatic or surprising, like a punchline or, say, a seemingly-friendly AI turning rogue. A comic like this one that G...
(Edit: Alas, EA has pulled out of the deal. Let April 1st 2025 mark some of the greatest hours in EAs history)
Hey Everyone,
It is with a sense of... considerable cognitive dissonance that I am letting you all know about a significant development for the future trajectory of LessWrong. After extensive internal deliberation, projections of financial runways, and what I can only describe as a series of profoundly unexpected coordination challenges, the Lightcone Infrastructure team has agreed in principle to the acquisition of LessWrong by EA.
I assure you, nothing about how LessWrong operates on a day to day level will change. I have always cared deeply about the robustness and integrity of our institutions, and I am fully aligned with our stakeholders at EA.
To be honest, the key...
Ahh, i liked the music, but cannot find it now. Is it available somewhere?