One winter a grasshopper, starving and frail after spending the summer singing and dancing, approaches a colony of ants drying their grain in the sun to ask for food.
Then, various things happen.
One last, even more speculative thought:
Literally everything the racist juror does in the back half of the movie is weird and suspicious. It's strange that he expects people to be convinced by his bigoted tirade; it's also strangely convenient that he's willing to vote not guilty by the end even though he A) hasn't changed his mind and B) knows a hung jury would probably eventually lead to the death of the accused, which he wants.
I don't think it's likely, but I'd put maybe a ~1% probability on . . .
. . . him being in league with the protagonist, and them running a two-man con on the other ten jurors to get the unanimous verdict they want.
Although the risk of frog-boiling human rights abuses won't go away anytime soon, it's also important to keep in mind how Trump got popular: by doing whatever makes the left condemn him, because right-wingers seem to interpret that as a costly, credible signal of commitment to them/the right/opposing the left. His administration has spent a decade following this strategy about as consistently as can reasonably be expected of a sitting president, most of the time landing on strategies that provoke condemnation from liberals in non-costly or ambiguously...
The comments here are a store of not-posts and not-ideas that I would rather write down than not.
I frequently hear people claim that progress in theoretical physics has stalled, partly because all the focus is on string theory and string theory doesn't seem to pan out into real advances.
Believing this would fit my existing biases, but I notice that I lack the physics understanding to really know whether or not there's been progress. What do you think?
As Adam Scherlis implies, the standard model turns out to be very effective at all the scales we can reach. There are a handful of phenomena that go beyond it - neutrino masses, "dark matter", "dark energy" - but they are weak effects that offer only scant clues as to what exactly is behind them.
On the theoretical side, we actually have more models of possible new physics than ever before in history, the result of 50 years of work since the standard model came together. A lot of that is part of a synthesis that includes the string theory paradigm, but th...
In 2021 I wrote what became my most popular blog post: What 2026 Looks Like. I intended to keep writing predictions all the way to AGI and beyond, but chickened out and just published up till 2026.
Well, it's finally time. I'm back, and this time I have a team with me: the AI Futures Project. We've written a concrete scenario of what we think the future of AI will look like. We are highly uncertain, of course, but we hope this story will rhyme with reality enough to help us all prepare for what's ahead.
You really should go read it on the website instead of here, it's much better. There's a sliding dashboard that updates the stats as you scroll through the scenario!
But I've nevertheless copied the...
There is no infinite growth in nature. Everything will hit a ceiling at some point. So I agree that the intelligence explosion will eventually take a sigmoid shape as it approaches the physical limits. However, I think the physical limits are far off. While we will get diminishing returns from each individual technology, we will also shift to a new technology each time. It might slow down once the Earth has been transformed into a supercomputer, as interplanetary communication naturally slows down processing speed. But my guess is that this will happen long after the scenario described here.
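To pin down what I mean by a sigmoid, here's a minimal sketch using the logistic curve as the standard example (the symbols are purely illustrative, not anything from the scenario itself):

$$
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
$$

Early on, while $f(t) \ll L$, this behaves like exponential growth, $f(t) \approx L\,e^{k(t - t_0)}$; the curve only flattens out as it approaches the ceiling $L$, which in this analogy stands for the physical limits I expect to be far off.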
A new Anthropic paper reports that reasoning model chain of thought (CoT) is often unfaithful. They test on Claude Sonnet 3.7 and r1, I’d love to see someone try this on o3 as well.
Note that this does not have to be, and usually isn’t, something sinister.
It is simply that, as they say up front, the reasoning model is not accurately verbalizing its reasoning. The reasoning displayed often fails to match, report or reflect key elements of what is driving the final output. One could say the reasoning is often rationalized, or incomplete, or implicit, or opaque, or bullshit.
The important thing is that the reasoning is largely not taking place via the surface meaning of the words and logic expressed. You can’t look at the words and logic...
I mean, this applies to humans too. The words and explanations we give for our actions are often just post hoc rationalisations. An efficient text predictor must learn not what the literal words in front of it mean, but the implied scenario and thought process they mask, and that is a strictly nonlinear and "unfaithful" process.
A recent essay called "Keep the Future Human" made a compelling case for avoiding building AGI in the near future and building tool AI instead.
The main point of the essay is that AGI is the intersection of three key capabilities:
It argues that these three capabilities are dangerous when combined: they pose unacceptable risks to humanity's job market and culture, and would replace rather than augment humans. Instead of building AGI, the essay recommends building powerful but controllable tool-lik...