Those aren't iPhones, though, and I think the distinction is relevant in the context of the OP.
Does he know anything? Yeah sure it's possible who knows
Do you mean this only in the sense that it's technically true of everyone? I wasn't familiar with him, but a look at his twitter page makes him seem like an obvious grifter/troll who will actively make me dumber, not merely someone to rule out on "life is too short" grounds.
There seem to be plenty of examples, but here's one from February last year:
watched a demo yesterday that casually solved protein folding while simultaneously developing metamaterials that shouldn't be physically possible. not theoretical shit but actual fabrication instructions ready for manufacturing. the researchers presenting it looked shell shocked. some were laughing uncontrollably while others sat in stunned silence. there's no roadmap for this level of cognitive explosion.
we've crossed into recursive intelligence territory and it's no longer possible to predict second order effects. forget mars terraforming or fusion. those are already solved problems just waiting for implementation. the real story is the complete collapse of every barrier between conceivable and achievable.
And then there's whatever the fuck this is: https://nitter.net/iruletheworldmo/status/1894864524517462021
Maybe both of those examples were meant to be funny. But to put him back into mere "life is too short" territory, I'd need to see some true non-public information sprinkled amongst the nonsense. The closest I saw was this, from Feb 11:
someone inside anthropic told me they’re releasing claude 4 this week. and a reasoning model
Charitably we can say he was pretty close; Claude 4 came much later, but 3.7 Sonnet was released only a couple of weeks after that tweet. But given that the details were wrong, and it hardly took inside knowledge to expect a new Claude release relatively soon, this looks much more like a dishonest guess than a true statement.
I'm not a writer either, and if I tried to write a story it would probably be worse than this. But, as well as having problems that are hard to define precisely (the lifelessness of the prose) and problems that are arguably a matter of taste (the mild-to-moderate AI-slop vibe), it is much more on the nose than most good stories. Rather than leaving the reader to put some things together, it constantly states exactly what conclusion we should have drawn from the preceding paragraphs.
You'll need to see these in context to confirm how unnecessary they are, but here are some examples:
The same investors who funded the AI that “threatened” academic integrity were funding the detectors that “protected” against it. And the services that “fixed” it.
It wasn’t a bug. It was the business model.
and
The pattern was clear. Coherent arguments looked like AI. Well-structured prose looked like machines. Zero typos looked suspicious.
Good writing was indistinguishable from AI. Bad writing passed as human.
and
It read like a recipe blog. Like those articles where you scroll through twenty paragraphs of filler before getting to the instructions on how to boil an egg.
and
His carefully researched exposé had become a meme. The protection racket remained unexposed. The only version that passed was the one nobody could understand.
The truth had become unpublishable by being well-written.
I think if what we were seeing was genuine insider trading, it would likely look a lot more like “buys shares at $0.01 way in advance for oddly specific date”, sells when market resolves.
I feel like you're arguing against a pretty narrow definition of insider trading, where the trader must have near-perfect information about the future at a time when everyone else has ~0 information. But,
I think this applies to most of your post, not just the sentence I quoted. For example, you talk about the irrationality of selling before resolution, but I don't see why the hypothetical insider would necessarily know those outcomes with certainty in advance. So why not take the profit early rather than risking it?
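To make the early-selling point concrete with a toy calculation (all prices and probabilities here are invented for illustration, not estimates of any real market):

```python
# Toy expected-value comparison for a prediction-market insider:
# sell now at the current price, or hold to resolution.
buy_price = 0.10      # bought cheap on partial inside information
current_price = 0.70  # market has since moved toward the position
p_win = 0.80          # the insider's *subjective* probability of YES

profit_sell_now = current_price - buy_price              # risk-free: 0.60
ev_hold = p_win * 1.00 + (1 - p_win) * 0.00 - buy_price  # 0.70 on average

print(f"sell now: {profit_sell_now:+.2f} guaranteed")
print(f"hold:     {ev_hold:+.2f} expected, with a {1 - p_win:.0%} chance of {-buy_price:+.2f}")
```

An insider who is only 80% confident is trading a guaranteed 0.60 against an expected 0.70 with a real chance of losing the whole stake, so selling before resolution looks perfectly rational for anyone even slightly risk-averse.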
My impression is that you're largely arguing against the 'literally Trump, or someone in direct and ongoing collaboration with Trump' hypothesis. But there's so much more room for it to be someone who had a big piece of inside info, but wasn't in control of any high-level decisions and couldn't see the future with certainty.
It's just not interesting to count "how are most lines of code written" globally. "Code actually merged at particular big tech companies" is much closer and I expect that was less than 90 percent at the time.
Yeah I definitely agree with the first sentence, and I was assuming some kind of sensible qualifier like the one you suggest in the second. From a quick look at the transcript, I don't think it's clear precisely what he meant, but (confirming the impression given by the clip you linked) the context was "the job side of this", so I'd guess something like "90% of economically useful code".
Here's an evaluation claiming it isn't even true at Anthropic in terms of lines of code
Thanks for the link. I'm a bit sceptical of the claim that "interpret[ing] the prediction as being about Anthropic itself" is the charitable read; I would expect AI usage, as a proportion of lines written, to be highest at companies doing relatively boring, unoriginal work with lots of boilerplate and minor variations on established patterns (CRUD apps etc.) and much lower at companies that are creating something truly new. But of course that's offset by the fact that Anthropic has extra reasons to use its own product and the lowest possible barrier to doing so. Ultimately, I do think Amodei was probably wrong, but I wouldn't be surprised if the true figure was/is pretty high.
Dario Amodei: “In 3 to 6 months… AI is writing 90 percent of the code.”
Evaluation (6 months being Sep 2025): False in the relevant sense. (“Number of lines”, for example, is not a relevant metric.)
Is it clear that he didn't mean something like "number of lines"? I know it's a bad measure of productivity, man-hours saved, developers replaced, and so on, but it's a fairly natural measure of 'amount of code', which I thought was probably what he was referring to.
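For what it's worth, here's roughly how I'd operationalize "number of lines" if I had to. The attribution heuristic (identifying AI-assisted commits by a commit-message trailer) is entirely my assumption; I don't know how the linked evaluation actually counted:

```python
# Crude sketch: estimate the share of added lines that came from
# AI-assisted commits in a git repo, assuming (big assumption!) such
# commits are identifiable by a commit-message trailer.
import subprocess

def added_lines(repo: str, *log_filters: str) -> int:
    """Sum lines added across commits matching the given git-log filters."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--numstat", "--pretty=format:", *log_filters],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        cols = line.split("\t")
        if len(cols) == 3 and cols[0].isdigit():  # binary files show "-"
            total += int(cols[0])
    return total

repo = "."
ai = added_lines(repo, "--grep=Co-authored-by: Claude")  # hypothetical tag
print(f"AI share of added lines: {ai / max(added_lines(repo), 1):.0%}")
```

Even this version of the metric flatters or punishes the claim depending on how much generated code gets deleted or rewritten before merging, which is part of why it's so contested.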
That sounds difficult.
Yes, sorry -- 'magically find more money' was not exactly a helpful suggestion! (I think I was more confident in the negative part of my critique, but wanted to at least try to offer something constructive.) I do think it could potentially work for quite small values of 'significantly augment', though, if that is an option; just enough to take the game from zero sum to non-negligibly positive sum.
what if there was a $10 buy in where if we win we get our money back but if we lose the money is donated to some pre-agreed upon charity?
I feel like this structure could be improved, as I don't think it would press the right psychological buttons. If I'm enthusiastic about the charity, it's strange to be (effectively) playing against it -- especially if there's a relatively large amount of money at stake, most of it conditionally donated by people other than me, so the altruistic benefit of losing the game clearly outweighs the selfish benefit of winning it. And if I'm not enthusiastic about the charity, I have no strong motive to put any money up, when the best I can personally do is break even.
If you're expecting the participants to be enthusiastic about the charity, maybe a better structure would be to ditch the idea of giving the players their money back, and find someone (or offer yourself) to match (or at least significantly augment) the donation if the human team wins. That way they're all working together for a good cause.
(If you wanted to go in the other direction and appeal to their selfish motives, you could stick with something similar to the original plan but make the winning outcome better than breakeven -- either by putting some extra money in the pot, or by breaking the humans up into two teams and giving some of the worse-performing team's money to the better-performing team while donating the rest to charity.)
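A toy comparison of the first two payoff structures, with invented numbers (10 players, a $10 buy-in, and a 1:1 match; none of these come from the original proposal):

```python
# Toy payoffs under the original buy-in structure vs. the matched-donation
# variant. All numbers are invented for illustration.
PLAYERS, BUY_IN, MATCH = 10, 10, 1.0
POT = PLAYERS * BUY_IN

def original(win: bool) -> dict:
    # Original proposal: win -> everyone refunded, lose -> pot to charity.
    return {"per player": 0 if win else -BUY_IN,
            "charity": 0 if win else POT}

def matched(win: bool) -> dict:
    # Variant: no refunds; a sponsor matches the pot on a win, so the
    # players and the charity now want the same outcome.
    return {"per player": -BUY_IN,
            "charity": POT + (POT * MATCH if win else 0)}

for win in (True, False):
    print(f"win={win}: original={original(win)} matched={matched(win)}")
```

Under the original structure an altruistic player is torn (winning saves them $10 but costs the charity $100); under the matched structure the charity gets $200 on a win and $100 on a loss, so everyone is pulling the same way.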
The only scenario in which an individual congressperson has power is a close vote in a contested district where the party can't take the risk of a primary challenger.
The last condition (party can't risk a primary challenge) is only necessary when the congressperson plans to run again and can't even credibly pretend to be willing to risk their nomination. Right now there are 54 congresspeople who have already announced that they won't be recontesting their current seat; many of those are running for a different office, but a little over half are simply retiring. (And I don't think that count includes any senators who aren't up for election in 2026.)
And, of course, any congressperson can decide at any time that they care enough about a vote to risk losing their party's support and having to find a new job when their term is up. So unless there's some kind of truly scary behind-the-scenes coercion, I think it's a big exaggeration to say that, outside that narrow scenario, they're powerless even when a vote is close.
The quoted text is (mostly) on that page, under the heading ESPR 2017, but it refers to an unnamed "staff member". (And the quote isn't quite verbatim, even aside from the word 'remain', so it won't work in full as a ctrl+F search term.)