It might be that some elements of human intelligence (at least at the civilizational level) are culturally/memetically transmitted. All fine and good in theory. Except the social hypercompetition between people and the intense selection pressure on ideas online might be eroding our world's intelligence. Eliezer wonders if he's only who he is because he grew up reading old science fiction from before the current era's memes.
Some competitions have a clear win condition: In a race, be the first to cross a finish line.
The US-China AI competition isn’t like this. It’s not enough to be the first to get a powerful AI system.
So, what is necessary for a good outcome from the US-China AI competition?
I thought about this all the time as a researcher on OpenAI’s AGI Readiness team: If the US races to develop powerful AI before China - and even succeeds at doing so safely - what happens next? The endgame is still pretty complicated, even if we’ve “won” the race by getting to AGI[1] first.
I suggest two reframes on the US-China AI race:
ASI governed by a value system which, if placed in complete control of the Earth, would still be something we could live with,
That's exactly my point. However, once the value system is defined, it will either lock mankind in or be corrigible. The former case includes options like my take, where the AI only provides everyone with access to education and enforces only universally agreed-upon political opinions, or[1] situations where the AI builds the Deep Utopia or governs the world, criminalising social parasitism worldwide. The race...
What a week, huh? America signed a truly gigantic chip sales agreement with UAE and KSA that could be anything from reasonable to civilizational suicide depending on security arrangements and implementation details, Google announced all the things, OpenAI dropped Codex and also bought Jony Ive’s device company for $6.5 billion, Vance talked about reading AI 2027 (surprise, in a good way!) and all that other stuff.
Lemon, it’s Thursday, you’ve got movie tickets for Mission Impossible: Final Reckoning (19th and Broadway AMC, 3pm), an evening concert tonight from Light Sweet Crude and there’s a livestream from Anthropic coming up at 12:30pm eastern, the non-AI links are piling up and LessOnline is coming in a few weeks. Can’t go backwards and there’s no time to spin anything else out...
If you go to Amazon, most of the books in that section look similar.
BPC-157, a peptide frequently marketed as a breakthrough for healing and tissue repair, has attracted substantial attention in wellness and performance communities. It’s discussed in forums, recommended by biohackers, and even offered in clinics—despite lacking FDA approval or broader clinical recognition.
The challenge is personal: we all want healing when we’re hurting—but how do we evaluate bold health claims like those surrounding BPC-157 when strong evidence is absent, the origin story is murky, and anecdotes sound convincing? And more pointedly—what does the evidence say about BPC-157’s effectiveness? It’s not just about one compound—it’s a test of how we decide which health treatments to trust when the choice is ours to make. Whether it’s a supplement, a therapy, or advice from a friend, what guides our choices is our...
For some reason, all current benchmarks, with the sole exception of OSWorld[1], now seem to differ from one another by a factor of less than 3. Does this imply that progress on every benchmark is likely to slow down?
OSWorld resembles a physical task, the kind of task at which LLMs tend to fail. However, the article about LLMs failing basic physical tasks was written on April 14, before the pre-release of Gemini Diffusion. It remains to be seen how well diffusion-based LLMs deal with physical tasks.
The U.S. 30-year Treasury rate has reached 5.13%, a level last seen in October 2023. Before that, the rate was last at this level in 2007, when the U.S. federal debt was about $9 trillion. Today, that debt is nearing $37 trillion.
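As a rough illustration (my own back-of-the-envelope, not from the original post), here is what the same yield implies at each era's debt level, under the simplifying assumption that the entire debt stock were financed at the 30-year rate:

```python
# Back-of-the-envelope only: assumes the entire debt stock is financed
# at the 30-year rate, which overstates actual interest cost (the real
# average rate depends on the maturity mix of outstanding debt).
RATE = 0.0513  # 30-year Treasury yield

for label, debt_trillions in [("2007", 9), ("today", 37)]:
    interest = debt_trillions * RATE  # implied annual interest, in $T
    print(f"{label}: ${debt_trillions}T debt -> ~${interest:.2f}T/yr in interest")
```

The same yield that implied roughly $0.46 trillion a year in 2007 implies nearly $1.9 trillion a year today, which is why an identical rate reads very differently now.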
I believe bond market participants are signaling a lack of confidence that the fiscal situation in the United States will improve during President Trump’s second administration. Like many financial professionals, I had high hopes that President Trump’s election would put the fiscal situation in order. Unfortunately, the "Depa...
It does seem likely that this is less legible by default, although we'd need to look at complete examples of how the sequence changes across time to get a clear sense. Unfortunately I can't see any in the paper.
Yes, this topic has been discussed multiple times. But at this point, long enough ago that my choices for joining the conversation are [Necro] or [New post], so here we are. (You can jump to the 2nd to last paragraph if you want to just see my conclusion.)
Background: Why am I asking this question?
After reading some comments I realize I should give some background on what I mean and why I'm asking this question.
Winning here refers to gaining utility, i.e. achieving your goals, whatever those goals may be, and is in reference to this post.
The reason for asking the question at all is that I personally expected a large improvement across domains from learning rationality. Plenty of articles here hint at how strong of a power
Good point that it's broad; maybe the reasons are domain-specific. You might be right about chess specifically, but I lean against concluding that insufficient education is the reason rationalists aren't winning in most domains. Rationalists, on average, are significantly more educated than the general population, and I'd imagine that gap grows when you take into account self-directed education.
Have the Accelerationists won?
Last November Kevin Roose announced that those in favor of going fast on AI had now won against those favoring caution, with the reinstatement of Sam Altman at OpenAI. Let’s ignore whether Kevin’s announcement was a good description of the world, and deal with a more basic question: if it were so—i.e. if Team Acceleration would control the acceleration from here on out—what kind of win was it that they won?
It seems to me that they would have probably won in the same sense that your dog has won if she escapes onto the road. She won the power contest with you and is probably feeling good at this moment, but if she does actually like being alive, and just has different ideas about how safe...
Gut reaction is “nope!”.
Could you spell out the implication?