AI can learn directly from the passage of time at unlimited scale—no human annotation required. Time provides free supervision. Humans learn from experience with no labels—we constantly form expectations about the world, notice when we're wrong, and update our models accordingly. "Future-as-Label" teaches AI to learn the same way. The...
“This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection.” - Darwin

Discussions of the risks of AI often fixate on what a handful of powerful actors decide. What capabilities will Anthropic prioritize? Will OpenAI open-source more models? What safety testing will Congress require? But...
LLMs can teach themselves to better predict the future - no human examples or curation required. In this paper, we explore whether AI can improve its forecasts via self-play and real-world outcomes:
- Dataset: 12,100 questions and outcomes from Polymarket (politics, sports, crypto, science, etc.)
- Base model generates multiple...
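The core loop this teaser describes, sample several forecasts, wait for the real-world outcome, and prefer the best-calibrated candidates, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual code: the question, the `sample_forecasts` stand-in, and the use of Brier-style scoring are all assumptions.

```python
# Hypothetical resolved question; values are illustrative stand-ins,
# not drawn from the paper's dataset.
question = {"text": "Will X happen by 2025?", "outcome": 1}  # 1 = yes, 0 = no

def sample_forecasts(question_text, n=4):
    # Stand-in for the base model producing several candidate probabilities.
    return [0.2, 0.55, 0.7, 0.9][:n]

def brier(prob, outcome):
    # Squared error between forecast probability and realized outcome:
    # the future itself supplies the label, so no human annotation is needed.
    return (prob - outcome) ** 2

# Score each candidate against the outcome once the question resolves.
forecasts = sample_forecasts(question["text"])
ranked = sorted(forecasts, key=lambda p: brier(p, question["outcome"]))

# The best-calibrated candidate can then serve as a fine-tuning target.
best = ranked[0]
print(best)  # -> 0.9
```

In a full pipeline the ranked forecasts would feed a preference or fine-tuning step on the base model; the sketch above only shows the outcome-as-label scoring that makes that possible.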
Are we on the verge of an intelligence explosion? Maybe, but scaling alone won't get us there. Why? The human data bottleneck. Today’s models are dependent on human data and human feedback. Human-level intelligence (AGI) might be possible by teaching AI everything we know, but superintelligence (ASI) requires learning things...
In the weeks since Biden’s disastrous debate performance, pundits have theatrically expressed surprise at his apparent cognitive decline, despite years of clear and available evidence in the form of video clips, detailed reports, and fumbled public appearances. This is far from the first popular narrative that has persisted in...
Extreme outcomes drive tax revenue

Power Laws

Let's do a thought experiment. The year is 1800. You’re a loan officer at a bank. Farmers come to you asking for a loan - maybe to purchase new equipment, buy more land, or make investments to improve their farm’s productivity. What are...
LessWrong Intro: I'm new to the LessWrong community, but was inspired to write this after recently reading The Sequences. It seems to me there's a paradox to social progress: it tends to undermine itself, because societies making rapid ethical progress will (necessarily) become ashamed of their recent past...