How well can large language models predict the future?

by Mattreynolds
8th Oct 2025
1 min read

This post was rejected for the following reason(s):

This is an automated rejection. We do not accept LLM-generated, heavily LLM-assisted/co-written, or otherwise LLM-reliant work. An LLM-detection service flagged your post as >50% likely to be written by an LLM. We've been seeing a wave of LLM-written or co-written work that doesn't meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is sort of like an application to a college: it should be optimized for demonstrating that you can think clearly without AI assistance.

So, we reject all LLM-generated posts from new users. We also reject work in certain categories that are difficult to evaluate and typically turn out not to make much sense, which LLMs frequently steer people toward.*

"English is my second language, I'm using this to translate"

If English is your second language and you were using LLMs to help you translate, try writing the post yourself in your native language and running it through different (preferably non-LLM) translation software to translate it directly.

"What if I think this was a mistake?"

For users who get flagged as potentially LLM but think it was a mistake, if all 3 of the following criteria are true, you can message us on Intercom or at team@lesswrong.com and ask for reconsideration.

  1. you wrote this yourself (not using LLMs to help you write it)
  2. you did not chat extensively with LLMs to help you generate the ideas (using one briefly the way you'd use a search engine is fine, but if you're treating it more like a coauthor or test subject, we will not reconsider your post)
  3. your post is not about AI consciousness/recursion/emergence, or novel interpretations of physics. 

If any of those are false, sorry, we will not accept your post. 

* (examples of work we don't evaluate because it's too time-costly: case studies of LLM sentience, emergence, recursion, novel physics interpretations, or AI alignment strategies developed in tandem with an AI coauthor – AIs may seem quite smart, but they aren't actually good judges of the quality of novel ideas.)

This is a linkpost for https://forecastingresearch.substack.com/p/ai-llm-forecasting-model-forecastbench-benchmark


When will artificial intelligence (AI) match top human forecasters at predicting the future? In a recent podcast episode, Nate Silver predicted 10–15 years. Tyler Cowen disagreed, expecting a 1–2 year timeline. Who’s more likely to be right?

Today, the Forecasting Research Institute is excited to release an update to ForecastBench—our benchmark tracking how well large language models (LLMs) forecast real-world events—with evidence that bears directly on this debate. We’re also opening the benchmark for submissions.

Here are our key findings:

  • Superforecasters still outperform leading LLMs, but the gap is modest. The best-performing model in our sample is GPT-4.5, which achieves a Brier score of 0.101 versus superforecasters' 0.081 (lower is better; see the worked sketch after this list).
  • LLMs now outperform non-expert members of the public. A year ago, the median public forecast ranked #2 on our leaderboard, behind only superforecasters and ahead of every LLM. Today it sits at #22.
  • State-of-the-art LLMs show steady improvement, with projected LLM–superforecaster parity in late 2026 (95% CI: December 2025 – January 2028). Across all questions in our sample, LLM performance improves by around 0.016 Brier points per year. Linear extrapolation from the current gap (worked through below) suggests LLMs could match expert human performance on ForecastBench in around a year if current trends continue.
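
To make the headline metric concrete, here is a minimal Python sketch of how a Brier score is computed. The forecast probabilities and outcomes below are made-up illustrations, not ForecastBench data:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better: a perfect forecaster scores 0.0, and always
    answering 0.5 scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Toy data: five questions that resolved yes (1) or no (0).
outcomes = [1, 0, 0, 1, 1]

# A sharp, well-calibrated forecaster vs. a hedgier one.
print(brier_score([0.9, 0.2, 0.1, 0.8, 0.7], outcomes))  # 0.038
print(brier_score([0.6, 0.4, 0.5, 0.6, 0.5], outcomes))  # 0.196
```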
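The parity projection can also be sanity-checked with back-of-the-envelope arithmetic on the post's own numbers. This simple division assumes the trend stays linear and is only an approximation of the benchmark's projection:

```python
# Headline numbers from this post.
best_llm_brier = 0.101          # GPT-4.5
superforecaster_brier = 0.081
improvement_per_year = 0.016    # Brier-point reduction per year

gap = best_llm_brier - superforecaster_brier    # 0.020
years_to_parity = gap / improvement_per_year    # 1.25
print(f"~{years_to_parity:.2f} years to parity")
```

Roughly 1.25 years from October 2025 lands in early 2027, within the reported 95% interval; the late-2026 point estimate presumably comes from fitting the full trend rather than this single division.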