Three years ago, back in 2022, I wrote "A Tentative Timeline of The Near Future (2022-2025) for Self-Accountability." Well, 2025 is almost over now, so let's see how well I did! I'll go over each individual prediction, and assign myself a subjective grade based on how close I got to the truth.

Predictions for 2022

Post written by AI with minimal prompting reaches 30+ upvotes on LessWrong
Score: probably D. I didn't see any high-karma posts from 2022 that were obviously AI-generated, but frankly, I didn't look very hard. I remember reading a few experimental AI-generated posts, but they were all downvoted pretty badly at the time. There were a lot of posts that included smaller excerpts of AI text, but that's not really what I was aiming for, so I'll say I failed this prediction.
AI can regularly fool a randomly-selected (from American population), non-expert judge in a 10-minute Turing test.
Score: D-. What in the world was I thinking with this one?? I suspect I severely over-updated on stories like Blake Lemoine claiming Google's AI was sentient, not realizing that a chatbot seeming "intelligent" is very different from an AI seeming "human" to people. I think we've passed this point by now in 2025 (so I won't give myself an F), but I was a few years too early.
Predictions for 2023
AI reaches human expert level at MATH benchmark.
Score: B+. Looking at https://huggingface.co/datasets/nlile/math_benchmark_test_saturation, it seems that by the end of 2023, the "yearly best" was 84.3%, which is close to an IMO gold medalist score of 90%. We didn't entirely reach "human expert" level, but I'd say my prediction wasn't too bad.
Famous, well-respected public intellectual announces that they believe AI has reached sentience, deserves rights.
Score: C-. By this point, a few famous (or newly famous) people (most notably Blake Lemoine in 2022) were claiming AI sentience, but as far as I can tell, none of them were particularly "well-respected" or considered serious "public intellectuals" by conventional standards. I'd say it's an edge case whether I passed this one or not.
AI can now write a book with a mostly consistent plot, given roughly a page of prompting or less.
Score: A+. I actually thought that I'd failed this one, but I looked it up, and surprisingly (to me), it seems AI was in fact capable of this by 2023! See, for instance, Death of an Author, a novella supposedly written 95%+ by ChatGPT, and described by New Scientist as "not awful." High praise indeed...
"Weak" AGI is announced that can play a randomly-selected game on Steam and get at least one achievement (in games which have Steam achievements enabled) most of the time. This assumes someone bothers to try this in particular, if not it should still be obvious it can be done.
Score: F. This still doesn't seem to be fully possible in 2025 (although we might be getting pretty close). It certainly wasn't happening (or obvious it could happen) by the end of 2023.
AI proves an "interesting" result in mathematics (as judged by professional mathematicians) with minimal prompting.
Score: D+. While I don't believe there were any particularly interesting and original AI proofs produced with minimal prompting in 2023, there were some fascinating results produced with the help of AI. An interesting example of this would be FunSearch. I'd say I didn't do too badly on this prediction, although I still technically failed.
Major lawsuit involving AI trained on "stolen artwork" gets in the news
I move out of my parents' house, possibly to LA for networking/work reasons, possibly remaining in Virginia, for community-building/health reasons. In a possibly related move, I finally come out to my parents, which probably goes okay, albeit with a small chance of being disowned by my grandparents.
Score: C. It happened, but I came out to my parents in early 2024, not 2023. The first half of the prediction can't be scored, as I mentioned both possibilities.
S.B.F. somehow remains a free, not-in-jail citizen, and continues to post questionable statements on Twitter.
Score: F. S.B.F. was in jail by the end of 2023, and although he was under house arrest for the first seven months of the year, that hardly counts as being a "free" citizen, so I'm failing myself on this one.
Anti-EA sentiment mostly dies down, but anti "AI safety" sentiment goes way up. The term has become associated with (perceived) censorship, and right-wing politicians may begin to shun people who use "AI safety" in their public branding. AI governance orgs try to adjust by going for a "national security" public angle. [Note that that last bit is incredibly speculative, and depends on too many factors to predict with any real confidence.]
Score: B. It didn't take too long after the fall of S.B.F. for anti-EA sentiment to fade from the public spotlight (although it still exists to some extent, especially after the whole Zizian cult disaster), but anti-AI-safety sentiment certainly seems much higher than it was in late 2022. I'm not quite sure how accurate my latter prediction was, but I don't think I was entirely wrong, so that counts for something, I'd say.
Multiple people land well-paying coding jobs and publicly post about how they "don't actually know how to code" (beyond some really basic level), but have been outsourcing everything to AI.
Score: C-. As far as I can tell, while people were just beginning to "vibe-code" in earnest, there wasn't much public discussion by the end of 2023 of people with no coding knowledge taking coding jobs. By now it's not that unheard of, but it took a few more years than I thought it would.
Predictions for 2024
Assuming Donald Trump is not barred from running, he will become president. If not him, it's an easy DeSantis win. (Biden is the Democratic nominee of course, assuming he's still alive. As usual, the media pays no attention to third party candidates.)
Score: A. I didn't do too badly here. Although Biden stepped down in the end, while still only the presumptive Democratic nominee, my "assuming he's still alive" hedge only marginally covered that, so I'll take partial credit for that part anyway.
AI writes a NYT best-selling book.
Score: D+. As far as I can tell, this did not happen in 2024. However, it seems actively implausible that AI assistance wasn't used to help write a NYT bestseller this year (though to be fair, I don't have direct proof of that), so I'd consider this a close miss.
Twitter is still functional, and most users haven't left the site. The workplace environment is kind of miserable though, and content moderation is still severely lacking (according to both sides of the culture war). Elon Musk is largely washed-up, and won't be doing anything too groundbreaking with the remainder of his life (outside of politics perhaps, which I won't rule out).
Score: A? I don't think I did too badly on this one. Twitter (now "X") is still fully functional, and it still has a large userbase. There have been multiple waves of layoffs and plenty of reported internal drama there, which sounds pretty miserable to me. Musk's main focus was his DOGE efforts, so he did go into politics, but outside of that, most people seem to consider him well past his intellectual prime. Obviously this sort of thing is largely subjective, but I think most people would agree my predictions have held up.
A minor celebrity or big-name journalist finally discovers Erik Sheader Smith's video game The Endless Empty for the masterpiece it is, kickstarting its growth as a widely-hailed classic of the genre. My own game, Nepenthe, is largely forgotten by history, at least until someone discovers a certain easter egg, which is occasionally mentioned in 40+ minute long Youtube videos (you know the type).
Score: C+. My friend's masterpiece has not yet been discovered by big-name celebrities or journalists, but it has experienced an explosion in players and fan-artists from China, who do genuinely seem to regard it as a cult classic. The growth is entirely grassroots for now, however. Meanwhile, my own game, while not entirely forgotten, isn't exactly growing a large fanbase or anything. It doesn't help that I've stepped away from making video games over the past few years (though I'm considering getting back into it).
The social media battle going on between those who firmly believe that AI is "just copy-pasting others work" and those who firmly believe that AI is sentient (and want to free it), has reached enough intensity that it gets brought up a few times in the political news cycle. At least one (possibly fringe) candidate pledges to "protect the rights of artists" through AI legislation.
Score: B-. I think I got things directionally right here, except instead of the opposing view being "AI is sentient/deserves rights," it's mostly "AI is helpful; forget about sentience." Politicians did seriously talk about protecting artists' rights with AI legislation in 2024, as evidenced by things like the Generative AI Copyright Disclosure Act.
Some new video game nobody has heard about before goes viral among schoolchildren, sparking a wave of incredibly forced puns across news headlines worldwide.
Score: F. I'm grading myself harshly on this one. Despite there being a few viral indie game hits (like Balatro) in 2024, none of them really went massively viral among schoolchildren in the way something like Five Nights At Freddy's or Undertale did. I did not notice any wave of forced puns relating to said games, either.
China's economy has pretty much recovered from Covid. Other than that, hard to predict, but growth won't look terribly different from the rest of the world.
Score: A. I'm no economist, but it looks like China was doing pretty well for itself by the end of 2024. If anything, a bit better than other countries.
Companies start actually replacing a significant number of customer support jobs with AI. Consumers generally report being more satisfied as a result, to many people's annoyance.
Both teachers and students have the ability to easily automate online assignment work, leading to a growing number of absurdist scenarios where algorithms play meaningless educational games while teachers and students do their own thing, unwatching. This is objectively hilarious, but people get mad about it, leading to a poorly-managed escalation of the school surveillance arms race we already see today.
Score: A. Another win for my predictive abilities...not so much for the rest of the world. This pretty much came to pass, but I'm not giving myself an A+ because it's not clear to me just how much school surveillance has actually increased as a direct result of AI cheating concerns (though AI-powered school surveillance has certainly increased since 2022).
Another billionaire has emerged as an EA mega-donor.
Score: D. We still have Dustin Moskovitz (and his wife Cari Tuna) as billionaire mega-donors, but they aren't exactly new on the scene. Sadly, I was wrong about this one.
Predictions for 2025
Self-driving cars (and drone delivery) never quite reach market saturation due to some consumer/cultural pushback, but mostly due to legislation over "safety concerns," even if self-driving is significantly safer than human-driven vehicles by this point. However, more and more self-driving-adjacent features are added into "normal" cars for "safety reasons," so it's become increasingly hard to delineate any sort of clear line between AI and human-operated vehicles.
Score: A. This seems to be pretty much on the nose! The only potential issue is that it's debatable whether self-driving is truly "significantly safer" than human driving, mostly due to issues like mass outages during crisis situations. I think it's safer, but I can see how a reasonable person might disagree, so I'm not giving myself an A+.
I am in love.
Score: A. It's a long and dramatic story, but this isn't the time or place to share it...
A mass fatality event occurs due to what could plausibly be interpreted as "misaligned AI." This sparks some countries to pass a whole bunch of AI-related laws which are totally ignored by other countries. The AI safety community is split on if the blame for what happened should be placed on misaligned AI, human error, or some complex mix of both. For whatever reason, a popular language model (developed for entertainment perhaps) publicly takes responsibility, despite seemingly having nothing to do with the incident. For the most part though, this is treated as just another tragedy in the news cycle, and is ignored by most people.
Score: D. There was no single "mass fatality event" caused by AI this year. That being said, there have been a significant number of murders and suicides plausibly linked to AI psychosis, which, considered together, likely resulted in a large number of unnecessary deaths. It's debatable whether this should count, but I'm leaning against it, as it's not the sort of thing I was envisioning at the time. There have indeed been a number of irregularly enforced AI safety laws passed, but not as many as I would have expected. I was correct that people are split over how much AI is to blame for the deaths that have occurred, but incorrect that an AI would erroneously take the blame on itself for said deaths. And indeed, most people simply ignore the whole thing, and it's not been a primary driver of the news cycle this year.
Someone who has at some point called themself "rationalist" or "EA" commits a serious crime with the intention of halting capabilities gain at some company or another. This is totally ineffective, everyone agrees that that was like, the least rational or altruistic action they could have possibly taken, but the media runs with exactly the sort of story you'd expect it to run with. This makes AI governance work a bit harder, and further dampens communications between safety and capabilities researchers. Overall though, things pretty much move on.
Despite having more funding than ever before, the quality and quantity of AI safety research seems...slightly lesser. It's unclear what the exact cause is, though some point out that they've been having a harder time staying focused lately, what with [insert groundbreaking new technology here].
Score: C. AI safety funding is indeed going strong. It is unclear to me if research is better or worse than it was in late 2022, but AI safety research in general seems to have taken a backseat within the largest AI companies, which is worrying. Some research does suggest that using tools like Cursor actually slowed developers down, despite a perception that it was speeding up work, which arguably counts as a partial win for my prediction.
Youtube dies a horrible death in a totally unpredictable manner. The whole disaster is retroactively considered clearly inevitable by experts. There is much mourning and gnashing of teeth, but the memes, too, are bountiful.
Score: F. This did not happen.
The sun rises and the sun falls.
Score: A+. This actually happened multiple times!
Me and my friends are still alive.
Score: B. I am still alive, and so are most of my friends, but there are a few who seem to have disappeared from the internet, and I am worried about them. I hope they are okay, but I have no guarantee, so I don't feel comfortable giving this an A+.
Conclusion
Um...I'm not sure what conclusion to take away from all of this. Predicting the future is hard, and I certainly failed a lot, but also, I was pleasantly surprised to see how much I got right, or at least got directionally correct. It seems like I generally over-updated on the rate of advancement in 2022, and assumed things would move faster than they did. That being said, I really don't think I did too badly compared to those around me at the time, and I'm proud of what I did get right.
Happy New Year!