Inspired by the 2010 prediction thread, I would like to propose this as a thread for people to write in their predictions for the next decade, when practical with probabilities attached.

77 comments

Here are some of mine. These are very rough and I could probably be persuaded on many of them to move them significantly in some direction.

By 2030 (and after January 1st, 2020),

No high-level AGI, defined as a single system that can perform nearly every economically valuable task more cheaply than a human, will have been created. 94%

No robot hand will be able to manipulate a Rubik's cube as well as a top human. 80%

No state will secede from the US. 95%

No language model will, without substantial aid, write a book that ends up on the New York Times bestseller list. 97%

No pandemic will kill >50 million people. 93%

Neither Puerto Rico nor DC will be recognized as a state. 80%

Traditional religion will continue to decline in the West, as measured by surveys that track engagement. 85%

Bryan Caplan will lose a bet. 75%

No US President will utter the words "Existential risk" in public during their term as president. 65%

No human will have set foot on Mars. 50%

At least one company sells nearly fully autonomous cars, defined as cars that can autonomously perform nearly all tasks that normal drivers accomplish. 80%

Robin Hanson will disagree with the statement, "The rate of ... (read more)

Oskar Mathiasen (4y):
You predict that an AI "that can perform nearly every economically valuable task more cheaply than a human" is more likely to be created than a language model that "will write a book without substantial aid, that ends up on the New York Times bestseller list." This seems weird, as the first seems very likely to cause the second.
Matthew Barnett (4y):
A language model making it onto the NYT's bestseller list seems like a very specific thing. High level machine intelligence is not.
The Rubik's Cube one strikes me as much more feasible than the other AI predictions. Look at the dexterity improvements of Boston Dynamics over the last decade and apply that to the current robotic hands, and I think there's a better than 70% chance you get a Rubik's Cube-spinning, chopsticks-using robotic hand by 2030.
Matthew Barnett (4y):
To help calibrate, watch this video.
This video didn't shift my priors much. The impressive thing in the video is speed and precision, which are trivial for machines, let alone AI. Speed and precision are already there; they just need to be hooked onto some qualitative breakthrough in application.
Matthew Barnett (4y):
I'm willing to bet on this prediction.
How are you defining “hand”? Obviously this beats humans for speed but I guess you’re thinking of something which is general purpose and Rubik’s cube is just a test of dexterity?
Matthew Barnett (4y):
By hand I mean anything that closely resembles a human hand.
Matthew Barnett (4y):
My main data point is that I'm not very impressed by OpenAI's robot hand. It is very impressive relative to what we had 10 years ago, but top humans are extremely adept at manipulating things in their hands.
emanuele ascani (4y):
Regarding "If a survey is performed, most people in the United States will say that curing aging is undesirable. 85%": one similar survey has already been done. The result depends on whether you specify that an unlimited lifespan would be spent in health rather than in increasing frailty. If you do, >40% of respondents opt for unlimited lifespan; otherwise, 1%.
Well, even people working on AGI don’t think that is a possibility. I think the word you are looking for is “superintelligence” not AGI.
Matthew Barnett (4y):
I'm using a slightly modified version of the definition given by Grace et al. for high-level machine intelligence.
So, superintelligence. I would suggest editing your prediction to say so. They’re not synonymous terms. In fact it is the full expectation that AGI in many architectures would be less efficient without extensive training. AGI is a statement of capability—it can, in principle, solve any problem, not that it does so better than humans.
Matthew Barnett (4y):
If AGI just means, "can, in principle, solve any problem" then I think we could already build very very slow AGI right now (at least for all well-defined solutions -- you just perform a search over candidate solutions). Plus, I don't think my definition matches the definition given by Bostrom. ETA: I edited the original post to be more specific.
Your prediction reads the same as this definition AFAICT, if you substitute “nearly every” for “practically every”, etc. I think this is an instance of The Illusory Transparency of Words. What you wrote in the prediction probably doesn’t have the interpretation you meant. We don’t have AGI now because there is a lot hiding behind “at least for all well-defined solutions.” Therein lies the magic.
There are some unspecified parameters here. Do you mean autonomous cars that are ...
* region-locked, or able to drive anywhere?
* able to drive in all weather conditions, or limited to only some?
Matthew Barnett (4y):
I think, able to drive on any road that Google Maps has access to, and able to drive in all "normal" weather conditions (some snow, medium amounts of rain). I'm not confident on this, though, and I imagine that it might be a while (>10 years) before autonomous vehicles are truly autonomous (that is, they can drive in any condition that a human would be able to in any context).
"Any road that Google Maps has access to" is a high bar when you consider that that includes the roads in many countries with wildly different driver and pedestrian dynamics than the United States.
“Essays from the Noosphere: Twelve Artificial Intelligences Reflect on Life, the Universe, and Everything” This seems to ignore the quite plausible scenario where an AI-written book finds itself a Schelling point for folks who use their bookshelf as a signaling mechanism. Being 100% AI and 0% human would be a boon in that scenario even if the book is a little rough around the edges.
Matthew Barnett (4y):
That's a good point, but it doesn't reduce my credence much. Perhaps 94% or 95% is more appropriate? I'd be willing to bet on this.
I think you might have an inflated sense of how hard it is to get on the NYT bestseller list. Just go a little bit viral for one week and you’re done.
This seems underconfident? I have different intuitions for both, but I'd expect that looking into either for a couple of hours would change my mind. For the second one, the Google ngram page for "existential risk" is interesting, but it sadly only reaches up to the year 2008.

I'd also encourage you to link your predictions to Foretold/Metaculus/other prediction aggregator questions, though only if you write your prediction in the thread as well to prevent link rot.

As a Schelling point, you can use this Foretold community which I made specifically for this thread.

Sorted approximately by strength:

The UK will leave the European Union. (95%)

Industrial / financial consolidation will continue instead of reversing, and the 'superstar cities' phenomenon will be stronger in 2030 than 2020. (90%)

The 'higher education bubble' will burst. (80%) This feels mostly like a "you'll know it when you see it" thing, but clear evidence would be a substantial decrease in the fraction of Americans going to college, or a significant decline in the wage premium for "any college diploma" over a high school diploma (while perhaps some diplo

... (read more)
Re: higher education bubble, do you also predict that tuition increases will not outpace inflation?
Also, I think the law school bubble burst in the wake of the 2008 financial crisis and the contraction in law firms, which you can see in student enrollment statistics but not inflation-adjusted tuition. 
My model doesn't give a detailed answer; I think I expect the number and type of people participating in higher education to change, and then it's unclear what that will do to average tuition. For example, in worlds where all undergraduate education becomes free-to-the-end-user but med school and law school still exist, the tuition statistics become apples to oranges.
Okay then, how about higher education as a fraction of GDP?
When I tried to calculate the equivalent thing for real estate and GDP for the 2008 financial crisis, as far as I can tell the fraction of GDP provided by real estate went up instead of down. The bubble bursting is clearly visible in the home price index, tho. So if someone creates a 'degree value index', that's where I'd expect to see it; the closest approximations that I'm aware of are the wage premium and underemployment rate (this can point to a few things; I mean the "person with a degree working a job you don't need a degree for" one instead of the "unemployed plus part-time seeking full-time work" one).  [Also I'm going to ping Bryan Caplan and see if he has a good operationalization.]
Bryan bets on the percentage of 18-24 year olds enrolled in 4-year degree-granting institutions (here's 2000-2017 numbers). I'm sort of astounded that anyone would take the other side of the bet as he proposed it (a decline from 30% to 20% over the course of 10 years); in my mind a decline from 30% to 25% would be 'substantial'. For the more specific version that I have in mind (a 'coming apart' of "bachelor's degrees" and "valuable bachelor's degrees"), I think it has to show up in a change of enrollment statistics split out by major, which might be too hard to operationalize ahead of time.

I predict that like 2010, a majority of these predictions will be overconfident.

I'd agree, but to be precise, I think this is not exactly the right measure. What matters is not so much that the majority are overconfident, but rather that their score according to a reasonable scoring function is worse than what they would expect, on average (or weighted according to some factor). Otherwise, for instance, it's possible that 51% would technically be slightly overconfident while the rest were decently underconfident, averaging out to proper calibration. I plan on writing about this more in future posts.
Matt Goldenberg (4y):
I agree, what matters is calibration and resolution (if you're talking about an individual's predictions, that is). I'm unconvinced that group calibration would be a useful epistemic yardstick in this instance.
Matt Goldenberg (4y):
Note also that it's impossible to determine "a majority of predictions to be overconfident" as a literal statement. A prediction is only right or wrong; overconfidence can only be assessed in the aggregate (which is what I meant in the original post).
  • The market for Certificates of Impact will be smaller than $100K/year in 2030. ~90%.
  • There will be at least 1000 points on Hacker News with "Knowledge Graph" or synonyms in the title. ~60%
  • No [AGI+Superintelligence] ~98%
  • The Economist will be more optimistic in 2030 than it is in Jan 1, 2020.[1] ~80%
  • Judgemental prediction applications will be considered "moderately useful" for EA purposes ~40%
  • There will be at least one US-based prediction market larger than PredictIt is now, in daily traffic. ~50%

I realize these are all super high-level and vague. [1]

... (read more)
Do you mean 2030?
Yes, thanks for noticing!
Would you count Paul's "altruistic equity allocation" as part of an impact certificate market?
Sure. I didn't know about this post when I wrote this, but it seems similar enough.

Usual disclaimers apply: probabilities are not exact betting odds, I try to give quantitive assessments wherever I can but many predictions are too vague to quantify etc. If I am still alive in 2030 I will try to give my subjective assessment to what degree I agree with the predictions.

1. China will become the #1 economy in the coming decade, but will experience continued economic slowdown.

2. Taiwan put under siege by China (i) economically 80% (ii) militarily 50%
conditional on (ii) the US will back down 90%
conditional on the US not backing down... (read more)

Re: cure for dandruff, do you consider this adequate?
Alexander Gietelink Oldenziel (3y):
Good point! I also recently came upon that; I thought I remembered that some of the mechanisms were unknown.

Is anyone accepting bets on their predictions?

Matthew Barnett (4y):
I will probably accept bets, although the fact that someone would be willing to bet me on some of mine is evidence that I'm overconfident, so I might re-evaluate my probability if someone offers.
Liam Donovan (4y):
FWIW you can bet on some of these on PredictIt -- for example, Predictit assigns only a 47% chance Trump will win in 2020. That's not a huge difference, but still worth betting 5% of your bankroll (after fees) on if you bet half-Kelly. (if you want to bet with me for whatever reason, I'd also be willing to bet up to $700 that Trump doesn't win at PredictIt odds if I don't have to tie up capital)
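For anyone wanting to check the half-Kelly arithmetic, here is a sketch. The specific numbers (buying NO at $0.53 with a believed 40% chance of Trump winning) are hypothetical illustrations, not anyone's actual belief, and PredictIt fees are ignored:

```python
def kelly_fraction(p: float, price: float) -> float:
    """Full-Kelly bankroll fraction for buying a binary contract.

    p: your probability that the contract pays $1.
    price: the market price of the contract (0 < price < 1).
    """
    b = (1 - price) / price              # net odds received on a win
    return (p * (b + 1) - 1) / b

# Hypothetical numbers: buying NO at $0.53 while believing
# Trump's true chance is 40%, i.e. p(NO) = 0.60.
full = kelly_fraction(0.60, 0.53)
half = full / 2                          # half-Kelly, as in the comment
```

With these made-up inputs the full-Kelly stake comes out around 15% of bankroll, so half-Kelly is in the same ballpark as the "5% of your bankroll (after fees)" figure once fees and a less extreme belief are factored in.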

Using a reasonable calibration method*, the set of predictions made in this thread will receive a better score than the set of those in the previous thread from 10 years ago (80%)

Nonetheless, lowering each confidence stated by a relative 10% (i.e. 70% to 63% etc.) will yield better total calibration (60%)

I don't know the math for this, but I'm assuming there is one that inputs a set of predictions and their truth values and outputs some number, such that the number measures calibration and doesn't predictably increase or decrease with mo... (read more)
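For reference, one standard function with roughly that property is the Brier score: the mean squared error of the stated probabilities against the 0/1 outcomes. Because it is an average, it doesn't systematically grow or shrink with the number of predictions. A minimal sketch with made-up toy forecasts (not from this thread):

```python
def brier_score(probs: list[float], outcomes: list[int]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better: a perfect, maximally confident forecaster scores 0,
    and always answering 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Two toy forecast sets scored on the same outcomes:
outcomes = [1, 0, 1, 1]
confident = brier_score([0.9, 0.1, 0.8, 0.9], outcomes)
overconfident = brier_score([0.99, 0.01, 0.99, 0.5], outcomes)
```

One caveat: the Brier score mixes calibration and resolution into a single number, so it answers "how good were these forecasts" rather than "were they overconfident" directly.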

I don't think you want to lower all predictions uniformly; some predictions here are stated with figures below 50%, for instance. One better approach might be to reduce the log odds by some factor. If we pick 10% then we get substantially smaller changes than your proposal gives; maybe reduce the log odds by 25%? So if someone thinks X is 70% likely, that's 7:3 odds; we'd reduce that to (7:3)^0.75, which is the equivalent of a probability of about 65.4%. If they think X is 90% likely it would become 83.9%; if they think X is 50% likely, that wouldn't change at all.

(Arguably simpler but seems less natural to me: reduce proportionally not the probability but the difference of the probability from 50%. Say we reduce that by 25%; then 70% becomes 50% + 0.75*20% or 65%, quite similar to the fancy log-odds proposal above. Things diverge more for more extreme probabilities: 90% turns into 50% + 0.75*40% or 80%, and 100% turns into 87.5%, where the log-odds reduction leaves it unchanged.)
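Concretely, the 25% log-odds reduction can be sketched like this (the example values are the ones worked through above):

```python
def shrink(p: float, factor: float = 0.75) -> float:
    """Shrink a probability toward 50% by scaling its log-odds.

    factor=0.75 implements the 25% log-odds reduction discussed above.
    """
    odds = p / (1 - p)
    new_odds = odds ** factor
    return new_odds / (1 + new_odds)

print(round(shrink(0.7), 3))   # ~0.654
print(round(shrink(0.9), 3))   # ~0.839
print(shrink(0.5))             # 0.5 exactly: even odds are a fixed point
```

Raising the odds to a power is the same as multiplying the log-odds, which is why 50% (log-odds of zero) is left untouched while extreme probabilities move the most.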
Rafael Harth (4y):
That might or might not be a better proxy for the kind of overconfidence I've been meaning to predict. The reason why it might not: my formulation relied on the idea that most people will formulate their predictions such that the positive statement corresponds to the smaller subset of possible futures. In that case, even if it's a <50% prediction, I would still suspect it's overconfident. For example: now, I've no idea about the subject matter here, but across all such predictions, I predict that they'll come true less often than the probability indicates. So if we use either of the methods you suggested here, the 35% figure moves upward rather than downward; however, I think it should go down.
Fair enough! I suspect some low-probability predictions will be of that sort and some of the other, in which case there's no simple way to adjust for overconfidence.

As with last decade, I'm most confident about boring things, though less optimistic than I'd like to be.

Fewer than 1 billion people (combatants + civilians) will die in wars in the 2020s: 95%

The United States of America will still exist under its current Constitution (with or without new Amendments) and with all of its current states (with or without new states) as of 1/1/30: 93%

Fewer than 10 million people (combatants + civilians) will die in wars in the 2020s: 85%

The median rent per unit in the United States will increase faster than inflation ... (read more)

A rough distribution (on a log scale) based on the two points you estimated for wars (95% < 1B people die in wars, 85% < 10M people die in wars) gives a median of ~2,600 people dying. Does that seem right?
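(For transparency, here is roughly how such a median can be backed out from the two quantile constraints, under the assumption that deaths are normally distributed on a log10 scale; this is one possible fit, and it lands in the low thousands rather than at exactly 2,600, since the answer depends on the distribution assumed:)

```python
from statistics import NormalDist

# Constraints from the predictions:
#   P(deaths < 1e9) = 0.95  and  P(deaths < 1e7) = 0.85.
# Assume log10(deaths) ~ Normal(mu, sigma) and solve two linear equations.
z95 = NormalDist().inv_cdf(0.95)   # ~1.645
z85 = NormalDist().inv_cdf(0.85)   # ~1.036
sigma = (9 - 7) / (z95 - z85)
mu = 9 - z95 * sigma
median = 10 ** mu                  # the distribution's median is 10**mu
```

The two constraints pin down mu and sigma exactly, and the implied median is 10**mu, a few thousand deaths.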
No. My model is the sum of a bunch of random variables for possible conflicts (these variables are not independent of each other), where there are a few potential global wars that would cause millions or billions of deaths, and lots and lots of tiny wars each of which would add a few thousand deaths. This model predicts a background rate of the sum of the smaller ones, and large spikes to the rate whenever a larger conflict happens. Accordingly, over the last three decades (with the tragic exception of the Rwandan genocide) total war deaths per year (combatants + civilians) have been between 18k and 132k (wow, the Syrian Civil War has been way worse than the Iraq War, I didn't realize that). So my median is something like 1M people dying over the decade, because I view a major conflict as under 50% likely, and we could easily have a decade as peaceful (no, really) as the 2000s.
Yeah, this seems pretty reasonable. It's actually stark looking at the Our World in Data chart – that seems really high per year. Do you have your model somewhere? I'd be interested to see it.
It's not explicit. Like I said, the terms are highly dependent in reality, but for intuition you can think of a series of variables Xk for k from 1 to N, where Xk equals 1/k with probability 1/√N. And think of N as pretty large. So most of the time, the sum of these is dominated by a lot of terms with small contributions. But every now and then, a big one hits and there's a huge spike. (I haven't thought very much about what functions of k and N I'd actually use if I were making a principled model; 1/k and 1/√N are just there for illustrative purposes, such that the sum is expected to have many small terms most of the time and some very large terms occasionally.)
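For illustration, a direct simulation of that toy model (the choices of N, the number of draws, and the seed are arbitrary, and the terms are independent here even though they wouldn't be in reality):

```python
import random

def decade_deaths(N: int, rng: random.Random) -> float:
    """One draw from the toy model: X_k = 1/k with probability 1/sqrt(N).

    Returns the sum of the terms that fired. Illustrative only.
    """
    p = N ** -0.5
    return sum(1 / k for k in range(1, N + 1) if rng.random() < p)

rng = random.Random(0)
draws = [decade_deaths(10_000, rng) for _ in range(500)]
# Most draws are small sums of many tiny terms; occasionally an early
# (large) term fires and the total spikes well above the typical level.
```

With N = 10,000 each term fires with probability 1/100, so the expected total is about 0.01 times the harmonic number, i.e. roughly 0.1, while a draw where the k = 1 term fires jumps to at least 1 — the "background rate plus rare spikes" shape described above.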
This seems like the kind of bold prediction which failed last time around. Maybe you can make it more specific and say what fraction of online transactions will be processed using something which looks unlike the current credit card setup?
And then have it immediately satisfied by cash transactions! I think you’d have to either predict reductions in credit card usage specifically, or get into a little bit more detail about what sort of transaction setup we are talking about. For example, I could see the spirit of (my interpretation of) this prediction being met by something like the new NFC payment mechanisms which generate one-time use credit card numbers for each transaction. Why pointlessly break compatibility with the legacy system?
I guess Paypal, Amazon Pay, etc. could also qualify--they allow me to make purchases without giving a merchant access to my credit card number.
Dammit, dammit, dammit, I meant to condition these all on no human extinction and no superintelligence. Commenting rather than editing because I forget if the time of an edit is visible, and I want it to be clear I didn't update this based on information from the 2020s.
Times of edits are not currently visible, though we do store all the necessary information, so if anyone ever wants to know whether a comment has been edited, an admin can look it up.
That certainly seems a very reasonable prediction, and perhaps too conservative. In many ways one might say that current chip based card transactions (which would also include all the mobile payments like Apple/Samsung/Google pay) have already departed that non-transaction-specific model. Similarly, for online purchases that use token technologies these are often linked to the specific merchant. However, there might be two ways to interpret that predictions. 1) the payment mechanisms used for non-cash transactions will move towards transaction specific identifiers and cash will not be used or used significantly less than today or 2) we might see some form of transaction specific "money" (block-chain currencies seem to fit but I don't think they are the future) and more transactions are conducted as "cash" rather than using these payment card mechanisms.
Either (1) or (2) (and some other possibilities) would satisfy my prediction. My prediction is just that, however we do things in 2029, it won't be by handing each merchant the keys to our entire credit account.

The human population will be more than 8 billion, and the population of India will reach 1.5 billion. 90%

India's GDP will rise to 5 trillion dollars. 70%

India will cease to be a secular state, and communal violence will become more common. 50%

Russia's GDP will be less than 2 trillion dollars. 60%

No human being will be living on another celestial object (the Moon, another planet, or an asteroid). 80%

The USA will have fewer than 1,000 troops in Afghanistan. 80%

A new civil war or high-level insurgency will break out in Syria. 50%

Israel would not have vacate... (read more)

I noticed that your prediction and jmh's prediction are almost the exact opposite:
* Teerth: 80%: No human being would be living on another celestial object (Moon, another planet or asteroid) (by 2030)
* jmh: 90%: Humans living on the moon (by 2030)
(I plotted this here to show the difference, although this makes the assumption that you think the probability is ~uniformly distributed from 2030 – 2100). Curious why you think these differ so much? Especially jmh, since 90% by 2030 is more surprising - the Metaculus prediction for when the next human being will walk on the moon has a median of 2031.
Teerth Aloke (3y):
Difference in intuition. Otherwise, I think that there will be no state-sponsored space colonization program, and there will be no incentive for any private organization to establish a colony, given the cost of sending and sustaining people there.
Do you mean formally (as in changing the wording in constitution, etc.), or by some particular pragmatic measure? I can understand where it's coming from and vaguely even agree, but I'm curious if you have any measurable indicators for this in mind. Care to put any numbers on this, for eg. number of communal incidents in a year or whatever similar measure is available?
Teerth Aloke (3y):
I think that the Constitution of India might be modified to declare it a 'Hindu nation'
  • Meditation (or, with a small likelihood, some form of it in a different name) will become even more common and widely known. Not (yet) as widely practised as bathing every day, but as widely recommended as flossing is by dentists. (85%)
  • Capital investments in Europe will grow at a faster pace than in the US. In 2019 it seems to be in a 1:4 ratio ($34 billion vs $136 billion), which will have changed to at least 1:3 (70%). (Something similar probably holds in Asia too, but I'm too lazy to look up the numbers, divide up China vs the rest of Asia, etc.)
  • Vegetaria
... (read more)

A few days late, but I finally filled out my big spreadsheet of predictions. Anyone else is welcome to make a new sheet in it and add their own on the same questions!

1) The global multilateral political and economic institutions fail, and relationships return to more bilateral and region-based systems that largely replace them. Not that something like the UN disappears; it would merely serve as a location for discussion but not be seen (which it clearly is not even now) as any type of global government with authority over the member states. 70%

2) A second global financial crisis of larger scale than 2007-2009 period. 50%

3) North Korea recognized as a nuclear state. (20 - 30%). Resulting in the effective abandonment of th... (read more)

Re (7), there's a laughable amount of conjunction on even the first prediction in the chain.
I’m willing to place a large bet on 14 at 1000:1 If we are not destroyed by aliens then you owe me $1,000,000, if we are all destroyed by aliens then I owe you $1,000,000,000.
You don't seem to recognize an attempt at humor. I take it you never read The Hitchhiker's Guide to the Galaxy.
I recognised the humour and was responding in kind - specifically that if we are destroyed by aliens then I’m unlikely to be in a position to pay you what I owe...
5's confidence seems a bit high, as does 10. But several of your predictions seem way too confident, given how specific they are. 6,7,14 in particular. 40% for seems wrong due to its burdensome details. What would 15) mean, exactly?
15? More humility mostly but should probably have limited that to certain fields, such as cosmology, rather than painting with a really large brush. As for the assessed probabilities, I can only hope you are correct. As for the burdensome details, I'm not sure that applies (but thanks for the link and I will read it more fully and reconsider). I have reformatted the item -- whether or not that changes it being a burdensome details error....
