Don't overthink AI risk. People, including people here, get lost in mental loops and complexity.
An easy guide, where every point is a fact:
Given these easy-to-understand data points, there is only one conclusion: AI risk is real, and AI risk is NOW.
How can you know whether it's exaggerated? It's like an earthquake: the fact that it hasn't happened yet doesn't mean it won't be destructive when it does. The superintelligence slope doesn't pause somewhere for us to evaluate it, nor do we have any signal that the more time passes, the more improbable it becomes.
Let's discuss for now, and then check in about it in 31 months.
I really don't like this kind of statement because it's a null bet: either the world has gone to hell and nobody cares about this article, or the author gets "I was correct, told ya" rights. I think statements like this should not be made in the context of existential risk.
My criticism is that the article is written as an outsider categorically "correcting a flawed model". Of course you can suggest corrections if there is a blatant mistake, but the assumptions are the most important part of these models, and assumptions are best made by people who have worked and contributed at the top AI labs.
Although I don't like comments that open with "your logic slipped", because it gives off passive-aggressive "you are stupid" vibes, I will reply.
So what you are saying is that yes, this time is different, just not today. It will definitely happen, and all the doomerism is correct, but not on a short timeline because ____ (insert reasoning that differs from what the top AI minds are saying today).
This is actually, and very blatantly, a self-preservation mechanism called "normalcy bias", which is very well documented in the human species.
Another data point: there are literally no marketing ads showing a white man with a black woman as a couple. Even when racial diversity needs to be shown, even in LGBT-friendly or racially inclusive groups, brochures, etc., it's always a black man with a white woman and never vice versa. I guess it's a chicken-and-egg problem.
But you need to frame this not like any other argument, but as: "for the first time in the history of life on Earth, a species has created a new, superior species." I think all these rebuttals are missing this specific point. This time is different.
I think having a huge p(doom) versus a much smaller one would change this article substantially. With a 20-30% or even 50% p(doom) you can still be positive; in all other cases it sounds like a terminal illness. But since the number is subjective, living your life as if you know you are right is certainly wrong. So I take most of your article and apply it to my daily life (the closest fit is being a Stoic), but by no means do I believe it would take a miracle for our civilization to survive. Our odds are better than that, and that distinction is important.
Are they an intelligent species with a will of their own?