The problem with this policy is the unilateralist's curse: it takes only a single optimistic actor to develop a technology. Technologies such as AI have substantial benefits and risks; the balance is uncertain, and the net benefit is perceived differently by different actors. For a technology not to be developed, all actors would have to agree not to develop it, which would require significant coordination.
Yes, agreed, what you refer to is indeed a huge obstacle.
From years of writing on this I've discovered another obstacle. Whenever this subject comes up, almost everyone who joins the conversation focuses almost exclusively on obstacles, and on theories about why such change isn't possible, and...
The conversation almost never gets to the point of folks rolling up their sleeves to look for solutions.
I don't have a big pile of solutions to put on the table either. All I really have is the insight that overcoming these challenges isn't optional.
In my judgement, there is little chance of such a fundamental change to our relationship with unlimited technological progress within the current cultural status quo. However, given the vast scale of forces being released into the world, there would seem to be an unprecedented possibility of revolutionary change to the status quo.
As an example, imagine even a limited nuclear exchange between Pakistan and India. More people could die in a few minutes than died in all of WWII. The media would feed on the carnage for a long time, relentlessly pumping unspeakable horror imagery into every home in the world with a TV.
Consider, for instance, how all the stories about floods, fires, heat waves, etc. are editing our relationship with climate change. It's no longer such an abstract issue to us; it's increasingly becoming real, hitting us where we really live, in the emotional realm.
Tired: can humans solve artificial intelligence alignment?
Wired: can artificial intelligence solve human alignment?
Apologies that I haven't read the article (not an academic), but I just wanted to cast my one little vote that I enjoy this point, and the clever way you put it.
Briefly, it's my sense that most of the self-inflicted problems which plague humanity (war, for example) arise out of the nature of thought, that which we are all made of psychologically. They're built in.
I can see how AI, like computing and the Internet, could have a significant impact upon the content of thought, but not the nature of thought.
Genetic engineering seems a more likely candidate for editing the nature of thought, but I'm not at all optimistic that this could happen any time soon, or maybe any time ever.
Thanks much for your engagement Mitchell, appreciated.
Your paradigm, if I understand it correctly, is that the self-sustaining knowledge explosion of modern times is constantly hatching new technological dangers, and that there needs to be some new kind of response.
Yes, to quibble just a bit: not just self-sustaining, but also accelerating. The way I often put it is that we need to adapt to the new environment created by the success of the knowledge explosion. I just put up an article on the forum which explains further:
From the whole of civilization? Just from the intelligentsia?
As I imagine it, the needed adaptation would start with intellectual elites, but eventually some critical mass of the broader society would have to agree, to some degree or another. I've been writing about this for years now, and can't actually provide any evidence that intellectual elites can lead on this, but who else?
It's unclear to me if you think you already have a solution.
I don't have a ten-point plan or anything; I'm just trying to encourage this conversation wherever I go. Success for me would be hundreds of intelligent, well-educated people exploring the topic in earnest together. That is happening to some degree already, but not with the laser focus on the knowledge explosion that I would prefer.
You're also saying that focus on AI safety is a mistake...
I see AI discussions as a distraction, as an addressing of symptoms, rather than addressing the source of X risks. If 75% of the time we were discussing the source of X risk, I wouldn't object to 25% addressing particular symptoms.
I'm attempting to apply common sense. If one has puddles all around the house every time it rains, the focus should be on fixing the hole in the roof. Otherwise one spends the rest of one's life mopping up the puddles.
There are in fact good arguments that AI is now pivotal to the whole process and also to its resolution.
I don't doubt AI can make a contribution in some areas, no argument there. But I don't see any technology as being pivotal. I see the human condition as being pivotal.
I'm attempting to think holistically, and to consider man and machine as a single operation, with the success of that operation depending on the weakest link, which I propose to be us. Knowledge development races ahead at an ever-accelerating rate, while human maturity inches along at an incremental rate, if that. Thus, the gap between the two is ever widening.
Please proceed to engage from whatever perspective you find useful. What I hope to be part of is a long deliberate process of challenge and counter challenge which helps us inch a little closer to some useful truth.
EA and AI safety have invested a lot of resources into building our ability to get coordination and cooperation between big AI labs.
Are you having any luck finding cooperation with Russian, Chinese, Iranian and North Korean labs?
However, since ASI could reduce most risks, delaying the creation of ASI could also increase other existential risks, especially from advanced future technologies such as synthetic biology and molecular nanotechnology.
Here's a solution to all this. I call this revolutionary new philosophy....
Acting Like Adults
Here's how it works. We don't create a new technology which poses an existential risk until we've credibly figured out how to make the last one safe.
So, in practice, it looks like this: end all funding for AI, synthetic biology, molecular nanotechnology, etc. until we figure out how to liberate ourselves from the existential risk technology of 1945.
The super sophisticated, high end, intellectual elite, philosophically elegant methodology involved here is called...
If our teenage son wants us to buy him a car, we might respond by saying, "show me that you won't crash this moped first". Prove that you're ready.
The fact that all of this has to be explained, and once explained it will be universally ignored, demonstrates that...
We ain't ready.
Knowledge development feeds back on itself. When you have a little knowledge, further development is slow; when you have a lot of knowledge, it is fast. The more knowledge we get, the faster we go.
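The feedback loop described above can be illustrated with a toy model (my sketch, not from the original comment): if each period's growth is proportional to the knowledge already accumulated, the absolute gains per period keep increasing, which is the acceleration being claimed.

```python
# Toy model of self-reinforcing knowledge growth: each step adds a fixed
# fraction of the current stock (a discrete version of dK/dt = r * K).
# The starting level and rate are illustrative assumptions, not data.

def knowledge_over_time(k0=1.0, rate=0.1, steps=50):
    """Return knowledge levels over time, where growth per step
    is proportional to the knowledge already held."""
    levels = [k0]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + rate))
    return levels

levels = knowledge_over_time()
early_gain = levels[1] - levels[0]    # gain in the first step
late_gain = levels[-1] - levels[-2]   # gain in the last step

# Because growth compounds, later steps add far more than earlier ones:
# the process accelerates even though the rate itself never changes.
assert late_gain > early_gain
```

The point of the sketch is only that a constant proportional rate still produces ever-larger absolute gains, which matches the "the more we get, the faster we go" framing.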
The first photo was incredible, amazing! Thanks for sharing that.
So what do we make of these men, who risk so much for so little?
Macho madness. YouTube and Facebook are full of it these days, and it truly pains me to watch young people with so much ahead of them risk everything in exchange for a few minutes of social media fame.
But, you know, it's not just young people; it's close to everybody. Here's an experiment to demonstrate: the next time you're on the Interstate, count how many people tailgate you at 75 mph like NASCAR drivers drafting. Risking everything, in exchange for nothing.
On behalf of the Boomer generation I wish to offer my sincere apologies for how we totally ripped off our own children. We feasted on the big jobs in higher education, and sent you the bill.
I paid my own way through the last two years of a four year degree, ending in 1978. I graduated with $4,000 in debt. That could have been you too, but we Boomer administrators wanted the corner office.
I've spent my entire adult life living near, sometimes only blocks away from, the largest university in Florida. It used to be an institution of higher learning, but we Boomers turned it into a country club. Very expensive. But no worries, because we passed the bill on to you.
By the way, reading this post costs $1300. But don't worry about it, because I can give you a loan, with interest of course.
As a self-appointed great prophet, sage, and heretic, I am working to reveal that a focus on AI alignment is misplaced at this time. As a self-appointed great prophet, sage, and heretic, I expect to be rewarded for my contribution with my execution, which is part of the job that a good heretic expects in advance, is not surprised by, and accepts with generally good cheer. Just another day at the office. :-)