amelia

"But this one goes up to eleven." (quote from the critically-acclaimed film Spinal Tap)

More info: www.ameliajones.org/about-me


Comments

I find this analysis to be extremely useful. Obviously anything can be refined and expanded, but this is such a good foundation. Thank you. 

Thank you for your thoughtful and useful comment. 

Regarding "AI optimists," I had not yet seen the paper currently on arxiv, but "AI risk skeptics" is indeed far more precise than "AI optimists." 100 percent agreed.

Regarding alternatives to "AI pessimists" or "doomers," Nevin Freeman's term "AI prepper" is definitely an improvement. I guess I have a slight preference for "strategist," as I used above, over "prepper," but I'm probably biased out of habit. "Risk mitigation advocate" or "risk mitigator" would also work, but both are more unwieldy than a single word.

The "Taxonomy on AI-Risk Counterarguments" post is incredible in its analysis, precision and usefulness. I think that simply having some terminology is extremely useful, not just for dialog, but for thought as well. 

As we know, repressive regimes like the Soviet Union and North Korea have historically eliminated terms from the lexicon, to real effect. (It's hard for people to think about concepts for which they have no words.)

I think that discussing language, sharpening the precision of our language, and developing new terminology have the opposite effect, in that people can build new ideas when they work with more precise and more efficient building materials. Words definitely matter.

Thanks again. 

Thanks for the feedback, but I don't think it's about "cognitive rewiring." It's more about precision of language and comprehension. You said "AI optimists think AI will go well and be helpful," but doesn't everyone believe that is a possibility? The bigger question is what probability you assign to the "go well and be helpful" outcome. Is there anything we can do to increase that probability? What about specific policies? You say you're an "AI optimist," but I still don't know what that entails with respect to specific policies. Does it mean you support open-source AI? Do you oppose all AI regulation? What about a pause in AI development for safety? The terms "AI optimist" and "AI pessimist" don't tell me much on their own.

One inspiration for my post is the now-infamous exchange between Yann LeCun and Yoshua Bengio.

As I'm sure you saw, Yann LeCun posted this on his Facebook page (and reposted it on X):

"The heretofore silent majority of AI scientists and engineers who

- do not believe in AI extinction scenarios or

- believe we have agency in making AI powerful, reliable, and safe and

- think the best way to do so is through open source AI platforms,

NEED TO SPEAK UP !"

https://www.facebook.com/yann.lecun/posts/pfbid02We6SXvcqYkk34BETyTQwS1CFLYT7JmJ1gHg4YiFBYaW9Fppa3yMAgzfaov7zvgzWl

Yoshua Bengio replied as follows:

Let me consider your three points. 

(1) It is not about 'believing' in specific scenarios. It is about prudence. Neither you nor anyone has given me any rational and credible argument to suggest that we would be safe with future unaligned powerful AIs and right now we do not know how to design such AIs. Furthermore, there are people like Rich Sutton who seem to want us humans to welcome our future overlords and may *give* the gift of self-preservation to future AI systems, so even if we did find a way to make safe AIs, we would still have a socio-political problem to avoid grave misuse, excessive power concentration and the emergence of entities smarter than us and with their own interests. 

(2) Indeed we do have agency, but right now we invest 50 to 100x more on AI capabilities than in AI safety and governance. If we want to have a chance to solve this problem, we need major investments both from industry and governments/academia. Denying the risks is not going to help achieve that. Please realize what you are doing. 

(3) Open-source is great in general and I am and have been for all my adult life a big supporter, but you have to consider other values when taking a decision. Future AI systems will definitely be more powerful and thus more dangerous in the wrong hands. Open-sourcing them would be like giving dangerous weapons to everyone. Your argument of allowing everyone to manipulate powerful AIs is like the libertarian argument that everyone should be allowed to own a machine-gun or whatever weapon they want. From memory, you disagreed with such policies. And things get worse as the power of the tool (hence of the weapons derived from it) increases. Do governments allow anyone to build nuclear bombs, manipulate dangerous pathogens, or drive passenger jets? No. These are heavily regulated by governments.

--

[I added spacing to Bengio's post for readability.]

Media articles about this, along with commenters, have described LeCun as an "AI optimist" and Bengio as an "AI pessimist."

Just as in our own exchange, I think these terms, and even the "good vs. bad" dichotomy, radically oversimplify the situation. Meanwhile, if the general public were asked what they think the "AI optimist" (supposedly LeCun) or the "AI pessimist" (supposedly Bengio) believes here, I'm not sure anyone would come back with an accurate response. Thus, the terms are ineffective.

Obviously you can think of yourself with any term you like, but with respect to others, the term "AI strategist" for Bengio (not to mention Eliezer) seems more likely to call to mind something closer to what they actually believe.

And isn't conveyance of accurate meaning the primary goal of communication?

"...this kind of transformer doesn't seem to provide any evidence on whether we can create full-fidelity simulations in the future." 

My point wasn't that WE would create full-fidelity simulations in the future. There's a decent likelihood that WE will all be made extinct by AI. My point was that future AI might create full-fidelity simulations, long after we are gone. 

"I currently think of the simulation hypothesis as similar to MWI in quantum mechanics - it's a model that cannot be proven or disproven..." 

Ironically, I believe many observable phenomena in quantum mechanics provide strong support (or what you might call "proof") for the simulation hypothesis, or at least for the existence of a deeper "information level" beneath the quantum level of our universe. Here's a short, informal article I wrote about how one such phenomenon (wave function collapse) supports the idea of an information level, if not the entire simulation hypothesis.

[EDIT: The title of the article reflects how MWI needs a supplemental interpretation involving a deeper "information" level. From this, you can infer my point.]

https://medium.com/@ameliajones3.14/a-deeper-world-supplement-to-the-many-worlds-interpretation-of-wave-function-collapse-54eccf4cad30

Also, the fact that something can't currently be proven or disproven does not mean it isn't true (or that it won't be "proven" in the future). Such was the case, at first, for many theories, including general relativity and evolution through natural selection.

Thanks for sharing this! It's so interesting how multiple people start having similar thoughts when the environment is right. It seems the simulation hypothesis and AI Risk are inextricably linked, even if for no other purpose than conducting thought experiments that help us understand both better. 

To the people who upvoted this post, 

Thank you very much for the support. As you may have seen below, the restriction on my account has been lifted!

As I also mentioned below, I might not take advantage of the restored liberty in the short term. I've already begun consolidating all my writing on my personal website (AmeliaJones.org), with links to the Medium blog posts. (The writing that was on LW would mostly be under writing...AI or writing...physics. There are also short stories and other right-brain-type material, but I don't think LW folks would be too interested in that, i.e., just ignore it.)

However, I might return to LW in the longer term. For now, please don’t be offended if I don’t respond to comments on this post. I don’t think I will be checking in too often. This will allow me to focus more on my new platform. 

Warmest wishes to all of you, and thanks again for the support when I really needed it, 

Amelia

Thanks for being curious! I've begun using my personal website (AmeliaJones.org) as a place for all my work. From there, I will have links to Medium blog posts. (Posts that were previously on LW would mostly be under the writing...philosophy or writing...physics categories on the website.) I appreciate your interest!

Raemon, thank you very much for lifting the restriction on my account! I'm sure it's extremely challenging to maintain high LW standards while at the same time trying to promote open dialog with differing perspectives. I don't envy your job, but I really appreciate the work you do.


In the short term, I might not take full advantage of my restored liberty. I’ve started using my personal website (www.AmeliaJones.org) for both AI art projects and all my writing (not just LW writing). The writing will have links to Medium blog posts, so people can comment as much as they choose. It’s actually turning out to be easier for me to do things this way. However, depending on whether I get any visibility or feedback via this method, I might return to LW for niche writing projects in the longer term.  

Thanks again for lifting the restriction, and for the important work you do. 

Best wishes, 

Amelia
