
Comments

Yes, I like it! Thanks for sharing that analysis, Gunnar.

Good list. I think I'd use a triangle to organize them: consciousness at the base, then sentience, then, drawing from your list, phenomenal consciousness, followed by intentionality.

Thank you for asking. 

Generalizing across disciplines, a critical aspect of human-level artificial intelligence, requires the ability to observe and compare. This is a feature of sentience. All sentient beings are conscious of their existence. Non-sentient conscious beings exist, of course, but none that could pass a Turing test or a coffee-making test; that requires both sentience and consciousness.

What happens if you shut off power to the AWS or Azure servers running the foundation model? Wouldn't that be the easiest way to test the various hypotheses associated with the Shutdown Problem, either verifying it or rejecting it as a problem not worth sinking further resources into?

That's a good example of my point. Instead of a petition, a more impactful document would be a survey of risks and, in the opinion of these notable public figures, the probability of each occurring.

In addition, there should be a disclosure noting who has accepted research funding from Open Philanthropy or any other EA-affiliated non-profit.

Which makes it an existential risk. 

"An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population." - FLI

What aspect of AI risk is deemed existential by these signatories? I doubt that they all agree on that point. Your publication "An Overview of Catastrophic AI Risks" lists quite a few but doesn't differentiate between theoretical and actual risks.

Perhaps if you created a spreadsheet listing each of the risks mentioned in your paper, identified each as actual or theoretical, and asked each of those 300 luminaries to rate its probability, you'd have something a lot more useful.

I looked at the paper you recommended, Zack. The section addressing how AGI is developed (Section 1.2) skirts around the problem.

"We assume that AGI is developed by pretraining a single large foundation model using selfsupervised learning on (possibly multi-modal) data [Bommasani et al., 2021], and then fine-tuning it using model-free reinforcement learning (RL) with a reward function learned from human feedback [Christiano et al., 2017] on a wide range of computer-based tasks.4 This setup combines elements of the techniques used to train cutting-edge systems such as GPT-4 [OpenAI, 2023a], Sparrow [Glaese et al., 2022], and ACT-1 [Adept, 2022]; we assume, however, that 2 the resulting policy goes far beyond their current capabilities, due to improvements in architectures, scale, and training tasks. We expect a similar analysis to apply if AGI training involves related techniques such as model-based RL and planning [Sutton and Barto, 2018] (with learned reward functions), goal-conditioned sequence modeling [Chen et al., 2021, Li et al., 2022, Schmidhuber, 2020], or RL on rewards learned via inverse RL [Ng and Russell, 2000]—however, these are beyond our current scope."

Altman recently said in a speech that continuing to do what led them to GPT-4 is probably not going to get to AGI: "Let's use the word superintelligence now. If superintelligence can't discover novel physics, I don't think it's a superintelligence. Training on the data of what you know, teaching to clone the behavior of humans and human text, I don't think that's going to get there. So there's this question that has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?"

https://the-decoder.com/sam-altman-on-agi-scaling-large-language-models-is-not-enough/

I think it's pretty clear that no one has a clear path to AGI, nor do we know what a superintelligence will do, yet the Longtermist ecosystem is thriving. I find that curious, to say the least.

My apologies for not being clear in my Quick Take, Chris. As Zach pointed out in his reply, I posed two issues. 

The first is what strikes me as an obvious parallel between EA and Judeo-Christian religions. You may or may not agree with me, which is fine. I'm not looking to convince anyone of my point of view; I was merely interested in whether others here share a similar POV.

The second issue I raised was what I saw as a failure in the reasoning chain that goes from Deep Learning to Consciousness to an AI Armageddon. Why is that leap of faith so compelling to people?

I don't see how either of those questions fails to serve the "public good", but perhaps you only said that because my first attempt wasn't clear. Hopefully, I've remedied that with this answer.

Thank you for the link to that paper, Zack. That's not one that I've read yet. 

And you're correct that I raised two separate issues. I'm interested in hearing any responses that members of this community would like to give to either issue. 
