Haiku

I am a volunteer organizer with PauseAI and PauseAI US, a top forecaster, and many other things that are currently much less important.

The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.

Comments

Implicit and Explicit Learning
Haiku · 2d

I'm open to the idea that autopoietic systems invariably butt up against reality in a way that shapes them over time. But I am missing some connections that would help me evaluate this idea.

I'm completely sold on the notion that a co-evolving swarm of sophisticated intelligences is a very bad thing for the longevity of the human race, but I already thought that beforehand. What I still don't see is why a single well-aligned superintelligence (if such a thing can be created) would be certain to eventually drift away from an intent to allow or facilitate the flourishing of humanity.

You stated an assumption that there is an "evolutionary feedback loop," but the existence of a feedback loop does not necessarily imply evolution.

What does it mean for a single entity to evolve? Doesn't implicit learning (i.e. selection effects) require population-level dynamics, and the death or marginalization of systems that cannot compete?

And why would a superintelligence evolve? I do not expect it to be under threat in any way. Can't it persist by merely updating its surface-level predictions about the environment, without its specific priorities ever being affected? Even if the AI were under threat, I would expect the changes that it would undergo in order to survive to be explicit / intentional, not implicit / environmentally forced.

Foom & Doom 1: “Brain in a box in a basement”
Haiku · 7d

> The problem is: public advocacy is way too centered on LLMs, from my perspective.[9] Thus, those researchers I mentioned, who are messing around with new paradigms on arXiv, are in a great position to twist “Pause AI” type public advocacy into support for what they’re doing!

I am a long-time volunteer with the organization bearing the name PauseAI. Our message is that increasing AI capabilities is the problem -- not which paradigm is used to get there. The current paradigm is dangerous in some fairly legible ways, but that doesn't at all imply that other paradigms are any better. Any effort to create increasingly capable and increasingly general AI systems ought to be very illegal unless paired with a robust safety case, and we mostly don't tie this to the specifics of LLMs.

> Well, the government regulators hardly matter anyway, since regulating the activity of “playing with toy models, and publishing stuff on arXiv and GitHub” is a hell of an ask—I think it’s so unlikely to happen that it’s a waste of time to even talk about it, even if it were a good idea all-things-considered.

Yeah, restricting the creation and dissemination of most AGI-related research is definitely a much harder ask. I can imagine a world that has an appetite for that kind of invasive regulation (if it is necessary), but it would probably require intervening steps to get there, including first regulating only the biggest players in the AGI race (which is a very popular idea across the political spectrum in the Western world).

> 1.6.2 I’m broadly pessimistic about existing efforts towards regulating AGI

My overall p(doom from AI by 2040) is about 70%, which shows pessimism on my part as well. But of course, that's why I'm trying so hard. My ranking of "ways we survive" from most to least likely goes: Robust Governance Solutions > Sheer Dumb Luck > Robust Technical Solutions. So advocacy is where I spend my time.

In any case, a world that is more aware of the problem is one that is more likely to solve it by some means or another. I'm working to buy us some luck, so to speak.

A case for courage, when speaking of AI danger
Haiku · 15d

It's important for everyone to know that there are many things they can personally do to help steer the world onto a safer trajectory. I am a volunteer with PauseAI, and while I am very bad at grassroots organizing and lobbying, I still managed to start a local chapter and have some positive impact on my federal representative's office.

I suggest checking out PauseAI's Take Action page. I also strongly endorse ControlAI, who are more centralized and have an excellent newsletter.

Too Soon
Haiku · 2mo

I'm sorry for your loss. It is something no one should have to go through.

My father was diagnosed with Parkinson's last year. I have processed and accepted the fact that he is going to die.

Under the circumstances, he is most likely going to die from artificial intelligence at about the same time that I do.

There is no temptation you could offer me that would make me risk the end of all things. Not the prevention of my father's death. Not the prevention of my death. Not the prevention of my partner's death. I do not need AGI. Humanity as a whole does not need AGI, nor do most people even want it.

Death is horrible, which is why everyone should be strongly advocating for AGI not to be built until it is safe. By default, it will kill literally everyone.

If you find yourself weighing the lives of everyone on earth and deciding for yourself whether they should be imperiled, then you have learned the wrong lesson from stories of comic book supervillains. It's not our choice to make, and we are about to murder everyone's mothers.

Why Should I Assume CCP AGI is Worse Than USG AGI?
Haiku · 3mo

I don't know what it would mean for AI to "be democratic." People in a democratic system can use tool AI, but if ASI is created, there will be no room for human decision-making on any level of abstraction that the AI cares about. I suppose it's possible for an ASI to focus its efforts solely on maintaining a democratic system, without making any object-level decisions itself. But I don't think anyone is even trying to build such a thing.

If intent-aligned ASI is successfully created, the first step is always "take over the world," which isn't a very democratic thing to do. That doesn't necessarily mean there is a better alternative, but I do so wish that AI industry leaders would stop making overtures to democracy out of the other side of their mouths. For most singularitarians, this is and always has been about securing or summoning ultimate power and ushering in a permanent galactic utopia.

The Last Light
Haiku · 3mo

This is beautiful, and I have been soothed and uplifted by it. Thank you for sharing the gift of gratitude with me.

Short Timelines Don't Devalue Long Horizon Research
Haiku · 3mo

There is a lot wrong with this post.

  1. The AI you want to assist you in research is misaligned and not trustworthy.
  2. AI is becoming less corrigible as it becomes more powerful.
  3. AI safety research almost certainly cannot outpace AI capabilities research from an equal start.
  4. AI safety research is way behind capabilities research.
  5. Solving the technical alignment problem on its own does not solve the AI doom crisis.
  6. Short timelines very likely mean we're just dead, so this is a conversation about what to do with the last years of your life, not what to do that stands a chance at being useful.

Overall, the argument in this post serves primarily to reinforce an existing belief and to make people feel better about what they are already doing. (In other words, it is just cope.)

Bonus:

  • AI governance is strictly necessary in order to prevent the world from being destroyed.
  • AI governance on its own is sufficient to prevent the world from being destroyed.
  • AI governance is evidentially much more tractable than AI technical alignment.

Google DeepMind: An Approach to Technical AGI Safety and Security
Haiku · 3mo

On a first reading of this summary, I take it as a small positive update on the quality of near-term AI Safety thinking at DeepMind. It treads familiar ground and doesn't raise any major red flags for me.

The two things I would most like to know are:

  1. Will DeepMind commit to halting frontier AGI research if they cannot provide a robust safety case for systems that are estimated to have a non-trivial chance of significantly harming humanity?
  2. Does the safety team have veto power on development or deployment of systems they deem to be unsafe?

A "yes" to both would be a very positive surprise to me, and would lead me to additionally ask DeepMind to publicly support a global treaty that implements these bare-minimum policies in a way that can be verified and enforced.

A "no" to either would mean this work falls under milling behavior, and will not meaningfully contribute toward keeping humanity safe from DeepMind's own actions.

We Have No Plan for Preventing Loss of Control in Open Models
Haiku · 4mo

The answer is to fight as hard as humanly possible right now to get the governments of the world to shut down all frontier AI development immediately. For two years, I have heard no other plan within an order of magnitude of this in terms of viability.

I still expect to die by default, but we won't get lucky without a lot of work. CPR only works 10% of the time, but it works 0% of the time when you don't do it.

The machine has no mouth and it must scream
Haiku · 4mo

Only the sane are reaching for water.
