Comments

Isn't Zvi's post an attempt to explain those observations?

Any explanations for why Nick Bostrom has been absent, arguably notably so, from recent public alignment conversations (particularly since ChatGPT)?

He's not on this list (yet other FHI members, like Toby Ord, are). He wasn't on the FLI open letter either, but I could understand why he might've avoided endorsing that letter given its much wider scope.

Most of the argument can be boiled down to a simple syllogism: the superior intelligence is always in control; as soon as AI is more intelligent than we are, we are no longer in control.

Seems right to me. And it's a helpful distillation.

When we think about Western empires or alien invasions, what makes one side superior is not raw intelligence, but the results of that intelligence compounded over time, in the form of science, technology, infrastructure, and wealth. Similarly, an unaided human is no match for most animals. AI, no matter how intelligent, will not start out with a compounding advantage.

Similarly, will we really have no ability to learn from mistakes? One of the prophets’ worries is “fast takeoff”, the idea that AI progress could go from ordinary to godlike literally overnight (perhaps through “recursive self-improvement”). But in reality, we seem to be seeing a “slow takeoff,” as some form of AI has arrived and we actually have time to talk and worry about it (even though Eliezer claims that fast takeoff has not yet been invalidated).

If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don’t succeed on the first try of everything they do. The prophets think that AI can deduce its way to victory—the same way they think they can deduce their way to predicting such outcomes.

I'm not seeing how this is conceptually distinct from the existing takeoff concept. 

  • Aren't science, technology, infrastructure, and wealth merely intelligence + time (+ matter)?
  • And compounding, too, is just intelligence + time, no?
  • And whether the rogue AI succeeds on its first attempt at a takeover just depends on its intelligence level at that time, right? Like, a professional chess player will completely dominate me in a chess match on their first try because our gap in chess intelligence is super large. But that same pro playing an adjacently-ranked competitor likely won't produce such a dominating outcome, right?

I'm failing to see how you've changed the terms of the argument?

Is it just that you think slow takeoff is more likely?

There's a somewhat obscure but fairly-compelling-to-me model of psychology which states that people are only happy/okay to the extent that they have some sort of plan, and also expect that plan to succeed.

What's the name of this model, or can you point to a fuller version of it? It seems right, and I'd like to see it fleshed out.

Hi Matt! On the coordination crux, you say:

The first AGIs we construct will be born into a culture already capable of coordinating, and sharing knowledge, making the potential power difference between AGI and humans relatively much smaller than between humans and other animals, at least at first.

but wouldn't an AGI be able to coordinate and share knowledge with humans because

a) it can impersonate a human online and communicate with people via text and speech, and

b) it'll realize such coordination is vital to accomplishing its goals, and so it'll do the necessary acculturation?

Watching all the episodes of Friends or reading all the social media posts by the biggest influencers, as examples. 

You can get many of the benefits of having one country through mechanisms like free trade agreements, open borders, shared currency zones etc.

This is key in my opinion.

Duplicates - digital copies as opposed to genetic clones - might not require new training (unless a whole/partial restart/retraining was being done).

Wouldn't new training be strongly adaptive -- if not strictly required -- if the duplicate's environment is substantively different from the environment of its parent?

When combined with self-modification, there could be 'evolution' without 'deaths' of 'individuals' - just continual ship of Theseus processes. (Perhaps stuff like merging as well, which is more complicated.)

I understand this model; at the same time, however, it's my impression that it's commonplace in software development to periodically jettison a legacy system altogether in favor of building a new one from the ground up. That seems like evidence that continual self-modification in software systems carries growing costs that might limit this strategy.

I'll check it out -- thanks, Zachary!
