LESSWRONG
Noosphere89

Sequences
An Opinionated Guide to Computability and Complexity
2 · Noosphere89's Shortform · 3y · 48

Comments

Noosphere89's Shortform
Noosphere89 · 8mo* · 63

Links to long comments that I want to pin but are too long to be pinned:

https://www.lesswrong.com/posts/Zzar6BWML555xSt6Z/?commentId=aDuYa3DL48TTLPsdJ

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/?commentId=Gcigdmuje4EacwirD

https://www.lesswrong.com/posts/DCQ8GfzCqoBzgziew/?commentId=RhTNmgZqjJpzGGAaL

Sense-making about extreme power concentration
Noosphere89 · 1d · 30

I do buy this, but note that this requires fairly drastic actions that essentially amount to a pivotal act, using an AI to coup society and government, because they have a limited time window in which to act before economic incentives mean that most of the others kill/enslave almost everyone else.

Contra cousin_it, I basically don't buy the story that power corrupts/changes your values; instead, it corrupts your world model, because there's a very large incentive for your underlings to misreport things in ways that flatter you. But conditional on technical alignment being solved, this doesn't matter anymore, so I think power grabs might not result in as bad an outcome as we feared.

But this does require pretty massive changes to ensure the altruists stay in power, and they are not prepared to think about what this will mean.

Sense-making about extreme power concentration
Noosphere89 · 2d* · 73

but I don't think it's obvious this also applies in situations where there are enough resources to make everyone somewhat rich because betting on a coup to concentrate more power in the hands of your faction becomes less appealing as people get closer to saturating their selfish preferences.

I think the core blocker here is that existential threats to factions that don't gain power, or even perceived existential threats to such factions, will become more common: you've removed the constraint of popular revolt/the ability to sabotage logistics, and you've made coordination to remove other factions very easy. I also expect identity-based issues to be much more common post-AGI, as the constraint of wealth/power is almost completely removed, and these issues tend to be easily heightened to existential stakes.

I'm not giving examples here, even though they exist, since they're far too political for LW, and LW's norm against politics is especially important to preserve here: these existential issues would, over time, make comment/post quality much worse.

More generally, I'm more pessimistic about what a lot of people would do to other people if they didn't have a reason to fear any feedback/backlash.

Edit: I removed the section on the more common case being Nigeria or Congo.

Is there actually a reason to use the term AGI/ASI anymore?
Noosphere89 · 3d · 62

One of the cruxes here is whether one believes that "AGI" is in fact a real distinct thing, rather than there just being a space of diverse cognitive algorithms of different specializations and powers, with no high-level organization. I believe that, and that the current LLM artefacts are not AGI. Some people disagree. That's a crux.

My take is that there is a distinction being revealed here, but it's more a distinction between fully substituting for a human and merely complementing the human (because the human quickly becomes the bottleneck), and it's not necessarily an algorithmic distinction.

I remember a curve where costs first fell, then asymptoted while the human was the bottleneck, and then rapidly went to 0 as the AI did 100% of the task.

This is why I use the term "fully automated (task) AI" instead of AGI.
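To make that curve concrete, here's a minimal toy sketch (my own illustration, not anything from the linked discussion; the fixed "human overhead" term and all the numbers are assumptions):

```python
def cost_per_task(automated_fraction: float,
                  human_work_cost: float = 100.0,
                  human_overhead: float = 20.0,
                  ai_cost: float = 0.1) -> float:
    """Hypothetical toy model: while any human is in the loop, a fixed
    overhead (review, hand-offs, context) is paid, so cost falls and then
    flattens near that overhead; only at 100% automation does it collapse
    to the near-zero AI cost."""
    a = automated_fraction
    if a < 1.0:
        return human_overhead + (1.0 - a) * human_work_cost + a * ai_cost
    return ai_cost

for a in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"{a:.2f} -> {cost_per_task(a):.2f}")
```

The point the toy model tracks is just that the last slice of human involvement dominates the cost, so nothing dramatic happens until automation actually hits 100%.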

Consider a function f(x) that could be expanded into a power series p(x) with an infinite convergence radius, e.g. f(x) = e^x.[2] The power series is that function, exactly, not an approximation; it's a valid way to define f(x).

However, that validity only holds if you use the whole infinite-term power series, p_∞(x). Partial sums p_n(x), with finite n, do not equal f(x), "are not" f(x). In practice, however, we're always limited to such finite partial sums. If so, then if you're choosing to physically/algorithmically implement f(x) as some p_n(x), the behavior of p_n(x) as you turn the "crank" of n is crucial for understanding what you're going to achieve.

This is probably the best explanation of why philosophical arguments around AI/AGI were so unproductive (and a better version of my own comment on this subject): they tried to deal with the regime of infinite compute, and forgot that if you allow this, there is no actual philosophical difference between an AI and a human. The constraints actually matter, or else the Chinese Room is a valid AGI.
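A quick numeric illustration of the quoted point, a minimal sketch assuming the e^x example (the choice of x = 10 and the cutoffs are just assumed for illustration): partial sums p_n(x) can be wildly far from f(x) until n is large relative to x, so which truncation you can actually afford is what determines the behavior you get.

```python
import math

def partial_sum_exp(x: float, n: int) -> float:
    """p_n(x): the partial sum sum_{k=0}^{n} x^k / k! of the power series for e^x."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 10.0
for n in (2, 5, 10, 20, 40):
    print(f"n={n:2d}  p_n(x)={partial_sum_exp(x, n):12.2f}  e^x={math.exp(x):12.2f}")
```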

That said, I do think there's a case to be made that we too readily nitpick capabilities, but this is one of the most useful answers, so cheers.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Noosphere89 · 3d · 51

I disagree with this. In particular, the move TsviBT makes, arguing that today's AI has basically zero relevance to what AGI needs to have, claiming that LP25 programs aren't actually creative, and more generally setting up a hard border between today's AI and AGI, makes up a huge amount of AI discourse, especially claims that AI will soon hit a wall for xyz reasons.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Noosphere89 · 3d · 20

Yeah, this seems like a standard dispute over words, which the sequence posts Disguised Queries and Disputing Definitions already solved.

I'll also link my own comment on how TsviBT's approach is making AI discourse worse: it promotes the incorrect binary frame and demotes the correct continuous frame around AI progress.

AIs will greatly change engineering in AI companies well before AGI
Noosphere89 · 4d · 100

I generally agree with this, so I'll just elaborate on disagreements; anything I don't mention, you should assume I agree with.

On Amdahl's law:

  • These AIs wouldn't be able to automate some tasks (without a human helping them) and this bottleneck would limit the speed-up due to Amdahl's law.

While I agree in the context of the post, I generally don't like Amdahl's law arguments and tend to think they're a midwit trap: people forget that more resources don't just let you solve old problems more efficiently, they also make new problems practical at all. This is why I believe parallelization is usually better than pessimists argue, per Gustafson-Barsis's law (a sketch of the two laws is below).

This doesn't matter here, but it does matter once you fully automate a field.
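For concreteness, a minimal sketch of the two laws (the 95% parallelizable fraction is just an assumed illustrative number): Amdahl's law holds the workload fixed, so the serial part caps the speedup; Gustafson-Barsis's law lets the workload scale with the resources, so the achievable speedup keeps growing.

```python
def amdahl_speedup(parallel_fraction: float, n: int) -> float:
    # Fixed workload: the serial part caps the speedup no matter how many workers you add.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

def gustafson_speedup(parallel_fraction: float, n: int) -> float:
    # Scaled workload: extra workers go toward a bigger problem, so speedup grows roughly linearly.
    return (1.0 - parallel_fraction) + parallel_fraction * n

for n in (10, 100, 1000):
    print(f"n={n:4d}  Amdahl={amdahl_speedup(0.95, n):7.1f}  Gustafson={gustafson_speedup(0.95, n):7.1f}")
```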

There is an obvious consequence that this will cause increased awareness and salience of: AI, AI automating AI R&D, and the potential for powerful capabilities in the short term.

So I agree there will be more salience, but I generally expect it to be pretty restrained. In genpop, I expect much more discontinuous salience and responses, and I expect much weaker responses until we have full automation of AI R&D at least, and maybe even longer than that.

A key worldview difference is that I expect genpop to already believe in, or be motivated to hear, the following argument for a very long time, regardless of whether it is correct:

"Now that we've seen AIs automate AI R&D and no one is even claiming that we're seeing explosive capabilities growth, the intelligence explosion has been disproven; compute bottlenecks really are decisive. (Or insert whichever bottleneck this person believes is most important.) The intelligence explosion must have been bullshit all along and look, we don't see any of these intelligence explosion proponents apologizing for being wrong, probably they're off inventing some new milestone of AI to fearmonger about." 

But Have They Engaged With The Arguments? [Linkpost]
Noosphere89 · 5d · 30

I basically agree with the 1st and 2nd points. I somewhat disagree with the 3rd point (I do consider it plausible that ASIs develop goals that are incompatible with human survival, but I don't think it's very likely). The 4th point is right, but the argument is locally invalid, because processor clock speeds are not how fast AIs think. I basically agree that sufficiently aggressive policy responses can avert catastrophe, but I don't agree with the premise that wait-and-see is utterly unviable for AI tech, and I also disagree with the premise that ASI is a global suicide bomb.

Critic Contributions Are Logically Irrelevant
Noosphere89 · 5d · 0-9

Yes, but not at the ratios we see in practice.

But Have They Engaged With The Arguments? [Linkpost]
Noosphere89 · 5d · 41

For what it's worth, I agree: empirical results have made me worry more relative to last year, and that's part of the reason I no longer put p(doom) below 1-5%.

But there are other important premises that I don't think are well supported by empirics, and they are arguably load-bearing for the confidence people have.

One useful example, from Paul Christiano, is that there's a conflation between solving the alignment problem on the first critical try and not being able to experiment at all; while the former makes AI governance way harder, it doesn't make the science problem nearly as difficult:

Eliezer often equivocates between “you have to get alignment right on the first ‘critical’ try” and “you can’t learn anything about alignment from experimentation and failures before the critical try.” This distinction is very important, and I agree with the former but disagree with the latter. Solving a scientific problem without being able to learn from experiments and failures is incredibly hard. But we will be able to learn a lot about alignment from experiments and trial and error; I think we can get a lot of feedback about what works and deploy more traditional R&D methodology. We have toy models of alignment failures, we have standards for interpretability that we can’t yet meet, and we have theoretical questions we can’t yet answer. The difference is that reality doesn’t force us to solve the problem, or tell us clearly which analogies are the right ones, and so it’s possible for us to push ahead and build AGI without solving alignment. Overall this consideration seems like it makes the institutional problem vastly harder, but does not have such a large effect on the scientific problem.

From this list of disagreements

I mostly agree with the rest of your comment.

Posts

13 · Is there actually a reason to use the term AGI/ASI anymore? [Question] · 3d · 5
71 · But Have They Engaged With The Arguments? [Linkpost] · 11d · 14
18 · LLM Daydreaming (gwern.net) · 2mo · 2
11 · Difficulties of Eschatological policy making [Linkpost] · 3mo · 3
7 · State of play of AI progress (and related brakes on an intelligence explosion) [Linkpost] · 4mo · 0
57 · The case for multi-decade AI timelines [Linkpost] · 5mo · 22
15 · The real reason AI benchmarks haven’t reflected economic impacts · 5mo · 0
22 · Does the AI control agenda broadly rely on no FOOM being possible? [Question] · 6mo · 3
0 · Can a finite physical device be Turing equivalent? · 6mo · 10
37 · When is reward ever the optimization target? [Question] · 11mo · 17
Wikitag Contributions

Acausal Trade · 3 months ago · (+18/-18)
Shard Theory · a year ago · (+2)
RLHF · a year ago · (+27)
Embedded Agency · 3 years ago · (+640/-10)
Qualia · 3 years ago · (-1)
Embedded Agency · 3 years ago · (+314/-43)
Qualia · 3 years ago · (+74/-4)
Qualia · 3 years ago · (+20/-10)