Noosphere89

Comments, sorted by newest
Noosphere89's Shortform
Noosphere89 · 9mo* · 63

Links to long comments that I want to pin, but that are too long to be pinned:

https://www.lesswrong.com/posts/Zzar6BWML555xSt6Z/?commentId=aDuYa3DL48TTLPsdJ

https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/?commentId=Gcigdmuje4EacwirD

https://www.lesswrong.com/posts/DCQ8GfzCqoBzgziew/?commentId=RhTNmgZqjJpzGGAaL

AI Governance Strategy Builder: A Browser Game
Noosphere89 · 12h · 20

I'm not sure what commitment levels you are using, but I am using the moderate commitment scenario, and got significantly better results.

I spent all my political capital and $124 billion of play money to achieve this:

The policies I used:

Sense-making about extreme power concentration
Noosphere89 · 3d · 30

I do buy this, but note that it requires fairly drastic actions that essentially amount to a pivotal act using an AI to coup society and government, because they have a limited time window in which to act before economic incentives mean that most of the others kill/enslave almost everyone else.

Contra cousin_it, I basically don't buy the story that power corrupts/changes your values; instead, it corrupts your world model, because there's a very large incentive for your underlings to misreport things so as to flatter you. But conditional on technical alignment being solved, this doesn't matter anymore, so I think power-grabs might not result in as bad an outcome as we feared.

But this does require pretty massive changes to ensure the altruists stay in power, and they are not prepared to think about what this will mean.

Sense-making about extreme power concentration
Noosphere89 · 3d* · 76

but I don't think it's obvious this also applies in situations where there are enough resources to make everyone somewhat rich because betting on a coup to concentrate more power in the hands of your faction becomes less appealing as people get closer to saturating their selfish preferences.

I think the core blocker here is that existential threats to factions if they don't gain power, or even perceived existential threats, will become more common, because you've removed the constraint of popular revolt/the ability to sabotage logistics, and you've made coordination to remove other factions very easy. I also expect identity-based issues to be much more common post-AGI, as the constraint of wealth/power is almost completely removed, and these issues tend to be easily heightened to existential stakes.

I'm not giving examples here, even though they exist, since they're far too political for LW. LW's norm against politics is especially important to preserve here, because these existential issues would over time make LW comment/post quality much worse.

More generally, I'm more pessimistic about what a lot of people would do to other people if they didn't have a reason to fear any feedback/backlash.

Edit: I removed the section on the more common case being Nigeria or Congo.

Is there actually a reason to use the term AGI/ASI anymore?
Noosphere89 · 4d · 62

One of the cruxes here is whether one believes that "AGI" is in fact a real distinct thing, rather than there just being a space of diverse cognitive algorithms of different specializations and powers, with no high-level organization. I believe that, and that the current LLM artefacts are not AGI. Some people disagree. That's a crux.

My take is that there is a distinction being revealed here, but it is more a distinction between fully substituting for a human and complementing the human, because the human becomes a bottleneck quickly; this distinction is not necessarily an algorithmic one.

I remember a curve where costs first fell, then asymptoted while the human was the bottleneck, and then rapidly went to 0 as the AI did 100% of the task.
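To gesture at the shape of that curve, here's a toy sketch (my own illustration with made-up numbers, not something from the original discussion): per-task cost keeps a floor as long as any human work remains, and only collapses once the human fraction hits zero.

```python
# Toy illustration (made-up numbers): total task cost = human portion + AI portion.
# While any human work remains, cost asymptotes at the human floor; it only
# collapses once automation hits 100%.

def task_cost(human_fraction: float, ai_cost_multiplier: float,
              baseline_hours: float = 10.0, human_hourly: float = 100.0) -> float:
    """Cost of one task given the fraction of it still done by a human."""
    human_cost = human_fraction * baseline_hours * human_hourly
    ai_cost = (1 - human_fraction) * baseline_hours * human_hourly * ai_cost_multiplier
    return human_cost + ai_cost

for human_fraction in (0.5, 0.1, 0.01, 0.0):
    # Even with AI labor ~1000x cheaper than human labor, the human floor dominates
    # until the human fraction itself reaches zero.
    print(human_fraction, round(task_cost(human_fraction, ai_cost_multiplier=0.001), 2))
```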

This is why I use the term "fully automated (task) AI" instead of AGI.

Consider a function f(x) that could be expanded into a power series p(x) with an infinite convergence radius, e.g. f(x) = e^x. The power series is that function, exactly, not an approximation; it's a valid way to define f(x).

However, that validity only holds if you use the whole infinite-term power series, p_∞(x). Partial sums p_n(x), with finite n, do not equal f(x), "are not" f(x). In practice, however, we're always limited to such finite partial sums. If so, then if you're choosing to physically/algorithmically implement f(x) as some p_n(x), the behavior of p_n(x) as you turn the "crank" of n is crucial for understanding what you're going to achieve.

This is probably the best explanation of why philosophical arguments around AI/AGI were so unproductive, and it's a better version of my comment on this subject: those arguments tried to deal with the regime of infinite compute, and forgot that if you allow infinite compute, there is no actual philosophical difference between an AI and a human. The constraints actually matter, or else the Chinese Room is a valid AGI.
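To make the quoted point concrete, here's a minimal sketch (my own illustration; the quoted comment doesn't include code) of how the partial sums p_n(x) of the exponential series behave as you turn the crank of n:

```python
import math

# Partial sums p_n(x) of the exponential series sum_{k>=0} x^k / k!.
# The full infinite series equals e^x exactly; any finite truncation is only
# an approximation, and how good it is depends on where you stop.

def p_n(x: float, n: int) -> float:
    """Partial sum of the power series for e^x, up to the x^n term."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 5.0
for n in (2, 5, 10, 20):
    print(f"n={n:2d}  p_n(x)={p_n(x, n):10.4f}  e^x={math.exp(x):.4f}")
```

For x = 5, p_2 is off by roughly a factor of 8, p_10 is within a couple of percent, and p_20 is essentially exact, so which n you can actually afford determines what you actually get.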

That said, I do think there's a case to be made that we too readily tend to nitpick capabilities, but I do think this is one of the most useful answers, so cheers.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Noosphere89 · 5d · 51

I disagree with this, and in particular with the move by TsviBT of arguing that today's AI has basically zero relevance to what AGI needs to have, and of claiming that LP25 programs aren't actually creative. More generally, setting up a hard border between today's AI and AGI is behind a huge amount of AI discourse, especially claims that AI will soon hit a wall for xyz reasons.

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
Noosphere89 · 5d · 20

Yeah, this seems like a standard dispute over words, which the sequence posts Disguised Queries and Disputing Definitions already solved.

I'll also link my own comment on how what TsviBT is doing makes AI discourse worse, because it promotes the incorrect binary frame and demotes the correct continuous frame around AI progress.

AIs will greatly change engineering in AI companies well before AGI
Noosphere89 · 6d · 100

I generally agree with this, so I'll just elaborate on disagreements; assume I agree with anything I don't mention.

On Amdahl's law:

  • These AIs wouldn't be able to automate some tasks (without a human helping them) and this bottleneck would limit the speed-up due to Amdahl's law.

While I agree in the context of the post, I generally don't like Amdahl's law arguments, and tend to think they're a midwit trap: people forget that more resources don't just let you solve old problems more efficiently, they also make new problems practical at all. This is why I believe parallelization is usually better than pessimists argue, due to Gustafson-Barsis's law (see the sketch below).

This doesn't matter here, but it does matter once you fully automate a field.
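For concreteness, here's a minimal sketch of the contrast (the standard textbook forms of the two laws; the 95% automatable fraction is just an assumed number for illustration):

```python
# Standard textbook forms of the two laws (the 95% fraction below is an assumed
# number for illustration, not anything from the post being discussed).

def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Fixed workload: the serial fraction caps the achievable speedup."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

def gustafson_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Scaled workload: resources go to bigger problems, so speedup keeps growing."""
    return (1.0 - parallel_fraction) + parallel_fraction * n_workers

p = 0.95
for n in (10, 100, 1000):
    print(n, round(amdahl_speedup(p, n), 1), round(gustafson_speedup(p, n), 1))
```

With p = 0.95, Amdahl's law caps the speedup at 1/(1 - p) = 20x no matter how many workers you add, while the Gustafson-style scaled speedup keeps growing because the workload itself grows.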

There is an obvious consequence that this will cause increased awareness and salience of: AI, AI automating AI R&D, and the potential for powerful capabilities in the short term.

So I agree there will be more salience, but I generally expect it to be pretty restrained. In genpop, I expect much more discontinuous salience and responses, and I expect much weaker responses until we have full automation of AI R&D at least, and maybe even longer than that.

A key worldview difference is that I expect genpop has already believed in/been motivated to hear this argument for a very long time, regardless of whether it is correct:

"Now that we've seen AIs automate AI R&D and no one is even claiming that we're seeing explosive capabilities growth, the intelligence explosion has been disproven; compute bottlenecks really are decisive. (Or insert whichever bottleneck this person believes is most important.) The intelligence explosion must have been bullshit all along and look, we don't see any of these intelligence explosion proponents apologizing for being wrong, probably they're off inventing some new milestone of AI to fearmonger about." 

But Have They Engaged With The Arguments? [Linkpost]
Noosphere89 · 6d · 30

I basically agree with the 1st and 2nd points, and somewhat disagree with the 3rd point (I do consider it plausible that ASIs develop goals that are incompatible with human survival, but I don't think it's very likely). The 4th point is right, but the argument is locally invalid, because processor clock speeds are not how fast AIs think. I basically agree with the point that sufficiently aggressive policy responses can avert catastrophe, but I don't agree with the premise that wait-and-see is utterly unviable for AI tech, and I also disagree with the premise that ASI is a global suicide bomb.

Critic Contributions Are Logically Irrelevant
Noosphere89 · 7d · 0 / -9

Yes, but not at the ratios we see in practice.

Sequences

An Opinionated Guide to Computability and Complexity

Posts

13 · Is there actually a reason to use the term AGI/ASI anymore? [Question] · 5d · 5
71 · But Have They Engaged With The Arguments? [Linkpost] · 13d · 14
18 · LLM Daydreaming (gwern.net) · 2mo · 2
11 · Difficulties of Eschatological policy making [Linkpost] · 3mo · 3
7 · State of play of AI progress (and related brakes on an intelligence explosion) [Linkpost] · 5mo · 0
57 · The case for multi-decade AI timelines [Linkpost] · 5mo · 22
15 · The real reason AI benchmarks haven’t reflected economic impacts · 5mo · 0
22 · Does the AI control agenda broadly rely on no FOOM being possible? [Question] · 6mo · 3
0 · Can a finite physical device be Turing equivalent? · 6mo · 10
37 · When is reward ever the optimization target? [Question] · 11mo · 17
Wikitag Contributions

Acausal Trade · 3 months ago · (+18/-18)
Shard Theory · a year ago · (+2)
RLHF · a year ago · (+27)
Embedded Agency · 3 years ago · (+640/-10)
Qualia · 3 years ago · (-1)
Embedded Agency · 3 years ago · (+314/-43)
Qualia · 3 years ago · (+74/-4)
Qualia · 3 years ago · (+20/-10)