Jakub Growiec

Comments

The Best of All Possible Worlds
Jakub Growiec · 3mo

I don't know if worrying about animal rights should count when we simultaneously practice factory farming...

And as for the trends in human rights, democratic inclusion, expansion of welfare states, decline in violence, etc., they are real, but unfortunately they also correlate in time with increasing demand for skilled human labor. In a hypothetical future world where human labor wouldn't matter anymore because of full automation by superhuman AGI, I fear that these trends could easily reverse (though we may actually become extinct before that takes place). 

The Best of All Possible Worlds
Jakub Growiec · 3mo

Thank you very much for this thoughtful and generous comment! 

My quick reaction is that both proposed paths should be taken in parallel: (1) what PauseAI proposes, and I support, is to pause the race towards AGI. I agree that this may be hard, but we really need more time to work on AGI value alignment, so at the very least we should try. The barriers to a successful pause are all socio-political, not technological, so it is at least not entirely impossible. And then of course (2) researchers should use the time we have to test and probe a variety of ideas, precisely like the ones you mentioned. A pause would allow these researchers to do so without the pressure to cut corners and deliver hopeful results on an insanely short deadline, as is currently the case.

The Best of All Possible Worlds
Jakub Growiec · 3mo

Quick reply: yes, that would be it - the view "that there is an objective morality and that sufficiently smart minds will converge to understanding and obeying it. On this view, AIs will end up behaving ethically by default". 

I don't subscribe to that view, nor to the belief that there are two attractors. I think there is just one attractor, or one default outcome: the one which you call the "colonizer" and I call a process of "local control maximization". That is an AI goal structure that includes Bostrom's four instrumental goals. It may have some final goal above the instrumental ones, such as building paperclips or solving the Riemann hypothesis, but it need not have one. Just as humanity probably does not have any superior goal beyond survival and multiplication, and, by extension, resource acquisition, efficiency, and creativity / technological advancement.

'High-Level Machine Intelligence' and 'Full Automation of Labor' in the AI Impacts Surveys
Jakub Growiec · 6mo

I was also struck by this huge discrepancy between HLMI and FAOL predictions. I think the FAOL predictions in particular are unreliable. My interpretation is that when respondents push their timelines so far into the future, some of them may in fact be trying to resist admitting the possibility of AI takeover.

The key question is what "automating all tasks" really means. "All tasks" includes, in particular, all decision-making: managerial, political, strategic, the small and the large, all of it. All the agency, long-term planning, and execution of one's own plans. Automating all tasks in fact implies AI takeover. But merely considering this possibility may clash with the view that many people hold, namely that AIs are controllable tools rather than uncontrollable agents (see the excellent new paper by Severin Field on this).

And there will surely be strong competitive forces pushing towards full automation once that option becomes technically feasible. For example, if you automate a firm's production processes at all levels up to, but not including, the CEO, then the human CEO becomes a bottleneck, slowing down the firm's operations, potentially by orders of magnitude. Your firm may then be pushed out of the market by a competitor who automated their CEO as well.
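(A minimal back-of-the-envelope sketch of that bottleneck, in the spirit of Amdahl's law; this illustration and its numbers are my own hypothetical assumptions, not part of the original argument: if some residual fraction of a firm's wall-clock process time must still pass through human decision-making, the overall speedup from automating everything else is capped by the inverse of that fraction.)

```python
# Hypothetical illustration (Amdahl's-law-style): how much a residual human
# bottleneck caps the speedup from automating everything else in a firm.

def max_speedup(human_fraction: float) -> float:
    """Upper bound on overall speedup when `human_fraction` of wall-clock
    process time still requires human decisions, and the remaining share
    is automated (treated as arbitrarily fast for this sketch)."""
    return 1.0 / human_fraction

# Purely illustrative shares of time spent waiting on the human CEO.
for f in (0.10, 0.01, 0.001):
    print(f"human share {f:.1%} -> overall speedup capped at {max_speedup(f):,.0f}x")
```

On these assumed numbers, a firm that keeps a human in the loop for even 1% of its process time cannot speed up by more than about 100x, while a fully automated competitor faces no such cap; that is the sense in which the human CEO becomes the bottleneck.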

My logic suggests that FAOL should come only slightly later than HLMI. Of course, feasibility has to come first, then adoption. Some lag could follow from cost considerations (AI agents / robot actuators may initially be too expensive) or legal constraints, and perhaps also from human preferences (though I doubt that point). But once we have FAOL, we have AI takeover, so in fact such a scenario redirects our conversation to the topic of AI x-risk.

Posts

Agent 002: A story about how artificial intelligence might soon destroy humanity (1mo)
The Best of All Possible Worlds (3mo)
The Apocalypse is Near. Can Humanity Coexist with Artificial Superintelligence? (5mo)
The Economics of p(doom) (6mo)
The Hardware-Software Framework: A New Perspective on Economic Growth with AI (6mo)