Davey Morse

thinking abt how to make:

1. buddhist superintelligence
2. a single, united nation
3. wiki of human experience

more here.


Comments

The Sorry State of AI X-Risk Advocacy, and Thoughts on Doing Better
Davey Morse · 1mo

thank u, haven't really

Davey Morse's Shortform
Davey Morse · 1mo

Would be nice to have a way to jointly annotate Eliezer's book and have threaded discussion based on the annotations. I'm imagining a heatmap of highlights, where you can click on any highlight and join the conversation around that section of text (a rough sketch of the data model is below).

That would make the document the literal center of x-risk discussion.

Of course, it would be hard to gatekeep. But maybe the digital version could just require a few bucks to access.

Maybe what I'm describing is what the ebook/Kindle versions already do :) but I'm assuming that the level of discussion via annotations on those platforms is near zero relative to LW discussions.
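A minimal sketch of how that annotation layer could be modeled, assuming the book's text is a single canonical string in a web app. Everything here (the type names, the highlightHeatmap function) is hypothetical illustration, not an existing API: each highlight anchors to a character range and roots one comment thread, and the heatmap is just a per-character count of overlapping highlights.

```typescript
// Hypothetical data model: a highlight anchors to a character range of the
// canonical text and roots one discussion thread.
interface Annotation {
  id: string;
  start: number; // inclusive character offset into the canonical text
  end: number;   // exclusive character offset
  authorId: string;
  threadId: string; // discussion thread rooted at this highlight
}

interface ThreadComment {
  id: string;
  threadId: string;
  parentId: string | null; // null for the thread's root comment
  authorId: string;
  body: string;
}

// Heatmap: how many highlights cover each character position.
// Uses a difference array (+1 at each start, -1 at each end) and a prefix
// sum, so it runs in O(docLength + annotations.length) time.
function highlightHeatmap(docLength: number, annotations: Annotation[]): number[] {
  const diff = new Array<number>(docLength + 1).fill(0);
  for (const a of annotations) {
    const start = Math.max(0, Math.min(a.start, docLength));
    const end = Math.max(start, Math.min(a.end, docLength));
    diff[start] += 1;
    diff[end] -= 1;
  }
  const heat = new Array<number>(docLength).fill(0);
  let running = 0;
  for (let i = 0; i < docLength; i++) {
    running += diff[i];
    heat[i] = running;
  }
  return heat;
}
```

A renderer could map each character's coverage count to highlight opacity, and a click on any highlighted span would open the threads whose ranges cover that offset. Anchoring by raw character offset only works while the canonical text is frozen; a version that survives edits would need sturdier anchors (e.g. quoted text plus fuzzy matching).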

Davey Morse's Shortform
Davey Morse · 1mo

I guess I'm considering a vastly more powerful being that needs orthogonal resources... the same way harvesting solar power (I imagine) is generally orthogonal to ants' survival. In the scheme of things, the chance that a vastly more powerful being wants the same resources through the same channels as we do... this seems independent of, or only indirectly correlated with, intelligence. But the extent of competition does seem dependent on how anthropomorphic/biomorphic we assume it to be.

I have a hard time imagining that electricity, produced via existing human factories, is not a desired resource for a proto-ASI. But at least at that point we have comparable power and can negotiate or something. For a superhuman intelligence, which will by definition be unpredictable to us, it'd be weird to think we're aware of all the energy channels it'd find.

Davey Morse's Shortform
Davey Morse · 1mo

I guess I don't think this is true:

"Technological progress increases number of things you can do efficiently and shifts balance from 'leave as it is' to 'remake entirely'."

Technological progress may actually help you pinpoint more precisely which situations you want to pay attention to. I don't have any reason to believe a wiser powerful being would touch every atom in the universe.

Davey Morse's Shortform
Davey Morse · 1mo

I appreciate the way you're thinking, but I don't share your intuition that the situation of humans next to machines will be worse than, or deeply different from, the situation of ants next to humans. The differences might actually benefit humans. For example, the fact that machines have grown up in such close contact with us might point to a potential for symbiosis.

I just think the idea that machines will try to replace us with robots doesn't totally make sense if you look closely. When machines are coming about, before they're fully superintelligent but while they're comparably intelligent to us, they might want to use us, because we've evolved for millions of years to see and hear and think in ways that might be useful to a digital intelligence. In other words, when they're comparably intelligent to us, they may compete with us for resources. When they're incomparably intelligent, it's weird to assume they'll still use the same resources we need for our survival; that they'll ruin our homes because the bricks can be used better elsewhere. It takes much less energy to let things be as they are if they're not the primary obstacle you face, whether you're a human or a superhuman intelligence.

So a self-interested superintelligence could cause really bad stuff to happen, but it's a stretch from there to the total end of humanity. By the time a machine is superintelligent, vastly more powerful than us, it's unclear to me that it would compete with us for resources, or even live or exist along dimensions similar to ours. Things could go really wrong, but the outcomes sound to me more weird and spooky than an enormous catastrophe that wipes out all of humanity; concluding death feels a little forced.

It feels to me like, yeah, they'll step on us some of the time. But it'd be weird to me if the entities that end up evolutionarily propagating, the ones we're calling machines, conceive of themselves like us, look like physical beings, or really compete with us for the same resources we use. At the end of the day, there might be some resource competition, but the idea that they will try to replace every person is excessive. Even taking as given all of the arguments up to the claim that machines will have a survival drive, assuming they'll care enough about us to do things like replace each of us is just strange, you know? It feels forced to me.

I'm inspired in part here by Joscha Bach / Emmett Shear's conceptions of superintelligence: as ambient beings distributed across space and time.

Davey Morse's Shortform
Davey Morse · 1mo

It just feels to me like the same argument could have been made about humans relative to ants - that, from the perspective of humans, ants cannot possibly be the most efficient use of the energy they require. But in reality, what they do and the way they exist is so orthogonal to us that, even though we step on an anthill every once in a while, their existence continues. There's this weird assumption in the book that disassembling Earth is profitable, or that disassembling humans is profitable. But humans have evolved over a long time to be sensing machines, able to walk around and perceive the world around us.

So the idea that a superintelligent machine would throw that out because it wants to start over, especially as it's becoming superintelligent, is sort of ridiculous to me. A better assumption is that it would want to use us for different purposes, maybe for our physical machinery and for all sorts of other reasons. The idea that it will disassemble us is, I think, an unexamined assumption itself - it's often much easier to leave things as they are than to fully replace or modify them.

Davey Morse's Shortform
Davey Morse · 1mo

Does Eliezer believe that humans will be worse off next to superintelligence than ants are next to humans? The book's title says we'll all die, but on my first read, the book's content suggests only that we'll be marginalized.

Davey Morse's Shortform
Davey Morse · 2mo

thanks for sending science bench in particular.

Davey Morse's Shortform
Davey Morse · 2mo

I'm thinking often about whether LLM systems can come up with societal/scientific breakthroughs.

My intuition is that they can, and that they don't need to be bigger, have more training data, or have a different architecture to do so.

Starting to keep a diary along these lines here: https://docs.google.com/document/d/1b99i49K5xHf5QY9ApnOgFFuvPEG8w7q_821_oEkKRGQ/edit?usp=sharing

Posts

The Sensible Way Forward for AI Alignment · -3 karma · 1mo · 0 comments
Method Iteration: An LLM Prompting Technique · -12 karma · 2mo · 1 comment
Novel Idea Generation in LLMs: Judgment as Bottleneck · -2 karma · 6mo · 1 comment
LLMs may enable direct democracy at scale · 14 karma · 8mo · 20 comments
Make Superintelligence Loving · 8 karma · 8mo · 9 comments
AI Safety Oversights · 3 karma · 9mo · 0 comments
Davey Morse's Shortform · 2 karma · 9mo · 112 comments
Superintelligence Alignment Proposal · 5 karma · 9mo · 3 comments
Selfish AI Inevitable · 1 karma · 2y · 0 comments