Kenny

Comments

Finance - accessibility (asking for textbook recommendations)

I've been asking various people basically this same question and am still looking for (more) concrete recommendations. (Several people basically answered 'work for a finance/investment/trading company', which is ... not ideal!)

I'm tentatively planning on doing a more intense search for this soonish and I'll comment here, or supply an answer, if I find anything that seems promising.

What's the "This AI is of moral concern." fire alarm?

I've already explained why that's an anti-pattern. If you had rejected the very idea of magnetism when magnetism wasn't understood, it would not now be understood.

I'm rejecting the idea of 'qualia' as something inexplicable, for the same reason I wouldn't reject the idea of magnetism – they both seem (or would seem, for magnetism, in your hypothetical) like phenomena we should expect to eventually explain.

I'm rejecting 'mysterious answers', e.g. "There's not supposed to be a good explanation of qualia".

Who said otherwise? You seem to have decided that "qualia are subjective experiences and we don't understand them" means something like "qualia are entirely and irredeemably subjective and will be a mystery forever".

Sorry – that's not what I intended to convey. And maybe we're writing past each other about this. I suspect that 'qualia' is basically equivalent to something far more general, e.g. 'information processing', and that our intuitions about what 'qualia' are, and the underlying mechanics of them (which many people seem to insist don't exist), are based on the limited means we currently have for, e.g., introspecting on them, communicating with each other about them, and weakly generalizing to other entities (e.g. animals) or possible beings (e.g. AIs).

I also suspect that 'consciousness' – which I'm currently (loosely/casually) modeling as 'being capable of telling stories' – and the fact that we have it, make thinking about and discussing 'qualia' more difficult than the topic will, I suspect, turn out to be.

Maybe we will have qualiometers one day, and maybe we will abandon the very idea of qualia. But maybe we won't, so we have no reason to treat qualia as poison now.

What I'm (kinda) 'treating as poison' is the seemingly common view that 'qualia' cannot be explained at all, i.e. that it's inherently and inescapably 'subjective'. It sure seems like at least some people – tho maybe not yourself – are either in the process of 'retreating', or have adopted a 'posture' whereby they're constantly 'ready to retreat', from the 'advances' of 'objective investigation', and then claim that only the 'leftover' parts of 'qualia' that remain 'subjective' are the 'real qualia'.

Where I agree and disagree with Eliezer

I'm not sure what you mean by "how high the relative tech capabilities are of the AGI".

I think the general capabilities of the AGI itself, not "tech" capabilities specifically, are plenty dangerous themselves.

The general danger seems more like 'a really powerful but unaligned optimizer' that's 'let loose'.

I'm not sure that 'agent-ness' is necessary for catastrophe; just 'strong enough optimization' and a lack of our own capability in predicting the consequences of running the AGI.

I do agree with this:

It's not supposed to be a self-contained cinematic universe, it's supposed to be "we have little/no reason to expect it to not be at least this weird", according to his background assumptions (which he almost always goes into more detail on anyway).

Where I agree and disagree with Eliezer

But specific problems in P can still be 'too hard' to solve practically.
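
To make that concrete with some illustrative numbers of my own (not from the post): an algorithm whose running time is $n^{20}$ is technically in P, but for an input of size $n = 10^3$ it needs

$$n^{20} = 10^{60} \text{ steps} \approx 3 \times 10^{34} \text{ years at } 10^{18} \text{ operations per second},$$

so 'polynomial' alone doesn't guarantee tractability.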

Where I agree and disagree with Eliezer

I agree that "verification is much, much easier than generation".

But I don't agree that verification is generally 'easy enough'.
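
As a toy illustration of that asymmetry (my own Python sketch, with made-up names; nothing from either post): for subset-sum, checking a proposed certificate is linear in its size, while producing one by brute force is exponential in the number of items. Verification really is much easier than generation here, but 'easier' still doesn't mean cheap or reliable enough once the thing to be verified is an open-ended plan rather than a crisp certificate.

```python
# Toy subset-sum example: verification is cheap, generation is expensive.
# (Illustrative only; names and numbers are made up.)
from itertools import combinations
from typing import Optional, Sequence, Tuple


def verify(candidate: Sequence[int], items: Sequence[int], target: int) -> bool:
    """Linear-time check: candidate must be drawn from `items`
    (respecting multiplicity) and must sum to `target`."""
    pool = list(items)
    for x in candidate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(candidate) == target


def generate(items: Sequence[int], target: int) -> Optional[Tuple[int, ...]]:
    """Brute-force search over all 2^n subsets -- exponential-time 'generation'."""
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(subset) == target:
                return subset
    return None


if __name__ == "__main__":
    items, target = [3, 34, 4, 12, 5, 2], 9
    solution = generate(items, target)  # slow: exhaustive search
    if solution is not None:
        print(solution, verify(solution, items, target))  # fast: linear check
```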

Where I agree and disagree with Eliezer

I've long interpreted Eliezer, in terms of your disagreements [2-6], as offering deliberately exaggerated examples.

I do think you might be right about this [from disagreement 2]:

By the time we have AI systems that can overpower humans decisively with nanotech, we have other AI systems that will either kill humans in more boring ways or else radically advance the state of human R&D.

I do like your points overall for disagreements [1] and [2].

I feel like there's still something being 'lost in translation'. When I think of the Eliezer-AGI and why it's an existential risk, I think that it would be able to exploit a bunch of 'profitable capability boosting opportunities'. I agree that a roughly minimal 'AGI' probably/might very well not be able to do that. Some possible AGI can tho. But you're also right that there are other risks, possibly also existential, that we should expect to face before Eliezer's specific 'movie plots' would be possible.

But then I also think the specific 'movie plots' are beside the point.

If you're right that some other AI system mega+-kills humans – that then is the "nanotech" to fear. If it's a foregone conclusion, it's not that much better in the cases where it takes the AI, e.g., several years to kill us all versus 'instantaneously'.

I also have a feeling that:

  1. Some (minimal) 'AGI' is very possible, e.g. in the next five (5) years.
  2. The gap between 'very disruptive' and 'game over' might be very small.

I guess I disagree with your disagreement [7]. I think partly because:

AI systems that can meaningfully accelerate progress by generating ideas, recognizing problems for those ideas, and proposing modifications to proposals, etc.

might be AI systems that are "catastrophically dangerous" because of the above.

I think maybe one disagreement I have with both you and Eliezer is that I don't think an AI system needs to be 'adversarial' to be catastrophically dangerous. A sufficiently critical feature missing from the training data might be enough for it to generate, e.g., an idea that can apparently be reasonably verified as aligned and yet leads to catastrophe.

I am very happy that you're asking for more details about a lot of Eliezer's intuitions. That seems likely to be helpful even if they're wrong.

I'm skeptical of your disagreement [19]. Is it in fact the case that we currently have good enough abilities at verifying, e.g. ideas, problems, and proposals? I don't feel like that's the case; definitely not obviously so.

I think I've updated towards your disagreements [18] and [22], especially because we're probably selecting for understandable AIs to at least some degree. It seems like people are already explicitly developing AI systems to generate 'super human' human-like behavior. Some AI systems probably are, and will continue to be, arbitrarily 'alien' tho.

For your disagreement [23], I'd really like to read some specific details about how that could work – AIs reasoning about each other's code.

Overall, I think you made some good/great points and I've updated towards 'hope' a little. My 'take on your take (on Eliezer's takes)' is that I don't know what to think really, but I'm glad that you're both writing these posts!

What's the "This AI is of moral concern." fire alarm?

There's not supposed to be a good explanation of qualia

That's a perfect example of why it seems sensible to "reject the whole topic". That's just picking 'worship' instead of 'explain'.

Yes, I defy the assumption that qualia are "supposed to be subjective". I would expect 'having surgery under anesthesia or not' to not be entirely subjective.

How do you know, if you don't know what "qualia" means?

What do you mean by "know"?

I think that what other people mean when they say or write 'qualia' is something like 'subjective experience'.

I think 'having qualia' is the same thing as 'sentience' and I think 'sentience' is (roughly) 'being a thing about which a story could be told'. The more complex the story that can be told, the more 'sentience' a thing has. Photons/rocks/etc. have simple stories and basically no sentience. More complex things with more complex sensation/perception/cognition have more complex stories, up to (at least) us, where our stories can themselves contain stories.

Maybe what's missing from my loose/casual/extremely-approximate 'definition' of 'sentience' is 'perspective'. Maybe what's missing is something like this: a being with qualia/sentience is 'a thing about which a story could be told – from its own perspective', i.e. a 'thing with a first-person story'.

'Subjective experience' then is just the type of the elements, the 'raw material', from which such a story could be constructed.

For a person/animal/thing with qualia/sentience:

  1. Having surgery performed on them with anesthesia would result in a story like "I fell asleep on the operating table. Then I woke up, in pain, in a recovery room."
  2. Having surgery without anesthesia would (or could) result in a story like "I didn't fall asleep on the operating table. I was trapped in my body for hours and felt all of the pain of every part of the surgery. ..."

I don't think there's any good reason to expect that we won't – at least someday – be able to observe 'subjective experiences' (thus 'sentience'), tho the work of 'translating' between the experiences of 'aliens' and the experiences of people (humans) can be arbitrarily difficult. (Even translating between or among humans is often extremely difficult!)

In this extremely-rough model then, 'consciousness' is the additional property of 'being able to tell a story oneself'.

What's the information value of government hearings?

This seems like sensible 'meta advice'; thanks!

I'm not sure that listening to/watching the hearings themselves, in this or other cases, would be of sufficient 'info profit' to me to justify not just 'triangulating' on the info/evidence I pick up thru my 'secondary sources'.

I would have been surprised had the 'primary sources' NOT seemed meaningful! I think they're optimizing for meaningfulness! But I think the means by which they're doing that is crafting a Narrative, which I do distrust. (I expect reality to be generally much messier than a relatively simple story, especially any Morality Play.)

I think a big part of my judging the 'primary source material' not being of sufficient 'info profit' (for me) is that there's so much of it. It also doesn't seem like the kind of info that can be easily, and 'representatively', 'sampled'. I'm sure I'd learn of lots of (supposed) details were I to listen to "a random sample of about 45 minutes" of something like this. But I wouldn't expect to be able to update my beliefs in the right (true) direction. But maybe that wouldn't matter if I was also still 'triangulating' overall.

I'm definitely open to some info from 'secondary sources' about this kind of thing. I've already revised my beliefs a good bit from that kind of thing.

[Link] "The madness of reduced medical diagnostics" by Dynomight

I don't disagree but I think I think (ha) of this kind of thing more along the lines of 'needing to prepare for' being "more emotional than usual" or preparing to handle "triggers" or other circumstances in which we'd expect our reasoning to be less rational than usual/ideal.

The time to steel oneself to handle 'difficult reasoning scenarios' is before they occur, i.e. before we feel "more emotional than usual" or are 'triggered'.

In my case, I'm long since 'over' trusting doctors blindly. I've been practicing asking them about, e.g., base rates and the strength of research evidence. What I liked about this post is that it made me realize/recognize that I don't need to also avoid medical diagnostics – I can just 'ignore' the doctor's 'default algorithm' output instead!

As you pointed out in your original comment, it still might be sensible/reasonable/best to avoid some situations. I just find knowing that I have the option to just 'not do something stupid' – or to not let someone else make a stupid decision for me – to be a very helpful frame.

What's the information value of government hearings?

Do you find, even just personally, that there's a particularly high 'info profit' from getting access to "framing" by watching government hearings?

'Framing' – assuming I understand what you mean by that precisely enough – doesn't seem like the kind of thing that I feel like I'm missing before these hearings start. The framing is, AFAICT, always 'pre-released'.
