
Wanted to say that I enjoyed this and found it much more enlightening than I expected, given that I have no intrinsic interest in dentistry. I would value a large cross-discipline sample of this question set and think it would have been very useful to my younger self. I think the advice millennials were given when considering college degrees and careers was generally unhelpful magical thinking. These practical questions are helpful. I'd be interested in slightly longer-form answers. Are these edited, or was this interviewee laconic?

Yes, the notion of being superseded does disturb me. Not in principle, but pragmatically. I read your point, broadly, to be that there are a lot of interesting potential non-depressing outcomes to AI, up to advocating for a level of comfort with the idea of getting replaced by something "better" and bigger than ourselves. I generally agree with this! However, I'm less sanguine than you that AI will "replicate" to evolve consciousness that leads to one of these non-depressing outcomes. There's no guarantee we get to be subsumed, cyborged, or even superseded. The default outcome is that we get erased by an unconscious machine that tiles the universe with smiley faces and keeps that as its value function until heat death. Or it's at least a very plausible outcome we need to react to. So caring about the points you noted you care about, in my view, translates to caring about alignment and control.

"For example, if I were making replicators, I'd ensure they were faithful replicators "

Isn't this the whole danger of unaligned AI? It's intelligent, it "replicates" and it doesn't do what you want.

Besides the physics-breaking step 6, I think the only tenuous link in the chain is 5: that AI ("replicators") will want to convert everything to computronium. But that seems like at least a plausible value function, right? That's basically what we are trying to do ourselves. It's either that or paperclips, I'd expect.

(Note: I applaud your commenting to explain your downvote.)

While I may or may not agree with your more fantastical conclusions, I don't understand the downvotes. The analogy between biological, neural, and AI systems is not new, but it is well presented. I particularly enjoyed the analogy that computronium is "habitable space" to AI. Setting aside the physics-as-we-know-it-breaking steps, which are polemic and not crucial to the argument's point, I'd call on downvoters to be explicit about what they disagree with or find unhelpful.

Speculatively, perhaps at least some find the presentation of AI as the "next stage of evolution" infohazardous. I'd disagree. I think it should start a discussion along the lines of "what we mean by alignment." What's the end state for a human society with "aligned" AI? It probably looks pretty alien to our present society. It probably tends towards deep machine-mediated communication blurring the lines between individuals. I think it's valuable to envision these futures.

Netcentrica, in this letter your explicit opinion is that fiction with a deep treatment of the alignment problem will not be palatable to a wider audience. I think this is not necessarily true. I think compelling fiction is perhaps the prime vector for engaging a wider, naive audience. Even the Hollywood treatment of I, Robot touched on it and was popular. Not deep or nuanced, sure. But it was there. Maybe more intelligent treatments could succeed if produced with talent.

I mostly stopped reading sci-fi after the era of Asimov and Bradbury. I'd be interested in comments on which modern, popular authors have written or produced AI fiction with the most intelligent treatment of the alignment issue (or related issues), to establish a baseline.

Hmm, yeah, I guess that's a good point. I was thinking myopically at a systems level. The post is useful advice for a patient who is willing to do their own research, is confident they can do it thoroughly, and is not afraid to "stare into the abyss," i.e., risk getting freaked out or overwhelmed.

Although, I also wonder if insurance companies might try to exploit a patient's prior decision to decline recommended treatment/tests as a reason to not cover future costs...

I don't disagree with you exactly, but I think the focus on rational decision making misses the context the decisions are being made in. Isn't this just an unaligned incentives problem? When a patient complains of an issue, doctors face exposure to liability if they do not recommend tests to clarify the issue. If the tests indicate something, doctors face liability for not recommending corrective procedures. They generally face less liability for positively recommending tests and procedures, because the risk is quantifiable beforehand and the patient makes the decision. If the patient declines a recommended test, the doctor can't be blamed.

The push to do less testing makes sense in that context. It has to emerge at the level of a movement so that the doctors have safety in numbers.

I am not in healthcare; perhaps this is cynical?

Edit: I see that Gwern already mentioned lawsuits briefly in a comment, but I think the point deserves a lot more focus and obviates "you're not dealing with fully rational agents." I mean, maybe not entirely, but that assumption isn't necessary to get this result.

Thanks for that link! I agree that there is a danger this pitch doesn't get people all the way to x-risk. I think that risk might be worth it, especially if EA notices popular support failing to grow fast enough - i.e., beyond people with obviously related backgrounds and interests. Gathering more popular support for taking small AI-related dangers seriously might move the bigger x-risk problems into the Overton window, whereas right now I think they are very much not in it. Actually, I just realized that this is a great summary of my entire idea: "move the Overton window with softballs before you try to pitch people the fastball."

But also, as you said, that approach does model the problem as a war of attrition. If we really are metaphorically moments from the final battle, hail-mary attempts to recruit powerful allies are the right strategy. The problem is that these two strategies are pretty much mutually exclusive. You can't be labeled a thoughtful, practical policy group with good ideas while also pulling the fire alarms. Maybe the solution is to have two organizations pursuing different strategies, with enough distance between them that the alarmists don't tarnish the reputation of the moderates.

Whoops, apologies, none of the above. I meant to use the adage "you can't wake someone who is pretending to sleep" similarly to the old "It is difficult to make a man understand a thing when his salary depends on not understanding it." A person with vested interests is like a person pretending to sleep. They are predisposed not to acknowledge arguments misaligned with their vested interests, even if they do in reality understand and agree with the logic of those arguments. The most classic form of bias.

I was trying to express that in order to make any impression on such a person you would have to enter the conversation on a vector at least partially aligned with their vested interests, or risk being ignored at best and creating an enemy at worst. Metaphorically, this is like entering into the false "dream" of the person pretending to sleep.

Although I do like ACC, I haven't read any of the Rama series. It sounds like you're asking if I am advocating for a top-down authoritarian society. It's hard to tell what triggered this impression without more detail from you, but possibly it was my mention of creating an "always-good-actor" bot that guards against other unaligned AGIs.

If that's right, please see the update to my post: I make no claim to have good ideas about alignment, and I should have flagged that more clearly. The AGA bot is my best understanding of what Eliezer advocates, but that understanding is very weak and vague, and it doesn't suggest more than extremely general policy ideas.

If you meant something else, please elaborate!
