AprilSR


AprilSR's Comments

The Critical COVID-19 Infections Are About To Occur: It's Time To Stay Home [crosspost]

If you trust public authorities so highly, why are you even on this website? Being willing to question authority when necessary (and, hopefully, doing it better than most) is one of the primary goals of this community.

More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them

The way society currently works, this can't happen, but it's a good insight into what an actually competent civilization would do.

Edit: After reading ChristianKl's comment, I'm realizing I was focusing overmuch on the US. Other countries might be able to manage it.

The Epistemology of AI risk

People seem to be blurring the difference between "The human race will probably survive the creation of a superintelligent AI" and "This isn't even something worth being concerned about." Based on a quick Google search, Zuckerberg denies that there's even a chance of existential risk here, whereas I'm fairly certain Hanson thinks there's at least some.

I think it's fairly clear that most skeptics who have engaged with the arguments to any extent at all are closer to the "probably survive" part of the spectrum than the "not worth being concerned about" part.

What long term good futures are possible. (Other than FAI)?

I have no comment on how plausible either of these scenarios is. I'm only making the observation that long-term good futures not featuring friendly AI require some other mechanism preventing UFAI from happening: either SAI in general would have to be implausible to create at all, or some powerful actor such as a government or a limited AI would have to prevent it.

Hazard's Shortform Feed

I think people usually just use "the number is the root of this polynomial" in and of itself to describe them, which indeed covers more numbers than radicals do. There probably are more roundabout ways to do it, though.
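A concrete instance of the "more than radicals" part (a standard textbook example, not from the original comment): the real root of $x^5 - x - 1 = 0$ is an algebraic number by definition, but the polynomial's Galois group over the rationals is $S_5$, which is not solvable, so by Galois theory that root cannot be written in radicals. "Root of this polynomial" therefore describes strictly more numbers than radical expressions can.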

What long term good futures are possible. (Other than FAI)?

Given that SAI is possible, regulation on AI is necessary to prevent people from making a UFAI. Alternatively, an SAI which is not fully aligned but which has no goals directly conflicting with ours might be used to prevent the creation of UFAI.

ozziegooen's Shortform

If you have epistemic terminal values, then it would not be a positive-expected-value trade, would it? Unless "expected value" refers to the expected value of something other than your utility function, in which case that should have been specified.

ozziegooen's Shortform

Doesn't being willing to accept a trade *directly follow* from the expected value of the trade being positive? Isn't that, like, the *definition* of when you should be willing to accept a trade? The only disagreement would be over how likely it is that losses of knowledge or epistemics are involved in positive-value trades. (My guess is that it does happen, but rarely.)
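A minimal sketch of the definitional point in symbols (my notation, not the thread's; take $U$ to be the agent's full utility function, with any epistemic terminal values already included): accept the trade iff $\mathbb{E}[U \mid \text{accept}] > \mathbb{E}[U \mid \text{decline}]$. On that reading, disvalue from lost knowledge is already priced into $U$, and the only remaining disagreement is the empirical one about how often trades that clear this bar still involve epistemic losses.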

Solution to the free will homework problem

Eh. The next question to ask is going to depend entirely upon context. I feel like most of the time people use it in practice, they're talking about the extent of their capabilities, where whether you were able to want something is irrelevant. There are other cases, though.

Solution to the free will homework problem

I think when people say "Could I have done X?" we can usually interpret it as if they said "Could I have done X, had I wanted to?"
