AprilSR

Comments

"Who I am" is an axiom.

If each human makes predictions as if they are a randomly chosen human, then the predictions of humanity as a whole will be well-calibrated. That seems, to me, to be a property we shouldn't ignore.

I think the Doomsday problem mostly fails because it's screened off by other evidence. Yeah, we should update towards being around the middle of humanity by population, but we can still observe the world and make predictions, based on what we actually see, about how long humanity is likely to last.

To re-use the initial framing, think about what strategy would produce the best prediction results for humanity as a whole on the Doomsday problem. Taking into account a lot of evidence other than just "I am probably near the middle of humanity" will produce way better discrimination, if not necessarily much better calibration.

(Of course, most people have historically been terrible at identifying when Doomsday will happen.)
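To make the calibration claim above concrete, here's a minimal simulation sketch (my own illustration, with an arbitrary prior over population sizes, not anything from the original posts): each person at birth rank k claims with 95% confidence that the total number of humans will be at most 20k. Whatever the true totals turn out to be, about 95% of the people who ever live end up being right.

```python
# Sketch (illustrative only): self-sampling predictions are well-calibrated
# when pooled over everyone who ever lives. Each person at birth rank k claims,
# with 95% confidence, that the total number of humans n satisfies n <= 20 * k.
import math
import random

random.seed(0)
hits = 0
people = 0
for _ in range(1000):                      # many hypothetical "worlds"
    n = random.randint(1, 1_000_000)       # true total population of this world
    # Ranks k with n <= 20*k, i.e. k >= n/20, are the people whose claim is true.
    correct = n - math.ceil(n / 20) + 1
    hits += correct
    people += n

print(f"Fraction of 95%-confidence claims that were true: {hits / people:.3f}")
# Prints roughly 0.95 no matter how the totals were distributed.
```

The discrimination point shows up here too: the bound 20k is extremely loose for almost everyone, which is exactly why bringing in other evidence helps.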

I'm from a parallel Earth with much higher coordination: AMA

I mean, surely Eliezer is going to have somewhat dath-ilan-typical preferences, having grown up there.

What (feasible) augmented senses would be useful or interesting?

I feel like this would require brain surgery beyond what is realistic without major technological development, possibly post-AGI. Although the brain can reinterpret sensory information well, it seems to do this only with pre-existing neural structures. (I'm unsure to what degree this applies to new colors.) I'm no neuroscience expert, though.

Micromorts vs. Life Expectancy

There are a lot of young people who have not yet reached the point in their lives where their micromort count rises dramatically. It doesn't matter that the expected lifetime average per person looks off: our calculation doesn't include the risk those young people will face when they are older, which will probably be much higher than it is now.
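A rough back-of-the-envelope version of this (the numbers are mine and purely illustrative, not from the post): since death is certain, a full lifetime sums to about a million micromorts, but a snapshot of today's population, most of whom are still in their low-risk years, averages far less per day.

```python
# Illustrative arithmetic only; the rates and proportions below are made up.
LIFETIME_MICROMORTS = 1_000_000        # death is certain, so a lifetime sums to ~10^6
LIFESPAN_DAYS = 80 * 365               # ~29,200 days
print(f"Lifetime average: ~{LIFETIME_MICROMORTS / LIFESPAN_DAYS:.0f} micromorts/day")

# Hypothetical snapshot of the people alive today:
young_rate, old_rate = 5, 60           # micromorts/day (illustrative)
young_fraction = 0.75                  # most of today's population is in the low-risk band
snapshot = young_fraction * young_rate + (1 - young_fraction) * old_rate
print(f"Today's population average: ~{snapshot:.0f} micromorts/day")
# The snapshot (~19/day) sits well below the lifetime figure (~34/day) because
# the young people's future, much higher risk isn't being counted yet.
```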

How is Cryo different from Pascal's Mugging?

Cryonics may be low probability, but it's certainly not vanishingly small in the way a Pascal's mugging probability is.

NaiveTortoise's Short Form Feed

The problem with this is that it is very difficult to figure out what counts as a legitimate proof. What level of rigor is required, exactly? Are they allowed to memorize a proof beforehand? If not, how much are they allowed to know?

The Critical COVID-19 Infections Are About To Occur: It's Time To Stay Home [crosspost]

If you trust public authorities so highly, why are you even on this website? Being willing to question authority when necessary (and, hopefully, doing it better than most) is one of the primary goals of this community.

More Dakka for Coronavirus: We need immediate human trials of many vaccine-candidates and simultaneous manufacturing of all of them

The way society currently works, this can't happen, but it's a good insight into what an actually competent civilization would do.

Edit: After reading ChristianKl's comment, I'm realizing I was focusing overmuch on the US. Other countries might be able to manage it.

The Epistemology of AI risk

People seem to be blurring the difference between "The human race will probably survive the creation of a superintelligent AI" and "This isn't even something worth being concerned about." Based on a quick Google search, Zuckerberg denies that there's even a chance of existential risk here, whereas I'm fairly certain Hanson thinks there's at least some.

I think it's fairly clear that most skeptics who have engaged with the arguments to any extent at all are closer to the "probably survive" part of the spectrum than the "not worth being concerned about" part.

What long term good futures are possible. (Other than FAI)?

I have no comment on how plausible either of these scenarios is. I'm only observing that long-term good futures not featuring friendly AI require some other mechanism preventing UFAI from happening: either SAI in general would have to be implausible to create at all, or some powerful actor, such as a government or a limited AI, would have to prevent it.
