Comments

Thank you.

I'd like to note that the "opt out of Petrov Day" checkbox has not been removed from my user settings, contrary to what I expected based on your previous post.

Most people in the Bay Area rationalist community are not financially constrained when it comes to classes, books, and intelligence-enhancing drugs.

I’ve never met someone who told me they didn’t take modafinil because they couldn’t afford it.

Oh, I support increasing the karma cutoff.

I do think that running such an exercise is valuable, if only because it allows us to learn things about our community.

Opted out. Also, IQ?

Edit: ah, I get it now.

It’s an exercise in collective adequacy.

I like this post. We should be able to practice collective adequacy without public shaming. In the real world, policies that rely on public shaming are imprecise, often cruel, and frequently ineffective.

It seems that it would actually be quite hard for M to hack D. M would first have to emulate P to figure out which sensor states its actions are likely to produce, and then figure out how the resulting world-state is likely to be perceived. This requires more than emulating a human, though, since even humans can't go directly from sensor states to favorability judgments. M would probably also need to emulate D so that it can output text descriptions, and then select among those descriptions using an emulated human.

A corollary to this is that it should be possible to find mistakes in the work of almost everyone.

It is probably good practice to do this.

It takes a certain degree of maturity and thought to see that a lot of advice from high-profile advice-givers is bad.

It could be valuable to write point-by-point critiques of popular advice from high-profile technologists.
