All of Confusion's Comments + Replies

Is Stupidity Expanding? Some Hypotheses.

I expect B8 is the major factor. Before social media, if you had a bad idea and two of your five close friends told you they didn’t think it was a good idea, you’d drop it. Now five random ‘friends’ will tell you how insightful you are and how blind everyone else is. You’ve publicly stated your belief in the idea and got social proof. That makes it that much harder to drop.

People individually don’t have more bad ideas than before, but there is much more selection pressure in favor of them.

Open question: are minimal circuits daemon-free?

We want to show that given any daemon, there is a smaller circuit that solves the problem.

Given an arbitrary circuit, you cannot, in general, show whether it is the smallest circuit that produces the output it does. That's just Rice's theorem, right? So why would it be possible for a daemon?

Jalex Stark (2y, 2 points): Rice's theorem applies if you replace "circuit" with "Turing machine". The circuit version can be resolved with a finite brute-force search.
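Jalex Stark's point can be made concrete: a circuit over finitely many inputs computes a finite truth table, and there are only finitely many circuits of each size, so "is this the smallest circuit for this function?" is decidable by exhaustive search (unlike the Turing-machine case, where Rice's theorem blocks it). A toy sketch, under assumptions of my own: NAND-only gates, each gate reading two earlier wires, output taken from the last wire.

```python
from itertools import product

def eval_circuit(gates, inputs):
    """Wires are the inputs followed by gate outputs; each gate (i, j)
    is the NAND of wires i and j. Output is the last wire."""
    wires = list(inputs)
    for i, j in gates:
        wires.append(1 - (wires[i] & wires[j]))
    return wires[-1]

def truth_table(gates, n):
    """The full truth table over all 2^n input assignments."""
    return tuple(eval_circuit(gates, bits) for bits in product((0, 1), repeat=n))

def min_gate_count(target_gates, n):
    """Brute-force the fewest NAND gates computing the same function.
    Finite search: try every circuit with k gates for k = 0, 1, 2, ..."""
    target = truth_table(target_gates, n)
    for k in range(len(target_gates) + 1):
        # each of the k gates may NAND any two of the wires built so far
        wire_choices = [list(product(range(n + g), repeat=2)) for g in range(k)]
        for gates in product(*wire_choices):
            if truth_table(list(gates), n) == target:
                return k
    return len(target_gates)

# A redundant 3-gate circuit that just computes NAND(a, b):
# wire2 = NAND(a, b); wire3 = NOT wire2; wire4 = NOT wire3 = NAND(a, b).
redundant = [(0, 1), (2, 2), (3, 3)]
print(min_gate_count(redundant, 2))  # a single NAND gate suffices -> 1
```

The search is doubly exponential in circuit size, so this only illustrates decidability, not a practical procedure; the open question in the post is about what such minimal circuits can or cannot contain, not about finding them.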

But within every apple eater lies that hunger -- that deep abyssal craving -- for apples

Which an apple eater could perfectly satisfy without apples, if only they weren’t told ad nauseam that satisfying their cravings without apples is inferior and shameful, that apples are really much better and that they should strive for eating apples. This is a cultural problem, not a natural one.

Sleeping Beauty Resolved?

Don't worry about not being able to convince Lubos Motl. His prior for being correct is way too high and impedes his ability to consider dissenting views seriously.

RFC: Philosophical Conservatism in AI Alignment Research

Given that there are friendly human intelligences, what would have to be true about the universe in order for friendly AGIs to be impossible?

Paperclip Minimizer (3y, 3 points): If the orthogonality thesis is incorrect and the reason (some) humans are friendly is that they are not intelligent enough (as Nick Land argues here [http://www.xenosystems.net/against-orthogonality/]), then friendly AGIs would be impossible. I think the arguments for this position are really badly argued, but this is still a good reductio of gworley's philosophical conservatism.

Terrorism, Tylenol, and dangerous information

A list that is probably vastly incomplete. It seems very likely that there have been vehicle attacks for as long as vehicles have existed. What would be the odds of no one in the past 100 years, no angry spouse, disgruntled ex-employee, or lunatic, having thought of taking revenge on the cruel world by ramming a vehicle into people? Wouldn't a prior on the order of at least one such event per 1 million vehicles per year be more likely to yield correct predictions than 0, for events before, say, the year 2005?

Intellectual Hipsters and Meta-Contrarianism

In that triad the meta-contrarian is broadening the scope of the discussion. They address what actually matters, but that doesn't change that the contrarian is correct (well, a better contrarian would point out that the number of deaths due to Ebola is far less than any of those examples, and that Ebola doesn't seem a likely candidate to evolve into something causing an epidemic) and that the meta-contrarian has basically changed the subject.

"Taking AI Risk Seriously" (thoughts by Critch)

Suppose the Manhattan Project was currently in progress, meaning we somehow had the internet, mobile phones, etc. but not nuclear bombs. You are a smart physicist who keeps up with progress in many areas of physics, and at some point you realize the possibility of a nuclear bomb. You also foresee the existential risk this poses.

You manage to convince a small group of people of this, but many people are skeptical and point out the technical hurdles that would need to be overcome, and political decisions that would need to be taken, for the existential risk ... (read more)

Raemon (3y, 5 points): I think this sentence actually contains my own answer, basically. I didn't say "invest three years of your life in AI safety research." (I realize looking back that I didn't clearly *not* say that, so this misunderstanding is on me and I'll consider rewriting that section.) What I meant to say was:

* Get three years of runway (note: this does not mean quitting your job for three years; it means having 3 years of runway so you can quit your job for 1 or 2 years before starting to feel antsy about not having enough money)
* Quit your job or arrange your life such that you have time to think clearly
* Figure out what's going on (this involves keeping up on industry trends and understanding them well enough to know what they mean, keeping up on AI safety community discourse, and following relevant bits of politics in government, corporations, etc.)
* Figure out what to do (including what skills you need to gain in order to be able to do it)
* Do it

I.e., the first step is to become not clueless. And then step 2 depends a lot on your existing skillset. I specifically am not saying to go into AI safety research (although I realize it may have looked that way). I'm asserting that some minimum threshold of technical literacy is necessary to make serious contributions in any domain. Do you want to persuade powerful people to help? You'll need to know what you're talking about. Do you want to direct funding to the right places? You need to understand what's going on well enough to know what needs funding. Do you want to just be a cog in an organization where you mostly work like a normal person but are helping move progress forward? You'll need to know what's going on well enough to pick an organization where you'll be a marginally beneficial cog.

The question isn't "what is the optimal thing for AI risk people collectively to do". It's "what is the optimal thing for you in particular to do, given that the AI risk community e