I shall discuss many concepts, later in the book, of a similar nature to these. They are puzzling if you try to understand them concretely, but they lose their mystery when you relax, stop worrying about what they are, and use the abstract method.
Timothy Gowers in Mathematics: A Very Short Introduction, p. 34
How many people have been or still are worried about the basilisk matters more than whether people disagree with how it has been handled. It is possible to be worried and yet disagree with how it was handled, if you expect that maintaining silence about its perceived danger would have exposed fewer people to it.
In any case, I expect LessWrong to be smart enough to dismiss the basilisk in a survey, so as not to look foolish for taking it seriously. So any such question would be of little value unless you take measures to make sure that people ...
Please rot13 the part from “potentially” onwards, and add a warning as in this comment (with “decode the rot-13'd part” instead of “follow the links”), because there are people here who've said they don't want to know about that thing.
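For reference, rot13 is a simple letter-substitution cipher that shifts each letter 13 places in the alphabet, so applying it twice restores the original text. A minimal sketch in Python (using only the standard library; the example string is hypothetical):

```python
import codecs

def rot13(text: str) -> str:
    # Shift each ASCII letter 13 places, wrapping within the alphabet;
    # non-letters (spaces, punctuation) pass through unchanged.
    return codecs.encode(text, "rot13")

spoiler = rot13("potentially dangerous idea")
print(spoiler)         # cbgragvnyyl qnatrebhf vqrn
print(rot13(spoiler))  # applying rot13 again recovers the original
```

Because rot13 is its own inverse, the same function both encodes spoilers and decodes them, which is why it is the conventional spoiler-masking scheme on forums.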
I can't believe you missed the chance to say, "Taboo pirates and ninjas."
"Pirates versus Ninjas is the Mind-Killer"
“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”
Now I can't get this image out of my head of Eliezer singing 'I am the very model of a singularitarian'...
The primary issue with the Roko matter wasn't so much what an AI might actually do, but that the relevant memes could cause some degree of stress in neurotic individuals.
The original reasons given:
Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient de
What if asking what the sum of 1+1 is causes the Oracle to devote as many resources as possible to looking for an inconsistency arising from the Peano axioms?
If the Oracle we are talking about were specifically designed to do that, for the sake of the thought experiment, then yes. But I don't see that it would make sense to build such a device, or that it is very likely to be possible at all.
If Apple was going to build an Oracle it would anticipate that other people would also want to ask it questions. Therefore it can't just waste all resources on loo...
I am not sure what exactly you mean by "safe" questions. Safe in what respect? Safe in the sense that humans can't do something stupid with the answer, or in the sense that the Oracle isn't going to consume the whole universe to answer the question? Well... I guess asking it to solve 1+1 could hardly lead to dangerous knowledge, and also that it would be incredibly stupid to build something that takes over the universe to make sure that its answer is correct.
We have tried to discuss topics like race and gender many times, and always failed.
The overall level of rationality of a community should be measured by their ability to have a sane and productive debate on those topics, and on politics in general.
Sure, agreed. But it doesn't follow that a community that desires to be rational should therefore engage in debates on those topics (and on politics in general) when it has low confidence that it can do so in a sane and productive way.
So, did anyone actually save Roko's comments before the mass deletion?
Google Reader fetches every post and comment that is being made on lesswrong. Editing or deleting won't remove it. All comments and posts that have ever been made are still there, saved by Google. You just have to add the right RSS feeds to Google Reader.
None of the simulation projects have gotten very far... this looks to me like it is a very long way out, probably hundreds of years.
Couldn't you say the same about AGI projects? It seems to me that one of the reasons that some people are being relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.
In statements posted on the Internet, the ITS expresses particular hostility towards nanotechnology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by 'the system'.
What do you do if you really believe that someone's research has a substantial chance of destroying the world?
Go batshit crazy.
...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?
One example would be the policy not to talk about politics. Authoritarian regimes usually employ that policy, most just fail to frame it as rationality.
What he's talking about is knowledge that's objectively harmful for someone to have.
Someone should make a list of knowledge that is objectively harmful. Could come in handy if you want to avoid running into it accidentally. Or we just ban the medium that is used to spread it, in this case natural language.
No one is seriously disputing where the boundary between basilisk and non-basilisk lies...
This assumes that everyone knows where the boundary lies. The original post by Manfred either crossed the boundary or it didn't. In the case that it didn't, it only serves as a warning sign of where not to go. In the case that it did, how is your knowledge of the boundary not a case of hindsight bias?
...before exposing the public to something that you know that a lot of people believe to be dangerous.
The pieces of the puzzle that Manfred put together can all be found on lesswrong. What do you suggest, that research into game and decision theory be banned?
I obviously think it's safe.
Be careful trusting Manfred; he is known to have destroyed the Earth on at least one previous occasion.
Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't get into more details here. For hints, see Iain M. Banks' novel Surface Detail on why you might want to be extremely risk-averse when it comes to the possibility of waking up in a world controlled by posthuman uploads.