shrink

Rationality and intelligence are not precisely the same thing. You could pick, e.g., anti-vaccination campaigners who have measured IQs over 120, put them in a room, and call that a very intelligent community that can discuss a variety of topics besides vaccines. Then you will get some less insane people who are interested in the safety of vaccines coming in and getting terribly misinformed, which is just not a good thing. You can do that with almost any belief, especially using the internet, which lets you draw the cases from a pool of a billion or so.
Implied in your so-called 'question' is the statement that you deem every online community you know of (I shouldn't assume you know of zero other communities, right?) less rational than LessWrong. I would say LessWrong is substantially less rational than average, i.e. if you pick a community at random, it is typically more rational than LessWrong. You can choose any place better than average - physicsforums, gamedev.net, stackexchange, arstechnica observatory, and so on; those are all more rational than LW. But of course, implied in your question is that you won't accept this answer. LW is rather interested in AI, and the talk about AI here is significantly less...
Instrumental rationality, i.e. "winning"? Lots...
Precisely.
Epistemic rationality? None...
I'm not sure it got that either. It's more like medieval theology / scholasticism. There are questions you think you need answered; you can't answer them now with logical thought, so you use an empty cargo-cult imitation of reasonable thought. How rational is that? Not rational at all. Wei_Dai is here because he was concerned with AI, and he calls this community rational because he sees concern with AI as rational and needs confirmation. It is a neatly circular system: if concern with AI is rational, then every community that is rational must be concerned with AI, and then the communities that are not concerned with AI are less rational.
It was definitely important to make animals come, or to make it rain, tens of thousands of years ago. I'm getting the feeling that as I tell you your rain-making method doesn't work, you aren't going to give up trying unless I provide you with an airplane, a supply of silver iodide, flight training, a runway, fuel, and so on (and even then the method will only be applicable on some days, while praying for rain is applicable any time).
As for the best guess: if you suddenly need a best guess on a topic because someone told you of something and you couldn't really see a major flaw in the vague reasoning...
I think you have a somewhat simplistic idea of justice... there is "voluntary manslaughter", there is "gross negligence", and so on. I think SIAI falls under the latter category.
How are they worse than any scientist fighting for a grant based on shaky evidence?
Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold would, if held honestly, result in massive losses of resources, such as moving to a cheaper country to save money, etc. I dread to imagine what would happen to me if I honestly were this mistaken about AI. Erroneous beliefs damage you.
The lying is about having two sets of incompatible beliefs that are picked...
You are declaring everything gray here so that, verbally, everything is equal.
There are people with no knowledge of physics and no inventions to their name whose first 'invention' is a perpetual motion device. You really don't see anything dishonest about holding an unfounded belief that you're this smart? You really see nothing dishonest about accepting money under this premise without doing due diligence, such as trying yourself on something testable, even if you think you're this smart?
There are scientists who are trying very hard to follow processes that are not prone to error, people trying to come up with ways to test their beliefs. Do you really see them as all...
That's how religions were created, you know - they could not actually answer why lightning thunders, why the sun moves through the sky, etc. So they looked way 'beyond' non-faulty reasoning in search of answers now (being impatient), and got answers that were much, much worse than no answers at all. I feel LW is doing precisely the same thing with AIs. Ultimately, when you can't compute the right answer in the given time, you will either have no answer or compute a wrong one.
On the orthogonality thesis: it is the case that you can't answer this question given limited knowledge and time (you've got to know the AI's architecture first),...
Did they make a living out of those beliefs?
See, what we have here is a belief cluster that makes the belief-generator feel very good (saving the world, the other smart people being less smart, etc.) and pays his bills. That is awfully convenient for a reasoning error. I'm not saying it is entirely impossible to have a serendipitously useful reasoning error, but it doesn't seem likely.
edit: note, I'm not speaking about some inconsequential honesty in idle thought, or anything similarly philosophical. I'm speaking of not exploiting others for money. There's nothing circular about the notion that an honest person would not talk a friend into paying him upfront to...
Would you take criticism if it is not 'positive' and doesn't give you an alternative method for talking about the same topic? Faulty reasoning has an unlimited domain of application - you can 'reason' about the purpose of the universe, the number of angels that fit on the tip of a pin, what superintelligences would do, etc. In those areas, non-faulty reasoning cannot compete in terms of providing a certain pleasure from reasoning, or in terms of interesting-sounding 'results' that can be obtained with little effort and knowledge.
You can reason about what a particular cognitive architecture can do on a given task given N operations; you can reason about what the best computational...
If you want to maximize your win, it is a relevant answer.
For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence - in a non-cherry-picked manner - and takes a long time. If you want an easier estimate right now, you could try to estimate how privileged the hypothesis that there is a risk actually is. (There is no method that would let you calculate the wave from the spin-down and collision of orbiting black holes without spending a lot of time studying GR, applied mathematics, and computer science. Why do...