All of shrink's Comments + Replies

Non-orthogonality implies uncontrollable superintelligence

If you want to maximize your win, it is a relevant answer.

For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence - in a non-cherry-picked manner - and takes a long time. If you want an easier estimate right now, you could try to estimate how privileged the hypothesis that there is a risk actually is. (There is no method that would let you calculate the wave from the spin-down and collision of orbiting black holes without spending a lot of time studying ... (read more)

Thank you for your answer. I don't think the methods you describe are much good for predictions. On the other hand, few methods are much good for predictions anyway. I've already picked up a few online AI courses to get some background; emotionally this has made me feel that AI is likely to be somewhat less powerful than anticipated, but that its motivations are more certain to be more alien than I'd thought. Not sure how much weight to put on these intuitions.
Do people think Less Wrong rationality is parochial?

Rationality and intelligence are not precisely the same thing. You can pick, e.g., those anti-vaccination campaigners who have a measured IQ >120, put them in a room, and call that a very intelligent community, one that can discuss a variety of topics besides vaccines. Then you will get some less insane people who are interested in the safety of vaccines coming in and getting terribly misinformed, which is just not a good thing. You can do that with almost any belief, especially using the internet to draw the cases from a pool of a billion or so.

Can you list some specific examples of irrational thinking patterns that occur on LessWrong but not on those communities? The one guess I can make is that they're all technical-sounding, in which case they might exist in the context of a discipline that has lots of well-defined rules and methods for testing success, and so less "bullshit" gets through because it obviously violates the rules of X-technical-discipline. Is this what you mean, or is it something else entirely?
I see, I had taken your earlier comment (the one I originally replied to) as saying that lesswrong was above average but there were even more rational people elsewhere (otherwise I probably wouldn't have bothered to reply). But since we're already talking, if you actually think it's below average, what are you hoping to accomplish by participating here?
I notice that you didn't actually answer any of my questions. Earlier you said "there's an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on other topics". It seems to me that if that were actually the case, then there would be communities of such people talking about topics they think are interesting, and in a way that is noticeably more rational than typical discussions on LessWrong. If you are right, why can't you give an example, or at least be very interested in trying to create such a community? Note that my question isn't purely rhetorical. If such a community actually exists then I'd like to join, or at least eavesdrop on them.
Non-orthogonality implies uncontrollable superintelligence

It was definitely important to make animals come, or to make it rain, tens of thousands of years ago. I'm getting the feeling that, as I tell you that your rainmaking method doesn't work, you aren't going to give up trying unless I provide you with an airplane, a supply of silver iodide, flight training, a runway, fuel, and so on (and even then the method will only be applicable on some days, while praying for rain is applicable any time).

As for the best guess, if you suddenly need a best guess on a topic because someone told you of something and you couldn't rea... (read more)

Not a relevant answer. You have given me no tools to estimate the risks or lack thereof in AI development. What methods do you use to reach conclusions on these issues? If they are good, I'd like to know them.
Do people think Less Wrong rationality is parochial?

I think you have a somewhat simplistic idea of justice... there is "voluntary manslaughter", there's "gross negligence", and so on. I think SIAI falls under the latter category.

How are they worse than any scientist fighting for a grant based on shaky evidence?

Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold would, when held honestly, result in massive loss of resources, such as moving to a cheaper country to save money, etc., etc. I dread to imagine what would happen to me if I honestly were this... (read more)

Do people think Less Wrong rationality is parochial?

You are declaring everything gray here so that verbally everything is equal.

There are people with no knowledge in physics and no inventions to their name, whose first 'invention' is a perpetual motion device. You really don't see anything dishonest about holding an unfounded belief that you're this smart? You really see nothing dishonest about accepting money under this premise without doing due diligence such as trying yourself at something testable, even if you think you're this smart?

There are scientists who are trying very hard to follow processes th... (read more)

Your hypothetical is a good one. And you are correct: I don't think you are dishonest if you are sincerely trying to build or sell a perpetual motion machine. You're still wrong, and even silly, but not dishonest. I need a word to refer to conscious, knowing deception, and "dishonest" is the most useful word for the purpose. I can't let you use it for some other purpose; I need it where it is.

The argument is not applicable to all criminal conduct. In American criminal law, we pay a lot of attention to the criminal's state of mind. Having the appropriate criminal state of mind is an essential element of many crimes. It's not premeditated murder if you didn't expect the victim to die. It's not burglary if you thought you lived there. It's utterly routine -- and I think morally necessary -- to ask juries "what was the defendant's intention or state of mind". There is a huge moral and practical difference between a conscious and an unconscious criminal. Education much more easily cures the latter, while punishment is comparatively ineffective. For the conscious criminal, the two are reversed: punishment is often appropriate, whereas education has limited benefits.

I don't believe I am giving liars a get-out-of-jail-free card. Ignorance isn't an unlimited defense, and I don't think it is so easy to convince an outside observer (or a jury) that you're ignorant in cases where knowledge would be expected. If you really truly are in a state of pathological ignorance and it's a danger to others, we might lock you up as a precaution, but you wouldn't be criminally liable.

As to scientific ethics: All human processes have a non-zero chance of errors. The scientists I know are pretty cynical about the process. They are fighting to get papers published and they know it. But they do play by the rules -- they won't falsify data or mislead the reader. And they don't want to publish something if they'll be caught out having gotten something badly wrong. As a result, the process
Non-orthogonality implies uncontrollable superintelligence

That's how religions were created, you know - they could not actually answer why lightning thunders, why the sun moves through the sky, etc. So they looked way 'beyond' non-faulty reasoning, in search of answers now (being impatient), and got answers that were much, much worse than no answers at all. I feel LW is doing precisely the same thing with AIs. Ultimately, when you can't compute the right answer in the given time, you will either have no answer or compute a wrong one.

On the orthogonality thesis, it is the case that you can't answer this qu... (read more)

But if the question is possibly important and you have to make a decision now, you have to make a best guess. How do you think we should do that?
Do people think Less Wrong rationality is parochial?

Did they make a living out of those beliefs?

See, what we have here is a belief cluster that makes the belief-generator feel very good (saving the world, the other smart people being less smart, etc., etc.) and pays his bills. That is awfully convenient for a reasoning error. Not saying that it is entirely impossible to have a serendipitously useful reasoning error, but it doesn't seem likely.

edit: note, I'm not speaking about some inconsequential honesty in idle thought, or anything likewise philosophical. I'm speaking of not exploiting others for money. There'... (read more)

It's possible we are just using terms differently. I agree that people are biased by their self-interest. I just don't think that bias is a form of dishonesty. It's a very easy mistake to make, and nearly impossible to prevent. I don't think SIAI is unusually guilty of this or unusually dishonest.

In science, everybody understands that researchers are biased toward believing their own results and toward believing new results that make their existing work more important. Most professional scientists are routinely in the position of explaining to funding agencies why their work is extremely important and needs lots of government grant dollars. Everybody, not just SIAI, has to talk donors into funding their Very Important Work. For government grants and prestigious publications, we try to mitigate the bias by having expert reviewers. We also tolerate a certain amount of slop.

SIAI is cutting out the government and trying to convince the public, directly, to fund their work. It's an unusual strategy, but I don't see that it's dishonest or immoral or even necessarily unwise.
Non-orthogonality implies uncontrollable superintelligence

Would you take criticism if it is not 'positive' and doesn't give you an alternative method for talking about the same topic? Faulty reasoning has an unlimited domain of application - you can 'reason' about the purpose of the universe, the number of angels that fit on the tip of a pin, what superintelligences would do, etc. In those areas, non-faulty reasoning cannot compete in terms of providing a sort of pleasure from reasoning, or in terms of interesting-sounding 'results' that can be obtained with little effort and knowledge.

You can reason what particular cogn... (read more)

I am interested in anything that allows better reasoning about these topics. Mathematics has a somewhat limited use when discussing the orthogonality thesis. AIXI, and some calculations about the strength of optimisation processes and stuff like that. But when answering the question "is it likely that humans will build AIs with certain types of goals", we need to look beyond mathematics. I won't pretend the argument in this post is strong - it's just, to use the technical term, "kinda neat" and I'd never seen it presented this way before. What would you consider reasonable reasoning on questions like the orthogonality thesis in practice?
Non-orthogonality implies uncontrollable superintelligence

There's so much that can go wrong with such reasoning, given that intelligence (even at the size of a galaxy of Dyson spheres) is not a perfect God, as to render such arguments irrelevant and entirely worthless. Furthermore, there are enough ways the non-orthogonality could hold that are not covered by 'converges', such as almost all intelligences with wrong moral systems crashing or failing to improve.

meta: The tendency to talk seriously about the products of very bad reasoning really puts an upper bracket on the sanity of newcomers to LW. As does the idea that a very bad argument trumps authority (when it comes to the whole topic).

What type of reasoning would you prefer to be used when talking about superintelligences?
(Almost) every moral theory can be represented by a utility function

You can represent any form of agency with a utility function that is 0 for doing what the agency does not want to do, and 1 for doing what the agency wants to do. This looks like a special case of that triviality, as true as it is irrelevant. Generally, one of the problems with insufficient training in math is the lack of training in not reading extra purpose into mathematical definitions.
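A minimal formalization of that triviality (the notation here is mine, not anything from the comment): let $A^*$ be the set of actions the agent in fact wants to take, and define

$$U(a) = \begin{cases} 1 & \text{if } a \in A^* \\ 0 & \text{otherwise.} \end{cases}$$

Every agent maximizes this $U$ by construction, which is exactly why the representation carries no predictive content.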

Do people think Less Wrong rationality is parochial?

I think you hit the nail on the head. It seems to me that LW represents bracketing by rationality - i.e., there's a lower limit below which you don't find the site interesting, there is a range in which you see it as a rationality community, and there's an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on other topics.

Dangerously wrong, even; progress in computing technology leads to new cures for diseases, and misguided advocacy of the great harm of such progress, done by people with no under... (read more)

Are you aware of another online community where people more rational than LWers gather? If not, any ideas about how to create such a community?

Also, if someone was worried about the possibility of a bad singularity, but didn't think that supporting SIAI was a good way to address that concern, what should they do instead?

Do people think Less Wrong rationality is parochial?

Popularization is better without novel jargon though.

Unless there are especially important concepts that lack labels (or lack adequate labels).
Do people think Less Wrong rationality is parochial?

That's why I said 'self deluded', rather than just 'deluded'. There is a big difference between believing something incorrect that's believed by default, and coming up yourself with a very convenient incorrect belief that makes you feel good and pays the bills, and then actively working to avoid any challenges to this belief. Honest people are those who put such beliefs to good scrutiny (not just talk about putting such beliefs to scrutiny).

Honesty is an elusive matter when the belief works like that dragon in the garage. When you are lying, you have to ... (read more)

Hrm? If Newton and Kepler were deluded by mysticism, they were self-deluded. They weren't toeing a party line and they weren't echoing conventional views. They sat down and thought hard and came up with beliefs that seem pretty nuts to us.

I see that you want to label it as "not honest" if they don't put those beliefs to good scrutiny. I think you are using "honest" in a non-standard (and possibly circular) way here. We can't easily tell from the outside how much care they invested in forming those beliefs, or how self-deluded they are. All we can tell is whether the belief, in retrospect, seems to have been plausible given the evidence available at the time. If you want to label it as "not honest" when it seems wacky to us, then yes, tautologically honest people don't come to have wacky beliefs.

The impression I have is that N and K (and many scientists since) weren't into numerology or mysticism to impress their peers or to receive external benefits: they really did believe, based on internal factors.
Do people think Less Wrong rationality is parochial?

Well, the issue is that LW is heavily biased towards agreement with the rationalizations of the self-important wankery in question (the whole FAI/uFAI thing)...

With the AI, basically, you can see folks who have no understanding whatsoever of how to build practical software, and whose idea of the AI is 'predict outcomes of actions, choose actions that give the best outcome' (an entirely impractical model given the enormous number of actions when innovating), accusing the folks in the industry who do, of anthropomorphizing the AI - and taking it as operating assum... (read more)

honest people can't stay self deluded for very long.

This is surely not true. Lots of wrong ideas last a long time beyond when they are, in theory, recognizably wrong. Humans have tremendous inertia to stick with familiar delusions, rather than replacing them with new notions.

Consider any long-lived superstition, pseudoscience, etc. To pick an uncontroversial example, astrology. There were very powerful arguments against it going back to antiquity, and there are believers down to the present. There are certainly also conscious con artists propping up the... (read more)

A Kick in the Rationals: What hurts you in your LessWrong Parts?

It's more a question of how charitably you read LW, maybe? The phenomenon I am speaking of is quite generic. About 1% of people are clinical narcissists (probably more); that's a lot of people, and narcissists dedicate more resources to self-promotion, and take on projects that no well-calibrated person of the same expertise would attempt, such as making a free energy generator without having studied physics or invented anything less grandiose first.

Do people think Less Wrong rationality is parochial?

Some of the rationality may to a significant extent be a subset of the standard kind, but it has important omissions - in the areas of game theory, for instance - and, much more importantly, significant misapplication, such as taking the approaches that are theoretically ideal given infinite computing power as the ideal, and treating as the best attempt the approximations to them, which are grossly suboptimal on the limited hardware where different algorithms have to be employed instead. One has to also understand that in practice computations have cost, and any form of fuzzy reasoni... (read more)

I don't have enough knowledge to agree/disagree with points before your "edit: Note." I do agree with what you said after that. And applying your own advice, please add some paragraph breaks to your post. If nothing else, add a break between "extra powers of rational thinking" and "he asked a question." It should make your post much easier to read and, as a consequence, more people are likely to read it.
A Kick in the Rationals: What hurts you in your LessWrong Parts?

Look up quantum gravity (or rather, the lack of a unified theory covering both QM and GR). It is a very complex issue and many basics have to be learnt before it can be discussed at all. The way we do physics right now is by applying inconsistent rules. We can't get QM to work out to GR at large scale. It may gracefully turn 'classical', but this is precisely the problem, because the world is not classical at large scale (GR).

I am well aware of the QG issues. That was not my point. I will disengage now.
A Kick in the Rationals: What hurts you in your LessWrong Parts?

One basic thing about MWI is that it is a matter of physical fact that large objects tend to violate the 'laws of quantum mechanics' as we know them (the violation is known as gravity), and actual physicists do know that we simply do not know what quantum mechanics works out to at large scale. To actually have a case for MWI, one would need to develop a good quantum gravity theory in which many worlds would naturally arise, but that is very difficult (and many worlds may well not naturally arise).

I cannot agree with this assertion. Except for the mysterious "measurement" thing, where only a single outcome is seen where many were possible (I'm intentionally using the word "seen" to describe our perception, as opposed to "occurs", which may irk the MWI crowd), the quantum world gracefully turns classical as the objects get larger (the energy levels bunch tighter together, the tunneling probabilities vanish exponentially, the interaction with the environment, resulting in decoherence, gets stronger, etc.). This has not been shown to have anything to do with gravity, though Roger Penrose thinks that gravity may limit the mass of quantum objects, and I am aware of some research trying to test this assertion. For all I know, someone might be writing a numerical code to trace through decoherence all the way to the microscopic level as we speak, based on the standard QM/QFT laws.
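To make the "vanish exponentially" point concrete, a standard back-of-the-envelope estimate (the WKB barrier-penetration factor; this is textbook material, not something from the comment above): for a barrier of width $d$ and height $V$ above the energy $E$, the tunneling probability is roughly

$$T \sim \exp\!\left(-\frac{2d}{\hbar}\sqrt{2m(V-E)}\right),$$

so the suppression grows exponentially in $\sqrt{m}$, and for macroscopic masses tunneling becomes astronomically unlikely.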
A Kick in the Rationals: What hurts you in your LessWrong Parts?

Various cases of NPD online. The NPD-afflicted individuals usually are too arrogant to study or to do anything difficult where they can measurably fail, and instead opt to blog on topics where they don't know the fundamentals, promoting misinformed opinions. Some even live on donations for performing work that they never tried to study for. It's unclear what attracts normal people to such individuals, but I guess if you don't think yourself a supergenius, you can still think yourself clever for following a genius whom you can detect without relying o... (read more)

OMG, is that the real "shrink"? If so - we're not worthy! I own a copy of every one of your books.
Ooh, burn! Your last link explains the ire I expressed in my other comment, thank you.

You know, an uncharitable reading of this would almost sort-of kinda maybe construe it as a rebuke of the LW community. Almost.

Sounds like the LaRouche cult. edit: that last link is excellent. The ingroup-outgroup thing gone pathological. All the ingroup needs is a defined enemy and WHAM! cult.