If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open Thread, January 4-10, 2016

Thank you.

1Jayson_Virissimo
You're welcome.
[-]gjm150

I will be most interested to find out what it is that requires a sockpuppet but doesn't require it to be secret that it's a sockpuppet or even whose sockpuppet.

I think the point is that when googling his name, the post does not show up, but if LWers know it's the same person, there's no harm.

5gjm
Yup, he has confirmed essentially this by PM.
4tut
What is your credence that the Google of five years in the future won't find things written under pseudonyms when you search for the author's real name? 10 years?
4Vaniver
I agree that will likely be available as a subscription service in 5 years or so, but I think it would be somewhat uncharacteristic for Google to launch that for everyone. (As I recall, they had rather good face recognition software ~5 years ago but decided to kill potential features built on that instead of rolling them out, because of privacy and PR concerns.)
0ChristianKl
By replying, you eliminated his ability to delete the post, and thus maybe defeated the point of the effort.
0gjm
Can't he still replace it with [deleted] or something? (If so, and if it is helpful, I will happily amend what I wrote to leak less information about what happened.) Anyway: of course it was not my intention to deanonymize anyone, and I regret it if I have.
0ChristianKl
I think that only happens when he deletes his own account. I don't think that's the case, but if he wants to create an anonymous account he should likely start over with a new one.
0gjm
I meant replacing the content with "[deleted]", not the account name.
0ChristianKl
I think from the context of your post the meaning would still have been clear. Apart from that, I don't think he can do it after he retracted the post (the strikethrough).
0philh
Nope, I can still edit it.

Lessons from teaching a neural network...

Grandma teaches our baby that a pink toy cat is "meow".
Baby calls the pink cat "meow".
Parents celebrate. (It's her first word!)

Later Barbara notices that the baby also calls another pink toy non-cat "meow".
The celebration stops; the parents are concerned.
Viliam: "We need to teach her that this other pink toy is... uhm... actually, what is this thing? Is that a pig or a pink bear or what? I have no idea. Why do people create such horribly unrealistic toys for the innocent little children?"
Barbara shrugs.
Viliam: "I guess if we don't know, it's okay if the baby doesn't know either. The toys are kinda similar. Let's ignore this, so we neither correct her nor reward her for calling this toy 'meow'."

Barbara: "I noticed that the baby also calls the pink fish 'meow'."
Viliam: "Okay... I think now the problem is obvious... and so is the solution."
Viliam brings a white toy cat and teaches the baby that this toy is also "meow".
Baby initially seems incredulous, but gradually accepts.

A week later, the baby calls every toy and grandma "meow".

[-][anonymous]150

The child was generalizing along the wrong dimension, and your solution was to train an increase in the generalization of the word "meow", which is what you got. You need to teach discrimination, not generalization. One method is to present the pink cat and the pink fish sequentially: reward the "meow" response in the presence of the cat, and reward fish responses to the fish. Eventually, "meow" responses to the fish should extinguish.
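A minimal toy sketch of the reward/extinction dynamics described above, using a simple Rescorla-Wagner-style update; the stimuli, responses, and all numbers are illustrative assumptions, not anything measured:

```python
# Toy sketch of discrimination training: reward "meow" only to the cat,
# never to the fish, and watch the wrong association extinguish.
strength = {("cat", "meow"): 0.9,   # already strongly learned
            ("fish", "meow"): 0.8,  # the over-generalized response
            ("fish", "fish"): 0.1}  # the response we want instead
LEARNING_RATE = 0.2

def trial(stimulus, response, rewarded):
    """Nudge the association toward 1 if rewarded, toward 0 if not."""
    target = 1.0 if rewarded else 0.0
    s = strength[(stimulus, response)]
    strength[(stimulus, response)] = s + LEARNING_RATE * (target - s)

for _ in range(30):                       # alternate the presentations
    trial("cat", "meow", rewarded=True)   # reinforce "meow" to the cat
    trial("fish", "meow", rewarded=False) # never reinforce "meow" to the fish
    trial("fish", "fish", rewarded=True)  # reinforce the correct fish response

print(strength)  # ("fish", "meow") decays toward 0: the response extinguishes
```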

7[anonymous]
Teaching subtraction: 'See, you had five apples, and you ate three. How many apples do you have?' 'Five.' 'No, look here, you only have two left. Okay, you had six apples, and ate four, how many apples do you have now?' 'Five.' 'No, dear, look here... Okay...' Sigh. 'Mom?' 'Yes, dear?' 'And if I have many apples, and I eat many, how many do I have left?..'
5Gunnar_Zarncke
Piaget's problem: The child tries to guess what the teacher/parent/questioner wants. I never teach math. At least not in the school way of offering problems and asking questions about them. For example, I 'taught' subtraction the following way: (in the kitchen) Me: "Please give me six potatoes." Him: "1, 2, 3" Me (putting them in the pot): "How many do we still need?" Him: "4, 5, 6" (thinking) "3 more." A specific situation avoids guessing the password.
5Dagon
The necessity of negative examples is well-known when training classifiers.
4Viliam
In theory I agree. Experimentally, trying to teach her that other toys are connected to different sounds, e.g. that the black-and-white cow is "moo", hasn't produced any reaction so far. And I believe she doesn't understand the meaning of the word "not" yet, so I can't explain that some things are "not meow". I guess this problem will fix itself later... that some day she will also start repeating the sounds for other animals. (But I am not sure what the official sound for a turtle is.)
3moridinamael
I am sure this isn't necessary, but you do realize that she's going to learn language flawlessly without you actively doing anything? Instead of saying "my [noun]", my daughter used to exclusively say "the [noun] that I use." My son used to append a "t" sound to the end of almost every word. These quirks sorted themselves out without us mentioning them. =)
6Crux
The phrase "actively doing anything" is too slippery. What one person does passively another may do actively. People who post on Less Wrong tend to do things consciously more often than the general public. The theories which say that children acquire language without anyone doing anything special are no doubt studying the behavior of normal people. The conclusion is that Viliam is probably simply thinking out loud about things that most people consider only subconsciously and implement in some way but don't know how to articulate. If you try to acquire a foreign language by merely listening to native speakers converse, you will learn very little. Children learn language when adults adapt their speech to their level and attempt to bridge the inferential distance. Most people do this by accident of having the impulses of a human parent.
1Gunnar_Zarncke
Not actively, but maybe subconsciously. As I already mentioned, child-directed speech is different. And also yes: most children probably can get by without that either. And also: I'm sure gwern will chime in and cite that parents have no impact on language and concept acquisition at all.
3TimS
There's overwhelming data that parenting can prevent language acquisition. But that requires extreme degenerate cases - essentially child abuse on the level of locking the child in the closet and not talking to them at all. For typical parenting, I agree that it is unlikely that variance in parenting style has measurable effect on language acquisition.
0Gunnar_Zarncke
And what about the size of the vocabulary?
6gwern
I don't see why you would expect that to be affected much either. Vocabulary is a good measure of intelligence because words have a very long tail (Pareto, IIRC) distribution of usage; most words are hardly ever used. I pride myself on my vocabulary and I know it's vastly larger than most English speakers as evidenced by things like a perfect SAT verbal score, but nevertheless if I read through one page of my compact OED, I will run into scores or hundreds of words I've never seen before (and even more meanings of words!). This doesn't bother me since when you reach the point where you're reading the OED to learn new words, the words are now useless for any sort of actual communication... Anyway, since most words are hardly ever used, people will be exposed to them very few times, and so they are a sensitive measure of how quickly a person can learn the meaning of a word. If people are exposed to the word 'perspicacious' only 3 or 4 times in a lifetime on average, then intelligence will heavily affect whether that was enough for them to learn it; and by testing a few dozen words drawn from the critical region of rarity where they're rare enough that most people would not have been exposed enough times but not so rare that no one learns them normally (as determined empirically - think item-response theory curves here), you can get a surprisingly good proxy for intelligence despite the obvious fragility to cheating. (Of course, you can get around it, and the SAT does, by simply having lots of semi-rare words to draw upon. Have you ever looked at comprehensive SAT vocab lists? There's just no way most people could memorize more than a few hundred without, well, being very smart and verbally adept.) Since there are so many potential rare vocabulary words to learn (and which could be sampled on an IQ test) and since parents speak only a small fraction of the number of words a person is exposed to over a lifetime... Even a parent deliberately trying to build vocabulary
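A minimal sketch of the item-response-theory idea mentioned above: under a 2-parameter logistic item model, the probability of knowing a word rises steeply with ability around the word's difficulty, which is why a handful of well-chosen rare words is so informative. The functional form and all numbers here are illustrative assumptions, not gwern's:

```python
import math

def p_knows_word(ability, difficulty, discrimination=1.5):
    """2-parameter logistic IRT item: probability of knowing a word,
    given latent ability and the word's difficulty (illustrative units)."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

# A moderately rare word (difficulty 1.0) separates test-takers sharply:
for ability in (-1.0, 0.0, 1.0, 2.0):
    print(ability, round(p_knows_word(ability, difficulty=1.0), 2))
# ~0.05, 0.18, 0.50, 0.82 -- the steep middle region is where a vocabulary
# item carries the most information about the underlying ability
```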
4Lumifer
It's also a good measure of how much you read (or how much you read as a child). People who read books -- lots and LOTS of books -- have a very good vocabulary. People who don't, don't. There is certainly a correlation, but vocabulary is still just a proxy for intelligence, and maybe not a good proxy for math people or in the 'net age.
0Gunnar_Zarncke
Yes, that is about what I expected you to confirm. And wow, are you omnipresent, or how come you actually noticed the post so quickly (or is there a 'find my name in posts' functionality I overlooked)?
3gwern
I just skim http://lesswrong.com/r/discussion/comments/ and occasionally C-f for my name. (My PMs/red-box are so backed up that I haven't dared check them in many months, so this is how I see most replies to my comments...)
0polymathwannabe
Then teach her to say turtle. Likewise for other animals; a cat is not a meow.
0Viliam
Yes, as soon as she learns to speak polysyllabic words (which in my language also include "cat").
5polymathwannabe
My first word was "daddy" (papi in Spanish). It should be possible to start with regular words. Edited to add: I just looked up "turtle" in Slovak. I can't believe how much of a jerk I was in my previous comment.
2Gunnar_Zarncke
I'm not convinced that using different names is a really helpful idea. It requires an extra transition later on. Well no real harm done. But I wonder about the principle behind that: Dropping complexity because it is hard? I agree that child directed speech is different. It is simpler. But it isn't wrong. Couldn't you have said "the cat meows" or "this toy meows" or even "it meows"? That would have placed the verb in the right place in a simple sentence. The baby can now validly repeat the sound/word.
3Emily
Hard to come by in normal language acquisition, though. So it probably doesn't quite work like that.
0Unnamed
Sounds like she hasn't learned shape bias yet.

I've gotten around to doing a cost-benefit analysis for vitamin D: http://www.gwern.net/Longevity#vitamin-d

3closeness
Is it 5000IU per day?
1gwern
We don't know. Since you asked, here's the comment from one of the more recent meta-analyses to discuss dose in connection with all-cause mortality, Autier 2014: 1μg=40IU, so 10μg=400IU, 20μg=800IU, and 1250μg=5000IU. Personally, I'm not sure I agree. The mechanistic theory and correlations do not predict that 400IU is ideal, it doesn't seem enough to get blood serum levels of 25(OH)D to what seems optimal, and I don't even read Rejnmark the same way: look at the Figure 3 forest plot. To me, this looks like after correcting for Smith's use of D2 rather than D3 (D2 usually performs worse), that there are too few studies using higher doses to make any kind of claim (Table 1; almost all the daily studies use <=20μg), and the studies which we do have tend to point to higher being better within this restricted range of dosages. That said, I cannot prove that 5k IU is equally or more effective, so if anyone is feeling risk-averse or dubious on that score, they should stick with 800IU doses.
1ChristianKl
People in the studies presumably don't take it all in the morning. Do you have an estimate of how that affects the total effect? How much bigger would you estimate the effect to be when people take it in the morning?
5gwern
I take it in the morning just because I found that taking it late at night harmed my sleep. I have no idea how much people taking it later in the day might reduce benefits by damaging sleep; I would guess that the elderly people usually enrolled in these trials would be taking it as part of their breakfast regimen of pills/prescriptions and so the underestimate of benefits is not that serious.
0Lumifer
D is a fat-soluble vitamin that the body can store. It's not like, say, the B vitamins which get washed out of your body pretty quickly. I don't think when you take it makes any difference (though you might want to take it together with food that contains fat for better absorption).
0ChristianKl
Multiple people such as gwern and Seth Roberts found that the timing makes a difference for them.
1Lumifer
That's true. What I meant is that blood levels of vitamin D are fairly stable, and for the purposes of reduction in mortality it shouldn't matter when in the day you take it. However, side-effects, e.g. affecting sleep, are possible and may be a good reason to take it at particular times.
2ChristianKl
I don't think it's clear at all that the reduction in mortality works through a different mechanism than sleep quality. Vitamin D does do different things, but I would estimate that a lot of the reduction in mortality is due to having a better immune system. Sleeping badly means a worse immune system.
0NoSignalNoNoise
Thanks for posting that! The key stats: expected life extension: 4 months; optimal starting age: 24.

Why too much evidence can be a bad thing

(Phys.org)—Under ancient Jewish law, if a suspect on trial was unanimously found guilty by all judges, then the suspect was acquitted. This reasoning sounds counterintuitive, but the legislators of the time had noticed that unanimous agreement often indicates the presence of systemic error in the judicial process, even if the exact nature of the error is yet to be discovered. They intuitively reasoned that when something seems too good to be true, most likely a mistake was made.

In a new paper to be published in The Proceedings of The Royal Society A, a team of researchers, Lachlan J. Gunn, et al., from Australia and France has further investigated this idea, which they call the "paradox of unanimity."

"If many independent witnesses unanimously testify to the identity of a suspect of a crime, we assume they cannot all be wrong," coauthor Derek Abbott, a physicist and electronic engineer at The University of Adelaide, Australia, told Phys.org. "Unanimity is often assumed to be reliable. However, it turns out that the probability of a large number of people all agreeing is small, so our confidence in unanimity is ill-fo

... (read more)
8gwern
See: * "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" * http://www.gwern.net/The%20Existential%20Risk%20of%20Mathematical%20Error * Jaynes on the Emperor of China fallacy * Schimmack's incredibility index
0gwern
Looks like the paper is now out: http://arxiv.org/pdf/1601.00900v1.pdf
0[anonymous]
Thanks Panorama and Gwern, incredibly interesting quote and links
6philh
This isn't "more evidence can be bad", but "seemingly-stronger evidence can be weaker". If you do the math right, more evidence will make you more likely to get the right answer. If more evidence lowers your conviction rate, then your conviction rate was too high. Briefly, I think what's going on is that a 'yes' presents N bits of evidence for 'guilty', and M bits of evidence for 'the process is biased', where M>N. The probability of bias is initially low, but lots of yeses make it shoot up. So you have four hypotheses (bias yes/no cross guilty yes/no), the two bias ones dominate, and their relative odds are the same as when you started.
2casebash
So, why not stab someone in front of everyone to ensure that they all rule you guilty?
2Slider
If you are fairly confident that the method is noisy when it is operating, then a low spread in its output is an indication that it is not operating. A TV that shows a static image that flickers when you kick it is more likely receiving an actual feed than one that doesn't flicker when kicked. If you have multiple TVs that all flicker at the same time, the cause was likely the weather rather than the broadcast.
0[anonymous]
Can you clarify what you're talking about without using the terms method, operating, and spread?
3Slider
I have a device that displays three numbers when a button is pressed. If any two numbers are different, then one of the numbers is the exact room temperature, but there's no telling which one it is. If all the numbers are the same, I don't have any reason to think the displayed number is the room temperature. In a way I have two info channels: "did the button pressing result in a temperature reading?" and "if there was a temperature reading, what does it tell me about the true temperature?". The first of these channels doesn't tell me anything about the temperature, but it does tell me about something. Or I could have three temperature meters, one of which is accurate in the cold, one in moderate temperatures, and one in the heat. Suppose that cold and hot don't overlap. If all the temperature gauges show the same number, it would mean the cold and hot meters were in fact accurate at the same temperatures. I cannot be more certain about the temperature than about the operating principles of the measuring device, as the temperature reading is based on those principles. The temperature gauges showing different temperatures supports me being right about the operating principles. Them being the same is evidence that I am ignorant of how those numbers are formed.
0[anonymous]
Very well explained :)
-2IlyaShpitser
https://en.wikipedia.org/wiki/Central_limit_theorem
-4Slider
That is the case that "+ing" among many should be Gaussian. If the distribution is too narrow to be Gaussian, it tells against the "+ing" theory. Someone who is adamant that it is just a very narrow Gaussian could never be proven conclusively wrong. However, it places constraints on how random the factors can be. At some point the claim of regularity will become implausible. If you have something that claims that throwing a fair die will always come up with the same number, there is an error lurking about.
2IlyaShpitser
The variance of the Gaussian you get isn't arbitrary; it's related to the variance of the variables being combined. So unless you expect people picking folks out of a lineup to be mostly noise-free, a very narrow Gaussian would imply a violation of the assumptions of the CLT. This Jewish law thing is sort of an informal-law version of how frequentist hypothesis testing works: assume everything is fine (the null) and see how surprised we are. If very surprised, reject the assumption that everything is fine.
-4Slider
Thus our knowledge of people being noisy means the mean is ill-defined rather than inaccurate.
0IlyaShpitser
Sorry, what?
-4Slider
Having unanimous testimony means that the Gaussian is too narrow to be the result of noisy testimonies. So either they gave absolutely accurate testimonies, or they did something other than testify. Having them all agree raises more doubt about whether everyone was trying to deliver justice than about their ability to deliver it. If a jury answers a "guilty or not guilty" verdict with "banana", it sure ain't the result of a valid justice process. Too-certain results are effectively as good as "banana" verdicts. If our assumptions about the process hold, they should not happen.
1Viliam
I believe I read somewhere on LW about an investment company that had three directors, and when they decided whether to invest in some company, they voted, and invested only if exactly 2 of the 3 agreed. The reasoning behind this policy was that if all 3 agreed, it was probably just a fad. Unfortunately, I am unable to find the link.
[-][anonymous]140

A side note.

My mother is a psychologist, father - an applied physicist, aunt 1 - a former morgue cytologist, aunt 2 - a practicing ultrasound specialist, father-in-law - a general practitioner, husband - a biochemist, my friends (c. 5) are biologists, and most of my immediate coworkers teach either chemistry or biology. (Occasionally I talk to other people, too.) I'm mentioning this to describe the scope of my experience with how they come to terms with the 'animal part' of the human being; when I started reading LW I felt immediately that people here come from different backgrounds. It felt implied that 'rationality' was a culture of either hacking humanity, or patching together the best practices accumulated in the past (or even just adopting the past), because clearly, we are held back by social constraints - if we weren't, we'd be able to fully realize our winning potential. (I'm strawmanning a bit, yes.) For a while I ignored the voice in the back of my mind that kept mumbling 'inferential distances between the dreams of these people and the underlying wetware are too great for you to estimate', or some such, but I don't want to anymore.

To put it simply, there is a marked diff... (read more)

6Viliam
I am not sure what exactly you wanted to say. All I got from reading it is: "human anatomy is complicated, non-biologists hugely underestimate this, modifying the anatomy of the human brain would be incredibly difficult". I am not sure what the relation is to the following part (which doesn't speak about modifying the anatomy of the human brain): Are you suggesting that for increasing rationality, using "best practices" will not be enough, and changes in the anatomy of the human brain will be required (and we underestimate how difficult that will be)? Or something else?
4Lumifer
I read Romashka as saying that the clean separation between the hardware and the software does not work for humans. Humans are wetware which is both.
4[anonymous]
That, and that those changes in the brain might lead to other changes not associated with intelligence at all. Like sleep requirements, haemorrhages or fluctuations in blood pressure in the skull, food cravings, etc. Things that belong to physiology and are freely discussed by a much narrower circle of people, in part because even among biologists many people don't like the organismal level of discussion, and doctors are too concerned with not doing harm to consider radical transformations. Currently, 'rationality' is seen (by me) as a mix of nurturing one's ability to act given the current limitations AND counting on vastly lessened limitations in the future, with some vague hopes of adapting the brain to perform better, but the basis of the hopes seems (to me) unestablished.
0Viliam
That's also more or less how I see it. I am not planning to perform a brain surgery on myself in the near future. :D
2Gunnar_Zarncke
The XKCD for it: DNA (or "Biology is largely solved"): https://xkcd.com/1605/
0ChristianKl
I see three lines of addressing this concern: 1) Anatomy was under strong evolutionary pressure for a long time. Human intelligence is a fairly recent phenomenon of the last 100,000 years. It's a mess that's not as well ordered as anatomy. 2) Individual humans deviate more from textbook anatomy than you would guess by reading the textbook. 3) The brain seems to be built out of basic modules that easily allow it to add an additional color if you edit the DNA in the eye via gene therapy. People with implanted magnets can feel magnetic fields. Its modules allow us to learn complex mental tasks like reading texts, which is very far from what we evolved to do.
3[anonymous]
Also, human intelligence has been evolving exactly as long as human anatomy; it simply leaped forward recently in ways we can notice. That doesn't mean it wasn't under strong evolutionary pressure before. I would say that until humans learned to use tools, the pressure on an individual human must have been even stronger.
1ChristianKl
I don't think that reflects reality. Our anatomy isn't as different from a chimpanzee's as our minds are. Most people hear voices in their head that say stuff to them. Chimpanzees don't have language to do something similar.
1[anonymous]
I'm not saying otherwise! I'm saying that the formulation makes little sense either way. Compare: 'there is little observed variation in anatomy between apes in the broad sense because the evolutionary pressure constraining anatomical changes is too great to allow much viable variation', 'there is little observed variation in anatomy ..., but not in intelligence, because further evolution of intelligence allows for greater success and so younger branches are more intelligent and better at survival', 'only change in anatomy drives change in intelligence, so apparently there was some great hack which translated small changes in anatomy into great changes in intelligence', 'chimpanzees never tell us about the voices they hear'...
0ChristianKl
There are millions of years invested in the task of how to move with legs. There are not millions of years invested in the task of how brains best deal with language.
0[anonymous]
What do you understand as evolution of the mind, then, and how is it related to that of organs?
0ChristianKl
I think adding language produced something like a quantum leap for the mind, and that there's no similar quantum leap for other organs like the human heart. The quantum leap means that other parts have to adapt and optimize for language now being a major factor. You could look at IQ. The mental difference between a human at IQ 70 and a human at IQ 130 is vast. Intelligence is also highly heritable. With a few hundred thousand years and a decent amount of evolutionary pressure toward stronger intelligence, you wouldn't have many low-IQ people anymore.
1[anonymous]
And yet textbook anatomy is my best guess about a body when I haven't seen it, and all deviations are describable in comparison to it. What I object to is the norm of treating phenomenology, such as the observations about magnets and eye color, as a more-or-less solid background for predictions about the future. If we discuss, say, artificial new brain modules, that's fine by me as long as I keep in mind the potential problems with cranial pressure fluctuations, the need to establish interconnections with other neurons - in some very ordered fashion, building blood vessels to feed them, changes in glucose consumption, even the possibility of your children choosing to have completely different artificial modules than you, to the point that heritability becomes obsolete, etc. I am not enough of a specialist to talk about it. I have low priors on anybody here pointing me to The Literature were I to ask. I think seeing at least the bones and then trying to gauge the distance to what experimental interference one considers possible would be a good thing to happen.
[-][anonymous]130

Would anyone actually be interested if I prepared a post about the recent "correlation explanation" approach to latent-model learning, the "multivariate mutual information"/"total correlation" metric it's all based on, supervenience in analytical philosophy, and implications for cognitive science and AI, including FAI?

Because I promise I didn't write that last sentence by picking buzzwords out of a bag.

6IlyaShpitser
I might be super mean about this!
0[anonymous]
Is "super mean" still a bad thing, or now a good thing?
5gjm
I will be very interested to read both your account of correlation explanation and Ilya's super-meanness about it.
4IlyaShpitser
In the words of Calvin's dad, it builds character.
0[anonymous]
Ah. You mean you'll act as Reviewer 2. Excellent.
0IlyaShpitser
There is a relevant quote from Faust by Mephistopheles.
0[anonymous]
That being, for those of us too gauche to have read Faust in the original?
1IlyaShpitser
Ein Teil von jener Kraft, Die stets das Böse will und stets das Gute schafft. Ich bin der Geist der stets verneint! ---------------------------------------- Part of that power which would Do evil constantly and constantly does good. I am the spirit of perpetual negation
0[anonymous]
Anyway, could you PM me your email address? I figure that for a start at being Reviewer 2, I might as well send you the last thing I wrote along these lines, and then start writing the one I've actually just promised.
0[anonymous]
I really don't think that Reviewer 2 has anything to do with Lucifer, or with the Catholic view of Lucifer/Satan as self-thwarting.
1gjm
I think you are overestimating how literally and seriously Ilya intended his reference to be taken. I don't think the intended parallel goes beyond this: the devil (allegedly) tries to do evil and ends up doing good in spite of that; a highly critical reviewer feels (to the reviewee) like he's doing evil but ends up doing good in spite of that.
0[anonymous]
Ah. But of course the reviewer thinks he's good, from his point of view within the system.
1gjm
Oh yes, indeed. (For me that's actually part of why the parallel Ilya is drawing is funny.)
6Manfred
I'd be interested! I hereby promise to read and comment, unless you've gone totally off the bland end.
3[anonymous]
Ok, then, it'll definitely happen Real Soon Now.
3Lumifer
Moderately. On the plus side it's forcing people to acknowledge the uncertainty involved in many numbers they use. On the minus side it's treating everything as a normal (Gaussian) distribution. That's a common default assumption, but it's not necessarily a good assumption. To start with an obvious problem, a lot of real-world values are bounded, but the normal distribution is not.
0iarwain1
It's open source. Right now I only know very basic Python, but I'm taking a CS course this coming semester and I'm going for a minor in CS. How hard do you think it would be to add in other distributions, bounded values, etc.?
0Douglas_Knight
As a matter of programming it would be very easy. The difficult part is designing the user interface so that the availability of the options doesn't make the overall product worse.
0[anonymous]
The author is on the effective altruism forum; he said his next planned feature is more distributions, and that he specifically architected it to be easy to add new distributions.
0Lumifer
How hard it will be to add features depends on the way it's architected, but the real issue is complexity. After you add other distributions, bounds, etc., the user has to figure out what the right choices are for his specific situation, and that's a set of non-trivial decisions. Besides, one of the reasons people like normal distributions is that they are nicely tractable. If you want to, say, add two of them, it's easy to do. But once you go to even slightly complicated things like truncated normals, a lot of operations do not have analytical solutions and you need to do stuff numerically, and that becomes... complex and slow.
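To illustrate the numerical route: a minimal Monte Carlo sketch of adding two truncated normals by sampling. This is just the generic technique, not necessarily how the tool under discussion implements it, and all parameters are arbitrary:

```python
import random

N = 100_000  # Monte Carlo sample size

def truncated_normal(mu, sigma, low, high):
    """Sample normal(mu, sigma), rejecting draws outside [low, high]."""
    while True:
        x = random.gauss(mu, sigma)
        if low <= x <= high:
            return x

# "Adding" two bounded quantities has no neat closed form, so just sample:
sums = sorted(truncated_normal(10, 3, 0, 20) + truncated_normal(5, 2, 0, 8)
              for _ in range(N))
print("mean ~", sum(sums) / N)
print("90% interval ~", (sums[int(0.05 * N)], sums[int(0.95 * N)]))
```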
0Douglas_Knight
It is already doing everything numerically.
2moridinamael
This is awesome. Awesome awesome awesome. I have been trying to code something like this for a long time but I've never got the hang of UI design.

Why does E. Yudkowsky voice such strong priors, e.g. wrt the laws of physics (many-worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn't make him so vulnerable? (By vulnerable I mean that his work often gets ripped apart as cultish pseudoscience.)

You seem to assume that MWI makes the Sequences more vulnerable; i.e. that there are people who feel okay with the rest of the Sequences, but MWI makes them dismiss it as pseudoscience.

I think there are other things that rub people the wrong way (that EY in general talks about some topics more than appropriate for his status, whether it's about science, philosophy, politics, or religion) and MWI is merely the most convenient point of attack (at least among those people who don't care about religion). Without MWI, something else would be "the most controversial topic which EY should not have added because it antagonizes people for no good reason", and people would speculate about the dark reasons that made EY write about that.

For context, I will quote the part that Yvain quoted from the Sequences:

Everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on QM. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probabl

... (read more)

Because he was building a tribe. (He's done now).


edit: This should actually worry people a lot more than it seems to.

3Lumifer
Why?
6IlyaShpitser
Consider that if stuff someone says resonates with you, that someone is optimizing for that.
4Lumifer
There are two quite different scenarios here. In scenario 1 that someone knows me beforehand and optimizes what he says to influence me. In scenario 2 that someone doesn't know who will respond, but is optimizing his message to attract specific kinds of people. The former scenario is a bit worrisome -- it's manipulation. But the latter one looks fairly benign to me -- how else would you attract people with a particular set of features? Of course the message is, in some sense, bait but unless it's poisoned that shouldn't be a big problem.
0Dagon
I don't know why scenario 2 should be any less worrisome. The distinction between "optimized for some perception/subset of you" and "optimized for someone like you" is completely meaningless.
0Lumifer
Because of degree of focus. It's like the distinction between a black-hat scanning the entire 'net for vulnerabilities and a black-hat scanning specifically your system for vulnerabilities. Are the two equally worrisome?
0Dagon
equally worrisome, conditional on me having the vulnerability the blackhat is trying to use. This is equivalent to the original warning being conditional on something resonating with you.
-1IlyaShpitser
MIRI survives in part via donations from people who bought the party line on stuff like MWI.
4ChristianKl
Are you saying that based on having looked at the data? I think we should have a census that has numbers about donations for MIRI and belief in MWI.
2Vaniver
Really, you would want MWI belief delta (to before they found LW) to measure "bought the party line."
1IlyaShpitser
I am not trying to emphasize MWI specifically, it's the whole set of tribal markers together.
4bogus
If there is a tribal marker, it's not MWI per se; it's choosing an interpretation of QM on grounds of explanatory parsimony. Eliezer clearly believed that MWI is the only interpretation of QM that qualifies on such grounds. However, such a belief is quite simply misguided; it ignores several other formulations, including e.g. relational quantum mechanics, the ensemble interpretation, the transactional interpretation, etc., that are also remarkable for their overall parsimony. Someone who advocated for one of these other approaches would be just as recognizable as a member of the rationalist 'tribe'.
0[anonymous]
* ...contested the strength of the MWI claim. Explanatory parsimony doesn't differentiate a strong claim from a weak one. OP's original claim:
3Lumifer
A fair point. Maybe I'm committing the typical mind fallacy and underestimating the general gullibility of people. If someone offers you something, it's obvious to me that you should look for strings, consider the incentives of the giver, and ponder the consequences (including those concerning your mind). If you don't understand why something is given to you, it's probably wise to delay grabbing the cheese (or not to touch it) until you understand. And still, this all looks to me like a plain-vanilla example of bootstrapping an organization and creating a base of support, financial and otherwise, for it. Unless you think there were lies, misdirections, or particularly egregious sins of omission, that's just how the world operates.
2Richard_Kennaway
Also, anyone who succeeds in attracting people to an enterprise, be it by the most impeccable of means, will find the people they have assembled creating tribal markers anyway. The leader doesn't have to give out funny hats. People will invent their own.
1IlyaShpitser
People do a lot of things. Have biases, for example. There is quite a bit of our evolutionary legacy it would be wise to deemphasize. Not like there aren't successful examples of people doing good work in common and not being a tribe. ---------------------------------------- edit: I think what's going on is a lot of the rationalist tribe folks are on the spectrum and/or "nerdy", and thus have a more difficult time forming communities, and LW/etc was a great way for them to get something important in their life. They find it valuable and rightly so. They don't want to give it up. I am sympathetic to this, but I think it would be wise to separate the community aspects and rationality itself as a "serious business." Like, I am friends with lots of academics, but the academic part of our relationship has to be kept separate (I would rip into their papers in peer review, etc.) The guru/disciple dynamic I think is super unhealthy.
0[anonymous]
Because warning against dark side rationality with dark side rationality to find light side rationalists doesn't look good against the perennial c-word claims against LW...
1knb
I think LW is skewed toward believing in MWI because they've all read Yudkowsky. It really doesn't seem likely Yudkowsky just gleaned MWI was already popular and wrote about it to pander to the tribe. In any case I don't really see why MWI would be a salient point for group identity.
4IlyaShpitser
That's not what I am saying. People didn't write the Nicene Creed to pander to Christians. (Sorry about the affect side effects of that comparison, that wasn't my intention, just the first example that came to mind). MWI is perfect for group identity -- it's safely beyond falsification, and QM interpretations are a sufficiently obscure topic where folks typically haven't thought a lot about it. So you don't get a lot of noise in the marker. But I am not trying to make MWI into more than it is. I don't think MWI is a centrally important idea, it's mostly an illustration of what I think is going on (also with some other ideas).
0[anonymous]
Consequentialist ethic

My model of him has him having an attitude of "if I think that there's a reason to be highly confident of X, then I'm not going to hide what's true just for the sake of playing social games".

3ChristianKl
Given the way the internet works, bloggers who don't take strong stances don't get traffic. If Yudkowsky hadn't taken positions confidently, it's likely that he wouldn't have founded LW as we know it. Shying away from strong positions for the sake of not wanting to be vulnerable is not a good strategy.
0username2
I don't agree with this reasoning. Why not write clickbait then if the goal is to drive traffic?
3ChristianKl
I don't think the goal is to drive traffic. It's also to have an impact on the person who reads the article. If you want a deeper look at the strategy, Nassim Taleb is quite explicit about the principle in Antifragile. I don't think that Eliezer's public and private beliefs differ on the issues that RaelwayScot mentioned. A counterfactual world where Eliezer had been less vocal about his beliefs wouldn't have ended up with LW as we know it.
2[anonymous]
It's a balancing act.
0hairyfigment
Actually, I can probably answer this without knowing exactly what you mean: the notion of improved Solomonoff Induction that gets him many-worlds seems like an important concept for his work with MIRI. I don't know where "his work often gets ripped apart" for that reason, but I suspect they'd object to the idea of improved/naturalized SI as well.
4IlyaShpitser
His work doesn't get "ripped apart" because he doesn't write or submit for peer review.
0[anonymous]
inductive bias
0hairyfigment
The Hell do you mean by "computational monism" if you think it could be a "weaker prior"?

So I think I've genuinely finished http://gwern.net/Mail%20delivery now. It should be an interesting read for LWers: it's a fully Bayesian decision-theoretic analysis of when it is optimal to check my mail for deliveries. I learned a tremendous amount working my way through it, from how to make much better use of JAGS, to how to do Bayesian model comparison & averaging, to loss functions and EVSI and EVPI for decision theory purposes, to even dabbling in reinforcement learning with Thompson sampling/probability-matching.

I thought it was done earlier, but then I r... (read more)
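For readers unfamiliar with the last item, here is a minimal sketch of Thompson sampling / probability-matching on a two-armed Bernoulli bandit; the "arms" and their success probabilities are invented for illustration and are not gwern's actual mail-check model:

```python
import random

# Two "arms" with unknown success probabilities (made-up values), e.g.
# "check the mail at time A" vs "check at time B".
TRUE_P = [0.3, 0.6]
alpha = [1, 1]   # Beta(1, 1) priors per arm: alpha counts successes + 1
beta_ = [1, 1]   #                            beta_ counts failures + 1

for _ in range(1000):
    # Thompson sampling / probability matching: draw one plausible success
    # rate per arm from its posterior and act on whichever draw is larger.
    draws = [random.betavariate(alpha[i], beta_[i]) for i in range(2)]
    arm = draws.index(max(draws))
    reward = random.random() < TRUE_P[arm]
    alpha[arm] += reward            # bools count as 1/0
    beta_[arm] += 1 - reward

print("posterior means:",
      [round(alpha[i] / (alpha[i] + beta_[i]), 2) for i in range(2)])
# the better arm gets pulled far more often and its posterior tightens
```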

2gwern
Related to this, I am trying to get a subreddit going for statistical decision theory links and papers to discuss: https://www.reddit.com/r/DecisionTheory/ Right now it's just me dumping in decision-theory related material like cost-benefit analyses, textbooks, relevant blog posts, etc, but hopefully other people will join in. We have flair and a sidebar now! If anyone wants to be a mod, just ask. (Workload should be negligibly small, this is more so the subreddit doesn't get locked by absence.)
0gwern
If anyone with graphics skills would like to help me make a header for the subreddit, I have some ideas and suggested images in https://plus.google.com/103530621949492999968/posts/ZfEtb54aN4Q for visualizing the steps in decision analysis.

Recently my working definition of 'political opinion' became "which parts of reality did the person choose to ignore". At least this is my usual experience when debating with people who have strong political opinions. There usually exists a standard argument that an opposing side would use against them, and the typical responses to this argument are "that's not the most important thing; now let's talk about a completely different topic where my side has the argumentational advantage". (LW calls it an 'ugh field'.) Sometimes the argument... (read more)

I'd say that his critics are annoyed that he's ignoring their motte [ETA: Well, not ignoring, but not treating as the bailey], from which they're basing their assault on Income Inequality. "Come over here and fight, you coward!"

There's not much concession in agreeing that fraud is bad. Look: Fraud is bad. And income inequality is not. Income inequality that promotes or is caused by fraud is bad, but it's bad because fraud is bad, not because income inequality is bad.

It's possible to be ignorant of the portion of the intellectual landscape that includes that motte; to be unaware of fraud. It's possible to be ignorant of the portion of the intellectual landscape that doesn't include the bailey; to be unaware of wealth inequality that isn't hopelessly entangled in fraud. But once you realize that the landscape includes both, you have two conversations you can have: One about income inequality, and one about fraud.

Which is to say, you can address the motte, or you can address the bailey. You don't get to continue to pretend they're the same thing in full intellectual honesty.

7TheAncientGeek
"More specifically, after reading the essay Economic Inequality by Paul Graham, I would say that the really simplified version is that there are essentially two different ways how people get rich. (1) By creating value; and today individuals are able to create incredible amounts of value thanks to technology. (2) By taking value from other people, using force or fraud in a wider meaning of the word; sometimes perfectly legally; often using the wealth they already have as a weapon." Which one is inheritance?
4tut
I think it would be counted as whichever way was used by whoever you got the inheritance from.
4Viliam
Inheritance is how some people randomly get the weapon they can choose to use for (2). I don't have a problem with inheritance per se; I see it as a subset of donation, and I believe people should be free to donate their money. It's just that in a bad system, it will be a multiplier of the badness. If you have a system where evil people can get their money by force/fraud, and where they can use the money to do illegal stuff or to buy lobbyists and change laws in ways that allow them to use more force/fraud legally... in such a system, inheritance gives you people who benefit from the crimes of their ancestors, who in their childhood get extreme power over other people without ever having to do anything productive, etc.
0TheAncientGeek
Can't an inheritance be used as seed money for some wonderful world-enhancing entrepreneurship?
2ChristianKl
Bill Gates argues that it's bad to leave children so much money that they don't have to work: https://www.ted.com/talks/bill_and_melinda_gates_why_giving_away_our_wealth_has_been_the_most_satisfying_thing_we_ve_done I think the world is a better place for Bill Gates thinking that way.
2polymathwannabe
I never thought I'd find myself saying this: I don't want to be Bill Gates's kid.
3PipFoweraker
Does that not-want take into consideration your changed capacity to influence him if you became his child?
0polymathwannabe
How would I have any more influence than his actual child does?
1PipFoweraker
I would posit that his actual children have a comfortably non-zero amount of influence over him, and that the rest of us have a non-zero-but-much-closer-to-zero amount of influence over him.
1Viliam
Yeah, the idea of "I could have been a part of the legendary 1%, but my parents decided to throw me back among the muggles" could make one rather angry.
1gjm
I bet Bill Gates's children will still be comfortably in the 1%. (I found one source saying he plans to leave them $10M each. It didn't look like a super-reliable source.)
0Lumifer
/snort In such a case I would probably think that you failed at your child's upbringing, much earlier than deciding to dispossess her.
3Viliam
Imagine that your parents were uneducated and homeless as teenagers. They lived many years on the streets, starving and abused. But they never gave up hope, and never stopped trying, so when they were 30, they already had the equivalent of a high-school education, were able to get a job, and actually were able to buy a small house. Then you were born. You had a chance to start your life in much better circumstances than your parents had. You could have attended a normal school. You could have a roof above your head every night. You could have the life they only dreamed about when they were your age. But your parents thought like this: "A roof above one's head, and a warm meal every day, that would spoil a child. We didn't have that when we were kids -- and look how far we got! All the misery only made our spirits stronger. What we desire for our children is to have the same opportunity for spiritual growth in life that we had." So they donated all the property to charity, and kicked you out of the house. You can't afford a school anymore. You are lucky to find some work that allows you to eat. Hey, why the sad face? If such a life was good enough for them, how dare you complain that it is not good enough for you? Clearly they failed somewhere in your upbringing, if you believe that you deserve something better than they had. (Explanation: To avoid the status quo bias of being in my social class -- to avoid the feeling that the classes below me have it so bad that it breaks them, but the classes above me have it so good that it weakens their spirits; and therefore my social class, or perhaps the one only slightly above me, just coincidentally happens to be the optimal place in the society -- I sometimes take stories about people, and try to translate them higher or lower on the social ladder and see if they still feel the same.)
4Lumifer
It's not a social class thing. It's a human motivation thing. Humans are motivated by needs and if you start with a few $B in the bank, many of your needs are met by lazily waving your hand. That's not a good thing as the rich say they have discovered empirically. That, of course, is not a new idea. A quote attributed to Genghis Khan says and there is an interesting post discussing the historical context. The consequences, by the way, are very real -- when you grow soft, the next batch of tough, lean, and hungry outsiders comes in and kills you. The no-fortune-for-you rich do not aim for their children to suffer (because it ennobles the spirit or any other such crap). They want their children to go out into the world and make their own mark on the world. And I bet that these children still have a LOT of advantages. For one thing, they have a safety net -- I'm pretty sure the parents will pay for medevac from a trek in Nepal, if need be. For another, they have an excellent network and a sympathetic investor close by.
0Viliam
Same difference. So there is an optimal amount of wealth to inherit to maximize human motivation, and it happens to be exactly the same amount that the Gateses are going to give their children. (The optimal amount depends on the state of the global economy or technology, so it was a different amount for Genghis Khan than it is now.) I'd like to see the data supporting this hypothesis. Especially the kind of data that allows you to estimate the optimal amount as a specific number (not merely that the optimal amount is less than infinity). Which cannot be done if you have too much money. But it will be much easier to do if you have less money. And you know precisely that e.g. 10^7 USD is okay, but 10^8 USD is too much. Imagine that your goal would be to have your children "make their own mark on the world", and that you really care about that goal (as opposed to just having it as a convenient rationalization for some other goals). As a rational person, would you simply reduce their inheritance to sane levels and more or less stop there? If you spent five minutes thinking about the problem, couldn't you find a better solution?
2gjm
You say this as if it's a silly thing that no one could have good reason to believe. I've no idea whether it's actually true but it's not silly. Here, let me put it differently. "It just happens that the amount some outstandingly smart people with a known interest in world-optimization and effectively unlimited resources have decided to leave their children is the optimal amount." I mean, sure, they may well have got it wrong. But they have obvious incentives to get it right, and should be at least as capable of doing so as anyone else. I doubt they would claim to know precisely. But they have to choose some amount, no? You can't leave your children a probability distribution over inheritances. (You could leave them a randomly chosen inheritance, but that's not the same.) It seems like whatever the Gateses were allegedly planning, you could say "And you know precisely that doing X is okay, but doing similar-other-thing-Y is not" and that would have just the same rhetorical force. I don't know. Could you? Have you? If so, why not argue "If the Gateses really had the goals they say, they would do X instead" rather than "If the Gateses really had the goals they say, they would do something else instead; I'm not saying what, but I bet it would be better than what they are doing."? Again, I'm not claiming that what the Gateses are allegedly planning is anything like optimal; for that matter, I have no good evidence that they are actually planning what they're allegedly planning. But the objections you're raising seem really (and uncharacteristically) weak. But I'm not sure I've grasped what your actual position is. Would you care to make it more explicit?
7Viliam
My actual position is that: 1) The Gateses had some true reason for donating most of the money -- probably a combination of "want to do a lot of good", "want to become famous", etc. -- and they decided that these goals are more important for them than maximizing the inheritance of their children. I am not criticizing them for making that decision; I think it is a correct one, or at least in a good direction. 2) But the explanation that they want their children to "make their own mark on the world" is most likely a rationalization of the previous paragraph. It's like, where the true version is "saving a thousand human lives is more important for me than making my child twice as rich", this explanation is trying to add "...and coincidentally, not making my child twice as rich is actually better for my child, so actually I am optimizing for my child", which in my opinion is clearly false, but obviously socially preferable. 3) What specifically would one do to literally optimize for the chance that their children would "make their own mark on the world"? I am not going into details here, because that would depend on the specific talents and interests of the child, but I believe it is a combination of giving them more resources; spending more resources on their teachers or coaches; spending my own time helping them with their own projects. 4) I can imagine being the child, and selfishly resenting that my parents did not optimize for me. 5) However, I think that the child still has more money than necessary to have a great life. My whole point is that (2) is a rationalization.
0gjm
OK, I understand. Thanks.
0Richard_Kennaway
Does this work? I don't know; I have no children.
0Lumifer
Who are you arguing against? I saw no one express the position that you're attacking. Huh? Who stopped there? Do you have any reason to believe that the Gates handed their kids a "small" check and told them to get lost?
2Viliam
Sure, it can be used for whatever purpose. So now we have an empirical question of what is the average usage of inheritance in real life. Or even better, the average usage of inheritance, as a function of how much was inherited, because patterns at different parts of the scale may be dramatically different. I would like to read a data-based answer to this question. (My assumption is that the second generation usually tries to copy what their parents did in the later period of life, only less skillfully because regression to the mean; and the third generation usually just wastes the money. If this is true, then it's the second generation, especially if they are "criminals, sons of criminals", that I worry about most.)
1TheAncientGeek
I don't think it's a question of more research being needed; I think it's an issue of the original two categories being too few and too sharply delineated.
0gjm
Yeah, should have been (1) by creating value, (2) by taking value from others by force or fraud, (3) by being given value willingly by benevolently disposed others. Of these #3 is rather rare except for inheritance (broadly understood; parents may give their children a lot of money while still alive). Make it "essentially two different ways how people or families get rich", though, and the remaining cases of #3 are probably rare enough to ignore. Here's another case that isn't so neatly fitted into Viliam's dichotomy. Suppose your culture values some scarce substance such as gold, purely because of its scarcity, and you discover a new source of that substance or a new way of making it. You haven't created much value because the stuff was never particularly valued for its actual use, but it's not like you stole it either. What actually happened: everyone else's gold just got slightly less valuable because gold became less scarce, and what they lost you gained. But for some reason gold mining isn't usually considered a variety of large-scale theft. Of course gold has some value. You can make pretty jewelry out of it, and really good electrical contacts, and a few other things. But most of the benefit you get if you find a tonne of gold comes from its scarcity-value rather than from its practical utility. Printing money has essentially the same effect, but isn't generally used directly to make individuals rich.
1TheAncientGeek
"Make it "essentially two different ways how people or families get rich", though, and the remaining cases of #3 are probably rare enough to ignore." I think inheritance is an important case, because lack of inherited wealth, by default, is what leads to some people being excluded from becoming self-made millionaires like Mr Graham; and because inheritance isn't inevitable, it's something that can be adjusted independently of other variables. "Here's another case that isn't so neatly fitted into Viliam's dichotomy. Suppose your culture values some scarce substance such as gold, purely because of its scarcity, and you discover a new source of that substance or a new way of making it. You haven't created much value because the stuff was never particularly valued for its actual use, but it's not like you stole it either. What actually happened: everyone else's gold just got slightly less valuable because gold became less scarce, and what they lost you gained. But for some reason gold mining isn't usually considered a variety of large-scale theft." How natural resources are dealt with is an important point in political philosophy. If you think people are entitled to keep whatever they find, you end up with a conservative philosophy, if you think they should be shared or held in common you end up with a leftish one.
1gjm
In case it wasn't clear: So do I. But if we think of families rather than individuals as the holders of wealth, Viliam's two ways of getting rich cover the available options fairly well; that's all I was saying.
6Lumifer
Well, but he's writing an essay and has a position to put forward. Not being blind to counter-arguments does not require you to never come to a conclusion. At a crude level, the pro arguments show the benefits, the contra arguments show the costs, but if you do the cost-benefit analysis and decide that it's worth it, you can express a definite position without necessarily ignoring chunks of reality.
0The_Lion
So why are you focusing your complaining on Paul Graham's essay rather than on the essays complaining about "economic inequality" without even bothering to make the distinction? What does that say about your "ugh fields"? In fact a remarkable number of the people pursuing strategy (1) are the same people railing against economic inequality. One would almost suspect they're intentionally conflating (1) and (2) to provide a smokescreen for their actions. Also, since strategy (1) requires more social manipulation skills than strategy (2), the people pursuing strategy (1) can usually arrange for anti-inequality policies to mostly target the people in group (2).
1bogus
We hold someone like Paul Graham to higher standards than some random nobody trying to score political points. Isn't Graham one of the leading voices in the rationalist/SV-tech/hacker tribe?
-2The_Lion
Ok, while we're nitpicking Paul Graham's essay, I should mention the part of it that struck me as least rational when I read it. Namely, the sloppy way he talks about "poverty", conflating relative and absolute poverty. After all, thanks to advances in technology what's considered poverty today was considered unobtainable luxury several centuries ago.
1bogus
Advances in technology have certainly improved living standards across the board, but they have not done much for the next layer of human needs - things like social inclusion or safety against adverse events. Indeed, we can assume that, in reasonably developed societies (as opposed to dysfunctional places like North Korea or several African countries) lack of such things is probably the major cause of absolute 'poverty', since primary needs like food or shelter are easily satisfied. It's interesting to speculate about focused interventions that could successfully improve social inclusion; fostering "organic" social institutions (such as quasi-religious groups with a focus on socially-binding rituals and public services) would seem to be an obvious candidate.
5Richard_Kennaway
You have redefined "absolute poverty" to mean "absolute poverty on a scale revised to ignore the historic improvements", i.e. relative poverty. The internet has done a great deal for that. Which ones? Disease? Vast progress. Earthquakes and hurricanes? We make better buildings, better safety systems. Of course, we can also build taller buildings, and cities on flood plains, so the technology acts on both sides there. Institutions that require focused interventions to foster them are the opposite of "organic". Besides, "quasi-religious groups with a focus on socially-binding rituals and public services" already exist. Actual religions, for example, and groups such as Freemasons.
0bogus
I'm not 'redefining' the scale absolute poverty is measured on, or ignoring the historic improvements in it. These improvements are quite real. They're also less impressive than we might assume by just looking at material living standards, because social dynamics are relevant as well.
0bogus
Sure, but does rent-seeking really explain the increase in inequality since, say, the 1950s or so, which is what most folks tend to be worried about and what's discussed in Paul Graham's essay? I don't think it does, except as a minor factor (that is, it could certainly explain increased wealth among congress-critters and other members of the 'Cathedral'); the main factor was technical change favoring skilled people and sometimes conferring exceptional amounts of wealth on random "superstars".
4Viliam
I don't know. Seems to me possible that people like Paul Graham (or Eliezer Yudkowsky) may overestimate the impact of technical change on wealth distribution because of selection bias -- they associate with people who mostly make wealth using the "fair" methods. If instead they spent most of their time among African warlords, or Russian oligarchs, or whatever their more civilized equivalent in the USA is, maybe they would have very different models of how wealth works. Technological progress explains why the pie is growing, not how the larger pie is divided. There are probably more people who got rich selling homeopathics than who got rich founding startups. Yet in our social sphere it is a custom to pretend that the former option does not exist, and focus on the latter.
0ChristianKl
If you look at the Forbes list there aren't many African warlords on it. Which people do you think became billionaires mainly by selling homeopathics? Homeopathics is a competitive market where there's no protection from competitors that allows charging high sums of money, the way startups like Google do by producing a Thielean monopoly.
2gjm
It seems possible that African warlords' wealth is greatly underestimated by comparing notional wealth in dollars. E.g., if you want to own a lot of land and houses, that's much cheaper (in dollars) in most of Africa than in most of the US. If you want a lot of people doing your bidding, that's much cheaper (in dollars) in most of Africa than in most of the US.
0ChristianKl
On the other hand the African warlord has to invest resources into avoiding getting murdered.
0gjm
Yup. It's certainly not clear-cut, and there are after all reasons why the more expensive parts of the world are more expensive.
0Viliam
Money has more or less logarithmic utility. So selling homeopathics could still bring higher average utility (although less average money) than startups. For every successful Google there are thousands of homeopaths.
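A quick sketch of the arithmetic behind this (figures purely hypothetical), taking $u(w) = \log_{10} w$ as the utility of wealth $w$:

$$\underbrace{\log_{10} 10^{9}}_{\text{one billion-dollar founder}} = 9 \qquad \text{vs.} \qquad \underbrace{1000 \times \log_{10} 10^{6}}_{\text{1000 millionaire homeopaths}} = 6000.$$

Because $\log$ is concave, the same total money spread over many moderate earners sums to far more utility than one huge fortune; even at a tenth of the money each ($10^{5}$ apiece), the thousand homeopaths would still sum to $1000 \times 5 = 5000$.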
0ChristianKl
That depends on your goals. If you want to create social or political impact with money, it's not true. Large fortunes are mostly made in tech, resources, and finance.
0tut
I think the generalized concept is 'politicians'. And yeah, that sounds likely. But I would say that the problem is that the ones who make the rules and the ones who explain to everyone else what's what all live in an environment where earning something honestly is weird. That there are some who are not in such a bubble is not the problem.
4Viliam
Oligarchs are the level above politicians. You can think about them as the true employers of most politicians. (If I can make an analogy, for a politician the voters are merely a problem to be solved; the oligarch is the person who gave them the job to solve the problem.) Imagine someone who has incredible wealth, owns a lot of press in the country, and is friendly with many important people in police, secret service, et cetera. The person who, if they like you as a wannabe politician, can give you a lot of money and media power to boost your career, in return for some important decisions when you get into the government.
2Lumifer
So, can you tell us who employs Frau Merkel? M. Hollande? Mr. Cameron? Mr. Obama? Please be specific.
3Viliam
This requires a good investigative journalist with a good understanding of economics. Which I am not. I could tell you some names for Slovakia (J&T, Penta, Brhel, Výboh), which you would probably have no way to verify. (Note that the last one doesn't even have a Wikipedia page. These people generally prefer privacy, they own most of the media, they have a lot of money to sue you if you write something negative about them, and they also own the judges, which means they will win every lawsuit.) I am not even sure if countries other than the ex-communist ones use this specific model. (This doesn't mean I believe that the West is completely fair. More likely the methods of "power above politicians" in the West are more sophisticated, while in the East sophistication was never necessary if you had the power -- you usually don't have to go far beyond "the former secret service bosses" and check whether any of them owns a huge economic empire.)
0Lumifer
Ah, well, that's a rather important detail. I'm not saying that your model is entirely wrong -- just that it's not universally applicable. By the way, another place where you are likely to find it is in Central and South America. However, I think it's way too crude to be applied to the West. The interaction between money and power is more... nuanced there, and recently state power seems to be ascending.
2Lumifer
Except that, well, you know, in Soviet Russia the politician is above the oligarchs :-D
0ChristianKl
He does something. He uses niceness as a filter to keep people who aren't nice out of YCombinator. YCombinator has standardized term sheets to prevent bad VCs from ripping off companies by adding non-transparent terms. I have read that YCombinator works as a sort of union for startup founders, whereby a VC can't abuse one YCombinator company because word would get around within YCombinator and the VC would suffer negative consequences from abusing a founder.
1Viliam
Yes. But for a person who is focused on the problem of "people taking a lot of value from others by force and fraud" this is like a drop in the ocean. Okay, PG has created a bubble of niceness around himself; that's very nice of him. What about the remaining 99.9999% of the world? Is this solution scalable? EDIT: Found a nice quote in Mean People Fail:
5ChristianKl
If you take a single YCombinator company like AirBnB, I think it affects a lot more than 0.0001% of the world. The solution of standardized term sheets seems to scale pretty well. The politics of standardized term sheets aren't sexy, but they matter a great deal. Power in our society is heavily contractualized. As for the norms of YCombinator being scalable, YCombinator itself can scale to be bigger. YCombinator is also a role model for other accelerators, due to the fact that it's the only accelerator that has produced unicorns. Apart from that, the idea that Paul Graham fails because he doesn't single-handedly turn the world towards the good is ridiculous. You criticize him for not signaling that he cares by talking enough about the issue. I think you get the idea of how effective political action looks very wrong. It's not about publicly complaining about evil people and proposing ways to fight them. It's about building effective communities with good norms. Think globally but act locally. Make sure that your environment is doing well so that it can grow and become stronger.
4gjm
So maybe it's only 99.99% rather than 99.9999%. I don't think this really affects Viliam's point, which is that if a substantial fraction of the world's economic inequality arises from cause 2 (taking by force or fraud) more than from cause 1 (creating value), and Paul Graham writes and acts as if it's almost all cause 1, then maybe Paul Graham is doing the same thing he complains about other people doing and ignoring inconvenient bits of reality. Note that PG could well be doing that even if when working on cause 1 he takes some measures to reduce the impact of cause 2 on it. It's not like PG completely denies that some people get rich by exploiting or robbing others; Viliam's suggesting only that he may be closing his eyes to how much of the world's economic inequality arises that way. If you have a world full of evil then don't you want to do both of (1) fight the evil and (2) build enclaves of not-evil?
6Lumifer
That may have been implied, but wasn't stated. Is it actually Viliam's point? I am not sure how true it is -- consider e.g. Soviet Russia. A lot of value was taken by force, but economic inequality was very low. Or consider the massive growth of wealth in China over the last 20 years. Where did this wealth come from -- did the Chinese create it or did they steal it from someone? This is a tricky subject because Marxist-style analysis would claim that capital owners are fleecing the workers who actually create value and so pretty much all wealth resulting from investment is "stolen". If we start to discuss this seriously, we'll need to begin with the basics -- who has the original rights to created value and how are they established?
5Viliam
I believe this, at least in the long run; i.e. that even if once in a while some genius creates a lot of wealth and succeeds in capturing a significant amount of it, sooner or later most of that money will pass into the hands of people who are experts at taking value from others. No Marxism here, merely an assumption that people who specialize in X will become good at X, especially when X can be simply measured. Here X is "taking value from others". Nope, that was merely the official propaganda. In fact, high-level Communists were rich. Not only did they have much more money, but perhaps more importantly, they were allowed to use "common property" that the average muggle wasn't allowed to touch. For example, there would be a large villa that nominally belonged to the state, but in fact someone specific lived there. Or there would be a service provided nominally to anyone (chosen by an unspecified algorithm), but in fact only high-level Communists had that service available and average muggles didn't. High-level Communists were also in much better positions to steal things or blackmail people. How is this wealth distributed among the specific Chinese? It can be both true that "China" created the wealth, and that the specific "Chinese" who own it mostly stole it (from the other Chinese). My argument is completely unrelated to this. For me the worrying part about rich people is that they can use their wealth to (1) commit crimes more safely, and even (2) change laws so that the things they wanted to do are no longer crimes, but the things that other people wanted to do suddenly become crimes.
0Lumifer
I disagree. As I mentioned, they did live better (more comfortably, with higher consumption) than the peons, but not to the degree that I would call "rich". I don't believe that critics of communist regimes, both internal and external, called the party bosses "rich" either. For comparison, consider, say, corrupt South/Central American dictatorships. Things have changed, of course. Putin is very rich. You are worried about power, not wealth. It's true that wealth can be converted to power -- sometimes, to some degree, at some conversion rate. But if you actually want power, the straightforward way is to attempt to acquire more power directly. There is also the inverse worry: if no individuals have power, who does? Is it good for individuals to have no power, to be cogs/slaves/sheep?
3gjm
I'll let Viliam answer that one (while remarking that the bit you quoted certainly isn't what I claimed V's point to be, since you chopped it off after the antecedent). That's not a counterexample; what you want is a case where economic inequality was high without a lot of value being taken by force. Mostly a matter of real growth through technological and commercial advancement, I've always assumed. (Much of it through trade with richer countries -- that comes into category 1 in so far as the trade was genuinely beneficial to both sides.) But I'm far from an expert on China. It seems like one could say that about a very wide variety of issues, and that it's more likely to prevent discussion than to raise its quality in general. As for the actual question with which you close: I am not convinced that moral analysis in terms of rights is ever the right place to begin.
0Lumifer
I am not so much asking for moral analysis as for precise definitions for "using force or fraud in a wider meaning of the word; sometimes perfectly legally; often using the wealth they already have as a weapon". That seems like a very malleable part which can be bent into any shape desired.
0gjm
Well, that would be for Viliam to clarify rather than for me, should he so choose. It doesn't seem excessively malleable to me, for what it's worth.
1IlyaShpitser
I am contesting this.
0Lumifer
The first part, or the second, or both?
0IlyaShpitser
Second.
0Lumifer
To get a bit more concrete I'm talking about the Soviet Russia of the pre-perestroika era, basically Brezhnev times. Do you have something specific in mind? Of course party bosses lived better than village peons, but I don't think that the economic inequality was high. Money wasn't the preferred currency in the USSR -- it was power (and access).