As per a recent comment this thread is meant to voice contrarian opinions, that is anything this community tends not to agree with. Thus I ask you to post your contrarian views and upvote anything you do not agree with based on personal beliefs. Spam and trolling still needs to be downvoted.

812 comments

Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.

Which dualism?
Do you mean that, without strong evidence that we don't have, we should assume dualism, or that we have strong evidence for dualism? If it's the second one, can you give me an example of such a piece of evidence?
The second position. An example of the evidence is the two-way causal connection between your inner subjective experiences and the external universe.

How is that better explained by dualism?

Indeed. Two-way interaction is as well or better explained by physicalism.
I upvoted because I disagree (strongly) with the second conjunct, but I do agree that certain varieties of dualism are coherent, and even attractive, theories of mind.

[Please read the OP before voting. Special voting rules apply.]

Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.

What would you have to see to convince you otherwise?
I think it would take an a priori philosophical argument, rather than empirical evidence.
Wouldn't cognitive science or neuroscience be sufficient to falsify such a theory? All we really have to do is show that "good life", as seen from the inside, does not correspond to maximized happy-juice or dopamine-reward.
The most that would show is what humans tend to prefer, not what they should prefer.
You're going to have to explain what meta-ethical view you hold such that "prefer on reflection given full knowledge and rationality" and "should prefer" are different.
I don't think neuroscience would tell you what you'd prefer on reflection given full knowledge and rationality.
Sufficiently advanced cognitive science definitely will, though.
I'm skeptical of that.
I can think of something I prefer, on reflection, against wireheading. Now what?
There are a lot of things that people are capable of preferring that aren't pleasure; the question is whether they're what they should prefer.
Awfully presumptuous of you to tell people what they should prefer.
Why? We do this all the time, when we advise people to do something different from what they're currently doing.
No, we don't. That's making recommendations as to how they can attain their preferences. That you don't seem to understand this distinction is concerning. Instrumental and terminal values are different.
My position is in line with that - people are wrong about what their terminal values are, and they should realize that their actual terminal value is pleasure.
Why is my terminal value pleasure? Why should I want it to be?
Fundamentally, because pleasure feels good and preferable, and it doesn't need anything additional (such as conditioning through social norms) to make it desirable.
Why should I desire what you describe? What's wrong with values more complex than a single transistor? Also, naturalistic fallacy.
It's not a matter of what you should desire, it's a matter of what you'd desire if you were internally consistent. Theoretically, you could have values that weren't pleasure, such as if you couldn't experience pleasure. Also, the naturalistic fallacy isn't a fallacy, because "is" and "ought" are bound together.
Why is the internal consistency of my preferences desirable, particularly if it would lead me to prefer something I am rather emphatically against? Why should the way things are be the way things are?
(Note: Being continuously downvoted is making me reluctant to continue this discussion.) One reason to be internally consistent is that it prevents you from being Dutch booked. Another reason is that it enables you to coherently be able to get the most of what you want, without your preferences contradicting each other. As far as preferences and motivation are concerned, however things should be must appeal to them as they are, or at least as they would be if they were internally consistent.
Retracted: Dutch booking has nothing to do with preferences; it refers entirely to doxastic probabilities. I very much disagree. I think you're couching this deontological moral stance as something more than the subjective position that it is. I find your morals abhorrent, and your normative statements regarding others' preferences to be alarming and dangerous.
You can be Dutch booked with preferences too. If you prefer A to B, B to C, and C to A, I can make money off of you by offering a circular trade to you.
Only if I'm unaware that such a strategy is taking place. Even if I was aware, I am a dynamic system evolving in time, and I might be perfectly happy with the expenditure per utility shift. Unless I was opposed to that sort of arrangement, I find nothing wrong with that. It is my prerogative to spend resources to satisfy my preferences.
That's exactly the problem - you'd be happy with the expenditure per shift, but every time a full cycle was made, you'd be worse off. If you start out with A and $10, pay me a dollar to switch to B, another dollar to switch to C, and a third dollar to switch back to A, you'd end up with A and $7, worse off than you started, despite being satisfied with each transaction. That's the cost of inconsistency.
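The money-pump arithmetic can be checked with a small sketch (names here are hypothetical; preferences are read in the direction that makes each offered swap attractive, matching the A to B to C to A trade sequence above):

```python
# Money-pump sketch: an agent with cyclic preferences pays $1 per swap.
# "prefers" maps each holding to the item the agent would pay to swap into.
prefers = {"A": "B", "B": "C", "C": "A"}

def run_cycle(holding, cash, fee=1):
    """Offer the agent its preferred swap three times, charging a fee each time."""
    for _ in range(3):
        offer = prefers[holding]
        if cash >= fee:          # each individual trade looks good locally
            holding, cash = offer, cash - fee
    return holding, cash

holding, cash = run_cycle("A", 10)
# The agent ends the cycle holding A again, but with $7 instead of $10.
```

Each swap is locally acceptable to the agent, yet the cycle as a whole leaves it strictly poorer, which is the inconsistency being pointed out.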
And 3 utilons. I see no cost there.
But presumably you don't get utility from switching as such, you get utility from having A, B, or C, so if you complete a cycle for free (without me charging you), you have exactly the same utility as when you started, and if I charge you, then when you're back to A, you have lower utility.
If I have utility in the state of the world, as opposed to the transitions between A, B, and C, I don't see how it's possible for me to have cyclic preferences, unless you're claiming that my utility doesn't have ordinality for some reason. If that's the sort of inconsistency in preferences you're referring to, then yes, it's bad, but I don't see how ordinal utility necessitates wireheading.
Regarding inconsistent preferences, yes, that is what I'm referring to. Ordinal utility doesn't by itself necessitate wireheading, such as if you are incapable of experiencing pleasure, but if you can experience it, then you should wirehead, because pleasure has the quale of desirability (pleasure feels desirable).
And you think that "desirability" in that statement refers to the utility-maximizing path?
I mean that pleasure, by its nature, feels utility-satisfying. I don't know what you mean by "path" in "utility-maximizing path".
Can you define 'terminal values', in the context of human beings?
Terminal values are what are sought for their own sake, as opposed to instrumental values, which are sought because they ultimately produce terminal values.
I know what terminal values are and I apologize if the intent behind my question was unclear. To clarify, my request was specifically for a definition in the context of human beings - that is, entities with cognitive architectures with no explicitly defined utility functions and with multiple interacting subsystems which may value different things (i.e. emotional vs. deliberative systems). I'm well aware of the huge impact my emotional subsystem has on my decision making. However, I don't consider it 'me' - rather, I consider it an external black box which interacts very closely with that which I do identify as me (mostly my deliberative system). I can acknowledge the strong influence it has on my motivations whilst explicitly holding a desire that this not be so, a desire which would in certain contexts lead me to knowingly make decisions that would irreversibly sacrifice a significant portion of my expected future pleasure. To follow up on my initial question, it had been intended to lay the groundwork for this followup: What empirical claims do you consider yourself to be making about the jumble of interacting systems that is the human cognitive architecture when you say that the sole 'actual' terminal value of a human is pleasure?
That upon ideal rational deliberation and when having all the relevant information, a person will choose to pursue pleasure as a terminal value.

[Please read the OP before voting. Special voting rules apply.]

The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.

Imagine a physicist arguing that replication has no place in physics, because it can damage the careers of physicists whose experiments failed to replicate! Yet that's precisely the argument that the article makes about social psychology.

I read this trying to keep as open a mind as possible, and I think there is SOME value to SOME of what he said (i.e. no two experiments are totally the same, and replicators are often motivated to prove the first study wrong)... But one thing that really set me off is that he genuinely considers a study that doesn't prove its hypothesis a failure, not even acknowledging that IN PRINCIPLE, this study has proven the hypothesis wrong, which is valuable knowledge all the same. This is so jarring with what I consider the very basis of science that I find it difficult to take Mitchell seriously.

There is no territory, it's maps all the way down.

There are no maps, it's reality all the way up.

You might be facetious, but I suspect that it is another way of saying the same thing.
I suspect it isn't. The words map and territory aren't relative terms like up and down.
I meant to communicate the latter. We share this view.
The parent post implies a belief in non-psychism.
I'm not sure I understand what you mean.
That there are no representations. There is no computational system that can be said to be about something.
"The territory" is just whatever exists. It may well be an infinite series of entities, each more refined than the last. It's still a territory. If there is no territory, what is a map?
I don't normally call it a map, I call it a model, but whatever the name, it's something that turns observations into predictions of future observations, without claiming that the source of these observations is something called "reality". This can go as much meta as you like. The map-territory model is one such useful model, except when it's not.
Are you saying that the universe is built like Solomonoff induction? It randomly produces observations and eliminates possibilities that don't follow them? I'd still consider that as having a territory, but it's certainly contrarian. At the very least, your model of the universe implies the existence of a series of maps along a timeline.
I think this post should win the thread for blowing the most minds. (I'll upvote even though I think your position is tenable, since I only assign it 20% probability or so.)
I think the whole point is that there's no fact of the matter. "There are only maps" is a map, and on its own logic it's only as true as it is useful. I'm not sure how I would assign a probability to it.
That sounds awfully like social constructionism.
Never heard of it until now, had to look it up, couldn't find a decent writeup about it. This link seems to be the best, yet it does not even give a clear definition.
Executive summary of social constructionism: all of reality is socially agreed; nothing is objective.
I'm lost at "socially agreed". I define models as useful if they make good predictions. This definition does not rely on some social agreement, only on the ability to replicate the tests of said predictions.
That's the Motte version, not the Bailey version.
Can you unpack this? At the moment it seems nonsensical, in a "throwing together random words and hoping people read profound insights into it" way.
Sure. Have you actually seen "the territory"? Of course not. There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption. It works well in many cases. But it is still an assumption, a model.

To quote Brienne Strohl on noticing: To most people the map/territory observation is such a "one and the same". I'm suggesting that it's only a hypothesis. It gives way when making a map changes the territory (hello, QM).

It is also unnecessary, because the useful essence of the map/territory model is that "the future is partially predictable", in the sense that it is possible to take our past experiences, meditate on them for a while, figure out what to expect in the future, and see our expectations at least partially confirmed. There is no need to attach the notion of some objective reality causing this predictability, though admittedly it does feel good to pretend that we stand on solid ground, and not on some nebulous figment of imagination. If you extract this essence, that future experiences are predictable from past ones, and that we can shape our future experiences based on knowledge of the past, it is enough to do science (which is, unsurprisingly, designing, testing and refining models).

There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought our knowledge was nearly complete, that there was nothing more to discover except for one or two small things here and there. And then those small things became gateways to more surprising observations. Yet we persist in thinking that there are ultimate laws of the universe, and that some day we might discover them all. I posit that there are no such laws, and we will continue digging deeper and deeper, without ever reaching the bottom... because there is no bottom.
Thanks for explaining, upvoted. But I still don't see how this could possibly make sense. Our models have become more accurate over time. We've become, if you will, "less wrong". If there's no territory, what have we been converging to?

...Yes? I see it all the time.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me. If it's just maps generating our observations, I'd call the maps part of the territory. (Like a map with a picture of the map itself on the territory. Except, in your world, I guess, there's no territory to chart, so the map is a map of itself.) This feels like arguing about definitions.

I see how this might sorta make sense if we postulate that the Simulator Gods are trying really hard to fuck with us. Though still, in that case, I think the simulating world can be called a territory.
Indeed they have. We can predict the outcome of future experiments better and better. We've become, if you will, "less wrong".

Yep. If there's no territory, what have we been converging to? Why do you think we have been converging to something? Every new model generates more questions than it answers. Sure, we know now why emitted light is quantized, but we have no idea how to deal, for example, with the predicted infinite vacuum energy.

No, you really don't. What you think you see is a result of multiple layers of processing. What you get is observations, not unfettered access to this territory thing.

It is not a definition, it's a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.

First, I made no claims that maps generate anything; maps are what we use to make sense of observations. Second, if you define the territory the usual way, as "reality", then of course maps are part of the territory, everything is.

Not quite. You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs. This does not have to end.

I realize that is how you feel. The difference is that the assumption of a territory implies that we have a chance to learn everything there is to learn some day, to construct the absolutely accurate map of the territory (possibly at the price of duplicating the territory and calling it a map). I am not convinced that it is a good assumption. Quite the opposite: our experience shows that it is a bad one, it has been falsified time and again. And bad models should be discarded, no matter how comforting they may be.
You could argue that sensing is part of the territory while any thing that is sensed is part of the map, I think.
You could, but you should be very careful, since most of sensing is multiple levels of maps. Suppose you see a cat. So, presumably the cat is part of the territory, right? Well, let's see:

* What you perceive as a cat is constructed in your brain from genetics, postnatal development, education, previous experiences and nerve impulses reaching your visual cortex. There are multiple levels of processing: light entering through your eye, being focused, absorbed by light-sensitive cells, going through 3 or 4 levels of other cells before triggering spikes in the afferent fibers reaching deep into your visual cortex. The work done inside it to trigger the "this is a cat" subroutine in a totally different part of the brain is much, much more complex.
* Any of these levels can be disrupted, so that when you see a cat others don't agree (maybe someone drew a "realistic" picture to fool you, or maybe your brain constructed a cat image where that of a different but unfamiliar animal (say, a raccoon) would be more accurate). Multiple observations are required to validate that what you perceive as a cat behaves the way your internal model of the cat predicts.
* Even the light rays which eventually resulted in you being aware of the cat are simplified maps of propagating excitations of the EM field interacting with atoms in what could reasonably be modeled as cat fur. Unless it is better modeled as lines on paper.
* This stack of models currently ends somewhere in the Standard Model of particle physics. Not because it's the "ultimate reality", but because we don't have a good handle on how to continue building the stack.
* You could argue that all the things I have described are "real" and part of the territory. Absolutely you can. But then why stop there? If light rays are real and not just abstractions, then so are images of cats in your brain.
* Thus any model is as "real" as any other, though one can argue that accurate models (better at anticipating future experiences) are more useful than inaccurate ones.
By "sensing" I was referring to the end result of all those nerves firing and processes processing when awareness meets the result of all that stuff. I suppose I could have more accurately stated that awareness is a part of the territory as awareness arises directly from some part of your circuitry. Everything about the cat in your example may happen in the brain or not and so you can't really be sure that there's an underlying reality behind it, but awareness itself is a direct consequence of the configuration of the processing equipment.
So what is a map and not the territory in your example? The cat identification process? The "I see a cat" quale? I am confused.
Yes, the cat quale is map.
I'd argue that it is as real as any other brain process.
It's real, but the thing that's being experienced isn't the real thing. The cat quale is a real process, but it's not a real cat (probably). The part of processing the quale that is the awareness (not the object of awareness) is itself the real awareness and holds the distinction of actually being in the territory rather than in the map.
What is the point of science, otherwise? Better prediction of observations? But you can't explain what an observation is. If the territory theory is able to explain the purpose of science, and the no-territory theory is not, the territory theory is better. ...according to a map which has "inputs from the territory" marked on it. Well, you need to. If the territory theory can explain the very existence of observations, and the no-territory theory cannot, the territory theory is better. Inputs from where? No it doesn't. "The territory exists, but is not perfectly mappable" is a coherent assumption, particularly in view of the definition of the territory as the source of observations.
Is that contrarian? In the community I come from (physics), that's a pretty commonly considered theory, even if not commonly held as most probable.
I'm an ex-physicist, and I am pretty sure that realism, and more specifically scientific realism, is the standard, if implicit, ontology in physics.
That depends on exactly what it is supposed to mean. Some people use it to mean that reality is not accessible outside an interpretational framework - that's a Motte version. A Bailey version would be that there is literally nothing in existence except human-made theories. Physicists often aren't good at stating or noticing degrees of realism and anti-realism, since they aren't trained for it.
I didn't interpret shminux's statement as being about realism. There is also the theory that as we move to higher and higher energies we will uncover more and more specific rules and never reach a terminal fundamental rule set. In other words, the fundamental rules of the universe are fractally complex, with the fractal function being unknowable.
Maybe. But Shminux also says that the territory is a map, not that it is unmappable.
Every computation requires something that instantiates it, i.e. an abstract or concrete machine to run on. In a very extreme case you might come up with a very abstract idea; however, then the instantiation provider is the imaginer. Every bit of information requires a transfer of energy. Instantiation is a transitive relation: if there is a simulation of me, it necessarily instantiates my thoughts too. Also, the parent comment implies a belief in panpsychism.
Taken literally it is unlikely. However, it is not clear how literally it is meant to be taken.

Open borders is a terrible idea and could possibly lead to the collapse of civilization as we know it.

EDIT: I should clarify:

Whether you want open borders and whether you want the immigration status quo are different questions. I happen to be against both, but it is perfectly consistent for somebody to be against open borders but be in favor of the current level of immigration. The claim is specifically about completely unrestricted migration as advocated by folks like Bryan Caplan. Please direct your upvotes/downvotes to the former claim, rather than the latter.


[Please read the OP before voting. Special voting rules apply.]

Current levels of immigration are also terrible, and will significantly speed up the collapse of the Western world.

Citation required.
I'm not clear on whether it's actually a good idea, but if Bryan Caplan's arguments are the best available, it's definitely a horrible idea. He sidesteps all the potential problems without addressing them, or in some cases draws analogies that, when actually considered properly, indicate that it would be a bad idea.
I particularly like how he manages to switch between deontology and consequentialism in the same argument.
Actually here I disagree. There are many counterarguments listed on the Open Borders site, and for that I give him credit. Especially as he actually attempts to engage with racialist arguments, rather than dismissing them.
I remember at one point encountering the Open Borders site, considering that it sounded like a pretty good idea, then reading through much of the site and becoming decreasingly convinced as I read the specific arguments, which consisted of more holes than solid points. Recently, it's come up again, specifically in an interview with Caplan which was going around (I saw it via Kaj). Again, I was initially intrigued by the idea, but the more I saw of the actual arguments, the weaker they seemed. He seems to routinely deflect the significant concerns with non-denials and never actually address the pragmatic reasoning against it.
I'm not sure if it's a technique worth crediting. There are voting trend issues with Hispanic immigrants that do not boil down to paranoid delusions about the Reconquista, and there are arguments regarding crime and immigration that are strongly and obviously distinguishable from sending every African-American person -- including those innocent of any crime -- out of the country. I'm hard-pressed to believe those arguments were selected for any reason but their weakness and unpalatability. I personally favor reduced barriers to immigration (outside of a criminal background check and unique person identification, the modern limits are counterproductive at best), but writing up the worst arguments against that belief doesn't really strengthen them.
Hey, it's a step up from denying outright that certain types of immigrants will commit more crimes. A lot of people have drunk that Kool-Aid.
Why do you believe this? Countries with the most liberal immigration policies today don't seem to be on the verge of collapse.
Ebola is more an argument for colonialism than against open borders but let's not be picky.
Ebola is an example of a locally-originated virulent existential threat, biological, social or otherwise, that open borders fail to contain. Controlled borders, despite all the issues, can at least act as an immune system of sorts.
Define "controlled borders"? In the "open borders" context the debate is usually about residency and citizenship restrictions, but in the context of ebola those don't matter; tourists and airline workers and cargo ship crews and so on all carry diseases too.
Yes I agree, I was just being facetious :s
You should visit Bradford someday.
I'm sure Bradford isn't the greatest place to live, but (1) it's better than many US inner cities, (2) the UK seems quite far from collapse, and generally (3) "such-and-such a country allows quite a lot of immigration, and there is one city there that has a lot of immigrants and isn't a very nice place" seems a very very very weak argument against liberal immigration policies.
On the other hand, "such-and-such a country allows quite a lot of immigration, and the niceness of a city inversely correlates with the number of immigrants there" is a stronger argument. Especially if I can get an even stronger correlation by conditioning on types of immigrants.
Stronger, yes. But...

* It's far from clear that the central premise is correct. (Cambridge has a lot of immigrants and I think it's very nice. I'm told Stoke-on-Trent is pretty rubbish but it has few immigrants. Two cherry-picked cases don't tell you much about correlation but, hey, that's one more case than bramflakes offered.)
* The differential effects of immigration within a country might look different from the overall effects on the country as a whole. (Toy model, not intended to be a description of how things actually are: suppose some immigrant group produces disproportionate numbers of petty criminals and brilliant business executives; then maybe areas with more of that group will have more crime but by the magic of income tax the high earnings of the geniuses will make everyone better off.)
* For some people -- I am not claiming you are one -- the very fact that a place has more immigrants (or more of particular "types of immigrants", nudge nudge wink wink) makes it less nice. Those who happen not to feel that way may have a different view of the correlation between niceness and immigration from those who do. To take a special case, the immigrants themselves probably don't feel that way, and for some who favour liberal immigration policies the benefit to the immigrants is actually an important part of the point.
Actually they probably do. That's why they immigrated in the first place. Well it's remarkable how strong a correlation there is between one's support for immigration and how strong a bubble one has around oneself to protect oneself from them. Look how many of the most prominent immigration advocates live in gated communities.
I can think of reasons why someone might migrate from country A to country B other than preferring country B's people to country A's. [EDITED to add: Maybe I should give some examples, in case they really aren't obvious. Country B might have: a better political system, less war, more money, better treatment for some group one's part of (women, gay people, intellectuals, Sikhs, ...), less disease, nicer climate, lower taxes, better public services, better jobs, better educational opportunities. Some of those might in some cases be because country A's people are somehow better, but they needn't be, and even if in fact Uzbekistan has lower taxes because it has fewer Swedes and Swedes have a genetic predisposition to raise taxes, someone migrating from Sweden to Uzbekistan for lower taxes needn't be aware of that and needn't have any preference for not being around Swedes.]

I am interested: How strong, and how many? Do you have figures? (And how does it compare with how many of the most prominent advocates of anything you care to mention live in gated communities? The most prominent people in any given group are more likely to be rich, and richer people more often live in gated communities.)

In any case, assuming for the sake of argument that there is indeed a positive correlation between being "protected" from immigrants and supporting letting more of them in: I don't understand how your reply is responsive to what I wrote. It seems exactly parallel to this: "Many people advocate prison reform for the sake of the prisoners." "Oh yeah? Well, a lot of those people prefer to live in places with lower crime rates." Which is true enough, but hardly relevant. There's no inconsistency between wanting some group of people to be better off, and having a personal preference for not living near a lot of them.
Rich people are more likely to advocate open borders. As for prominent people: Mark Zuckerberg bought the four houses surrounding his own "because he wanted more privacy". Bryan Caplan prides himself on the bubble he's constructed around himself.
Again, I'd be interested in the statistics. (That isn't a coded way of saying I think you're wrong, by the way. But I'd be interested to know how big the differences are, whether it depends on what you mean by "rich", etc.) I'm not sure why this is relevant. I'm guessing that both of those people advocate open borders, but surely the absolute most any observation of this form could show is that there are at least two people in the world who advocate open borders for bad reasons, or advocate open borders but are terrible people, or something. How can that possibly matter?
Of course, one consequence of this is that if enough Swedes migrate they'll destroy the aspect of Uzbekistan that attracted them in the first place. It is hypocritical in the original sense of the term, the one from which the word's negative connotations derive, i.e., a leader who insists that the group make sacrifices for the "greater good" without participating in those sacrifices himself.
Until the number of Swedes in Uzbekistan is extremely large, it'll presumably still be better than Sweden in that respect. That doesn't actually seem to be the original sense of the term, at least according to my reading of the OED, but I don't think it matters. Anyway, let's suppose you're right and some advocates of liberal immigration policies are hypocrites in that sense. I don't see how that's evidence that the policies are bad, nor do I see how it's responsive to what your comment was a reply to (namely, a claim that many people advocate liberal immigration policies for the benefit of the immigrants). I'm still curious about "how strong, and how many", by the way. I assume, from what you said on this point, that you have figures; I'd love to see them.
Other than climate and to some extent money and war and disease, these mainly depend on which kind of people Country B has.
"... niceness of a city inversely correlates with the number of immigrants there" Ask any Native American, ho ho.
I'm being flippant of course. I didn't intend it as a serious argument. Quick response: 1) You cannot compare the UK's cities to the US' cities because the US has a 14% black population and the UK does not. "Inner city" is a codeword for the kind of black dysfunction that thankfully the UK does not possess. 2) The UK is not close to collapse because we don't have fully Open Borders yet. For all its faults, the EU's migration framework isn't quite letting in millions of third-worlders yet. 3) Of course. If you don't mind, I don't want to get into a lengthy debate on the subject.
I am quite happy not to have a lengthy debate with you on this topic.
The difference is only apparent; both societies have treated their nonwhites like trash. The British Empire merely avoided its "dysfunction" problem at home by outsourcing it to India.
Then why, despite the xenophobic laws of the 19th and early 20th centuries, are East Asians a dominant minority in the US? Why, despite a millennium of antisemitism, are Ashkenazim getting 27% of Nobels and making up about a quarter [edit: not sure of exact number] of US billionaires? White people have treated all nonwhites like trash at some point or another, yet there's a giant variation in outcomes. Racism as an all-powerful explanation of black dysfunction is untenable.

White people have treated all nonwhites like trash at some point or another

I think that most peoples have treated some other tribe as trash at some point or another. The particular case which prompted this response was the English and the Irish, but the list of examples is very long.

A non-representative event happened and was blown out of proportion by the media.
What event are you talking about? If you mean the Pakistani rape gang, that was Rotherham, not Bradford.
Collapsing civilisation as we know it is presumably not a bad thing if you think that our current civilisation is fundamentally unjust or suboptimally allocates resources based on arbitrary geographic boundaries.
I'll take both of those over the Camp of the Saints.

[Please read the OP before voting. Special voting rules apply.]

As a first approximation, people get what they deserve in life. Then add the random effects of luck.

Max L.

Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?

I count "the circumstances into which you are born" as luck. I'd guess it is the biggest component of luck, along with being struck by a disabling genetic condition or exposed to a pandemic. So the first observation has more salience in similar groups of people. For example, the group of people that I hang out with or work with are roughly similar enough for desert to have more salience than luck. But perhaps that means that birth-luck should be the first approximation, then desert, then additional luck. Max L.
Can you give me an example of something that is neither desert nor luck?
Very nice question; better, in fact, than the statement to which you responded. Examples I have in mind:

* Personal-level injustice.
* Social injustice.
* How other people treat you.

But my primary point was whether things for which we are personally responsible are a bigger or lesser influence than luck. That is, if I am guessing with little knowledge, I am going to guess desert before luck for most groups with which I'd be interacting. (Also, I am thinking that variation in luck, when the fact of variation is predictable and bad luck can be insured or mitigated, is desert, not luck.) Particular applications might make it more clear. If you don't have a job in America, and you appear physically able to work, my first guess is that you are the biggest contributor to your unemployment. If you are unhealthy in America, and weren't born with it, my first approximation will be that you contributed mightily to your poor health. And so on. Max L.
If you fail to buy car insurance, you deserve the expected cost? I was thinking deserving something bad meant you did something bad, not that you did something stupid. When you say "deserve," do you mean to imply that it is terminally better for people who deserve more to get more, and people who deserve less to get less?
If you fail to buy auto liability insurance and cause an accident (which is entirely predictable over long periods), then my first guess is that you deserve the impoverishment that comes from the situation. If you fail to buy uninsured motorist insurance and are in an accident that you don't cause (which is entirely predictable) and the at-fault driver has no insurance and can't pay (which is also entirely predictable), then my first approximation is still pretty good. It is a little off because you could be beset with a string of bad luck. I think of it the other way around. If I see someone happy and reasonably well off, I am first going to say that they had a hand in it. If I see someone continually unhappy or impoverished (setting aside birth luck), my first guess is also going to be that they are mainly responsible for their own outcomes. Turning it round, they are usually getting what they deserve. Whether that is better or not depends on more than individual morality, so no, I'm not saying it is better. Also, the examples seem to have focused on material outcomes, since they are easier to talk about, but I'm also thinking of non-material things. Relationships, self-esteem, etc. Max L.
What ethical theory are you using for your definition of "deserve"?
It is a fine question, since the word "deserve" is the link between an observation and a judgment about the person. I don't think I need an answer to it to make the observation that most people here don't hold that view. Which is a good thing, because I don't think I have a satisfactory answer beyond rough moral intuition. Max L.

[Please read the OP before voting. Special voting rules apply.]

Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.

Yes, Yes, No. Still upvoting, because "Scott Alexander" and "uncharitable" in the same sentence does not compute.

I consider him a modern G.K. Chesterton. He's eloquent, intelligent, and wrong.

Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because it's a hotly debated topic etc., I'd appreciate it if you could PM me; I won't take you down the 'let's argue feminism' rabbit-hole.)

(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)

Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.
For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men. I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate their relevant aspects -- if anything, I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves. The problem is that it's not very charitable. Scott's the person that's /come up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of only looking at the . But he's looking at a viewpoint shown primarily in the <5% margin feminist tumblr, and comparing them to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst. I'm not sure he /needs/ to be charitable, again -- feminism should h
Being 5% of the group doesn't mean they are 5% of the influence. The loudest 5% may get to set the agenda of the remaining 95% if the remaining ones are willing to go along with things they don't particularly care about, but don't oppose enough to make these things deal-breakers either.
It also helps if the 5% have arguments for their positions.
See also:
According to the 2013 LW survey, when asked their opinion of feminism on a scale from 1 (low) to 5 (high), the mean response was 3.8, and social justice got a 3.6. So it seems that "feminism is a good thing" is actually not a contrarian view. If I might speculate for a moment, it might be that LW is less feminist than most places, while still having an overall pro-feminist bias.
If by most places you're talking about the world (or Western/American world) in general, that's pretty clearly false. The considerable majority of Americans reject the feminist label, for example. If you're talking about internet communities with well-educated members, then it probably is true.
How would you define "privilege"?

Easier difficulty setting for your life in some context through no fault or merit of your own.

So would you describe someone tall as having "height privilege" because they're better at basketball?

I'd argue that height privilege (up to a point, typically around 6'6") is a real thing, having nothing to do with being good at sports. There is a noted experiment, which my google-fu is currently failing to turn up, in which participants were shown a video of an interview between a man and a woman. In one group, the man was standing on a footstool behind his podium, so that he appeared markedly taller than the woman. In the other group, the man was standing in a depression behind his podium, so that he appeared shorter. The content of the interview was identical.

Participants rated the man in the "taller" condition as more intelligent and more mature than the same man in the "shorter" condition. That's height privilege.

There's also a large established correlation between height and income, though not enough to completely rule out a potential common cause like "good genes" or childhood nutrition.
You really need riders to the effect that privilege of an objectionable kind is unrelated to achievement or intrinsic abilities.
The problem is that most of the examples SJW object to are in fact related to achievement or intrinsic abilities.
This is a good definition. In particular, "Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group, who are usually unaware of the privilege they possess. ... A privileged person is not necessarily prejudiced (sexist, racist, etc) as an individual, but may be part of a broader pattern of *-ism even though unaware of it." No, this is not a motte.

Why the "majority group" qualifier? Privilege has been historically associated with minorities, like aristocracy.

Does it have to be a majority group? For example, does this compared with this count as an example of "black privilege"? Would you describe the fact that some people are smarter (or stronger) than others as "intelligence privilege" (or "strength privilege")?
That's in the bailey, because of "enjoyed by a majority group."
Why focus only on specific majority groups, and thereby ignore things like men in domestic violence situations getting a lot less help from society than women? Nearly everyone has some advantages and disadvantages. It's often not helpful to collapse that huge bag of advantages and disadvantages into a single variable.
Like a few others, I agree with the first two but emphatically disagree with the last. And if you were right about it, I'd expect Ozy to have taken Scott to task about it, and him to have admitted to being somewhat wrong and updated on it. EDIT: This has, in fact, happened.
See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence. You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).
OK, those things have indeed happened, to some degree. Above comment corrected. I still don't understand what is uncharitable about the Wordsx3 post specifically. It accurately describes the behavior of a number of people I know (as in, have met, in person, and interacted with socially, in several cases extensively in a friendly manner), and I have no reason to consider them weak examples of feminist advocacy and every reason to consider them typical (their demographics match the stereotype). I have carefully avoided ending up on the receiving end of it, because friends of mine have honestly challenged aspects of this kind of thing and been ostracized for their trouble.
There's something wrong with the first link (I guess you typed the URL on a smartphone autocorrecting keyboard or similar). EDIT: I think this is the correct link.
Yeah, that happened when I edited a different part from my phone. Thanks, fixed.
Imo this quote from her response is a pretty weak argument: "The concept of female privilege is, AFAICT, looking at the disadvantages gender-non-conforming men face, noticing that women with similar traits don’t face those disadvantages, and concluding that this is because women are advantaged in society." In order for this to be a sensible counterpoint you would need to either say "gender-conforming male privilege" or you would need to show that there are few men who mind conforming to gender roles. I don't really see why anyone believes most men are fine with living out standard gender norms, and I certainly don't see how anyone has evidence for this. If a high percentage of men are gender non-conforming, and such men are at a disadvantage in society, then the concept of male privilege is seriously weakened. And using it is dangerous, as it might harm those men to hear that they are "privileged" when this is not the case (at least in terms of gender; maybe they are rich etc.).
I agree with claim 1 for some definitions of feminism and not for others. I agree with claim 2. I think that Scott would agree with claim 1 (for some definitions) and with claim 2 as well, so I disagree with claim 3.
Can you defend these statements?
I can, but I don't want to fall into that inferential canyon.
Scott Garrabrant
I think that if you actually can defend them, it might be worth it to go through the canyon. Inferential canyons are a lot easier to cross when your targets are aware of their existence and are willing and able to discuss responsibly. ("worth it" is of course relative to other ways you could be discussing with strangers on the internet)

[Please read the OP before voting. Special voting rules apply.]

Superintelligence is an incoherent concept. Intelligence explosion isn't possible.

How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.

What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".

As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.

Possibly the concept of intelligence as something that can increase in a linear fashion is in itself incoherent.
Well, I will predict this: a very bored Von Neumann, and people who are very good at solving the tests you use to measure intelligence.

[Please read the OP before voting. Special voting rules apply.]

Buying a lottery ticket every now and then is not irrational. Unless you have thoroughly optimized the conversion of every dollar you own into utility-yielding investments and expenses, the exposure to large positive tail risk netted by spending a few dollars on lottery tickets can still be rational.

Phrased another way, when you buy a lottery ticket you aren't buying an investment, you're buying a possibility that is not available otherwise.

I disagree, because the cost of the possibility is too high.
If one lottery ticket is worthwhile, why not two? Are you assigning a nonlinear value to the probability of winning the lottery? That causes a number of problems.
At the risk of looking even more like an idiot: Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. The Powerball gets as high as $590,500,000 pretax. NOT buying that one ticket gives you a chance of zero. So buying one ticket is "infinitely" better than buying no tickets. Buying more than one ticket, comparably, doesn't make a difference. I like to play with the following scenario. A LessWrong reader buys a lottery ticket. They almost certainly don't win. They have one dollar less to donate to MIRI and because they're not wealthy they may not have enough wealth to psychologically justify donating anything to MIRI anyway. However, in at least one worldline, somewhere, they win a half a billion dollars and maybe donate $100,000,000 to MIRI. So from a global humanity perspective, buying that lottery ticket made the difference between getting FAI built and not getting it built. The one dollar spent on the ticket, in comparison, would have had a totally negligible impact. I fully realize that the number of universes (or whatever) where the LessWrong reader wins the lottery is so small that they would be "better off" keeping their dollar according to basic economics, but the marginal utility of one extra dollar is basically zero. edit: Digging myself in even deeper, let me attempt to simplify the argument. You want to buy a Widget. The difference in net utility, to you, between owning a Widget and not owning a Widget is 3^3^3^3 utilons. Widgets cost $100,000,000. You have no realistic means of getting $100,000,000 through your own efforts because you are stuck in a corporate drone job and you have lots of bills and a family relying on you. So the only way you have of ever getting a Widget is by spending negligible amounts of money buying "bad" investments like lottery tickets. It is trivial to show that buying a lottery ticket is rational in this scenario: (Tiny chance) x (Absurdly, unquantifiably
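For what it's worth, the expected-value arithmetic behind this disagreement is easy to make explicit. Here is a minimal sketch using the Powerball figures quoted above; note that it is jackpot-only and pretax, ignoring taxes, lump-sum discounting, pot splitting, and the smaller prize tiers, all of which matter in practice:

```python
# Naive expected value of a $1 Powerball ticket, using the odds and
# record jackpot quoted in the comment above.  This is jackpot-only
# and pretax, so it badly overstates the realistic expected value.
p_jackpot = 1 / 175_000_000
jackpot = 590_500_000  # record pretax jackpot
ticket_price = 1

ev = p_jackpot * jackpot - ticket_price
print(f"naive EV per ticket: ${ev:.2f}")  # prints: naive EV per ticket: $2.37
```

That this naive number comes out positive is an artifact of plugging in a record jackpot and skipping every real-world deduction; in a typical drawing the expected value is well below the ticket price.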
So your utility function is nonlinear with respect to probability. You don't use expected utility. This results in certain inconsistencies, discussed in the article on the Allais paradox, but I'll give a lottery example here. Suppose I offer you a choice between paying one dollar for a one in a million chance of winning $500,000, and paying two dollars for a one in a million chance of winning $500,001 plus a one in two million chance of winning $500,000. You figure that what's basically a 0.00015% chance of winning vs. a 0.0001% chance isn't worth paying another dollar for, so you just pay the one dollar. On the other hand, suppose I offer the tickets one at a time: first the dollar ticket with a one in a million chance of $500,000 and, once you see whether you've won, a second dollar ticket with a one in two million chance of $500,000. If you win the first, you don't really want another lottery ticket, since it's not a big deal anymore. So you buy a ticket, and if you lose, you buy the second. This results in a 0.0001% chance of ending up with $499,999, a 0.00005% chance of ending up with $499,998, and a 99.99985% chance of ending up with -$2. This is exactly the same set of probabilities as you had for the second option before. No, it would not. Or at least, it's highly unlikely for you to know that. Suppose MIRI's probability of success is increased by 50 percentage points if they get a 100 million dollar donation. This means that, if 100 million people all donate a dollar, the probability of success goes up by 50 percentage points. Each successive donation will change the probability by a different amount, but on average, each donation will increase the chance of success by one in 200 million. Furthermore, it's expected that the earlier donations would make a bigger difference, due to the law of diminishing returns. This means that donating one dollar improves MIRI's probability of success by more than one in 200 million, and is therefore better than getting a one in 100 million chance of donating 100 million dollars. Even if MIRI does end up needing a mini
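The bundled-vs-sequential comparison in this argument can be sanity-checked numerically. The sketch below uses illustrative round numbers of my own (one-in-a-million and one-in-two-million tickets, a $500,000 prize); the dict structure and variable names are assumptions for the example, not figures from the thread:

```python
# Sketch: compare a two-ticket bundle bought up front against buying
# sequentially (buy ticket 1; buy ticket 2 only if ticket 1 loses).
# All odds and prizes here are illustrative assumptions.
p1, p2 = 1e-6, 5e-7   # win probabilities for the two tickets
prize = 500_000

# Sequential purchase: each outcome maps net winnings -> probability.
seq = {
    prize - 1: p1,                   # ticket 1 wins; only $1 spent
    prize - 2: (1 - p1) * p2,        # ticket 1 loses, ticket 2 wins
    -2:        (1 - p1) * (1 - p2),  # both lose
}

# Bundle: $2 up front for a p1 chance at prize+1 and a p2 chance at prize
# (treated as mutually exclusive; the ~5e-13 both-win case is ignored).
bundle = {
    prize - 1: p1,           # (prize + 1) minus the $2 paid
    prize - 2: p2,           # prize minus the $2 paid
    -2:        1 - p1 - p2,  # no win
}

ev = lambda dist: sum(net * prob for net, prob in dist.items())
assert abs(ev(seq) - ev(bundle)) < 1e-6  # distributions nearly coincide
```

The two expected values agree to within a fraction of a microdollar, which is the sense in which paying per ticket and paying for the bundle are "the same bet".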
You're looking at the (potential) benefits and ignoring the costs. The costs are not negligible: "Thirteen percent of US citizens play the lottery every week. The average household spends around $540 annually on lotteries and poor households spend considerably more than the average." (source). Buying a second ticket doubles your chances, obviously. For each timeline where you buy a lottery ticket there is one where you don't. Under MWI you don't make any choices -- you choose everything, always. You've never been poor, have you? :-/ It is just as trivial to show that you should spend all your disposable income and maybe more on lottery tickets in this scenario.
I'm only commenting to the rationality of one individual buying one ticket, not the ethics of the existence of lotteries. Buying one ticket takes you from zero to one, buying two tickets takes you from one to two. 1/0 = infinity, 2/1 = 2. Buying anything more than 1 ticket has sharply diminishing utility. I realize this is a somewhat silly line of argument, so I'm not going to sink any more energy defending it. I don't think we understand each other on this point. I was referring not to choosing, just winning. And the measure of the winning universes is a tiny fraction of all universes. But that doesn't matter when the utility of winning is sufficiently large. And the chance of a given individual buying a ticket isn't 50% in any meaningful quantum-mechanical sense, so "For each timeline where you buy a lottery ticket there is one where you don't" isn't true. No, and I wouldn't recommend that a poor person buy lottery tickets. My original claim was that buying lottery tickets can be rational, not that it is rational in the general case. That's true. People also say that you should donate all your disposable income to MIRI, or to efficient charities, for exactly the same reasons, and I don't do those things for the same reason that I don't spend all my money on lottery tickets - I'm a human. My line of argument only applies when you want a Widget and have no other way of affording it. I don't really feel strongly enough about this to continue defending it, it's just that I'm quite sure I'm right in the details of my argument and would welcome an argument that actually changes my mind / convinces me I'm wrong.
I treat buying lottery tickets as buying a license to daydream. Once you realize you don't need a license for that... :-)
There are ways to win a lottery without buying a ticket. For example, someone may buy you a ticket as a present, without your knowledge, which then wins. No, it is much more likely that you'll win the lottery by buying tickets than by not buying tickets (assuming it's unlikely to be gifted a ticket), but the cost of being gifted a ticket is zero, which makes not buying tickets an "infinitely" better return on investment.
I agree with the first sentence, but I'm not sure if our reasoning is the same. Here's mine: If humans were perfectly rational overall, buying a lottery ticket would never make sense. But we aren't. I think it's rational to buy a lottery ticket, say, every six months, and then not check if it's a winner for the six months. Just as humans seem to enjoy the anticipation of an upcoming vacation more than the actual vacation, the human brain can get utility from the hope that the ticket might be a winner, and 6 months of an (irrational, but so what?) hope far outweigh the one day of disappointment and one dollar lost when you check the ticket and it hasn't won.
I totally agree with this reasoning, but I don't think "fun" is the only good reason to buy a ticket.
I'd agree that if a lottery ticket and a chocolate bar give you the same hedons and no other options are available, you are better off buying a piece of paper than an unhealthy snack.
I'll put this here because I wish to provide a different perspective without getting bogged down in probabilistic thinking. To say buying a lottery ticket is irrational might be correct if you consider winning the lottery or not to be the only real outcome of such a purchase. The fact is, however, that buying a lottery ticket provides entertainment. You pay a relatively small sum of money to play, until results day, with the fantasy of receiving an enormous amount of money. Entertainment is utility, as far as I'm concerned. Obviously the more money you spend on the lottery, the less justifiable this entertainment is, because as others have pointed out, buying a small number of more tickets doesn't appreciably change the probability of winning. Just one ticket, however, and you're one stroke of luck away from enormous wealth; the enjoyment of knowing that alone can be worth a dollar.

[Please read the OP before voting. Special voting rules apply.]

The dangers of UFAI are minimal.

Do you think that it is unlikely for a UFAI to be created, that if a UFAI is created it will not be dangerous, or both?
I think humans will become sufficiently powerful that UFAI does not represent a threat to them before creating UFAI.
“Dangers” being defined as probability times disutility, right?
With the caveat that I'm treating unbounded negative utility as invalid, sure.
Please do elaborate!

[Please read the OP before voting. Special voting rules apply.]

For many smart people, academia is one of the highest-value careers they could pursue.


Clarify "many"?

~30% maybe?
What about “smart people”? IQ > 100? IQ > 115? IQ > 130? IQ > 145?
Let's say IQ 145 or higher? ETA: Although I would push things like conscientiousness into the picture as well if I were trying to be more precise; but for the sake of not writing an essay I'm happy to stick with an IQ cutoff.
Highest value for the person, for society, or both? Also, by "high value" do you mean purely monetary or do you mean other benefits?
Society. For the second question, not quite sure what it would mean to provide monetary value to society, since money is how people trade for things within society rather than some extrinsic good.
It sure isn't great for the smart people.
Yes, I think that's pretty trivially true. Academia functions monastically: the academic accepts relatively worse material income in order to have the opportunity to donate large sums of value to society.

[Please read the OP before voting. Special voting rules apply.]

Utilitarianism is a moral abomination.

I am very interested in this.

* Exactly what is repugnant about utilitarianism? (Moi, I find that it leads to favoring torture over 3^^^3 specks, which is beyond facepalming; I'd like to hear your view.)
* I guess the moral assumptions based on which you condemn utilitarianism are the same ones you would propose instead. What moral theory do you espouse?

Exactly what is repugnant about utilitarianism?

It's inhuman, totalitarian slavery.

Islam and Christianity are big on slavery, but it's mainly a finite list of do's and don'ts from a Celestial Psychopath. Obey those, and you can go to a movie. Take a nap. The subjugation is grotesque, but it has an end, at least in this life.

Not so with utilitarianism. The world is a big machine that produces utility, and your job is to be a cog in that machine. Your utility is 1 seven billionth of the equation - which rounds to zero. It is your duty in life to chug and chug and chug like a good little cog without any preferential treatment from you, for you or anyone else you actually care about, all through your days without let.

And that's only if you don't better serve the Great Utilonizer ground into a human paste to fuel the machine.

A cog, or fuel. Toil without relent, or harvest my organs? Which is less of a horror?

Of course, some others don't get much better consideration. They, too, are potential inputs to the great utility machine. Chew up this guy here, spit out 3 utilons. A net increase in utilons! Fire up the woodchipper!

But at least one can argue that there is a net increase of util...


I disagree, but my reasons are a little intricate. I apologize, therefore, for the length of what follows.

There are at least three sorts of questions you might want to use a moral system to answer. (1) "Which possible world is better?", (2) "Which possible action is better?", (3) "Which kind of person is better?". Many moral systems take one of these as fundamental (#1 for consequentialist systems, #2 for deontological systems, #3 for virtue ethics) but in practice you are going to be interested in answers to all of them, and the actual choices you need to make are between actions, not between possible worlds or characters.

Suppose you have a system for answering question 1, and on a given occasion you need to decide what to do. One way to do this is by choosing the action that produces the best possible world (making whatever assumptions about the future you need to), but it isn't the only way. There is no inconsistency in saying "Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I'm going to do Y instead"; that just means that you care about other things besides morality. Which pr...

Lots to comment on here. That last paragraph certainly merits some comment.

Yes, most people are almost entirely inconsistent about the morality they profess to believe. At least in the "civilized world". I get the impression of more widespread fervent and sincere beliefs in the less civilized world.

Do Christians in the US really believe all their rather wacky professions of faith? Or even the most tame, basic professions of faith? Very very few, I think. There are Christians who really believe, and I tend to like them, despite the wackiness. Honest, consistent, earnest people appeal to me.

For the great mass, I increasingly think they just make talking noises appropriate to their tribe. It's not that they lie, it's more that correspondence to reality is so far down the list of motivations, or even evaluations, that it's not relevant to the noises that come from their mouths.

It's the great mass of people who seem to instinctively say whatever is socially advantageous in their tribe that give me the heebie-jeebies. They are completely alien - which, given the relative numbers, means I am totally alien. A stranger in a strange land.

Isn't it better to classify people in a

... (read more)
I'm repeating myself here, but: I think you are mixing up two things: utilitarianism versus other systems, and singleminded caring about nothing but morality versus not. It is the latter that generates attitudes and behaviour and outcomes that you find so horrible, not the former. You are of course at liberty to say that the term "utilitarian" should only be applied to a person who not only holds that the way to answer moral questions is by something like comparison of net utility, but also acts consistently and singlemindedly to maximize net utility as they conceive it. The consequence, of course, will be that in your view there are no utilitarians and that anyone who identifies as a utilitarian is a hypocrite. Personally, I find that just as unhelpful a use of language as some theists' insistence that "atheist" can only mean someone who is absolutely 100% certain, without the tiniest room for doubt, that there is no god. It feels like a tactical definition whose main purpose is to put other people in the wrong even before any substantive discussion of their opinions and actions begins.

It's both. (Just as a literal purchase may be both at great cost, and of great benefit.) Which is one reason why, if this person -- or someone who feels and acts similarly on the basis of utilitarian rather than religious ethics -- acts in this way because they genuinely think it's the best thing to do, then I don't think it's appropriate to complain about how grotesquely subjugated they are.

What do you believe my code to be, and why?
Seconding the question "What moral theory do you espouse?"
That was beautiful.
Under utilitarianism, human farming for research purposes and organ harvesting would be justified if it benefited enough future persons. Under utilitarianism the ideal life is one spent barely subsisting while giving away all material wealth to effective altruism/charity (the reason being that unless you are barely subsisting, there is someone who would benefit from your wealth more than you do). Also there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote, or bid money, but neither of these results in a satisfactory ethical theory.
Perhaps not with utility theory's usual definition of "prefer", but in practice there is a commonsense way in which I can prefer A more than you prefer B, since we're both humans with almost identical brain architecture.
Interesting, so your utilitarianism depends on agents having similar minds, it doesn't try to be a universal ethical theory for sapient beings. What exactly is that way in which you can prefer something more than I can? It is not common sense to me, unless you are talking about hedonic utilitarianism. Are you using intensity of desire or intensity of satisfaction as a criterion? Neither one seems satisfactory. People's preferences do not always (or even mostly) align with either. I suppose what I'm asking is for you to provide a systematic way of comparing interpersonal utility.
If I say "I prefer not to be tortured more than you prefer a popsicle", any sane human would agree. This is the commonsense way in which utility can be compared between humans. Of course, it isn't perfect, but we could easily imagine ways to make it better, say by running some regression algorithms on brain-scans of humans desiring popsicles and humans desiring not-to-be-tortured, and extrapolating to other human minds. (That would still be imperfect, but we can make it arbitrarily good.) This isn't just necessary if you're a utilitarian, it's necessary if your moral system in any way involves tradeoffs between humans' preferences, i.e. it's necessary for pretty much every human who's ever lived.
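The regression idea in the comment above can be sketched as a toy model. Everything here is made up for illustration: the "brain-scan" features, the true weights, and the noise level are all invented, and nothing resembles real neuroscience; the point is only to show the fit-then-extrapolate structure being proposed.

```python
# Toy sketch of the thought experiment above: fit a regression from
# (entirely made-up) "brain-scan" feature vectors to labeled desire
# intensities, then extrapolate to an unseen mind.  All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_features = 50, 4
scans = rng.normal(size=(n_scans, n_features))    # pretend scan features
true_weights = np.array([3.0, -1.0, 0.5, 2.0])    # the "true" mapping
reported = scans @ true_weights + rng.normal(scale=0.1, size=n_scans)

# Ordinary least squares: recover the mapping from features to intensity.
fitted, *_ = np.linalg.lstsq(scans, reported, rcond=None)

# Extrapolate: predict desire intensity for a mind we never labeled.
new_mind = rng.normal(size=n_features)
predicted_intensity = new_mind @ fitted
print(predicted_intensity)
```

With enough labeled examples the fitted weights land close to the true ones, which is the sense in which the comment claims the comparison "can be made arbitrarily good" - though whether real desire intensities are linear in any measurable features is exactly the part being assumed away.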
So you are a hedonic utilitarian? You think that morality can be reduced to intensity of desire? I already pointed out that human preferences do not reduce to intensity of desire.
I'm not any sort of utilitarian, and that has nothing to do with my point, which is that there obviously is a sense in which I can prefer A more than you prefer B.
That's more like being conditional on our cooperating. If my enemy said that, I could find it offensive, and it doesn't compel me to change my actions. If you try to use utilitarian theory to (en)force cooperation, the argument doesn't go through.

AI boxing will work.

EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.

"Can" is a very weak claim. With what probability will it work?
It seems unlikely that the first people to build fooming AGI will box it sufficiently thoughtfully. I think it's likely to work if implemented very carefully by the first people to build AGI. For instance, if they were careful, a team of 100 people could manually watch everything the AI thinks, stopping its execution after every step and spending a year poring over its thoughts. With lots of fail-safes, with people assigned to watch researchers in case they try anything, with several nested layers of people watching so that if the AI infects an inner layer of people, an outer layer can just pull a lever and kill them all, etc. And with the AI inside several layers of simulated realities, so that if it does bad things in an inner layer we just kill it, and so on. Plus a thousand other precautions that we can think up if we have a couple centuries. Basically, there are asymmetries such that a little bit of human effort can make it astronomically more difficult for an AI to escape. But it seems likely that we won't take advantage of all these asymmetries, especially if e.g. there's something like an arms race. (See also this, which details several ways to box AIs.)
Seems like an ad hominem attack. Why wouldn't the people working on this be aware of the issues? My contrarian point is that people concerned about FAI should be working on AI boxing instead.

[Please read the OP before voting. Special voting rules apply.]

It would be of significant advantage to the world if most people started living on houseboats.

Waste management?
Is there even enough coast for that? If people didn't live in cities, they'd have to commute more. There would be a large increase in transportation costs.
Where I live there is an abundance of canals. "Most people" is perhaps an exaggeration, but the main points in defence of increased houseboating would be: (1) a house is a large, expensive, immobile and illiquid asset. A houseboat is rather less expensive, which frees up capital for other purposes. (2) the internet makes it less necessary for most people to live in cities. (3) there would be lower costs associated with moving between different areas.
I find it difficult to believe that houseboats are inherently less expensive. It seems more likely that there's some reason house boats cannot be made as large and expensive as regular houses, so the average houseboat is much cheaper than the average house, even if it's more expensive than a house of the same quality. The internet gets much more difficult if you don't live in cities. While it mitigates the costs of people not living near each other, it does not remove them. There are still lots of people putting large amounts of time into physically commuting. Why not use mobile homes? They can't be stacked in three dimensions like apartments, but at least you can put them in two-dimensional grids.
There certainly are houseboats much larger and more expensive than regular houses.
Your link is broken. I'm not sure the proper way to fix it, but it's hard to have links to pages with end parentheses in them.
Whoops. Fixed.
Motor homes might well make more sense for this. The reason I came to this view is that I like canals and so houseboating seemed like a pleasant idea; at around the same time, I read this NY Times piece suggesting that home ownership is not necessarily a good thing. Houseboating seemed like a way of dealing with that; motorhomes simply didn't occur to me as a (probably better) alternative.
Your mileage may vary. Getting internet made me yearn to move to a larger city where I could meet more interesting people and do more interesting stuff---which in the end I did.
If you don't want much cost of moving you can simply rent a flat.
I am pretty sure that out of two equivalent houses the one which floats would be noticeably more expensive, and more expensive to maintain, too. Houseboats are typically less expensive than houses because they are smaller and less convenient.
Sounds like a Dutch city. But, it seems, no less desired. See e.g. LW meetups.
Aren't RVs even cheaper?
And shacks made out of plywood and corrugated iron are cheaper still.
Indeed. I would in principle be willing to apply a similar argument to RVs, but (since living in an RV holds no aesthetic appeal for me, whereas houseboating does) I am rather less aware of what the logistics would be like.

[Please read the OP before voting. Special voting rules apply.]

There probably exists - or has existed at some time in the past - at least one entity best described as a deity.

Define deity?

Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se, they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.

Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).

If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.

(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)

While I mostly agree, trying to devise political systems that would encourage a smarter populace (ex. SSC's Graduation Speech with the guaranteed universal income and abolishing public schools) seems like a potentially worthwhile enterprise.
I agree that forming political beliefs is not a productive use of my time in the same way that earning a salary to donate to SCI to cure people of parasites is. I disagree that this makes it silly. The reasons you gave may not be the most noble of reasons, but they are still perfectly valid.
Twelve people disagree with this? I'm surprised. I was going to downvote for ‘not in the spirit of the game, obviously not a contrarian view’, but I guess I was a victim of the typical mind fallacy.

Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.

My contrarian idea: Roko's basilisk is no big deal, but intolerance of making, admitting, or accepting mistakes is cultish as hell.
Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions. E.g. G.K.Chesterton would probably blame atheism[1]. Zooming out even more, for example, someone immersed in Eastern thought might even blame Western thought in general. Despite receiving a vastly disproportionate share of media attention it was such a small part of LessWrong history and thought (by the way, is anything that any LWer ever came up with a part of LW thought?) that it seems wrong to put the blame on LessWrong or rationality in general.

Furthermore, which would you say is better, an ability to formulate an absurd idea and then find its flaws (or, for e.g. mathematical ideas, exactly under what strange conditions they hold) or an inability to formulate absurd ideas at all? The ability to come up with various absurd ideas is an unavoidable side effect of having an imagination. What is important is not to start believing an idea immediately, because in the history of any really new and outlandish idea, at the very beginning there is an important asymmetry (which arises due to the fact that coming up with any complicated idea takes time) - the idea itself has already been invented but the good counterarguments do not yet exist (this is similar to the situation where a new species is introduced to an island where it does not have natural predators, which are introduced only later). This also applies to the moment when a new outlandish idea is introduced to your mind and you haven't heard any counterarguments by that moment; one must nevertheless exercise caution, especially if that new idea is elegant and thought provoking whereas all counterarguments are comparatively ugly and complicated and thus might feel unsatisfactory even after you have heard them.

Was there really a significant number of people or is this just, well, an urban legend? The fact that some people are affected is not particularly surprising -
The quotes indicate that I'm not blaming rationality, I'm blaming something that's called rationality. You're replying as if I'm blaming real rationality, which I'm not. Censoring substantial references to the basilisk was partly done in the name of protecting the people affected. This requires that there be a significant number of people, not just that there be the normal number of people who can be affected by any unusual idea. His explanations have varied. The explanation you linked to is fairly innocuous; it implies that he is only banning discussion because people get harmed when thinking about it. Someone else linked a screengrab of Eliezer's original comment which implies that he banned it because it can make it easier for superintelligences to acausally blackmail us, which is very different from the one you linked.
Curiously, it is not necessary. For example, it would suffice that the people who do the censoring overestimate the number of people that might need protection. Or consider the PR explanation that I gave in another comment, which similarly does not require a large number of people affected. Some other parts of your comment are also addressed there.
It is certainly possible that few people were affected by the Basilisk, and the people who do the censoring either overestimate the number or are just using it as an excuse. But this reflects badly on LW all by itself, and also amounts to "you cannot trust the people who do the censoring", a position which is at least as unpopular as my initial one.
I would guess that the dislike of censorship is not an unpopular position, whatever its motivations.
It's whatever makes LW different from the wider population, even the wider nerdy-western-liberal-college-educated cluster. The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them. It also received a disproportionate amount of ex cathedra moderator action. Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away. I don't think this addresses the original argument. If these ideas are dangerous to us then we are doing something wrong. If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample. I don't know, but the LW leadership's statements seem to be grounded in the claim that there were
At the time the Basilisk episode happened Eliezer was a lot more active in general than when the illegitimate downvoting happened. If you look at the self-professed skeptic community there are episodes such as Elevatorgate. If you go a bit further back and look at what Stalin did, I would call the ideas on which he acted dangerous. It's pretty easy to speak about a lot of topics in a way that the people you are talking to laugh and don't take the idea seriously. A bunch of that atheist population also treats their new atheism like a religion and closes itself off from alternative ideas that sound weird. For practical purposes they are religious and do have a fence against taking new ideas seriously.
What ideas does the general population of atheists have in common besides the lack of belief in God? And what interesting ideas can you derive from that? F.Dostoevsky (who wasn't even an atheist) seems to have thought that from this one could derive that everything is morally permitted. Maybe some atheistic ideas seemed new, interesting and outlandish in the past when there were few atheists (e.g. separation of church and state), but nowadays they are part of common sense.

No, the claim of this hypothetical Chesterton would not be that atheism creates new weird ideas. It would be that by rejecting god you lose the defense against various weird ideas ("It’s the first effect of not believing in God that you lose your common sense." - G.K.Chesterton). It is not general atheism, it is specific atheist groups. And in the history of the world, there were a lot of atheists who believed in strange things. E.g. some atheists believe in reincarnation or spiritism. Some believe that the Earth is a zoo kept by aliens. In previous times, some revolutionaries (led not by their atheism, but by other ideologies) believed that just because the social order is not god-given it could be easily changed into basically anything. The hypothetical Chesterton would probably claim that had all these people closely followed the church's teachings they would not have believed in these follies, since the common sense provided by traditional Christianity would have prevented them. And he would probably be right. The hypothetical Chesterton would probably think that the basilisk is yet another thing in the long list of things some atheists stupidly believe.

Yes, on LessWrong the weirdness heuristic is used less than in the more general atheist/skeptic community (in my previous post I have already mentioned why I think it is often useful), and it is considered bad to dismiss an idea if the only counterargument to it is that it is weird. The difference in acceptance of the weirdness heuristic probably comes from
XiXiDu's screenshot is damning because it indicates that Eliezer banned the Basilisk because he thought a variation on it might work, not because of either PR reasons or psychological harm. Unless you think he was lying about that for the same reason he might want to lie about psychological harm.
Well, in that post by Xixidu, there is a quote by Mitchell Porter (that is approved by Eliezer) that, combined with the [reddit post] I have linked earlier, suggests that he was not able to provide a proof that no variation of the basilisk would ever work, given that there is more than one possible decision theory, including some exotic and obscure ones that are not yet invented (but who knows what will be invented in the future). Eliezer seems to think that human minds are unable to actually follow such a decision theory as rigorously as would be required for such a concept to work. But human ability is such a vague concept, it is not clear how one can give a formal proof. However, it seems to me that an inability to provide a formal proof is an unlikely reason to freak out.

What (I guess) happened was that this inability to provide a proof, combined with that unnamed SIAI person's nightmares (I would guess that Eliezer knows all SIAI people personally) and the fear of the aforementioned potential PR disaster, might have resulted in the feeling of losing control of the situation and made him panic, thus resulting in that nervous and angry post, emphasizing the danger and the need to protect some people (and leaving out cult PR reasons). This is my personal guess, I do not guarantee that it is correct.

Is an inability to actually deny a thing equivalent to a belief that the negation of that belief has a positive probability? Well, logically they are somewhat similar, but these two ways to express similar ideas certainly have different connotations and leave very different impressions in the listener's mind of what the person's actual degree of belief was. (I must add that I personally do not like speculating about another person's motivations for why he did what he did when I actually have no way of knowing them.)
I think many users do not think it's a serious danger, but it's still banned here. It is IMO reasonable for outsiders to judge the community as a whole by our declared policies. Coming up with absurd ideas is not a problem. Plenty of absurd things are posted on LW all the time. The problem is that the community took it as a genuine danger. If EY made a bad decision at the time that he now disagrees with, surely he would have reversed it or at least dropped the ban for future posts. A huge part of what this site is all about is being able to recognize when you've made a mistake and respond appropriately. If EY is incapable of doing that then that says very bad things about everything we do here. What's cultish as hell to me is having leaders that would wilfully deceive us. If there are some nonpublic rules under which the basilisk is being censored, what else might also be being censored?
Well, nobody in the LW community is without flaws. People often fail (or sometimes do not even try) to live up to the high standards of being a good rationalist. The problem is that in some internet forums "judging the community" somehow becomes something like "this is what LW makes you believe, and even if they deny it, they do it only because not doing it would give them a bad image" or "they are a cult that wants you to believe in their robot god", which are such gross misrepresentations of LW (or even the drama surrounding the basilisk stuff) that even after considering Hanlon's razor one is left wondering whether that level of misinterpretation is possible without at least some amount of intentional hostility. I would guess that nowadays a large part of the annoyance at somebody even bringing this topic up is a reaction to this perceived hostility.

No, it does not say very bad things about everything we do here. Whenever EY makes a mistake and fails to recognize and admit it, it is his personal failing to live up to the standards he wrote about so much. You may object that not enough people called him out on that on LW itself, but it was my impression that many of those that do e.g. on reddit seem to be LW users (as currently there are few related discussions here on LW, there is no context to do that here; besides, EY rarely comments here anymore). In addition to that, on this thread there seem to be several LW users who agree with you, so you are definitely not a lone voice; among LWers there seem to be many different opinions. Besides, on that reddit thread he seems to basically admit that, in fact, he did make a lot of mistakes in handling this situation.

It has just dawned on me that while we are talking about censorship, at the same time we are having this discussion. And frankly, I do not remember the last time a comment was deleted solely for bringing this topic up. Maybe the ban has been silently lifted or at least i
Does "rolling my eyes and reading something else" count as "psychologically affected"?
May I suggest reading Singularity Sky by Charles Stross, which has precisely such a menacing future AI as an antagonist? (Spoiler: no basilisk memes involved in the plot; they're obviously not obvious to everyone who thinks of this scenario.)
I agree with this so much that, in order to not affect the mechanics of this thread, I'm going to upvote some other post of yours.
wait. now I'm not sure how to vote on THIS comment, which is brilliant.
The overwhelming majority of everyone on LessWrong, now and previously, believes that The Thing is completely ridiculous and would never work at all. Last I heard, Eliezer barely gave thought to the concept that it would really work, but instead blew up at the fact that hapless, innocent readers were being very stressed out by their lack of understanding of why it can't really work.
If you want to point out LW beliefs that sound crazy to most people, I guess you don't need to go as far as Roko's basilisk. FAI or MWI would suffice.

[opening post special voting rules yadda yadda]

Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities, optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.

Upvoted because it is much too specific (too many conjunctions) to be true. Even if many of them sound plausible.
Bah, I'm always doing that. I have clusters of related suspicions which I put down in one big chunk rather than as separate possibly independent points. If I had to extract a main point it would be the first bit, biological hominids descended from modern humans existing tens of millions of years from now with their most obvious alterations to the world being an extension of what we have begun with agriculture.
Are you imagining these human descendents will be technology using?
Yes, as hominids have been for more than a million years. An expanded toolkit though, even compared to today (though it's possible that not all of our current tools will have the futures many of us expect, in the long run). Good manipulation of electromagnetism alone is having very interesting effects that we have only really begun to touch on, and I expect biotechnology and related things to have interesting roles to play. All of this will have to occur within the context of ecological laws which are pretty immutable, and living systems are very good at evolving and replicating and surviving in many contexts on this planet.

[Please read the OP before voting. Special voting rules apply.]

Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.

Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.

It isn't very hard to do a little digging here. China's aggressive nuclear strategy seems reasonable.
Not exactly sure what you mean by "digging." I already comprehend the quantities of energy being consumed because of my education and experience in related fields; it's the average person who I think does not, since I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction." Also, your figure only reflects electricity generation, not total energy consumption, which is a much higher figure. Currently non-hydrocarbon fuel sources for transportation are very fringe. The truth is that the price of fossil fuels has always fluctuated, and will continue to fluctuate, in accord with simple supply-demand economics for a long time to come; the cheaper it gets to make energy via alternative methods, the cheaper fossil fuels will become to undercut those alternative sources.
I looked through the numbers and the trend line. I updated in your direction. Even nuclear can't make a big dent without true mass production of reactors, which almost certainly will not happen.
I give it well over 70 percent chance of happening. Mostly because I am expecting coal and gas to get really unpleasantly expensive in the next two decades. The remaining 30 percent is mostly taken up by "technological surprise rendering all extant generation tech obsolete." One of the small-scale fusion plants working out very well, for example.
The only reason they have been getting expensive at all is that governments have been over-regulating them.
If you don't regulate them you don't pay directly, but you pay in medical costs for conditions such as asthma. You also get lower children's IQs, which is worth something. According to EPA calculations, the gain in children's IQ is worth more than the increased monetary cost of coal plants due to mercury regulation.
Ehrr.. Just no. Nuclear might be able to make that case, though mostly the problem there is sticking with overgrown submarine reactors (PWRs are an asinine choice for use on land), but coal and gas? Those are, if anything, underregulated due to excessive political clout. Fossil fuels will get more costly for straightforward reasons of supply and demand. The third world is industrializing, and the first world is going to use ever more electricity due to very predictable changes like the coming switch to all-electric motoring (which, again, will not be driven by government policy, but by better batteries making the combustion engine a strictly inferior technology for cars). Thus, worldwide electricity demand is going to go up. By a lot. That, in turn, is going to bid up seaborne coal and liquefied natural gas to ridiculous heights because there just isn't any way to increase the supply to match. Very shortly after that, resistance to more reactors is going to keel over and die - high electricity costs to industry being entirely unacceptable to people with lots of political clout and lots of media ownership - and suddenly mass production is going to be on the menu. Hopefully of more sensible designs: molten salt, molten lead, even sodium. Any design that doesn't require the power to be on for shut-down cooling to work, basically.
Unfortunately it is not quite this simple. The current oil price is on the order of $100 per barrel, but it never broke $40 per barrel prior to 1998. See figure. Also see this figure, which is in terms of inflation-adjusted dollars, and shows another huge spike around 1980. The reason for these tremendous spikes in price isn't simple supply and demand - complex nonlinear political factors are almost certainly to blame, and price stickiness is partially why oil remains as expensive as it is. It doesn't cost even in the ballpark of $100 per barrel to get oil out of the ground, and it won't for a very very long time. The upshot is that the price of oil will continue to beat out other sources of energy by just enough to keep those sources of energy at a marginal level of profitability, because oil (and other fossil fuels) can remain profitable at much lower prices. I would also point out that the scenario you have just described is highly complex and conjunctive, while "oil continues to do what it has been doing" is an intrinsically simple hypothesis.
Price is set on the margins. The marginal barrels of oil coming out of the ground are certainly in the ballpark of $100, from various shale and tight deposits.
Oil prices do not play by the economics-textbook rules, because most of the world's oil production is controlled by governments, and governments have a variety of interests and incentives beyond what a profit-maximizing, purely economic agent might have.
Oil is nearly utterly irrelevant to electricity, however. Nobody sane produces electricity on anything but the most minor of scales using it, and given mass-market electric cars, it is never going to be able to compete on price with electricity. Charging a 100 kWh battery pack from bone dry to full would cost an average of 12 dollars and change in the US. That is the equivalent of a gas price of under 60 cents per gallon, and electric cars are a better driving experience. (Well, a car with a 100 kWh battery pack certainly will be. That's a lot of oomph.) The Saudis aren't going to be able to beat this transition just by dropping the price of oil for a year or two, nor are price hikes on the electric side going to do it - most of the cost of electricity to private consumers is taxes, so even quite large rises in the cost of coal and gas will not translate one-to-one into end-customer pain, and the differential in price is simply too large. More importantly, the electrification of the world continues apace, and most of the places joining the age of the electron do not have vast coal reserves of their own. The secular pressure to go nuclear is only going to rise, and most of the world is well beyond the reach of the various groups dedicated to the defense of helpless actinides.
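The charging-cost arithmetic here can be sketched, though every input below is an assumption not stated in the comment (roughly $0.12/kWh US average residential electricity, about 4 miles/kWh EV efficiency, and a ~20 mpg gasoline car as the comparison vehicle); against a more efficient 30 mpg sedan the gas-equivalent price would come out closer to 90 cents than 60.

```python
# Rough sketch of the charging-cost claim above. All figures are
# illustrative assumptions, not numbers taken from the thread.
PACK_KWH = 100            # battery pack size from the comment
USD_PER_KWH = 0.12        # assumed US average residential rate
EV_MILES_PER_KWH = 4.0    # assumed EV efficiency
COMPARISON_MPG = 20.0     # assumed comparison gasoline car

charge_cost = PACK_KWH * USD_PER_KWH              # cost of one full charge
ev_range = PACK_KWH * EV_MILES_PER_KWH            # miles per full charge
gallons_for_same_range = ev_range / COMPARISON_MPG

# Gas price at which the comparison car's fuel cost matches the EV's:
equivalent_gas_price = charge_cost / gallons_for_same_range

print(f"full charge: ${charge_cost:.2f}")                  # full charge: $12.00
print(f"gas-equivalent: ${equivalent_gas_price:.2f}/gal")  # gas-equivalent: $0.60/gal
```

The "equivalent gas price" is quite sensitive to the efficiency figures chosen, which is worth keeping in mind when comparing claims like this one.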
At the moment. But your scenario assumes that most of transportation switches its energy source from petrol/diesel to electricity. That implies that the demand for oil will drop through the floor. And that implies that oil will become very cheap. Which in turn implies that it will again start to make sense to burn it to make electricity to recharge the car batteries. Remember that electricity is not a source of energy. To support your case for nuclear you need to show that coal and hydrocarbons will be unable to support the energy needs of humanity in the near future. Whether cars run directly on hydrocarbons or whether there is the intermediate stage of electricity involved does not matter much for this issue.
Oil has uses other than automotive fuel - way before it reaches the point where it becomes competitive with coal or uranium for stationary power plants, demand from the plastics, aviation, and petrochemicals industries is going to put a floor on the price. I don't expect the Saudi oil to stay under the sand, but as an energy player, the global oil industry is doomed. Coal is going to be raking in money hand over fist for a while as prices spike, but once a transition to fission starts - and high coal prices will get that started - they are done for too. King Coal only still reigns at all because the world has been collectively insane about fission due to living in the shadow of the atomic bomb.
Certainly true and yes, that will put a floor under the price. That's good, isn't it? People have been pointing out for quite a while that just burning something as useful as oil is pretty silly. It also means that the world is not going to run out of oil in the foreseeable future, right? I don't know about that -- there is an awful lot of gas around. Do you happen to have some sort of a timetable for your predictions? While that may be true, I don't see any signs of the world becoming more sane.
Tesla says the Model E is going live in 2017. Average fleet turnover is 7 years, and with affordable EVs on the market (I am sort of assuming at least some competition from other manufacturers here) combustion cars become roughly as marketable as a horse and buggy - so near-total conversion to electric cars (80+%) by 2027. The knock-on effects of that on the grid are very predictable, so expect a sudden dire interest in more baseload during that time period.
You DO realize that even if Tesla's wildest dreams are realized and they double the world production of lithium ion batteries, they can sell at most a few hundred thousand cars a year...
Yeah. Musk isn't breaking ground on big enough factories. That is why I am saying ten years for the switchover rather than much less than that. But once the world spots someone in bog-standard manufacturing making more money than God, everyone, their sister, and the crazed aunt nobody wants to talk to will pile in.
Increasing the amount of lithium that gets mined each year isn't as easy as just retasking a factory to another task.
But things ARE moving in this direction, I believe. Bolivia is trying to figure out a way to start getting money from the world's largest reserve of lithium, currently untouched because it lies under the natural wonder Salar de Uyuni.
Things are moving in the direction of producing more lithium, but not enough to simply double lithium production in one or two years. Replacing all cars with electric cars might require a lot more than doubling.
You are making a rather huge assumption that the electric battery energy density will drastically improve. Without that electric cars will still be limited to cities and commuting. California had some significant electric shortages recently and that did NOT make them fans of building more power plants, never mind nuclear ones. If the demand spikes, high (electricity) prices will downregulate it at which point, given the cheap oil, the ICE (internal combustion engine) cars could turn out to be a sensible option :-)
No, I'm not. I am assuming the exact batteries that are going to be coming out of the factories currently being built. Because when it comes to it, nobody is going to give half a damn that they have to take a midday break of thirty minutes the one time a year they visit aunt Greta three states over.

Tesla is aiming at a 200-mile range. At legal speeds, that is just under four hours of driving on the highway. Try to recall the last time you did more than that in one day. Now, for that trip, would a 30-minute lunch break have ruined your life? The actual pattern of use for everyone in their day-to-day lives is going to be "jack the car in when you come home for the day, unplug in the morning". Total time spent: 22 seconds. This is more convenient than a weekly stop at the gas station, and vastly cheaper.

At this point everyone I discuss this with will bring up road trips. Except people plan those. Including a stop at a Supercharger station is not a hardship. It is certainly not a hardship severe enough to justify paying six to ten times as much to keep your car running on a day-to-day basis. Is avoiding those semiannual 30-minute pit stops really worth four and a half thousand dollars to you? Assuming gas drops in price by half, is it worth 2,250 dollars? I guarantee that avoiding them is not worth it to the average consumer. And given a sane discount rate, that difference in fuel costs means nobody is going to buy a gasoline car. You would have to give them away for free! Average car turnover: 7 years. 7 x 4.5 = 31.5 thousand. The Tesla E is aiming at a price point of 35k. So, basically, the fuel savings are going to pay for the car.

The only flaw in Elon Musk's design I can spot here is that I think his planned production facilities are way too small. That provides an opening for panicked retooling for the production of "me too" cars from the traditional carmakers. This is the main reason I said ten years for full switchover rather than 7 - most car manufacturers are currently
Y'know, that number bothered me so I decided to check. Are the annual gas expenses really $4,500? Let's see. An average American car drives somewhere around 12,000 miles per year. A contemporary sedan running on gas goes for around 30 miles per gallon (EPA combined numbers). This means that a car burns about 400 gallons of gas per year. I filled up yesterday for $3.15 per gallon, but let's say the average current price is $3.25. 400 * $3.25 = $1,300 per year. This is the average annual gas expense. And if you buy a small (still ICE) car, you can get gas mileage up to about 40 mpg, I think. For such a car the annual costs of gas would be below a thousand dollars. Where does your $4,500 number come from?
Uhm... that looks a heck of a lot like the reasoning I used, except I did something stupid with imperial/metric conversion, and then it didn't trigger any "that must be a mistake" bells because gas in these parts is 2 dollars... per liter. Why can't you use metric like everyone else? ;)
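The two figures in this exchange can be reconciled with a quick sketch; the mileage and efficiency figures are the ones assumed in the thread (12,000 miles/year, 30 mpg, $3.25/gallon US), and the European price uses the quoted $2 per liter with the standard litre-to-gallon conversion:

```python
# Annual gasoline cost under the assumptions used in the thread.
MILES_PER_YEAR = 12_000
MPG = 30
gallons = MILES_PER_YEAR / MPG          # 400 gallons per year

LITERS_PER_GALLON = 3.785
us_price = 3.25                         # $/gallon, quoted US price
eu_price = 2.00 * LITERS_PER_GALLON     # $2/liter -> ~$7.57/gallon

print(f"US: ${gallons * us_price:,.0f}/year")   # US: $1,300/year
print(f"EU: ${gallons * eu_price:,.0f}/year")   # EU: $3,028/year
```

So European-style gas prices roughly triple the annual bill, which accounts for most (though not all) of the gap between the $4,500 and $1,300 figures.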
This is the case of your government being greedy, not of gasoline being intrinsically expensive :-P The US is special -- haven't you heard of the American Exceptionalism doctrine? X-D
No need to try, I drive long distances on a fairly regular basis. Your estimate of half an hour for a full recharge also seems to have nothing to do with the current reality. I trust you've heard of the typical mind fallacy? People are different. Trying to pretend everyone does the same thing isn't particularly useful. And assuming electricity costs go up by how much? One reason gas costs so much is because it's a source of revenue for the government. Do you think the government will just forget about this revenue or maybe electric won't be so cheap after all? You are predicting a huge spike in demand, right? LOL. OK, then, it's a simple way to become rich. Short the stocks of everyone who depends on ICE engines -- engine manufacturers, most obviously, but there's a large ecosystem around that -- and go long Tesla and its ecosystem. In ten years you should be swimming in money. On a bit more serious note, clearly electric cars make sense for some people and some uses. They also clearly do NOT make sense for other people and other uses. Of course there will be more electric cars on the road in ten years. But there will be ICE cars as well.
In ten? Sure. As I said, 80% penetration. It might be higher, but that /does/ depend on better batteries than conservative forward extrapolation of trends predicts. Most of the remaining ICE cars will be used ones, because you will be able to get a used ICE car for junk value and a dollar, and then junk it the first time it has any kind of major problem. And the government won't be able to tax electricity to the extent they do gas - there's no good way to do that without being lynched, because electrons are electrons. Most likely, we will wind up with... I dunno, ridiculously high taxes on tyres?
So, to take the used cars out of the equation, you're saying that in 10 years no one will be producing ICE cars..? Or, to avoid absolutes, given 80% penetration and the existence of used cars, do you claim that in ten years something like 95% of cars produced will be purely electric? I wish I shared your optimism :-/
More or less. Technology transitions have reinforcing feedback loops - once the transition starts, the bottom falls out of the market for used ICE cars (the junk-value-and-a-dollar thing), which makes new ICE cars very difficult indeed to sell. After a couple of years of that, gasoline is no longer sold in nearly as many places... It's not blind optimism - look, the oil barons currently bribing the US Congress (and various European politicians) fall into two categories: "marginal producers" and "funny-looking/weirdly dressed foreigners". That means that once the price of oil falls to any significant extent, the political lobby for oil rapidly gets reduced to 90+% "guys in turbans with no vote". That is an interest group which politicians will lose absolutely no sleep over burning all bridges with. So there shouldn't be any pressure from the top to keep the ICE alive artificially. That will cost revenue, yes, but it will save consumers lots of money. Which they will spend. And that will create revenue, and reduce expenditures, and again... Being the politician that steps up and says "I know you just saved thousands of dollars that were heading down to the kingdom of sand, but I just can't stand to see a commoner with money, so I'm going to slap a three-thousand-dollar surcharge on your electric bill" is a good way to end up on a literal pike. No matter how much economy-speak you try to dress it up in.
Well, we'll see. In the meantime, the Chevy Volt, an electric car selling for $27K (with applicable tax credits, that is, a government bribe to make you buy it), is selling rather poorly, I believe. Heh. No, the politicians have gotten quite good at saying "Look at the shiny!" while they're rifling through your wallet.
And the United Kingdom is still happily paying the poll tax, and the American War of Independence never happened. Some taxes are much more... annoying... to the general public than others. And in this case, what you are envisioning just can't happen. You cannot collect high taxes on gasoline and electricity both - the transition is fast, not instant, so if you do that, low-income people who are obliged to use gasoline will actually run out of money altogether. And riot. In theory, it is possible to stop taxing gas and start taxing electricity instead, but that is so painfully stupid an idea that you wouldn't be elected dogcatcher after suggesting it.
Well, a lot of European countries are doing just that. Or rather, they have "sustainable energy" mandates, which from the consumer's point of view function as a high tax on electricity.
Not high enough to make gasoline competitive with electricity on price, which is the subject under debate. Not that I'm happy about the mandates, because they have failed. The only policies that have ever worked to clean up electricity generation are dams and nukes. Barring technological breakthroughs, I fear they are the only ones that are ever going to work outside of a band near the equator where solar might eventually become sane.
Humans aren't that rational; as someone here (Yvain, I think) once mentioned, they will rent/buy houses with an extra bedroom just in case aunt Greta comes over, even though the money they'd save with a smaller house would be enough to buy a stay in a five-star hotel every time aunt Greta comes over. Also, electric cars just aren't that cool among a large segment of the population, and social status is a major part of the reason people buy expensive-ass cars.
...And for the irrational and moneyed segment of the population willing to blow money on cool factor for conspicuous consumption, EV motoring offers total freedom of design space. The basis of an EV is a skateboard: four wheels on the corners with electric steering, batteries, and drive-by-wire, on top of which you can drop any chassis you care to. So, blow the money needed to get the jumbotron extra-large battery pile, and stick any chassis you care to on top. Aerodynamics? Who cares - it isn't like the kind of person who would ignore that price gap on the fuel side of things is going to feel any pain over their car eating watt-hours like a house decked out in Christmas lights.
If my model of the people I'm talking of is correct, the very idea of electric cars reminds them of granola-eating hippie wusses. (But I haven't interacted with such people on a regular basis since 2009, so my model may be outdated.)
I assure you that this is not true, unless I misunderstand you. edit: The Finding and Development cost of a typical worthwhile shale play is $1.50/Mcf (many are even better), the current natural gas price is $3.50/Mcf. Of course there are crappy fields with higher F&D cost, and these won't be drilled until prices are high enough to justify it. In effect there is a continuum of price/barrel out there in the world and this is not what controls present day prices.
Eh, I'll stand by my reasoning, but I agree other people might not assign as high probabilities to each step in the chain as I do, so here is a much simpler causal chain that is going to lead to the same place. China isn't going to keep sacrificing tens of thousands of its people to the demon smog every year. And once the Chinese are knocking out reactors at a high pace, the rest of the world will follow.
And a simple solution to this is just to copy the current-day US which does not use a lot of nuclear power and also does not sacrifice many people to the demon smog.
We have seen solar panel efficiency roughly double every 7 years. That's not what I would call a "small increase".
Even if solar panels were 100% efficient it would not change the overall picture very much. Solar panels are expensive and do not use space efficiently.
By efficiency I meant the amount you pay per kilowatt-hour. It's a figure that has consistently halved every 7 years over the last two decades. Space on top of most buildings is unused, and there are huge deserts that aren't used.
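The trend described here (solar cost per kWh falling by half roughly every 7 years) can be sketched as simple exponential decay; the starting price below is an illustrative assumption, not a figure from the thread:

```python
# Illustrative sketch of a cost-per-kWh that halves every 7 years.
def solar_cost(start_price, years, halving_period=7.0):
    """Price per kWh after `years`, halving every `halving_period` years."""
    return start_price * 0.5 ** (years / halving_period)

# Over roughly two decades (3 halvings), the price falls to 1/8 of the start:
start = 0.40  # assumed $/kWh at year 0
print(round(solar_cost(start, 21), 3))  # 21 years = 3 halvings -> 0.05
```

Whether the historical data actually fits a clean 7-year halving is the empirical question being debated; the sketch only shows what the claimed trend implies if it holds.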
Does that include the subsidies many governments have been providing to solar?
Subsidies per kilowatt-hour didn't rise exponentially. I'm not sure to what extent they are factored out. Solar is also not the only form of energy that gets subsidized. In Germany we used to pay billions per year in coal subsidies.
They started from zero, so it's technically super-exponential.
From what I remember, transportation's responsible for about a third of CO2 emissions, a bit less than electricity generation. (Various other sources make up the remaining third, most not involving direct energy consumption.) I'm not sure exactly how that translates to energy consumption, but given the economies of scale involved, I suspect the power grid would end up dominating total energy consumption.
Is this a claim about the choices we will make, or about what is possible? If (1), I can buy it as an argument that states will not be rational enough to choose better options; if (2), I think it's false.

[Please read the OP before voting. Special voting rules apply.]

Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.

[Please read the OP before voting. Special voting rules apply.]

Reductionism as a cognitive strategy has proven useful in a number of scientific and technical disciplines. However, reductionism as a metaphysical thesis (as presented in this post) is wrong. Verging on incoherent, even. I'm specifically talking about the claim that in reality "there is only the most basic level".


Meta-comment: I'm not sure that structure or voting scheme is particularly useful. The hope would be to allow conversation about contrarian viewpoints which are actually worth investigating. I'm not sure how you separate the wheat from the chaff, but that should be the goal...

Yes. Contrarian position: This thread would be better if we upvoted contrarian positions that are interesting or caused updates, not those that we disagree with.

Upvote interestingness, downvote incoherence, ignore agreement and disagreement?
Although you must be certain that incoherence is actually incoherence. Inferential distance means that an idea sufficiently distant from your own beliefs will seem incoherent. Otherwise I like this.
I think it might be better to have one where you upvote things you agree with, and just never downvote.

[Please read the OP before voting. Special voting rules apply.]

An AI which followed humanity's CEV would make most people on this site dramatically less happy.

Do you mean that, if shown the results, we would decide that we don't like humanity's CEV, or that humanity desires that we be unhappy?
What Nancy said, so 1, and instrumentally but not terminally 2.
Or possibly that if the majority of people got what they want, most people at LW would be incidentally made unhappy.
My intuition is in agreement with this, but I would love a more worked out description of your own thoughts (in part because my own thoughts aren't clear).