If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, Dec. 1 - Dec. 7, 2014
346 comments

GiveWell's top charities updated today. Compared to previous recommendations, they have put Against Malaria Foundation back on the top charities list (partial explanation here), and they have also added an "Other Standout Charities" section.

8tog
Here's GiveWell's detailed announcement post. I've posted a summary of the new (and old) charities to LW Discussion.
6Metus
Also note that there is information on tax-deductibility of donations outside of the U.S. on that site. If you are paying a lot of income tax you might be able to get some money back, donate even more or some combination of those two.
9tog
Even more easily, you can visit this interactive tool I made and it'll tell you which charities are tax-deductible or tax-efficient in your country, and give you the best links to them. It also has a dropdown covering 18 countries, including some in which tax-efficient routes are far from obvious.
5Metus
Thank you. It is a bit of a shame that it is so complicated to donate tax-efficiently from one EU country to another. I can understand complications going from the US to the EU member states and vice versa but this is plenty strange.

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I thought it would only be due diligence if I tried to track users on LessWrong who have received advice from here and for whom it's backfired. In other words, to avoid bias in the record, we might notice what LessWrong as a community is bad at giving advice about. So, I'm seeking feedback. If you have anecdotes or data on how a plan or advice directly from LessWrong backfired, failed, or didn't lead to satisfaction, please share below. If you would like to keep the details private, feel free to send me a private message.

If the ensuing thread doesn't get enough feedback, I'll try asking this question as a Discussion post in its own right. If for some reason you think this whole endeavor isn't necessary, critical feedback about that is also welcome.

[-]Shmi110

What cause would an NRx EA donate to?

8bramflakes
Depends on what kind of NRx. There isn't a single value system shared among them. The popular trichotomy is "Techno-commercialist / Theonomist / Ethno-nationalist" - I don't know about the first two, but the ethnonationalists would probably disagree with a lot of Givewell's suggestions.
3skeptical_lurker
Not uniformly, I think - Japan is an Ethno-nationalist state, and also used to be the world's largest supplier of foreign aid.
3[anonymous]
Ethno-nationalists certainly have no problem with geopolitics or mutually-beneficial investment, and foreign aid can be useful there.
3ZankerH
The most coherent proposal I've heard so far is applying being TRS at the polling place to charity: The principle of optimising your donations for cultural-marxist outrage.
1Azathoth123
Sarah Hoyt isn't quite NRx, but her recent (re)post here seems relevant. In particular, the old distinction between deserving and undeserving poor.
1IlyaShpitser
The Austrian "Iron Ring" party. Restore the Hapsburg Empire! ---------------------------------------- Yes, I am aware that there are things to understand about the crazy straw design world. :)
-1Azathoth123
NRx's are generally not utilitarians.
4Shmi
I've met at least one claiming he is.
0skeptical_lurker
What ethical system do you follow?
-1Azathoth123
I'm a virtue ethicist.
-4Lumifer
Ha. Good question. Subverting the Cathedral, maybe?
[-]Shmi110

My feeling was that SSC is getting close to LW in terms of popularity, but Alexa says otherwise: SSC hasn't yet cracked the top 100k sites (LW is ranked 63,755) and has ~600 links to it vs. ~2000 for LW. Still very impressive for a part-time hobby of one overworked doctor. Sadly, 20% of searches leading to SSC are for heartiste.

My suspicion is that SSC would get a lot more traffic if its lousy WP comment system were better, but then Scott is apparently not motivated by traffic, so there is no incentive for him to improve it.

SSC would get a lot more traffic

SSC getting a lot more traffic might change it and not necessarily for the better.

4ChristianKl
Why do you think that's the case? Are there any cases of a blogger getting much more popular after switching to a different comment system? And what comment system would you advocate?
2Shmi
It's a good question; maybe it wouldn't. I'm not aware of any A/B testing done on that; I simply go by trivial inconveniences. Scott is against a reddit-style karma system, so I'd go for Scott marking comments he finds interesting, at a minimum. Additionally, comment formatting and presentation that improves nesting and visibility would be nice. Reddit/LW is an OK compromise; userfriendly.org is better in terms of seeing more threads at a glance.
-2ChristianKl
There are many reasons against using the reddit code base. While it's open source in theory, it's not structured in a way that allows easy updating. Is there any solution that would be plug-and-play for a WordPress blog that you would favor Scott implementing? Coding something himself would be more than a trivial inconvenience. I also think you underrate the time cost of comment moderation. Wanting to be a blogger and wanting to moderate a forum are two different goals.
0Shmi
Scott uses WP, and it has plenty of comment ranking plugins. Here is one popular blog with a simple open voting system: http://www.preposterousuniverse.com/blog . It is probably not good enough for SSC, but many other versions are available. As I said, Scott is not interested in improving the commenting system, and probably is not interested in taking any steps beyond great writing toward improving the blog's popularity, either.
0ChristianKl
That has voting but it doesn't seem to have threaded comments. That means switching to that plugin would break all the existing comment threads. I would guess that the main issue is that he doesn't want to do work to improve it. Arguing what's an improvement also isn't easy. If I look at the blogs of influential people who do put effort into it, I don't see that they all use a comment solution that Scott refuses to use.
2NancyLebovitz
The number of comments can be rather overwhelming as it is. Do you want a larger SSC community, for the ideas to get a wider audience, or what?
5Shmi
It is overwhelming because it is poorly formatted and presented, not because of the volume. There are plenty of forums with better comment formatting, like reddit, userfriendly.org, or slashdot. Lack of comment ranking does not help readability, either.
1NancyLebovitz
I find that the ~new~ marker on new comments and the dropdown list of new comments are enough to get by with -- for me, the quantity really is the overwhelming aspect on the more popular posts.
2Shmi
Other forums have lots more comments, yet are easier to navigate through.

Good futurology is different from storytelling in that it tries to make as few assumptions as possible. How many assumptions do we need to allow cryonics to work? Well, a lot.

  • The true point of no return has to be indeed much later than we believe it to be now. (Besides, does it even exist at all? Maybe a super-advanced civilization can collect enough information to backtrack every single process in the universe down to the point of one's death. Or maybe not.)

  • Our vitrification technology is not a secure erase procedure. Pharaohs also thought that their mummification technology was not a secure erase procedure. Even though we have orders of magnitude more evidence to believe we're not mistaken this time, ultimately, it's the experiment that judges.

  • Timeless identity is correct, and it's you rather than your copy that wakes up.

  • We will figure out brain scanning.

  • We will figure out brain simulation.

  • Alternatively, we will figure out nanites, and a way to make them work through the ice.

  • We will figure out all that sooner than the expected time of the brain being destroyed by: slow crystal formation; power outages; earthquakes; terrorist attacks; meteor strikes; going bankrupt; economy collapse; n

... (read more)
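The conjunctive shape of the list above can be made concrete with a quick back-of-the-envelope calculation. Every number below is a purely illustrative placeholder, not an estimate from anyone in this thread; the point is only that a chain of AND-ed assumptions multiplies down fast, while the OR-ed alternatives (scanning-plus-simulation vs. nanites) only partially offset that:

```python
# Back-of-the-envelope sketch of how conjunctive cryonics assumptions combine.
# Every probability here is an illustrative placeholder, not a real estimate.

p_no_return_late = 0.8   # the true point of no return is later than believed
p_vitrification = 0.5    # vitrification is not a "secure erase"
p_in_time = 0.3          # all of this happens before the brain is destroyed

# Revival requires (scanning AND simulation AND timeless identity)
# OR (nanites that work through the ice).
p_scan, p_sim, p_identity, p_nanites = 0.4, 0.4, 0.3, 0.1
p_upload_path = p_scan * p_sim * p_identity
p_revival = 1 - (1 - p_upload_path) * (1 - p_nanites)

p_total = p_no_return_late * p_vitrification * p_revival * p_in_time
print(f"revival path: {p_revival:.3f}, overall: {p_total:.3f}")
```

Even with individually generous-looking placeholder numbers, the conjunction lands well under 2% here, which is the shape of the argument the comment is making; any disagreement is then about the factor values, not the arithmetic.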
9Gondolinian
While mainstream belief in an afterlife is probably a contributing factor in why we aren't doing enough longevity/immortality research, I doubt it's a primary cause. Firstly, because very few people alieve in an afterlife, i.e. actually anticipate waking up in an afterlife when they die. (Nor, for that matter, do most people who believe in a Heaven/Hell sort of afterlife actually behave in a way consistent with their belief that they may be eternally rewarded or punished for their behavior.) Secondly, because the people who are in a position to do such research are less likely than the general population to believe in an afterlife. And finally, because even without belief in an afterlife, people would still probably have a strong sense of learned helplessness around fighting death, so instead of a "Dying is sure scary, we won't truly die, so problem solved, let's do something else" attitude, we'd have a "Dying is sure scary, but we can't really do anything about it, let's do something else" attitude. (I have a hunch the former is really the latter dressed up a bit.)
2maxikov
On this particular point, I would say that people who are in a position to allocate funds for research programs are probably about as likely as the general population to believe in the belief in an afterlife. Generally, I agree - it's definitely not the only problem. The USSR, where people were at least supposed not to believe in an afterlife, didn't have longevity research as its top priority. But it's definitely one of the cognitive stop signs that prevent people from thinking about death hard enough.
2RowanE
About half of your list is actually an OR statement: (timeless identity AND brain scanning AND simulation) OR (nanites through ice), and that doesn't even exhaustively cover the possibilities, since at the least it needs a term for unknown unknowns we haven't hypothesized yet. It's probably easiest to cover all of them with something like "it's actually possible to turn what we're storing when we vitrify a cryonics patient back into that person, in some form or another". And the vast majority of cryonicists, or at least those in Less Wrong circles whom your post is likely to reach, already accept that the probability of cryonics working is low. But exactly how low they think the probability is, after considering the four assumptions your list reduces to, is something they've definitely already considered and probably would disagree with you on, if you actually gave a number for what "very low" means to see whether we even disagree (note: if it's above around 1%, consider how many assumptions there are in trying to achieve "longevity escape velocity", and maybe spread your bets). And, as others have already pointed out, belief in cryonics doesn't really funge against longevity research. If anything, I expect the two are very strongly correlated. At least as far as belief in them being desirable or possible goes, it's quite apparent that they're both ideas that are shared by a few communities such as our own and rejected by other communities, including "society at large". How much we spend on each is probably affected by e.g. cryonics being a thing you can buy for yourself right now but longevity being a public project suffering from commons problems, so the correlation might be less strong or even inverse if you check it (I would be very surprised if it actually turned out to be inverse), but if so that wouldn't necessarily be because of the reasons you suggest.
3maxikov
I would say it's probably no higher than 0.1%. But by no means am I arguing against cryonics; I'm arguing for spending more resources on improving it. All sorts of biologists are working on longevity, but very few seem to work on improving vitrification. And I have a strong suspicion that it's not because nothing can be done about it - most of the time I talked to biologists about it, we were able to pinpoint non-trivial research questions in this field.
6ChristianKl
I think LW looks favorably on the work of the Brain Preservation Foundation and multiple people even donated.
1ChristianKl
How about putting numbers on it? Without doing so, your argument is quite vague. Have you actually looked at the relevant LW census numbers for what "we are hoping"?
0maxikov
I would estimate the cumulative probability as in the ballpark of 0.1%. I was actually referring to the apparent consensus I see among researchers, but it's indeed vague. I should look up the numbers if they exist.
1ChristianKl
Most researchers don't do cryonics. I think a good majority of LW thinks that anti-aging research is underfunded. I don't buy the thesis that people who do cryonics are investing less effort into other ways of fighting aging. The 2013 LW census asked these questions: "P(Anti-Agathics): What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time?", "P(Cryonics): What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then?", and "Are you signed up for cryonics?" The general takeaway is that even among people signed up for cryonics, the majority doesn't think its chances of working are bigger than 50%. But they do believe it's bigger than 0.1%.
0pengvado
Who is "we", and what do "we" believe about the point of no return? Surely you're not talking about ordinary doctors pronouncing medical death, because that's just irrelevant (pronouncements of medical death are assertions about what current medicine can repair, not about information-theoretic death). But I don't know what other consensus you could be referring to.
0maxikov
Surely I do. The hypothesis that after a certain period of hypoxia at normal body temperature the brain sustains enough damage that it cannot be recovered even if you manage to get the heart and other internal organs working is rather arbitrary, but it's backed up by a lot of data. The hypothesis that with machinery for direct manipulation of molecules, which doesn't contradict our current understanding of physics, we could fix a lot beyond the self-recovery capabilities of the brain is perfectly sensible, but it's just a hypothesis without data to back it up. This, of course, may remind you of the skepticism toward flying machines heavier than air in the 19th century. And I do believe that some skepticism was a totally valid position to take, given the evidence they had. There are various degrees of establishing the truth, and "it doesn't seem to follow from our fundamental physics that it's theoretically impossible" is not the highest of them.
0gothgirl420666
You missed a few:

  • you will die in a way that leaves your brain intact

  • people will care enough in the future to revive frozen people

  • the companies that provide these services will stick around for a long time
-1cameroncowan
I think trying to stop death is a rather pointless endeavour from the start, but I agree that the fact that most everyone has accepted it, and that we have some noble myths to paper it over, certainly keeps resources from being devoted to living forever. But then, why should we live forever?

New research suggests that life may be hard to come by on certain classes of planets even if they are in the habitable zone, since they will lose their water early on. See here. This is noteworthy in that in the last few years almost all other research has pointed towards astronomical considerations not being a major part of the Great Filter, and this is a suggestion that slightly more of the Filter may be in our past.

How do people who sign up for cryonics, or want to sign up for cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don't know how much of it is morbid rationalisation on my part just because signing up for cryonics in the UK seems not quite as reliable/easy as in the US somehow, but it still seems like a real issue to me.

Obviously, when I'm awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in betwee... (read more)

[-]jefftk220

Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?

Say we can run people on computers, we can start and stop them at any moment, but available power fluctuates. So we come up with a system where when power drops we pause some of the people, and restore them once there's power again. Once we've stopped someone, is there a moral reason to start them again?

My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.

(But I'm not signed up for cryonics because I don't think the information would be preserved.)

0MockTurtle
Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer other humans not to cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!). Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won't exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time... Thanks for the help!
9CBHacking
Short version: I adjusted my sense of "self" until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day. It didn't actually take much for me to take that leap when it came to cryonics. The trigger for me was "you don't die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die". I'm not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: "Yes, of course I want you to perform the last-ditch operation to save my life!" If you're curious: My default self-view for a long time was basically "the continuity that led to me is me, and any forks or future copies/simulations aren't me", which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it "CBH Alpha") as myself. If a copy of me was created; "I" was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance or doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how much sense that didn't make; I always am the continuity that led to me, and am therefore whatever instance of CBH that you can hypothesize, and therefore I can't pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all those selves. This doesn't mean I'm not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me and I need to optimize for him. The old instance no longer exists and doesn't need to be optimized for. On the other hand, this does mean I'm not OK with the idea of a mac
0MockTurtle
I remember going through a similar change in my sense of self after reading through particular sections of the sequences - specifically thinking that logically, I have to identify with spatially (or temporally) separated 'copies' of me. Unfortunately it doesn't seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of 'what if the teleporter malfunctions and you don't get recreated at your destination? Is that a bad thing?' is almost without meaning, as there would no longer be a 'me' to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious. As pointed out by Richard, this is probably even more absurd than I realise, as I am not 'conscious' of all my desires at all times, and thus I cannot go down this road of 'if I do not currently care about something, does it matter?'. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when. Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those 'click' moments...
0CBHacking
The first is a short story that is basically a "garden path" toward this whole idea, and was a real jolt for me; you wonder why the narrator would be worried about this experiment going wrong, because she won't be harmed regardless. That world-view gets turned on its ear at the end of the story. The second is longer, but still a pretty short story; I didn't see a version of it online independent of the novel-length collection it's published in. It explores the Star Trek transporter idea, in greater detail and more rationally than Star Trek ever dared to do. The third is a huuuuuuge comic archive (totally worth reading anyhow, but it's been updating every single day for almost 15 years); the story arc in question is The Teraport Wars ( http://www.schlockmercenary.com/2002-04-15 ), and the specific part starts about here: http://www.schlockmercenary.com/2002-06-20 . Less "thinky" but funnier / more approachable than the others.
0RowanE
Although with your example in particular it's probably justified by starting off with very confused beliefs on the subjects and noticing the mess they were in, at least as far as suggesting it to other people I don't understand how or why you'd want to go change a sense of self like that. If identity is even a meaningful thing to talk about, then there's a true answer to the question of "which beings can accurately be labelled "me"?", and having the wrong belief about the answer to that question can mean you step on a transporter pad and are obliterated. If I believe that transporters are murder-and-clone machines, then I also believe that self-modifying to believe otherwise is suicidal.
2Richard_Kennaway
Perhaps that is not so obvious. While you are awake, do you actually have that want while it is not in your attention? Which is surely most of the time. If you are puzzled about where the want goes while you are asleep, should you also be puzzled about where it is while you are awake and oblivious to it? Or looking at it the other way, if the latter does not puzzle you, should the former? And if the former does not, should the Long Sleep of cryonics? Perhaps this is a tree-falls-in-forest-does-it-make-a-sound question. There is (1) your experience of a want while you are contemplating it, and (2) the thing that you are contemplating at such moments. Both are blurred together by the word "want". (1) is something that comes and goes even during wakefulness; (2) would seem to be a more enduring sort of thing that still exists while your attention is not on it, including during sleep, temporarily "dying" on an operating table, or, if cryonics works, being frozen.
1MockTurtle
I think you've helped me see that I'm even more confused than I realised! It's true that I can't go down the road of 'if I do not currently care about something, does it matter?' since this applies when I am awake as well. I'm still not sure how to resolve this, though. Do I say to myself 'the thing I care about persists to exist/potentially exist even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop due to inattention/unconsciousness'? I think that seems like a pretty solid thing to think, and is useful, but when I say it to myself right now, it doesn't feel quite right. For now I'll meditate on it and see if I can internalise that message. Thanks for the help!
-9advancedatheist

What exactly causes a person to stalk other people? Is there research that investigates when people start to stalk and when they don't?

To what extent is getting a stalker a risk worth thinking about before it's too late?

No research, just my personal opinion: borderline personality disorder.

alternating between high positive regard and great disappointment

First the stalker is obsessed by the person because the target is the most awesome person in the universe. Imagine a person who could give you infinitely many utilons, if they wanted to. Learning all about them and trying to befriend them would be the most important thing in the world. But at some moment, there is an inevitable disappointment.

Scenario A: The target decides to avoid the stalker. At the beginning the stalker believes it is merely a misunderstanding that can be explained, that perhaps they can prove their loyalty by persistence or something. But later they give up hope, or receive a sufficiently harsh refusal.

Scenario B: The stalker succeeds in befriending the target. But they are still not getting the infinite utilons which they believe they should be getting. So they try to increase the intensity of the relationship to impossible levels, as if trying to become literally one person. At some moment the target refuses to cooperate, or is simply unable to cooperate in the way the stalker wants them to, but to the stalker even this... (read more)

5polymathwannabe
This sounds eerily close to the mystical varieties of theistic religions.
2Lumifer
The only anonymous celebrity I can think of is Bansky. Staying anonymous is not compatible with becoming famous.
8Jayson_Virissimo
Satoshi Nakamoto is also famous and pseudonymous, but this conjunction is very rare IMO.
0Lumifer
Aha, thank you, a second example. Though I don't know if he's known by name in the general population.
3Viliam_Bur
I would guess most people become famous before they realize the advantage of anonymity, and then it's too late to start with a fresh name. But it's also possible that it's simply not worth the effort, because when you become famous enough, someone will dox you anyway. It could be interesting to know how much advantage (a trivial inconvenience for wannabe stalkers) a pseudonym provides when your real name can easily be found on Wikipedia, e.g. "Madonna". Or how big an emotional difference it makes for a potential stalker whether a famous blogger displays their photo on their blog or not. My favorite anonymous person is B. Traven.
1Gondolinian
*Banksy
5Lumifer
He's so anonymous I don't even know how to spell his (or maybe her) name! :-)
0ChristianKl
I'm at the moment quite unsure how to handle a girl who seems to have bipolar depression and wants to have a relationship with me. Four years ago I think she was in a quite stable mental state (I'm more perceptive today than I was back then). At the time she turned me down. I haven't seen her for a while, and now she seems to be pretty broken as a result of mobbing in an environment that she has now left. On the one hand, there is a desire in me to try to fix her. Having a physical relationship with her also has its appeal. On the other hand, I can't see myself being open personally with her as long as she is in that messed-up mental state.
4Viliam_Bur
That is a difficult situation, but the last sentence suggests that the correct answer is "no". :(
2Jackercrack
I've had a 3-year relationship with a woman I thought I could fix. She said she'd try hard to change, I said I'd help her, and I tried to help her and was extremely supportive for a long time. It was emotionally draining, because behind each newly climbed mountain there was another problem, and another, and another. Every week a new thing that was bad or terrible about the world. I eventually grew tired of the constant stream of disasters, most stemming from normal situations interpreted weirdly and then obsessed over until she broke down in tears. It became clear that things were not likely to ever get better, so I left. There were a great number of fantastic things about this woman; we were both breakdancers and rock climbers, we both enjoyed anime and films, we shared a love for spicy food and liked cuddling, we both had good bodies. We had similar mindsets about a lot of things. I say all this so that you understand exactly how much of a downside an unstable mental state can be. So that you know that all of these great things about her were in the end not enough. Understand what I mean when I tell you it was not worth it for me and that I recommend against it. That I lost 3 years of time I could have spent making progress, in a state with no energy. If you do plan to go for it anyway, set a time limit on how long you will try to fix her before letting go, some period of time less than half a year. I'll answer any questions that might seem useful.
0ChristianKl
Trying hard to change is not useful for changing; it keeps someone in place. Someone who has emotional issues because they obsess too much doesn't get a benefit from trying harder. Accepting such a frame is not the kind of mistake I would make. If a person breaks down crying, I'm not dissociating and going into a low-energy state. It rather draws me into the situation and makes me more present. But I'm not sure whether it brings me into a position where I consider the other person an agent rather than a Rubik's cube that has to be solved.
1Jackercrack
Yes, well, I wasn't a rationalist at the time, nor did I know enough about psychology to say what the right thing was to do to help a person whose father... Well, I cannot say the exact thing, but suffice to say that if I ever meet the man, at least one of us is going to the hospital. I'm rather non-violent at all other times. There wasn't exactly a how-to guide I could read on the subject. I am also the kind of person that would be drawn out and try to help a person who breaks down crying. You use your energy to help their problems, and have less left for yourself. It starts to wear on you when you get into the third year of it happening every second week like clockwork, over such charming subjects as a thoughtless word by a professional acquaintance or having taken the wrong bins out. Bonus points for taking the wrong bins out being a personal insult that means I hate her. Anyway, that really isn't the point. Telling me how to solve my Rubik's cube, which I am no longer in contact with, is not very helpful. The point is, I've been there and I want to help you make the right decision, whatever that may be for you.
-1ChristianKl
As far as I see it, you basically were faced with a situation without having any tools to deal with it. That makes your situation quite different. When sitting in front of the hospital bed of my father, who was speaking confused stuff because of morphine, my instinctual response was to do a nonverbal trance induction to have him in a silent state in half a minute. Not because I read some how-to guide on how to deal with the situation but because NLP tools like that are instinctual behavior for me. I'm very far from normal, and so a lot of lessons that might be drawn from your experience for people who are similar to you aren't applicable to me. While reading a how-to guide doesn't give you any skills, there is psychological literature on how to help people with most problems.
1Jackercrack
You may be right about my lack of tools, and I can't honestly say I used the "try harder" in the proper manner, seeing as I hadn't been introduced to it at the time. I played the role of the supportive boyfriend and tried (unsuccessfully) to convince her to go to a therapist who was actually qualified at that sort of thing. I am suspicious, however, that you took pains to separate yourself into a new reference class before actually knowing that one way or the other. Unless of course you have a track record of taking massive psychological issues and successfully fixing them in other people. And are we really doing this? I mean come on. A person offers to help and you immediately go for the throat, picking apart mistakes made in an attempt to help a person, then using rather personal things in a subtly judgemental manner. Do you foresee that kind of approach ending well? Is that really the way you want this sort of conversation to play out? I like to think we can do better. I have information. Do you want it or not?
0chaosmage
Are you sharing your feelings or asking for advice?
0ChristianKl
It's context for the question I asked earlier. There's a lot of information that goes into decision making that I won't be open about publicly, so I'm not really asking for specific advice.

Elon Musk often advocates looking at problems from a first-principles calculation rather than by analogy. My question is: what does this kind of thinking imply for cryonics? Currently, the cost of full-body preservation is around $80k. What could be done in principle with scale?

Ralph Merkle put out a plan (although lacking in details) for cryopreservation at around $4k. This doesn't seem to account for paying the staff or transportation. The basic idea is that one can reduce the marginal cost by preserving a huge number of people in one vat. There is some discussion of this going on at Longecity, but the details are still lacking.

5jefftk
Currently the main cost in cryonics is getting you frozen, not keeping you frozen. For example, Alcor gives these costs for neuropreservation: * $25k -- Comprehensive Member Standby (CMS) Fund * $30k -- Cryopreservation * $25k -- Patient Care Trust (PCT) * $80k -- Total The CMS fund is what covers the Alcor team being ready to stabilize you as soon as you die, and transporting you to their facility. Then your cryopreservation fee covers filling you with cryoprotectants and slowly cooling you. Then the PCT covers your long term care. So 69% of your money goes to getting you frozen, and 31% goes to keeping you like that. (Additionally I don't think it's likely that current freezing procedures are sufficient to preserve what makes you be you, and that better procedures would be more expensive, once we knew what they were.) EDIT: To be fair, CMS would be much cheaper if it were something every hospital offered, because you're not paying for people to be on deathbed standby.
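The 69%/31% split quoted above is just fee arithmetic; here's a quick sketch using the Alcor neuropreservation figures from this comment (a back-of-envelope check, not Alcor's own accounting):

```python
# Alcor neuropreservation fees quoted above (USD)
cms = 25_000               # Comprehensive Member Standby: deathbed standby + transport
cryopreservation = 30_000  # perfusion with cryoprotectants and slow cooldown
pct = 25_000               # Patient Care Trust: long-term storage

total = cms + cryopreservation + pct             # 80_000
getting_frozen = (cms + cryopreservation) / total
staying_frozen = pct / total

print(f"getting frozen: {getting_frozen:.0%}")   # 69%
print(f"staying frozen: {staying_frozen:.0%}")   # 31%
```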
0Lumifer
So, for how long will that $25K keep you frozen? Any estimates?
3gjm
I believe the intention is "unlimitedly long", which is reasonable if (1) we're happy to assume something roughly resembling historical performance of investments and (2) the ongoing cost per cryopreservee is on the order of $600/year.
3Lumifer
The question is whether the cryofund can tolerate the volatility. Aha, that's the number I was looking for, thank you.
1gjm
Note that it's just a guess on my part (on the basis that a conservative estimate is that if you have capital X then you can take 2.5% of it out every year and be pretty damn confident that in the long run you won't run out barring worldshaking financial upheavals). I have no idea what calculations Alcor, CI, etc., may have done; they may be more optimistic or more pessimistic than me. And I haven't made any attempt at estimating the actual cost of keeping cryopreservees suitably chilled.
0Lumifer
Didn't you say it's on the order of $600/year?
3gjm
It sounds as if I wasn't clear, so let me be more explicit. * I believe the intention is to be able to keep people cryopreserved for an unlimited period. * For this to be so, the alleged one-off cost of keeping them cryopreserved should be such as to sustain that ongoing cost for an unlimited period. * A conservative estimate is that with a given investment you can take 2.5% of it out every year and, if your investments' future performance isn't tragically bad in comparison with historical records, be reasonably confident of never running out. * This suggests that Alcor's estimate of the annual cost of keeping someone cryopreserved is (as a very crude estimate) somewhere around $600/year. * This is my only basis for the $600/year estimate; in particular, I haven't made any attempt to estimate (e.g.) the cost of the electricity required to keep their coolers running, or the cost of employing people to watch for trouble and fix things that go wrong. (Why 2.5%? Because I've heard figures more like 3-4% bandied around in a personal-finance context, and I reckon an institution like Alcor should be extra-cautious. A really conservative figure would of course be zero.)
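The back-calculation above can be written out explicitly. This is a sketch of the reasoning only: the $25k figure is Alcor's quoted PCT fee, and the 2.5% safe perpetual withdrawal rate is the commenter's conservative assumption, not Alcor's actual budget.

```python
pct_fund = 25_000        # Alcor's Patient Care Trust fee (USD)
withdrawal_rate = 0.025  # assumed conservative perpetual withdrawal rate

# Implied sustainable annual maintenance budget per cryopreservee
annual_maintenance_budget = pct_fund * withdrawal_rate
print(annual_maintenance_budget)  # 625.0 -- i.e. "on the order of $600/year"
```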
2Lumifer
Ah, I see. I think I misread how the parentheses nest in your post :-) So you have no information on the actual maintenance cost of cryopreservation and are just working backwards from what Alcor charges. I'm having doubts about this number, but that's not a finance thread. And anyway, in this context what matters is not reality, but Alcor's estimates. That's debatable -- inflation can decimate your wealth easily enough. Currently inflation-adjusted Treasury bonds (TIPS) trade at negative yields.
0gjm
Correct. I did try to make it as clear as I could that I do too... Well, I defined it as the maximum amount you can take out without running out of money. I agree that if instead you define it as the maximum net outflow that (with some probability close to 1) leaves your fortune increasing rather than decreasing in both long and short terms, it could be negative in times of economic stagnation.
2philh
No, ve said that "unlimitedly long" is reasonable if that's the cost. Ve didn't say that that was the cost.
4RomeoStevens
I've seen extremely low plastination estimates due to the lack of maintenance costs. Very speculative obviously, and the main component of cost is still the procedure itself (though there are apparently some savings here as well).

I'm going to narrate a Mutants and Masterminds roleplaying campaign for my friends, and I'm planning that the final big villain behind all the plots will be... Clippy.

Any story suggestions?

[-]RowanE110

Sabotage of a big company's IT systems, or of an IT company that maintains those systems, to force people to use paperclip-needing physical documents while the systems are down. Can have the paperclips be made mention of, but as what seems to the players like just fluff describing how this (rival company/terrorist/whatever) attack has disrupted things.

7ilzolende
It depends on how familiar your friends are with uFAI tropes, so you may want to tone these up or down to keep foreshadowing at the right level. If they're highly familiar, you may want to switch paperclips with staples. * Monsters attack a factory, which happens to manufacture binder clips. * An infectious disease spreads across [home city], causing photosensitive epilepsy. Careful observers will note that seizures occur most often when lights strobe at computer monitor refresh rates. * Corporate executives experience wave of meningitis (nanotechnology-induced). When they return to work, they cancel all paperless-office initiatives. * Population of [distant area] missing. Buried underground: lots of paperclips. (If needed, have the paperclips test positive for some hallucinogen as a red herring). * Iron mines report massive thefts, magnetism-related supervillain denies all responsibility and is actually innocent. Alternatively, if any heroes have metal-related powers, frame one of them and present false evidence to the players that a supervillain did it. * Biotechnology companies seem to be colluding about something. The secret: somebody or something has been producing genetic material with their equipment, and they need to find out who, ideally without causing a panic. Maybe some superheroes could investigate for them? If you do run this, please share your notes with us. Edit: Now I want to run this sort of campaign. Thanks!
2polymathwannabe
Good ideas. My friends don't know anything about uFAI topics; if I drop the name "Clippy," they'll think of the MS Office assistant.

Several weeks ago I wrote a heavily upvoted post called Don't Be Afraid of Asking Personally Important Questions on LessWrong. I've been thinking about a couple of things since I wrote that post.

  • What makes LessWrong a useful website for asking questions which matter to you personally is that there are lots of insightful people here with a wide knowledge base. However, for some questions, LessWrong might be too much, or the wrong kind of, monoculture to provide the best answers. Thus, for weird, unusual, or highly specific questions, there might be better d

... (read more)

Animal Charity Evaluators have updated their top charity recommendations, adding Animal Equality to The Humane League and Mercy for Animals. Also, their donation-doubling drive is nearly over.

6ZankerH
Why would an effective altruist (or anyone wanting their donations to have a genuine beneficial effect) consider donating to animal charities? Isn't the whole premise of EA that everyone should donate to the highest utilon/$ charities, all of which happen to be directed at helping humans? Just curiosity from someone uninterested in altruism. Why even bring this up here?
[-]jefftk170

We don't all agree on what a utilon is. I think a year of human suffering is very bad, while a year of animal suffering is nearly irrelevant by comparison, so I think charities aimed at helping humans are where we get the most utility for our money. Other people's sense of the relative weight of humans and animals is different, however, and some value animals about the same as humans or only somewhat below.

To take a toy example, imagine there are two charities: one that averts a year of human suffering for $200 and one that averts a year of chicken suffering for $2. If I think human suffering is 1000x as bad as chicken suffering and you think human suffering is only 10x as bad, then even though we both agree on the facts of what will happen in response to our donations, we'll give to different charities because of our disagreement over values.
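The toy example's arithmetic, made explicit (a sketch; the 1000x and 10x weights are the hypothetical moral weights from the paragraph above, not anyone's measured values):

```python
# Cost to avert one year of suffering, per the toy example (USD)
human_cost, chicken_cost = 200, 2

def better_charity(human_weight: float) -> str:
    """Pick the charity averting more weighted suffering-years per dollar.

    human_weight: how many times worse a year of human suffering is
    than a year of chicken suffering (chicken = 1 unit).
    """
    human_per_dollar = human_weight / human_cost
    chicken_per_dollar = 1 / chicken_cost
    return "human" if human_per_dollar > chicken_per_dollar else "chicken"

print(better_charity(1000))  # human   (5.0 vs 0.5 units per dollar)
print(better_charity(10))    # chicken (0.05 vs 0.5 units per dollar)
```

Same facts, different values, different donations: the crossover here sits at a 100x weight, which is why the disagreement persists even among people who agree on the empirical numbers.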

In reality, however, it's more complicated. The facts of what will happen in response to a donation are uncertain even in the best of times, but because a lot of people care about humans the various ways of helping them are much better researched. GiveWell's recommendations are all human-helping charities because of a combination of "... (read more)

[-][anonymous]70

I may write a full discussion thread on this at some point, but I've been thinking a lot about undergraduate core curriculum lately. What should it include? I have no idea why history has persisted in virtually every curriculum I know of for so long. Do many college professors still believe history has transfer-of-learning value in terms of critical thinking skills? Why? The transfer of learning thread touches on this issue somewhat, but I feel like most people on there are overvaluing their own field, hence computational science is overrepresented and social science, humanities, and business are underrepresented. Any thoughts?

The first question is what goals should undergraduate education have.

There is a wide spectrum of possible answers ranging from "make someone employable" to "create a smart, well-rounded, decent human being".

There is also the "provide four years of cruise-ship fun experience" version, too...

5[anonymous]
Check out page 40 of this survey. In order of importance: To be able to get a better job 86% / To learn more about things that interest me 82% / To get training for a specific career 77% / To be able to make more money 73% / To gain a general education and appreciation of ideas 70% / To prepare myself for graduate or professional school 61% / To make me a more cultured person 46%

First, undergrad freshmen are probably not the right source for wisdom about what a college should be.

Second, I notice a disturbing lack of such goals as "go to awesome parties" and "get laid a lot" which, empirically speaking, are quite important to a lot of 18-year-olds.

0RowanE
In systems like the US, where undergraduate freshmen are basically customers paying a fee, I expect their input on what they want, and on what they expect the product they're purchasing to be like, should be extremely relevant.
0polymathwannabe
Indeed, customers are usually expected to be informed about what they're buying. But in the case of education, where what the "customer" is buying is precisely knowledge, a freshman's opinion on what education should contain may be less well informed than, for example, a grad student's opinion.
0Lumifer
Yes, that is the "provide four years of cruise-ship fun experience" version mentioned. The idea that it's freshmen who are purchasing college education also needs a LOT of caveats.
0[anonymous]
Exactly which courses do you imagine do the most to help students go to the most awesome parties and get laid a lot?
6Alsadius
Ones with very little homework and a good gender ratio.
5Lumifer
The point is not that they need courses to help them with that. The point is that if you are accepting freshman desires as your basis for shaping college education, you need to recognize that surveys like the one you linked to present a very incomplete picture of what freshmen want.
2[anonymous]
If the desires you named are irrelevant to the discussion at hand, then can you please name the desires that you think are relevant which are not encapsulated by the survey and explain how they are relevant to what classes students are taking? Also, who is the right source of wisdom about what a college should be?
0Lumifer
For the bit of mental doodling that this thread is, the right source is you -- your values, your preferences, your prejudices, your ideals.
6Metus
Nerds tend to undervalue anything that is not math-heavy or easily quantifiable.
3Evan_Gaensbauer
Scott Alexander from Slate Star Codex has the idea that if the humanities are going to be taught as part of a core curriculum, it might be better to teach the history of them backwards.
2MrMind
When I was in high school, I discussed this very idea with my Philosophy teacher. She said that (at least here in Italy) curricula for humanities are still caught in the Hegelian idea that history unfolds in logical structures, so that it's easier to understand them in chronological order. I reasoned instead that contemporary subjects are more relevant, more interesting and we have much more data about them, so they would appeal much better to first year students.
3Nornagest
If I were designing a core curriculum off the top of my head, it might look something like this: First year: Statistics, pure math if necessary, foundational biology, literature and history of a time and place far removed from your native culture. Classics is the traditional solution to the latter and I think it's still a pretty good one, but now that we can't assume knowledge of Greek or Latin, any other culture at a comparable remove would probably work as well. The point of this year is to lay foundations, to expose students to some things they probably haven't seen before, and to put some cognitive distance between the student and their K-12 education. Skill at reading and writing should be built through the history curriculum. Second year: Data science, more math if necessary, evolutionary biology (perhaps with an emphasis on hominid evolution), basic philosophy (focusing on general theory rather than specific viewpoints), more literature and history. We're building on the subjects introduced in the first year, but still staying mostly theoretical. Third year: Economics, cognitive science, philosophy (at this level, students start reading primary sources), more literature and history. At this point you'd start learning the literature and history of your native language. You're starting to specialize, and to lay the groundwork for engaging with contemporary culture on an educated level. Fourth year: More economics, political science, recent history, cultural studies (e.g. film, contemporary literature, religion).
6Lumifer
Fifth year: spent unemployed and depressed because of all the student debt and no marketable skills. This is a curriculum for future philosopher-kings who never have to worry about such mundane things as money.
2Nornagest
"Core curriculum" generally means "what you do that isn't your major". Marketable skills go there, not here; it does no one any good to produce a crop of students all of whom have taken two classes each in physics, comp sci, business, etc.
3Lumifer
If you count the courses you suggest, there isn't much room left for a major. I think a fruitful avenue of thought here would be to consider higher (note the word) education in its historical context. Universities are very traditional places and historically they provided the education for the elite. Until historically recently education did not involve any marketable skills at all -- its point was, as you said, "engaging with contemporary culture on an educated level".
0Nornagest
Four to six classes a year, out of about twelve in total? That doesn't sound too bad to me. I took about that many non-major classes when I was in school, although they didn't build on each other like the curriculum I proposed. It may amuse you to note that I was basically designing that as a modernized liberal arts curriculum, with more emphasis on stats and econ and with some stuff (languages, music) stripped out to accommodate major courses. Obviously there's some tension between the vocational and the liberal aims here, but I know enough people who e.g. got jobs at Google with philosophy degrees that I think there's enough room for some of the latter.
2jaime2000
I studied at two state universities. At both of them, classes were measured in "credit hours" corresponding to an hour of lecture per week. A regular class was three credit hours and semester loads at both universities were capped at eighteen credits, corresponding to six regular classes per semester and twelve regular classes per year (excluding summers). Few students took this maximal load, however. The minimum semester load for full-time students was twelve credit hours and sample degree plans tended to assume semester loads of fifteen credit hours, both of which were far more typical.
1Lumifer
Sure, but that's evidence that they are unusually smart people. That's not evidence that four years of college were useful for them. As you probably know, there is a school of thought that treats college education as mostly signaling. Companies are willing to hire people from, say, the Ivies, because these people proved that they are sufficiently smart (by getting into an Ivy) and sufficiently conscientious (by graduating). What they learned during these four years is largely irrelevant. Is four years of a "modernized liberal arts curriculum" the best use of four years of one's life and a couple of hundred thousand dollars?
1Evan_Gaensbauer
What counts as a 'marketable skill', or even what would be the baseline assumption of skill for becoming a fully and generally competent adult in twenty-first century society, might be very different from what was considered skill and competence in society 50 years ago. Rather than merely updating a liberal education as conceived in the Post-War era, might it make sense to redesign the liberal education from scratch? Like, does a Liberal Education 2.0 make sense? What skills or competencies aren't taught much in universities yet, but are ones everyone should learn?
1cameroncowan
Perhaps we need to re-think what jobs and employment look like in the 21st century and build from there?
0Evan_Gaensbauer
That seems like a decent starting point. I don't know my U.S. history too well, as I'm a young Canadian. However, a cursory glance at the Wikipedia page for the G.I. Bill in the U.S. reveals that it, among other benefits, effectively lowered the cost of education not only for veterans after World War II, but also for their dependents. The G.I. Bill was still used through 1973, by Vietnam War veterans, so that's millions more than I expected. As attending post-secondary school became normalized, it shifted toward the status quo for getting better jobs. In favor of equality, people of color and women also demanded equal opportunity to such education by having discriminatory acceptance policies and whatnot scrapped. This was successful to the extent that several million more Americans attended university. So, a liberal education that was originally intended for upper(-middle) class individuals was seen as a rite of passage, for status, and then to stay competitive, for the 'average American'. This trend extrapolated until the present. It doesn't seem to me the typical baccalaureate is optimized for what the economy needed for the 20th century, nor for what would maximize the chances of employment success for individuals. I don't believe this is true for some STEM degrees, of course. Nonetheless, if there are jobs for the 21st century that don't yet exist, we're not well-equipped for those either, because we're not even equipped for the education needed for the jobs of the present. I hope the history overview wasn't redundant, but I wanted an awareness of design flaws of the current education system before thinking about a new one. Not that we're designing anything for real here, but it's interesting to spitball ideas. * If not already in high school, universities might mandate a course on coding, or at least how to navigate information and data better, the same way almost all degrees mandate a course in English or communications in the first year. It seems ludicrous this isn't already s
1NancyLebovitz
Persuasive writing and speaking. Alternatively, interesting writing and speaking.
0cameroncowan
That was basically my education (I took 5 years of Latin, 2 of ancient greek, philosophy, literature, art) and the only reason I didn't end up homeless camping out in Lumifer's yard was because I learned how to do marketing and branding. I think having practical skills is a good idea. Trade and Technical schools are a great idea.
2[anonymous]
1st year: 5 / 2nd year: 7 / 3rd year: 5 / 4th year: 4 That's over half their classes. I also counted 14 of those 21 classes are in the social sciences or humanities which seems rather strange after you denigrated the fields. Now the big question: how much weight do you put on the accuracy of this first draft?
0Nornagest
It's pretty simple. I think the subjects are important; I'm just not too thrilled about how they're taught right now. Since there's no chance of this ever being influential in any way, I may as well go with the fields I wish I had rather than the ones I have. As to accuracy: not much.
1ChristianKl
What do you mean by those terms? Understanding the principle of evolution is useful but I don't see why it needs a whole semester.
0Azathoth123
Um, the reason for studying Greek and Latin is not just because they're a far-removed culture. It's also because they're the cultures which are the memetic ancestors of the memes that we consider the highest achievements of our culture, e.g., science, modern political forms. Also this suffers from the problem of attempting to go from theoretical to practical, which is the opposite of how humans actually learn. Humans learn from examples, not from abstract theories.
2Evan_Gaensbauer
I just want to point out for the record that if we're discussing a core curriculum for undergraduate education, I figure it would be even better to get such a core curriculum into the regular, secondary schooling system that almost everyone goes through. Of course, in practice, implementing such would require an overhaul of the secondary schooling system, which seems much more difficult than changing post-secondary education. The reason for this is probably that changing the curriculum for post-secondary education, or at least one post-secondary institution, is easier: there is less bureaucratic deadweight, a greater variety of choice, and nimbler mechanisms in place for instigating change. So, I understand where you're coming from in your original comment above.
2zedzed
tl;dr: having a set of courses for everyone to take is probably a bad idea. People are different and any given course is going to, at best, waste the time of some class of people. A while ago, I decided that it would be a good thing for gender equality to have everyone take a class on bondage that consisted of opposite-gender pairs tying each other up. Done right, it would train students "it's okay for the opposite gender to have power, nothing bad will happen!" and "don't abuse the power you have over people." In my social circle, which is disproportionately interested in BDSM, this kinda makes sense. It may even help (although my experience is that by the time anyone's ready to do BDSM maturely, they've pretty much mastered not treating people poorly based on gender.) It would also be a miraculously bad idea to implement. In general, I think it's a mistake to have a "core curriculum" for everyone. For any given course in the catalog of, say, MIT, I could find, within 5 people I know, one person for whom taking that course would benefit nobody. (This is easier than it seems at first; me taking social science or literature courses makes nobody better off (the last social science course I took made me start questioning whether freedom of religion was a good thing. I still think it's a very good thing, but presenting me with a highly-compressed history of every inconvenience it's produced in America's history doesn't convince my system 1). Similarly, there exist a bunch of math/science courses that I would benefit greatly from taking, but that would just make the social science or literature people sad. Also, I know a lot of musicians, for whom there's no benefit from academic classes; they just need to practice a lot.) Having a typical LWer take a token literature class generally means they're going to spend ~200 hours learning stuff they'll forget exponentially. (This could be remedied by Anki, but there's a better-than-even chance the deck gets deleted the mome
0cameroncowan
As a writer, I agree with you. I am horrible at math. In my life 2x3=5 most of the time. If I had to suffer and fail at Calculus when I can't multiply some days I would certainly start writing books about evil scientists abusing a village for its resources and then have the village revolt against its scientific masters with pitchforks. Throw in a great protagonist and a love interest and I have a bestseller with possible international movie rights.
0[anonymous]
If a field doesn't require a lot of technical knowledge, why bother with college in the first place? I'm not so sure how useful your examples are since most creative writers and musicians will eventually fail and be forced to switch to a different career path. Even related fields like journalism or band manager require some technical skills.
0Gondolinian
Obligatory SMBC comic. :)
0zedzed
Signalling, AKA why my friend majoring in liberal arts at Harvard can get a high-paying job even though college has taught him almost no relevant job skills.
1Alsadius
* History illuminates the present. A lot of people care about it, a lot of feuds stem from it, and a lot of situations echo it. You can't understand the Ukrainian adventures Putin is going on without a) knowing about the collapse of the Soviet Union to understand why the Russians want it, b) knowing about the Holodomor to understand why the Ukrainians aren't such big fans of Russian domination, and arguably c) knowing about the mistakes the west made with Hitler, to get a sense of what we should do about it. * History gives you a chance to learn from mistakes without needing to make them yourself. * History is basically a collection of the coolest stories in human history. How can you not love that?
6[anonymous]
How useful is knowing about Ukraine to the average person? What percentage of History class will cover things which are relevant? Which useful mistakes to avoid does a typical History class teach you about?
1Alsadius
1) Depends how political you are. I'm of the opinion that education should at least give people the tools to be active in democracy, even if they don't use them, so I consider at least a broad context for the big issues to be important. 2) Hard to say - I'm a history buff, so most of my knowledge is self-taught. I'd have to go back and look at notes. 3) Depends on the class. I tend to prefer the big-picture stuff, which is actually shockingly relevant to my life(not because I'm a national leader, but because I'm a strategy gamer), but there's more than enough historians who are happy to teach you about cultural dynamics and popular movements. You think popular music history might help someone who's fiddling with a bass guitar?
1ChristianKl
Given how hard it is to establish causality, history, where you don't have a lot of the relevant information and there's a lot of motivated reasoning going on, is often a bad source for learning.
0Alsadius
Which is better - weak evidence, or none?
2Lumifer
An interesting question. Let me offer a different angle. You don't have weak evidence. You have data. The difference is that "evidence" implies a particular hypothesis that the data is evidence for or against. One problem with being in love with Bayes is that the very important step of generating hypotheses is underappreciated. Notably, if you don't have the right hypothesis in the set of hypotheses that you are considering, all the data and/or evidence in the world is not going to help you. To give a medical example, if you are trying to figure out what causes ulcers and you are looking at whether evidence points at diet, stress, or genetic predisposition, well, you are likely to find lots of weak evidence (and people actually did). Unfortunately, ulcers turned out to be a bacterial disease and all that evidence, actually, meant nothing. Another problem with weak evidence is that "weak" can be defined as evidence that doesn't move you away from your prior. And if you don't move away from your prior, well, nothing much changed, has it?
0Alsadius
"Weak" means that it doesn't change your beliefs very much - if the prior probability is 50%, and the posterior probability is 51%, calling it weak evidence seems pretty natural. But it still helps improve your estimates.
0Lumifer
Only if it's actually good evidence and you interpret it correctly. Another plausible interpretation of "weak" is "uncertain". Consider a situation where you unknowingly decided to treat some noise as evidence. It's weak and it only changed your 50% prior to a 51% posterior, but it did not improve your estimate.
0TheOtherDave
Often none. For example, if a piece of evidence E is such that:

* I ought to, in response to it, update my confidence in some belief B by some amount A, but
* I in fact update my confidence in B by A2,

and updating by A2 gets me further from justified confidence than I started out, then to the extent that I value justified confidence in propositions, I was better off without E. Incidentally, this is what I understood RowanE to be referring to as well.
2[anonymous]
But it's only bad because you made the mistake of updating by A2. I often notice a different problem: people arguing that A=0 and then presenting an alternative belief C with no evidence. On some issues we can't get a great A, but if the best evidence available points to B, we should still assume it's B.
0TheOtherDave
Agreed. Agreed. Yes, I notice that too, and I agree both that it's a problem, and that it's a different problem.
0ChristianKl
Overconfidence is a huge problem. Knowing that you don't understand how the world works is important. To the extent that people believe they can learn significant things from history, "weak evidence" can often produce problems.

If you look at Western policy on Ukraine, the West didn't make a treaty accepting the Russian annexation of Crimea in return for stability in the rest of Ukraine. That might have prevented the mess we have at the moment. In general, political decisions in cases like this should be made by doing scenario planning. It's one thing to say that Britain and France should have declared war on Germany earlier. It's quite another to argue that the West should take military action against Russia.
6Alsadius
Might have, but my money isn't on it. You think Putin cares about treaties? He's a raw-power sort of guy. And yes, the scenarios are not identical - if nothing else, Russia has many more ICBMs than Hitler did. Still, there are ways to take action that are likely to de-escalate the situation - security guarantees, repositioning military assets, joint exercises, and other ways of drawing a clear line in the sand. We can't kick him out, but we can tell him where the limits are. (Agreed on your broader point, though - we should ensure we don't draw too many conclusions.)
1ChristianKl
Putin does care about the fact that Ukraine might join NATO or the EU free trade zone. He probably did feel threatened by what he perceived as a color revolution with a resulting pro-Western Ukrainian government. At the end of the day Putin doesn't want the crisis to drag on indefinitely, so sooner or later it's in Russia's interest to have a settlement. Russia relies on selling its gas to Europe. Having Crimea under embargo is quite bad for Russia: it's costly to keep up Crimea's economy so that its population doesn't think Crimea has decayed under Russian rule, with unrest as a result.

On the other hand, it's not quite clear that US foreign policy has a problem with dragging out the crisis. It keeps NATO together even though Europeans are annoyed at being spied on by the US. It makes it defensible to have foreign military bases inside Germany that spy on Germans.

Do you really think joint exercises contribute to de-escalation? As far as repositioning military assets goes, placing NATO assets inside Ukraine is the opposite of de-escalation. The only real way to de-escalate is a diplomatic solution, and there probably isn't one without affirming Crimea as part of Russia.
2Alsadius
There's a certain type of leader, over-represented among strongmen, that will push as far as they think they can and stop when they can't any more. They don't care about diplomacy or treaties, they care about what they can get away with. I think Putin is one of those - weak in most meaningful ways, but strong in will and very willing to exploit our weakness in same. The way to stop someone like that is with strength. Russia simply can't throw down, so if we tell them that they'd have to do so to get anywhere, they'd back off. Of course, we need to be sure we don't push too far - they can still destroy the world, after all - but Putin is sane, and doesn't have any desire to do anything nearly so dramatic.
0ChristianKl
Putin gains domestic political strength from the conflict.

That assumes you can simply change from being weak to being strong. In poker you can do this by bluffing; in chess you can't. You actually have to calculate your moves. Holding joint military exercises isn't strength if you aren't willing to use the military to fight. Bailing out European countries is expensive enough. There's not really the money to additionally prop up Ukraine.
2Alsadius
Only as long as he's winning. NATO is, far and away, the strongest military alliance that has ever existed. They have the ability to be strong. When the missing element is willpower, "Man up, already!" is perfectly viable strategic advice.
-7ChristianKl
0Lumifer
Accept an annexation in return for promises of stability? Hmm, reminds me of something...
0ChristianKl
That's partly the point, we didn't go that route and now have the mess we have at the moment.
0Lumifer
And what happened the last time we DID go that route?
0ChristianKl
Making decisions based on a single data point is not good policy. Also, the alternative to the Munich agreement would have been to start WWII earlier. That might have had advantages, but it would still have been very messy.
0RowanE
Sometimes none, if the source of the evidence is biased and you're a mere human.
0Alsadius
There are unbiased sources of evidence now?
0ChristianKl
That question doesn't have anything to do with the claim that you can make someone less informed by giving them biased evidence.
0Evan_Gaensbauer
Some sources of evidence are less biased than others. Some sources of evidence will contain biases which are more problematic than others for the problem at hand.
0Alsadius
Of course. But Rowan seemed to be arguing a much stronger claim.
1[anonymous]
Undergraduate core curriculum where, for whom, and for what purposes?
1Punoxysm
I think history and the softer social sciences / humanities can, if taught well, definitely improve your ability to understand and analyze present-day media and politics. This can improve your qualitative appreciation of works of art, help you understand journalistic works in their own terms and context instead of taking them at face value, and help you read and write better. They can also provide specific cultural literacy, which is useful for your own qualitative appreciation as well as for some status things.

I had a pretty shallow understanding of a lot of political ideas until I took a hybrid history/philosophy course that was really excellently taught. It allowed me to read a lot of political articles more deeply and understand their motivations, context, and the core academic ideas they built around. That last part, seeing theses implicitly referenced in popular works, is pretty neat.
7Nornagest
I think this is true... but also that "taught well" is a difficult and ideologically fraught criterion. The humanities and most (but not all; linguistics, for example, is a major exception) of the social sciences are not generally taught in a value-neutral way, and subjective quality judgments often have as much to do with finding a curriculum amenable to your values as with the actual quality of the curriculum. Unfortunately, the fields most relevant to present-day media and politics are also the most value-loaded.
0Punoxysm
Well, the impossibility of neutrality when talking about history or the humanities, except in the most mundane recitation of events, is a pretty vital lesson to understand. The best way to approach this is to present viewpoints then counterpoints, a thesis then a criticism.

I have had one non-core course that was pretty much purely one perspective (the left-radical tradition), but this is still a tradition opposed to and critical of even mainstream-leftist history and politics. What I mean to say is that I don't think it was a great class, but I still learned plenty when I thought critically about it on my own time. If you have a certain amount of foundation (which I got through a much more responsibly-taught class pretty much following the traditional Western philosophical canon), in other words, you should still learn plenty from a curriculum that is not amenable to your values, if you put in the effort.

But I think most core history and philosophy courses at liberal arts colleges stick to a pretty mainstream view and present a decent range of criticisms, achieving the ends I talked about. If you really want far-left or right-wing or classical liberal views, there are certainly colleges built around those.
1Nornagest
The thing that bothers me is that (at least at my university, which was to be fair a school that leaned pretty far to the left) neutrality seems to have been thrown out not only as a practical objective but also as an optimization objective. You're never going to manage to produce a perfectly unbiased narrative of events; we're not wired that way. But narratives are grounded in something; some renditions are more biased than others; and that's a fact that was not emphasized. In a good class (though I didn't take many good classes) you'll be exposed to more than one perspective, yes. But the classes I took, even the good ones, were rather poor at grounding these views in anything outside themselves or at providing value-neutral tools for discriminating between them. (Emphasis on "value-neutral": we were certainly taught critical tools, but the ones we were taught tended to have ideology baked into them. If you asked one of my professors they'd likely tell you that this is true of all critical tools, but I don't really buy that.)
0Punoxysm
Of course bias can vary, but I think most of the professors you ask would say they are being unbiased, or that they are calibrating their bias to counteract their typical student's previous educational bias. After all, you were taught history through high school, but in a state-approved curriculum taught by overworked teachers.

As far as critical tools, which ones are you thinking of? Are you thinking of traditionally leftist tools like investigations into power relationships? What do you think of as a value-neutral critical tool?

You seem to have an idea of what differentiated the good classes from the bad. I'm not disagreeing that some classes are bad; I'm focusing on the value the good ones can give. A bad engineering class, by analogy, teaches about a subject of little practical interest AND teaches it at a slow pace. Bad classes happen across disciplines.

And I admit I am probably speaking with a lot of hindsight. I took a couple of good classes in college, and since then have read a ton of academics' blogs and semi-popular articles, and it has taken a while for things to "click" and for me to be able to say I can clearly analyze/criticize an editorial about history at a direct and meta level the way I'm saying this education helps one do.

You're right, for instance, that in college you probably won't get an aggressive defense of imperialism to contrast with its criticisms, even though that might be useful to understanding it. But that's because an overwhelming majority of academics consider it such a clearly wretched, even evil, idea that they see no value in teaching it. It's just like how we rarely see a serious analysis of abolition vs. slavery, because come on, right? On slavery, academia and the mainstream are clearly in sync. On imperialism? Maybe not as much, especially given the blurry question of "what is modern imperialism?" (is it the IMF; is it NAFTA; is it Iraq?). But many academics are striving to make their classes the antidote to a naive narrative of A
0Nornagest
I mentioned critical theory elsewhere in these comments. There's also gender theory, Marxian theory, postcolonial theory... basically, if it comes out of the social sciences and has "theory" in its name, it's probably value-loaded. These are frameworks rather than critical tools per se, but that's really what I was getting at: in the social sciences, you generally don't get the tools outside an ideological framework, and academics of a given camp generally stick to their own camp's tools and expect you to do the same in the work you submit to them. Pointing to value-neutral critical tools is harder for the same reason, but like I said earlier I think linguistics does an outstanding job with its methodology, so that could be a good place to start looking. Data science in general could be one, but in the social sciences it tends to get used in a supporting rather than a foundational role. Ditto cognitive science, except that that hardly ever gets used at all. This in itself is a problem. If you start with a group of students that have been exposed to a biased perspective, you don't make them less biased by exposing them to a perspective that's equally biased but in another direction. We've all read the cog-sci paper measuring strength of identification through that sort of situation, but I expect this sort of thing is especially acute for your average college freshman: that's an age when distrust of authority and the fear of being bullshitted is particularly strong. (The naive narrative wasn't taught in my high school, incidentally, but I'm Californian. I expect a Texan would say something different.)
-1Punoxysm
But these frameworks/theories are pretty damn established, as far as academics are concerned. Postcolonial theory and gender theory make a hell of a lot of sense. They're crowning accomplishments of their fields, or define fields. They're worth having classes about. Most academics would also say they consider distinctly right-wing theories intellectually weak, or simply invalid; they'd no more teach them than a bio professor would teach creationism. If you strongly feel all of mainstream academia is biased, then pick a school known for being right-wing. Academia's culture is an issue worthy of discussion, but well outside the scope of "should history be in core curriculums".

Maybe things like game-theoretic explanations of power dynamics, or discussion of the sociology of in-groups and out-groups when discussing nationalism, are neglected in these classes. If you think that, I wouldn't disagree. I guess most professors would probably say "leave the sociology to the sociologists; my class on the industrial revolution doesn't have room to teach the thermodynamics of steam engines either".

I don't know much about linguistics, except that Chomsky is a linguist and that some people like him and some people don't. I do know it is on the harder end of the social sciences. The softer social sciences and humanities simply won't be able to use a lot of nice, rigorous tools.

I think good teachers, even ones with a strong perspective, approach things so that the student feels engaged in a dialogue. They will make the student feel challenged, not defensive. More of my teachers achieved this than otherwise. Bad teachers and teaching practices that fail to do this should be pushed against, but I don't think the academic frameworks are the main culprit.
6ChristianKl
If left-wing academia is low quality that in no way implies that right-wing academia is high quality. Seeing everything as left vs. right might even be part of the deeper problem plaguing the subject.
0gjm
On the other hand, if (in someone's opinion) academia as a whole is of low quality on account of a leftward political bias then it seems reasonable for that person to take a look at more right-leaning academic institutions.
-2ChristianKl
Nobody here said that it's primarily a leftward bias.

A while ago someone tried to understand who controls the majority of companies and found that a few institutions control most of the economy. Did they publish in an economics journal? Probably too political. Instead they published in PLOS ONE.

I have a German book that argues that the old German accounting standards are much nicer than the Anglo-American ones. The politics that make Anglo-American accounting standards the global default are not well explored by either left-wing or right-wing academic institutions. Substantial debates about the political implications of accounting standards just aren't a topic that a lot of political academics who focus on left vs. right care about.

A lot of right-wing political academia is also funded via think tanks that exist to back certain policies.
0gjm
True, but the things Nornagest was complaining about could all be at-least-kinda-credibly claimed to have a leftward bias, and could not be at all credibly claimed to have a rightward bias. Of course, as you say, there's a lot more to politics (and putative biases in academia) than left versus right, but it's a useful approximation. Lest I be misunderstood, I will add that I too have a leftward bias, and I do not in fact think anyone would get a better education, or find better researchers, by choosing a right-leaning place (except that there are some places that happen both to be good and to have a rightward slant, I think largely by coincidence, and if you pick one of them then you win). And I share (what I take to be) your disapproval of attempts to manipulate public opinion by funding academics with a particular political bent.
4Nornagest
Though I suspect I have a rather dimmer view of the social sciences' "crowning achievements" than you do, I'm not objecting directly to their political content there. I was mainly trying to point to their structure: each one consists of a set of critical tools and attached narrative and ideology that's relatively self-contained and internally consistent relative to those tools. Soft academia's culture, to me, seems highly concerned with crafting and extending those narratives and distinctly unconcerned with grounding or verifying them; an anthropologist friend of mine, for example, has told me outright that her field's about telling stories as opposed to doing research in the sense that I'm familiar with, STEMlord that I am. The subtext is that anything goes as long as it doesn't vindicate what you've called the naive view of culture. That's a broader topic than "should history be in core curriculums?", but the relevance should be obvious. The precise form it takes, and the preferred models, do vary by school, but picking a right-wing school would simply replace one narrative with another. (I'd probably also like the students less.) They don't. That doesn't mean they can't. There's plenty of rigorous analysis of issues involved in social science out there; it's just that most of it doesn't come from social scientists. Some of the best sociology I've ever seen was done by statisticians. (Chomsky, incidentally, was a brilliant linguist -- if not always one vindicated by later research -- but he's now so well known for his [mostly unrelated] radical politics that focusing on him is likely to give the wrong impression of the field.)
-1Punoxysm
I think this is a problem, BUT it wouldn't be a problem if we had more people willing to pick up the ball, take these narratives as hypotheses, and test/ground them. I think there IS a broad but slow movement towards this. I think these narrative-building cultures are fantastic at generating hypotheses, and I am also sympathetic in that it is pretty hard to test many of these hypotheses concretely. That said, constant criticism and analysis is a (sub-optimal) form of testing.

Historians tend to be as concrete as they can, even if non-quantitatively. If an art historian says one artist influenced another, they will demonstrate stylistic similarities and a possible or verified method of contact between the two artists. That's pretty concrete. It can rely on more abstract theories about what a "stylistic similarity" is, but that's inevitable. I also think that the broadest and best theories are the ones you see taught at an undergrad level. The problems you point out are all more pernicious at the higher levels.

Surely true. But I think (from personal discussions with academics) there is a big movement towards the quantitative and empirical in the social sciences (particularly political science and history), and the qualitative style is still great for hypothesis generation.

I also think our discussion is getting a bit unclear because we've lumped the humanities and social sciences together. That's literally millions of researchers using a vast array of methodologies. Some departments are incredibly focused on being quantitative; some are allergic to numbers.
3Lumifer
I would call that "damning with faint praise" :-D
0Punoxysm
It's praise sincerely intended. What strikes you as inadequate about, say, feminist theory and related ideas?
0Lumifer
Can we do postcolonial theory instead? What kind of falsifiable (in the Popperian sense) claims does it make? Any predictions?
-2Punoxysm
First I'll do a couple of examples from feminism, since it is often tarred as academic wankery and I feel more knowledgeable about it:

* Feminist theories say that movies underrepresent women, or represent them only in relation to men. A simple count of the number of movies that pass the Bechdel Test vs. its male inverse shows this to be plainly true. In fact, the gap is breathtaking. Not only that, but the gap continues with movies released today, supporting the idea that only direct and conscious intervention can fix it and the related inequities in the portrayal of men and women in media.
* Feminist theory predicts that issues like female reproductive autonomy, education, and various categories of violence against women are strongly correlated. Statistics appear to show this is true (not indisputably; reporting and confounding factors exist).

As for postcolonialism, I'll give it a shot, though I'm not the best to speak on it:

* Postcolonial theory states that most of the institutions of formerly colonial nations (their media, the World Bank, etc.) fetishize the strong nationalist state and a capitalist economy with all the trappings (central banks, urbanization, progression from agrarian to industrial to service economy) that Western nations have developed over the past two centuries, and will attempt to impose states where they can. Many argue that Western intervention in the Balkans and in Somalia bears this out.
* Postcolonial theory makes many other statements about development, like that postcolonial nations shouldn't try to emulate Western paths of development (because they will result in poorer economic growth). Some of them are hotly disputed. However, they are empirical.
* More broadly, postcolonialism says that for any intervention in a non-Western nation, basing this intervention on methodology for Western nations will yield worse results than building the approach up from the ethnographic characteristics of that nation, despite the fact that in
1Lumifer
That's merely an empirical observation.

That's a normative statement about what should be.

Can you be a bit more precise about these relationships? Also, does feminist theory predict, or does it say that's what it sees?

Off the top of my head I'd say I have at least two issues with feminism. The first is that it loves to tell other people what they should think, feel, and value. Science is not normative and feminism is; that makes it closer to preaching than to science. The second is that I am not sure why feminism (as an academic discipline) exists. I understand that historically there was the movement of "these not-quite-yet-dead white men in the social studies departments don't understand us and don't do things we find important, so fuck'em -- we'll set up our own department". That's fine, but first, that's not true any more, and second, that's an office-politics argument for the administrative structure of a university, not a reason for a whole new science to come into existence. What exactly is feminism doing that's not covered by sociology + political studies + cultural studies?

Again, this is a post-factum empirical observation. And it doesn't seem to be quite true. Most newly independent countries love state power and often played with some variety of socialism, "third way", etc. Given the context of the Cold War, their political economy generally reflected which superpower they aligned with.

Who will? Impose on whom? I don't quite understand what you mean here.

An interesting point. The problem with it is that nations which did NOT try to "emulate western paths of development" experienced even poorer economic growth. It is, in fact, an empirical observation that economic growth in the developing world was, by and large, quite poor. However, the conclusion that this is the result of transplanting Western practices to alien soil and that home-grown solutions are much better does not seem to be empirically supported. And another curious statem
0Punoxysm
I made a mistake trying to defend postcolonial theory here; it's just not my area of expertise. Whether it's valid or not, I can't defend it well. But we do seem to be on the same page that it's falsifiable. However, I do have a substantial beef with your beefs with feminism.

Come on... Things falling to the ground is an empirical observation; gravity is the theory.

No, it's a prediction. If the gender representation gap spontaneously solved itself without any evident adoption of feminist attitudes, that would be a strike against feminism as a theory.

Predicts: it observed it, and it has continued to be true, so it's not overfitting.

It has a normative and an empirical element. An organization like GiveWell empirically assesses charities, then makes normative recommendations based on a particular version of utilitarianism. Feminism assesses institutions and makes recommendations.

Most of what feminism does is influence other fields. Gender studies departments exist in some places and not others, but its influence is pervasive in academia. I think this is a misinformed criticism.
0Lumifer
In another post you called feminism "a project dedicated to changing certain policies and cultural attitudes". I like this definition, it makes a lot of sense to me. However the implication is that feminism is neither a science nor even a field of study. Recall that the original question was feminism (gender studies) in academia. You said I'm fine with treating feminism as a socio-cultural movement based on a certain set of values. But then it's not an academic theory which is a crowning accomplishment of a field of study.
-1Punoxysm
It's both a scholarly field and a social movement, and scholars involved in it may be involved in one or both elements. Feminism is a HUGE tent. It provides a framework for everyone from economists studying what factors drive labor participation rates among women, to judges ruling on a case of sexual harassment, to a film critic analyzing a character. There are probably tens of thousands of academics alone (forget lawyers, legislators, lobbyists and journalists) who would say feminism influences their work. This includes many who are very quantitative and empirical.
1Lumifer
What does this "scholarly field" study that is not covered by the usual social sciences? And, given that we are on LW, how prevalent do you think is motivated cognition in this field of study? What covers everything covers nothing. How would you define feminism -- in a useful way, specifying what kind of a thing is it and how it's different from other similar things?
-4Punoxysm
This is getting very Socratic. I don't know what your assumptions are or what would satisfy you as a definition, and it is beginning to get frustrating to figure out, but I think these two links are pretty good:

http://en.wikipedia.org/wiki/Gender_studies
http://en.wikipedia.org/wiki/Feminist_theory

As for motivated cognition, of course it's present, as it is virtually everywhere in life and academia. Do you have a more specific case? Remember that though the humanities and softer social sciences have all sorts of flaws that are easy to make fun of, they don't submit grants for $100 million construction projects with stated goals they know to be totally unachievable (I'm looking at you, local university particle accelerator). Don't condemn the field just by its sins.
5Lumifer
Don't you think that being both a field of study and a social movement aiming to change prevalent values and social structures offers especially rich opportunities for motivated cognition? Compared to the baseline of life and academia average? That's peanuts. When social scientists fuck things up, the cost is in millions of human lives. Exhibit A: Karl Marx. Well, the problem is that I don't think it's a field of study at all. I think it is, as you said, a project to change the society.
0[anonymous]
I can see your point about the social sciences, but I would think this doesn't apply to most of the humanities. How is a creative writing, theatre, or communications course fraught with ideological criteria?
5Nornagest
In a word: theory. I didn't take as many of those classes in college as I did social science, so I'm speaking with a little less authority here, but the impression I got is that the framework underpinning creative writing etc. draws heavily on critical theory, which is about as value-loaded as it gets in academia. The implementation part, of course, isn't nearly as much so.
1ChristianKl
How do you know that you understand motivations of political articles better? Are you able to predict anything politically relevant that you couldn't have predicted beforehand?
0Punoxysm
Concretely, I can often tell if the article writer is coming from a particular school of thought or referencing a specific thesis, then interpret jargon, fill in unstated assumptions, see where they're deviating or conforming to that overarching school of thought. This directly enhances my ability to extrapolate to what other political views they might have and understand what they are attempting to write, and who their intended audience is. As far as predicting the real world, that's tough. These frameworks of thought are in constant competition with one another. They are more about making normative judgments than predictive ones. The political theories that I believe have the most concrete usefulness are probably those that analyze world affairs in terms of neocolonialism, in part because those theories directly influence a ton of intellectuals but also in part because they provide a coherent explanation of how the US has managed its global influence in the past and (I predict) how it will do so in the future. I can also do things like more fully analyze the factors behind US police and African-American relations, or how a film will influence a young girl.
0ChristianKl
That reminds me of the Marxist who can explain everything with the struggle of the workers against the capitalists. That last sentence makes it look like your studies did damage. You shouldn't come out of learning about politics believing that you can fully understand the factors behind anything political.
2gjm