All of inklesspen's Comments + Replies

My folks raised us in borderline-fundamentalist Christianity, which made the Santa myth nearly as much of a non-starter as I expect it was for those commenters who were raised Orthodox Jewish.

If and when I have children of my own, I intend to use the Santa myth as an exercise in invisible-dragon baiting, nothing more.

I don't think that argument is even valid. After all, I have the option of putting a human in a box. If I do, one hypothesis states that the human will be tortured and then killed. The other hypothesis states that the human will "vanish"; it's not precisely clear what "vanish" means here, but I'm going to assume that since this state is supposed to be identical in my experience to the state in the first hypothesis, the human will no longer exist. (Alternative explanations, such as the human being transported to another universe which I ... (read more)

I don't think you're addressing the core of the argument. Even if you don't actually press the button, how much disutility you assign to pressing it depends on your beliefs. If you think the action will cause 50 years of torture, you're a believer in the "strong Occam's Razor" and the proof is complete.

I believe he does take medication; I remember him saying his psychologist started him on Abilify and he was terrified that Abilify would cause permanent muscle tics, as apparently it does in rare cases.

If he has a psychologist then there's not much you can do directly to help. That's sort of their job. However, it may help to just be there for him. And when he says something that's obviously negative about himself and likely to be wrong, explain why it is wrong. That won't do much, but it might help a tiny bit.

As I said to Perplexed, he lives halfway across the continent. I do know his name and mailing address, but I talk with him exclusively over IRC. I know some of the therapies and medicines he's taken, but I don't know what he's taking right now.

Part of my reluctance to take matters into my own hands is that I don't know how to reliably tell a qualified psychiatrist or psychologist from a quack. I can look up what Wikipedia says about a specific therapy like ECT, but how do I know whether what it says is accurate enough to trust my friend's life to it? As th... (read more)

Neither of us is qualified to judge whether that's true. The fact that you talk with this person exclusively over IRC limits your options a little, but it also changes things in one important respect: it greatly increases the probability that you're the only person (or only responsible person) with this information. He might tell you things that he doesn't tell any of the people he interacts with face to face. If you're the only one who knows, then you can't just sit on that information. If you do call a hotline, the first thing they will probably do is find out who your friend's psychiatrist is and contact them. The information you have is sufficient to do this discreetly. They are well aware that doing the wrong thing could be disastrous, and aren't likely to do anything stupid.
Most of the Wikipedia information on mental disorders and standard treatments for them is fairly accurate.

He lives halfway across the continent, and he has been talking like this for months without doing physical harm to himself. Is it right for me to cause the intrusion into his life such a call would surely bring without stronger evidence that it's necessary?

He is probably safe unless he starts getting less depressed, because at that point he'll probably still be suicidal but have enough energy to do something about it. If he's been stably in that condition for months, then I don't think it's an emergency. I'm fairly torn on advice for this case. If he really has tried everything and it hasn't helped, then I don't think living is much of an end in and of itself, and he should be assisted in his wishes, or at least not prevented. (Be aware that I am biased; this is my perspective as someone who empathises with your friend.) If you think he hasn't tried everything, then the intrusion is completely warranted. He is at the point where he literally can't help himself. Therapy can only work if the patient has an interest in getting better, which he doesn't.
I'm told that talking about suicide is a "cry for help". If he is your friend, you have a right and a duty to help him. Call your local suicide hotline; you'll educate yourself more efficiently there than by asking questions here. Ask their advice, if you wish without giving your friend's name or geographic location. They can give you far better information and better moral arguments about whether and how to intervene.

Is it right for me to cause the intrusion into his life such a call would surely bring without stronger evidence that it's necessary?

Yes. You already have the strongest evidence it is possible to get without him dying.

I have a friend who suffers from severe depression. He has stated on many occasions that he hates himself and wants to commit suicide, but he can't go through with it because even that would be accomplishing something and he can't accomplish anything.

He has a firm delusion that he cannot do anything worthwhile, that the world is going to hell in a handbasket and nothing can possibly be done by anyone about it, and that everyone else feels the same way he does but is repressing it.

This makes talking with him about many subjects exceedingly difficult, as he will... (read more)

Tentative: have you tried telling him that the universe isn't keeping score? It seems to me that he's running a script of trying to prove that he deserves to live. Or possibly a script about whether he's allowed to let himself feel good about what he does. Check for influence from Ayn Rand. Some of her ideas are good, some of them are utterly poisonous.

There's some level where he's still trying to live, even if all he's doing is trying to feel a little better by talking about what he's thinking.

On the therapy side, I think bodywork helps, though it isn't the only route. (Strong belief here.) Habitual thoughts and emotions correlate with neuromuscular patterns-- that's why, if you know someone well, you can tell what they're thinking about by looking at them. On the therapeutic side, giving a person the experience of not going into those patterns can be useful.

I don't know how much difference your protesting his desire for suicide makes-- as far as I can tell, it depends on how emotionally close he feels to you. It seems fairly common for people to not commit suicide because there are particular people they don't want to hurt.

Honestly, I don't know how much you can do. I'm having a hard fight with less serious depression-- some progress, which I'll probably write up. Meanwhile, I think Holy Basil is doing my mood some good. This is a very tasty holy basil and rooibos blend.

As for the larger rationalist question, I don't know. I don't believe FAI + uploading = immortality. There's too much that can go wrong on the individual level even if the clade survives.

he has been seeing therapists and psychologists and they have been unable to help him.

As you probably already know, therapists and psychologists generally cannot prescribe anti-depressants - that takes an MD (psychiatrist). I am not a psychiatrist, nor do I play one on the internet. I have no idea whether the cause of your friend's depression is biological or purely psychological. I have no idea whether his therapists have advised him to see a psychiatrist, or whether they are the kind of quacks who "don't buy into the biological model". S... (read more)

That is very serious business, and it is not likely that you can handle it yourself. The first thing to do is make sure he's seeing a competent therapist. If he's lapsed, or his therapist is actually a quack, or his therapist for some reason doesn't know what's going on, that could be very dangerous. So get a name, contact him or her, and pass along what you just said. In writing. That is the most important thing.

(EDIT: Actually, this is probably too slow; the time it'd take to do what the previous paragraph describes is a substantial unnecessary risk. Contact a suicide prevention hotline first, as Perplexed says.)

The cause of this is probably biochemical, and must be addressed at that level. Unfortunately, identifying the cause of this sort of thing is hard, and there are no good tools for it. I would start by checking the basics of his therapists' work - diet (especially micronutrients; ask if he takes a multivitamin), a minimum amount of regular exercise (pressure him into playing a sport with you if necessary), and a minimum amount of recurring social contact (weekly events that happen automatically without him having to do anything to arrange them).

After that, start looking at pharmaceutical solutions. Don't encourage him to change anything without the approval of a licensed professional, since if he's already borderline suicidal then the wrong change could tip him over the edge; but do find out what he's taking, look it up on PubMed, and ask a psychologist other than the one he's currently seeing whether his regimen is reasonable. Whatever it is, your description suggests that either he's not taking it, it's not working, or it's making things worse. You can probably distinguish between the first and the other two possibilities, but not between the second and third. That information would be useful to his therapist; but beware that it could be both that he's not taking it and that it doesn't work, in which case letting his therapist blame the problem on n

Interesting, but the pessimist in me is noting "even a stopped clock is right twice a day".

For every one study like this, there's hundreds of parents yelling that they noticed their kids developed autism right after getting vaccinated, or that they're sure the power lines near their house are affecting their kids' growth, or some other such nonsense.

I think you need to be far less general; not every parent is an expert on their child's behavior, let alone their child's health.

You need to distinguish between absolute observations (child is hyperactive) and relative observations (child was more hyperactive today than yesterday). The meta-analysis cited above uses relative observations, which is why I wrote what I did. Also, this was across 15 studies, not one study.

I think the best definition of consciousness I've come across is Hofstadter's, which is something like "when you are thinking, you can think about the fact that you're thinking, and incorporate that into your conclusions. You can dive down the rabbit hole of meta-thinking as many times as you like." Even there, though, it's hard to tell if it's a verb, a noun, or something else.

If we want to talk about it in computing terms, you can look at the stored-program architecture we use today. Software is data, but it's also data that can direct the hard... (read more)
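
The stored-program point can be made concrete: a program's instructions are themselves data that the same program can inspect. Here's a minimal Python sketch (purely illustrative; the `think` function is a made-up example, not anything from the thread):

```python
import dis

def think(x):
    """An ordinary function: instructions the machine will execute."""
    return x + x

# Because of the stored-program architecture, those instructions are
# also data: we can read the function's compiled bytecode like any
# other value, opcode by opcode.
opnames = [ins.opname for ins in dis.get_instructions(think)]
print(opnames)  # the opcode names making up the function body
```

This is the sense in which software directs the hardware while also being an object the software itself can examine - the same duality the meta-thinking definition leans on.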

I like Hofstadter's works, but I think he over-focuses on recursion and meta-thinking. At a much more basic level, we use the word 'conscious' to describe the act of being aware of and thinking about something - I was conscious of X.

Some drugs can silence that conscious state, and some (such as alcohol) can even induce a very interesting amnesiac state where you appear conscious but are not integrating long-term memories, and can awake later with no memory of large portions of the experience. Were you thus conscious? Clearly after awaking and forgetting, you are no longer conscious of the events forgotten.

So perhaps our 'consciousness' is the set of all mindstuff we are conscious of, and thus it is clearly based on our memory (both long and short term). Even thinking about what you are thinking about is really thinking about what you were just thinking about, and thus involves short-term memory. Memory is the key to consciousness, but it also involves some recursive depth - layering thoughts in succession. But ultimately 'consciousness' isn't very distinct from 'thinking'. It just has more mystical connotations.
I can write programs that can do that.

That's even more concise, but I think a bit too narrow.

As you mention in your second footnote, the idea of a 'pickup artist' carries unfortunate connotations. I'd suggest changing your headline to something you won't have to follow up with "it's not really what you thought when you first heard it".

Perhaps "Optimizing interaction techniques for social enjoyment"? This has the benefit that while the pickup artist is perceived as interested in social engagement as a means to orgasm, practitioners of the techniques you discuss would be perceived as interested in social engagement as an end in itself.

"Optimizing interaction techniques for social enjoyment" is too long and abstract -- it signals that the group doesn't understand what it's setting out to do. Perhaps "Social Optimizer"? It's understandable and gets the overly nerdy angle w/o being confusing.
How to Win Friends and Influence People..?
I still think the title expresses my intent pretty well. I don't think it would have been easy to get my idea across without mentioning pick-up, but you're right it's going to get tedious explaining that I'm not a con artist wannabe. I originally had something like the second footnote at the very beginning, but it didn't read well. I like your suggestion though, it's appropriately LessWrongian!
"Leadership skills"?

Do you also argue that the books on my bookshelves don't really exist in this universe, since they can be found in the Library of Babel?

Gee, what do you think? I don't really wish to play word games here. Obviously there is some physical thing made of paper and ink on your bookshelf. Equally obviously, Borges was writing fiction when he told us about Babel. But in your thought experiment, something containing the same information as the book on your shelf exists in Babel. Do you have some point in asking this?

Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to 'exist'? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.

In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That's of little comfort to ... (read more)

No, of course not. No more than do simulated entities on your hard drive exist as sentient agents in this universe. As sentient agents, they exist in a simulable universe - a universe which does not require actually running as a simulation in this or any other universe to have its own autonomous existence.

Now I'm pretty sure that is an example of mind projection. Information exists only with reference to some agent being informed.

Which is exactly my point. If you terminate a simulation, you lose access to the simulated entities, but that doesn't mean they have been destroyed. In fact, they simply cannot be destroyed by any action you can take, since they exist in a different space-time.

But you are not living in that up-universe computer. You are living here. All that exists in that computer is a simulation of you. In effect, you were being watched. They intend to stop watching. Big deal!

Suppose I am hiking in the woods, and I come across an injured person who is unconscious (and thus unable to feel pain), and I leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth I choose to rescue him, does that avert the harm done to the man I left to die? I feel this is the same sort of question as many-worlds. I can't wave away my moral responsibility by claiming that in some other universe, I will act differently.

All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.

As for when the harm occurs, that's a nebulous concept hanging on the meanings of 'harm' and 'occurs'. In Dan Simmons' Hyperion Cantos, there is a method of execution called the 'Schrodinger cat box'. The convict is placed inside this box, which is then sealed. It's a small but comfor... (read more)

So it seems that you simply don't take seriously my claim that no harm is done in terminating a simulation, for the reason that terminating a simulation has no effect on the real existence of the entities simulated. I see turning off a simulation as comparable to turning off my computer after it has printed the first 47,397,123 digits of pi. My action had no effect on pi itself, which continues to exist. Digits of pi beyond 50 million still exist. All I have done by shutting off the computer power is to deprive myself of the ability to see them.
What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?

No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in "return". A host's duty to his guests doesn't go away just because that host had a poor experience when he himself was a guest at some other person's house.

If our simulators don't care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.

If our simulators do care about us, and are benevolent, we should treat our simulations well, ... (read more)

But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way "in return" by those simulating you - using a rather strange meaning of "in return"? Some people interpret the Newcomb's boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior - even if there is no obvious causal relationship, and even if the other entities already decided back in the past.

The Newcomb's boxes paradox is essentially about reference class - it could be argued that every time you make a decision, your decision tells you a lot about the reference class of entities identical to you - and it also tells you something, even if it may not be much in some situations, about entities with some similarity to you, because you are part of this reference class.

Now, if we apply such reasoning: if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical (of course, this is your experience - in reality it was more that your behavior was dictated by being part of the reference class - but you don't experience the making of decisions from that perspective). Same for being unethical.

You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos - such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality - even as increasing the proportion of ethical acts in an infinity of acts. It might make temporal discounting a bit less disturbing (to anyone bothered by it): if you act ethically with regard to the parts of reality you can observe, predict and control, your "effect" on the reference class means that you can consider yourself to be making it more likely that

If I'm following your "logic" correctly, and if you yourself adhere to the conclusions you've set forth, you should have no problem with me murdering your body (if I do it painlessly). After all, there's no such thing as continuity of identity, so you're already dead; the guy in your body is just a guy who thinks he's you.

I think this may safely be taken as a symptom that there is a flaw in your argument.

No, there are useful things I want to accomplish with the remaining lifespan of the body I have. That there is no continuity of personal identity is irrelevant to what I can accomplish. That continuity of personal identity is an illusion simply means that the goal of indefinite extension of personal identity is a useless goal that can never be achieved. I don't doubt that a machine could be programmed to think it was the continuation of a flesh-and-blood entity. People have posited paper clip maximizers too.

It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone chooses herself to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?

In all honesty, I don't think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly "insisted" on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, wer... (read more)

There have been some opinions expressed on another thread that disagree with that. The key question is whether terminating a simulation actually does harm to the simulated entity. Some thought experiments may improve our moral intuitions here:

* Does slowing down a simulation do harm?
* Does halting, saving, and then restarting a simulation do harm?
* Is harm done when we stop a simulation, restore an earlier save file, and then restart?
* If we halt and save a simulation, never get around to restarting it, and the save disk physically deteriorates and is eventually placed in a landfill, at exactly which stage of this tragedy did the harm take place? Did the harm take place at some point in our timeline, or at a point in simulated time, or both?

I tend to agree with your invocation of xenia, but I'm not sure it applies to simulations. At what point do simulated entities become my guests? When I buy the shrink-wrap software? When I install the package? When I hit start? I really remain unconvinced that the metaphor applies.
I am fascinated by applying the ethic of reciprocity to simulationism, but is a bidirectional transfer the right approach? Can we deduce the ethics of our simulator with respect to simulations by reference to how we wish to be simulated? And is that the proper ethics? This would be projecting the ethics up. Or rather, should we deduce the proper ethics from how we appear to be simulated? This would be projecting the ethics down. The latter approach would lead to a different set of simulation ethics, probably based more on historicity and utility, e.g. "Simulations should be historically accurate." This would imply that simulation of past immorality and tragedy is not unethical if it is accurate.
That idea used to make me afraid to die before I wrote up all the stories I thought up. Sadly, that is not even possible any more. One big difference between an upload and a person simulated in your mind is that the upload can interact with the environment.

I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious.

Personally, I would be more surprised if you could imagine a character who was correct in claiming not to be conscious.

Proper posture tends to be more comfortable; surely this is a benefit to myself.

I also apologize to people when I have wronged them, not because they are higher-status than me, but because I do not like being a jackass.

The apology is a noble thing, and so is its rationalist cousin the mind-change. I'm not making any prescriptive recommendations -- I'm just elucidating one of the mechanisms of status evaluation.

We've evolved something called "morality" that helps protect us from abuses of power like that. I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.

That's exactly the high awareness I was talking about, and most people don't have it. I wouldn't be surprised if most people here failed at it, if it presented itself in their real lives. I mean, are you saying you wouldn't save the burning orphans? We have checks and balances of political power, but that works between entities on roughly equal political footing, and doesn't do much for those outside of that process. We can collectively use physical power to control some criminals who abuse their own limited powers. But we don't have anything to deal with supervillains. There is fundamentally no check on violence except more violence, and 10,000 accelerated uploads could quickly become able to win a war against the rest of the world.

Surely it would be better in multiple ways to simply find a well-spoken religious person with whom you can work. He will have more knowledge of his audience than you have, so there's a practical benefit, as well as the moral benefit of not being dishonest.

My journey away from theism was characterized by smaller arguments such as these. There was no great leap, just a steady stream of losing faith in doctrines I had been brought up to believe. Creationism went first. Discrimination against homosexuals went next. Shortly after that, I found it impossible to believe in the existence of hell, except perhaps in a sort of Sartrean way. Shortly after that, I found myself rejecting large portions of the Bible, because the deity depicted therein did not live up to my moral standards. At that point I was finally read... (read more)

I don't think it's possible to integrate core Babyeater values into our society as it is now. I also don't think it's possible to integrate core human values into Babyeater society. Integration could only be done by force and would necessarily cause violence to at least one of the cultures, if not both.

Other hominids have been known to keep pets. I would not be surprised if cetaceans were capable of this as well, though it would obviously be more difficult to demonstrate.

According to the article, they lack crucial features such as double-blinding. Most social networks lack the openness and data retention critical for effective peer review. It is possible to learn something from a network like the one described, but I would hesitate to call it science.

Lack of double-blinding ought to increase the false positive rate, right? But the result presented in the OP (the lithium) was a finding of a negative.

Well, you will have to be careful how you do it; my understanding is that most doctors are exasperated at people who self-diagnose based on reading things on the Internet. It's a bias, sure, but it doesn't seem to be an unreasonable one. So you wouldn't want to bring it up on your very first visit. You will need to wait until you've demonstrated your non-crank-ness.

Once you and your doctor know each other better, though, I think it would be an excellent idea to bring more data to the table. My objection is to an article entitled "Med Patient Social Networks Are Better Scientific Institutions", not one entitled "Med Patient Social Networks Are A Useful Tool In Improving Care".

The "people" in the quoted bit are correct. This is not science; this is statistical analysis.

It is possible that an individual would be better served by this social network, though I have generally agreed that a physician who treats himself has a fool for a patient, and the more so for a layman who neglects to consult competent medical authorities. These social networks certainly cannot take the place of original research; they rely on existing observed trends.

This depends on the situation.

With a rare diagnosed condition, it is kind of easy for the patient to have more knowledge than a typical doctor. The doctor heard 15 minutes about it in med school 20 years ago, while the patient has gone through all the recent research.

Self-diagnosing is typically problematic. Self-managing chronic conditions is many times quite rational.

The way you phrased that implies that these social networks cannot be used for original research.
Doctors make decisions based on a mix of theoretical knowledge and experience - more the experience than the knowledge. 'Experience' is another word for their subjective view of the patient histories they have observed over their careers. Why not make the decision based on an empirical measure of patient histories, taken over a large random-ish sample, rather than one particular physician's subjective interpretation of only the patients he has seen? Better yet, why not present this data to your physician and have a talk about it?
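
The sampling point can be sketched with a toy simulation (all numbers are hypothetical: assume some treatment helps 60% of patients, and compare the estimate one clinician's caseload gives against a pooled network of patient histories):

```python
import random

random.seed(0)

TRUE_RATE = 0.60  # hypothetical true success rate of some treatment

def observed_rate(n_patients):
    """Fraction of successes observed in a random sample of patient histories."""
    successes = sum(random.random() < TRUE_RATE for _ in range(n_patients))
    return successes / n_patients

one_physician = observed_rate(30)      # the handful of cases one doctor happens to see
pooled_network = observed_rate(30000)  # histories pooled across many patients

print(f"one physician's sample: {one_physician:.2f}")
print(f"pooled network sample:  {pooled_network:.2f}")
```

The pooled estimate lands close to the true rate, while a 30-patient caseload can easily be off by ten points or more - which is the whole argument for bringing aggregate data to the conversation rather than relying on any one clinician's impressions.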

Integrating the values of the Baby-eaters would be a mistake. Doing so with, say, Middle-Earth's dwarves, Star Trek's Vulcans, or GEICO's Cavemen doesn't seem like it would have the same world-shattering implications.

The closer their values are to ours, the smaller the upset of integration; but for this very reason, the value of integration and the need to integrate may also be smaller. This is not a logical truth, of course, but it is often true. For instance, in the original story, the need to integrate was directly proportional to the difference between the human and Babyeater (or Superhappy and Babyeater) values.
It would be a mistake if you don't integrate ALL baby eaters, including the little ones.

I don't see a terrible problem with comments being "a discussion about the facts of the post"; that's the point of comments, isn't it?

Perhaps we just need an Open Threads category. We can have an open thread on cryonics, quantum mechanics and many worlds, Bayesian probability, etc.

There may also be a limit to how wisely one can argue that spending money on wars while cutting taxes for the wealthy is sound economic policy.

Does any viewpoint have a right to survive in spite of being wrong?

But if they're wrong, then they'd have always been wrong, and Karl should have just been liberal, rather than becoming more so when surrounded by liberals.

I think the thing that made me a seeker-after-rationalism is the same thing that made me an agnostic: Greg Egan's Oceanic.

I grew up in a fundamentalist household and had had one moment of religious euphoria. Oceanic made me confront the fact that religious euphoria, like other euphoria, is just a naturalistic phenomenon in the brain. I'm still waiting on my fundamentalist parents to show evidence for non-naturalistic causes of naturalistic phenomena.