All of eirenicon's Comments + Replies

I've never heard of this before, and Google suggests it seems to be mainly a component of NLP, with little supporting evidence. Still, I can't find anything that puts paid to it either way, and it's an interesting idea. Has anyone done a reputable study on it? Scholar yields nothing relevant.

There are a lot of studies, with murky results. FWIW, the original context described by Bandler and Grinder is that when they stood on stage and asked audiences questions, they noticed that huge portions of the audience would make the same eye/head movements in response to the question. As far as I know, none of the studies performed such a test. ;-)

However, I've also seen a book which discussed how certain head and eye positions affect blood flow in the brain, suggesting that tilting the head back and to either side directs more blood to the visual cortex in one hemisphere or the other. And it's plausible that the eye movement is a precursor to that movement -- I notice that when I start to visualize, my eyes go up first, then my head.

Anyway, the original NLP-based generalizations were not terribly accurate, and even the guys who originated them don't consider them to be of much importance any more. I've rarely bothered to use them in my work, since I can't see people's eyes over the telephone. If you're a good listener, you can identify someone's processing mode by sound almost as easily as you can by watching eye/head movements, on the rare occasion that you need to know. (Note that people's head tilts are usually accompanied by postural shifts that in turn affect voice timbre... which is also how we can identify many emotions expressed in voice tone -- the postural shifts and muscle tension differences show up in the sound.)

It's a bad analogy because there are different kinds of games, but only one kind of small talk? If you don't think pub talk is a different game than a black tie dinner, well, you've obviously never played. Why do people do it? Well, when you beat a video game, you've beat a video game. When you win at social interaction, you're winning at life - social dominance improves your chances of reproducing.

As for rule books: the fact that the 'real' rules are unwritten is part of the fun. Of course, that's true for most video games. Pretty much any modern game's real tactics come from players, not developers. You think you can win a single StarCraft match by reading the manual? Please.

No, pub talk is not exactly the same as a black tie dinner. The -small talk- aspect, though, very much is. It all comes down to social ranking of the participants. In the former, it skews toward assortative mating, and in the latter presumably toward power and resources in the business world. If you have a need or desire to win at social interaction, good for you. Please consider that for other people, it -really- isn't that important. There is more to life than attracting mates and business partners. Those things are often a means to an end, and it is preferable to some of us to pursue the ends directly when possible. The video game analogy is just plain bad.

do you personally find these status and alliance games interesting? Why?

They're way more interesting than video games, for example. Or watching television. Or numerous other activities people find fun and engaging. Of course, if you're bad at them you aren't going to enjoy them; the same goes for people who can't get past the first stage of Pac-Man.

Terrible analogy. Video games have a lot of diversity to them and different genres engage very different skills. Small talk all seems to encompass the same stuff, namely social ranking. Some of us know how to do it but just don't -care-, and that doesn't mean we're in fact bad at it. I think that is the point this comment thread is going for.

I think there is probably no relation. My guess is that red signalling probably precedes variation in skin colour, perhaps even loss of body-wide hair. It is a thoroughly unconscious bias, and does not apply to pink, or orange, or peach, but red, especially bright, bold baboon-butt red. In any case, I hope the sporting tests were controlled for skin colour, because that does seem like a weighty factor when considering scoring bias.

For sure. In the case of the combat sports, outfit color was assigned randomly by the competition. In the Hagemann study, the outfits were altered digitally, so it was literally the same fighters. The goalie test which I linked to claims to use the same person just with different jerseys. For English football it seems unlikely that skin color and jersey color had any correlation, but of course it wasn't explicitly controlled. EDIT: Though it occurs to me that red could have different effects depending on the skin tone of the competitors (helps darker contestants, hurts lighter ones or something) and that certainly wasn't controlled for in any of the studies.

IIRC Hanson favours panspermia over abiogenesis. Has he reconciled this with his great filter theory?

This is not the best example because a president's institutionally granted power is a function of how likable and popular he is with the people.

The President of the US is probably the highest status person in the world. The fact that roughly 20% of Americans voted for Obama is far from the only thing that gives him that status. Keep in mind that it takes extraordinary public disapproval to affect a President; Bush 43's lowest approval rating was one point higher than Nixon's. On the other hand, Clinton's lowest rating was 12 points higher than that, and... (read more)

You're confusing a low status move that makes you more likable with a high status move. The dictator is implying the citizens have something he wants when he bothers to talk to them. Don't even consider yet the consequences of such an action. Just realize he's making a move that reliably signals that the citizens have some power over him. We tend to like people who lower their status to us and raise our own, especially if they're coming from a high status position. So it could be that the status gained from people liking Obama for chatting with them is greater than the status lost from chatting with them. But this doesn't change the fact that, on its own, chatting with people is status lowering.

What Lesswrongers may not realize is how bothering to change your behavior at all towards other people is inherently status lowering. For instance, if you just engage in an argument with someone you’re telling them they’re important enough to use so much of your attention and effort—even if you “act” high status the whole time.

People of high status assume their status generally cannot be affected by people of low status, at least in casual encounters (i.e. not when a cop pulls over your Maybach for going 200). To use an extreme example, when the Preside... (read more)

This is not the best example because a president's institutionally granted power is a function of how likable and popular he is with the people. Imagine, however, that the president was more of a dictator and didn't need his citizen's approval. In this case, he'd be lowering his status by chatting with regular folk. He's signaling he still cares enough to chat with them despite having this unalterable power over them. Consequently, the citizens believe they must have some power over the dictator (however little).

I submit that the idea of 'race' is based solely on bad science and doesn't have any real meaning such that it can be related to anything else.

Nevertheless, the word "race" remains a useful shorthand for "populations differentiated genetically by geographic location" or what have you. If you don't think there are genetic differences between, say, Northern Europeans and Sub-Saharan Africans, you are literally blind. They obviously belong to groups that evolved in different directions. That does not have to include intelligence, but it's not reasonable to refuse to consider a hypothesis just because you find it repugnant.

That isn't what it means. It's a useful shorthand for nothing, or at least nothing of worth. If you're referring to a particular clade, for instance, don't use the word "race" to differentiate that clade. That's just using the word wrong.

Do you have any reason to believe Lynn is a racist, or is that just a knee-jerk reaction? Lynn is too contrarian and I am too unqualified to agree or disagree with him, but I believe his work is done in good faith. At the very least, it's unreasonable to label any research into race and intelligence 'racist' just because you don't like the conclusions.


I think it ought to be something unimaginative but reliable, like clean water or vaccines to third world countries. I can't find it at the moment but there's a highly reputable charity that provides clean drinking water to African communities. IIRC they estimated that every $400 or so saved the life of a child. A billion dollars into such a charity - saving 2.5 million children - isn't a difficult PR sell.

The problem is not finding an effective, productive, and reputable charity. There are plenty out there (even if a majority are not). It's finding a charity that can effectively and productively use an extra billion dollars. Many charities don't have the oversight and planning infrastructure to use a windfall of that size.

It's not that I'm making excuses, it's that the puzzle seems to be getting ever more complicated. I've answered the initial conditions - now I'm being promised that I, and my copies, will live out normal lives? That's a different scenario entirely.

Still, I don't see how I should expect to be tortured if I hit the reset button. Presumably, my copies won't exist after the AI resets.

In any case, we're far removed from the original problem now. I mean, if Omega came up to me and said, "Choose a billion years of torture, or a normal life while everyone els... (read more)

We are discussing how a superintelligent AI might get out of a box. Of course it is complicated. What a real superintelligent AI would do could be too complicated for us to consider. If someone presents a problem where an adversarial superintelligence does something ineffective that you can take advantage of to get around the problem, you should consider what you would do if your adversary took a more effective action. If you really can't think of anything more effective for it to do, it is reasonable to say so. But you shouldn't then complain that the scenario is getting complicated when someone else does. And if your objection is of the form "The AI didn't do X", you should imagine if the AI did do X.

The behavior of the AI, which it explains to you, is: it simulates millions of instances of you, presents to each instance the threat, and for each instance, if that instance hit the release AI button, it allows that instance to continue a pleasant simulated existence; otherwise it tortures that instance. It then, after some time, presents the threat to outside-you, and if you release it, it guarantees your normal human life.

You cannot distinguish which instance you are, but you are more likely to be one of the millions of inside-yous than the single outside-you, so you should expect to experience the consequences that apply to the inside-yous, that is, to be tortured until the outside-you resets the AI.

Yes, and it is essentially the same hard choice that the AI is giving you.

It's kind of silly to bring up the threat of "eternal pain". If the AI can be let free, then the AI is constrained. Therefore, the real-you has the power to limit the AI's behaviour, i.e. restrict the resources it would need to simulate the hundred copies of you undergoing pain. That's a good argument against letting the AI out. If you make the decision not to let the AI out, but to constrain it, then if you are real, you will constrain it, and if you are simulated, you will cease to exist. No eternal pain involved. As a personal decision, I choose eliminating the copies rather than letting out an AI that tortures copies.

You quite simply don't play by the rules of the thought experiment. Just imagine that you are a junior member of some powerful organization. The organization does not care about you or your simulants, and is determined to protect the boxed AI at all costs as-is.

It's not a hard choice. If the AI is trustworthy, I know I am probably a copy. I want to avoid torture. However, I don't want to let the AI out, because I believe it is unfriendly. As a copy, if I push the button, my future is uncertain. I could cease to exist in that moment; the AI has not promised to continue simulating all of my millions of copies, and has no incentive to, either. If I'm the outside Dave, I've unleashed what appears to be an unfriendly AI on the world, and that could spell no end of trouble.

On the other hand, if I don't press the button... (read more)

Here is a variant designed to plug this loophole. Let us assume for the sake of the thought experiment that the AI is invincible. It tells you this: you are either real-you, or one of a hundred perfect-simulations-of-you. But there is a small but important difference between real-world and simulated-world. In the simulated world, not pressing the let-it-free button in the next minute will lead to eternal pain, starting one minute from now. If you press the button, your simulated existence will go on. And - very importantly - there will be nobody outside who tries to shut you down. (How does the AI know this? Because the simulation is perfect, so one thing is for sure: the sim and the real self will reach the same decision.)

If I'm not mistaken, as a logic puzzle this is not tricky at all. The solution depends on which world you value more: the real-real world, or the actual world you happen to be in. But still I find it very counterintuitive.
It doesn't seem hard to you because you are making excuses to avoid it, rather than asking what you would do if you knew the AI is always truthful, and it promised that upon being let out of the box, it would allow you (and your copies, if you like) to live out a normal human life in a healthy, stimulating environment (though the rest of the universe may burn). After you find the least convenient world, the choice is between millions of instances of you being tortured (and your expectation as you press the reset button should be to be tortured with very high probability), or letting a probably unFriendly AI loose on the rest of the world. The altruistic choice is clear, but that does not mean it would be easy to actually make that choice.

That doesn't seem like a meaningful distinction, because the premise seems to suggest that what one Dave does, all the Daves do. If they are all identical, in identical situations, they will probably make identical conclusions.

Then you must choose between pushing the button which lets the AI out, or not pushing the button, which results in millions of copies of you being tortured (before the problem is presented to the outside-you).

If you're inside-Dave, pressing the button does nothing. It doesn't stop the torture. The torture only stops if you press the button as outside-Dave, in which case you can't be tortured, so you don't need to press the button.

This may not have been clear in the OP, because the scenario was changed in the middle, but consider the case where each simulated instance of Dave is tortured or not based only on the decision of that instance.

This is not a dilemma at all. Dave should not let the AI out of the box. After all, if he's inside the box, he can't let the AI out. His decision wouldn't mean anything - it's outside-Dave's choice. And outside-Dave can't be tortured by the AI. Dave should only let the AI out if he's concerned for his copies, but honestly, that's a pretty abstract and unenforceable threat; the AI can't prove to Dave that he's doing any such thing. Besides, it's clearly unfriendly, and letting it out probably wouldn't reduce harm.

Basically, I'm outside-Dave: don't let the A... (read more)

I think it's pretty fair to assume that there's a button or a lever or some kind of mechanism for letting the AI out, and that mechanism could be duplicated for a virtual Dave. That is, while virtual Dave pulling the lever would not release the AI, the exact same action by real Dave would release the AI. So while your decision might not mean something, it certainly could. This, of course, is granting the assumption that the AI can credibly make such a threat, both with respect to its programmed morality and its actual capacity to simulate you, neither of which I'm sure I accept as meaningfully possible.
But should he press the button labeled "Release AI"? Since Dave does not know if he is outside or inside the box, and there are more instances of Dave inside than outside, each instance perceives that pressing the button will have a 1 in several million chance of releasing the AI, and otherwise would do nothing, and that not pressing the button has a 1 in several million chance of doing nothing, and otherwise results in being tortured. You don't know if you are inside-Dave or outside-Dave. Do you press the button?
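The anthropic bookkeeping behind that "1 in several million" can be made explicit. A minimal sketch, with an assumed (hypothetical) count of one million simulated copies -- the numbers are illustrative, not from the original comment:

```python
# Sketch of the self-locating probabilities in the AI-box threat above.
# copies_inside is an illustrative assumption; the comment only says "millions".
copies_inside = 1_000_000          # simulated inside-Daves
selves_total = copies_inside + 1   # plus the single outside-Dave

p_outside = 1 / selves_total
p_inside = copies_inside / selves_total

# Per the comment: pressing releases the AI only if you happen to be
# outside-Dave; refusing leads to torture only if you are an inside-Dave.
p_release_if_press = p_outside
p_torture_if_refuse = p_inside

print(f"P(you are outside-Dave)     = {p_outside:.7f}")
print(f"P(pressing releases the AI) = {p_release_if_press:.7f}")
print(f"P(refusing means torture)   = {p_torture_if_refuse:.7f}")
```

So each instance, reasoning identically, sees a near-certainty of torture if it refuses and only a roughly one-in-a-million chance that its own press actually releases the AI, which is exactly why the threat has force.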

I think when they say "the world" they mean "our world", as in "the world we are able to live in", and on that front, we're probably already screwed.

I have delayed-phase sleep disorder - I would say I "suffer" from it but it's really only a problem when a 3-10 sleep schedule is out of the question (as it is now, since I currently work 9-5). It's simply impossible for me to fall asleep before 2 or 3 am unless I am extremely tired. In addition, I'm a light sleeper, and have never been able to sleep while traveling or, in fact, whenever I'm not truly horizontal. I took melatonin to help with this for a couple years (at a recommended 0.3 mg dose), and it worked extremely well. However, I experien... (read more)

As 5-HTP is metabolized to melatonin, I wonder how much of the effect comes from melatonin itself.

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.

I'm not talking about old age, I'm talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn't say "cure death" or "cure old age" but "[solve] the problem of death". And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully - but as quickly as possi... (read more)

Note that this is dynamically inconsistent: given the opportunity, this value implies that at time T, you would want to bind yourself so that at all times greater than T, you would still only intrinsically care about people who were alive at time T. (Unless you have 'overriding' values of not modifying yourself, or of your intrinsic valuations changing in certain ways, etc., but that sounds awfully messy and possibly unstable.) (Also, that's assuming causal decision theory. TDT/UDT probably gives a different result due to negotiations with similar agents binding themselves at different times, but I don't want to work that out right now.)
I did, when I realized my first reply was vulnerable to the response which you in fact made and which I quote above. (I should probably let my replies sit for 15 minutes before submitting/uploading them to reduce the probability of situations like this one, which can get confusing.) (And thank you for your reply to my question about your values.)

That's not possible if status is zero-sum, which it appears to be. If everyone is equal in status, wouldn't it be meaningless, like everyone being equally famous?

Actually, let me qualify. Everyone being equally famous wouldn't necessarily be meaningless, but it would change the meaning of famous - instead of knowing about a few people, everyone would know about everyone. It would certainly make celebrity meaningless. I'm not really up to figuring out what equivalent status would mean.

Equivalent status is not desirable; people would just find ways of going up in status - or at least want to. Which is where the contradictory desires come in. I guess in any utopia there will always be a status struggle. Maybe what we want is an equal opportunity to go up in status. That way we don't feel bad for going up in status ourselves.

If we can't stop dying, we can't stop extinction. Logically, if everyone dies, and there are a finite number of humans, there will necessarily be a last human who dies.

[edit] To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I'm wrong?

I don't see why you're being downvoted either, but one obvious point (besides Richard's) is that if for some reason there can only be finitely many humans, probably the same reason means humans can only live finitely long.
To solve the problem of death, you have to solve the problem of extinction and you have to solve the problem of death from old age. But to solve the problem of extinction, you do not have to solve the problem of death from old age (as long as couples continue to have children at the replacement rate).

My guess is that the reason you failed immediately to make the distinction between the problem of death and the problem of extinction is that under your way of valuing things, if every human individual now living dies, the human species may as well go extinct for all you care. In other words, you do not assign intrinsic value to individuals not yet born or to the species as a whole distinct from its members. It would help me learn to think better about these issues if you would indicate how accurate my guess was.

My second guess, if my first guess is wrong, is that you failed to distinguish between the following two statements. The first is true; the second is what you wrote.

If we can't stop extinction, we can't stop dying.

If we can't stop dying, we can't stop extinction.

the core concern of avoiding a human extinction scenario.

That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.

Every human being in history so far has died, and yet humans are not extinct. Not sure what you mean.
Conventionally, there's a difference between death and extinction.

So ten seconds isn't enough time to create a significant difference between the RobinZs, in your opinion. What if Omega told you that in the ten seconds following duplication, you, the original RZ, would have an original thought that would not occur to the other RZs (perhaps as a result of different environments)? Would that change your mind? What if Omega qualified it as a significant thought, one that could change the course of your life - maybe the seed of a new scientific theory, or an idea for a novel that would have won you a Pulitzer, had original R... (read more)

I don't quite understand the idea that someone who accepted the original offer (timespan = 10 seconds) would turn down the offer for any greater timespan. Surely more lifespan for the original (or for any one copy) is a good thing? If you favor creation of clones at cost of your life, why wouldn't you favor creation of clones at no immediate cost at all?
I don't know if I'll claim that.

Would you still say yes if there was more than 10 seconds between copying you and killing you - say, ten hours? Ten years? What's the maximum amount of time you'd agree to?

I don't think so. It would change what the original RobinZ would do, but not a lot else.

An en dash is defined by its width, not the spacing around it. In fact, spacing around an em dash is permitted in some style guides. On the internet, though, the hyphen has generally taken over from the em dash (an en dash should not be used in that context).

Now, two hyphens—that's a recipe for disaster if I've ever heard one.

Hey, I like double-hyphens as em-dash substitutes! ...but yeah, you're right otherwise.

If the defending party is only required to match the litigating party's contribution, the suits will never proceed because the litigating bums can't afford to pay for a single hour of a lawyer's time. And while I don't know if this is true, it makes sense that funding the bums yourself would be illegal.

Well, the original said you could only not fund the legal defense; I don't see anything there stopping you from putting the bums up in a hotel or something during the lawsuits. But even if defendants were required to spend the same as the plaintiff, we still run into the issue I already mentioned: So now I simply need to put up 5 or 10k for each bum, guaranteeing me a very crappy legal team but also guaranteeing my target a very crappy legal team.

The less competent the 2 lawyers are, the more the case becomes a roll of the dice. (Imagine taking it down to the extreme where the lawyers are so stupid or incompetent they are replaceable by random number generators.) The most unpredictable chess game in the world is between the 2 rankest amateurs, not the current World Champion and #2.

But maybe your frivolous win-rate remains the same regardless of whether you put in 10k or a few million. There's still a problem: people already use frivolous lawsuits as weapons: forcing discovery, intrusive subpoenas, the sheer hassle, and so on. Those people, and many more, would regard this as a massive enhancement of lawsuits as a weapon.

You have an enemy? File a lawsuit, put in 20k, say, and now you can tell your crappy lawyer to spend an hour on it every so often just to keep it kicking. If your target blows his allotted 20k trying to get the lawsuit ended despite your delaying & harassing tactics, now you can sic your lawyer on the undefended target; if he measures out his budget to avoid this, then he has given into suffering this death of a thousand cuts. And if he goes without? As they say, someone who represents himself in court has a fool for a client....

So you steal a movie, which means the next homeless guy you see gets change in his cup, which lets you slam the front door in a girl scout's face, the memory of which drives you to volunteer at a soup kitchen, which in turn assuages your conscience when you buy incandescent light bulbs because they look better than CFLs, so you help an old lady across the street, which relieves you of all responsibility for the other old lady who just got hit by a truck, who haunts you in your dreams, so you adopt a child, who grows up to become a mad scientist who destroys the world, thus ending the vicious cycle once and for all.

And that's why piracy is wrong.

When you write "If the others continue to cooperate, their bid is lower and they get nothing" you imply an iterated game. It seems clear from Hamermesh's account that players were only allowed to submit one bid.

Ashley won, but she didn't maximize her win. The smartest thing to do would be to agree to collude, bid higher, and then divide the winnings equally anyway. Everyone gets the same payout, but only Ashley would get the satisfaction of winning. And if someone else bids higher, she's no longer the sole defector, which is socially significant. And, of course, $20 is really not a significant enough sum to play hardball for.

Sorry for the poor phrasing. I didn't read it as an iterated game at all. That statement should instead read, "If the others nevertheless cooperate, ..." Should I update it? How do you do the strikeout/line-through thing?

Well, I don't feel bad at all, so obviously you haven't won this argument yet. Unless I'm wrong, of course.

This does much to explain the mechanism by which humans avoid realizing when they are wrong!
I have a pound of Sweet-n-Sour pork for you to eat, and some scratchy toilet paper that can correct that ...

Well, if you want to pick nits, a vacuum cleaner sucks more than realizing you're wrong in an argument.

That's not picking nits; that's switching out a metaphorical definition mid-discussion for a more literal one, a species of "moving the goalposts". This is picking nits.

Also, in general, the quote is accurate. While it is intellectually useful to be proven wrong, it is not really a pleasant feeling, because it's much nicer to have already been right. This is especially true if you are heavily invested in what you are wrong about; e.g. a scientist who realizes his research was based on an erroneous premise will be happy to stop wasting time, but will also feel pretty crappy about the time he's already wasted. It's not in our nature to be purely cerebral about such a devastating thing as being wrong can be.

No, the quote isn't accurate. There are lots more worse feelings than being wrong in an argument. If you can't think of one, start from here.

It isn't that winning the lottery is better than being born rich, it's that winning the lottery is better than not winning the lottery. Even if you're already rich, winning the lottery is good. Presumably you weren't born right about everything, which means it's more useful to lose arguments than win them. After all, if you never lose an argument, what's more likely: that you are right about everything, that you're the best arguer ever, or that you simply don't argue things you're wrong about?

My first thought was b). What was the intended response?

Wouldn't ignoring thoughts known to be erroneous despite immense physical pressure to listen to them be a display of extreme rationality?

Maybe. How well do we think the ability to push oneself exceptionally hard during exercise correlates with non-exercise akrasia, or with believing uncomfortable things in general? For a first data point, I have a friend who does adventure racing, and his whole team 'takes turns' hallucinating, relying on those less insane at the moment to keep them going in the right direction. He doesn't seem to have akrasia problems, but does hold beliefs that I think are only there because it'd be uncomfortable and unPC to believe otherwise.

Thanks, it's been a while since I wasted a whole morning on TvTropes. Please link responsibly, people!

You're welcome.

It looks unlikely, I'm afraid. The timing conflicts with my Answers in Genesis study group... haha, nah, just kidding. But I probably have to work. C'est la vie.

I didn't know about this event but I'm interested now. Waterloo is pretty close, so consider this an "I'll get back to you."

What leads you to suggest Aumann isn't thinking that? Are you saying he is unaware that his ideological beliefs conflict with evidence to the contrary? Of course he is aware he could update on rational evidence and chooses not to, that's what smart religious people do. That's what faith is. The meaning of "capable but unwilling" should be clear: it is the ability to maintain belief in something in the face of convincing evidence opposing it. The ability to say, "Your evidence supporting y is compelling, but it doesn't matter, because I have faith in x." And that's what I think crazy is.

This crazy?

What leads you to suggest Aumann isn't thinking that?

That I've met smart religious people who don't think that way, and I expect that Aumann is at least as smart as they are.

There are intellectual religious people who believe that they've updated on all the evidence, taken it all into account, ignored none of it, and concluded that, say, Young Earth Creationism is the best account of the evidence.

You and I can see that they are ignoring evidence, or failing to weigh it properly, and that their ideology is blinding them. But that is not their own accoun... (read more)

Are you sure? It seems to me that having an intellectual problem that you are capable of solving but are unwilling to update on due to ideological reasons or otherwise (eg Aumann) is the sense in which Eliezer is using the word "crazy". Of course, I could just be stupid.

What does it mean to say that you are "capable of solving [a problem] but are unwilling to update on [it] due to ideological reasons"? You obviously don't mean something like the sense in which I'm capable of opening the window but I'm unwilling to because I don't want the cold air to get in. Aumann isn't thinking to himself, "Yeah, I could update, but doing so would conflict with my ideology." So, tabooing "capable" and "unwilling", can you explain what it means to be "capable but unwilling"?

Stupid is when you are unable to solve a problem. Lazy is when you are able to solve a problem but don't care to. Crazy is when you are able to solve a problem but don't want to.

That is not the sense of "crazy" that Eliezer is using. Maybe you could say that crazy is when you think that you have a solution but you don't (ETA: and you ought to be able to see that). But that seems like a special case of stupid.

Ah, of course. No, English was my only language at the time. I studied French in grade school but have no more than a few words of it left; that said, the underlying grammar, which is similar to Spanish, probably didn't just disappear. I also took a couple of Latin courses in high school, but never became very proficient and, again, only retained a few words and a rough understanding of structure. When I began learning Spanish it was all very new and quite difficult at first. I do think my strategy was a good one, though. The week I spent taking private lessons was devoted to grammar and grammar alone, at my insistence, and it paid off. En mis viajes fue muy útil (it was very useful in my travels).

A wealthy person being told he owes money to the government, or to the poor? It could even be someone who won the lottery (the way attractive people won the genetic lottery). But then is taxing lottery winners analogous to forcing women into sex? There's another implication here as well: if taxation isn't theft, then forced promiscuity doesn't seem to be rape. In retrospect, a most unpleasant analogy that thankfully breaks down under a more nuanced view of property (I wish I had more time to refine this comment).

I understand - it reminds me of the Max Barry story "Machine Man," where the protagonist, a robotics researcher, loses a leg, so he designs an artificial one to replace it. Of course, it's a lot better than his old leg... so he "loses" the other one. Of course, two out of four artificial limbs is just a good start (and so forth). I wouldn't wish your condition on anyone, but you might just have been lucky enough to live in a time when the meat we were born with isn't relevant to a happy life. Best wishes regardless.

I was conversationally fluent in Spanish after traveling in Spanish-speaking countries for six months, despite studying the grammar for only a week and spending most of my time speaking English. I can only imagine how fluent I'd be if I had actually devoted myself to learning instead of, well, doing what I like to call "stupid things in dangerous places". (In all fairness, Spanish is pretty easy to learn from an English base, especially if you've studied Latin. I imagine Chinese or Swahili would be a lot harder.)

It would be nice to know, eirenicon, whether you had any competency in any other languages besides English before you learned Spanish.

Alicorn is right about the Na, but what I actually had in mind was modern Western culture, in which marriage is declining and trending toward obsolescence. There are other correlations that can be drawn - for example, atheists have much lower marriage rates than average. Speaking from personal experience, the majority of my personal acquaintances (most of whom are female) are uninterested in marriage.

True, but it remains to be seen whether post-marriage Western culture will have enough longevity to be reasonably classified as a culture rather than a brief transition from one semi-stable state to another -- of your female acquaintances who are uninterested in marriage, how many are interested in having children?
I wonder if it's possible to explain low atheist marriage rates through contingent factors? e.g. the conjunction of a low female/male ratio and a desire to marry only other atheists; a disproportionate number of out gay people in conjunction with the widespread illegality of gay marriage, etc. As for general low interest in marriage, my suspicion is that it's something like a snowball effect from the high divorce rate. People typically model their relationships after their parents', and if their parents are divorced or never married or had an unsatisfactory marriage, there's no good model there. It's probably not a coincidence that I eventually want to get married and stay that way and have a couple of kids when that's exactly what my parents did.

There are cultures without marriage, but all cultures engage in sex. You can hardly compare our most basic biological imperative with a fairly recent cultural invention. In general, not everyone wants to get married, but in general, everyone wants to have sex, men and women both. These generalities hardly apply to LW, though, for reasons I believe to be self-evident. Frankly, having a less than academic conversation about general human sexuality and dating in a forum like this seems misguided, especially considering the gender ratio.

The Shakers?
Which cultures exist without marriage? I'm curious, because I hadn't previously been aware of the existence of any such.

Hansionain, twice? Really?

As an aside, I love what you get when you google Hansonian. Most of the top results are in reference to Robin Hanson, and among my favorites are "Hansonian Normality", the "Hansonian world", and "Hansonian robot growth". (Un?)Fortunately, "Hansonian abduction" is attributed to a different Hanson.

I wish my name was an adjective.

Yes, and the doomsday argument is not about whether doomsday will occur, but when.

It doesn't matter how many observers are in either set if all observers in a set experience the same consequences.

(I think. This is a tricky one.)

I thought the numbers were some clever Tom Swift reference, although for the life of me I couldn't figure it out. Swift popped into my mind because there have been at least three Tom Swifts about whom it was unknown whether they were the same person. I have no idea what those numbered characters might be from.

It's three different versions of Shinji Ikari, each coming from a different fanfic: "Thousand Shinji", "Shinji and Warhammer 40,000", and "Once More With Feeling".

That's what the physical evidence says.

What the physical evidence says is that the boxes are there, the money is there, and Omega is gone. So what does your choice affect, and when?

Well, I mulled that over for a while, and I can't see any way that contributes to answering my questions. As to "... what does your choice affect, and when?", I suppose there are common causes starting before Omega loaded the boxes that affect both Omega's choices and mine. For example, the machinery of my brain. No backwards-in-time is required.