An Open Thread: a place for things foolishly April, and other assorted discussions.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.

539 comments

It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:

LessWrong is writing a story, called "Harry Potter and the Methods of Rationality". It's just about what you'd expect: absolutely full of ideas from this site. I know it's not the usual fare here, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.

Who knows, maybe the author will even decide to decloak and tell us who to thank?

My fellow Earthicans, as I discuss in my book Earth In The Balance and the much more popular Harry Potter And The Balance Of Earth, we need to defend our planet against pollution. As well as dark wizards.

-- Al Gore on Futurama

I'm 98% confident it's Eliezer. He's been taunting us about a piece of fanfiction under a different name for some time. I guess this means I don't have to bribe him with mashed potatoes to get the URL after all. Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.

Yeah, I don't think I can plausibly deny responsibility for this one.

Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...

Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong".

Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than there, since a flood of reviewers would probably sound rather strange to them.

"Oh, dear. This has never happened before..."

Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)

I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!

Also, there is a sharply limited supply of people who speak Japanese, Hebrew, English, math, rationality, and fiction all at once. If it wasn't you, it was someone making a concerted effort to impersonate you.
Do I have to guess right? ;)
It gets a strong vote of approval from my girlfriend. She made it about halfway through Three Worlds Collide without finishing, for comparison. We'll see if I can get my parents to read this one... Edit: And I think this is great. Looking forward to when Harry crosses over to the universe of the Ultimate Meta Mega Crossover.
Let's make that a Prediction. Harry becomes the ultimate Dark Lord by destroying the universe and escaping to the Metametaverse of the Ultimate Meta Mega Crossover.
DO NOT want.
This Harry is so much like Ender Wiggin.
Really? I picture him looking like a younger version of this [].

This Harry and Ender are both terrified of becoming monsters. Both have a killer instinct. Both are much smarter than most of their peers. Ender's two sides are reflected in the monstrous Peter and the loving Valentine. The two sides of Potter-Evans-Verres are reflected in Draco and Hermione. The environments are of course very similar: both are in very abnormal boarding schools teaching them things regular kids don't learn.

Oh, and now the Defense Against the Dark Arts prof is going to start forming "armies" for practicing what is now called "Battle Magic" (like the Battle Room!).

And the last chapter's disclaimer?

The enemy's gate is Rowling.

If the parallels aren't intentional I'm going insane.

And going back a few chapters, I'm betting that what Harry saw as wrong with himself is hair-trigger rage.
Ooo, I missed that. Yeah, OK.
There is a reason I didn't look for it. It isn't done. Having found it anyway via link above, of course I read it because I have almost no self-control, but I didn't look for it! Are you sure you wouldn't rather have the mashed potatoes? There's a sack of potatoes in the pantry. I could mash them. There's also a cheesecake in the fridge... I was thinking of making soup... should I continue to list food? Is this getting anywhere?
Holy fucking shit that was awesome.
This is a lot of fun so far, though I think McGonagall was in some ways more in the right than Harry in chapter 6. Also, I kind of feel like Draco's behavior here is a bit unfair to the wizarding world as portrayed in the canon - the wizarding world is clearly not at all medieval in many ways (especially in the treatment of women, where the behavior we actually see is essentially modern), so I'm not sure why it should necessarily be so in that way. Regardless of my nitpicking it's a brilliant fanfic, and it's nice to see muggle-world ideas enter the wizarding world (which always seemed like it should have happened already).
You also have the approval of several Tropers [], only one of whom is me.
I normally read within {nonfiction} U {authors' other works} but I had such a blast with Methods of Rationality that I might try some more fiction.
This story reminded me distinctly of Harry Potter and the Nightmares of Futures Past -- you might enjoy that one. Harry works until he's 30 to kill Voldemort, and by the time he succeeds, everyone he loves is dead. He comes up with a time travel spell that breaks if the thing being transported has any mass, so he kills himself, and lets his soul do the travelling. 30-year-old Harry's soul merges with 11-year-old Harry, and a very brilliant, very prepared, very powerful, and deeply disturbed young wizard enters Hogwarts.
Similar in premise is "The Mirror of Maybe" (slash warning, never-updates warning) in which a fifth-year Harry is shown a hypothetical future and uses the extensive knowledge gained thereby to ditch school, disguise himself as an adult, and become the greatest Gary Stu of all time. Slightly AU magic system and, as I warned, it never freakin' updates.
I've finished reading that. It's very well written technically - better than Eliezer, who overindulges in speechifying, hyperbole, and italics - but in general Harry doesn't seem disturbed enough, heals too easily, and there are too few repercussions from his foreknowledge. (Snape leaving and usurping Karkaroff at Durmstrang seems to be about it.) That, and the author may never finish, which is so frustrating an eventuality that I'm not sure I could recommend it to anyone.
AH... spoiler!
Snape leaving is hardly a spoiler, since so far it hasn't affected anything...
I like all of Eliezer's fiction... if you want more like this, see the pseudo-sequel []. It is too insane a story to recommend to most people, but assuming you've read Eliezer's non-fiction, you can jump right in. Otherwise, just about all of Eliezer's fiction is worth reading; Three Worlds Collide is his best work of fiction.
It's now the second hit on Google for (rationality + fiction)!
What proportion of the whole story are the current ten (nine) chapters likely to be? (There is going to be more, right? Right?!)

It's almost done, actually. Here's a sneak preview of the next chapter:

Dumbledore peered over his desk at young Harry, twinkling in a kindly sort of way. The boy had come to him with a terribly intense look on his childish face - Dumbledore hoped that whatever this matter was, it wasn't too serious. Harry was far too young for his life trials to be starting already. "What was it you wished to speak to me about, Harry?"

Harry James Potter-Evans-Verres leaned forward in his chair, looking bleak. "Headmaster, I got a sharp pain in my scar during the Sorting Feast. Considering how and where I got this scar, it didn't seem like the sort of thing I should just ignore. I thought at first it was because of Professor Snape, but I followed the Baconian experimental method which is to find the conditions for both the presence and the absence of the phenomenon, and I've determined that my scar hurts if and only if I'm facing the back of Professor Quirrell's head, whatever's under his turban. Now it could be that my scar is sensitive to something else, like Dark Arts in general, but I think we should provisionally assume the worst - You-Know-Who."

"Great heavens

...
That that is an excerpt, or that you are almost done?
How proud of myself should I feel for figuring out how Comed-Tea works before Harry did? (Keeping in mind that it's been years since I internalized the facts that in the Harry Potter universe, prophecies work and Time-Turners don't create alternate time-lines, information not available to rational!Harry.)
Not very. Tons of commentators glommed onto the non-time-warping explanation, and the fic all but tells us that this is a possibility, especially with the experiment vignette with Hermione on the train. (Personally, I don't like the idea that the Comed-Tea affects only Harry; that mechanism leaves Luna Lovegood as an ethically depraved libeller.)
Or just charmingly nutty.
Or her father, at least. (I think there was an author's note about this - she says vague things and he turns them into ridiculous headlines.)
Well, that's just great - how am I supposed to know that now with Eliezer's little erasure system? I suppose better Xenophilius being a depraved libeller than Luna... although as an adult it's even more inexcusable.
I was looking over the old chapters and I found this [] :
Good to know.
And what does Voldemort have to do with anything? He's not Harry's target, he's just a stumbling block in the middle. You're not fooling me that easily. :P
This Harry is so much more potentially powerful than canon Harry, therefore having canon Voldemort be the final boss would be a let-down. Eliezer's author description explicitly says that anything which strengthens the hero must be accompanied by a corresponding increase in the difficulties he will face, so I think we can be pretty confident that things are going to be much more awesome than just defeating Voldemort with the Potterverse equivalent of RPG rules exploitation [].
Besides, even if you had that happen in your story and had a dementor munching on the back of Quirrell's head, wouldn't the result be the equivalent of destroying only a single horcrux? (Unless the bits of soul are linked in such a way that the dementor can suck them all up at a distance through the one...) You can't escape writing the rest of this that easily! ;) Also, hrm... would your comment here then count as you doing a parody fanfic of your own fanfic?
It could be one chapter where they debate whether or not to sic the dementor on Quirell without even confronting him, then one chapter where they figure out how to magically triangulate and destroy all of the Horcruxes at once.
Shhhh... Stop trying to make it easy for him to end the story sooner rather than later. ;) (Never mind the grayer ethical aspects: in a world with a potentially eternal afterlife, Moldy Voldy's crimes may not stack up to that. That is, "destroying a soul" >>> "killing someone" in the Potterverse. Probably even "killing many someones". (OTOH, IIRC the Dementors were to a large extent his creatures, so we can probably safely assume that he was involved with or arranged for (or, more to the point, would in the future arrange for) plenty of soul consuming/destroying.))
Well, by way of contrast, this point in the original book took us up to page 121 of 309. The story is currently 44,000 words which is approximately half the length of the average novel. However, we still haven't seen any deviation from the original story which suggests that Harry's opposition will be much harder, so I'm inclined to go with the first estimate, which gives us about 1/3 of total length so far. Not counting any sequels, of course.
Even if you'd used a different pseudonym and such, I'm sure a lot of us would have figured it out just from your writing style, the rationality explanations, and ... other things. Hell, the first chapter's disclaimer alone was a giveaway. :) Anyway, I've just finished reading all nine chapters, and this is a dream come true for me. I've had a few fantasies of my own about how I would have done things differently (and better / more rationally) if I'd been in Harry's shoes, and they were a lot like this fanfic... except for the sheer, scintillating brilliance of your work, I mean. This could be a good introduction/portal to rationality for a lot of people. I'll do what (little) I can to promote it and get you more readers, and I suggest other LWers do the same.
No, no, it's not Eliezer. It's an alternate personality, which acts exactly the same and shares memories, that merely believes it's Eliezer.
Sounds like an Eliezer to me.
like an Eliezer, yes.
I know, right? This would have been a wonderful story for me to read 10 years ago or so, and not just because now I'm having difficulty explaining to my girlfriend why I spent friday night reading a Harry Potter fanfic instead of calling her...
Magnificent. (I've sent it to some of my friends, most of whom are thoroughly enjoying it too; many of them are into Harry Potter but not advanced rationalism, so maybe it will turn some of them on to the MAGIC OF RATIONALITY!) Edit: Sequel idea which probably only works as a title: "Harry Potter and the Prisoner's Dilemma of Azkaban". Ohoho! Edit 2: Also on my wishlist: Potter-Evans-Verres Puppet Pals.
I could see that working as a prison mechanism, actually. Azkaban would be an ironic prison, akin to Dante's contrapasso []. (The book would be an extended treatise on decision theory.) The reward for both inmates cooperating is escape from Azkaban, the punishment really horrific torture, and the inmates are trapped as long as they are conniving cheating greedy bastards - but no longer. (The prison could be like a maze, maybe, with all sorts of different cooperation problems - magic means never having to apologize for Omega.)
So if one prisoner cooperates and the other defects, then the defector goes free and the cooperator doesn't? That doesn't sound very effective for keeping conniving cheating greedy bastards in prison.
I figure one would probably have to modify the dilemma to give sub-escape rewards to the defector. (I realize this inversion destroys the specific logical structure, but that's artistic license for you.)
Four possible outcomes: stay in prison (maintain status quo), be released, be (mind)raped by a Dementor, or receive some chocolate. Distribute them in the payoff matrix according to whatever Æsop you're pushing :-)
* Competitor gets chocolate, cooperator gets indirect dementor exposure.
* Both compete, both severely dementor'd.
* Both cooperate, both released, but bound together somehow.
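Those outcomes can be sketched as a toy payoff table (a hypothetical encoding - the numeric values and the outcome ranking are my own assumptions, not anything from the thread):

```python
# Toy payoff matrix for the hypothetical "Azkaban dilemma" sketched above.
# Outcomes are ranked on an arbitrary scale (higher = better for the inmate);
# the specific numbers are illustrative assumptions only.
RELEASED, CHOCOLATE, STATUS_QUO, DEMENTORD = 3, 2, 1, 0

# (my_move, their_move) -> my outcome
PAYOFF = {
    ("cooperate", "cooperate"): RELEASED,   # both released, but bound together
    ("cooperate", "defect"):    DEMENTORD,  # cooperator gets indirect dementor exposure
    ("defect",    "cooperate"): CHOCOLATE,  # defector gets chocolate, not freedom
    ("defect",    "defect"):    DEMENTORD,  # both severely dementor'd
}

def best_response(their_move):
    """The move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

assert best_response("cooperate") == "cooperate"
```

With these assumed values, cooperating is a best response to a cooperator - which is exactly the effect of giving the defector only sub-escape rewards, and why this variant no longer has the classic dilemma's logical structure.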
What about making the prison a hedge-maze sort of area, with lots of controllable access-points? Points earned by interactions can be spent to give yourself temporary access through a specific gate, any given pair of prisoners can only play the game a certain number of times per day, and unspent points decay - say, 5% loss per day. To earn enough points to pay your way out the front door, you effectively have to have access to the whole interior, and be on good terms with most of the people there.
The problem is that with 'currency' and iterated interactions like that, you start to approximate a concentration or POW camp [], with considerable mingling and freedom, which allows bad'uns to thrive. At least, if my reading of literature about said camps (like World of Stone or King Rat) is anything to go by.
In that case, the points [] would have to be associated with a task [] rather than simply cooperation. Edit: also []
Sure, that's reasonable. And it makes the prison/maze much more general - there could be all sorts of rationalist/moral traps in it, and then one could make the pure prisoner's dilemma the final obstacle before escape. I suppose the hard part is justifying in-universe the master rationalists who could create such a prison/maze - EY has clearly set Harry up in the fanfic as being the first master rationalist, and we can hardly postulate a hidden master when EY went to such pains with Draco to demonstrate the wizarding world's general moral bankruptcy (a hidden master would, one thinks, manage to bring the wizarding world up to at least muggle levels, if maybe not past them).
Why would one think that? This hidden master could be a total jerk-face.
Presumably Harry himself will be bringing about some drastic reforms. There's also the issue that wizards of the distant past might have been better rationalists than the current crop, but had less to work with, and the arts have simply been lost over time.
That's not a bad idea. It actually works well - the general loss of wizarding power is then due not to any genetic dilution by mudbloods, but because they're ignorant or lazy. It goes a little against the LW grain (we despise Golden Age myths), but since Rowling insists on a wizarding Golden Age, it's a good subversion.
They don't have systems or habits for preserving knowledge reliably, and there's enough competition between wizards that a lot of the best spells (not to mention methods for developing powerful spells) won't be recorded, and might not even be taught.
Actually, genetic dilution might still be a factor... if the rationalism of the founders was imperfect, and they didn't know much about heredity, the inability of most people to duplicate magical feats might have been interpreted as the result of an error in those finicky incantations. Emphasis on rote memorization of reliable effects would then come at the expense of higher-level invention and item creation techniques. There are some possibly-relevant discussions of a history of magic in Tales of MU [].
You'll have to link them, then (unless you mean the very funny section about the science cultists []); I read a bit of Tales of MU, but got weirded out after a while.
The reference to King Rat [] I can identify with an Internet search - what's World of Stone?
Try "Tadeusz Borowski []". Sample quotes: A (nonfiction) quote I sometimes think of in connection with World of Stone, though it's actually from The Captive Mind [], is:
Harry Potter as a boy genius smart-aleck aspiring rationalist works surprisingly well. And the idea of extending the pull of rationalism a bit beyond its standard sci-fi hunting grounds using Harry Potter fanfiction is brilliant.
For the record, it's currently the first Google autocomplete result for "harry potter and the me", with apparently multiple pages of forum posts and such about it.
So people get really invested in this fan-fiction stuff, huh?
Fb, sebz gur cbvag bs ivrj bs na Nygreangr-Uvfgbel, V nffhzr gur CBQ vf Yvyyl tvivat va naq svkvat Crghavn'f jrvtug ceboyrz. Gung jbhyq graq gb vzcebir Crghavn'f ivrj bs ure zntvpny eryngvirf, naq V nffhzr gur ohggresyvrf nera'g rabhtu gb fnir Wnzrf naq Yvyyl sebz Ibyqrzbeg. Tvira gur infgyl vapernfrq vagryyvtrapr bs Uneel, V nffhzr ur vf abg trargvpnyyl gur fnzr puvyq jr fnj va gur obbxf, nygubhtu vzcebirq puvyqubbq ahgevgvba pbhyq nyfb or n snpgbe.
Not having the same father would tend to imply not being genetically the same, yes. This isn't the Harry Potter we know.
He does have the same genetic parents; it's his biological aunt, not his biological mother, who married someone different in this timeline.
I feel rather foolish now. Of course he does. Should still be a genetic reshuffling, at least. The point of departure seems to be before his birth, so the butterfly effect would be in effect.
The improbability of magic should make any effort to test the hypothesis unjustified. That theories should be tested no matter how improbable is generally an incorrect dogma. (One should distinguish improbable from silly, though.)

I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?

Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.
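The "cheap test" argument can be put in toy expected-value terms (made-up numbers and my own framing, not anything from the comment - the point is just that meta-uncertainty about your own judgment puts a floor under the effective prior):

```python
# Toy model of "just run the cheap test". The key move: your *effective*
# prior on a hypothesis can't drop below the chance that your own judgment
# of its impossibility is simply broken. All numbers are illustrative.

def worth_testing(stated_prior, p_own_judgment_broken, benefit, cost):
    """True if the cheap test has positive expected value."""
    effective_prior = max(stated_prior, p_own_judgment_broken)
    return effective_prior * benefit > cost

# Magic "can't" be real (say 1e-20), but maybe one-in-a-million I'm confused;
# confirming real magic would be worth a lot, and calling for an owl costs
# almost nothing - so the test is worth running.
assert worth_testing(1e-20, 1e-6, benefit=1e9, cost=0.01)

# With a trivial payoff, even a cheap test isn't worth the bother.
assert not worth_testing(1e-20, 1e-6, benefit=10, cost=0.01)
```

This is only a sketch of the intuition; the later comments in the thread debate exactly whether the meta-uncertainty floor is doing legitimate work here.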

This (injunction?) is equivalent to ascribing much higher probability to the hypothesis (magic) than it deserves. It might be a good injunction, but we should realize that at the same time, it asserts the inability of people to correctly judge the impossibility of such hypotheses. That is, this rule suggests that the probability of a hypothesis that managed to make it into your conscious thought isn't (shouldn't be believed to be) 10^-[gazillion], even if you believe it is 10^-[gazillion].
I guess it depends a bit on how you came to consider the proposition to be tested, but I'm not sure how to formalize it. I wouldn't waste a moment's attention in general on some random person proposing anything like this. But if someone like my mother or father, or a few of my close friends, suddenly came with a story like this (which, mark you, is quite different from the usual silliness), I would spend a couple of minutes doing a test before calling a psychiatrist. (Though I'd check the calendar first, in case it's April 1st.)

Especially if I were about that age. I was nowhere near as bright and well-read as rationalist!Harry at that age (nor am I now). I read a lot, though, and I had a pretty clear idea of the distinction between fact and fiction, but I remember I just didn't have enough practical experience to classify new things as likely true or false at a glance. I remember at one time (between 8 and 11 years old) I was pondering the feasibility of traveling to Florida (I grew up in Eastern Europe) to check whether Jules Verne's "From the Earth to the Moon" was real or not, by asking the locals and looking for remains of the big gun. It wasn't an easy test, so I concluded it wasn't worth it. However, I also remember I did check if I had psychic powers by trying to guess cards and the like; that took less than two minutes.
The probability that you have no grasp on the situation is high enough to justify an easy, simple, harmless test. And I'd appreciate it if spoilers for the story were ROT13'd or something - I haven't read it.
You mean the plot point that Harry Potter tested the Magic hypothesis? I don't think most plot points in the introductions of stories really count as spoilers.
Yeah, that's not a spoiler any more than "Obi-Wan Kenobi is a Jedi" is a spoiler.

A "Jedi"? Obi-Wan Kenobi?

I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.

Ah, of course. That's fine, then. Although you might want to let EY know that someone posted unobfuscated spoilers for ... Chapter 10, was it? - in violation of community standards. ;)
I agree, though I think the particular test chosen in the story didn't make much sense - even if magic was real I wouldn't have expected that to have any effect.
The most astonishing thing about spoilers, I find, is that they are often provided to you with exactly as much enthusiasm after you announce that you haven't seen the story as before.
This isn't surprising at all. People who give out spoilers when discussing a work generally don't care that you don't like to hear spoilers before you've experienced a work.
Considering you've read the rest of the posts in this thread, that's not a spoiler, just my opinion about what you've already been discussing.
I haven't.
Well, it was a bit silly to comment on it without context then. At any rate no major/obvious spoilers have been posted here.
That's a relief.
It was strongly implied that some element of Harry's mind had skewed that prior dramatically. Perhaps his horcrux, perhaps infant memories, but either way it wasn't as you'd expect. Even for an eleven-year-old.
He didn't bite the bullet, didn't truly disbelieve his corrupted hardware. This is a problem that has to be solved by introspection, better theory of decision-making. It's not enough to patch it by observation in each particular case, letting the reality compute a correct conclusion above your defunct epistemology, even when you have all the data you might possibly need to infer the conclusion yourself.
Why not? I mean, granted, there might be occasions when you need the ability to disbelieve your hardware, but I'm having trouble thinking of any. It's unlikely enough that you'll go crazy; it's still more unlikely that you'll go crazy in such a way that your future depends on immediately and decisively noticing that you're mad. If you enjoy running tests and have the resources for it, why not indulge?
I'm talking about not interpreting intuitive "feel" for a belief as literally representing consciously endorsed level of belief. It's perfectly normal for your emotion to be at odds with your beliefs (see scope insensitivity [] for example). This kind of madness doesn't imply being less functional than average. We are all mad here. If you feel that "magic might be real", but at the same time believe that "magic can't be real, no chance", you shouldn't take the side of the feeling. The feeling might constitute new evidence for your beliefs to take into account, but the final judgment has to involve conscious interpretation, you shouldn't act on emotion directly. And sometimes, this means acting against emotion (intuitive expectation). In this case in particular, intuition is weak evidence, so it doesn't overpower a belief that magic isn't real, even if it's strong intuition.
Do you realize how many catgirls were killed because of you today?
One of the goals was to get his parents to stop fighting over whether or not magic was real.
How would it work? As the expected outcome is that no magic is real, we'd need to convince the believer (the mother) to disbelieve. An experiment is usually an ineffective means to that end. Rather, we'd need to mend her epistemology.
Well, Harry did spend some time making sure that this experiment would convince either of his parents if it went the appropriate way, though he had his misgivings. As a child who isn't respected by his parents, what better options does he have to stop the fight? (serious question)
Having no good options doesn't make the remaining options any good. This is a serious problem, for example, when people try to explain apparent miracles they experience: they find the best explanation they are able to come up with, and decide to believe that explanation, even if it has no right to any plausibility, apart from the fact it happened to be the only one available.
So you think that the best response is to do nothing about the fight. Perhaps, but setting up the experiment didn't take that much effort. What was Harry's opportunity cost here? Is it that high?
It's not completely out of the question that it was a fine rhetorical effort (though it's not particularly plausible), but it's still not concerned with finding out the truth, which was presented as the goal.
There seemed to be two goals to me - finding the truth and stopping the fight. I'll have to reread that section later.
A valid point.
You have not taken into account that testing magical hypotheses may be categorized as "play" and pay its rent on time and effort accordingly.
Then this activity shouldn't be rationalized as being the right decision specifically for the reasons associated with the topic of rationality. For example, the father dismissing the suggestion to test the hypothesis is correct, given that the mere activity of testing it doesn't present him with valuable experience. You've just taken the conclusion presented in the story, and wrote above it a clever explanation that contradicts the spirit of the story.
Wow, I wish I saw this sooner. And there are already 99 pages of reviews! ETA: Wow, now there's 100...

Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.

Eliezer Yudkowsky:
Clearly, your math teacher biting powers are called for.
In first grade, I threw a crayon at the principal. Can I help? ;)
Let's not get too hasty. They still might know logarithms. ;)
It would have been even more frustrating had the protagonist not also been guessing the teacher's password. It seemed that the protagonist just had a better memory of what more authoritative teachers had said. The protagonist was closer to being able to derive π himself, but that played no part in his argument.
The protagonist knew that pi is defined as the ratio of a circle's circumference and diameter, and the numbers that people have memorized came from calculating that ratio. The protagonist knew that pi is irrational, that irrational means it cannot be expressed as a ratio of integers, and that 7 and 22 are integers, and that therefore pi cannot be exactly expressed as 22/7. The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.
These are important pieces of knowledge, and they are why I said that the protagonist was closer to being able to derive π himself. The result only came out wrong relative to his own memorized teacher-password. Except for his memory of what the first five digits of π really were, he gave no argument that they weren't the same as the first five digits of 22/7.
Y'know, there's something this blogger I read once wrote [] that seems kinda applicable here:
I did not criticize the protagonist. He acted entirely appropriately in his situation. Trying to derive digits of π (by using Archimedes's method, say) would not have been an effective way to convince his teammates under those circumstances. In some cases, such as a timed exam, going with an accurately-memorized teacher-password is the best thing to do. [ETA: Furthermore, his and our frustration at his teammates was justified.] But the fact remains that the story was one of conflicting teacher-passwords, not of deep knowledge vs. a teacher-password. Although the protagonist possessed deeper knowledge, and although he might have been able to reconstruct Archimedes's method, he did not in fact use his deeper knowledge in the argument to make 3.1415 more probable than the first five digits of 22/7. Again, I'm not saying that he should have had to do that. But it would have made for a better anti-teacher-password story.
I see what you mean. I think the confusion we've had on this thread is over the loaded term "teacher's password" - yes, the question only asked for the password, but it would be less misleading [] to say that both the narrator and the schoolteachers had memorized the results, but the narrator did a better job of comprehending the reference material.
Quite depressing. Makes me even less likely to have my kids educated in the states. I wonder how bad Europe is on that count? Is it really better here? It can be hard to tell from inside; correcting for the fact that most info I get is biased one way or the other leaves me with pretty wide confidence intervals.
22/7 gives "something like" something like 3.1427?!? Surely it is more like some other things than that!
Well, yes - it's more like 3.142857 recurring. But that's fairly minor. (Footnote: I originally thought the teachers had performed the division incorrectly, rather than the anonymous commenter incorrectly recounting the number, so this comment was briefly incorrect.)
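For anyone who wants to check the disputed digits directly, Python serves as a calculator here:

```python
approx = 22 / 7                     # 3.142857...
print(f"{approx:.6f}")              # prints 3.142857, not 3.141592
# 22/7 matches pi only to two decimal places (3.14...), not five:
print(round(approx, 4) == 3.1415)   # prints False
```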
[-][anonymous]12y 20

After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.

Background: Weekdays I typically sleep for ~6 hours, with two .5 hour naps in the middle of the day (once at lunch and once when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.

I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.

The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lay there for an hour or more), but now I'm almost always out cold within 20 minutes.

I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.

Getting up in the morning is not noticeably easier.

No evidence that it's habit forming. I'm currently not taking it on weekends (I found mys... (read more)

I took it for at least 8 weeks, primarily on weekdays. I found after a while that I was waking up at 4am, sometimes unable to get back to sleep. I had some night sweats too. May not be a normal response, but I found that if I take it in moderation it does not have these effects.
I wonder if you need to get back to sleep after waking up at 4 AM.
So you are still using it after those 8 weeks?
The easily available product for me is a blend of 3mg melatonin/25mg theanine []. 25mg is a heavy tea-drinker's dose, and I see no reason to consume theanine at all (even dividing the pills in half), so I haven't bought any. Does anyone have some evidence recommending for/against taking theanine? In my view, the health benefits of tea drinking are negligible, and theanine is just one of many compounds in tea.
Theanine [] may be "one of many compounds found in tea" but, on the recommendation of an acquaintance, I tried taking theanine itself as an experiment once (from memory, maybe 100mg?). First I read up on it a little, and it sounded reasonably safe and possibly beneficial, and I drank green tea anyway, so it seemed "cautiously acceptable" to see what it was like in isolation. Basically I was wondering if it helped me relax, focus, and/or learn better. The result was a very dramatic manic high that left me incapable of intellectually directed mental focus (as opposed to focus on whatever crazy thing popped into my head and flittered away 30 minutes later) for something like 35 hours. Also, I couldn't sleep during this period.

In retrospect I found it to be somewhat scary, and it re-confirmed my general impression of the bulk of "natural" supplements. Specifically, it confirmed my working theory that the lack of study and regulation of supplements leads to a market full of many options that range from worthless placebo to dangerously dramatic, with tragically few things in the happy middle ground of safe efficacy.

Melatonin is one of the few supplements that I don't put in this category; however, in that case I use less than "the standard" 3mg dose. When I notice my sleep cycle drifting unacceptably, I will spend a night or two taking 1.5mg of melatonin (using a pill cutter [] to chop 3mg pills in half) to help me fall asleep, and then go back to autopilot. The basis for this regime is that my mother worked in a hospital setting, and 1.5mg was what various doctors recommended/authorized for patients to help them sleep.

There was a melatonin fad in the late 1990's(?) where older people were taking melatonin as a "youth pill" because endogenous production declines with age. I know of no good studies supporting that use, but around that time was when the results about sleep came out, showing m
That reaction sounds rare. Do you think 20 cups of tea would have triggered a similar reaction in you? There is a huge variation based on dosage for all things you can ingest: food, drug, supplement, and "other". Check out the horrors of eating a whole bottle of nutmeg. []
Who knows? I doubt she'll ever find out. 20 cups of tea is a lot. 10 or 15 cups will send you to the bathroom every half hour, assuming your appetite doesn't decline so much that you can't bring yourself to drink any more.
From memory it is a 'mostly harmless' way to reduce anxiety and promote relaxation. This is a relatively rare result, given that things with an anxiolytic effect often produce dependence. It works mostly by increasing GABA in the brain, with a bit of a boost to dopamine too. Some people find it also helps them focus. See also sulbutiamine, a synthetic analogue. It is used to promote endurance, particularly against the kind of residual lethargy that sometimes hangs around after depression. It also provides a stimulant effect while being relaxing, or at least not as agitating as stimulants tend to be.
I've been trying it as well for ~2 months (with some gaps). Normally I have trouble falling asleep, but have no problem staying asleep, so the main reason I take melatonin is to help fall asleep. Currently, I take two 5mg pills. Taking one doesn't have a very noticeable effect on my ability to fall asleep, but two seems to do the trick. However, I have to be sure that I give myself 7-8 hours for sleep; otherwise getting up is more difficult and I may be very groggy the next day. This can be problematic because sometimes I just have to stay up slightly later doing homework, and since I can't take the melatonin then, I end up barely getting any sleep at all. I haven't noticed any habit-forming effects, though some slight effects might be welcome if they helped me to remember to take the supplement every night ;) edit: it's actually two 3mg pills, not 5mg. I googled the brand Walmart carries, since that's where I bought mine from, and it said 5mg on the bottle. Now that I'm home, I see that my bottle is actually 3mg.
I also tried it out after reading that LW post. At first it was fantastic at getting me to fall asleep within 30 minutes (I'm a good sleeper, it would only take me 30 minutes because I would be going to sleep not tired in order to wake up earlier) and I would wake up feeling alert. Now unfortunately I wake up feeling the same and basically have stopped noticing its effects. The only time I take it is when I want to go to sleep and I'm not tired. Also: During the initial 1-2 week period of effectiveness, I had intense and vivid and stressful dreams (or maybe I simply remembered my normal dreams better).
Thanks. It would be really helpful if people talking about their experiences would describe the entirety of their psychostimulant usage since how they interact and whether or not other drugs can be replaced are important things to know about Melatonin.
I am not taking any other drugs or medication. The only thing that would qualify as a stimulant is caffeine - I have a coffee in the morning and a soda at lunch.

I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.

First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.

ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.

Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?

Because we don't care about the probability of being a particular individual, we care about the probability of being in a certain class (namely the class of people born late enough in history, which is characterized exactly by one minus "the probability that I was human number N, where N is some number from 1 to X").
But if you turn it around, and say "where N is some number from X to the total number of humans ever born", you get different results. And if you say "where N is some number within one decile of X, out of all humans ever born", you also get different results.
This is a different class, so yes, you get a different probability for belonging to it. But you likewise get a different probability that you'll see a doomsday conditioning on belonging to that class. Consider class A, the last 10% of all people to live, and class B, the last 20%. Clearly there's a greater chance I belong to class B. But class B has a lower expectation for observing doomsday. There's a lower chance of being in a class with a higher chance of seeing doomsday, and a higher chance of being in a class with a lower chance of seeing doomsday. What's wrong with this? I don't see any problem with the freedom of choice for our class.
Both your examples still take the range from your current position to the end of all humans. What I said was that you get different results if you take one decile from your position, not all the way to the end. There's no reason to do one rather than the other.
P(Observing doomsday) = P(Being in some class of people) * P(Observing doomsday | you belong to that class)

You get a different probability for belonging to those classes, but the conditional probabilities of observing doomsday given that you belong to those classes are different. I'm not convinced that these differences don't balance out when you multiply the two probabilities together. Can you show me a calculation where you actually get two different values for your likelihood of seeing doomsday?
Maybe I'm misreading this, but it looks like you're missing a term... You said:

P(O) = P(B) * P(O|B)

Bayes's theorem:

P(O) * P(B|O) = P(B) * P(O|B)

ne?
I agree that Jordan's equation needs to be adjusted (corrected), but I humbly suggest that in this context, it is better to adjust it to the product rule: P(O and B) = P(B) * P(O|B). ADDED. Yeah, minor point.
Yes, correct. I missed that. For the standard Doomsday Argument P(B|O) is probably 1, so it can be excluded, but for alternative classes of people this isn't so.
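The balancing claim above can be checked in a toy model. The numbers here are illustrative assumptions: a fixed total of N births, with "observing doomsday" meaning being among the last d people born.

```python
N, d = 100, 5   # toy totals: 100 humans ever; the last 5 "observe doomsday"

for k in (10, 20, 50):              # class = the last k% of all births
    size = N * k // 100
    p_class = size / N              # P(belonging to the class)
    p_doom_given_class = d / size   # P(observing doomsday | in the class)
    # The joint probability P(doom and class) is the same for every class:
    print(k, p_class * p_doom_given_class)   # always d/N = 0.05
```

A bigger class is more probable but dilutes the conditional probability by exactly the same factor, so the product is class-independent.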
The real problem with anthropic reasoning is that it's just a default starting point. We are tricked because it seems very powerful in contrived thought experiments in which no other evidence is available. In the real world, in which there is a wealth of evidence available, it's just a reality check saying "most things don't last forever." In real world situations, it's also very easy to get into a game of reference class tennis [] .
I read the linked-to comment, but still don't know what reference class tennis is.

Some fantastic singularity-related jokes here:

Voted up for having jokes with cautionary power, and not just amusement value.

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars — the leaders — resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars — the subordinates — showed the usual signs of stress and slower reaction times. "Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal," Carney explains.

A couple of articles on the benefits of believing in free will:

Vohs and Schooler, "The Value of Believing in Free Will"

Baumeister et al., "Prosocial Benefits of Feeling Free"

The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than the determinism statements.

References from a Sci. Am. article.

[1] Cough.

ETA: This is also relevant.

Cool. Since a handful of studies suggest a narrow majority believe moral responsibility and determinism to be incompatible this shouldn't actually be that surprising. I want to know how people act after being exposed to statements in favor of compatibilism.

I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without it showing up in recent posts, so that I wouldn't have to post a draft elsewhere to get feedback before officially publishing it.

Consider "syntactic preference" as an order on an agent's strategies (externally observable possible behaviors, in the mathematical sense, independent of what we can actually arrange to observe), where the agent is software running on an ordinary computer. This is "ontological boxing", a way of abstracting away any unknown physics. Then, this syntactic order can be given an interpretation, as in logic/model theory, for example by placing the "agent program" in an environment of all possible "world programs", and restating the order on possible agent strategies in terms of possible outcomes for the world programs (as an order on sets of outcomes for all world programs), depending on the agent. This way, we first factor out the real world from the problem, leaving only the syntactic backbone of preference, and then reintroduce a controllable version of the world, in the form of any convenient mathematical structure, an interpretation of syntactic preference. The question of whether the model world is "actually the real world", and whether it reflects all possible features of the real world, is sidestepped.
Thanks (and upvoted) for this explanation of your current approach. I think it's definitely worth exploring, but I currently see at least two major problems. The first is that my preferences seem to have a logical dependency on the ultimate nature of reality. For example, I currently think reality is just "all possible mathematical structures", but I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly. What would happen if you tried to use your idea to extract my preferences before I resolve that question? The second is that I don't see how you plan to differentiate within "syntactic preference", those that are true preferences, and those that are caused by computational limitations and/or hardware/software errors. Internally, the agent is computing the optimal strategy (as best as it can) from a preference that's stated in terms of "the real world" and maybe also in terms of subjective anticipation. If we could somehow translate those preferences directly into preferences on mathematical structures, we would be able to bypass those computational limitations and errors without having to single them out.
An important principle of FAI design to remember here is "be lazy!". For any problem that people would want to solve, where possible, FAI design should redirect that problem to the FAI, instead of actually solving it in order to construct a FAI. Here, you, as a human, may be interested in the "nature of reality", but this is not a problem to be solved before the construction of FAI. Instead, the FAI should pursue this problem in the same sense you would. Syntactic preference is meant to capture this sameness of pursuits, without understanding of what these pursuits are about. Instead of wanting to do the same thing with the world as you would, the FAI having the same syntactic preference wants to perform the same actions as you would. The difference is that syntactic preference refers to actions (I/O), not to the world. But the outcome is exactly the same, if you manage to represent your preference in terms of your I/O.

You may still know the process of discovery that you want to follow while doing what you call getting to know your own preference. That process of discovery gives a definition of preference. We don't need to actually compute preference in some predefined format to solve the conceptual problem of defining preference. We only need to define a process that determines preference.

This issue is actually the last conceptual milestone I've reached on this problem, just a few days ago. The trouble is in how the agent would reason about the possibility of corruption of its own hardware. The answer is that human preference is to a large extent concerned with consequentialist reasoning about the world, so human preference can be interpreted as modeling the environment, including the agent's hardware. This is an informal statement, referring to the real world, but the behavior supporting this statement is also determined by formal syntactic preference that doesn't refer to the real world. Thus, just mathematically implementing human preference is eno
Once we implement this kind of FAI, how will we be better off than we are today? It seems like the FAI will have just built exact simulations of us inside itself (who, in order to work out their preferences, will build another FAI, and so on). I'm probably missing something important in your ideas, but it currently seems a lot like passing the recursive buck. ETA: I'll keep trying to figure out what piece of the puzzle I might be missing. In the mean time, feel free to take the option of writing up your ideas systematically as a post instead of continuing this discussion (which doesn't seem to be followed by many people anyway).
FAI doesn't do what you do; it optimizes its strategy according to preference. It's more able than a human to form better strategies according to a given preference, and even failing that [] it still has to be able to avoid value drift [] (as a minimum requirement). Preference is never seen completely; there is always a load of logical uncertainty about it. The point of creating a FAI is in fixing the preference so that it stops drifting, so that the problem being solved is held fixed, even though solving it will take the rest of eternity; and in creating a competitive preference-optimizing agent that ensures the preference fares OK against possible threats, including different-preference agents or a value-drifted humanity. Preference isn't defined by an agent's strategy, so copying a human without some kind of self-reflection I don't understand is pretty pointless. Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference. FAI is not built without an exact and complete definition of preference. The uncertainty about preference can only be logical, in what it means/implies. (At least, when we are talking about syntactic preference, where the rest of the world is necessarily screened off.)
Reading your previous post in this thread, I felt like I was missing something and I could have asked the question Wei Dai asked ("Once we implement this kind of FAI, how will we be better off than we are today?"). You did not explicitly describe a way of extracting preference from a human, but phrases like "if you manage to represent your preference in terms of your I/O" made it seem like capturing strategy was what you had in mind. I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware). You have not said anything about what kind of static analysis would take you from an agent's s̶t̶r̶a̶t̶e̶g̶y̶ program to an agent's preference.
After reading Nesov's latest [] posts [] on the subject, I think I better understand what he is talking about now. But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into. Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?
What other approaches do you refer to? This is just the direction my own research has taken. I'm not confident it will lead anywhere, but it's the best road I know about. I have some ideas, though too vague to usefully share (I wrote about a related idea on the SIAI decision theory list, replying to Drescher's bounded Newcomb variant, where a dependence on strategy is restored from a constant syntactic expression in terms of source code).

For "semantic preference", we have the ontology problem, which is a complete show-stopper. (Though as I wrote before, interpretations of syntactic preference in terms of formal "possible worlds" -- now having nothing to do with the "real world" -- are a useful tool, and it's the topic of the next blog post.) At this point, syntactic preference (1) solves the ontology problem, (2) gives focus to investigation of what kind of mathematical structure could represent preference (strategy is a well-understood mathematical structure, and syntactic preference is something allowing to compute a strategy, with better strategies resulting from more computation), and (3) gives a more technical formulation [] of the preference extraction problem, so that we can think about it more clearly. I don't know of another effort towards clarifying/developing preference theory (that reaches even this meager level of clarity).

Returning to this point, there are two show-stopping problems: first, as I pointed out above, there is the ontology problem: even if humans were able to write out their preference, the ontology problem makes the product of such an effort rather useless; second, we do know that we can't write out our preference manually. Figuring out an algorithmic trick for extracting it from human minds automatically is not out of the question, hence worth pursuing.

P.S. These are important questions, and I welcome this kind of discussion about general sanity of what I'm doin
Why do you consider the ontology problem to be a complete show-stopper? It seems to me there are at least two other approaches to it that we can take:

1. We human beings seem to manage to translate our preferences from one ontology to another when necessary, so try to figure out how we do that, and program it into the FAI.
2. Work out what the true, correct ontology is, then translate our preferences into that ontology. It seems that we already have a good candidate for this in the form of "all mathematical structures". Formalizing that notion seems really hard, but why should it be impossible?

You claim that syntactic preference solves the ontology problem, but I have even fewer ideas about how to extract the syntactic preference of arbitrary programs. You mention that you do have some vague ideas, so I guess I'll just have to be patient and let you work them out.

How do we know that? It's not clear to me that there is any more evidence for "we can't write out our preferences manually" than for "we can't build an artificial general intelligence manually".

I had a hunch that might be the case. :)
By "show-stopper" I simply mean that we absolutely have to solve it in some way. Syntactic preference is one way; what you suggest could conceivably be another. An advantage I see with syntactic preference is that it's at least more or less clear what we are working with: formal programs and strategies. This opens the whole palette of possible approaches to the remaining problems to try on. With the "all mathematical structures" thing, we still don't know what we are supposed to talk about; there is as of now no way forward already at that step. At least syntactic preference allows us to make one step further, to firmer ground, even though admittedly it's unclear what to do next. I mean the "complexity of value"/"value is fragile" thesis. It seems to me quite convincing, and from the opposite direction, I have the "preference is detailed" conjecture resulting from the nature of preference in general. For "is it possible to build AI", we don't have similarly convincing arguments (and really, it's an unrelated claim that only contributes a connotation of error in judgment, without giving an analogy in the method of arriving at that judgment).
I agree with "complexity of value" in the sense that human preference, as a mathematical object, has high information content. But I don't see a convincing argument from this premise to the conclusion that the best course of action for us to take, in the sense of maximizing our values under the constraints that we're likely to face, involves automated extraction of preferences, instead of writing them down manually. Consider the counter-example of someone who has the full complexity of human values, but would be willing to give up all of their other goals to fill the universe with orgasmium, if that choice were available. Such an agent could "win" by building a superintelligence with just that one value. How do we know, at this point, that our values are not like that?
Whatever the case is with how acceptable the simplified values are, automated extraction of preference seems to be the only way to actually knowably win, rather than striking a compromise, which simplified preference is suggested to be. We must decide from information we have; how would you come to know that a particular simplified preference definition is any good? I don't see a way forward without having a more precise moral machine than a human first (but then, we won't need to consider simplified preference).
Correct. Note that "strategy" is a pretty standard term [], while "I/O map" sounds ambiguous, though it emphasizes that everything except the behavior at I/O is disregarded. An agent is more than its strategy: strategy is only external behavior, normal form of the algorithm implemented in the agent. The same strategy can be implemented by many different programs. I strongly suspect that it takes more than a strategy to define preference, that introspective properties are important (how the behavior is computed, as opposed to just what the resulting behavior is). It is sufficient for preference, when it is defined, to talk about strategies, and disregard how they could be computed; but to define (extract) a preference, a single strategy may be insufficient, it may be necessary to look at how the reference agent (e.g. a human) works on the inside. Besides, the agent is never given as its strategy, it is given as its source code that normalizes to that strategy, and computing the strategy may be tough (and pointless).
You can do better than the frequentist approach without using the "magic" universal prior. You can just use a prior that represents initial ignorance of the frequency at which the machine produces head-biased and tail-biased coins (dP(f) = 1·df, the uniform density). If you want to look for repeating patterns, you can assign probability (1/2)·(1/2^n) to the theory that the machine produces each type of coin at a frequency depending on the last n coins it produced. This requires treating a probability as a strength of belief, and not the frequency of anything, which is what (as I understand it) frequentists are not willing to do. Note the universal prior, if you can pull it off, is still better than what I described. The repeating-pattern-seeking prior will not notice, for example, if the machine makes head-biased coins on prime-numbered trials but tail-biased coins on composite-numbered trials. This is because it implicitly assigns probability 0 to that type of machine, which takes infinite evidence to update.
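The first prior mentioned above — total ignorance of the frequency, dP(f) = 1·df — updates in closed form: after h heads in n flips the posterior is Beta(h+1, n-h+1), whose mean is Laplace's rule of succession. A minimal sketch (the function name is mine, for illustration):

```python
from fractions import Fraction

def posterior_mean(heads, flips):
    """Posterior mean of the head-frequency f under the uniform prior
    dP(f) = df, after observing `heads` in `flips` coins: (h+1)/(n+2)."""
    return Fraction(heads + 1, flips + 2)   # Laplace's rule of succession

print(posterior_mean(0, 0))    # 1/2: total initial ignorance
print(posterior_mean(7, 10))   # 2/3: modest evidence of head bias
```

Note that the estimate is a strength of belief about the next coin, not the observed frequency 7/10 — exactly the interpretive move the comment says frequentists won't make.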
I second this feature request. ETA: I did not notice earlier Steve Rayhawk made the same comment [].
Seconded []. See also JenniferRM on editorial-level versus object-level comments [].
Agreed. I'll be investigating what it would take to implement that. (Edit: interesting; draft folders are apparently private sub-reddits created when a user registers and admin'ed by that user.)

The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:

5th View cafe on top of Waterstone's bookstore near Piccadilly Circus Sunday, April 4 at 4PM

Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.

EDIT: Sorry, Sunday, not Monday.

6Paul Crowley12y
Found this entirely by chance - do a top level post?
2Eliezer Yudkowsky12y
Do a top-level post.
1Paul Crowley12y
Done [] . I hesitated as I wasn't in any sense the organiser of this event, just someone who had heard about it, but better me than no-one!
Hmm, that's also Easter Sunday, so I have commitments with family. I would love to meet you in person, Yvain, but it looks like I won't make this.
I'll try to come.
I hope to get to this, as I'll be not too far away [] this weekend.
I think I can't afford to come...

I recently found something that may be of interest to LW readers:

This post at the Lifeboat Foundation blog announces two tools for testing your "Risk Intelligence":

The Risk Intelligence Game, which consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. Then it calculates your risk intelligence quotient (RQ) on the basis of your estimates.

The Prediction Game, which provides you with a bunch of statements, and your task is to say how likely... (read more)

An annoying thing about the RQ test (rot13'd): Jura V gbbx gur ED grfg gurer jnf n flfgrzngvp ovnf gbjneqf jung jbhyq pbzzbayl or pnyyrq vagrerfgvat snpgf orvat zber cebonoyr naq zhaqnar/obevat snpgf orvat yrff cebonoyr. fgrira0461 nyfb abgvprq guvf. Guvf jnf nobhg 1 zbagu ntb. ebg13'q fb nf abg gb shegure ovnf crbcyrf' erfhygf.
I did not check the test in detail, but I somehow question the validity [] of the test: as presented in their summary, wouldn't just total risk aversion give you a perfect score? 50% on everything, except for the 0 and 100 entries (where 0 is something like "hey, I do play an instrument, and I know this is total crap, except if I were hallucinating right now, in which case..."). It seems like a test which is too easy to game.
I remember seeing an LW post about why it's cheating to always guess 50%, but I haven't found the link to that post yet... I think the basic idea was that you could technically be perfectly calibrated by always guessing 50%, but that's like always claiming that you don't know anything at all. It also means that you're never updating your probabilities. It also makes you easily exploitable, since you'll always assume that your probability of winning any gamble is 50%. Oh, and then there are the times when you'll give different probabilities for the same event, if the question is worded in different ways.
Your probability of winning any two-sided bet is 50%, as long as you pick which side of the bet you take at random. A "rational ignoramus" who always had minimum confidence wouldn't accept any arrangement where the opponent got to pick which side of the bet to take.
Please note that I explicitly referred to the test, not to reality.
That implies a very easy Dutch book:

1. Create a lottery with three possible outcomes (a), (b), and (c) - for example, (a) 1, (b) 2, 3, or 4, and (c) 5 or 6 on a six-sided die. (Note that the probabilities are not equal - I have no need of that stipulation.)
2. Ask "what are the odds that (a) will happen?" In response to the proposed even odds, bet against (a).
3. Ask "what are the odds that (b) will happen?" In response to the proposed even odds, bet against (b).
4. Ask "what are the odds that (c) will happen?" In response to the proposed even odds, bet against (c).
5. Collect on two bets out of three, regardless of outcome.
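The arithmetic above can be checked mechanically (a minimal sketch, assuming 1-unit even-odds stakes as described):

```python
# Six-sided die, outcomes grouped as (a) {1}, (b) {2,3,4}, (c) {5,6}.
# The "rational ignoramus" quotes 50% for each group, so accepts an
# even-odds 1-unit bet on each.  The bookie bets AGAINST all three.
events = {"a": {1}, "b": {2, 3, 4}, "c": {5, 6}}

def bookie_profit(roll):
    """Bookie wins 1 unit per event that fails, loses 1 per event that occurs."""
    return sum(1 if roll not in faces else -1 for faces in events.values())

# Exactly one group contains any given roll, so the bookie wins two
# bets and loses one: a guaranteed profit of 1 unit on every roll.
profits = {roll: bookie_profit(roll) for roll in range(1, 7)}
print(profits)  # +1 for every face of the die
```

The stipulation about unequal probabilities doesn't matter, exactly as the comment says: any partition into three mutually exclusive, exhaustive events works, since the ignoramus prices each at even odds.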

Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.

If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.

The only way I know to track karma changes is having an old tab with my Recent Comments visible and comparing it to the new one. That captures a lot of the change - >90% - but not the old threads. I would love to know how hard it would be to have a "Recent Karma Changes" feed.

US Government admits that multiple-time convicted felon Pfizer is too big to fail.

Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?

The causes of "too big to fail" are:

1. Corporate personhood laws make it harder to punish the actual people in charge.
2. Problems in tort law (in the US) make it difficult to sue corporations for certain kinds of damages.
3. A large government (territorial monopoly of jurisdiction) makes it more profitable for any sufficiently large company to use the state as a bludgeon against its competitors (lobbying, bribes, friends in high places) instead of competing directly on the market.
4. Letting companies that waste resources go bankrupt causes short-term damage to the economy, but it is healthy in the long term because it allows more efficient companies to take over the tied-up talent and resources. Politicians care more about the short term than the long term.
5. For pharmaceutical companies there is an additional embiggening factor: testing for FDA drug approval costs millions of dollars, which constitutes a huge barrier to entry for smaller companies. Hence the large companies can grow larger with little competition. This is amplified by 1 and 2, and 3 suggests that most of the competition among Big Pharma is over legislators and regulators, not market competition.

Disclosure: I am a "common law" libertarian (I find all monopolies counterproductive, including state governments).
I'd add trauma from the Great Depression (amplified by the Great Recession) which means that any loss of jobs sounds very bad, and (not related to the topic but a corollary) anything which creates jobs can be made to sound good.

Is there any evidence that Bruce Bueno de Mesquita is anything other than a total fraud?

  • Most "experts" making "predictions" are total frauds, so priors are against him
  • He never published actual formulas or anything like that, so independent verification is impossible
  • In spite of all his claims to verifiability, he never made a big list of easily verifiable predictions in advance
  • Failure never seemed to bother most such "prediction experts". For comparison, which might or might not be relevant - Kurzweil made something like
... (read more)
Well, his TED talk does make a number of specific testable predictions. They were registered in, but that's down.
Here they are []. These are 5 predictions, all basically saying "Iran will not make a nuclear test by 2011" as far as their predictive content is concerned - much like predicting that "we will not use flying cars by 2011".
I don't think they're that vague and obvious.
* No nukes was something of a surprise to many people when that NIE came out.
* The loss-of-Ahmadinejad-power prediction is nontrivial. I, and most others, I think, would have predicted an increase.
* The no-one-endorsing-nukes 2011 prediction is also significant, if heavily correlated with Ahmadinejad losing some power.
He predicts "Ahmadinejad will lose influence and the mullahs will become slightly more influential", not loss of office - which is not testable. All Iranian officials have claimed endlessly that their program is "civilian only" etc. - it would be a huge surprise if they made a sudden reversal. Anyone who expected Iran to have nukes by now has a serious prediction problem. The only people "expecting" that were the same ones who expected Saddam to have nukes.
Paul Crowley, 12y:
That review is a very worthwhile read - thanks for linking to it!
I've heard claims that his "general model of international conflict" has been independently tested by the CIA and some other organization to 90% accuracy, but haven't seen any details of any of these tests.
Oh, he gives plenty of such claims, but not a single one of them is independently verifiable. You cannot access such reports. This increases my estimate that he's a fraud, relative to not giving such claims in the first place.
At the Amazon link you provide, BBdM gives the full citation for the CIA report [], among others: Stanley Feder, "Factions and Policon: New Ways to Analyze Politics," in H. Bradford Westerfield, ed., Inside CIA's Private World: Declassified Articles from the Agency's Internal Journal, 1955-1992 (New Haven: Yale University Press, 1995). It does not mention BBdM by name, but is about Policon, which I believe is the original name of his company. I have not read the report and don't know if it supports him, but I think it's pretty common for people's lack of interest in such reports to create the illusion that they have been fabricated, so difficulty finding them on the web isn't much evidence.

ETA: the other articles he mentions: a follow-up by Feder (gated []) and an academic review (ungated []).

ETA: I have still not read the report, but I should say that the first page says exactly what he says it says: 90% accuracy, standard CIA methods also 90% accuracy, but his predictions are more precise.
You'd think that if he had some method that at least happened to get lucky once in a while, he'd find a way to say "Hey, look at this success I can show!" or something. Allow me to make a prediction: There will be conflict in the Middle East. ;) (And I'm not exactly going out on a limb here. I don't even have to say when; there's been conflict there for roughly the past four thousand years, and I don't think anything's going to change that for as long as people still live there.)

Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics:

Hacker News rather than Reddit this time, which makes it a little easier.

I've been trying to do this since November for a close family member. So far the reaction has been fairly positive, but she has still not decided to go for it.

A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.

Available without the paywall from the author's home page [] .
It's also an argument in favor of using checklists.

My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I'm still a vegetarian. Clearly I'm on shaky ground, since my beliefs weren't formed from evidence, but purely from nurture.

Interestingly my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish), however my rationalization for not eating meat is that it is the killing of animals that is wrong (generalising from the belief that killing humans is worse tha... (read more)

I hope this isn't a vegatarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.

That's an excellent point, and one I may not have spotted otherwise. Thank you.
Do you want to eat meat? Or do you just want to have a good reason for not wanting to eat meat? It's... y'know... food. I don't have an ethical objection to peppermint but I don't eat it because I don't want to.
If Omega told me that the rest of my life would be more painful than it was pleasant I would still choose to live. I think most others here would choose similarly (except in cases of extreme pain like torture).
Even if my life would be painful on net, there are still projects I want to finish and work I want to do for others that would prevent me from choosing death. Valuing things such as these is no more irrational than valuing your own pleasure. Perhaps our disagreement is over the connection between pain/pleasure and utility. I would prefer a world in which I am in pain but able to complete certain projects to one in which I am in pleasure but unable to complete them. In the economic sense of utility (rank in an ordinal preference function), my utility would be higher in the former world than in the latter (even though the former is more painful).
I think your disagreement is over time preference []. Which path you choose now depends on how much you discount future pain versus present moral guilt or empathy considerations. In other words, you would make that choice now because that would make you feel best now. Of course (you project that) you would make the same choice at time T, for all T occurring between now and the completion of your projects. This is known as having a high time preference. It might seem like a quintessential example of low time preference, because you get a big payoff if you can persist through to completing those projects. However, the initial assumption was that "the rest of my life would be more painful than it was pleasant," so ex hypothesi the payoff cannot possibly be big enough to balance out the pain.
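The time-preference argument can be put in toy numbers (my own illustration with made-up figures, not the commenter's math). With exponential discounting, the present value of the choice is PV = s + Σ dᵗ·uₜ, where s is the immediate satisfaction of making the choice, uₜ the later pleasure/pain stream, and d in (0, 1] the discount factor (small d = high time preference):

```python
def present_value(s, stream, d):
    """Present value of immediate satisfaction s plus a discounted future stream."""
    return s + sum(u * d ** (t + 1) for t, u in enumerate(stream))

# Ex hypothesi the rest of life is net painful: -1 per period for 10
# periods, with a +5 project payoff in the final period, so the
# undiscounted future stream sums to -5 (the payoff can't balance it).
stream = [-1] * 10
stream[-1] += 5

for d in (1.0, 0.5):
    print(d, present_value(s=2.0, stream=stream, d=d))
```

With d = 1.0 (low time preference, future weighted fully) the total is negative, so the painful-but-productive path loses; with d = 0.5 (high time preference) the immediate satisfaction s dominates the discounted future pain and the same choice comes out positive - which is exactly the sense in which making that choice "because it feels best now" reflects high time preference.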
Pleasure and pain have little to do with it. []
Thanks, I read the article, and I think everything in it is actually answered by my post above. For instance: He's confused about time structure here. He doesn't want to take the pill now, because that would have a dreadful effect on his happiness now. Whether we call it pleasure/pain, happiness/unhappiness or something else, there's no escaping it []. Eliezer says his values are not reducible to happiness. Yet how unhappy (or painful) would it be for him right now to watch the happy-all-the-time pill slowly being inched toward his mouth, knowing he'll be made to swallow it? I suspect those would be the worst few moments of his life. It's not that values are not reducible to happiness, it's that happiness has a time structure that our language usually ignores.
What if you sneak up on him while he's sleeping and give him the happy-all-the-time injection before he knows you've done it? Then he wouldn't have that moment of unhappiness.
Yes, and he would never care about it as long as he never entertained the prospect. I don't think there is a definition of "value" that does everything he needs it to while at the same time not referring to happiness/unhappiness or something similar. Charity requires that I continue to await such a definition, but I am skeptical.
Preference satisfaction != happiness. How many times do I have to make this point? Which would you prefer to have happen, without any forewarning: 1) I wirehead [] you and destroy the rest of the world, or 2) I torture you for a while and leave the rest of the world alone? If you don't pick 2, you're an asshole. :P
Indeed, we cannot say categorically that "preference satisfaction = happiness," but my point has been that such a statement is not very elucidating unless it takes time structure into account:

Satisfaction of my current preferences right now = happiness right now (this is tautological, unless we are including dopamine-based "wanting but not liking" in the definition of preferences - I can account for that case if you'd like, but it will make my response a lot longer).

Knowledge that my current preferences will be satisfied later = happiness right now (but yes, this does not necessarily equal happiness later).

ETA: In case it's not clear, you still haven't shown that values are not just another way of looking at happiness/unhappiness. "Preference" is just another word for value - or if not, please define. I didn't answer your other question simply because neither answer reveals anything about either of our positions. However, your implying that someone who chooses option 1 should feel like an asshole underscores my point: if someone chose 2 over 1, it'd be because choosing 1 would be painful (insofar as seeing oneself as an asshole is painful ;-). (By the way, you can't get around this by adding "without warning", because the fact that you can make a choice about what is coming implies you believe it's coming (even if you don't know when); and if you don't believe it's coming, then it's a meaningless hypothetical.)

Disclosure: I didn't wait for Shadow (and I felt like an asshole, although that was afterward).
It isn't tautological. In fact, it's been my experience that this is simply not true. There seem to be times that I prefer to wallow in self-pity rather than feel happiness. Anger also seems to preclude happiness in the moment, but there are also times that I prefer to be angry. I could probably also make you happy and leave many of your preferences unsatisfied by injecting you with heroin. Somehow I think we're just talking past each other...
I've been there too, but I will try to show below that that is just having an (extremely) high time preference. Don't you get some tiny instantaneous satisfaction from choosing at any given moment to continue wallowing in self-pity? I do. It's similar to the guilty pleasure of horking down a tub of Ben & Jerry's, but on an even shorter timescale.

Here are my assumptions:

1. If humans had no foresight, they would simply choose what gave them immediate pleasure or relief from pain. The fact that we have instincts doesn't complicate this, because an instinct is simply an urge, which is just another way of saying, "going against it is more painful than going with it."
2. But we do have foresight, and our minds are (almost) constantly hounding us to consider the effects of present decisions on future expected pleasure. This is of course vital to our continued survival. However, we can push those future-oriented thoughts out of our minds to some extent (some people excel at this). Certain states - anger probably foremost among them - can effectively shut off or decrease the hounding about the future as well.

Probably no one has a time preference of "IMMEDIATELY" all the time, and having a low (long) time preference is usually associated with good mental health and self-actualization. (Note that this "emotional time preference" is relative: perhaps we weight the pleasure experienced in the next second very highly versus the coming year, or perhaps the reverse; or perhaps it's the next hour versus the next few days, etc.)

So what we call values are generally things we are willing to defer to our brain's "future hounding" about. Example: A man chances upon some ice cream, but he is lactose intolerant. Let's say he believes the pain of the forthcoming upset stomach will exceed the pleasure of the eating experience. If his mind is hounding him hard enough about the pain of an upset stomach (which will oc
When do you think suicide would be the rational option?
When doing so causes a sufficiently large benefit for others (ie, 'a suicide mission', as opposed to mere suicide). Or when you have already experienced enough danger (that is, situations likely to have killed you) to overcome your prior and make you conclude that you have quantum immortality with high enough confidence.
Is it meaningful to put a probability on 'killing animals is wrong' and absolute moral statements like that? Feels like trying to put a probability on 'abortion is wrong' or 'gun control is wrong' or '(insert your pet issue here) is wrong/right' or...
No, it's not meaningful to put a prior probability on it, unless you seriously think something like absolute morality exists. Having said that, the prior for "killing animals is wrong" is still higher than the prior for the God of Abraham existing.
Note that Bayesian probability is not absolute, so it's not appropriate to demand absolute morality in order to put probabilities on moral claims. You just need a meaningful (subjective) concept of morality. This holds for any concept one can consider: any statement can be assigned a subjective probability, and morality isn't an exceptional special case.
If morality is a fixed computation [], you can place probabilities on possible outputs of that computation (or more concretely, on possible outputs of an extrapolation of your or humanity's volition).
I find this paper to be a good resource to think about this subject: []
You have to escape underscores by preceding them with backslashes, otherwise they're interpreted as markup for italics.
The underscores need escaping.
See this discussion of my own meat-eating []. My conclusion was that there is not much of a rational basis for deciding one way or the other - my attempts to use rationality broke down.

I think you should go out and get yourself something deliciously meaty, while still being mostly vegetarian. "Fair weather vegetarianism". Unless you don't actually like the taste of meat. That's ok. There's also an issue of convenience. You could begin the slippery slope of drinking chicken broth soup and Thai food with lots of fish sauce.

We exist in an immoral system and there isn't much to do about it. Being a vegetarian for reasons of animal suffering is symbolic. If we truly cared about the holocaust of animal suffering, we would be waging a guerrilla war against factory farms.
In this case, other people seem to have concluded that the value of not eating a piece of an animal is in the long run equal to that much animal not suffering/dying. So I know the difference one person could make and it seems too small to be worth the hassle of not eating meat that other people prepare for me, and not worth the inconvenience of not getting the most delicious item on the menu at restaurants.

Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.

Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I've got concerns that I have anchored at a point too close to favoring the death penalty than I should. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about difference in social tier. Early in my college time, t... (read more)

My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?

* Virtually zero chance of recidivism? True for both.
* Very expensive? Check.
* Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed).
* Could be considered immoral to do something so severe to a person? Check.
* Deprives people of an "inalienable" right? Check (life/liberty).
* Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option).
* Applied disproportionately to certain groups? I think so, though I don't know the research.
* Strong deterrent? It seems like the death penalty should be a bit... (read more)

Good post. I have never seen strong evidence that the death penalty has a meaningful deterrent effect but I'd be curious to see links one way or the other. I lean towards prison abolition, but it's an idealistic notion, not a pragmatic one. I suppose we could start by getting rid of prisons for non-violent crimes and properly funding mental hospitals. [] I can't see that happening when we can't even decriminalize marijuana.
Standard response: politics is the mind-killer [].

Personal response: I'm opposed to the death penalty because it costs more than putting people in prison for life, due to the huge number of appeals they're allowed (I vaguely recall hearing this in newspaper reports). I feel the US has become so risk-averse and egalitarian that it cannot properly implement a death penalty. This is reflected in the back-and-forth questions you ask. I also oppose it on the grounds that it is often used as a tool of vengeance rather than justice. Nitrogen asphyxiation (I think that was the gas they were talking about) is a safe, highly reliable, and euphoric means of death, but the US still prefers electrocution (which can take minutes), injection (which can feel like the veins are burning from the inside out while the body is paralyzed), etc.

That said, I don't care enough about the topic to try to alter its use, whether through voting, polling, letters, etc., nor do I desire to put much thought into it. Best to let hot topics alone. And after asking about Bayes, you should ask for math rather than opinions.
Yeah, my formatting of the last few sentences wasn't very great. Sorry.
There is strong Bayesian evidence that the USA has executed one innocent man. [] By that I mean that an Amanda Knox test []-type analysis would clearly show that Willingham was innocent, probably with greater certainty than in the Amanda Knox case. Does knowing that the USA has indeed provably executed an innocent person change your opinion?

What are the practical advantages of death over life in prison? US law allows for true life without parole. Life in an isolated cell in a Supermax prison is continual torture - it is not a light punishment by any means. Without a single advantage for the death penalty over life in prison without parole, I think that ~100% certainty is needed for execution.

I am against the death penalty for regular murder, mass murder, and aggravated rape. I am indifferent with regard to the death penalty for crimes against humanity, as I recognize that symbolic execution could be appropriate for grave enough crimes.
Kevin, thank you for the specific example. It definitely strengthened my practical objection to the practice. I strongly suspect that the current number of false positives lies outside of my acceptance zone.

Rain, I agree that politics is a mind-killer, but thought it worthy of at least brushing the cobwebs off some cached thoughts. Good point about nitrogen. I wonder why we choose gruesome methods when even CO would be cheap, easy, and effective.

Morendil, I appreciate the other questions. You have a good point that if Omega were brought in on the justice system, it would definitely find better corrective measures than the kill command. I think Eliezer once talked about how predicting your possible future decisions is basically the same as deciding. In that case, I already changed many things on this Big Question, and am just finally doing what I predicted I might do last time I gave any thought to capital punishment. Which happened to be at the conclusion (if there is such a thing) of a murder trial where my friend was a victim. Lots of bias to overcome there, methinks.

Unnamed, interesting points. I hadn't actually considered how similar life imprisonment is to execution with regard to the pertinent facts. I was recently introduced to the concept of restorative justice [], which I think encompasses your article. I find it particularly appealing because it deals with what works, instead of worthless Calvinist ideals like punishment. From my understanding, execution only fulfills punishment in the most trivial of senses.
"Crimes against humanity" is one of the crimes that for most practical purposes means "... and lost".
Yup. Even though they'll never face charges, some of the winners [] are guilty as sin. And I mean that although the Project for the New American Century was on the winning side of the war, their namesake mission has failed horribly.
The more judicious question, I am coming to realize, isn't so much "Which of these two Standard Positions should I stand firmly on?" The more useful question is: why do the positions matter? Why is the discussion currently crystallized around these standard positions important to me, and how should I fluidly allow whatever evidence I can find to move me toward some position, which is rather unlikely (given that the debate has been so long crystallized in this particular way) to be among the standard ones? And I shouldn't necessarily expect to stay at that position forever, once I have admitted in principle that new evidence, or changes in other beliefs of mine, must commit me to a change in position on that particular issue.

In the death-penalty debate I identify more strongly with the "abolitionist" standard position, because I was brought up in an abolitionist country by left-wing parents. That is, I find myself on the opposite end of the spectrum from you. And yet, perhaps we are closer than is apparent at first glance, if we are both of us committed primarily to investigating the questions of values, the questions of fact, and the questions of process that might leave either or both of us, at the end of the inquiry, in a different position than we started from.

* Would I revise my "in principle" opposition to the death penalty if, for instance, the means of "execution" were modified to cryonic preservation? Would I then support cryonic preservation as a "punishment" for lesser crimes such as would currently result in lifetime imprisonment?
* Would I still oppose the death penalty if we had a Truth Machine? Or if we could press Omega into service to give us a negligible probability of wrongful conviction? Or otherwise rely on a (putatively) impartial means of judgment which didn't involve fallible humans? Is that even desirable, if it was at all possible?
* Would I support the death penalty if I found out it was an effec
Yeah, that's why I try to avoid hot topics. Too much work.
Well, even relatively uncontroversial topics have the same entangled-with-your-entire-belief-network quality to them, but (to most people) less power to make you care. The judicious response to that is to exercise some prudence in the things you choose to care about. If you care too much about things you have little power to influence and could easily be wrong about, you end up "mind-killed". If you care too little and about too few things except for basic survival, you end up living the kind of life where it makes little difference how rational you are. The way it's worked out for me is that I've lived through some events which made me feel outraged, and for better or for worse the outrage made me care about some particular topics, and caring about these topics has made me want to be right about them. Not just to associate myself with the majority, or with a set of people I'd pre-determined to be "the right camp to be in", but to actually be right.
Political questions like this are far removed from the kind of analysis you seem to want to apply. If it's you taking out a killer yourself that's one thing, but the question of whether to support it as a law is something entirely different. This rabbit hole goes very far indeed. Anyway, why would you care about the Constitution - you're not one of the signers, are you? ;-)
I care about the Constitution for a couple of reasons beyond the narrowly patriotic:

(1) For the framers, its design posed a problem very similar to the design of Friendly AI. The newly independent British colonies were in a unique situation. On the one hand, whatever sort of nation they designed was likely to become quite powerful; it had good access to very large quantities of people, natural resources, and ideas, and the general culture of empiricism and liberty meant that the nation behaved as if it were much more intelligent than most of its competitors. On the other hand, the design they chose for the government that would steer that nation was likely to be quite permanent; it is one thing to change your system of government as you are breaking away from a distant and unpopular metropole, and another to change your government once that government is locally rooted and supported. The latter takes a lot more blood, and carries a much higher risk of simply descending into medium-term anarchy. Finally, the Founders knew that they could not see every possible obstacle that the young and unusual nation would encounter, and so they would have to create a system that could learn based on input from its environment without further input from its designers. So just as we have to figure out how to design a system that will usefully manage vast resources and intelligence in situations we cannot fully predict and with directions that, once issued, cannot be edited or recalled, so too did the Founding Fathers, and we should try to learn from their failures and successes.

(2) The Constitution has come to embody, however imperfectly, some of the core tenets of Bayesianism. I quote Justice Oliver Wendell Holmes:
Re 1, if that is the case why not support the Articles of Confederation [] instead? I also take exception to the underlying assumption that society needs top-down designing, but that's a very deep debate. If that was really the theory - "checks and balances" - the Constitution was a huge step backward from the Articles of Confederation. (I don't support the AoC, but I'd prefer them to the Constitution.)
I never said we should support it; I said we should care about it. It would be silly to claim that anyone interested in FAI should be pro-Constitution; there were plenty of 18th century people who earnestly grappled with their version of the FAI problem and thought the Constitution was a bad idea. If you agree more with the anti-Federalists, fine! The point is that we should closely follow the results of the experiment, not that we should bark agreement with the particular set of hypotheses chosen by the Founding Fathers for extensive testing.
Very good point, and the founders' process for developing the constitution and bill of rights is important for thinking about how to develop a Friendly (mostly Friendly?) AI.
I swore an oath [] to support and defend the Constitution as a condition of employment, so at the very least I have to signal caring about it. I doubt beriukay is in the same position, though.
Do you really take that sort of thing seriously? Far out if you do, but I have trouble with the concept of an 'oath'.
Oaths in general can be a form of precommitment and a weak signal that someone subscribes to certain moral or legal values, though no one seemed to take it seriously in this instance. On my first day, it was just another piece of paper in with all the other forms they wanted me to sign, and they took it away right after a perfunctory reading. I had to search it out online to remember just what it was I had sworn to do. Later, I learned some people didn't even remember they had taken it.

Personally, I consider it very important to know the rules, laws, commitments, etc., for which I may be responsible, so when I or someone else breaks them, I can clearly note it. For example, in middle school, one of my teachers didn't like me whispering to the person sitting next to me in class. When she asked what I was doing, I told her that I was explaining the lesson, since she did a poor job of it. She asked me if I would like to be suspended for disrespect; I made sure to let her know that the form did not have 'disrespect' as a reason for suspension, only detention.
Far out. That is important. As for your story, it's something I would have done, but I hope you understand that a little tact could have gone a long way. What I was trying to get at, you seem to think as well. You think you are sending a 'weak signal' that you are committed to something. But you are using words that I think many around here would be suspicious of (e.g. 'oath' and 'sworn'). You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no really, I'm serious'?
Perhaps through enforcement. There are a significant number of laws, regulations, and directives that cover US Federal employees, and the oath I linked to above is a signed and sworn statement indicating the fact that I am aware of and accept responsibility for them.
You prefer more time locked up in school than less?
My explanation: It is ironic that 'more time at school after it finishes' is used as a punishment and yet 'days off school' is considered a worse punishment. Given the chance I would go back in time and explain to my younger self that just because something is presented as a punishment or 'worse punishment' doesn't mean you have to prefer to avoid it. Further, I would explain that getting what he wants does not always require following the rules presented to him. He can make his own rules, choose among preferred consequences. While I never actually got either a detention or a suspension, I would have to say I'd prefer the suspension.
In theory, but I wonder how long it has been since you were in school. In GA they got around to making a rule that if you were suspended you would lose your driver's license. Also, suspensions typically imply a 0 on all assignments (and possibly tests) that were due during the suspension.
As a teacher or a student? 4 years and respectively.
I have a martyr complex.
How so?
An oath is an appeal to a sacred witness, typically the God of Abraham. An affirmation is the secular version of an oath in the American legal system.
Hailing from secular Britain I wasn't aware of the distinction. Affirmation actually sounds more religious to me. I'd never particularly associated the idea of an oath with religion but I can see how such an association could sour one on the word 'oath'.
Yeah, I like Kevin's short answer. But in general, as I said to Rain: when you make something a contract there are some legal teeth behind it, but swearing to uphold the Constitution feels silly.
Well, obviously the idea of an oath only has value if it is credible; that is why there are often strong cultural taboos against oath-breaking. In times past there were often harsh punishments for oath-breaking to provide additional enforcement, but it is true that in the modern world much of the function of oaths has been transferred to the legal system. Traditionally, however, one of the things that defined a profession was the expectation that its members held themselves to a standard above and beyond the minimum enforced by law. Professional oaths are part of that tradition, as is the idea of an oath sworn by civil servants and other government employees. This general concept is not unique to the US or to government workers.

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.


Telling people frantically about problems that are not on a very short list of "approved emergencies" like fire, angry mobs, and snakes is a good way to get people to ignore you, or, failing that, to dislike you. It is only very recently (in evolutionary time) that ordinary people are likely to find important solutions to important social problems in a context where those solutions have a realistic chance of being implemented. In the past, (a) people were relatively uneducated, (b) society was relatively simpler, and (c) arbitrary power was held and wielded relatively more openly. Thus, in the past, anyone who was talking frantically about social reform was either hopelessly naive, hopelessly insane, or hopelessly self-promoting. There's a reason we're hardwired to instinctively discount that kind of talk.
You should present the easily implemented, obviously better solution at the same time as the problem. If the solution isn't easy to implement by the person you're talking to, then cost/benefit analysis may be in favor of the status quo or you might be talking to the wrong person. If the solution isn't obviously better, then it won't be very convincing as a solution or you might not have considered all opinions on the problem. And if there is no solution, then why complain?
Is that true? 'Cause if it's true, I'd like to join.

Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.

These results provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults. This was true for both the ‘general cognitive training’ group (experimental group 2) who practised tests of memory, attention, visuospatial processing and mathematics similar to many of those found in commercial brain trainers, and for a more focused training group (experimental group 1) w

...
Brain training, for those not following the link, refers to playing games involving particular mental skills (e.g. memory). The study ran six weeks. I don't think the experiment looks definitive - the control group did not appear as thoroughly distinguished from the test groups as I would have liked - but the MRC Cognition and Brain Sciences Unit (who were partners in the experiment) is well-regarded enough that I would call the null result major evidence.
The fact that they studied adults rather than children may make a difference.

Rats have some ability to distinguish between correlation and causation

To get back to the rat study—it's very simple actually. What I did is: I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food. So they might have used Pavlovian conditioning; just as I said, Pavlovian conditioning might be the substrate by which animals learn to piece together spatial maps and maybe causal maps as well. If they treat the light as a common cause of the tone and of food, they see [hear] the ton

...
The information here is a little scant. If, in the cases where there was a tone instead of food, the tone always followed very soon after the light, it'd be most logical for rats to wait for the tone after seeing the light, and only go look for food after confirming that no tone was forthcoming. (This would save them effort, assuming the food section was significantly far away. No tone = food. Tone = no food. Or did the scientists sometimes have the light be followed by both tone and food? I assume no, because that would introduce a first-order Pavlovian association between tone and food, which would mess up the next part of the experiment.) If, as I suggested above, the rats had previously been trained to wait for the lack of a tone before checking in the food section, this result would more strongly rule out a second-order Pavlovian response. On the one hand, this is really surprising. On the other hand, I don't see how rats could survive without some cause-and-effect and logical reasoning. I'm really eager to see more studies on logical reasoning in animals. Any anecdotal evidence from house pets, anyone?

David Chalmers has written up a paper based on the talk he gave at the 2009 Singularity Summit:

From the blog post where he announced the paper:

The main focus is the intelligence explosion that some think will happen when machines become more intelligent than humans. First, I try to clarify and analyze the argument for an intelligence explosion. Second, I discuss strategies for negotiating the singularity to maximize the chances of a good outcome. Third, I discuss issues regarding uploading human minds into comp

...
Rather sad to see Chalmers embracing the dopey "singularity" terminology. He seems to have toned down his ideas about development under conditions of isolation: "Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will." Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but of course we won't keep these things permanently restrained on grounds of sheer paranoia - that would stop us from using them. 53 pages with only 2 mentions of zombies - yay.
We can't test for values -- we don't know what they are. A negative test might be possible ("this thing surely has wrong values"), as a precaution, but not a positive test.
Testing often doesn't identify all possible classes of flaw. It is still very useful, nonetheless.

PDF: "Are black hole starships possible?"

This paper examines the possibility of using miniature black holes for converting matter to energy via Hawking radiation, and propelling ships with that. Pretty interesting, I think.

I'm no physicist and not very math literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target behind a barrier of ridiculous energy density. The paper, rudimentary as it is, does not discuss this feeding issue.
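For a rough sense of the scale involved, the standard Hawking-radiation formulas (horizon radius r = 2GM/c², radiated power P = ħc⁶/15360πG²M²) can be evaluated for an illustrative 10⁹ kg black hole (a figure chosen for illustration, not necessarily the paper's design point):

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

def schwarzschild_radius(mass_kg):
    """Event-horizon radius: r = 2GM/c^2."""
    return 2 * G * mass_kg / C**2

def hawking_power(mass_kg):
    """Radiated power: P = hbar*c^6 / (15360*pi*G^2*M^2)."""
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

m = 1e9  # kg -- an illustrative million-tonne black hole
print(f"radius: {schwarzschild_radius(m):.2e} m")  # ~1.5e-18 m, attometer scale
print(f"power:  {hawking_power(m):.2e} W")         # ~3.6e14 W, hundreds of terawatts
```

Note how the power grows as the hole shrinks (P ∝ 1/M²), which is exactly why the feeding problem above is hard: the smaller and hotter the hole, the fiercer the radiation barrier around the tiny target.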

I prefer links to the abstract, when possible.
This might be interesting in combination with a "balanced drive". They were invented by science fiction author Charles Sheffield, who attributed them to his character Arthur Morton McAndrew, so they are sometimes also called a "McAndrew Drive" or a "Sheffield Drive". The basic trick is to put an incredibly dense mass at the end of a giant pole such that the inverse square law of gravity is significant along the length of the pole. The ship flies "mass forward" through space. Then the crew cabin (and anything else incapable of surviving enormous acceleration) is set up on the pole so that the faster the acceleration, the closer it is to the mass. The cabin, flying "floor forward", changes its position while the floor flexes as needed so that the net effect of the ship's acceleration plus the force of gravity balance out to something tolerable. When not under acceleration you still get gravity in the cabin by pushing it out to the very tip of the pole. The literary value of the system is that you can do reasonably hard science fiction and still have characters jaunt from star to star so long as they are willing to put up with the social isolation because of time dilation, but the hard part is explaining what the mass at the end of the pole is, and where you'd get the energy to move it. If you could feed a black hole enough to serve as the mass while retaining the ability to generate Hawking radiation, that might do it. Or perhaps simply postulate technological control of quantum black holes and then use two in your ship: a big one to counteract acceleration and a small one to get energy from a "Crane-Westmoreland Generator".

Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.

FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."

This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < sma...

Be careful about making assumptions about the intelligence of cows. I used to think sheep were stupid, then I read that sheep can tell humans apart by sight (which is more than I can do for them!), and I realized on reflection I never had any actual reason to believe sheep were stupid, it was just an idea I'd picked up and not had any reason to examine. Also, be careful about extrapolating from the intelligence of domestic cows (which have lived for the last few thousand years with little evolutionary pressure to get the most out of their brain tissue) to the intelligence of their wild relatives.
I'm not sure if it's useful to speak of a domesticated animal's raw "intelligence" by citing how they interact with humans. "Little evolutionary pressure" means "little NORMAL evolutionary pressure" for animals protected by humans. That is, surviving and propagating is less about withstanding normal natural situations, and more about successfully interacting with humans. So, sheep/cows/dogs/etc. might have pools of genius in the area of "find a human that will feed you," and may be really dumb in almost all other areas.
At the risk of repeating the same mistake as my previous comment, I'll do armchair genetics this time: Perhaps genes controlling the size of various mammalian organs and body regions tend to grow or shrink uniformly, and only become disproportionate when there is a stronger evolutionary pressure. When there is a mutation leading to more growth, all the organs tend to grow more.
(I now see this answered in the first few comments on the link Eliezer posted.) Purely armchair neurology: To answer the question of why cow brains would need to be bigger than rat brains, I asked what would go wrong if we put a rat brain into a cow. (Ignoring organ rejection and cheese-crazed, wall-eating cows.) We would need to connect the rat brain to the cow body, but there would not be a 1-to-1 correspondence of connections. I suspect that a cow has many more nerve endings throughout its body. At least some of the brain/body correlation must be related to servicing the body's nerves (both sensory and motor).
The cow needs more receptors, and more activators. However, this would lead one to expect the relationship of brain size to body size to follow a power-law with an exponent of 2/3 (for receptors, which are primarily on the skin); or of 1 (for activators, which might be in number proportional to volume). The actual exponent is 3/4. Scientists are still arguing over why.
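As an aside, the way such exponents are estimated can be sketched with synthetic data (the numbers below are illustrative, not real measurements): brain mass is generated as body mass raised to the 3/4 power, and an ordinary least-squares slope on the log-log scale recovers the exponent.

```python
import math

def fitted_exponent(body_masses, brain_masses):
    """Slope of log(brain) vs. log(body) via ordinary least squares.

    For an allometric relation brain = k * body**a, this slope is a.
    """
    xs = [math.log(m) for m in body_masses]
    ys = [math.log(b) for b in brain_masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

bodies = [0.03, 0.3, 500.0]                   # rat-ish to cow-ish body masses, kg (illustrative)
brains = [0.01 * m ** 0.75 for m in bodies]   # synthetic data built with exponent 3/4
print(round(fitted_exponent(bodies, brains), 3))  # -> 0.75
```

The interesting empirical question is which exponent real data favors; the receptor and activator arguments above predict 2/3 and 1, while the observed fit is closer to 3/4.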
West and Brown have done some work on this which seemed pretty solid to me when I read it a few months ago. The basic idea is that biological systems are designed in a fractal way, which messes up the dimensional analysis. A Science article of theirs contains similar ideas. Edit: A recent Nature article shows that there are systematic deviations from the power law, somewhat explainable with a modified version of the model of West and Brown.
Can something be mathematical and yet not strict? Overly-simple mathematical models don't always work in the real world.

My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?

ETA: Their father is non-religious. I don't know why he's putting up with this.

I wouldn't proselytize too directly - you want to stay on their (and their mother's) good side, and I doubt it would be very effective anyways. You're better off trying to instill good values - open-mindedness, curiosity, ability to think for oneself, and other elements of rationality & morality - rather than focusing on religion directly. Just knowing an atheist (you) and being on good terms with him could help lead them to consider atheism down the road at some point, which is another reason why it's important to maintain a good relationship. Think about the parallel case of religious relatives who interfere with parents who are raising their kids non-religiously - there are a lot of similarities between their situation and yours (even though you really are right and they just think they are) and you could run into a lot of the same problems that they do. I haven't had the chance to try it out personally, but Dale McGowan's blog seems useful for this sort of thing, and his books might be even more useful.
I think that's some very good advice, and I'd like to elaborate a bit. The thing that made me ditch my religion was the fact that I already had a secular, socially liberal, science-friendly worldview, and it clashed with everything they said in church. That conflict drove my de-conversion, and made it easier for me to adjust to atheism. (I was even used to the idea, from most of my favorite authors mentioning that they weren't religious. Harry Harrison, in particular, had explicitly atheistic characters as soon as his publishers would let him.) So, yeah, subtlety is your friend here.
One thing to do is make sure the kids understand that the Bible is just a bunch of stories. My mom teaches Reform Jewish Sunday school and makes this clear to her students. I make fun of her for cranking out little atheists. Teaching that the Bible is a bunch of stories written by multiple humans over time is not nearly as offensive as preaching atheism. Start there. This bit of knowledge should be enough to get your young relatives thinking about religion, if they want to start thinking about it.
I'm not speaking from experience here, but that doesn't stop me from having opinions. I don't believe this is an emergency. Are the kids' lives being affected negatively by the religion? What do they think of what they're being taught? Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them? Their minds aren't a battlefield between you and religious school - what they believe is, well, not exactly their choice, because people aren't very good at choosing, but more their choice than yours. I recommend teaching them a little thoughtful cynicism, with advertisements as the subject matter.
I haven't seen any evidence that they're being bothered by anything. Mostly, I just want to make it clear that, unlike a lot of other things they're learning in school, there are a lot of people who have good reasons to think the stories aren't true - to make it clear that there's a difference between "Moses led the Jews out of Egypt" and "George Washington was the first President of the United States."
Dangerous situation! How do the parents feel about science and science fiction? I believe that stuff has good effects.
Possibly introducing them to some of the content in A Human's Guide to Words, such as dissolving the question, would lead them to theological noncognitivism. The nice thing about that as opposed to direct atheism is it's more "insidious" because instead of saying, "I don't believe" the kids would end up making more subtle points, like, "What do you even mean by omnipotent?" This somehow seems a lot less alarming to people, so it might bother the parents much less, or even seem like "innocent" questioning.
Introduce them to really cool, socially near, atheists. In particular, provide contact with attractive opposite-gender children who are a couple of years older and are atheists.
Teach them the basics of Bayesian reasoning without any connection to religion. This will help them in more ways and will lay the foundation for later, when they naturally start questioning religion. Also, their parents won't have anything against it if you merely introduce it as a method for physics or chemistry, or with the standard medical examples.
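A minimal sketch of the standard medical-test example (the numbers are the usual illustrative ones: 1% base rate, 80% sensitivity, 9.6% false-positive rate - not data about any real test):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem.

    P(D|+) = P(+|D) P(D) / [P(+|D) P(D) + P(+|~D) P(~D)]
    """
    p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / p_positive

# A positive result on a rare condition is far weaker evidence than intuition suggests:
print(round(posterior(0.01, 0.8, 0.096), 3))  # -> 0.078, i.e. about 7.8%
```

The surprise that the answer is under 8% rather than near 80% is exactly the kind of hook that makes the method stick, with no religious content anywhere in sight.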
Speaking as someone who is seeing that sort of thing happening on the inside, I'm really not sure how you should deal with it. Even teaching traditional rationality doesn't help if religion is wrapped up in their social identity. I myself was lucky, in that I never did believe in god. I almost believe that the reason I came through sane was my IQ, although I'm sure that cannot be entirely correct. Getting them to socialize with other children who don't believe in god, or if that's not possible, children who believe in very different gods might help. I would also suggest you introduce them to fiction with strong rationality memes - Eliezer's Harry Potter fanfic [edited, see below] is the kind of thing that might appeal to children, although it has too much adult material.
Eliezer Yudkowsky:
Um... Chapter 7 is not the child-friendliest chapter in the world. Teen-friendly, maybe. Not child-friendly.
Ah, yes. Totally slipped my mind. Part of the problem might be that I was reading that kind of material by age 10 so I'm a bit desensitized. However, I continue to think that the overall package is generally appealing to children. Perhaps delivery of a hard copy that has been judiciously edited might work.
True story: when I was 8 or so, I loved Piers Anthony's Xanth books. So much that I went and read all of his other books.
Even Xanth isn't harmless throughout.
Xanth's dark places are a heck of a lot more kid-friendly than, say, Bio of a Space Tyrant.
Of course. But I can't think of a single Piers Anthony item that I'd actually recommend to a child. Or, for that matter, to an adult, but that's because Anthony's work sucks, not because it's inappropriate.
I'd classify his... preoccupation... with young teenage girls paired with much older men as "inappropriate".
This is one of those "stupid questions" to which the answer seems obvious to everyone but me: What's wrong with a 16-year-old and a 30-year-old having sex?
It's a power thing. In our culture, the power differential between most 16-year-olds and most 30-year-olds is large enough to make the concept of 'uncoerced consent' problematic.
In principle, nothing. Positive, worthwhile, sexual relationships can exist between 16-year-olds and 30-year-olds. In practice, there can be a great deal wrong, which cuts against the probability of any given relationship with that age split being a net positive. There are immediately obvious power differentials (several legal and commercial age lines of increasing responsibility and power fall between them[1]), there is a large disparity in history and experience, and probably economic power. These really can lower the downside immensely, while not raising the upside. [1]: i.e. several things change at 18, drinking at 21, renting cars at 25

I'd put it differently: There's nothing intrinsically wrong with a 16-year-old and a 30-year-old having sex, any more than there is anything intrinsically wrong with two 30-year-olds having sex. There may be extrinsic factors in either case that make it problematic (somebody's being coerced or forced, somebody's elsewhere married, somebody's intoxicated, somebody's being manipulative to get the sex). The way our society is set up, the first case is dramatically more likely to feature such extrinsic factors than the second case.

Most of my aversion to that theme is (just?) cultural preference. I cannot tell whether I would object to the practice in another culture without more information about, for example, any physical or emotional trauma involved, reproductive implications, degree of physical maturity and the opportunity for the girls to self-determine their own lives. I would then have to compare the practice with 'forced schooling' from our culture to decide which is more disgusting.
I've read a fair bit about this, but I would be interested in reading more about your perspective on this, in particular, the parts of the system that evoke for you such a visceral feeling as disgust.
I'm interested in wedrifid's response as well, but I share the disgust for forced schooling, at least as it's currently practiced.

  • In particular it's the extreme lack of freedom that bothers me. Students are constantly monitored, disciplined for minor infractions, and often can't even go to the bathroom without permission.
  • Knowledge is dispensed in small units to the students as if they were all identical, without any individualization or recognition that students may be interested in different things or have different rates of learning.
  • Students are frequently discouraged from learning on their own or pursuing their own interests, or at the very least not given time to do so.
  • The practice of giving grades puts the emphasis on competition and guessing the teacher's password rather than on creative thought or deep understanding. Students learn to get a grade, not out of intellectual curiosity.
  • Students are isolated in groups of students their own age, rather than interacting in the real world with community members of all different ages. This creates an unnatural and unhealthy social environment that leads to cliques and bullying.

There are many schools that have made progress on some of these areas. Many cities have alternative or magnet schools that solve some of these problems, so I'm describing a worst-case scenario. I'd suggest "The Teenage Liberation Handbook" by Grace Llewellyn for more on this, if you haven't already read it.
Students don't get to see adults making decisions.
Right. And I would consider that inappropriateness sufficient to refrain from recommending the books to a child. The fact that they also suck is necessary to extend that lack of recommendation to adults. Sorry if it was unclear.
Oh no, you were clear. All I mean is that the skeeviness of that particular theme is sufficient for not recommend PA to adults (even if the writing weren't ass). ETA: Yeah, so, that was me being unclear, not you.
I'm incredibly curious why that theme bothers you so much that you wouldn't recommend that book to adults. There's a lot of fiction, and erotic fiction, around that theme: would you be against all of it? I haven't read Anthony, so I don't know how he handles it. But despite cultural taboos, in some sense it seems like a better fit for young (straight) men to date older women, and vice versa. The more experienced partner can teach the less experienced partner. The power imbalance can be abused, but any relationship has the potential for abuse. Is it just the violation of the cultural taboo that bothers you? Is it the same sort of moral disgust that people feel about incest? Sexual taboos are incredibly fascinating to me.
Having read quite a bit of Piers Anthony's work, I noticed that it got consistently worse as he got older. I still think A Spell for Chameleon was pretty good (and so was Tarot, if you don't mind the deliberate squick-inducing scenes), but anything he wrote after, say, 1986 is probably best avoided - everything had a tendency to turn into either pure fluff or softcore pornography.
The entire concept of Chameleon is nasty. Her backstory sets up all of the men from her village as being thrilled to take advantage of "Wynne" and universally unwilling to give "Fanchon" the time of day, while about half of them like "Dee". (Anthony is notable for being outrageously sexist towards both genders at once.) Her lifelong ambition is to sit halfway between the two extremes permanently, sacrificing the chance to ever have her above-average intellect because she wants male approval and it's conditional on being pretty (while she recognizes that being as stupid as she sometimes gets is a hazard). Bink is basically presented as a saint for putting up with the fact that she's sometimes ugly for the sake of getting "variety". It's implied that in her smart phase he values her as a conversation partner but actually touching her then would be out of the question. I haven't read the book in years, but I don't remember Chameleon having any complaints about the dubious sort of acceptance Bink offers; she just loves him because he's the protagonist and love means never having to say you want any accommodations whatsoever from your partner, apparently.
I still have some fondness for Macroscope. The gender stuff is creepy, but the depiction of an interstellar information gift culture seemed very cool at the time. I should reread it and see how it compares to how the net has developed.

"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."

Attention everyone: This post is currently broken for some unknown reason. Please use the new post at if you want to discuss the sub-Reddit. The address of the sub-Reddit is

What do you value?

Here are some alternate phrasings in an attempt to find the same or similar reasoning (it is not clear to me whether these are separate concepts):

  • What are your preferences?
  • How do you evaluate your actions as proper or improper, good or bad, right or wrong?
  • What is your moral system?
  • What is your utility function?

Here's another article asking a similar question: Post Your Utility Function. I think people did a poor job answering it back then.

I value empathy. Unfortunately, it's a highly packed word in the way I use it. Attempting a definition, I'd say it involves creating the most accurate mental models of what people want, including oneself, and trying to satisfy those wants. This makes it a recursive and recursively self-improving model (I think), since one thing I want is to know what else I, and others, want. To satisfy that want, I have to constantly get better at want-knowing. The best way to determine and to satisfy these preferences appears to be through the use of rationality and future prediction, creating maps of minds and chains of causality, so I place high value on those skills. Without the ability to predict the future or map out minds, "what people want" becomes far too close to wireheading or pure selfishness. Empathy, to me, involves trying to figure out what the person would truly want, given as much understanding and knowledge of the consequences as possible, contrasting with what they say they want.
Take a wild, wild guess. No rush -- I'll wait.
I would guess "paperclips and things which are paperclippy", but that still leaves many open questions. Are 100 paperclips which last for 100 years better than 1 paperclip which lasts for 100,000 years? How about one huge paperclip the size of a planet? Is that better or worse than a planetary mass turned into millimeter-sized paperclips? Or maybe you could make huge paperclippy shapes out of smaller paperclips: using paperclip-shaped molecules to form tiny paperclips which you use to make even bigger paperclips. But again, how long should it last? Would you create the most stable paperclips possible, or the most paperclippy paperclips possible? And how much effort would you put into predicting and simplifying the future (modeling, basic research, increases in ability to affect the universe, active reductions to surrounding complexity, etc.) instead of making paperclips? You could spend your entire existence in the quest for the definition of ultimate paperclippiness...

Well, User:Rain, that's about the story of my existence right there. What kinds of paperclips are the right ones? What tradeoffs should I make?

However, regarding the specific matters you bring up, they are mostly irrelevant. Yes, there could be some conceivable situation in which I have to trade off paperclips now against paperclips later. But the way it usually works is that once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever. Also, it's obviously the number of paperclips that matters, and the constraint on bizarre paperclips is obviously that they have to be able to (counterfactually) hold sheets of paper together.

If you want to get past this abstract philosophizing and on to some concrete problems, it would be better to talk about the dilemma that User:h-H posed to me, in which I must consider alternate models of paperclipping that don't have the shape of standard paperclips. Here's my recent progress on thinking about the issue.

My current difficulty is extrapolating my values to cover unexpected situations like this, starting from the simplest algorithm I can find which generates my current preference. The problem is that... (read more)

How much modeling of the universe have you done, or will you do, to ensure that this is the case? How much effort will you put into creating and defending a suitable safe zone? Will you search the stars for invasive self-replicators (green goo) who may be a threat to your paperclip cache, or will you postpone active offense/defense until such time as you receive a credible threat? Humankind has been searching for answers for thousands of years. Do you think you'll have a better time discovering ultimate paperclippiness? Or will you be content with a lesser understanding ("I did my best")?
1) I will do all things such that they maximize expected paperclip content of the universe, trading off smaller paperclip quantities for larger ones. I can't express a more specific algorithm than that without knowing the particulars of the situation.

2) I will do much better than humans at finding the ultimate morality* of the universe because I can spend all my resources to make perfect copies of myself that share my values and update knowledge and reflective value equilibria among each other, rather than having to pursue other values like "signaling" (except when I post here, of course).

* Note I said ultimate morality, not ultimate paperclippiness. There's a strange tendency here to equate everything I like with "paperclippiness", and everything humans like with "morality" or "friendliness". Wrong: we all want morality (humans and clippys), we just disagree about what counts as moral. To use the alternate naming scheme of calling my values "paperclippy" and human values "morality" is to assume away the problem. (Certainly, I sometimes fall into the trap of calling good things, either for humans or clippys, "paperclippy" -- see here [] -- but this is just when I'm being imprecise with my phrasings.)
Thanks for answering my questions. I think I have a better understanding of your values now. My empathy circuits are lighting up, telling me I should buy a box of paperclips and keep them safe for you. And that I should put them on display as a warning to myself.
A warning of what???
How morality can go awry. I already have a framed print of Hug Bot [] on my wall.
I don't fully understand how I tell good from bad. A query goes in, an answer pops out in the form of a feeling. Many of the criteria probably come from my parents, from reading books, and from pleasant/unpleasant interactions with other people. I can't boil it down to any small set of rules that would answer every moral question without applying actual moral sense, and I don't believe anyone else can. It's easier to give a diff, to specify how my moral sense differs from that of other people I know. The main difference I see is that some years ago I deeply internalized the content of Games People Play [] and as a result I never demonstrate to anyone that I feel bad about something - I now consider this a grossly immoral act. On the other hand, I cheat on women a lot and don't care too much about that. In other respects I see myself as morally average.
How has not demonstrating to people that you feel bad about something worked out for you?
Very well. It attracts people.
I value my physical human needs, similarly to Maslow ['s_hierarchy_of_needs]. I endeavor to value larger, long-term contributions to my needs more than short term ones. I often act as though I value others' needs approximately in relation to how well I know them, though I endeavor to value others' needs equally to my own. Specifically I do this when making a conscious value calculation rather than doing what "feels right." I almost always fulfill my own basic needs before fulfilling the higher needs of others; I justify this by saying that I would be miserable and ineffective otherwise but it's very difficult to make my meat-brain go along with experiments to that end. My conscious higher order values emerge from these.
Getting pleasure and avoiding pain, just like everyone else. The question isn't, "What do I value?" but "When [] do I value it?" (And also, "What brings you pleasure and pain?" But do you really want to know that?)
It's not as simple as that. Happiness/suffering might be a better distinction. Some people get happiness (and even pleasure) from receiving physical sensations that can be classed as painful (people who like spicy foods, people who are into masochism, etc.). Using happiness/suffering makes it clear that we're talking about mental states, not physical sensations. And, of course, there are some people who claim to actually value suffering, e.g. religious leaders who preach it as a means to spiritual cleanliness, though it's arguable that they're talking more about pain than suffering, if they find it spiritually gratifying.

Or it might behoove us to clarify it further as anticipated happiness/suffering — "What do you value?" meaning "What do you anticipate will maximize your long-term happiness and minimize your long-term suffering?".

Further, talking about values often puts people in full-force signaling mode. It might actually expand to "What do you want people to think you anticipate will maximize your long-term happiness and minimize your long-term suffering?" So answering "What is your utility function?" (what's the common pattern behind what you actually do?) or "What is your moral system?" (what's the common pattern behind how you wish you and others would act?) might be best.
Happiness/unhappiness vs. pleasure/pain - whatever you want to call it. All these sorts of words carry extra baggage, but pleasure/pain seems to carry the least. In particular, if someone asked me, "How do you know you're happy right now?" I would have to say, "Because I feel good feelings now." Re: your second paragraph, I suggest that you're driving toward my "When do you value?" question above. As for what I want to signal, that's a more mundane question for me, but I suppose I want people to see me as empathetic and kind - people seeing me that way gives me pleasure / makes me feel happy.
I value time spent in flow times the amount of I/O between me and the external world. "Time spent in flow" is a technical term for having a good time. By I/O (input/output) I mean both information and actions. Talking to people, reading books, playing multiplayer computer games, building pyramids, writing software to be used by other people are examples of high impact of me on the world and/or high impact of the world on me. On the other hand, getting stoned (or, wireheaded) and daydreaming has low interaction with the external world. Some of it is okay though because it's an experience I can talk to other people about.
I value individual responsibility for one's own life. As a corollary I value private property and rationality as means to attain the former. From this I evaluate as good anything that respects property and allows for individual choices. Anything that violates property or impedes choice as bad.
Are you sure that is your real reason for valuing the latter? I doubt it.

* Private property implies responsibility for one's own life can be taken by your grandfather and those in your community who force others to let you keep his stuff.
* Individual responsibility for one's own life, if that entails actually living, will sometimes mean choosing to take what other people claim as their own so that you may eat.
* Private property ensures that you don't need to take individual responsibility for protecting yourself. Other people handle that for you. Want to see individual responsibility? Find a frontier and see what people there do to keep their stuff.
* Always respecting private property and unimpeded choice guarantees that you will die. You can't stop other people from creating a superintelligence in their back yard to burn the cosmic commons. And if they can do that, well, your life is totally in their hands, not yours.
"Are you sure that is your real reason for valuing the latter? I doubt it." Why do you think you know my valuations better than me? What evidence do you have? As for your bullet points, if I eat a sandwich nobody else can. That's inevitable. Taking responsibility for my own life means producing the sandwich I intend to eat or trade something else I produced for it. If I simply grab what other people produced I shift responsibility to them. And on the other hand if I produced a sandwich and someone else eats it, I can no longer use the sandwich as I intended. Responsibility presupposes choice because I can not take on responsibility for something I have no choice over. And property simply is the right to choose.
Only the benefit of the doubt. If you actually value private property because you value individual responsibility then your core value system is based on confusion. Assuming you meant "I value personal responsibility, I value private property, these two beliefs are politically aligned and here is one way that one can work well with the other" puts your position at least relatively close to sane. No more than Chewbacca is an Ewok. He just isn't, even if they both happen to be creatures from Star Wars.
So, there's the problem. I was using property as "having the right to choose what is done with something". I looked it up in a dictionary but that wasn't helpful. So what is your definition of property? Edit: Wikipedia seems to be on my side: "Depending on the nature of the property, an owner of property has the right to consume, sell, rent, mortgage, transfer, exchange or destroy their property, and/or to exclude others from doing these things." I think this boils down to "the right to choose what is done with it". On a side note it seems that "personal property" is closer to what I meant than "private property".
Laws pertaining to personal property give me the reasonable expectation that someone else will take, indeed, insist on taking responsibility for punishing anyone who chooses to take my stuff. If I take too much responsibility for keeping my personal property I expect to be arrested. I have handed over responsibility in this instance so that I can be assured of my personal property. This is an acceptable (and necessary) compromise. Personal responsibility is at odds with reliance on social norms and laws enforced by others. I am all in favor of personal property, individual choice and personal responsibility. They often come closely aligned, packaged together in a political ideology. Yet they are sometimes in conflict and one absolutely does not imply the other.

I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationality. Here's the toughest beast I've yet encountered, offered not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).

A teacher announces that there will be a surprise test next week. A student objects that this

... (read more)
Let's not forget that the clever student will indeed be very surprised by a test on any day, since he thinks he's proven that he won't be surprised by tests on those days. It seems he made an error in formalizing 'surprise'. (Imagine how surprised he'll be if the test is on Friday!)
Since the student believes a surprise test is impossible, it seems this wouldn't surprise him.
Why not give a test on Monday, and then give another test later that day? I bet they would be surprised by a second test on the same day.
True, there's nothing saying there won't be two tests. Rather than solve this, I was hoping people'd take a look at the linked explanation. When phrased more carefully, it becomes a whole bunch of nested paradoxes, the resolution of which contains valuable lessons on how words can trick people. It covers some LW material along the way, such as Moore's Paradox.
But if there's a solution, it's not really a paradox. And I don't like word arguments [].
Ugh, yes. Why are we speaking of "paradoxes" at all? Anything that actually occurs is not a paradox. If something appears to be a paradox, either you have reasoned incorrectly, you've made untenable assumptions, or you've just been using fuzzy thinking. This is a problem; presumably it has some solution. Describing it as a "paradox" and asking people not to solve it is not helpful. You don't understand it better that way; you understand it by solving it. The only thing gained that way is an understanding of why it appears to be a paradox, which is useful as a demonstration of the dangers of fuzzy thinking, but also kind of obvious. Maybe I'm being overly strict about the word "paradox" here, but I really just don't see the term as at all helpful. If you're using it in the strict sense, paradoxes shouldn't occur except as an indicator that you've done something wrong (in which case you probably wouldn't use the word "paradox" to describe it in the first place). If you're using it in the loose sense, it's misleading and unhelpful. (I prefer to explicitly say "apparent paradox".)
We're all saying the exact same thing here: words are not to be treated as infallible vehicles for communicating concepts. That was the point of my original post, the point of Rain's reply, and yours as well. (You're completely right about the word "paradox.") Also, I'm not saying not to try solving it, just that I've no intention of refuting all proposed solutions. I didn't want my reply to be construed as a debate about the solution, because that would never end.
Words frequently confuse people into believing something they wouldn't otherwise. You may be correct that this confusion can always be addressed indirectly, but in any case it needs to be addressed. Addressing semantic confusion requires identifying it, and I found this riddle (actually the whole article) a great neutral exercise for that purpose. EDIT: Looking back, I should probably just have posted riddle and kept quiet. Updated for next time.
...and yet []... Probably.

p(teacher provides a surprise test) = 1 - x^3

Where: x = 'improbability required for an event to be surprising'

If a 50% chance of having a test that day would leave a student surprised, he can be 87.5% confident in being able to fulfil his assertion. However, if the teacher was a causal decision agent then he would not be able to provide a surprise test without making the randomization process public (or a similar precommitment).
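One way to read the arithmetic above (the fair-coin strategy and the three candidate days are assumptions of this sketch, not anything stated in the announcement): the teacher flips a coin each of three days, and any test that lands was only 50% likely that morning, so it counts as a surprise. A quick simulation agrees with the closed form:

```python
import random

random.seed(0)
x = 0.5    # chance of a test on any given day; by assumption, a 50% chance still surprises
days = 3   # candidate test days, e.g. Monday / Wednesday / Friday
trials = 200_000

fulfilled = 0
for _ in range(trials):
    # The teacher flips a coin each day; the first success is the (surprise) test.
    if any(random.random() < x for _ in range(days)):
        fulfilled += 1

# Closed form from the comment: p(surprise test) = 1 - x^3
print(1 - x**days)         # 0.875
print(fulfilled / trials)  # close to 0.875
```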
The problem with choosing a day at random is: what if it turns out to be Friday? Friday would not be a surprise, since the test will be either Monday, Wednesday or Friday, and so by Thursday the students would know by process of elimination that it had to be Friday.
How do you get that result while requiring that the test occur next week? It is that assumption that drives the 'paradox'.
The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.
You misunderstand me - I maintain that an obvious unstated condition in the announcement is that there will be a test next week. Under this condition, the student will be surprised by a Wednesday test but not a Friday test, and therefore p(teacher provides a surprise test) = 1 - x^2 and, if I guess your algorithm correctly, p(teacher provides a surprise lack of test) = x^2 * (1 - x) [edit: algebra corrected]
The condition is that there will be a surprise test. If the teacher were to split 'surprise test' into two and consider max(p(surprise | p(test) == 100)) then yes, he would find he is somewhat less likely to be making a correct claim. I maintain my previous statement (and math).

Something that irritates me about philosophy as it is often practiced is the emphasis on maintaining awe at how deep and counterintuitive a question is, rather than extracting whatever understanding it offers, dissolving the confusion, and moving on. Yes, this question demonstrates how absolute certainty in one thing can preclude uncertainty in some others. Wow. It also demonstrates that one can make self-defeating prophecies. Kinda-interesting. But don't let that stop you from giving the best answer to the question. Given that the teacher has made the prediction, and given that he is trying to fulfill his announcement, there is a distinct probability that he will be successful. Quit saying 'wow', do the math, and choose which odds you'll bet on!
I never intended to dispute that, only the specific figure 87.5%. It's a minor point. Your logic is good.

Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.

I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my en... (read more)

Melatonin []. Also, getting my housemates to harass me if I don't go to bed.
Mass_Driver's comment is kind of funny to me, since I had addressed exactly his issue at length in my article.
Which, I couldn't help but notice, you have thoughtfully linked to in your comment. I'm new here; I haven't found that article yet.
If you're not being sarcastic, you're welcome. If you're being sarcastic, my article is linked, in Nick_Tarleton's very first sentence []; it would be odd for me to simply say 'my article' unless some referent had been defined in the previous two comments, and there is only one hyperlink in those two comments.
Gwern, I apologize for the sarcasm; it wasn't called for. As I said, I'm new here, and I guess I'm not clicking "show more above" as much as I should. However, a link still would have been helpful. As someone who had never read your article, I had no way of knowing that a link to "Melatonin" contained an extensive discussion about willpower and procrastination. It looked to me like a biological solution, i.e., a solution that was ignoring my real concerns, so I ignored it. Having now read your article, I agree that taking a drug that predictably made you very tired in about half an hour could be one good option for fighting the urge to stay up for no reason, and I also think that the health risks of taking melatonin long-term -- especially at times when I'm already tired -- could be significant. I may give it a try if other strategies fail.
I strongly disagree, but I also dislike plowing through as enormous a literature as that on melatonin and effectively conducting a meta-study, since Wikipedia already covers the topic and I wouldn't get a top-level article out of such an effort, just some edits for the article (and old articles get few hits, comments, or votes, if my comments are anything to go by).
I've been struggling with this for years, and the only thing I've found that works when nothing else does is hard exercise. The other two things that I've found help the most:

* Let the sun hit your eyelids first thing in the morning (to halt melatonin production)
* F.lux [], a program that auto-adjusts your monitor's light levels (and keep your room lights low at night; otherwise melatonin production will be delayed)

EDIT: Apparently [] keeping your room lights at a low color temperature (incandescent/halogen instead of fluorescent) is better than keeping them at low intensity: "...we surmise that the effect of color temperature is greater than that of illuminance in an ordinary residential bedroom or similar environment where a lowering of physiological activity is desirable, and we therefore find the use of low color temperature illumination more important than the reduction of illuminance. Subjective drowsiness results also indicate that reduction of illuminance without reduction of color temperature should be avoided." —Noguchi and Sakaguchi, 1999 [] (note that these are commercial researchers at Matsushita, which makes low-color-temperature fluorescents)
That all sounds awfully biological -- are you sure fixing monitor light levels is a solution for akrasia []?
No, the items I've given will only make you more sleepy at night than you would have been. If that's not enough, I agree it's akrasia of a sort, also known as having a super-high time preference [].
Does that imply that HIDs are safer for long drives at night than halogen headlights?
If you use Mac OS, Nocturne [] lets you darken the display, lower its color temperature, etc. manually/more flexibly than F.lux.
For Linux, there's Redshift []. I like it because it's kinder on my eyes, though it doesn't do anything for akrasia.
There is also Shades [], which lets you set a tint color and which provides a slider so you can move gradually between standard and tinted mode.
What do you do instead of going to bed? I notice myself spending time on the Internet.
Either that or painting (The latter is harder to do because the cats tend to want to help me paint, yet don't get the necessity of oppose-able thumbs ... umm...Opposeable? Opposable??? anyway....) Since I have had sleep disorders since I was 14, I've got lots of practice at not sleeping (pity there was no internet then)... So, I either read, draw, paint, sculpt, or harass people on the opposite side of the earth who are all wide awake.
Ah, that puts the causal chain opposite mine - I stay up because I'm doing something, not vice-versa.
I used to be more like MatthewB, but now I'm more like RobinZ. I tend to stay up browsing the Internet, reading sci-fi, or designing board games. The roommate idea has worked in the past, and I do use it for 'emergencies.' My roommates don't really take akrasia seriously, though; they figure if I want to stay up all night and regret it, then that's just fine.
Random ideas: Set an alarm clock or two for the time you want to go to bed, so you don't "lose track of the time." Find some program that automatically turns off your Internet access at a certain time each night.

I recently got into some arguments with foodies I know on the merits (or lack thereof) of organic / local / free-range / etc. food, and this is a topic where I find it very difficult to find sources of information that I trust as reflective of some sort of expert consensus (insofar as one can be said to exist.) Does anyone have any recommendations for books or articles on nutrition/health that holds up under critical scrutiny? I trust a lot of you as filters on these issues.

Scott Alexander (12y)
There are lots of studies on the issue, and as usual most of them are bad and disagree with each other. I tend to trust the one by the UK Food Standards Association [] because it's big and government-funded. Mayo Clinic [] agrees. I think there are a few studies [] that show organic foods do have lower pesticide levels than normal, but nothing showing that it actually leads to health benefits. Pesticides can cause some health problems in farmers, but they're receiving a bajillion times the dose of someone who just eats the occasional carrot. And some "organic pesticides" are just as bad as any synthetic ones. There's also a higher risk of getting bacterial infections from organic food. Tastewise, a lot of organics people cite some studies showing that organic apples [] and other fruit taste better than conventional - I can't find the originals of these and there are equally questionable studies that say the opposite. Organic vegetables taste somewhere between the same and worse, even by organic peoples' admission. There's a pretty believable study showing conventional chicken tastes better than organic [] , and a more pop-sci study claiming the same thing about almost everything []. I've seen some evidence that locally grown produce tastes better than imported, but that's a different issue than organic vs. non-organic and you have to make sure people aren't conflating them. They do produce less environmental damage per unit land, but they produce much less food per unit land and so require more land to be devoted to
Of course, "organic" covers a wide range. I tend not to be blown away by the organic veggies and fruit at Whole Foods. I've had extraordinarily good produce from my local (south Philadelphia) farmer's markets.
The famous meta-analyses, which have shown that vitamin supplementation is essentially useless or possibly even harmful, totally destroy the basic argument ("oh look, more vitamins!" - not that it's usually even true) that organic is good for your health. It might still be tastier. Or not.
Do you mean these [] metaanalyses?
Yes. Even if PhilGoetz is correct that harmfulness was an artifact, there's still essentially zero evidence for benefits of eating more vitamins than RDA.
I thought Vitamin D was an exception.
My experience (admittedly, not double-blinded) is that the food from the farmer's markets tends to be a lot tastier. Three possibilities: confirmation bias at my end, the theory that local-organic-free range creates better food (and better food tastes better) is correct, and selection pressure-- the only way they can get away with those prices is to sell food which tastes really good.
You should be extremely skeptical of any taste comparisons that are not blinded. One recent story [] carried out a blind taste comparison of Walmart and Whole Foods produce and found Walmart was preferred for some items. If the taste test had not been conducted blind you would likely have seen very different results. This comparison doesn't directly bear on your theory, since both the Walmart and Whole Foods produce was local and organic in most cases, but perceptions of the source are very significant in taste judgements.
Alternative theory: food from local sources (such as farmer's markets) tastes better because it's fresher, because it's transported less and warehoused fewer times. This would imply that production methods, such as being organic or free range, have little or nothing to do with it. This is also pretty easy to test, if you have some visibility into supply chains.
In the UK all supermarkets offer both "normal" and "organic" food. Isn't that true wherever you live? You can use this to check whether organic makes any difference in taste, as both are most likely transported and stored the same.
I want to test a different hypothesis-- whether extreme freshness is necessary for excellent flavor.
That's easy. If you have something very tasty, just store it in a fridge for an extra day, and try it again. I remember some experiments showing that meat got somewhat tastier around its labeled expiration date, which is the opposite result.
Plausible, but hard to test-- how would I get conventionally raised food which is as fresh as what I can get in farmer's markets? I'd say that the frozen meat is also tastier, and it's (I hope) no fresher than what I can get at Trader Joe's.
My extensive but entirely unblinded testing suggests that the cheapest brands of supermarket food usually taste far worse than more expensive brands, and quite a number of times have fallen below my edibility threshold. My theory is this: it's cheaper to produce bad-tasting food than good-tasting food, and then you can use market segmentation: poor people who cannot afford more expensive food will buy the cheap kind, while the majority of people will buy better-tasting, more expensive food. Two price points earn you more money, and since better-tasting food is more expensive to make, competition cannot undercut you. One thing I cannot explain is that this difference applies only to some kinds of food: cheap meat is really vile, but, for example, cheap eggs taste the same as expensive organic eggs, tea price has little to do with its taste, not to mention things like salt and sugar, which simply have to taste the same by the laws of chemistry.
You can buy fancy salts (mined from different places-- there's a lot of pink Tibetan salt around) these days. I'm not interested enough in salt to explore them, so I have no opinion about the taste. I've found that the cheap eggs ($1/dozen) leave me feeling a little off if I eat them a couple of days in a row, but organic free range ($3.50 or more/dozen) don't.
Your second possibility deserves elaboration - I believe a fair restatement is: factory farming methods are less responsive than local organic free-range methods to taste and quality (i.e. cannot control for it as effectively).
Is the methodology of the Amanda Knox test [] useful in this case? (I didn't attempt the test or even read the posts, but it sounds like a similarly politicized problem.)
An Amanda-Knox-type situation would be one where the priors are extreme and there are obvious biases and probability-theoretic errors causing people to overestimate the strength of the evidence. I think one would have to know a fair amount of biochemistry in order for food controversies to seem this way. Although one might potentially be able to apply the heuristic "look at which side has the more generally impressive advocates" -- which works spectacularly well in the Knox case -- to an issue like this.
I thought Robin meant: Let the Less Wrong community sort through the information and see if a consensus arises on one side or the other. In this case no one has a "right answer" in mind, but we got a pretty conclusive, high-confidence answer in the Knox case. Maybe we can do that here -- we'd just need to put the time in (and have a well-defined question). Yes, there aren't many biochemists among us. But we all seem remarkably comfortable reading through studies and evaluating scientific findings on grounds of statistics, source credibility, etc. Also, my uninformed guess is that a lot of the science is just going to consist of statistical correlations without a lot of deep biochemistry.
Oddly, no - although I think that would be a good exercise to carry out at intervals, I was imagining the theoretical solo game that each commenter played before bringing evidence to the community. Which has the difficulties that komponisto mentioned, of there not being prominent pro- and con- communities available, among other things.
I'm thinking:

1. Define the claim/s precisely.
2. Come up with a short list of pro and con sources.
3. Individual stage: anyone who wants to participate goes through the sources and does some additional research as they feel necessary.
4. Each individual posts their own probability estimates for the claims.
5. Communal stage: disagreements are ironed out, sources shared, arguments made, and beliefs revised.
6. Reflection: what, if anything, have we agreed on?

It would be a lot harder than the Knox case but it is probably doable.
Yes, that's it. I don't think enough time has passed to get around to another such exercise, however.
It takes about an hour to familiarize yourself with all of the relevant information in the Knox case, I imagine it would take a lot longer in this case. It might still work though if enough people were willing to invest the time, especially since most people don't already have rigid, well-formed opinions on the issue.

Having read the quantum physics sequence I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively accor... (read more)
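For what it's worth, here is a minimal sketch of the kind of simulation being asked about. It assumes the split-step Fourier method (rather than the piecewise-linear update mentioned above), since that scheme stays exactly unitary; all grid sizes and physical parameters are illustrative choices, with hbar = m = 1.

```python
import numpy as np

# Split-step Fourier evolution of a 1D wavefunction under the
# Schrodinger equation (hbar = m = 1). Grid and parameters are
# illustrative, not tuned for any particular physical system.
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # angular wavenumbers

# Initial state: Gaussian wave packet with mean momentum k0.
k0, sigma = 2.0, 1.0
psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

dt, steps = 0.01, 200
V = np.zeros_like(x)                          # free particle: V = 0
for _ in range(steps):
    psi *= np.exp(-0.5j * V * dt)             # half step in the potential
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)             # second half step

norm = np.sum(np.abs(psi)**2) * dx            # stays ~1: evolution is unitary
mean_x = np.sum(x * np.abs(psi)**2) * dx      # packet drifts at speed k0
```

The final `mean_x` drifts at the packet's group velocity (k0, in these units), which is a quick sanity check on the integrator.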

Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, Youtube playlist. Part One. 8 parts, ~75 minutes.

Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.
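The arithmetic at the heart of the talk is the doubling time of exponential growth, which Bartlett summarizes with the "rule of 70". A quick sketch:

```python
import math

def doubling_time(rate_percent):
    """Exact time to double at a fixed percentage growth rate per period."""
    return math.log(2) / math.log(1 + rate_percent / 100)

def rule_of_70(rate_percent):
    """Bartlett's mental shortcut: T ~= 70 / p."""
    return 70.0 / rate_percent

t_exact = doubling_time(7)   # 7% annual growth doubles in ~10.2 years
t_approx = rule_of_70(7)     # the shortcut says 10 years
```

The shortcut works because ln(2) is about 0.693 and ln(1 + p/100) is about p/100 for small rates.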

People have been worrying about that one since Malthus. Turns out, production capacity can increase exponentially too, and when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.
Yes, for a while. The simplest factor driving this is exponentially more laborers. Then there's better technology of all sorts. Still, after a certain point we start hitting hard limits. (a) Is this guaranteed to happen, a human universal or is it a contingent feature of our culture? (b) Even if it is guaranteed to happen, will the race be won by increasing population hitting hard limits, or populations lifting themselves out of poverty?
I believe it's a quite general phenomenon - Japan did it, Russia did it, USA did it, all of Europe did it, etc. It looks like a pretty solid rich=slower-growth phenomenon: [] And if there were a rich country which continued to grow, threatening neighbors, there's always nukes & war.
I think "hard limits" is the wrong way to frame the problem. The only limits that appear truly unbeatable to me right now are the amounts of mass-energy and negentropy in our supergalactic neighborhood, and even those limits may be a function of the map, rather than the territory. Other "limits" are really just inflection points in our budget curve; if we use too much of resource X, we may have to substitute a somewhat more costly resource Y, but there's no reason to think that this will bring about doom. For example, in our lifetime, the population of Earth may expand to the point where there is simply insufficient naturally occurring freshwater on Earth to support all humans at a decent standard of living. So, we'll have to substitute desalinized oceanwater, which will be expensive -- but not nearly as expensive as dying of drought. Likewise, there are only so many naturally occurring oxygen atoms in our solar system, so if we keep breathing oxygen, then at a certain population level we'll have to either expand beyond the Solar System or start producing oxygen through artificial fusion, which may cost more energy than it generates, and thus be expensive. But, you know, it beats choking or fighting wars over a scarce resource. There are all kinds of serious economic problems that might cripple us over the next few centuries, but Malthusian doom isn't one of them.
It's true that many things have substitutes. All these limits are soft in the sense that we can do something else, and the magic of the market will select the most efficient alternative. At some point this may be no kids, rather than desalinization plants, however, cutting off the exponential growth. (Phosphorus will be a problem before oxygen. Technically, we can make more phosphorus, and I suppose the cost could go down with new techniques other than "run an atom smasher and sort what comes out".) But there really are hard limits. The volume we can colonize in a given time goes up as (ct)^3. This is really, really, really fast. Nonetheless, the required volume for an exponentially expanding population goes as e^(lambda t), and will get bigger than this. (I handwave away relativistic time-dilation -- it doesn't truly change anything.)
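The claim that e^(lambda t) must eventually overtake (ct)^3 is easy to check numerically; lambda and c below are arbitrary illustrative values, not estimates:

```python
import math

lam, c = 0.01, 1.0           # illustrative growth rate and expansion speed

def required_volume(t):      # population demand grows as e^(lam * t)
    return math.exp(lam * t)

def reachable_volume(t):     # colonizable volume grows as (c * t)**3
    return (c * t) ** 3

# The cube is ahead for a long intermediate stretch, but any exponential
# with lam > 0 overtakes it eventually. Find the crossover by stepping.
t = 10.0                     # start where the cube is comfortably ahead
while required_volume(t) < reachable_volume(t):
    t += 1.0
```

With these numbers the exponential catches up around t of a few thousand time units; a smaller lambda only delays the crossover, it never prevents it.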
Or, more precisely, fewer kids. I don't insist that we're guaranteed to switch to a lower birth rate as a species, but if we do, that's hardly an outcome to be feared. Fascinating. That sounds right; do you know where in the Solar System we could try to 'mine' it? Not until we start getting close to relativistic speeds. I couldn't care less about the time-dilation, but for the next few centuries, our maximum cruising speed will increase with each new generation. If we can travel at 0.01 c, our kids will travel at 0.03 c, and so on for a while. Since our cruising velocity V is increasing with t, the effective volume we colonize per generation increases at more than (ct)^3. We should also expect to sustainably extract more resources per unit volume as time goes on, due to increasing technology. Finally, the required resources per person are not constant; they decrease as population increases because of economies of scale, economies of scope, and progress along engineering learning curves. All these factors mean that it is far too early to confidently predict that our rate of resource requirements will increase faster than our ability to obtain resources, even given the somewhat unlikely assumption that exponential population growth will continue indefinitely. By the time we really start bumping up against the kind of physical laws that could cause Malthusian doom, we will most likely either (a) have discovered new physical laws, or (b) have changed so much as to be essentially non-human, such that any progress human philosophers make today toward coping with the Malthusian problem will seem strange and inapposite.
Actually, if we figure out how to stabilize traversable wormholes, the colonizable volume goes up faster than (ct)^3. I'm not sure exactly how much faster, but the idea is, you send one mouth of the wormhole rocketing off at relativistic speed, and due to time dilation, the home-end of the gate opens up allowing travel to the destination in less than half the time it would take a lightspeed signal to travel to the destination and back.
Assuming zero space inflation, the “exit” mouth of the wormhole can’t travel faster than c with respect to the entry. So for expansion purposes (where you don’t need (can’t, actually, due to lack of space) to go back), you’re limited to c (radial) expansion, which is the same as without wormholes. In other words, the volume covered by wormholes expands as (c×t)³ relative to when you start sending wormholes. The number of people is exponential relative to when you start reproducing. Even if you start sending wormholes a long time before you start reproducing exponentially, you’re still going to fill the wormhole-covered volume. (The fault in your statement is that you can go in “less” than half the time only for travel within the volume already covered by wormholes. For arbitrarily far distances you still need to wait for the wormhole exit to reach there, which is still below c.) Space inflation doesn’t help that much. Given long enough time, the “distance” between the wormhole entry and exit point can grow at more than c (because the space between the two expands; the exit points still travel below c). In other words, far parts of the Universe can fall outside your event horizon, but the wormhole can still keep them accessible (for various values of “can”...). This can allow you unbounded growth in the volume of space for expansion (exponentially, if the inflation is exponential), but note that the quantity of matter accessible is still just what was in your (c×t)³ (without inflation) volume of space.
Strange7 is referring to this essay [], especially section 6.
I still don’t get how you can get more than (c×t)³ as a colonized volume. With wormholes you could travel within that volume very quickly, which will certainly help you approach c-speed expansion faster, since engine innovations at home can be propagated to the border immediately. And, of course, your volume will be more “useful” because of the lower communication costs (time-wise, and presuming wormholes are not very expensive otherwise). But I don’t see how you can expand the volume quicker than c, since the border expansion will still be limited by it. (Disclaimer: I didn’t read everything there, mostly the section you pointed out.)
Simple thermodynamics [] guarantees that any growing consumption of resources is unsustainable on a long enough timescale - even if you dispute the implicit timescale in Dr. Bartlett's talk*, at some point planning will need to account for the fundamental limits. Ignoring the physics is a common error in economics (even professional economics, depressingly). * Which you appear not to have watched through - for shame!
Yes, obviously thermodynamics limits exponential growth. I'm saying that exponential growth won't continue indefinitely, that people (unlike bugs []) can, will, and in fact have already begun to voluntarily curtail their reproduction [].
What kind of reproductive memes do you think get selected for?
How strong is the penalty for defection?
Yeah, this obviously matters a lot. Right now it's low to non-existent outside the People's Republic of China, though I suppose that could change. There are a lot of barriers to effective enforcement of reproductive prohibitions: incredibly difficult cooperation problems, organized religions, assorted rights and freedoms people are used to. I suppose a sufficiently strong centralized power could solve the problem, though such a power could be bad for other reasons. My sense is the prospects for reliable enforcement are low, but obviously a singularity-type superintelligence could change things.
I’m not quite sure that penalties are that low outside China. There are of course places where penalties for many babies are low, and there are even states that encourage having babies — but the latter is because birth rates are below replacement, so outside of our exponential growth discussion; I’m not sure about the former, but the obvious cases (very poor countries) are in the Malthusian scenario already due to high death rates. But in (relatively) rich economies there are non-obvious implicit limits to reproduction: you’re generally supposed to provide a minimum of care to children; even more, that “minimum” tends to grow with the richness of the economy. I’m not talking only about legal minimums, but social ones: children in rich societies “need” mobile phones and designer clothes, adolescents “need” cars, etc. So having children tends to become more expensive in richer societies, even absent explicit legal limits like in China, at least in wide swaths of those societies. (This is a personal observation, not a proof. Exceptions exist. YMMV. “Satisfaction guaranteed” is not a guarantee.)
The legal minimum care requirement is a good point. With the social minimum: I recognize that this meme exists but it doesn't seem like there are very high costs to disobeying it. If I'm part of a religion with an anti-materialist streak and those in my religious community aren't buying their children designer clothes either... I can't think of what kind of penalty would ensue (whereas not bathing or feeding your children has all sorts of costs if an outsider finds out). It seems better to think of this as a meme which competes with "Reproduce a lot" for resources rather than as a penalty for defection. Your observation is a good one though.
Sure, within a relatively homogeneous and sufficiently “socially isolated”* community the social cost is light. (*: in the sense that “social minimum” pressures from outside don’t affect it significantly, including by making at least some members “defect to consumerism” and start a consumerist child-pampering positive feedback loop.) I seem to think that such communities will not become very rich, but I can’t justify it other than with a vague “isolation is bad for growth” idea, so I don’t trust my thought. Do you have any examples of “rich” societies (by current 1st-world standards) which are socially isolated in the way you describe? (Ie, free from “consumerist” pressure from inside and immune to it from outside.) I can’t think of any.
I'm not sure I understand what you mean. This isn't a matter of interpersonal communication, it's just individual married couples more-or-less rationally pursuing the 'pass on your genes' mandate by maximizing the survival chances of one or two children rather than hedging their bets with a larger number of individually-riskier children.
If a gene leads to greater fertility rates with no drop in survival rates, it spreads. Similarly if a meme [] leads to greater fertility [,5143,635152902,00.html] with no drop in survival rate and is sufficiently resistant to competing memes [] it too spreads. Thus, those memes/memetic structures that encourage more reproduction have a selection advantage.
In this case, the meme in question leads to a drop in fertility rates, but increases survival rates more than enough to compensate.
I don't really think your characterization of the global drop in fertility rate is right (farmers with big families survive just fine!) but that isn't really the point. The point is, Mormons aren't dying out and neither are lots of groups which encourage reproduction among their members. Unless there are a lot of deconversions or enforced prohibitions against over-reproducing, the future will consist of lots of people whose parents believed in having lots of children, and those people will likely feel the same way. They will then have more children who will also want to have lots of children. This process is unsustainable.
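The selection dynamic being argued over here can be sketched as a toy two-meme model. All the numbers are hypothetical; the point is only that the outcome hinges on the deconversion rate:

```python
# Toy replicator model: meme A ("have many children") reproduces at 3x
# per generation, meme B at 1.5x, and each generation some fraction of
# A's children deconvert to B. All parameters are made up for illustration.
def high_fertility_share(generations, deconversion):
    a, b = 1.0, 1.0                     # initial adherent populations
    for _ in range(generations):
        a, b = (a * 3 * (1 - deconversion),
                b * 1.5 + a * 3 * deconversion)
    return a / (a + b)

locked_in = high_fertility_share(20, deconversion=0.0)   # A dominates
leaky = high_fertility_share(20, deconversion=0.6)       # A stays marginal
```

With no deconversion the high-fertility meme takes over, as the comment above argues; with enough leakage it never does, which is exactly the crux of the disagreement.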
I'm expecting a lot of deconversions. Mormons already go to a lot of trouble to retain members and punish former members, which suggests there's a corresponding amount of pressure to leave. Catholics did the whole breed-like-crazy thing, and that worked out well for a while, but Catholicism doesn't rule the world. I think the relative zeal of recent converts as compared to lifelong believers has something to do with how siblings raised apart are more likely to have sexual feelings for each other, but that's probably a topic for another time.

An extensive observation-based discussion of why people leave cults. Worth reading, not just for the details, but because it makes very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!

People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove them into the cult get resolved, and/or life changes show them that the cult isn't working for them.

Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe. Like, what kind of things follow normal distributions and why, why do power laws emerge everywhere, why scale-free networks all over the place, etc. etc.

Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.

For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:

Gauss Is Not Mocked

So You Think You Have a Power Law — Well Isn't That Special?

Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line

Power-law distributions in empirical data

Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:

Another very relevant and readable paper:

That gives a whole new meaning to Mar's Law [].
Thank you, I never knew this fallacy had its own name, and I have been annoyed by it for ages. Actually, since 2003, when I was working on one of the first online social network services ( The structure of the network contradicted most of the claims made by the then-famous popular science books on networks. Not scale-free (not even truncated power-law), not attack-sensitive, and most of the edges were strong links. Looking at the claims of the original papers instead of the popular science books, the situation was not much better.
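The statistical point behind these critiques is concrete: on synthetic data drawn from a known power law, the maximum-likelihood estimator from the Clauset-Shalizi-Newman paper recovers the exponent, while the naive straight-line fit to a log-log histogram of counts does not. A sketch (parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from a continuous power law p(x) ~ x^(-alpha), x >= xmin,
# via inverse-transform sampling.
alpha_true, xmin, n = 2.5, 1.0, 50_000
u = rng.random(n)
x = xmin * (1 - u) ** (-1 / (alpha_true - 1))

# The maximum-likelihood estimator from Clauset, Shalizi & Newman:
alpha_mle = 1 + n / np.sum(np.log(x / xmin))

# The naive approach Shalizi warns against: least-squares line through a
# log-log histogram of raw counts. Counts in logarithmic bins scale as
# x^(1 - alpha), not x^(-alpha), so this recovers the wrong exponent.
counts, edges = np.histogram(x, bins=np.logspace(0, 3, 30))
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
alpha_naive = -slope                          # noticeably off alpha_true
```

Here the MLE lands very close to 2.5 while the histogram fit comes out near alpha - 1, one of the standard ways the straight-line-on-a-log-log-plot recipe goes wrong.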
You could try "Ubiquity" by Mark Buchanan for the power law stuff, but it's been a while since I read it, so I can't vouch for it completely. (Confusingly, Amazon lists three books with that title and different subtitles, all by that author, all published around 2001-2002.)

Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.

It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are ... (read more)

First, define "friendly" in enough detail that I know that it's different from "will not blow up in our faces".
Ooh, good catch! wheninrome15 may need to define "will not blow up in our faces" in more detail as well.
Such an eventuality would seem to require that (a) human beings are not computable or (b) human beings are not Friendly. In the latter case, if nothing else, there is [individual]-Friendliness to consider.
I think human history has demonstrated that (b) is certainly true... sometimes I am surprised we are still here.
The argument from (b)* is one of the stronger ones I've heard against FAI. * Not to be confused with the argument from /b/ [].
Incidentally, /b/ might be good evidence for (b). It's a rather unsettling demonstration of what people do when anonymity has removed most of the incentive for signaling.
I find chans' lack of signaling highly intellectually refreshing. /b/ is not typical - due to ridiculously high traffic, only meme-infested threads that you can reply to in 5 seconds survive. Normal boards have far better discussion quality.
[anonymous] · 12y · 1 point

Are there any Germans, preferably from around Stuttgart, who are interested in forming a society for the advancement of rational thought? Please PM me.

I know I asked this yesterday, but I was hoping someone in the Bay Area (or otherwise familiar) could answer this:

Monica Anderson: Anyone familar with her work? She apparently is involved with AI in the SF Bay area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.

It all looks pretty flaky to me at this point, but I figure some of you must ... (read more)

Eliezer Yudkowsky · 12y · 3 points
Trust your intuition.
Is there a post about when to trust your intuition?
This comment [] shows when :) If you don't like that, I think this [] gives a somewhat better idea of when you should consider it.
It looks like a biology-inspired, predictive approach somewhat along the lines of Hawkins' HTMs, except that I've not seen her implementation details spelled out as thoroughly as Hawkins'. Her analysis seems sound to me (in the sense that her proposed model quite closely matches how humans actually get through the day), except that she seems to elevate certain practical conclusions to a philosophical level that's not really warranted (IMO). (Of course, I think there would likely be practical problems with AN-based systems being used in general applications -- humans tend to not like it when machines guess, especially if they guess wrong. We routinely prefer our tools to be stupid-but-predictable over smart-but-surprising.)

A couple of physics questions, if anyone will indulge me:

Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality? I was browsing A Brief History of Time at a bookstore, and the chapter on the Heisenberg uncertainty principle seemed to suggest the latter - what I read of it, anyway.

If this is just a dumb question for some reason, feel free to let me know - I've only taken two classes in physics, and we never escaped the Newtonian world.

On a related note, I'm looking for a ... (read more)

It explains everything microscopic. For example, the stability of atoms. Why doesn't an electron just spiral into the nucleus and stay there? The uncertainty principle means it can't be both localized at a point and have a fixed momentum of zero. If the position wavefunction is a big spike concentrated at a point, then the momentum wavefunction, which is the Fourier transform of the position wavefunction, will have a nonzero probability over a considerable range of momenta, so the position wavefunction will start leaking out of the nucleus in the next moment. The lowest energy stable state for the electron is one which is centered on the nucleus, but has a small spread in position space and a small spread in momentum "space". However, every quantum theory ever used has a classical conceptual beginning. You posit the existence of fields or particles interacting in some classical way, and then you "quantize" this. For example, the interaction between electron and nucleus is just electromagnetism, as in Faraday, Maxwell, and Einstein. But you describe the electron (and the nucleus too, if necessary) by a probabilistic wavefunction rather than a single point in space, and you also do the same for the electromagnetic field. Curiously, when you do this for the field, you get particles as emergent phenomena. A "photon" is actually something like a bookkeeping device for the probabilistic movement of energy within the quantized electromagnetic field. You can also get electrons and nucleons (and their antiparticles) from fields in this way, so everywhere in elementary particle physics, you have this "field/particle duality". For every type of elementary particle, there is a fundamental field, and vice versa. The basic equations that get quantized are field equations, but the result of quantization gives you particle behavior. Everyone wants to know how to think about the uncertainty in quantum physics. Is it secretly deterministic and we just need a better theory, or do
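The position/momentum tradeoff described above is easy to verify numerically; the Fourier transform plays exactly the role described. A minimal sketch (hbar = 1): a Gaussian wavefunction saturates the bound Δx·Δp = 1/2, and squeezing it in position space just pushes the spread into momentum space, leaving the product fixed.

```python
import numpy as np

# Numerical check of the uncertainty relation (hbar = 1). We compute the
# position spread of a Gaussian wavefunction and the momentum spread of
# its Fourier transform, then form the product Δx * Δp.
def uncertainty_product(sigma, N=4096, L=80.0):
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma**2))          # |psi|^2 has std sigma
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

    p = 2 * np.pi * np.fft.fftfreq(N, d=dx)       # momentum grid
    dp = p[1] - p[0]
    phi = np.fft.fft(psi)
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)   # normalize in p-space

    dx_spread = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
    dp_spread = np.sqrt(np.sum(p**2 * np.abs(phi)**2) * dp)
    return dx_spread * dp_spread

wide = uncertainty_product(sigma=2.0)    # product ~ 0.5
narrow = uncertainty_product(sigma=0.5)  # also ~ 0.5: the product is fixed
```

Replace the Gaussian with any sharper, non-Gaussian spike and the product rises above 1/2, which is the general statement of the uncertainty principle.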
I don't know if I'm the only person who thinks this is funny, but every theory in physics has a basis in naive trust in qualia, even if it's looking at the readout from an instrument or reading the text of an article.
I just take all scientific theories to ultimately be theories about phenomenal experience. No naive trust required.
What do you mean?
The conclusion may be that matter is almost entirely empty space, but you still have to let your interactions with the way you get information about physics use the ancient habit of assuming that what seems to be solid is solid.
I think you may misunderstand what the physics actually says. Compared to the material of neutron stars [], yes, terrestrial matter is almost entirely empty space ... but it's still resists changes to shape and volume []. And you don't need to invoke ancient habits anywhere - those conclusions fall right out of the physics [] without modification.
I'm beginning to think that I've been over-influenced by "gosh-wow" popular physics, which tries to present physics in the most surprising way possible. It's different if I think of that "empty space" near subatomic particles as puffed up by energy fields.
The Quantum Physics Sequence []
Thanks, but I was hoping for a quick answer. Working through that sequence is on my "definitely do sometime when I have nothing too important to do" list.
OK, a quick answer: classical physics cannot be true of the reality we find ourselves in. Specifically, classical physics is contradicted by experimental results such as the photoelectric effect and the double-slit experiment. The parts of reality that require you to know quantum physics affect such important things as chemistry, semiconductors and whether our reality can contain such a thing as a "solid object". The only reason we teach classical physics is that it is easier than quantum physics. If everyone could learn quantum physics, there would be no need to teach classical physics anymore.
First of all, thanks. Really? Isn't classical physics used in some contexts because the difference between the classical model and reality isn't enough to justify extra complications? I'm thinking specifically of engineers.
True. Revised sentence: the only reasons for using classical physics are that it is easier to learn, easier to calculate with and it helps you understand people who know only classical physics.
On the first point: I try never to categorize questions as intelligent or dumb, but is quantum mechanics an improvement? Unquestionably. To give only the most obvious example, lasers [] work by quantum excitation. I, too, would be interested in learning quantum mechanics from a good textbook.
I understand that Claude Cohen-Tannoudji et al.'s two-volume Quantum Mechanics [] is supposed to be exceptional, albeit expensive, time consuming to work through fully, and targeted at post-graduates rather than beginners. (Another disclaimer: I have not used the textbook myself.) Cohen-Tannoudji got the 1997 Nobel Prize in Physics [] for his work with...lasers!
It was my undergraduate textbook. It is certainly thorough, but other than that, I'm not sure I can strongly recommend it. (The typography is painful). I think starting with Quantum Computation and Quantum Information [] and hence discrete systems might be a better way to start, and then later expand to systems with continuous degrees of freedom.
I'm confused: "typography"? The font on the Amazon "LOOK INSIDE" [] seems perfectly legible to me.
The typesetting of the equations in particular. There were several things that hampered the readability for me -- like using a period for the dot product, rather than a raised dot. I expect a full stop to mean the equation has ended. Exponents are set too big. Integral signs are set upright, rather than slanted (conversely the "d"s in them are italicized, when they should be viewed as an operator, and hence upright). Large braces for case expansion of definitions are 6 straight lines, rather than smooth curves. The operator version of 1 is an ugly outline. The angle brackets used for bras and kets are ugly (though at least distinct from the less than and greater than signs). I'm not being entirely fair: these are really nits. On the other hand, these and other things actually made it harder for me to use the book. And it's not an easy book to start with.
Thanks for the elaboration. I'll bear that in mind if I have a chance to pick up a copy.

I'm looking at the question of whether it's certain that getting an FAI is a matter of zeroing in directly on a tiny percentage of AI-space.

It seems to me that an underlying premise is that there's no reason for a GAI to be Friendly, so Friendliness has to be carefully built into its goals. This isn't unreasonable, but there might be non-obvious pulls towards or away from Friendliness, and if they exist, they need to be considered. At the very least, there may be general moral considerations which incline towards Friendliness, and which would be... (read more)

I wonder how alarming people find this? I guess that if something fooms, this will provide the infrastructure for an instant world takeover. OTOH, the "if" remains as large as ever.

RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment.

Bringing a new meaning to the phrase "experience is the best teacher", the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, pavi

... (read more)

CFS: creative non-fiction about immortality

BOOK PROJECT: Immortality postmark deadline August 6, 2010

For a new book project to be published by Southern Methodist University Press, entitled "Immortality," we're seeking new essays from a variety of perspectives on recent scientific developments and the likelihood, merits and ramifications of biological immortality. We're looking for essays by writers, physicians, scientists, philosophers, clergy--anyone with an imagination, a vision of the future, and a dream (or fear) of living forever.

Essays must... (read more)

How does the notion of time consistency in decision theory deal with the possibility of changes to our brains/source code? For example, suppose I know that my brain is going to be forcibly re-written in 10 minutes, and that I cannot change this fact. Then decisions I make after that modification will differ from those I make now, in the presence of the same information (?).

"Forcibly rewritten" implies your being a different person afterwards. Naively, time consistency would suggest treating them as such.
Alex Flint · 12y · 2 points
But if a mind's source code is changed just a little then shouldn't its decisions be changed just a little too (for sufficiently small changes in source code)? If so, then what does time consistency even mean? If not, then how big does a modification have to be to turn a mind into "a different person", and why does such a dichotomy make sense?
Not necessarily: if (a < b) changing to if (a > b) is a very small change in source with a potentially very large effect.
Alex Flint · 12y · 2 points
Right, so I suppose what I should've said is that if I want to make some arbitrarily small change to the decisions made by mind X (measured as some appropriate quantity of "change") then there exists some change I could make to X's source code such that no decision would deviate by more than the desired amount from X's original decision. How to measure "change in decisions" and "change in source code" is all a bit fluffy but the point is just that there is a continuum of source code modifications from those with negligible effect to those with large effect. This makes it hard to believe that all modifications can be classified as either "X is now a different person" or "X is the same person" with no middle ground. And, if on the contrary middle ground is allowed, then what does time consistency mean in such a case?
Well, it's not much of a problem for me in particular, as I'm fairly generous toward other people as a rule - the main problem is continuity of values and desires. A random stranger is not likely to agree with me on most issues, so I'm not sure I want my resources to become theirs rather than Mom's. If there is likely to be significant continuity of a coherent-extrapolated-volition sort, I'd probably not worry.

If you were going to predict the emergence of AGI by looking at progress towards it over the past 40 years and extrapolate into the future, then what parameter(s) would you measure and extrapolate?

Kurzweil et al measure raw compute power in flops/$, but as has been much discussed on LessWrong there is more to AI than raw compute power. Another popular approach is to chart progress in terms of the animal kingdom, saying things like "X years ago computers were as smart as jellyfish, now they're as smart as a mouse, soon we'll be at human level", b... (read more)

In spite of the rather aggressive signaling here in favor of atheism, I'm still an agnostic on the grounds that it isn't likely that we know what the universe is ultimately made of.

I'm even willing to bet that there's something at least as weird as quantum physics waiting to be discovered.

Discussion here has led me to think that whatever the universe is made of, it isn't all that likely to lead to a conclusion there's a God as commonly conceived, though if we're living in a simulation, whoever is running it may well have something like God-like omnipotence... (read more)

The Bayesian translation of this is "I'm an atheist". Interesting. I'm not sure I know enough about Omega to say. But for one thing: I think it is probably impossible for Omega to predict its own future mental states (there would be an infinite recursion). This will introduce uncertainty into its model of the universe.
The justification for atheism over agnosticism is essentially Occam's Razor. As far as we know, there are no exceptions to physics as we understand it. So God/Gods explain nothing that isn't already explained by physics. So P(physics is true) >= P(physics is true AND God/Gods exist(s)).
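The inequality is just the conjunction rule; a quick numerical sanity check (the probabilities are toy values chosen only for illustration):

```python
# Conjunction rule: P(A and B) = P(A) * P(B|A) <= P(A), for any event B.
p_physics = 0.9            # toy value for P(physics as we understand it is true)
p_god_given_physics = 0.2  # toy value for P(God/Gods exist | physics is true)

p_both = p_physics * p_god_given_physics  # P(physics AND God/Gods)
assert p_both <= p_physics  # holds no matter what the two numbers are
```

The point doesn't depend on the toy numbers: a conjunction can never be more probable than either of its conjuncts.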
I've always taken Omega to be just a handy tool for thinking about philosophical problems. "Omega appears and tells you X" is short for "For the purposes of this conundrum, imagine that X is true, that you have undeniably conclusive evidence for X, and that the nature of this evidence and why it convinces you is irrelevant to the problem." In a case where X is impossible ("Omega appears and tells you that 2+2=3") then the conundrum is broken.

I have a couple of questions about UDT if anyone's willing to bite. Thanks in advance.

Mass Driver's recent comment about developing the US Constitution being like the invention of a Friendly AI opens up the possibility of a mostly Friendly AI-- an AI which isn't perfectly Friendly, but which has the ability to self-correct.

Is it more possible to have an AI which never smiley-faces or paperclips or falls into errors we can't think of than to have an AI which starts to screw up, but can realize it and stops?

It's not feasible to attempt to create a government which is both perfect and self-correcting. I'm not sure if the same is true of FAI.

Is anybody interested in finding a study buddy for the material on Less Wrong? I think a lot of the material is really deep -- sometimes hard to internalize and apply to your own life even if you're articulate and intelligent -- and that we would benefit from having a partner to go over the material with, ask tough questions, build trust, and basically learn the art of rationality together. On the off chance that you find Jewish analogies interesting or helpful, I'm basically looking for a chevruta partner, although the sacredish text in question would be the Less Wrong sequences instead of the Bible.


I decided the following quote wasn't up to Quotes Thread standard, but worth remarking on here:

Read a book a day.

Arthur C. Clarke, quoted in "Science Fictionisms".

I've never managed to do this. I've sometimes read a book in a day, but never day after day, although I once heard Jack Cohen say he habitually read three SF books and three non-fiction books a week.

How many books a week do you read, and what sort of books? (No lists of recommended titles, please.)

I pretty much don't read paper books - this format and medium might as well die as far as I'm concerned. I listen to a ridiculous number of audiobooks on a Sansa Clip (in fast mode). The Bay has plenty, or Audible if you can stand their DRM. My favourites have been TTC lectures, which have zero value to me other than entertainment. The idea behind audiobooks was mostly to do something useful at times when I cannot do anything else - but it does use cognitive resources and makes me more tired than if I were listening to music for the same amount of time. It's a very reliable relation.
I read about a book a week, almost exclusively non-fiction, generally falling somewhere between the popular science and textbook level. Occasionally I'll throw a sci-fi novel into the mix. I'd love to speed this up, since my reading list grows much faster than books get completed, but I'm not sure how (other than simply spending more time reading). Has anyone had luck with speed-reading techniques, such as Tim Ferriss's?
In some periods of my life I've read about a book a day (almost entirely fiction), but I mostly look back at those periods with regret, because I suspect my reading was largely driven by the desire to escape an unpleasant reality that I understood as inherent to existence rather than something contingent that I could do something about. As an adult I find myself reading non-fiction directed at life goals more often and fiction relatively less. Every so often I go 3 months without reading a book, but other times I get through maybe one a week; part of this is that non-fiction is generally just a slower read, because it has substantive content that must be practiced or considered before it really sticks. With math texts I generally slow down to maybe 1 to 10 pages a day. With non-fiction, I also tend to spend relatively a lot of time figuring out what to read, rather than simply reading it. When I become interested in a subject I don't mind spending several hours trying to map out the idea space and find "the best book" within that field. I've never made an effort to learn speed reading, because the handful of times I've met someone who claimed to be able to do it and was up for a test, their reading comprehension seemed rather low. We'd read the same thing, and then I'd ask them about details of motivation or implication, and they would have difficulty even remembering particular scenes or plot elements, let alone their implications. With speed reading, I sometimes get the impression that people are aiming for "having read X" as the goal, rather than "having read X and learned something meaningful from it".
The stuff Ferriss covers is normal enough. It's better to think of it as remedial reading techniques for people (most everyone) who don't read well than as speeding up past 'normal'. For example, if you're subvocalizing everything you read, You're Doing It Wrong. For your average LW reader, I'd suggest that anything below 300WPM is worth fixing.
After a fallow period I'm back to two-three a month as a fairly regular rhythm. Fiction has been pretty much eliminated from my reading diet (ten years ago it used to make up the bulk of it). Who else has a LibraryThing account or similar?
I have a LibraryThing here, which I generally do a bulk update of every 2-3 months (whenever I'm reminded I have it).
I recently got a GoodReads account - mainly because (a) it's on my iPhone and (b) it is just a reading list, rather than an owning list, so editions and such aren't such a hassle.
I'm on LibraryThing here, but I don't keep it up to date (I did a bulk upload in 2006 and have hardly touched it since), and most of my books that are too old to have ISBNs aren't there. My primary book catalogue isn't online.
Read: 2 books a year :(
Listen on iPhone through subscription: 2-3 books a month
Plan to read on iPad once I buy it: maybe one a month, and denser stuff than what they put on audio.
I have read over 5 books a day. I generally read less than one book a week though, as there are so many other things to consume, e.g. on the Internet.
Tyler Cowen also reads voraciously. I read up to 3 books a week, averaging around 0.3 due to long periods of avoiding them. The internet and Netflix are much more immediate and require less work. I read primarily science fiction and fantasy, but I have lots of classical fiction and non-fiction as well, Great Books style.

Question about Mach's principle and relativity, and some scattered food for thought.

Under Mach and relativity, it is only relative motion, including acceleration, that matters. Using any frame of reference, you predict the same results. GR also says that acceleration is indistinguishable from being in a gravitational field.

However, accelerations have one observable impact: they break things. So let's say I entered the gravitational field of a REALLY high-g planet. That can induce a force on me that breaks my bones. Yet I can define myself as being at ... (read more)

No. Moving non-rigidly breaks things. Differences in acceleration on different parts of things break things.
The classic pithy summary of this is "falling is harmless, it's the sudden stop at the end that kills you."
You know, really, neither falling nor suddenly stopping is harmful. The thing that kills you is that half of you suddenly stops and the other half of you gradually stops.
Well put. And the way I can fit this into an information-theoretic formalism is that one part of the body has high kinetic energy relative to the other, which requires more information to store.
Yes, but the sudden stop is itself a (backwards) acceleration, which should be reproducible merely from a gravitational field. (Anecdote: when I first got into aircraft interior monument analysis, I noticed that the crash conditions it's required to withstand include a forward acceleration of 9g, corresponding to a head-on crash. I naively asked, "wait, in a crash, isn't the aircraft accelerating backwards (aft)?" They explained that the criteria is written in the frame of reference of the objects on the aircraft, which are indeed accelerating forward relative to the aircraft.)
The sudden stop is a differential backwards acceleration. The front of the object gets hit and starts accelerating backwards while the back does not. If you could stop something by applying a uniform 10,000g to all parts of the object, it would survive none the worse for wear. If you can't, and only apply it to part, the object gets smushed or ripped apart.
Actually, from a frame of reference located somewhere on the breaking thing, wouldn't it be the differences in relative positions (not accelerations) of its parts that causes the break? After all, breakage occurs when (there exists a condition equivalently expressible as that in which) too much elastic energy is stored in the structure, and elastic energy is a function of its deformation -- change in relative positions of its parts.
Yes, change in relative positions causes the break. But differences in velocities caused the change in relative positions. And differences in acceleration caused the differences in velocities. Normally, you can approximate a planet's gravitational field as constant within the region containing a person, so it will cause a uniform acceleration, which will change the person's velocity uniformly, which will not cause any relative change in position. However, the strength of the gravitational field really varies inversely with the square of the distance to the center of the planet, so if the person's head is further from the planet than their feet, their feet will be accelerated more than their head. This is known as gravitational shear. For small objects in weak fields, this effect is small enough not to be noticed.
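To get a feel for the size of the effect, here is a rough back-of-the-envelope calculation for a person standing on an Earth-like planet (constants are approximate):

```python
# Difference in gravitational acceleration (g = GM/r^2) between the feet
# and the head of a 2 m tall person standing on an Earth-like planet.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.97e24       # planet mass (Earth's), kg
r_feet = 6.371e6  # planet radius (Earth's), m
r_head = r_feet + 2.0

g_feet = G * M / r_feet ** 2
g_head = G * M / r_head ** 2
shear = g_feet - g_head  # roughly 6e-6 m/s^2, vs. g of about 9.8 m/s^2
```

The differential acceleration is about a millionth of g, which is why the effect goes unnoticed for small objects in weak fields.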
Okay, thanks, that makes sense. So being in free fall in a gravitational field isn't really comparable to crashing into something, because the difference in acceleration across my body in free fall is very small (though I suppose it could be high for a small, ultra-dense planet). So, in free fall, the (slight) weakening of the gravitational field as you get farther from the planet should put your body in (minor) tension, since, if you stand as normal, your feet accelerate faster, pulling your head along. If you put the frame of reference at your feet, how would you account for your head appearing to move away from you, since the planet is pulling it in the direction of your feet?
Spaghettification.
Your feet are in an accelerating reference frame, being pulled towards the planet faster than your head. One way to look at it is that the acceleration of your feet cancels out a gravitational field stronger than that experienced by your head.
But I've ruled that explanation out from this perspective. My feet are defined to be at rest, and everything else is moving relative to them. Relativity says I can do that.
Relativity says that there are no observable consequences from imposing a uniform gravitational field on the entire universe. So, imagine that we turn on a uniform gravitational field that exactly cancels the gravitational field of the planet at your feet. Then you can use an inertial (non-accelerating) frame centered at your feet. The planet, due to the uniform field, accelerates towards you. Your head experiences the gravitational pull of the planet, plus the uniform field. At the location of your head the uniform field is slightly stronger than is needed to cancel the planet's gravity, so your head feels a slight pull in the opposite direction, away from your feet. An important principle here is that you have to apply the same transformation that lets you say your feet are at rest to the rest of the universe.
In a gravitational field steep enough to have nonnegligible tides (that is the phenomenon you were referring to, right?), there is no reference frame in which all parts of you remain at rest without tearing you apart. You can define some point in your head to be at rest, but then your feet are accelerating; and vice versa.

Sam Harris gave a TED talk a couple months ago, but I haven't seen it linked here. The title is Science can answer moral questions.

It was so filled with wrong I couldn't even bother to finish it, and I usually enjoy crackpots from TED.
Harris has also written a blog post nominally responding to 'many of my [Harris'] critics' of his talk, but it seems to be more of a reply to Sean Carroll's criticism of Harris' talk (going by this tweet and the many references to Carroll in Harris' post). Carroll has also briefly responded to Harris' response.
My reaction was: bad talk, wrong answers, not properly thought through.
He discusses that science can answer factual questions, thus resolving uncertainty in moral dogma defined conditionally on those answers. This is different from figuring out moral questions themselves.
That isn't all he is claiming though:
He does claim this, but it's not what he actually discusses in the talk.
I'm always impressed by Harris's eloquence and clarity of thought.

I was disallowed from posting this on the LessWrong subreddit, so here it is on the LessWrong mainland: Shoeperstimulus