All of thescoundrel's Comments + Replies

Overcoming bias guy meme | quickmeme

My apologies, I was meaning a more general "you", as in "the person who uses this phrase". Not directed at you you, just the common you, and you are certainly not the you I meant for "you" to refer to.

Overcoming bias guy meme | quickmeme

Fair enough- I should have chosen a clearer example.

0wedrifid8ySociopathy would be a perfect example (in the sense that I would expect them to be less likely to make terrible jokes like that). Dyslexia would work too. Definitely not schizophrenia though.
Overcoming bias guy meme | quickmeme

My complaint is that it is either a euphemism for autistic (in which case, just say autistic- if that feels "squicky", re-evaluate your statement), or it is so vague as to lose all meaning- someone with bipolar disorder is non-neurotypical, but is no more likely to have made these than anyone else.

If you do mean specifically autistic, you may want to broaden your understanding of autism. Autism is not a single standard presentation; it can present in many, many ways, including many that would not produce this type of image. The images are indicative of a poor grasp of humor, and a poor grasp of the original subject matter, but I do not see a higher probability of an autistic person creating these compared to the general population.

1wedrifid8yNot actually true (in the specific example, although I support your general objection about terminology misuse). In a hypo-manic phase someone is more likely to get caught up with the kinda-clever notion of making Hanson memes and get carried away dumping all his ideas, exercising less judgement and restraint than he otherwise would. (Of course some time later they would be able to look at their work and see why it isn't funny and delete it. They are also more likely than average to come up with a whole bunch of awesome memes.)
2wedrifid8y(It likely isn't your intention but I'm a little uncomfortable having these 'you' claims as replies to me when it isn't me to whom they apply.)
Caelum est Conterrens: I frankly don't see how this is a horror story

If that's the case, then I stand by my original point, if not to its extreme conclusion.

Caelum est Conterrens: I frankly don't see how this is a horror story

Ah- I read the preview version, I think that bit was added later. Thanks :)

MetaMed: Evidence-Based Healthcare

Wow- that is former MTG Pro Zvi, one of the best innovators in the game during his time. Awesome to see him involved in something like this.

Caelum est Conterrens: I frankly don't see how this is a horror story

The biggest horror aspect for me (also from the original) was that (rot13) nal aba-uhzna vagryyvtrapr unf ab punapr. Aba-uhzna vagryyvtrag yvsr trgf znqr vagb pbzchgebavhz, gb srrq gur rire tebjvat cbal fcurer. Vg vf gur gbgny trabpvqr bs rirel aba-uhzna enpr.

6iceman8y(rot13) Nyvraf ner rvgure cbavsvrq be abg qrcraqvat ba jurgure fur erpbtavmrf gurz nf uhzna. Vg raqf jvgu ure svaqvat na rknzcyr bs nyvraf gung fur guvaxf ner uhzna, juvyr fgebatyl vzcylvat gung fur'f qrfgeblrq fbpvrgvrf jvgubhg rira abgvpvat ("Fur unq frra znal cynargf tvir bss pbzcyrk, aba-erthyne enqvb fvtanyf, ohg hcba vairfgvtngvba, abar bs gubfr cynargf unq uhzna yvsr, znxvat gurz fnsr gb erhfr nf enj zngrevny gb tebj Rdhrfgevn.")
1gwern8yThat wasn't true in the original either. Many aba-uhznaf enprf got hcybnqrq.
Open Thread, January 1-15, 2013

I think that is fighting the hypothetical.

That's possible, but I am not sure how I am fighting it in this case. Leave Omega in place- why do we assume equal probability of Omega guessing incorrectly or correctly, when the hypothetical states he has guessed correctly each previous time? If we are not assuming that, why does CDT treat each option as equal, and then proceed to open two boxes?

I realize that decision theory is about a general approach to solving problems- my question is, why are we not including the probability based on past performance in our general approach to solving problems, or if we are, why are we not doing so in this case?
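A minimal sketch of how past performance could enter the calculation, using the standard $1,000 / $1,000,000 payoffs of Newcomb's problem and letting q stand for Omega's estimated accuracy (the function names and figures here are illustrative, not from the comment):

```python
def ev_one_box(q):
    """Expected value of taking only box B, when Omega predicts correctly with probability q."""
    # Box B contains $1,000,000 iff Omega correctly predicted one-boxing.
    return q * 1_000_000

def ev_two_box(q):
    """Expected value of taking both boxes, when Omega predicts correctly with probability q."""
    # Box B is empty iff Omega correctly predicted two-boxing;
    # the transparent box always holds $1,000.
    return q * 1_000 + (1 - q) * 1_001_000

# Setting the two equal gives the break-even accuracy q = 1001/2000 = 0.5005:
# any track record even slightly better than a coin flip favors one-boxing,
# and an Omega who has guessed correctly every previous time is far above that.
```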

[Discussion] The Kelly criterion and consequences for decision making under uncertainty

I made a comment early this week on a thread discussing the lifespan dilemma, and how it appears to untangle it somewhat. I had intended to see if it helped clarify other similar issues, but haven't done so yet. I would be interested in feedback- it seems possible that I have completely misapplied it in this case.

Open Thread, January 1-15, 2013

If in Newcomb's problem you replace Omega with James Randi, suddenly everyone is a one-boxer, as we assume there is some sleight of hand involved to make the money appear in the box after we have made the choice. I am starting to wonder if Newcomb's problem is just simple map and territory- do we have sufficient evidence to believe that under any circumstance where someone two-boxes, they will receive less money than a one-boxer? If we table the question of how it is done, and focus only on the testable probability of whether Randi/Omega is consistently accurate, we... (read more)

4TimS8yNo. I think that is fighting the hypothetical. More generally, the discipline of decision theory is not about figuring out the right solution to a particular problem - it's about describing the properties of decision methods that reach the right solutions to problems generally. Newcomb's is an example of a situation where some decision methods (eg CDT) don't make what appears to be the right choice. Either CDT is failing to make the right choice, or we are not correctly understanding what the right choice is. That dilemma motivates decision-theorists, not particular solutions to particular problems.
Some scary life extension dilemmas

Interestingly, I discovered the Lifespan Dilemma due to this post. While not facing a total breakdown of my ability to do anything else, it did consume an inordinate amount of my thought process.

The question looks like an optimal betting problem- you have a limited resource, and need to get the most return. According to the Kelly criterion, the optimal fraction of your total bankroll looks like f* = (p(b+1)-1)/b, where p is the probability of success, and b is the net return per unit risked. The interesting thing here is that for very large values of b, t... (read more)
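A minimal sketch of that calculation (taking b as the net return per unit risked, so a win pays b per unit staked and a loss costs the stake; the function name is mine):

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to wager.

    p: probability of winning the bet
    b: net return per unit risked
    """
    return (p * (b + 1) - 1) / b  # equivalently p - (1 - p) / b

# For an even-money bet (b = 1) won 60% of the time, this says to
# wager 20% of the bankroll. As b grows very large, the (1 - p)/b
# term vanishes and the optimal fraction approaches p itself.
```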

[Link] "An OKCupid Profile of a Rationalist"

My apologies if you felt I was handing out condemnation- it was not my intent at all. As I said, I did not think the reaction I had was the reaction you were aiming for. While upon consideration I don't think there is a valid harm to the OK Cupid posting, I was in no way attempting to say we shouldn't talk about it. I simply was noting that if persuasion is what you are after, there may be a better approach that does not trigger the squick feeling. It is also possible that I am a statistical anomaly in this (although I would say that the number of upvot... (read more)

[Link] "An OKCupid Profile of a Rationalist"

From my perspective, if you are in a place of prestige and you want to avoid damage to your image, hiding your quirks maximizes the chance that they will be discovered in a way that precludes you from controlling how they are released. If image malpractice is the issue, keeping this in the open is an inoculation against more damaging future revelation. The trade-off is that you may lose credibility up front. Given EY's eschewing of the "normal" routes to academic success, and the profound strangeness that some of the ideas we take for granted h... (read more)

Random LW-parodying Statement Generator

Eliezer Yudkowsky is what acausal sex feels like from the inside.

Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but counterfactual hugging.

Decision Theories, Part 3.5: Halt, Melt and Catch Fire

So- does the whole problem go away if instead of trying to deduce what fairbot is going to do with masquerade, we assume that fairbot is going to assess it as if masquerade = the current mask? By ignoring the existence of masquerade in our deduction, we both solve the Gödel inconsistency and simultaneously ensure that another AI can easily determine that we will be executing exactly the mask we choose.

Masquerade deduces the outcomes of each of its masks, ignoring its own existence, and chooses the best outcome. Fairbot follows the exact same process... (read more)

0orthonormal9yThe problem with that is something I outlined in the previous post: this agent without a sanity check is exploitable. Let's call this agent TrustingBot (we'll see why in a minute), and have them play against a true TDT agent. Now, TDT will cooperate against FairBot, but not any of the other masks. So TrustingBot goes ahead and cooperates with TDT. But TDT notices that it can defect against TrustingBot without penalty, since TrustingBot only cares what TDT does against the masks; thus TDT defects and steals TrustingBot's lunch money. See how tricky this gets?
Decision Theories, Part 3.5: Halt, Melt and Catch Fire

What happens if the masks are spawned as subprocesses that are not "aware" of the higher level process monitoring them? The higher level process can kill off the subprocesses and spawn new ones as it sees fit, but the mask processes themselves retain the integrity needed for a fairbot to cooperate with itself.

0orthonormal9yThis is the case with Masquerade: within the hypotheticals, the masks and opponents play against each other as if theirs were the "real" round.
In Defense of Tone Arguments

Ahh, that wonderfully embarrassing moment when you realize your small group has been calling Crocker's rules by the wrong name for almost a year.

In Defense of Tone Arguments

A rationalist who doesn't consider the effects of tone when attempting to effect a change in someone's thinking is not dealing in reality. There is a reason Becker's Rules have to be asked for and agreed to, even among rationalists- we are not built to automatically separate tone from content, and there are times when even the most thoughtful of us are personally vulnerable to a harsh tone. We tend to simplify to "two Bayesians updating on evidence", but in reality, we have to consider the best way to transmit that message, as well as the outcome ... (read more)

8thescoundrel9yAhh, that wonderfully embarrassing moment when you realize your small group has been calling Crocker's rules by the wrong name for almost a year.
5wedrifid9yBecker's Rules? Surprisingly relevant.
3TimS9yCrocker's rules?
Magic players: "How do I lose?"

This is a very fine line to walk, especially in magic. Finding the places you could have made better decisions, while understanding what decisions you could not have made better with the information you had at the time, is not an easy task- although at my skill level, it is generally easier to assume I made a poor decision and find it.

5TheOtherDave9yI don't know much about magic, but yes, when games have a significant random element it's often difficult to tell the difference between an optimal strategy that just happened to lose this time, and a suboptimal strategy.
Magic players: "How do I lose?"

Probably the best example of how do I win. In this match, the gut reaction would be to use the direct damage spell in hand to clear away one of the creatures, and hope for either a big creature draw or some other game-changing spell. Instead, knowing the ONLY way he could win was if the card on top of his deck was direct damage, he aimed the direct damage spell in hand directly at his opponent, and then just flipped over the top card- if you only have one path to victory, you have to ignore all other paths, no matter how tempting, or how much it feels like the wrong play.

3TheOtherDave9y...without losing sight of the fact that having allowed myself to get into a position where my only path to victory requires a low-probability event that I don't control was already a huge mistake that I should confidently expect to result in my failure.
Group rationality diary, 6/25/12

Realized I was in a slump regarding my band, due to one poor audience response at one show. Changed my focus to a few salient facts- we have turned a profit (albeit a modest one) for three years, we have three albums and enough written music for a fourth, and contracts for 25 days of performance already in place. Changed focus into thoughts on how to grow our audience, and whether we would be better suited to comedy clubs instead of bars.

Thwarting a Catholic conversion?

The world looks pretty scary when we try to look at it as it really is. As much as we try to account for it, at some level we are a function of that which we observe and take in- from that viewpoint, it is not difficult to imagine a scenario where the information you take in becomes skewed along lines that don't match up with reality. Given enough skewed data, we all make choices that appear irrational from other eyes.

Sadly, none of us rank information told us once as highly as information we "discover" for ourselves. I don't know if the convers... (read more)

4Manfred9yParagraph breaks, man. Paragraph breaks.
Sneaky Strategies for TDT

Just for a moment, let us consider TDT as a property. By defining the rules around the TDT property, the question is not whether the agent should one-box or two-box; the question has become whether the agent can fool Omega in such a way as to maximize its utility. As long as we grant that Omega can always simulate TDT correctly, then the choice becomes clear- if Omega correctly recognizes the TDT trait, or we are unable to calculate, we take only box B; otherwise we two-box.

How can we get more and better LW contrarians?

This reminds me of days in +x debate, where the topic was set in advance, and you were assigned to oppose or affirm each round. Learning to find persuasive arguments for ideas you do not actually support is not an intuitive skill, but certainly one that can be learned with practice. I, for one, would greatly enjoy +x debate over issues in the Less Wrong community.

Our Phyg Is Not Exclusive Enough

That's possible- it may be that the cost of doing this effectively is not worth the gain, or that there is a less intensive way to solve this issue. However, I think there could be benefits to a tiered structure- perhaps even have the levels be read-only for those not there yet- so everyone can read the high signal-to-noise content, but we still make sure to protect it. I do know there is much evidence to suggest that prestige, even among small groups, is enough to motivate people to do things that would normally be considered an absurd waste of time.

Our Phyg Is Not Exclusive Enough

I think the Freemasons have this one solved for us: instead of passwords, we use interview systems, where people of the level above have to agree that you are ready before you are invited to the next level. Likewise, we make it known that helpful input on the lower levels is one of the prerequisites to gaining a higher level- we incentivise constructive input on the lower tiers, and effectively gate access to the higher tiers.

4Alsadius9ySo, who is going to sit on the interview committee to control access to a webforum? You're asking more of the community than it will ever give you, because what you advocate is an absurd waste of time for any actual person.
2Percent_Carbon9yYou're not proposing a different system, you're just proposing additional qualifiers.
9Bugmaster9yWhy does this solution need to be so global ? Why don't we simply allow users to blacklist/whitelist other users as they see fit, on an individual basis ? This way, if someone wants to form an ultra-elite cabal, they can do that without disturbing the rest of the site for anyone else.
Our Phyg Is Not Exclusive Enough

Reading the comments, it feels like the biggest concern is not chasing away the initiates to our phyg. Perhaps tiered sections, where demonstrable knowledge in the last section gains you access to higher levels of signal to noise ratio? Certainly would make our phyg resemble another well known phyg.

0buybuydandavis9yInstead of setting up gatekeepers, why not let people sort themselves first? No one wants to be a bozo. We have different interests and aptitudes. Set up separate forums to talk about the major sequences, so there's some subset of the sequences you could read to get started. I'd suggest too that as wonderful as EY is, he is not the fount of all wisdom. Instead of focusing on getting people to shut up, how about focusing on getting people to add good ideas that aren't already here?
0Viliam_Bur9yDepending on other factors, it could also resemble a school system.
5Bugmaster9yThis sounds like a good idea, but I think it might be too difficult to implement in practice, as determined users will bend their efforts toward guessing the password in order to gain access to the coveted Inner Circle. This isn't a problem for that other phyg, because their access is gated by money, not understanding.
1TrE9ySounds like a good idea, would be an incentive for reading and understanding the sequences to many people and could raise the quality level in the higher 'levels' considerably. There are also downsides: We might look more phyg-ish to newbies, discussion quality at the lower levels could fall rapidly (honestly, who wants to debate about 'free will' with newbies when they could be having discussions about more interesting and challenging topics?) and, well, if an intelligent and well-informed outsider has to say something important about a topic, they won't be able to. For this to be implemented, we'd need a user rights system with the respective discussion sections as well as a way to determine the 'level' of members. Quizzes with questions randomly drawn from a large pool of questions with a limited number of tries per time period could do well, especially if you don't give any feedback about the scoring other than 'you leveled up!' and 'Your score wasn't good enough, re-read these sequences:__ and try again later.' And, of course, we need the consent of many members and our phyg-leaders as well as someone to actually implement it.

Maybe we should charge thousands of dollars for access to the sequences as well? And hire some lawyers...

More seriously, I wonder what people's reaction would be to a newbie section that wouldn't be as harsh as the now-much-harsher normal discussion. This seems to go over well on the rest of the internet.

Sort of like raising the price and then having a sale...

0[anonymous]9yRationology? Edit: I apologize.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

I forget that when I listen to it, I have the background of the story and buildup already, so I start with different expectations- perhaps not the best example.

0Alsadius9yAlso, I've listened to a fair bit of weird proggy music.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84


Seems very clear at this point that Q. cannot predict Harry's actions, and that he was responsible for Hermione's framing. Truth is entangled, Harry is very clever, especially when not under a time crunch- this seems very likely to me.

1Alsadius9yI think the probability might be that high given narrative requirements(i.e., Harry will near-certainly figure it out, Potter books usually end at the end of school years and it's April, and we know that the series is in the homestretch), but I'd put an in-universe probability without reference to that data a vastly lower chance.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

Here is a quarter tone scale. While the changes are detectable right next to each other, much like sight delivers images based on pre-established patterns, so does hearing. When laid out in this fashion, you can hear the quarter tone differences- although to my ears (and I play music professionally, have spent much time in ear training, and love music theory) there are times it sounds like the same note is played twice in succession. Move out of this context, into an interval jump, and while those with good relative pitch may think it sounds "pitchy"... (read more)

I just tried some experiments and I find that if I take Brahms's lullaby (which I think is the one Eliezer means by "Lullaby and Goodnight") and flatten a couple of random notes by a quarter-tone, the effect is in most cases extremely obvious. And if I displace each individual pitch by a random amount from a quarter-tone flat to a quarter-tone sharp, then of course some notes are individually detectable as out of tune and some not but the overall effect is agonizing in a way that simply getting some notes wrong couldn't be.

I'm a pretty decent (th... (read more)

Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

The vast majority of humans don't have perfect pitch, so the specific pitch of a note is far less important than its relationship to the notes surrounding it. I agree that he is rather showing off, but unless you have spent a very large amount of time ear training, you likely cannot tell when a note is a quarter tone sharp or flat. However, just as there are cycles of notes that always sound amazing together when you run them through variations (see the circle of fifths), there are notes that sound horrible and jarring. Furthermore, the amount of time it ta... (read more)

1gjm9yEven without a lot of ear training, you can quite likely hear if a note is a quarter-tone out relative to its predecessors and successors.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

I don't think you need to even venture into the world of quarter pitches in order to create horrible humming. For an idea of a song that twists your expectations of keys, time signatures, and melodic progression, and breaks them in specific ways to ramp tension, check out "Epiphany" from Sweeney Todd.

0Alsadius9yI didn't really notice anything wrong with that. it jumped around a lot, and it wasn't especially good, but it didn't much bother me.
0Percent_Carbon9yThere's a continuous spectrum of pitch. The character is kind of showing off, like he always kind of is. He's probably hitting notes that are multiples of irrational numbers when described in Hertz. Retracted because it seemed the best way to acknowledge the correction: the vast majority of common musical notes are multiples of irrational numbers when described in Hertz.
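For concreteness about the quarter tones discussed above: in equal temperament (assuming the conventional A4 = 440 Hz reference, which the discussion doesn't specify), a quarter tone splits the semitone in half, so adjacent pitches differ by a factor of 2^(1/24), roughly 50 cents or 2.9% in frequency:

```python
# A one-octave quarter-tone scale: 24 equal divisions of the octave
# instead of the usual 12.
A4 = 440.0
step = 2 ** (1 / 24)  # one quarter tone, about a 2.9% jump in frequency

scale = [A4 * step ** n for n in range(25)]
# scale[0] is A4 (440 Hz), scale[24] is A5 (880 Hz), and every second
# entry lands on an ordinary semitone; the entries in between are the
# "in the cracks" pitches that sound out of tune against 12-tone music.
```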
Harry Potter and the Methods of Rationality predictions

I was thinking a full-blown sequel, where part of Voldie's plan is revealed, but not nearly all of it- we resolve the issues surrounding the stone, and perhaps gain some clearer insight into what Dumbledore's motivation is, but the war is still on.

Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

I would think the real key to horrible humming would not be to have it be uniformly horrible, but so close to brilliant that the horrible notes punctuate and pierce the melody so completely that it starts driving you mad- a song filled with unresolved suspensions, minor 2nds where they just should not belong, that then somehow modulates into something which sounds normal just long enough for you to think you are safe, when it collapses again, and the new key is offensive both to the original and to the modulation. This is not just random sound; this is purposeful songwriting, with the intent to unsettle- in my mind, something like Sondheim at his most twisted, but without any resolution ever.

1David_Gerard9ySee, I'm the sort of person that reads that and wants to buy that record. Probably from the small ads in the back of The Wire. (Breaking musical rules sufficiently horribly is a well-established way to win at music, even if you're unlikely to get rich from it. Metal Machine Music actually got reissued and people actually bought it.)
7[anonymous]9yWell, first we're dealing with variations on a specific tune. The reason I suspect that random variations might work well is that if the probability of a change is sufficiently low, it would have exactly the effect you suggest: mostly the original "Lullaby and Goodnight", but with occasional horrible. Of course, if I were actually a cruel genius, I could do better, but it would be foolish of me to admit to being one. Another reason random changes might work well is that they are by definition unexpected. If I did something purposeful, it would have a pattern; the real Quirrell might break that pattern by observing his victim's reactions, but not having a pattern at all might also be an interesting thing to try.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

“People aren't either wicked or noble. They're like chef's salads, with good things and bad things chopped and mixed together in a vinaigrette of confusion and conflict.” -- The Grim Grotto

9Velorien9yI suspect Voldemort is less likely to produce a true Patronus. The Patronus 2.0 comes from facing death and rejecting it. Voldemort certainly rejects death, but it doesn't seem like he's faced it the way Harry has. Voldemort: "This 'death' thing is horrible, get it away from me! I'll tear apart my very soul if that's what it takes to escape death!" Harry: "You dare threaten me and the people I feel responsible for, you pitiful little leftover of the evolutionary process? I will end you if it's the last thing I do." Admittedly, this is based more on a canon portrayal of Voldemort, since MoR!Voldemort's views on the subject have yet to be made explicit (and he seems altogether more emotionally healthy than the canon version).
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

Just for fun, consider this: Quirrelmort is more likely to be able to produce a true Patronus than Dumbledore, as Quirrelmort understands that death should be avoided. Patronus 2.0 as the power the Dark Lord knows not?

5thomblake9yObviously all the good guys are anti-death and bad guys are pro-death.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

In a rather large "Oh Duh" moment: if Harry knew about the stone, he would insist it be used on everyone. Barring some unforeseen mechanism that prevents its mass use, he would view Dumbledore as Evil for knowing how to keep everyone alive, and not acting on it.

Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

Prediction: Harry's investigation to clear Hermione's name leads him to Quirrelmort's true identity.

Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84

1. Unless you have supreme power over everyone, you are very likely to need help from other people, and evil inhibits your ability to gain that help.

2. Evil causes cascade ripples with consequences that are very hard to see- large numbers of people you don't know about having personal vendettas against you, etc.

3. It is hard to inspire people to your cause with evil- the people you are using must at least think they are acting in accordance with good, and at some level have what we would consider a "good" set of rules for how they deal with each other.

Harry Potter and the Methods of Rationality predictions

Possibility that this fiction concludes year one, with Harry and Voldemort still alive, and a second book picks up with year two?

0gwern9yWhat would a 'second book' be? A full-blown official sequel? (Without a resolution of the plot, I'd wonder if it really makes sense to dub any division the end of 'book 1' and the start of 'book 2'.) Might be better to predict about sequels in general.
Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82

I take a slightly broader view: a device in which the final elements of the piece inspire intense curiosity in the reader/viewer.

0bogdanb9yThat would be a reasonable view in most cases, but after the first five or so paragraphs I can’t really think of any point in MoR, end of chapter or otherwise, that didn’t inspire me with intense curiosity about what’s next...
Harry Potter and the Methods of Rationality discussion thread, part 11

I phrased that poorly- if her wand wasn't used, then it goes a long way toward clearing her name. If it was, then Harry starts tracking down suspects, in order to find the wand that cast the memory charm. Either way, it's an investigative tool that still hasn't been used.

SotW: Be Specific

Genie's Folly

A near-omnipotent being is offering you a single wish. It is known that the Genie will attempt to implement the wish in a way that results in a net decrease of utility for the wisher, but is bound by any constraints explicitly written into the wish. Write your wish in such a way that the Genie can only implement it in a way that gives you a net increase in utility. Bonus points if you wish for something related to a current problem you are solving; e.g., I wish I ran a successful startup with x properties, which avoids y pitfalls in z ways.

0Zaq9yConstraint: Within the next two seconds, you must perform only the tasks listed, which you must perform in the specified order. Task 1. Exchange your definition of decrease with your definition of increase Task 2. --insert wish here-- Task 3. Self-terminate This is of course assuming that I don't particularly care for the genie's life.
2handoflixue9yConstraint: This must result in a net increase in utility for me...
1TheOtherDave9yMy response to this would be a blank sheet of paper, personally.
Open Thread, April 1-15, 2012

Looks like Zach Weiner at SMBC might be reading up on FAI.

3Nisan9yAnd/or Nozick's utility monster.
4cousin_it9yOr maybe on utility monsters. Also see Hacking the CEV for fun and profit.
Should logical probabilities be updateless too?

Our set of possible worlds comes from somewhere, some sort of criteria. Whatever generates that list passes it to our choice algorithm, which begins branching. Let's say we receive an observation that contains both logical and indexical updates- could we not just take our current set of possible worlds, with our current set of data on them, update the list against our logical update, and pass that list on to a new copy of the function? The collection remains fixed as far as each copy of the function is concerned, but retains the ability to update on new information. When finished, the path returned will be the most likely given all new observations.

Should logical probabilities be updateless too?

Perhaps I am missing the obvious, but why is this a hard problem? So our protagonist AI has some algorithm to determine if the millionth digit of pi is odd- he cannot run it yet, but he has it. Let's call that function f(), which returns 1 if the digit is odd, or 0 if it is even. He also has some other function like:

    sub pay_or_no {
        if (f()) {
            pay(1000);
        }
    }

In this fashion, Omega can verify the algorithm that returns the millionth digit of pi, independently verify the algorithm that pays based on that return, and our protagonist gets his money.

4cousin_it9y!!!! This seems to be the correct answer to jacobt's question. The key is looking at the length of proofs. The general rule should go like this: when you're trying to decide which of two impossible counterfactuals "a()=x implies b()=y" and "a()=x implies b()=z" is more true even though "a()=x" is false, go with the one that has the shorter proof. We already use that rule when implementing agents that compute counterfactuals about their own actions. Now we can just implement Omega using the same rule. If the millionth digit of pi is in fact odd, but the statement "millionth digit of pi is even => agent pays up" has a much shorter proof than "millionth digit of pi is even => agent doesn't pay up", Omega should think that the agent would pay up. The idea seems so obvious in retrospect, I don't understand how I missed it. Thanks!
SotW: Check Consequentialism

To me, this comes down to what I am trying to learn as my anti-akrasia front kick: I cache the question "Why am I doing what I am doing?". While I lose some amount of focus to the question itself, I have gained key insights into many of my worst habits. For instance, my employer provides free soft drinks- I found that I would end up with multiple open drinks at my desk. The cached question revealed I was using the action of getting a drink whenever I felt the need to stretch and leave my desk. Browsing reddit too much at work- cached question c... (read more)

Harry Potter and the Methods of Rationality discussion thread, part 11

Yes, but we have now found a thread that we can pull on to start establishing a true map. The truth is entangled- so then we find the student whose wand was stolen, or we start testing the wands of our prime suspects. At the very least, we have introduced an inconsistency in the story- when would Hermione have had the chance to steal a wand? Draco called this duel- are we now to believe that Hermione showed up believing she would be defeated and stole a wand in advance, so she could kill Draco? I don't know if this is enough to forestall the vote, but it c... (read more)

Harry Potter and the Methods of Rationality discussion thread, part 11

Just a piece, but one I haven't seen discussed- why has no one done a Priori Incantatem on Hermione's wand? We know Harry knows about it, from clear back in chapter 13:

"Priori Incantatem," said Sprout. She frowned. "That's odd, your wand doesn't seem to have been used at all." Harry shrugged.

I don't know if this is part of Harry's plan, but it is certainly a line of investigation that has not been followed. There is always the possibility that whoever did the memory charm used Hermione's wand to cast the blood chilling hex, but once down that track Harry can start eliminating suspects for the memory charm.

2Luke_A_Somers9yAnd if her wand did it? Quite possible, but it wouldn't help.
4glumph9yEven if that test is performed and it is proven that Hermoine's wand was not used to cast the Blood-Cooling/Chilling Charm, Lucius et al. will simply claim that Hermoine stole another student's wand before the duel.
1pedanterrific9yWait, it actually says that? Oops- Priori Incantatem is the brother-wand effect from GoF, the investigative spell is Prior Incantato.