All of HonoreDB's Comments + Replies

I'm not sure the recursive argument even fully works for the stock market, these days--I suspect it's more like a sticky tradition that crudely mimics the incentive structure that used to exist, like a parasitic vine that still holds the shape of the rotted-away tree it killed. When there's any noise, recursion amplifies it with each iteration: a 1-year lookahead to a 1-year lookahead might be almost the same as a 2-year lookahead, but it's slightly skewed by wanting to take into account short-scale price movements and different risk and... (read more)

Sure, but it's really hard to anticipate which side will benefit more, so in expected value they're equal. I'm sure some people will think their side will be more effective in how it spends money...I'll try to persuade them to take the outside view.

I think those contributors will probably not be our main demographic, since they have an interest in the system as it is and don't want to risk disrupting it. In theory, though, donating to both parties can be modeled as a costly signal (the implied threat is that if you displease me, the next election I'll only donate to your opponent), and there's no reason you can't do that through our site.

It seems to be implicit in your model that funding for political parties is a negative-sum arms race.

What army1987 said. The specific assumption is that on the margin, the effect of more funding to both sides is either very small or negative.

In my own view, the most damaging negative-sum arms race is academia.

This is definitely an extendable idea. It gets a lot more complicated when there are >2 sides, unfortunately. Even if they agreed it was negative-sum, someone donating $100 to Columbia University would generally not be equally happy to take $100 away from Harvard. I don't know how to fix that.

I'm happy to specify completely, actually, I just figured a general question would lead to answers that are more useful to the community.

In my case, I'm helping to set up an organization to divert money away from major party U.S. campaign funds and to efficient charities. The idea is that if I donate $100 to the Democratic Party, and you donate $200 to the Republican party (or to their nominees for President, say), the net marginal effect on the election is very similar to if you'd donated $100 and I'd donated nothing; $100 from each of us is being cancel... (read more)

1 Azathoth123 · 8y
That the ratios of the marginal benefits of a dollar for the two parties are 1:1 is not at all obvious.
4 Salemicus · 8y
It seems to be implicit in your model that funding for political parties is a negative-sum arms race. This is starkly at odds with much of political thinking, which sees funding for political parties as a positive-sum game. This is expressed by public subsidies for political parties, in such forms as public funding, matching funding, or tax deductibility of political donations, depending on where you reside. Political parties turn funding into votes by getting their message out to voters, so the more funding political parties have, the better informed an electorate we will have. Moreover, to the extent that funding for getting your message out becomes a less binding constraint, other constraints (such as the persuasiveness of that message) will become more binding - which seems like a good thing. I guess it just goes to show that one person's public good is another person's public nuisance. In my own view, the most damaging negative-sum arms race is academia. Perhaps you will inspire me to set up my own 501(c)(3) to allow matching donations to universities to be diverted to political parties.
2 ChristianKl · 8y
After thinking about the issue a bit, here's an edge case that's worth thinking about: what happens when someone personally donates X amount of money to a party, then donates Y via your process, and X+Y together exceed the maximum allowable donation?

I think you might be underestimating the amount of money in politics that comes from large organized contributors who give money to both parties for purposes of making the system in general beholden to them rather than favoring one ideology over the other.

You should probably chat with Sai, of Make Your Laws. (http://s.ai/) He's spent a bunch of time recently petitioning the FEC to answer questions about various crazy ways his organization would like to funnel donations. (Specific technical questions, like: "If someone gives us a donation whose recipient is conditional on a condition that won't be known until 6 months from now, [question about how some regulation applies].") I bet he can at least help you find answers.

5 ChristianKl · 8y
I recommend crossposting the request for information to http://www.effective-altruism.com/ . Maybe someone knows someone who can help. It's worthwhile to spread the request so that many people see it.

What's the best way to get (U.S.) legal advice on a weird, novel issue (one that would require research and cleverness to address well)? Paid or unpaid, in person or remotely.

(For that matter, if anyone happens to be interested in donating good legal advice to a weird, novel non-profit organization, feel free to contact me at histocrat at gmail dot com).

8 ChristianKl · 8y
It probably includes finding a person with expertise on the subject matter. That means it's easier if you reduce the level of abstractness and specify the issue at least a bit.

Arthur Prior's resolution is to claim that each statement implicitly asserts its own truth, so that "this statement is false" becomes "this statement is false and this statement is true".

Pace your later comments, this is a wonderfully pithy solution and I look forward to pulling it out at cocktail parties.

I like people's attempts to step outside the question, but playing along...

LW-rationalists value thinking for yourself over conformity. A LW sport might be a non-team sport like fencing, a team sport in which individuals are spotlighted, like baseball, or a sport that presents constant temptation to follow cues from your teammates but rewards breaking away from the pack.

LW-rationalists value cross-domain skills. A LW sport might involve a variety of activities, like an n-athlon, or facing a quick succession of opponents who all trained together so that les... (read more)

the effects of poverty & oppression on means & tails

Wait, what are you saying here? That there aren't any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we're arguing about whether there should be a program to give a battery of genius tests to every poor adult in India.

The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.

Not all of them, I don't th... (read more)

"Oppenheimer wasn't privileged, he was only treated slightly better than the average Cambridge student."

I'm sorry, I never really rigorously defined the counter-factuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.

3 Vaniver · 9y
I don't see the relevance, because to me "Einsteins in sweatshops" means "Einsteins that don't make it to Cambridge", for some Cambridge equivalent. If Ramanujan had died three years earlier, and thus not completed his PhD, he would still be in the history books. I mean, take Galois as an example: repeatedly imprisoned for political radicalism under a monarchy, and dies in a duel at age 20. Certainly someone ruined by circumstances--and yet we still know about him and his mathematical work. In general, these counterfactuals are useful for exhibiting your theory but not proving your theory. Either we have the same background assumptions--and so the counterfactuals look reasonable to both of us--or we disagree on background assumptions, and the counterfactual is only weakly useful at identifying where the disagreement is.

Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept. If you're poor, you don't just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition.

Even if the mean mathematical ability in Indians were innately low (I'm qu... (read more)

-2 Vaniver · 9y
The specific oppressions you led off with: yes. I thought we were talking about Oppenheimer and Cambridge? It looks like if Oppenheimer hadn't had rich parents who lobbied on his behalf, he might have actually gotten probation instead of merely being threatened with it. Given his instability, that might have pushed him into a self-destructive spiral, or maybe he just would have progressed a little slower through the system. So, yes, jumping from "the university is unhappy" to "the state hangs you" is a gross exaggeration. (Universities are used to graduate students being under a ton of stress, and so do cut them slack; the response to Oppenheimer of "we think you need to go on vacation, for everyone's safety" was 'normal'.)
5 gwern · 9y
I'm perfectly happy to accept the existence of oppression, but I see no need to make up ways in which the oppression might be even more awful than one had previously thought. Isn't it enough that peasants live shorter lives, are deprived of stuff, can be abused by the wealthy, etc? Why do we need to make up additional ways in which they might be oppressed? Gould comes off here as engaging in a horns effect: not only is oppression bad in the obvious concrete well-verified ways, it's the Worst Thing In The World and so it's also oppressing Einsteins! Not what Gould hyperbolically claimed. He didn't say that 'at the margin, there may be someone who was slightly better than your average mathematician but who failed to get tenure thanks to some lingering disadvantages from his childhood'. He claimed that there were outright historic geniuses laboring in the fields. I regard this as completely ludicrous due both to the effects of poverty & oppression on means & tails and due to the pretty effective meritocratic mechanisms in even a backwater like India. It absolutely is. Don't confuse the fact that there are quite a few brilliant Indians in absolute numbers with a statement about the mean - with a population of ~1.3 billion people, that's just proving the point. The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand. Really? Then I'm sure you could name three examples. Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity. Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?

I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Tur... (read more)

Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation.

A gross exaggeration; execution was never in the cards for a poisoned apple which was never eaten.

Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy.

Likewise. Goedel didn't go crazy until long after he was famous, and so your conjugate is in no way showing 'privi... (read more)

Huh, that does make a lot more sense. I guess I'd been assuming that any reference to someone "averting" a prophecy was actually just someone forcing the better branch of an EitherOrProphecy (tvtropes). Like if Trelawney had said "HE WHO WOULD TEAR APART THE VERY STARS IN HEAVEN IF NONE STAND AGAINST IT." The inference that prophecies don't always come true fits Quirrell's behavior much better.

Quirrell seems to have been counterfactually mugged by hearing the prophecy of the end of the world...which would mean his decision theory, and psychological commitment to it, are very advanced.

Assume Quirrell believes that the only possible explanation of the prophecy he heard is that the apocalypse is nigh. This makes sense: prophecies don't occur for trivial events like a visitor to Hogwarts destroying books in the library named "Stars in Heaven" and "The World," and the idea of "the end of the world" being a eucatastrophe ... (read more)

8 Eliezer Yudkowsky · 9y
Upvoted for the word 'eucatastrophe'.

Upvoted because it is an interesting parallel, but this is unlikely to be an explanation of Quirrell's actions. See Chapter 86:

More than the question of whom the prophecy spoke - who was meant to hear it? It is said that fates are spoken to those with the power to cause them or avert them.

Quirrell believes that he can cause or prevent the "end of the world" prophecy, and is gambling that helping Harry increases the chance of "prevent" rather than "cause". A better chance was to dissuade Harry - that would increase the chance of "prevent" even more - but Quirrell's just realized that he can't do that.

T was supposed to do a bit more than it did, but it had some portability bugs so I hastily lobotomized it. All it's supposed to do now is simulate the opponent twice against an obfuscated defectbot, defect if it cooperates both times, otherwise play mimicbot. I didn't have the time to add some of the obvious safeguards. I'm not sure if K is exploiting me or just got lucky, but at a glance, what it might be doing is checking whether the passed-in bot can generate a perfect quine of itself, and cooperating only then. That would be pretty ingenious, since typically a quine chain will go "original -- functional copy -- identical copy -- identical copy", etc.
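For concreteness, here is a rough Python sketch of T's post-lobotomy logic. The tournament's actual language and interfaces differed, and `simulate`, `obfuscated_defectbot`, and the mimicbot fallback here are stand-ins of my own, not the real tournament code:

```python
import random

COOPERATE, DEFECT = "C", "D"

def obfuscated_defectbot(opponent):
    # Stand-in for a defectbot disguised behind junk computation.
    _ = sum(random.random() for _ in range(10))
    return DEFECT

def simulate(bot, opponent):
    # Stand-in for sandboxed simulation of `bot` playing `opponent`;
    # here it is just a direct call.
    return bot(opponent)

def T(opponent):
    # Simulate the opponent twice against an obfuscated defectbot;
    # if it cooperates both times, it isn't really inspecting us, so defect.
    if all(simulate(opponent, obfuscated_defectbot) == COOPERATE
           for _ in range(2)):
        return DEFECT
    # Otherwise play mimicbot: do whatever the opponent does against us.
    return simulate(opponent, T)
```

Against a CooperateBot the first branch fires and T defects; against a DefectBot the mimic branch also ends up defecting.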

4 selbram · 9y
You're right. K is a MimicBot with an additional check for proper quining. I primarily intended it to cause defection against CooperateBots, RandomBots, and others that don't simulate their opponents meaningfully. I expected a lot more MimicBot variants and mutual cooperations...

The bad news is there is none. The good news is that this means, under linear transformation, that there is such a thing as a free lunch!

I'm standing at a 4-way intersection. I want to go to the best restaurant at the intersection. To the west is a three-star restaurant, to the north is a two-star restaurant, and to the northwest, requiring two street-crossings, is a four-star restaurant. All of the streets are equally safe to cross except for the one in between the western restaurant and the northern one, which is more dangerous. So going west, then north is strictly dominated by going north, then west. Going north and eating there is strictly dominated by going west and eating there. This me... (read more)

0 SilasBarta · 10y
Where is reality's corresponding utility gain?

This seems like a good sketch of the endgame for histocracy, my own pie-in-the-sky organizational scheme. If you start with people voluntarily transitioning management of a resource they own to an open histocratic system with themselves as the judges, and then iterate and nest and stuff, you get something like this in the limit. I hadn't been able to envision it quite as elegantly as you do here.

0 whpearson · 10y
Looks interesting. Will have to read it properly at some point. Any plans to test it in the real world? Or how you might encourage people to test it?

In my discipline? I guess

Write code that's easy to update without breaking dependent code.

That'll save the ancient programmers of the 1950's some time.

If I were trying to build up programming from scratch, it'd get pretty hairy.

Build a machine that, when "x = 1.1; while (10. - x*x > .0001) x = x - ((x * x - 10.) / (10.*x)); display x" is entered into it, displays a value close to the ratio of the longest side of a right triangle to another side expressed as the sum of 0 or 1 times the lengths of successive bisections.
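Transcribed into Python (my transcription), the riddle's program is a fixed-point iteration whose limit is the square root of 10, i.e. the hypotenuse-to-leg ratio of a 1 x 3 right triangle, and the "sum of 0 or 1 times the lengths of successive bisections" is just that number's binary expansion:

```python
x = 1.1
while 10.0 - x * x > 0.0001:
    # Each step nudges x toward the fixed point where x*x == 10.
    x = x - ((x * x - 10.0) / (10.0 * x))

print(x)  # close to sqrt(10) ~ 3.1623
```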

I came here to refer you to John Holt, but since User:NancyLebovitz already did that, I'll just add that I'm amused that your handle is Petruchio.

Unfortunately you need access to a comparably-sized bunch of estimates in order to beat the market. You can't quite back it out of a prediction market's transaction history. And the amount of money to be made is small in any event because there's just not enough participation in the markets.

And the amount of money to be made is small in any event because there's just not enough participation in the markets.

Aren't prediction markets just a special case of financial markets? (Or vice versa.) Then if your algorithm could outperform prediction markets, it could also outperform the financial ones, where there is lots of money to be made.

In prediction markets, you are betting money on your probability estimates of various things X happening. On financial markets, you are betting money on your probability estimates of the same things X, plus your estimate of the effect of X on the prices of various stocks or commodities.

Irrationality Game

Prediction markets are a terrible way of aggregating probability estimates. They only enjoy the popularity they do because of a lack of competition, and because they're cheaper to set up due to the built-in incentive to participate. They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes). The performance problems of prediction markets are not just due to liquidity issues, but would inevi... (read more)
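The comment leaves the "naive histocratic algorithm" unspecified beyond "weighted average based on past predictor performance using Bayes"; here is one minimal sketch, where the likelihood-based weighting scheme is my own assumption rather than anything the thread pins down:

```python
def weighted_estimate(predictors):
    """Aggregate probability estimates, weighting each predictor by the
    likelihood of their past track record.

    `predictors` is a list of (record, estimate) pairs, where `record`
    is a list of (stated_probability, outcome) tuples with outcome 1/0.
    """
    weights, estimates = [], []
    for record, estimate in predictors:
        w = 1.0
        for p, outcome in record:
            # Probability the predictor assigned to what actually happened.
            w *= p if outcome == 1 else (1.0 - p)
        weights.append(w)
        estimates.append(estimate)
    total = sum(weights)
    if total == 0:
        # Degenerate case: fall back to a plain average.
        return sum(estimates) / len(estimates)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# A predictor with a good track record dominates the aggregate:
good = ([(0.9, 1), (0.8, 1), (0.2, 0)], 0.8)
bad = ([(0.9, 0), (0.8, 0), (0.2, 1)], 0.3)
print(weighted_estimate([good, bad]))  # ~0.797, vs. plain average 0.55
```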

0 [anonymous] · 9y
Congratulations. You have discovered a way to make a fortune. Mind you, while you're making your fortune you will be making your prediction wrong. That's the point of markets: if you can beat them, you get paid to improve them.
0 MixedNuts · 10y
Downvoted for agreement, but prediction markets still win because they're possible to implement. (Will change to upvote if you explicitly deny that too.)
-3 [anonymous] · 10y
If you think Prediction Markets are terrible, why don't you just do better and get rich from them?
1 RichardKennaway · 10y
A new word to me. Is this [http://histocracy.tripod.com/id5.html] what you're referring to?
8 Kaj_Sotala · 10y
Markets can incorporate any source or type of information that humans can understand. Which algorithm can do the same?
2 AspiringRationalist · 10y
Down-voted for semi-agreement. There are simply too many irrational people with money, and as soon as it became popular to participate in prediction markets, the way it currently is to participate in the stock market, they will add huge amounts of noise.

They do slightly worse than simply averaging a bunch of estimates, and would be blown out of the water by even a naive histocratic algorithm (weighted average based on past predictor performance using Bayes)

Fantastic. Please tell me which markets this applies to and link to the source of the algorithm that gives me all the free money.

Yup. The propositions need to be such that you can get more confident than that.

5 JGWeissman · 11y
My point was that being biased to answer "true", even if "false" is more likely to be correct, is a rational strategy. This problem could be eliminated if the good effects of the correct answer being "true" were independent of getting the right answer. That is, if the correct answer is "true" you get 10 points, and if you answer correctly you get 1 point. That way, you want the answer to be "true", but it is not rational to let this have any effect on your answer.

My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they're very likely to pick one even if the real criminal's not there, whereas if people are leafing through a big book of mugshots they're less likely to make a false positive identification.

She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they're pretty sure Pl... (read more)

This seems like it'll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system and have people reacted well?

Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey's kisses, or tickets good for 1 point).

Choose a box: Have two actual boxes, labeled "TRUE" and "FALSE". Before the class comes in, the instructor writes a proposition on the blackboard, such a... (read more)

4 JGWeissman · 11y
If the prize for correctly answering "true" is 10 times as good as the prize for correctly answering "false", then you really should be about 91% confident the correct answer is "false" before you give that answer.
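The 91% figure checks out: answering "false" only has higher expected value once P(false) exceeds 10/11, about 0.909. A quick check, using the payoff structure from the comment (function name mine):

```python
def best_answer(p_false, prize_true=10, prize_false=1):
    """Return the expected-value-maximizing answer, given your
    probability that the correct answer is 'false'."""
    ev_true = prize_true * (1 - p_false)   # win prize_true if answer is 'true'
    ev_false = prize_false * p_false       # win prize_false if answer is 'false'
    return "false" if ev_false > ev_true else "true"

print(best_answer(0.90))  # 'true'  -- 90% sure isn't enough
print(best_answer(0.92))  # 'false' -- past 10/11 ~ 0.909, 'false' wins
```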

It seems likely that God would create multiple realities, populated by different sorts of people and/or with different True Religions, to feed a diverse set of people into a shared heaven. So the recursive realities would have a pyramid or lattice structure. If God has limited knowledge of the realities he's created, there could even be cycles.

0 [anonymous] · 11y
We have cycled through the realms of mere brilliance at top speed and plunged head-first into the unfathomable depths of recursive genius. It's a trap: like being in orbit, one is trapped in a jump from which one cannot land, on account of constantly missing the ground. In plain English, my mind is blown. If we go into Godel Escher Bach territory I might have a lot of trouble following. Really, when I wrote my request, I was expecting something a lot more mundane. Kind of like the society in 1984. With my apologies to fellow Muslims everywhere, you'd be amazed how much the Qur'an sounds like a lot of propaganda posters glued together once you replace "Allah" with "Big Brother" and "Lord" with "Leader". I suppose we could combine "hands-off totalitarian dictatorship" (paradoxical, I know) with "recursive realities", couldn't we?

God is, himself, in a world filled with vague, ambiguous, sometimes contradictory hints towards a divine meta-reality. He's confused, anxious, and doesn't trust his own judgment. So he's created the Abrahamic world in order to identify the people who somehow manage to arrive at the truth given a similar lack of information. One of our religions is correct--guess right and you go to Heaven to help God try to get to Double Heaven.

6 Nisan · 11y
This is now the subject of an smbc comic [http://www.smbc-comics.com/index.php?db=comics&id=2616#comic].

This reminds me of one of the stories in David Eagleman's 2009 fiction anthology Sum: Forty Tales from the Afterlives, "Spirals":

In the afterlife, you discover that your Creator is a species of small, dim-witted, obtuse creatures. They look vaguely human, but they are smaller and more brutish. They are singularly unintelligent. They knit their brows when they try to follow what you are saying. It will help if you speak slowly, and it sometimes helps to draw pictures. At some point their eyes will glaze over and they will nod as though they unde

... (read more)
5 [anonymous] · 11y
That sounded like something right out of a Jorge Luis Borges novel... But where does the recursion stop? Can we hypothesize that it's Turtles All The Way Down?

Okay, I see that that's what you're saying. The assumption then (which seems reasonable but needs to be proven?) is that the simulated humans, given infinite resources, would either solve Oracle AI [edit: without accidentally creating uFAI first, I mean] or just learn how to do stuff like create universes themselves.

There is still the issue that a hypothetical human with access to infinite computing power would not want to create or observe hellworlds. We here in the real world don't care, but the hypothetical human would. So I don't think your specific idea for brute-force creating an Earth simulation would work, because no moral human would do it.

I'm slightly worried that even formally specifying an "idealized and unbounded computer" will turn out to be Oracle-AI-complete. We don't need to worry about it converting something valuable into computronium, but we do need to ensure that it interacts with the simulated human(s) in a friendly way. We need to ensure that it doesn't modify the human to simplify the process of explaining something. The simulated human needs to be able to control what kinds of minds the computer creates in the process of thinking (we may not care, but the human w... (read more)

9 paulfchristiano · 11y
We are trying to formally specify the input-output behavior of an idealized computer, running some simple program. The mathematical definition of a Turing machine with an input tape would suffice, as would a formal specification of a version of Python running with unlimited memory.

a papercut doesn't leave much if any blood on the paper... as the paper moves away fast enough that blood doesn't even have time to flow on it.

It is possible to engineer, though, if you're manipulating the paper with great telekinetic precision. I accidentally bloodstained a book that way when I was about Harry's age.

N-player rock-paper-scissors variants. They generally involve everybody standing in a circle facing inward shaking their fists three times and chanting in unison, and looking back I feel like they do have a community-building effect. But they bypass the filter because they're competitive, and are presumably appealing to LW people because they involve memorizing a large ruleset and then trying to game it.

0 Ronny Fernandez · 11y
I'll work on that and edit my result to here. Thanks.

Having seen the exchange that probably motivated this, one note: in my opinion, events can be linked both causally and acausally. The linked post gives an example. I don't think that's an abuse of language; we can say that people are simultaneously communicating verbally and non-verbally.

That is awesome! They actually put out their data.

Pretty much, Krugman successfully predicted that the downturn would last a while (2-6,8,15), made some obvious statements (7,9,12,16,17,18), was questionably supported on one (11), was unfairly said to miss another (14), hit on a political prediction (10), and missed on another (13).

He was 50-50 or said nothing, except for successfully predicting that the downturn would take at least a couple of years, which wasn't going out too far on a limb itself.

So I can't say that I'm impressed much with the authors of... (read more)

Incidentally, the best way to make conditional predictions is to convert them to explicit disjunctions. For example, in November I wanted to predict that "If Mitt Romney loses the primary election, Barack Obama will win the general election." This is actually logically equivalent to "Either Mitt Romney or Barack Obama will win the 2012 Presidential Election," barring some very unlikely events, so I posted that instead, and so I won't have to withdraw the prediction when Romney wins the primary.
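The claimed equivalence can be machine-checked once the "very unlikely events" are excluded explicitly; the encoding of those exclusions below is my own reading of which events the comment means:

```python
from itertools import product

def check_equivalence():
    # Propositions: L = "Romney loses the primary",
    #               R = "Romney wins the general",
    #               O = "Obama wins the general".
    # Excluded as "very unlikely": both candidates winning, Romney winning
    # the general after losing the primary, and a third party beating
    # nominee Romney and Obama.
    for L, R, O in product([True, False], repeat=3):
        if (R and O) or (L and R) or (not L and not R and not O):
            continue
        conditional = (not L) or O   # "if Romney loses the primary, Obama wins"
        disjunction = R or O         # "either Romney or Obama wins"
        if conditional != disjunction:
            return False
    return True

print(check_equivalence())  # True: equivalent over all remaining cases
```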

3 Douglas_Knight · 11y
While that may be best with current PB, I think conditional predictions are useful. If you are only interested in truth values and not the strength of the prediction, then it is logically equivalent, but the number of points you get is not the same. The purpose of a conditional probability is to take a conditional risk. If Romney is nominated, you get a gratuitous point for this prediction. Of course, simply counting predictions is easy to game, which is why we like to indicate the strength of the prediction, as you do with this one on PB. But turning a conditional prediction into an absolute prediction changes its probability and thus its effect on your calibration score. To a certain extent, it amounts to double counting the prediction about the hypothesis.
-1 drethelin · 11y
This is less specific than the first prediction. The second version loses the part where you predict Obama will beat Romney.

it didn't treat mild belief and certainty differently;

It did. Per the paper, the confidences of the predictions were rated on a scale from 1 to 5, where 1 is "No chance of occurring" and 5 is "Definitely will occur". They didn't use this in their top-level rankings because they felt it was "accurate enough" without that, but they did use it in their regressions.

Worse, people get marked down for making conditional predictions whose antecedent was not satisfied!

They did not. Per the paper, those were simply thrown out (... (read more)

5 Larks · 11y
Sure, so we learn about how confidence is correlated with binary accuracy. But they don't take into account that being very confident and wrong should be penalised more than being slightly confident and wrong. I misread; you are right


This objection is not entirely valid, at least when it comes to Krugman. Krugman scored 17/19 mainly on economic predictions, and one of the two he got wrong looks like a pro-Republican prediction.

From their executive summary:

According to our regression analysis, liberals are better predictors than conservatives—even when taking out the Presidential and Congressional election questions.

From the paper:

Krugman...primarily discussed economics...

4 buybuydandavis · 11y
That's a good point. I didn't read the whole thing, as the basic premise seemed flawed. That does seem like real information about Krugman's accuracy. I'd still wonder about the independence of the predictions, though. Did the paper authors address that issue at all? I saw noise about "statistical significance", so I assumed not. Are the specific predictions available online? It seemed like they had a large sample of predictions, so I doubt they were in the paper. This gives my estimate of Krugman a slight uptick, but without the actual predictions and results, this data can't do much more for me.

That's right, Emotion. Go ahead, put Reason out of the way! That's great! Fine! ...for Hitler.

--1943 Disney cartoon

I think Quirrell is working with an unconventional definition of Dark. Something like "in violent opposition to you."

Or you cast the spell after doing the deed, and that one time they were too busy fleeing/claiming this wasn't what it looked like/getting castigated/getting dressed.

...just how many pregnancies has McGonagall caused, anyway?

Most of the stuff I was hoping for hasn't panned out thus far. The ebook gets a few downloads each week, mostly as referrals from the HPMoR fan art page.

2 gwern · 11y
That's too bad. Maybe you should just re-release it for free so you at least get some readers?

See also this exchange on the tvtropes forum, where EY clarifies at least one of his reasons for removing the Griphook line.

0 Alsadius · 11y
It's a bit late to chime in there, but if EY actually thinks that dying people have no interest in things that extend life, he's crazy.

Yeah, it was a total cheat. That's why I put my anagram in the Dramatis Personae.

2 gwern · 11y
Incidentally, what's happened with that play since I left my comment?

I'd walk through the roulette box (sounds like fun!) but not the torture box.
