The day-to-day cognitive skills I've mastered most completely (I will not say "rationalist skills," because this is true of my countless irrational skills too) are the ones which I learned during a moment of strong emotion — any emotion, excitement or curiosity or joy or surprise or depression or fear or betrayal.

In the case of this particular skill, it was betrayal. I brought it on myself — the details aren't important; suffice it to say that I spent two weeks living in the "should-universe" (I like this term) before a rude reminder of reality — but the emotion, the physical neuroendocrine experience of betrayal, was quite real. And I've been able to return to it ever since: if I'm ever in a situation where I might be working from a cached plan, I can relive a hint of it and ask myself, "Now, you don't want to feel that again, do you?"

Unfortunately, this experience strongly ties the five-second skill of "check consequentialism" to the emotion of betrayal in my mind. It is very easy for me to construct social experiments in which the teacher radically betrays her students, and then turns around and says, "Don't let anyone do that to you again!" But that is horrible teaching. It's a lot more difficult for me to imagine what "check consequentialism" would feel like if it carried a strictly positive emotional association, and then extrapolate outward to what kind of social situation would provide that emotional/cognitive link.

Students must abandon a cached plan, and evaluate the real-world consequences of their actions instead, at precisely the moment they get a strong positive emotional charge. Preferably "fun." Preferably in the sense of a party game, not a strategy game: both because people who have learned to win without disrupting social bonds (or who care more about winning than about socialization) have often already learned this skill, and because the moment I construct "winning" as a state which disrupts social bonds, I've set up a false dilemma which misleads my students about what rational thought actually is.

But what's the chain of causation? A dispassionate experimenter times the payoff to correlate with the decision? That seems awfully Pavlovian. Leaving the plan causes a reward which provides an emotional payoff? Maybe, but if a student only leaves the plan in expectation of reward, they haven't actually learned anything beyond the latest professorial password. The excitement of getting the right answer to a puzzle inspires leaving the plan? I suspect this is the way to go. But then what sort of puzzle?

I'm going to press the "Comment" button now, even though I don't think I've contributed much beyond a restatement of your original dilemma. Perhaps having done so, I'll think of some specific scenarios overnight.

Once upon a time I scored a 42 on the Putnam. Two decades later I placed 23rd at the World Puzzle Championships. I'd be happy to help if I can.

But honestly? This website holds many mathematicians far better than I. Really, I'm replying more from a desire to assuage my own curiosity than from a strong belief that there exists a problem that only I can solve. All I can promise you is that if I don't know the answer, I'll say so.

ETA: If anything, folks, this comment is worth downvoting for committing terrible math while bragging about being good at it. Not that anybody here could have caught the error, but however much eleven years between events might feel like two decades, it really, really isn't.

Oh, that and for commenting on Luke's post when he specifically asked for emails.

I generally resolve this issue with the observation that the awareness of misery takes quite a lot of coherent brainpower. By the time my perceptions are 200 years old, I suspect that they won't be running on a substrate capable of very much computational power — that is, once I pass a certain (theoretically calculable) maximum decrepitude, any remaining personal awareness is more likely to live in a Boltzmann brain than in my current body.

You see, once the vast majority of possible worlds perceive that I am dead, how likely is it that I will still have enough working nerves to accept any new sensory input, including pain? How likely is it that I'll be able to maintain enough memories to preserve a link to my 2011-era self? How likely is it that my awareness, running on a dying brain, will process thoughts at even a fraction of my current rate?

I suspect that after death, I'll quickly drift into an awareness that's so dreamlike, solipsistic, and time-lapsed that it's a bit iffy calling me an awareness at all. I may last until the end of time, but I won't see or do anything very interesting while I'm there. And no worries about the universe clotting with ghosts: as my entropy increases, I'll quickly become mathematically indistinguishable from everyone else, just as one molecule of hydrogen is very like another.

Quantum immortality is pretty certainly real, but it also has to add up to normality.

(Ooh, I like that first problem. It reframes in all sorts of interesting directions.)

Speaking only for myself: Eliezer's sequences first lured me to Less Wrong, but your posts on decision theory were what convinced me to stick around and keep checking the front page.

I confess I don't understand all of the math. It's been decades since I studied mathematics with any rigour; these days I can follow some fairly advanced theory, but have difficulty reproducing it, and cannot in general extend it. I have had nothing substantial to add, and so I haven't previously commented on any of your posts. Somehow LW didn't seem like the best place to go all fangirl…

It's not only the contributors who are following your work on decision theory. I hope you'll continue to share it with the rest of us.

Well, perhaps this answers Yvain's question on the thread above: if we link to the original post, instead of quoting it, then its "next" buttons will work…

Well, how embarrassing. Ten months of lurking and I still hadn't noticed that for myself. Thank you!

My own biggest annoyance, after discovering this site last summer and delving into the Sequences, is that it was often very difficult to figure out which post came next.

Finding dependencies was easy — even when there wasn't an explicit "Follows:" tag at the start, Eliezer's generosity with hyperlinks meant that I could quickly assemble a screenful of tabs — but whenever I finished a particularly exciting post, especially one I'd reached on the third hyperlink down, I didn't know how to find its follow-up. Early on, I didn't even know how to guess which Sequence it was part of.

By now I know about the "all posts by year" lists, but as a newbie I couldn't find them. And if I had found them, I wouldn't have known which posts were relevant from their titles alone. I'd have used a naïve all-Eliezer-all-the-time heuristic, and assembled the same list that you're intelligently avoiding.

And … honestly … even if there were a single, coherent, easy-to-find, chronological list of all Sequence posts … the act of going there, looking up the post I just finished, and visiting the one beneath it is just the sort of trivial inconvenience that discourages new readers. It's easy, but it's not obvious. We can make it easier.

So as long as we're re-running the Sequences from a template — could there please be a "next" button?

Pray forgive me that I dodge this question. My brother prefers to keep his personal & professional lives separate; now that I've outed myself as his sister on a Googlable forum, I feel awkward identifying precisely whose sister I am.

I didn't like it at all, the first time I read it.

Many years later, after reading and enjoying A Deepness in the Sky, I gave it another try, and this time liked it very much. Even though the books were written in the opposite order, I wonder whether it helps to read Deepness first.
