lsusr

# Sequences

True Stories
Rationality
How to Write
Bayeswatch
Luna Lovegood
Sunzi's《Methods of War》
Short Stories
Antimemetics

# Wiki Contributions

Fascinating. You're one of the names on Less Wrong that I associate with positive, constructive dialogue. We may have a scissor statement here.

I appreciate your earnest attempt to understand what I'm writing. I don't think "weirdos/normies" nor "Critical thinkers/uncritical thinkers" quite point at what I'm trying to point at with "independent/conventional".

"Independent/dependent" is about whether what other people think influences you to reach the same conclusions as other people. "Weirdos/normies" is about whether you reach the same conclusions as other people. In other words, "weirdos/normies" is correlation. "Independent/dependent" is causation in a specific direction. Independent tends to correlate with weirdo, and dependent tends to correlate with normie, but it's possible to have either one without the other.

You are correct that critical thinkers may want to censor uncritical thinkers. However, independent-minded thinkers do not want to censor conventional-minded thinkers.

Your comment is not a censure of me.

I didn't feel the need to distinguish between censorship of ideas and censorship of independent-minded people, because censorship of ideas censors the independent-minded.

> give enough examples to know what kind of exceptions to look for

I deliberately avoided examples for the same reason Paul Graham's What You Can't Say deliberately avoids giving any specific examples: because either my examples would be mild and weak (and therefore poor illustrations) or they'd be so shocking (to most people) they'd derail the whole conversation.

Did you read the Paul Graham article I linked? Do you disagree with it too?

Independent-mindedness is multi-dimensional. You can be more independent-minded in one domain than another.

I made my November 20, 2023 08:58:05 UTC post between the dip and the recovery.

November 20, 2023 19:54:45 UTC

Result: Microsoft has gained approximately \$100B in market capitalization.

November 20, 2023 08:58:05 UTC

If my phone weren't broken right now, I'd be creating a Robinhood (or whatever) account so I could long Microsoft. Ideally I'd buy shares, but calls (options to buy) are fine.

Why? Because after the disaster at OpenAI, Satya Nadella just hired Sam Altman to work for Microsoft directly.

My deontological terminal value isn't to causally win. It's for FDT agents to acausally lose. Either I win, or the FDT agents abandon FDT. (Which proves that FDT is an exploitable decision theory.)

> I'm not sure I see the pathological case of the problem statement (an agent whose utility function is "do the worst possible action to agents who exactly implement [Specific Decision Theory]") as a problem either. Do you have a specific idea how you would get past this?

There's a Daoist answer: Don't legibly and universally precommit to a decision theory.

But the exploit I'm trying to point to is simpler than Daoist decision theory. Here it is: Functional decision theory conflates two decisions:

1. Use FDT.
2. Determine a strategy via FDT.

I'm blackmailing contingent on decision 1 and not on decision 2. I'm not doing this because I need to win. I'm doing it because I can. Because it puts FDT agents in a hilarious lose-lose situation.

The thing FDT disciples don't understand is that I'm happy to take the scenario where FDT agents don't cave to blackmail. Because of this, FDT demands that FDT agents cave to my blackmail.
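The lose-lose structure claimed above can be sketched as a toy enumeration. This is an illustration of the argument's shape only; the outcome labels and the function name are my own assumptions, not anything from the original exchange:

```python
# Toy sketch of the claimed lose-lose: the blackmail is triggered by the
# target's choice to *use* FDT (decision 1), not by whatever strategy FDT
# then outputs (decision 2). Outcome labels are illustrative assumptions.

def blackmailer_outcome(target_caves: bool) -> str:
    """Return the blackmailer's result for each response the FDT agent can make."""
    if target_caves:
        # The FDT agent pays up: the blackmailer wins outright.
        return "win: concession extracted"
    # The FDT agent refuses and eats the penalty: the blackmailer is happy
    # anyway, because this outcome makes FDT look exploitable.
    return "win: FDT shown to lose against this blackmailer"

# Enumerate both branches: each one is a win for the blackmailer.
outcomes = {caves: blackmailer_outcome(caves) for caves in (True, False)}
for caves, result in outcomes.items():
    print(f"target caves={caves} -> {result}")
```

Whatever the FDT agent does, the enumeration returns a "win" branch for the blackmailer, which is the lose-lose situation being described.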

Correct. The last time I was negotiating with a self-described FDT agent I did it anyway. 😛

My utility function is "make functional decision theorists look stupid", which I satisfy by blackmailing them. Either they cave, which means I win, or they don't cave, which demonstrates that FDT is stupid.