Comments

Right. The 100 arguments the person gives aren't 100 parallel arguments in favor of them having good reasons to believe evolution is false, for exactly the reason you give. So my reasoning doesn't stop you from concluding that they have no good reason to disbelieve.

And, they are still 100 arguments in parallel that evolution is false, and my reasoning in the post correctly implies that you can't read five of them, see that they aren't good arguments, and conclude that evolution is true. (That conclusion requires a good argument in favor of evolution, not a bad one against it.)

Yeah. I should have made it clear that this post is prescribing a way to evaluate incoming arguments, rather than describing how outgoing arguments will be received by your audience.

Alternate framing: if you already know that criticisms coming from one's outgroup are usually received poorly, then the fact that they are received better when coming from the ingroup is a hidden "success mode" that perhaps people could use to make criticisms go down easier somehow.

Idea: "Ugh-field trades", where people trade away their obligations that they've developed ugh-fields for in exchange for other people's obligations. Both people get fresh non-ugh-fielded tasks. Works only in cases where the task can be done by somebody else, which won't be every time but might be often enough for this to work.

or 10^(+/- 35) if you're weird

Excuse you, you mean 6^(+/- 35)!

This is a nice story, and nicely captures the internal dissonance I feel about cooperating with people who disagree with me about my "pet issue", though like many good stories it's a little simpler and more extreme than what I actually feel.

This could be a great seed for a short story. The protagonist can supposedly see the future but actually they're just really really good at seeing the present and making wise bets. 

May I see it too? 

Asking because the post above advised me to purchase cheap chances at huge upsides and this seems like one of those ^^

This is a lovely post and it really resonated with me. I've yet to really orient myself in the EA world, but "fix the normalization of child abuse" is something I have in my mind as a potential cause area. Really happy to hear you've gotten out, even if the permanent damage from sleep deprivation is still sad.

I just caught myself committing a bucket error.

I'm currently working on a text document full of equations that use variables with extremely long names. I'm in the process of simplifying it by renaming the variables. For complicated reasons, I have to do this by hand.

Just now, I noticed that there's a series of variables O1-O16, and another series of variables F17-F25. For technical reasons relating to the work I'm doing, I'm very confident that the name switch is arbitrary and that I can safely rename the F's to O's without changing the meaning of the equations.

But I'm doing this by hand. If I'm wrong, I will potentially waste a lot of work by (1) making this change, (2) making a bunch of other changes, (3) realizing I was wrong, (4) undoing all the other changes, (5) undoing this change, and (6) re-doing all the changes that came after it.

And for a moment, this spurred me to become less confident about the arbitrariness of the naming convention!

The correct thought would have been: "I'm quite confident about this, but since the stakes are high if I'm wrong and I can always do it later, it's still not worth making the change now."

The problem here was that I was conflating "X is very likely true" with "I must do the thing I would do if X were certain". I knew instinctively that making the changes now was a bad idea, and then I incorrectly reasoned that this was because the change was likely to go wrong. It's actually unlikely to go wrong; it's just that if it does go wrong, it's a huge inconvenience.
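The tradeoff here can be sketched as a quick expected-cost comparison. All the numbers below are made up purely for illustration; the point is only that a small probability of being wrong can still dominate the decision when the downside is large:

```python
# Illustrative expected-cost comparison for "rename now" vs. "rename later".
# Every number here is a hypothetical assumption, not from the actual situation.

p_wrong = 0.02           # assumed chance the naming convention isn't arbitrary
cost_undo_redo = 20.0    # assumed hours to unwind the rename plus every change stacked on it
cost_defer = 0.1         # assumed hours of inconvenience from postponing the rename

act_now = p_wrong * cost_undo_redo  # expected cost of renaming now
defer = cost_defer                  # cost of waiting until it's cheap to be wrong

# Even though p_wrong is small (the belief really is "very likely true"),
# deferring wins: the decision is driven by the asymmetric cost, not the probability.
print(defer < act_now)
```

So the instinct not to rename yet was right, but the justification should have pointed at the cost asymmetry, not at a revised probability.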

Whoops.
