
# Wiki Contributions

Ah! That sounds like a great one!

So, folks like Chris Ferguson are presumably doing both activities (judging how much evidence they have, as well as accurately translating brain estimates to numerical estimates).

But if I go find a consistently successful poker player who does not translate brain estimates to numerical estimates, then I could see how that person does on calibration exercises. That sounds like a fun experiment. Now I just need to get the grant money ...

Sidenote, but how would I narrow down to the successful poker players who don't translate brain estimates to numerical estimates? I mean, I could always ask them up front, but how would I interpret an answer like "I don't really use numbers all that much. I just go by feel." Is that a brain that's translating brain-based estimates to numerical estimates, then throwing away the numbers because of childhood mathematical scarring? Or is that a brain that's doing something totally outside translating brain-based estimates to numerical estimates?
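One way the proposed experiment could score those calibration exercises is with a Brier score. This is a minimal sketch with made-up forecasts and outcomes, not data from any actual poker player:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better; a perfectly calibrated, perfectly confident
    forecaster scores 0.0.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical session: a player who says "70%" should be right
# about 70% of the time over many such statements.
forecasts = [0.7, 0.7, 0.7, 0.9, 0.9, 0.2]
outcomes = [1, 1, 0, 1, 1, 0]
print(round(brier_score(forecasts, outcomes), 3))  # → 0.122
```

A "go by feel" player would first have to be coaxed into producing numbers at all, which is exactly the confound the sidenote worries about.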

> Gatsby believed in the green light, the orgastic future that year by year recedes before us. It eluded us then, but that's no matter — tomorrow we will run faster, stretch out our arms farther... And one fine morning —
>
> *The Great Gatsby*

I always liked Fitzgerald's portrayal of what Something to Protect feels like.

Happy New Year's resolutions, all.

I'm having difficulty replacing your quotation with its referent. Could you describe an activity I could do that would demonstrate that I was judging how much evidence I have on a given issue?

Hey, that's me! I also didn't think we had other LWers down here. PM sent, let's meet up after the holidays.

It occurred to me that maybe the human decision maker has multiple utility functions, and when you try to combine them into one function, some parts of the originals don't necessarily translate well... it sounds like the "shards of desire" are actually a bunch of different utility functions.
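The "parts that don't translate" point can be made concrete with a toy sketch. Assuming (purely for illustration) two shard utilities and a weighted-sum aggregation, the combined function still ranks outcomes, but the fact that the shards *conflicted* on a given outcome is lost in the single number:

```python
def combine(shards, weights):
    """Collapse several utility functions into one via weighted sum."""
    def combined(outcome):
        return sum(w * u(outcome) for u, w in zip(shards, weights))
    return combined

# Two hypothetical shards that disagree about the same outcomes.
comfort = lambda o: o["leisure"]
ambition = lambda o: o["achievement"]

u = combine([comfort, ambition], [0.5, 0.5])

# Both outcomes get the same combined score (1.0), even though one
# shard strongly prefers the first and the other strongly prefers
# the second — the internal disagreement is no longer visible.
print(u({"leisure": 2, "achievement": 0}))  # → 1.0
print(u({"leisure": 0, "achievement": 2}))  # → 1.0
```

Weighted sums are just one aggregation rule; lexicographic or threshold-based preferences resist this kind of collapse even more visibly.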

I hereby request a research-filled thread of what to do when you feel like you're in this situation, which I believe has been called "welfare economics" in the literature.

It sounds like you're measuring your success by the impact you have on the person you are directly communicating with.

What happens if you measure success by your impact on the rest of your audience?

Interesting position! I can't speak for James, but I want to engage with this. Let's pretend, for the scope of this thread, that I made the statement about the proper role of skepticism.

I'm happy to endorse your wording. I agree it's more precise to talk about "claims" than "things" in this context.

Quick communication check. When you say "increased" you're implying at least two distinct levels of skepticism. From your assertion, I gather that difficult-to-measure claims like "there exist good leaders, people who can improve the performance of the rest of their team" will face your higher level of skepticism.

Could you give me an example of a claim that faces your lower level of skepticism?

> [S]kepticism should be directed at things that are actually untrue rather than things that are difficult to measure.

Thank you. Heuristics like these are, I think, the meta-skill I'm trying to learn at the same time.