David Gross

Sequences

Notes on Virtues

Comments

Two possible answers to this:

  1. Maybe people are different in this way, and my experience of falling asleep doesn't match yours, so my advice won't be of much use to you.
  2. The visualizations are somewhat subtle. They are, like dreams, hallucinations rather than visions of real-things-out-there, but they are also much less vivid than dreams. You may not notice some of them simply because they're pretty subdued and uninteresting, so unless you're looking for them they won't jump out at you. Also: you may be used to categorizing some of these images not as hallucinations happening in your visual field but as "imagination" happening elsewhere. If you're accustomed to being able to visualize things when you imagine them in waking life, you may conclude that these hypnagogic hallucinations are not "visualizations" but "imaginations," and dismiss them for that reason.

Empathy might not work that way. See: Notes on Empathy.

For one thing, we seem to be wired to empathize more with people in the in-group than with people in the out-group. For another, once we begin to see a conflict through the lens of empathy, we tend to adjust our interpretation of the evidence so as to share the interests and biases of whomever we first began to empathize with in the conflict. In short: empathy ought to be approached with caution.

FWIW, I'm trying to create something of a bridge between "the ancient wisdom of people who thought deeply about this sort of thing a long time ago" and "modern social science, which, with all its limitations, at least sometimes attempts to test hypotheses with some rigor" in my sequence on virtues. That might serve as a useful platform from which to launch this new rigorous instrumental rationality guide.

I'm working on an essay about "love" as a virtue, where a "virtue" is a characteristic habit that contributes to (or exhibits) human flourishing. I'm aiming to make the essay of practical value, so I'm focusing on what love is good for and how to get better at it.

"Love" is notoriously difficult to get a handle on, both because the word covers a bunch of things and because it lends itself to a lot of sentimental falderol. My current draft is concentrating on three varieties of "love": Christian agape, Aristotelian true-friendship, and erotic/romantic falling/being in love.

Anyway: that long preamble aside, if you know of any sources I could consult that would help me along, I'd appreciate the pointers.

I notice that, in notation, it’s just an extra ergo added to the ordinary (p→q, p, ∴q) argument, yielding (p→q, ∴p, ∴q). So maybe “ergotism” or “alter-ergo” for the name of the fallacy?
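Spelled out, with my reading of the notation above (labels mine):

```latex
% Modus ponens (valid): given p -> q and p, conclude q.
\[ p \to q, \quad p, \quad \therefore q \]
% The fallacy adds the extra "ergo": p is no longer given, merely
% concluded, presumably because one wants q to follow from it.
\[ p \to q, \quad \therefore p, \quad \therefore q \]
```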

Google already pivoted once to providing machine-curated answers that were often awful (e.g. https://searchengineland.com/googles-one-true-answer-problem-featured-snippets-270549). I'm just extrapolating.

You're imagining that Google stays the same in how it indexes and presents the web. What if it decides people like seeing magic answers to all their questions, or notices that consumers have a more favorable opinion of Google when it appears to have an answer indexed for every question, and so Google by default asks gpteeble (or whatever) to generate a page for every search query as it comes in, or maybe for every search query for which an excellent match doesn't already exist on the rest of the web? (There's a sketch of this in code below.)

Imagine Google preloads the top ten web pages that answer your query, and you can view each one in a panel/tab just by mousing over the search results. You mouse over them one by one until you find one that seems relevant, but it's not one that Google retrieved from a web search: it's one that Google or a partner generated in response to your query. It looks just the same. Maybe you don't even look at the URL most of the time to notice it's generated (the UI has gone more thumbnaily, less texty). Maybe "don't be evil" Google puts some sort of noticeable disclaimer on generated content, but the content still seems good enough for the job to all but the most critically discerning readers (the same way people often prefer bullshit to truth today, but now powered by AI; "it's the answer I hoped I'd find"), and so most of us just tune out the disclaimer.
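Here's a minimal sketch, in Python, of the generate-on-miss pipeline I'm describing. Everything in it (the Page type, the search_index and generate_page stand-ins, the relevance threshold) is made up for illustration; it's not how Google actually works.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    score: float             # relevance of the page to the query
    generated: bool = False  # True if synthesized rather than indexed

def search_index(query: str) -> list[Page]:
    """Stand-in for an ordinary web-index lookup."""
    return []  # pretend nothing on the real web matches well

def generate_page(query: str) -> Page:
    """Stand-in for a gpteeble-style call that writes a page on demand."""
    slug = query.replace(" ", "-")
    return Page(url=f"https://generated.example/{slug}", score=1.0, generated=True)

RELEVANCE_THRESHOLD = 0.9  # what counts as an "excellent match" (made up)

def serve_results(query: str) -> list[Page]:
    """Return results, synthesizing a page when the web falls short."""
    results = search_index(query)
    if not results or results[0].score < RELEVANCE_THRESHOLD:
        # No excellent match already exists on the web, so generate one
        # on the fly; from the user's side it looks like any other result.
        results.insert(0, generate_page(query))
    return results

print(serve_results("why do cats purr")[0].url)
# -> https://generated.example/why-do-cats-purr
```

The design point is just that once this branch exists, the generated result enters the same list as the retrieved ones, so nothing downstream needs to distinguish them unless someone deliberately adds that disclaimer.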

Free Will: A Very Short Introduction

Who doesn't like to opine about the free will problem? This short book will quickly catch you up on the philosophical state of the art so you can do so more cleverly and understand the weaknesses of the easy answers you thought up in the shower.

Language, Truth, and Logic

Logical positivism in one witty lesson. Make your beliefs pay rent in anticipated experiences.
