My essay How to Read a Book for Understanding was 90% motivated by annoyance at Andy's misunderstanding of how books are read differently by different types of readers. I never reformatted this for Less Wrong, but probably should. https://www.reddit.com/r/slatestarcodex/comments/d7bvcp/how_to_read_a_book_for_understanding/
I definitely agree, though, with your point about salience. I think it is an important though inadequate defense of books. Salience can be achieved in many ways besides reading; even some YouTube videos create salience on surprisingly complex topics. Books offer more than just this as a medium.
When I heard this, I was simply confused at how unexamined Matt's position was.
The idea of being an expert on ethics is something the Rationalist community is quite familiar with. Effective Altruism, in general, assumes that an EA mode of moral behavior is in fact something one can develop expertise in. Perhaps you have different values than EA, but even so, whatever your values, there can be more or less effective ways of achieving them; an expert is just someone who has mapped the tensions, conflicts, and contradictions involved in thinking about the territory clearly.
I have a practicing bioethical consultant on my team, and I have very much realized that the rationalsphere is unduly prejudiced against the field. This paper set confirms that for me, since my own reading in the field had been shaped entirely by selection bias.
Bioethics is, in my opinion, healthier than philosophy in that it more often requires coming to a decision on a current moral question.
Notice, however, that bioethics papers will skew in a way that bioethical consultants will not. In general, people in the field have an additional specialty or practice, like law, clinical research, hospital management, drug research, social work, psychology, and, of course, academia. I think this diversity of professions with actual jobs to perform makes the field healthier (but perhaps less coherent) than Eliezer and Alex Tabarrok realize.
More of these papers are at least on interesting and consequential questions, even if the authors fumble, than one finds in philosophy.
Thinking out loud here about inference.
Darwin's original theory relied on three facts: overproduction of offspring, variability of offspring, and inheritance of traits. These facts were used to formulate a mechanism: the offspring best adapted to the environment for reproduction would, on average, displace the populations of those less well adapted. Overproduction ensured that there was selection pressure, or at least group stasis on average (and not dysgenics); variability allowed for positive mutations; heritability allowed for persistence. Call it natural selection for short.
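The three facts above compose into selection almost mechanically, which a toy simulation can make concrete. This is a minimal sketch with invented parameters (population size, offspring count, mutation noise), not a biological model:

```python
import random

def step(population, capacity=100, offspring_per_parent=3, noise=0.05):
    """One generation: overproduction, heritable variation, then selection."""
    # Overproduction: each parent leaves several offspring.
    # Inheritance + variability: each offspring's trait is the parent's
    # trait plus small random noise.
    offspring = [trait + random.gauss(0, noise)
                 for trait in population
                 for _ in range(offspring_per_parent)]
    # Selection: the environment only supports `capacity` individuals,
    # and the best adapted (highest trait value) survive.
    return sorted(offspring, reverse=True)[:capacity]

random.seed(0)
pop = [0.0] * 100          # start with no adaptation
for _ in range(50):        # run 50 generations
    pop = step(pop)
mean_trait = sum(pop) / len(pop)  # climbs well above its starting value of 0
```

Each ingredient is doing real work here: drop overproduction (one offspring per parent) and there is nothing to select among; drop the noise and there is nothing new to select for; drop inheritance and gains don't persist.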
What I'm interested in is the mistake Darwin made in his next step. He assumed that because the process of natural selection tends on average toward fitness, the evolution of species can only be an imperceptibly gradual process.
This is incorrect: evolution can happen alarmingly fast.
What I'm interested in is why Darwin thought this, and whether the error is general enough that we can learn something about inferential reasoning that would apply to other cases. At the time, geologists disagreed about the rate of major geological transitions in Earth's history. Darwin threw himself entirely behind Charles Lyell's slow-change uniformitarianism. To give Darwin credit, he thought this had to be the way it was because his law of averages requires a law of large numbers, and you can't get large population numbers without an immense number of years.
I think the big mistake Darwin made was placing too high a prior upon gradual change, even though he knew there was insufficient evidence for gradual change in the geological record. His explanation for this lack of evidence was that most of it had been destroyed over time: the geological record we had was a tiny fragment of an immense story from which we could only pick up pieces. "Absence of evidence is not evidence of absence."
But he should have mapped out the other hypothetical world too, even if just a bit. For Darwin's theory of evolution is not in fact dependent on gradual change; it can accommodate periods of stasis followed by periods of chaos and rapid development.
To me the lesson is to be as clear as possible about which aspects of your model are essential and which are merely reasonable extensions.
And freedom is a terrible master. I was far more free from college to college + 3 years, but freedom is something you spend. It's a currency which you exchange for some type of life. Now I have very little slack, but I have an endless supply of good places to devote my energy. And that's freedom to do good, the highest form of freedom.
I play StarCraft 1 month a year, and it's true, I stick with Protoss. Although now that you mention it, next time I play I'll play Terran to see what happens...
But I also learn bits of languages frequently and maintain 2 foreign languages, and although there is always some switching cost with languages, language learning isn't competitive, so the costs of switching are low.
I want to keep being successful despite the costs to my freedom, but that's because I view my success as a service (hence I get paid for it), not as a source of my own happiness. Esse quam videri.
Here is a quick list of things that spring to mind when I evaluate intellectuals. A score on any of these does not necessarily need to cash out in a ranking. There are different types of intellectuals who serve different purposes in the tapestry of the life of the mind.
- How specialized is this person's knowledge?
- What are the areas outside of their specialization that this person has above-average knowledge about?
- How good is this person at writing/arguing/debating in favor of their own case?
- How good is this person at characterizing the case of other people?
- What are this person's biggest weaknesses, both personally and intellectually?
Just a reminder to self that I wrote this, but need to write a counterargument to it based upon a new insight about what a good "popular book" can do.
Ah! Thanks so much. I was definitely conflating farsightedness as discount factor and farsightedness as vision of possible states in a landscape.
And that is why some resource-increasing state may be too far out of the way to be instrumentally convergent: the more distant that state is, the closer its discounted value is to zero, until it actually is zero. Hence the bracket.
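The discount-factor reading of farsightedness can be made concrete with a few lines of arithmetic. This is a minimal sketch with invented numbers (the reward of 10 and the discount factor of 0.9 are assumptions for illustration), showing how the value of a resource-gathering state shrinks toward zero with its distance:

```python
def discounted_value(reward: float, gamma: float, distance: int) -> float:
    """Present value of a reward `distance` steps away under
    exponential discounting with factor gamma (0 <= gamma <= 1)."""
    return (gamma ** distance) * reward

reward = 10.0   # payoff of reaching the resource state (assumed)
gamma = 0.9     # discount factor; smaller = more shortsighted

for d in (1, 5, 20, 100):
    print(f"distance {d:3d}: value {discounted_value(reward, gamma, d):.6f}")
```

At distance 1 the state is worth 9.0; at distance 100 it is worth under a thousandth of the reward, and with gamma = 0 (a fully myopic agent) any state more than zero steps away is worth exactly zero. That is the sense in which a sufficiently distant resource state contributes nothing to the agent's choices.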