Since you've gone with the definition, are you sure that definition is solid? A reasoning process like "spend your waking moments deriving mathematical truths using rigorous methods; leave all practical matters to curated recipes and outside experts" may tend to arrive at true beliefs and good decisions more often than "attempt to wrestle as rationally as you can with all of the strange and uncertain reality you encounter, and learn to navigate toward worthy goals by pushing the limits of your competence in ways that seem most promising and prudent" but th...
LessWrong is a good place for:
Each of the following bullet points begins with "who", so this should probably be something like "LessWrong is a good place for people:"
A more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process.
It's not clear from this or what immediately follows in this section whether you intend this statement as a tautological definition of a process (a process that "tends to arrive at true beliefs and good decisions more often" is what we call a "more rational reasoning process") or as an empirically verifiable prediction about a yet-to-be-defined process (if you use a TBD "more rational reasoning process" then you will "tend[] to arrive at true beliefs and good decisions more often"). I could see people drawing either conclusion from what's said in this section.
Although encouraged, you don't have to read this to get started on LessWrong!
This is grammatically ambiguous. The "encouraged" shows up out of nowhere without much indication of who is doing the encouraging or what they are encouraging. ("Although [something is] encouraged [to someone by someone], you don't have to read this...")
Maybe "I encourage you to read this before getting started on LessWrong, but you do not have to!" or "You don't have to read this before you get started on LessWrong, but I encourage you to do so!"
California adopted a "Housing First" policy several years ago. The number of people experiencing homelessness continued to rise thereafter. Much of the problem seems to be that there just aren't a lot of homes to be had, because it is time-consuming and expensive to make them (and/or illegal to make them quickly and cheaply).
It seems to me that a major factor contributing to the homelessness crisis in California is that there is a legal floor on the quality of a house that can be built, occupied, or rented. That legal floor is the lowest rung on the ladder out of homelessness, and in California the cost of that rung puts it out of reach for a lot of people. Other countries deal with this by not having such a floor, which results in shantytowns and such. Those have their own significant problems, but it isn't obvious to me that those problems would be worse (for e.g. California) than widespread homelessness. Am I missing something I should be considering?
Has anyone done an in-depth examination of AI-selfhood from an explicitly Buddhist perspective, using Buddhist theory of how the (illusion of) self comes to be generated in people to explore what conditions would need to be present for an AI to develop a similar such intuition?
FWIW: I dropped out of high school a year early via the GED route. I am very glad I did, and recommend it. At the time this was not really an option that was discussed above-ground by e.g. guidance counselors: instead the assumption was that you'd either graduate from high school or "be a drop-out" with all sorts of bad connotations.
I enrolled in a community college and began taking my lower-division undergrad courses there (and some electives that I was curious about). This was far less expensive than taking the equivalent courses at a university, and by ...
Going off on a wild tangent here, but all this strikes me as eerily similar to what I recently read in Rob Burbea's "Seeing That Frees": a book about meditative approaches to Buddhist "emptiness" insight.
Burbea repeatedly insists on the "fabricated" nature of reality: that it doesn't appear to us in any raw form with an inherent nature of its own, but that any time it appears to us it does so by means of our own construction of it (and in a way that's always tangled up in our agendas: i.e. we don't see anything "as it is" but only "as it means to me").
This...
Tangentially, FWIW: One of the ought/is counterarguments that I've heard (I first encountered it in Alasdair MacIntyre's stuff) is that some "is"s have "ought"s wrapped up in them from the get-go. The way we divide reality up into its various "is" packages may or may not include function, purpose, etc. in any particular package, but that's in part a linguistic, cultural, fashionable, etc. decision.
For example: that is a clock, it ought to tell the correct time, because that is what clocks are all about. That it is a clock implies what it ought to do....
Yeah, disturbing imagery like that can wake you right back up in a hurry. But at that stage of falling-asleep, that imagery is going to arrive whether you're using this method or not. This method just helps you get as far as that stage more quickly.
At this point I'm being extra-speculative, but it may be that above-normal levels of anxiety in ordinary waking life bleed over into the hypnagogic imagery and make it more likely that you'll be presented with disturbing images. It could be that more attention to pre-bedtime calming (pleasant nature videos, medi...
Why do you think there will be heavy selection against things like made-up stories presented as fact, or fabricated/misrepresented medical baloney, when there doesn't seem to be much such selection now?
I'm one of those LW readers who is less interested in AI-related stuff (in spite of having a CS degree with an AI concentration; that's just not what I come here for). I would really like to be able to filter "AI Alignment Forum" cross-posts, but the current filter setup does not allow for that so far as I can see.
Two possible answers to this:
Empathy might not work that way. See: Notes on Empathy.
For one thing, we seem to be wired to empathize more with people in the in-group than people in the out-group. For another, once we begin to see a conflict through the lens of empathy, we tend to adjust our interpretation of the evidence so as to share the interests and bias of whomever we first began to empathize with in the conflict. In short: empathy ought to be approached with caution.
FWIW, I'm trying to create something of a bridge between "the ancient wisdom of people who thought deeply about this sort of thing a long time ago" and "modern social science which with all its limitations at least attempts to test hypotheses with some rigor sometimes" in my sequence on virtues. That might serve as a useful platform from which to launch this new rigorous instrumental rationality guide.
I'm working on an essay about "love" as a virtue, where a "virtue" is a characteristic habit that contributes to (or exhibits) human flourishing. I'm aiming to make the essay of practical value, so a focus on what love is good for and how to get better at it.
"Love" is notoriously difficult to get a handle on, both because the word covers a bunch of things and because it lends itself to a lot of sentimental falderol. My current draft is concentrating on three varieties of "love": Christian agape, Aristotelian true-friendship, and erotic/romantic falling/being in love.
Anyway: that long preamble aside, if you know of any sources I could consult that would help me along, I'd appreciate the pointers.
I notice that in notation form it’s just an extra ergo in the ordinary (p→q, p, ∴q) argument to yield (p→q, ∴p, ∴q). So maybe “ergotism” or “alter-ergo” for the name of the fallacy?
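The invalidity of that extra-ergo move can be checked mechanically. A quick truth-table search (a sketch in Python, not from the original comment) finds the rows where the lone premise p→q holds but the smuggled-in conclusion p fails:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q."""
    return (not p) or q

# Modus ponens needs both premises: (p -> q) and p.
# The "alter-ergo" version keeps only (p -> q) and asserts p anyway.
# Search all truth assignments for counterexamples: the premise holds
# but the conjured-up conclusion p does not.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and not p]
print(counterexamples)  # -> [(False, True), (False, False)]
```

Two of the three rows satisfying p→q make p false, so the inference fails; modus ponens survives the same search because adding the premise `p` filters those rows out.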
Google already pivoted once to providing machine-curated answers that were often awful (e.g. https://searchengineland.com/googles-one-true-answer-problem-featured-snippets-270549). I'm just extrapolating.
You're imagining that Google stays the same in the way it indexes and presents the web. What if it decides that people like seeing magic answers to all their questions, or notices that consumers have a more favorable opinion of Google when it appears to index all the answers? Then Google might by default ask gpteeble (or whatever) to generate a page for every search query as it comes in, or at least for every query that doesn't already have an excellent match on the rest of the web.
Imagine Google preloads the top ten web pages that answ...
Who doesn't like to opine about the free will problem? This short book will quickly catch you up on the philosophical state of the art so you can do so more cleverly and can understand the weaknesses of the easy answers you thought up in the shower.
Logical positivism in one witty lesson. Make your beliefs pay rent in anticipated experiences.
Pihkal: A Chemical Love Story if you'd like to know all about a huge variety of phenethylamines from the inside and out, including how to go about synthesizing them.
On Food and Cooking: The Science and Lore of the Kitchen delivers what it promises: a deep understanding of the materials and processes involved in home-scale food production.
That was probably her opinion, but I think she was carefully trying to write with respect for a non-religious audience.
I think she was saying, more or less, that secular people can either go forward in the direction they are going, but they'll have to leave should/ought/morality behind (and with it any judgements about e.g. whether shoving Jews in the ovens was necessarily a bad thing to do)—which was what philosophers of her place and time were doing with e.g. emotivism—or they can go backwards to a pre-Christian perspective from which ethics had a ground...
When an author uses a term that has many, conflicting definitions in popular use, it's reasonable to hope the author will explain which of these definitions he or she intends. It's less reasonable, I think, to insist that the author must use those terminology choices that you prefer.
In the case of "shame" it's impossible for me to please everyone, since there are so many competing and conflicting definitions in popular use. I can only choose one, explain myself, and ask my readers to meet me half-way.
No need to stop at not voting for people. Voting in general fuels the madness. Please stop voting: You’re just making things worse.
Any chance we could get a "book review" icon to decorate post titles in lists so that people don't feel they need to flag them with "[book review]..."? This could be based on the presence of the "book review" tag.
FWIW, the philosopher William Wollaston's magnum opus is devoted to defending the thesis that truth and morality completely overlap with one another: that to adhere to truth and to be moral are identical.
Here's a free ebook version of his argument: https://standardebooks.org/ebooks/william-wollaston/the-religion-of-nature-delineated
And my summary of his argument: https://www.lesswrong.com/posts/P75rzmpJ62E2Qfr3A/truth-reason-the-true-religion
I think you may be reading more (and more sinister things) into this than were originally there. I don't think DiAngelo starts with "a large part of your core identity is inherently very bad" at all. The progression she has in mind is more like this:
I'm very open to the idea that I've seen something that wasn't there and/or wasn't intended 😄. Let me see if I can specifically find what made me feel that way.
Okay, so I have that reaction to paragraphs like this:
...White fragility is a sort of defensiveness that takes the form of a variety of strategies that white people deploy when we are confronted with how we participate in and perpetuate racism_S. Whites use these strategies to deflect or avoid such a confrontation and to defend a comfortable, privileged vantage point from which race is “not an issue” (
This isn't my area of expertise, but as best I understand it, one reason why racism_S is not de facto a synonym for "being white" is that racism_S is not primarily a description of individual people, the way racism_F can be.
That is to say, you can call someone a racist_F, which is de facto a synonym for calling them a bigot or intolerant or a "race realist" or something like that, because a racist_F is someone who believes in or professes racism_F or acts like they do. But racism_S doesn't work like that. It isn't an explicit belief system, but "a systemic, ...
I see where you're coming from, and I also wish I didn't have to do the extra work to remember the correct technical definition of racism when I read White Fragility. That said, I expect that when I read a book in a particular discipline that I will need to be more attentive to the terms of art in that discipline. For instance, when I read a book of physics, I don't expect the author to cater to my folk definitions of "work", "energy", "power", "momentum", and so forth: instead, I expect that I will need to learn how to use the terminology of the field precisely as its practitioners do if I am to follow its arguments and learn what they have to teach.
I see where you're coming from, and I also wish I didn't have to do the extra work to remember the correct technical definition of racism when I read White Fragility.
There's nothing technical about the definition of racism that gets used by people like DiAngelo. In physics a definition becomes technical when it's well defined enough to objectively measure the resulting effect. There's nothing that makes their definition more inherently correct either.
In the civil rights area a lot of laws were passed to combat racism and I would say that the re...
For instance, when I read a book of physics, I don't expect the author to cater to my folk definitions of "work", "energy", "power", "momentum"
Since you assume that physics book authors won't cater to the layman's ordinary definition of the physics terms of art, you may be surprised when reading most books on classical physics. The authors go to painstaking effort to make their content accessible to laypersons. I have not yet read a textbook on classical physics that didn't take the time to explain that "work" in a physics context means Force x Distance ...
Bostrom estimates that just one second of delayed colonization equals 100 trillion human lives lost. Therefore taking action today for accelerating humanity’s expansion into the universe yields an impact of 100 trillion human lives saved for every second that it is brought closer to the present.
I don't much care for this rhetorically sneaky way of smudging the way we feel the import of "lives lost" and "lives saved" so as to try to make it also cover "lives that never happen" or "lives that might potentially happen." There's an Every Sperm is Sacred silliness at work here. Do you mourn the millions of lives lost to vasectomy?
So... first of all, I'd like someone to look up the logical positivists and say what it is they actually believed.
A.J. Ayer's Language, Truth, and Logic is brief, to-the-point, bold, and fun to read. All of this to the extent that you may forget why you dislike reading philosophy. I'm pretty sure that Eliezer and Scott would enjoy their time reading it and would get something out of it.
I wish I remembered where I heard about this. It was a long time ago and seemed convincing to me at the time, but now I don't remember the details, and a little googling doesn't turn up much of anything to confirm this. I should probably dial back how I describe this until I can verify it.
Reduced it by ~43kb, though I don't know if many readers will notice as most of the reduction is in markup.