All of David Gross's Comments + Replies

Reduced it by ~43kb, though I don't know if many readers will notice as most of the reduction is in markup.

Since you've gone with the definition, are you sure that definition is solid? A reasoning process like "spend your waking moments deriving mathematical truths using rigorous methods; leave all practical matters to curated recipes and outside experts" may tend to arrive at true beliefs and good decisions more often than "attempt to wrestle as rationally as you can with all of the strange and uncertain reality you encounter, and learn to navigate toward worthy goals by pushing the limits of your competence in ways that seem most promising and prudent" but th... (read more)

2Ruby1mo
A thing I should likely include is something like: "the definition gets disputed, but what I present is the most standard one."

LessWrong is a good place for:

Each of the following bullet points begins with "who", so this should probably be something like "LessWrong is a good place for people:"

A more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process.

It's not clear from this or what immediately follows in this section whether you intend this statement as a tautological definition of a process (a process that "tends to arrive at true beliefs and good decisions more often" is what we call a "more rational reasoning process") or as an empirically verifiable prediction about a yet-to-be-defined process (if you use a TBD "more rational reasoning process" then you will "tend[] to arrive at true beliefs and good decisions more often"). I could see people drawing either conclusion from what's said in this section.

2Ruby1mo
Good point. I've edited to make this clearer.

Although encouraged, you don't have to read this to get started on LessWrong! 

This is grammatically ambiguous. The "encouraged" shows up out of nowhere without much indication of who is doing the encouraging or what they are encouraging. ("Although [something is] encouraged [to someone by someone], you don't have to read this...")

Maybe "I encourage you to read this before getting started on LessWrong, but you do not have to!" or "You don't have to read this before you get started on LessWrong, but I encourage you to do so!"

For some food for thought on this question, see:

from the LessWrong Notes on Virtues sequence.

California adopted a "Housing First" policy several years ago. The number of people experiencing homelessness continued to rise thereafter. Much of the problem seems to be that there just aren't a lot of homes to be had, because it is time-consuming and expensive to make them (and/or illegal to make them quickly and cheaply).

It seems to me that a major factor contributing to the homelessness crisis in California is that there is a legal floor on the quality of a house that can be built, occupied, or rented. That legal floor is the lowest-rung on the ladder out of homelessness and in California its cost makes it too high for a lot of people to reach. Other countries deal with this by not having such a floor, which results in shantytowns and such. Those have their own significant problems, but it isn't obvious to me that those problems would be worse (for e.g. California) than widespread homelessness. Am I missing something I should be considering?

1Portia2mo
The alternatives? Like, in Europe, you will generally encounter very few homeless people, and yet no shantytowns, and decent building codes? It starts with the fact that we have a comprehensive social system that will cover rent in minimum available housing if you lose your job, because we realise that losing your house too will totally fuck you up in ways that are in no one's interest. There are projects that build on the basic idea - that the solution to homelessness is giving them fucking homes, and then sorting out the rest - in the US, too. https://en.wikipedia.org/wiki/Housing_First They work well.
2shminux2mo
It would be instructive to compare with the homelessness in Vancouver, which has no such legal floor. There must be a comparative analysis out there somewhere.
4ChristianKl2mo
It's basically the NIMBY problem. Low-quality housing decreases the value of nearby housing. The quest to change rules to get more housing built is one of the central political battles in California. 

Has anyone done an in-depth examination of AI-selfhood from an explicitly Buddhist perspective, using Buddhist theory of how the (illusion of) self comes to be generated in people to explore what conditions would need to be present for an AI to develop a similar such intuition?

1NicholasKross3mo
I liked your similar page about Attention, so this is enticing!

FWIW: I dropped out of high school a year early via the GED route. I am very glad I did, and recommend it. At the time this was not really an option that was discussed above-ground by e.g. guidance counselors: instead the assumption was that you'd either graduate from high school or "be a drop-out" with all sorts of bad connotations.

I enrolled in a community college and began taking my lower-division undergrad courses there (and some electives that I was curious about). This was far less expensive than taking the equivalent courses at a university, and by ... (read more)

Going off on a wild tangent here, but all this strikes me as eerily similar to what I recently read in Rob Burbea's "Seeing That Frees": a book about meditative approaches to Buddhist "emptiness" insight.

Burbea repeatedly insists on the "fabricated" nature of reality: that it doesn't appear to us in any raw form with an inherent nature of its own, but that any time it appears to us it does so by means of our own construction of it (and in a way that's always tangled up in our agendas: i.e. we don't see anything "as it is" but only "as it means to me").

This... (read more)

Tangentially, FWIW: Among the ought/is counterarguments that I've heard (I first encountered it in Alasdair MacIntyre's stuff) is that some "is"s have "ought"s wrapped up in them from the get-go. The way we divide reality up into its various "is" packages may or may not include function, purpose, etc. in any particular package, but that's in part a linguistic, cultural, fashionable, etc. decision.  

For example: that is a clock, it ought to tell the correct time, because that is what clocks are all about. That it is a clock implies what it ought to do.... (read more)

1DaemonicSigil5mo
From a language perspective, I agree that it's great to not worry about the is/ought distinction when discussing anything other than meta-ethics. It's kind of like how we talk about evolved adaptations as being "meant" to solve a particular problem, even though there was really no intention involved in the process. It's just such a convenient way of speaking, so everyone does it. I guess I'd say that despite this, the is/ought distinction remains useful in some contexts. Like if someone says "we get morality from X, so you have to believe X or you won't be moral", it gives you a shortcut to realizing "nah, even if I think X is false, I can continue to not do bad things".

Yeah, disturbing imagery like that can wake you right back up in a hurry. But at that stage of falling-asleep, that imagery is going to arrive whether you're using this method or not. This method just helps you get as far as that stage more quickly.

At this point I'm being extra-speculative, but it may be that above-normal levels of anxiety in ordinary waking life bleed over into the hypnagogic imagery and make it more likely that you'll be presented with disturbing images. It could be that more attention to pre-bedtime calming (pleasant nature videos, medi... (read more)

Why do you think there will be heavy selection against things like made-up stories presented as fact, or fabricated/misrepresented medical baloney, when there doesn't seem to be much such selection now?

1Jotto9998mo
I mean that Google themselves wouldn't want something that could get them lawsuits, and if they generate stuff, yes they'll have a selection for accuracy.  If someone is interested in AI-Dr-Oz's cures and searched for those, I'm sure Google will be happy to provide.  The market for that will be huge, and I'm not predicting that crap will go away. Yes Google does select, now.  The ocean of garbage is that bad.  For people making genuine inquiries, often the best search providers can do right now is defer to authority websites.  If we're talking specifically about interpreting medical papers, why don't you think they'll have a selection for accuracy?

I'm one of those LW readers who is less interested in AI-related stuff (in spite of having a CS degree with an AI concentration; that's just not what I come here for). I would really like to be able to filter "AI Alignment Forum" cross-posts, but the current filter setup does not allow for that so far as I can see.

3habryka8mo
Filtering out the AI tag should roughly do that.

Two possible answers to this:

  1. Maybe people are different in this way and my experience falling asleep doesn't match yours and so my advice won't be of much use to you.
  2. The visualizations are somewhat subtle. They are, like dreams, hallucinations rather than visions of real-things-out-there. But they are also much less vivid than dreams. You may not notice some of them just because they're pretty subdued and uninteresting and so unless you're looking for them they won't jump out at you. Also: you may be used to categorizing some of these images not as halluci
... (read more)

Empathy might not work that way. See: Notes on Empathy.

For one thing, we seem to be wired to empathize more with people in the in-group than people in the out-group. For another, once we begin to see a conflict through the lens of empathy, we tend to adjust our interpretation of the evidence so as to share the interests and bias of whomever we first began to empathize with in the conflict. In short: empathy ought to be approached with caution.

FWIW, I'm trying to create something of a bridge between "the ancient wisdom of people who thought deeply about this sort of thing a long time ago" and "modern social science which with all its limitations at least attempts to test hypotheses with some rigor sometimes" in my sequence on virtues. That might serve as a useful platform from which to launch this new rigorous instrumental rationality guide.

I'm working on an essay about "love" as a virtue, where a "virtue" is a characteristic habit that contributes to (or exhibits) human flourishing. I'm aiming to make the essay of practical value, so a focus on what love is good for and how to get better at it.

"Love" is notoriously difficult to get a handle on, both because the word covers a bunch of things and because it lends itself to a lot of sentimental falderol. My current draft is concentrating on three varieties of "love": Christian agape, Aristotelian true-friendship, and erotic/romantic falling/being in love.

Anyway: that long preamble aside, if you know of any sources I could consult that would help me along, I'd appreciate the pointers.

5Dirichlet-to-Neumann1y
Allan Bloom's Love and Friendship is an interesting collection of essays discussing love and friendship in literature (Rousseau, Stendhal, Austen, Flaubert, Tolstoï, Shakespeare).

I notice that in notation form it’s just an extra ergo in the ordinary (p→q, p, ∴q) argument to yield (p→q, ∴p, ∴q). So maybe “ergotism” or “alter-ergo” for the name of the fallacy?
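Spelled out side by side (a sketch; the labels are mine, not standard):

```latex
\text{Modus ponens (valid):} \quad p \to q,\;\; p,\;\; \therefore q \\
\text{``Alter-ergo'' (fallacious):} \quad p \to q,\;\; \therefore p,\;\; \therefore q
```

The extra ergo asserts $p$ as a conclusion rather than taking it as a given premise.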

Google already pivoted once to providing machine-curated answers that were often awful (e.g. https://searchengineland.com/googles-one-true-answer-problem-featured-snippets-270549). I'm just extrapolating.

You're imagining that Google stays the same in the way it indexes and presents the web. What if it decides people like seeing magic answers to all their questions, or notices that consumers have a more favorable opinion of Google if Google appears to index all the answers to their questions, and so Google by default asks gpteeble (or whatever) to generate a page for every search query, as it comes in, or maybe every search query for which an excellent match doesn't already exist on the rest of the web.

Imagine Google preloads the top ten web pages that answ... (read more)

1interstice1y
It seems that either (a) the AI-powered sites will in fact give more useful answers to questions, in which case this change might actually be beneficial, or (b) they will give worse answers, in which case people won't be likely to use them. Don't you think people will stop trusting such sites after the first 5 times they try eating their own toenails to no avail? And for the purposes of finding plausible bullshit to support what you already think, I think gpt-powered sites have the key disadvantage of being poor evidence to show other people: it looks pretty bad for your case if your best source is a generated website (normal websites could also be generated but not advertise it, of course, but that's a separate matter). You seem to be imagining a future in which Google does the most dystopian thing possible for no reason in particular.

Free Will: A Very Short Introduction

Who doesn't like to opine about the free will problem? This short book will quickly catch you up on the philosophical state of the art so you can do so more cleverly and can understand the weaknesses of the easy answers you thought up in the shower.

Language, Truth, and Logic

Logical positivism in one witty lesson. Make your beliefs pay rent in anticipated experiences.

Pihkal: A Chemical Love Story if you'd like to know all about a huge variety of phenethylamines from the inside and out, including how to go about synthesizing them.

On Food and Cooking: The Science and Lore of the Kitchen delivers what it promises: a deep understanding of the materials and processes involved in home-scale food production.

That was probably her opinion, but I think she was carefully trying to write with respect for a non-religious audience.

I think she was saying, more or less, that secular people can either go forward in the direction they are going, but they'll have to leave should/ought/morality behind (and with it any judgements about e.g. whether shoving Jews in the ovens was necessarily a bad thing to do)—which was what philosophers of her place and time were doing with e.g. emotivism—or they can go backwards to a pre-Christian perspective from which ethics had a ground... (read more)

Thanks for the new feature. Minor bug report here: The footnote marker seems to be followed by a non-breaking space, such that it can interfere with normal paragraph formatting. See the bullet point that begins "correlates suggestively with virtues like altruism" on this page.

When an author uses a term that has many, conflicting definitions in popular use, it's reasonable to hope the author will explain which of these definitions he or she intends. It's less reasonable, I think, to insist that the author must use those terminology choices that you prefer.

In the case of "shame" it's impossible for me to please everyone, since there are so many competing and conflicting definitions in popular use. I can only choose one, explain myself, and ask my readers to meet me half-way.

7pjeby2y
Can you point to a modern popular use of your definition? As far as I'm aware, the current popular (late 20th/21st century) usage is much closer to my definition than the one you're using. I've also not seen any dictionary definitions that reference one's own standards (vs. implied social standards such as "impropriety" or "foolishness"). It just seems to me that referencing one's own standards is a very odd carve-out in the definition, as is calling it merely "unpleasant" (vs. dictionary terms saying things like "painful" and "humiliating"). Something that is unpleasant and against one's own standards sounds much more like the emotion of "regret" (wishing you'd done something different), rather than the emotion of shame (public disregard and low worth). Your usage seems to me like saying that "rage is a virtue because to rage is to act against things that are unjust", while ignoring the fact that the popular understanding of the word "rage" is more like "anger to the point of irrational, destructive or counterproductive action". You can redefine the term in an excessively narrow way, but it doesn't help anybody understand what you're getting at. Notice, too, that if you simply called it regret, much of the article would be dissolved: you wouldn't need to address toxic shame or virtue signaling, since these aren't terribly relevant to regret. The article could be considerably shorter, which suggests that choosing a better term would be an empirical benefit. I also can't help but notice that all of the other top-level comments are about this terminology confusion and would have been obviated by choosing regret or another term for a less problematic emotion.

No need to stop at not voting for people. Voting in general fuels the madness. Please stop voting: You’re just making things worse.

1CraigMichael2y
Your Medium article is really excellent--of course, I've recently become biased. Up until now, I've been someone who voted religiously in every election. I think this November will be the first time I'll leave some bubbles empty.  Did you think about cross-posting your Medium article here? I believe when voting does more harm than good is a very serious question, and it seems like the kind of thing rationalist/EA types would be interested in, yet there seems to be an unwillingness to discuss it. (I would take the bit about the Kennedy assassination out.)

Any chance we could get a "book review" icon to decorate post titles in lists so that people don't feel they need to flag them with "[book review]..."? This could be based on the presence of the "book review" tag.

2Ruby2y
That's an interesting idea! I'll think about it.

FWIW, the philosopher William Wollaston's magnum opus is devoted to defending the thesis that truth and morality completely overlap with one another: that to adhere to truth and to be moral are identical.

Here's a free ebook version of his argument: https://standardebooks.org/ebooks/william-wollaston/the-religion-of-nature-delineated

And my summary of his argument: https://www.lesswrong.com/posts/P75rzmpJ62E2Qfr3A/truth-reason-the-true-religion

I think you may be reading more (and more sinister things) into this than were originally there. I don't think DiAngelo starts with "a large part of your core identity is inherently very bad" at all. The progression she has in mind is more like this:

  1. You were raised in a culture that has a lot of baggage from its explicitly white supremacist origins, and as part of learning to adopt to that culture you learned ways of getting along with it that have the effect of reinforcing its racism. In part this is because as a white person those things were designed wi
... (read more)

I'm very open to the idea that I've seen something that wasn't there and/or wasn't intended 😄, let me see if I can specifically find what made me feel that way.

Okay, so I have that reaction to paragraphs like this:

White fragility is a sort of defensiveness that takes the form of a variety of strategies that white people deploy when we are confronted with how we participate in and perpetuate racismS. Whites use these strategies to deflect or avoid such a confrontation and to defend a comfortable, privileged vantage point from which race is “not an issue” (

... (read more)

This isn't my area of expertise, but as best as I understand it, one reason why racismS is not de facto a synonym for "being white" is that racismS is not primarily a description of individual people, the way racismF can be.

That is to say, you can call someone a racistF, which is de facto a synonym for calling them a bigot or intolerant or a "race realist" or something like that, because a racistF is someone who believes in or professes racismF or acts like they do. But racismS doesn't work like that. It isn't an explicit belief system, but "a sys­tem­ic, ... (read more)

I see where you're coming from, and I also wish I didn't have to do the extra work to remember the correct technical definition of racism when I read White Fragility. That said, I expect that when I read a book in a particular discipline that I will need to be more attentive to the terms of art in that discipline. For instance, when I read a book of physics, I don't expect the author to cater to my folk definitions of "work", "energy", "power", "momentum", and so forth: instead, I expect that I will need to learn how to use the terminology of the field precisely as its practitioners do if I am to follow its arguments and learn what they have to teach.

I see where you're coming from, and I also wish I didn't have to do the extra work to remember the correct technical definition of racism when I read White Fragility. 

There's nothing technical about the definition of racism that gets used by people like DiAngelo. In physics a definition becomes technical when it's well defined enough to objectively measure the resulting effect. There's nothing that makes their definition more inherently correct either. 

In the civil rights area a lot of laws were passed to combat racism and I would say that the re... (read more)

For instance, when I read a book of physics, I don't expect the author to cater to my folk definitions of "work", "energy", "power", "momentum"

Since you assume that physics book authors won't cater to laymen's ordinary definitions of the physics terms of art, you may be surprised when reading most books on classical physics. The authors go to painstaking effort to make their content accessible to laypersons. I have not yet read a textbook on classical physics that didn't take the time to explain that "work" in a physics context means Force x Distance ... (read more)

Bostrom estimates that just one second of delayed colonization equals 100 trillion human lives lost. Therefore taking action today for accelerating humanity’s expansion into the universe yields an impact of 100 trillion human lives saved for every second that it is brought closer to the present.

I don't much care for this rhetorically sneaky way of smudging the way we feel the import of "lives lost" and "lives saved" so as to try to make it also cover "lives that never happen" or "lives that might potentially happen." There's an Every Sperm is Sacred silliness at work here. Do you mourn the millions of lives lost to vasectomy?

3Writer2y
Well, there was some love for the person-affecting view at the end of the video. Note that one who subscribes to the totalist view might not only mourn every sperm but every potential worthwhile mind.
5gilch2y
I kind of have similar feelings. I'd need an answer for the Mere addition paradox [https://en.wikipedia.org/wiki/Mere_addition_paradox]/repugnant conclusion before I could compare these. I do find the conclusion repugnant, so I must take issue with the premises somehow. My current inclination is to reject the first step: the idea that a universe with more lives worth living is better than one with less, but I'm not especially confident that I've entirely resolved it that way. Living in Many Worlds [https://www.lesswrong.com/posts/qcYCAxYZT4Xp9iMZY/living-in-many-worlds] has really influenced my thinking about future population sizes. It's more important to me that quality of life is high than that we maximize lives barely worth living. That could also be taken to extremes: why not have a population of one? But I think there are good reasons not to take it that far.

You can exit insert mode by pressing Escape but it is faster to remap your CapsLock key to Ctrl and then exit insert mode with Ctrl-[.

I don't get how that's faster.

4gilch2y
You have to reach farther for Escape than for CapsLock, which makes Escape slower. I mapped fd to Escape, because that's what Spacemacs uses. It's also much less error-prone than jj and jk, which seem to be common choices.
2lsusr2y
Escape is farther from home row.
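For anyone wanting to try the remaps discussed in this thread, a minimal sketch (assuming Linux with X11 and `setxkbmap` available; other OSes need different tools):

```shell
# Make CapsLock act as an extra Ctrl key, so Ctrl-[ is within easy reach
setxkbmap -option ctrl:nocaps

# Vim: map the "fd" chord to Escape, as in Spacemacs (append to ~/.vimrc)
echo 'inoremap fd <Esc>' >> ~/.vimrc
```

The `ctrl:nocaps` option lasts only for the current session; to make it permanent, put it in your desktop environment's keyboard settings or xorg.conf.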

So... first of all, I'd like someone to look up the logical positivists and say what it is they actually believed.

A.J. Ayer's Language, Truth, and Logic is brief, to-the-point, bold, and fun to read. All of this to the extent that you may forget why you dislike reading philosophy. I'm pretty sure that Eliezer and Scott would enjoy their time reading it and would get something out of it.

I wish I remembered where I heard about this. It was a long time ago and seemed convincing to me at the time, but now I don't remember the details, and a little googling doesn't turn up much of anything to confirm this. I should probably dial back how I describe this until I can verify it.

1JesperO2y
Also curious about this.