All of Zack_M_Davis's Comments + Replies

Reply to Nate Soares on Dolphins

Thanks. I regret letting my emotions get the better of me. I apologize.

[+2] Slider (4d): Crying over the amount of spilt ink doesn't have that much epistemic relevance. That your particle accelerator cost X million dollars doesn't make it produce better data. Truth can be frustrating and unfair in that sense. If a single naive person can say "The emperor has no clothes" and all the epistemics come falling down, maybe they should come falling down. With solid deconfusion, even if a single authority figure says "I am not convinced", the solace from the work itself should be plenty. This plea that the issue should be handled in a tone of seriousness seems like a bad application of social pressure. We shouldn't need swearing on bibles. Using the role of constituting community beliefs as social-versus game value chips seems bad.

Some notes with my mod hat on:

While it seems to me like you're trying to protect an important pole of coherency and consistency here, I think this comment as well as some features of the OP (to a lesser extent) overstep some important bounds and make it quite tricky to have a productive conversation, in a way that I would like to both discourage and advise against. I worry that you're imputing positions stronger than people are holding, and thus creating more disagreement than exists, and raising the emotional stakes of that disagreement more than seems ne... (read more)

Feedback:

"please don't shitpost and when you engage with me please avoid all attempts at humor because these pattern-match to ways I am abused and if you do those things even if in good faith it will only hurt our communication, perhaps disastrously, never help" would, I think, cover basically everything you want to cover without also signaling that it will be extremely emotionally draining to engage with you.

OTOH if it will be extremely emotionally draining to engage with you then you have successfully signaled that.

Possibly this isn't fair but I'm pretty sure it's an accurate reading.

[+9] tomcatfish (6d): I would note the similarity between "haha yeah" and the stated lack of punctuation and capitalization in "shitposts", which are supposed to be light jokes. Also, you say [...] That argument is also a valid argument against making emotional appeals about your past mental state and how it has been affected by this argument. There are rumors about mathematicians being driven mad by the concept of infinity. This doesn't make them very good at teaching calculus in college, but rather the opposite.

How would you feel if you sunk forty months of your life into deconfusing a philosophical issue that had huge, life-altering practical stakes for you, and the response to your careful arguments from community authorities was a dismissive "haha yeah"? Would you, perhaps, be somewhat upset?

Perhaps! But also that doesn't seem to me like what happened. The response to your careful arguments was the 1000ish words that engage with what seemed to me to be the heart of your questions, and that attempted to convey some ways it seemed you were misunderstanding my... (read more)

Philosophy in the Darkest Timeline: Basics of the Evolution of Meaning

As a longtime Internet ratsphere person, but not a traditional philosophy nerd, the idea [...] never occurred to me.

Are you sure that's not the other way around?? (I don't think Brian Skyrms is a traditional philosopher.)

[+3] Icarus Gallagher (4d): I had never heard of him before! I was referring to a cloud of implicit norms I feel like I've seen around something I would call "traditional philosophy", but part of what makes "traditional philosophy" stand out as a distinct thing in my mind is that those norms exist in it. I should probably be clearer.
Reply to Nate Soares on Dolphins

(I've drafted a 3000 word reply to this, but I'm waiting on feedback from a friend before posting it.)

Reply to Nate Soares on Dolphins

you tend to get a bit worked up sometimes

Well, yes. I've got Something to Protect.

[+3] Slider (4d): I read it to be more about getting over your workedness to get to the goal.
Reply to Nate Soares on Dolphins

Thanks, you are right and the thing I originally typed is wrong. I edited the comment.

Reply to Nate Soares on Dolphins

Thanks for the reply! (Strong-upvoted.) I've been emotionally trashed today and didn't get anything done at my dayjob, which arguably means I shouldn't be paying attention to Less Wrong, but I feel the need to type this now in the hopes of getting it off my mind so that I can do my dayjob tomorrow.

In your epistemic-status thread, you express sadness at "the fact that nobody's read A Human's Guide to Words or w/e". But, with respect, you ... don't seem to be behaving as if you've read it? Specifically, entry #30 on the list of "37 Ways Words Can Be Wrong" i... (read more)

So, I'm not a biologist. I don't think Eliezer is much of a biologist either. A thing that I learned in the last ten years, which maybe Nate and Eliezer learned in the same time, idk, is that different aquatic animals are more distantly related than one might have thought. For example, let's take the list from 2008. When I go on Wikipedia and try to find an appropriate scientific name for each and stick it into timetree.org to try to figure out when their most recent common ancestor was, I get the following estimates:

Salmon and Guppies: 206 MYA
Trout and Gu

... (read more)

Am I the only one creeped out by this?

Usually I don't think short comments of agreement really contribute to conversations, but this is actually critical and in the interest of trying to get a public preference cascade going: No. You are not the only one creeped out by this. The parts of The Sequences which have held up the best over the last decade are the refinements on General Semantics, and I too am dismayed at the abandonment of carve-reality-at-its-joints.

[+2] Jiro (10d): Scott Alexander's essay [https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/] uses the example of fish versus whales to argue that transgender people should be classified by whatever sex they claim to be rather than classified by biological sex. This essay came out after 2008 and before 2021. And Scott Alexander is about as influential here as Yudkowsky. In other words, what changed is that asserting that it makes sense to classify dolphins as fish is now something you need to assert for political purposes. Edit: I missed the reference to gender issues. But I think it may explain why Yudkowsky and rationalists in general have changed their mind, regardless of why anyone in particular here has.

Or, imagine if in 2014, Yudkowsky suddenly started saying the Copenhagen interpretation of quantum mechanics is correct, without acknowledging that anything had changed. That's how weird this is.

This is a strong overstatement. Eliezer clearly has invested orders of magnitude more effort in defense of his MWI stance than he did in defense of his original dolphins-aren't-fish "stance".

Why? What changed?

On the object level question? Like, what changed between younger Nate who read that sequence and didn't object to that example, and older Nate who asserted (in a humorous context) that it takes staggering gymnastics to exclude dolphins from "fish"? Here's what springs to mind:

  • I noticed that there is cluster-nature to the marine-environment adaptations
  • I noticed that humans used to have a word for that cluster
  • I noticed a sense of inconsistency between including whales in "even-toed ungulates" without also including them in "lobe-finn
... (read more)
[+6] Ben Pace (13d): Wow, am surprised that the dolphins example has 180'd in Nate's recent thread. I do endorse shitposting as a form of posting, it's great and I'd like Nate to do more.
[+4] lsusr (1mo): Thanks. Fixed. I wrote the table just now. I was just reading the FAQ [https://www.lesswrong.com/posts/2rWKkWuPrgTMpLRbp/lesswrong-faq-1#Karma___Voting] which links to outdated source code [https://github.com/LessWrong2/Lesswrong2/blob/260717952f0b860a03939f13741752f3d462c7af/packages/lesswrong/lib/modules/voting/new_vote_types.js].
[+4] lsusr (1mo): I'm confused. The source code seems to imply that anyone with 25,000 karma or more has a small vote power of 3, but the list of actual user vote-powers implies it maxes out at 2. For those too lazy to read the source code:

SMALL VOTES

    user karma    small vote weight
    0-999         1
    1000-∞        2

BIG VOTES

    user karma       big vote weight
    0-9              1
    10-99            2
    100-249          3
    250-499          4
    500-999          5
    1000-2499        6
    2500-4999        7
    5000-9999        8
    10000-24999      9
    25000-49999      10
    50000-74999      11
    75000-99999      12
    100000-174999    13
    175000-249999    14
    250000-499999    15
    500000-∞         16
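As a sanity check, the table in the comment above can be transcribed into a small lookup. This is a sketch with the thresholds copied from the comment's table, not from the actual LessWrong source, and the function names are my own:

```python
import bisect

# Big-vote thresholds as transcribed from the table above (an assumption,
# not the actual LessWrong source): each threshold reached adds 1 to the weight.
BIG_THRESHOLDS = [10, 100, 250, 500, 1000, 2500, 5000, 10000, 25000,
                  50000, 75000, 100000, 175000, 250000, 500000]

def small_vote_weight(karma):
    # Per the small-votes table: weight 2 at 1000+ karma, else 1.
    return 2 if karma >= 1000 else 1

def big_vote_weight(karma):
    # Count how many thresholds the user's karma has reached.
    return 1 + bisect.bisect_right(BIG_THRESHOLDS, karma)

print(small_vote_weight(25000))  # 2 (the observed max the comment mentions)
print(big_vote_weight(25000))    # 10
```

Under this transcription, small-vote weight does max out at 2, which matches the observed list rather than the "3 at 25,000 karma" reading of the outdated source.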
Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda

Okay. I give up. I really liked your 11 May comment, and it made me optimistic that this conversation would lead somewhere new and interesting, but I'm not feeling optimistic about that anymore. (You probably aren't, either.) This was fun, though: thanks! You're very good at what you do!

[+2] gjm (1mo): OK. I'm not sure to what extent I'm supposed to take the last comment as an insult ("you're very good at emitting sophistical bullshit" or whatever), but no matter :-). I don't know that I was feeling optimistic, but I had had some hopes that you might be persuaded to engage with what seem like key criticisms rather than just dismissing them. But you certainly should feel obliged to engage with someone you aren't finding it worthwhile arguing with. [EDITED to add:] Er, oops, of course I mean you shouldn't feel obliged. By the way, I see that at least one earlier comment of yours in this thread has been downvoted; it wasn't by me.
Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda

I'm not sure exactly what distinction you're appealing to

Thanks for asking! More detail: if you're building a communication system to transmit information from one place to another, the signals/codewords you use are arbitrary in the sense that it doesn't matter which you use as long as the receiver of the signals knows what they mean (the conditions under which they are sent).

(Well, the codeword lengths turn out to matter, but not the codewords themselves.)

If I'm publishing a weather report on my website about whether it's "sunny" or "cloudy" today, it ... (read more)
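The "codeword lengths matter, but the codewords themselves don't" point can be made concrete with a minimal sketch (the probabilities and codes here are made up for illustration):

```python
# Hypothetical two-signal weather code: which binary string names which state
# is arbitrary, but which state gets the *short* string is not.
probs = {"sunny": 0.75, "cloudy": 0.25}

def expected_length(code):
    # Average number of bits per report under the given codeword assignment.
    return sum(probs[state] * len(code[state]) for state in probs)

code_a = {"sunny": "0",  "cloudy": "10"}  # short word on the common state
code_b = {"sunny": "1",  "cloudy": "01"}  # same lengths, different labels
code_c = {"sunny": "10", "cloudy": "0"}   # short word on the rare state

print(expected_length(code_a))  # 1.25: efficient
print(expected_length(code_b))  # 1.25: relabeling changes nothing
print(expected_length(code_c))  # 1.75: swapping the lengths costs bits
```

Relabeling the signals leaves the expected message length untouched; reassigning the lengths does not.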

Definitions aren't generally arbitrary in communication for reasons similar to why they aren't arbitrary in cognition; if I define "woman" to mean "adult female human" (for some possibly-contentious definition of "female") I will communicate more effectively than if I define it to mean "adult female human who is not called Jane, OR 4x2 lego brick" (same definition of "female"), even if everyone knows what definitions I am using. I think the distinction that's doing the actual work isn't between communication and cognition, but between proper nouns (where the... (read more)

There’s no such thing as a tree (phylogenetically)

Yeah, not-loving the way this thread turned out makes sense. Sorry. Please make sure to downvote any comments that you think are bad.

Containment Thread on the Motivation and Political Context for My Philosophy of Language Agenda

(A reply to gjm, split off from the comments on "There's No Such Thing as a Tree")

would you care either to argue for that principle or explain what weaker principle you are implicitly appealing to here?

No, not really. What actually happened here was, I was annoyed at being accused of not understanding something I've been obsessively explaining and re-explaining for multiple years—notice the uncanny resemblance between your comment ("If I and the people I need to talk to about pumpkins spend our days [...]") and one of my replies from March 2019 (!) to ... (read more)

[+9] gjm (1mo): The examples seem relevant to me because they illustrate that language is not used only to predict, that the merits of a particular language-using strategy are not determined only by its impact on predictive accuracy. If language in general has proper goals other than predictive accuracy, why should I think that category-boundary drawing has no proper goal other than predictive accuracy? I'm not sure exactly what distinction you're appealing to, by the way. In particular, you say "the communicative function of proper names ... the cognitive function of categories" and it's not clear to me whether (1) you're suggesting that proper names are used primarily for communication while categories are used primarily for cognition, or (2) you're saying that your complaints about talk of arbitrariness apply only when thinking about cognition as opposed to communication, or (3) something entirely different. I say that proper names and category boundaries are highly relevant to both communication and cognition, and that some of the examples that apparently bother you most seem much more about communication than about cognition (e.g., being asked to use pronouns that you find inappropriate for the people they refer to, which you say amounts to asking you to lie, clearly a communicative category more than a cognitive one). It's entirely possible that your meaning is something entirely different, of course; please feel free to enlighten me if so. The pouncing algorithm can't literally be "pounce when people say nice things about TCWMFM" because in the particular case that sparked this particular discussion that didn't happen but you still pounced. It probably won't surprise you that I don't agree with your description of TCWMFM as pants-on-fire mendacious, and I don't think the edit at the end undermines the foregoing material nearly as much as you say it does.
There are rules, but the rules are of the form "if you want your thinking to be optimal in such-and-such a way, you ha
There’s no such thing as a tree (phylogenetically)

the point is that all these things require some sort of notion of distance, size, etc., in concept-space.

I agree. Did ... did you read "Unnatural Categories Are Optimized for Deception"? The post says this very explicitly in quite a lot of detail with specific numerical examples! (Ctrl-F for "metric".)

If you're going to condescend to me like this, I think I deserve an answer: did you read the post, yes or no? I know, it's kind of long (just under 10,000 words). But ... if you're going to put in the effort to write 500 words allegedly disproving what I "... (read more)

[The following is rather long; I'd offer the usual Pascal quotation but actually I'm not sure how much shorter it could actually be. I hope it isn't too tedious to read. It is quite a bit shorter than "Unnatural Categories are Optimized for Deception".]

I don't really understand what in what I wrote you're interpreting as condescension, but for what it's worth none was intended.

No, I don't think I ever read UCAOFD in any detail. The "did you read ...?" seems, on the face of it, to be assuming a principle along the lines of "you should not say that someone i... (read more)

There’s no such thing as a tree (phylogenetically)

Or I'm speaking a slightly different dialect of English from you?? As a point of terminology, I think "fuzzy" is a better word than "arbitrary" for this kind of situation, where I agree that, as a human having a casual conversation, my response to "Is a pumpkin a fruit?" is usually going to be something like "Whatever; if it matters in context, I'll ask for more specifics", but as a philosopher of science, I claim that there are definite mathematical laws governing the relationship between what communication signals are sent, and what probabilistic inferences ... (read more)

[+2] Douglas_Knight (18d): The difference between "fuzzy" and "arbitrary" is fuzzy, but we should prefer one word over the other.

I don't love this thread - your first comment reads like you're correcting me on something or saying I got something important philosophically wrong, and then you just expand on part of what I wrote with fancier language. The actual "correction", if there is one, is down the thread and about a single word used in a minor part of the article, which, by your own findings, I am using in a common way and you are using in an idiosyncratic way. ...It seems like a shoehorn for your pet philosophical stance. (I suppose I do at least appreciate you confining t... (read more)

You keep saying this (and other roughly-equivalent things) but I think it's just wrong.

If you pick a measure on your concept-space, you can use it to define a notion of entropy, and then you can ask what clusterings permit maximally efficient communication. It's not clear that communication efficiency is the thing we want to maximize, and if you permit approximate transmission of information then you may actually want to minimize something like cost of errors + cost of communication, and for that you need not merely a measure but a metric. Anyway, the poin... (read more)

There’s no such thing as a tree (phylogenetically)

On the specific example of trees, John Wentworth recently pointed out that neural networks tend to learn a "tree" concept: a small, local change to the network can add or remove trees from generated images. That kind of correspondence between human and unsupervised (!) machine-learning model concepts is the kind of thing I'd expect to happen if trees "actually exist", rather than trees being weird and a little arbitrary. (Where things are closer to "actually existing" rather than being arbitrary when different humans and other AI architectures end up conve... (read more)

Oh, I think you're over-extrapolating what I meant by arbitrary - like I say toward the end of the essay, trees are definitely a meaningful category. Categories being "a little arbitrary" doesn't mean they're not valuable - is there a clear difference between a tree and a shrub? Maybe, but I don't know what it is if so, and it seems like plausibly not. The fruit example is even clearer - is a grape a berry? Is a pumpkin a fruit? Who cares? Probably lots of people, depending on the context? Most common human categories work like this around the edges if you... (read more)

[+3] Slider (2mo): This is more of a contrast, but this line of thinking could be used to remedy the claim that dolphins are fishes. That is, the branch of the tree of life called "fish" is a different concept than "thing that swims to survive". In this sense "fishes" don't inherently breathe or have gills. A whole lot of properties would probably be a "freedom degree", while the phylogenetics probably has a lot of "accidental" properties.
Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems

I expected you to realize how wrong everything you said was

What parts, specifically, are wrong? What is the evidence that shows that those parts are wrong? Please tell me! If I'm wrong about everything, I want to know!

Well, I was smiling and thinking "egg" after just a couple minutes because you describe an awful lot of little pieces of evidence that point towards being trans. So I expected that hrt would be a transformative sweep-away-all-doubt experience, like how it was for me. And on the off chance that you weren't trans, then going on estrogen would cause dysphoria in the reverse of how being on testosterone messed my mind up.

And neither of those things happened! Which means it wasn't you that was wrong, it was me.

There’s no such thing as a tree (phylogenetically)

Acknowledge that all of our categories are weird and a little arbitrary

That is not the moral! The moral is that the cluster-structure of similarities induced by phylogenetic relatedness exists in a different subspace from the cluster-structure of similarities induced by convergent evolution! (Where the math jargon "subspace" serves as a precise formalization of the idea that things can be similar in some aspects ("dimensions") while simultaneously being different in other aspects.) This shouldn't actually be surprising if you think about what the phrase... (read more)

That's a good expanded takeaway of part of it! (Obviously "weird and a little arbitrary" is kind of nebulous, but IME it's a handy heuristic you've neatly formalized in this case.) To be clear, it doesn't sound like we disagree?

The consequentialist case for social conservatism, or “Against Cultural Superstimuli”

actual trans people, or perverts willing to pretend to be trans if it allows them to sneak into female toilets

It gets worse: if the dominant root cause of late-onset gender dysphoria in males is actually a paraphilic sexual orientation, this is a false dichotomy! (It's not "pretending" if you sincerely believe it.)

[+2] Viliam (2mo): No comment on your link, but by "perverts" in this context I specifically meant guys who get turned on by being in a public toilet with (other) women. The idea is that being an object of such desire might make the women quite uncomfortable, and yet there is nothing they can do about it without risking being accused of transphobia.
The consequentialist case for social conservatism, or “Against Cultural Superstimuli”

So, I started writing an impassioned reply to this (draft got to 850 words), but I've been trying to keep my culture war efforts off this website (except for the Bayesian philosophy-of-language sub-campaign that's genuinely on-topic), so I probably shouldn't take the bait. (If nothing else, it's not a good use of my time when I have lots of other things to write for my topic-specific blog.)

If I can briefly say one thing without getting dragged into a larger fight, I would like to note that aggressively encouraging people to consider whether they might be t... (read more)

[+6] gjm (2mo): All noted. For the avoidance of doubt, I'm not commenting here on whether any given "trans activism" is helpful or harmful (on net, or to any particular person). I just thought Sophronius made a really bad argument in the OP, and continued to make bad arguments in reply to me.
[+1] Gerald Monroe (2mo): Can't do it without enough power to overthrow a western government. Only thing that could even theoretically do that would be a TAI fighting on your side...
Why We Launched LessWrong.SubStack

Has anyone tried buying a paid subscription? I would assume the payment attempt just fails unless your credit card has a limit over $60,000, but I'm scared to try it.

[+1] razeroxe (3mo): Tried, and yes... it fails. Ha. I definitely do not have anywhere close to that $ limit.
On future people, looking back at 21st century longtermism

I imagine them going: "Whoa. Basically all of history, the whole thing, all of everything, almost didn't happen."

But this kind of many-worldeaters thinking is already obsolete. It won't be that it "almost" didn't happen; it's that it mostly didn't happen. (The future will have the knowledge and compute to say what the distribution of outcomes was for a specified equivalence class of Earth-analogues across the multiverse.)

[+8] johnswentworth (3mo): It's not that LW doesn't have emoji reactions. It's just that it has to be worth a BIG emoji reaction.
Viliam's Shortform

YouTube lets me watch the video (even while logged out). Is it a region thing?? (I'm in California, USA). Anyway, the video depicts

dirt, branches, animals, &c. getting in Rapunzel's hair as it drags along the ground in the scene when she's frolicking after having left the tower for the first time, while Flynn Rider offers disparaging commentary for a minute, before declaring, "Okay, this is getting weird; I'm just gonna go."

If you want to know how it really ends, check out the sequel series!

Unnatural Categories Are Optimized for Deception

So, I like this, but I'm still not sure I understand where features come from.

Say I'm an AI, and I've observed a bunch of sensor data that I'm representing internally as the points (6.94, 3.96), (1.44, -2.83), (5.04, 1.1), (0.07, -1.42), (-2.61, -0.21), (-2.33, 3.36), (-2.91, 2.43), (0.11, 0.76), (3.2, 1.32), (-0.43, -2.67).

The part where I look at this data and say, "Hey, these datapoints become approximately conditionally independent if I assume they were generated by a multivariate normal with mean (2, -1), and covariance matrix [[16, 0], [0, 9]][1]; le... (read more)

Is it just, like, whatever transformations of a big buffer of camera pixels let me find conditional independence patterns probably correspond to regularities in the real world? Is it "that easy"??

Roughly speaking, yes.

Features are then typically the summary statistics associated with some abstraction. So, we look for features which induce conditional independence patterns in the big buffer of camera pixels. Then, we look for higher-level features which induce conditional independence between those features. Etc.
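Using the ten datapoints from the comment above, the "features are the summary statistics" move can be sketched in a few lines (this is my own illustration of the point, under the Gaussian model the comment hypothesizes):

```python
# The ten sensor readings from the comment above.
points = [(6.94, 3.96), (1.44, -2.83), (5.04, 1.1), (0.07, -1.42),
          (-2.61, -0.21), (-2.33, 3.36), (-2.91, 2.43), (0.11, 0.76),
          (3.2, 1.32), (-0.43, -2.67)]

n = len(points)
# Under the hypothesized multivariate-normal model, these summary statistics
# *are* the cluster's features: conditioned on (mean, covariance), the points
# are modeled as independent draws.
mean = tuple(sum(p[i] for p in points) / n for i in range(2))

def cov(i, j):
    # Sample covariance entry: average product of deviations from the mean.
    return sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in points) / (n - 1)

print([round(m, 2) for m in mean])
print([[round(cov(i, j), 2) for j in range(2)] for i in range(2)])
```

The sample estimates won't exactly match the generating parameters (2, -1) and [[16, 0], [0, 9]] from only ten draws, which is the usual inference problem, not a flaw in the framing.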

Unnatural Categories Are Optimized for Deception

(Thinking out loud about how my categorization thing will end up relating to your abstraction thing ...)

200-word recap of my thing: I've been relying on our standard configuration space metaphor, talking about running some "neutral" clustering algorithm on some choice of subspace (which is "value-laden" in the sense that what features you care about predicting depends on your values). This lets me explain how to think about dolphins: they simultaneously cluster with fish in one subspace, but also cluster with other mammals in a different subspace, no contr... (read more)

Yup, this seems basically right.

I have a (still incomplete) draft here which specifically addresses why the configuration space metaphor works. Short version: the key property of (Bayesian) clustering is that the points in a cluster are conditionally independent given the summary data of the cluster. For instance, if I have Gaussian clusters, then each point within a given cluster is independent given the mean and variance of that cluster. The prototypical "clustering problem" is to assign points to clusters in such a way that this works. So, for instance,... (read more)
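The "dolphins cluster with fish in one subspace, with mammals in another" picture from the recap above can be sketched with toy binary features (the feature list and values are my own illustration, not anyone's actual taxonomy):

```python
# Hypothetical features: (lives_in_water, streamlined_body, breathes_air, produces_milk)
animals = {
    "salmon":  (1, 1, 0, 0),
    "trout":   (1, 1, 0, 0),
    "dolphin": (1, 1, 1, 1),
    "cow":     (0, 0, 1, 1),
}

def groups(subspace):
    # Cluster by exact match on the chosen feature dimensions (the "subspace").
    clusters = {}
    for name, feats in animals.items():
        key = tuple(feats[i] for i in subspace)
        clusters.setdefault(key, []).append(name)
    return sorted(sorted(members) for members in clusters.values())

print(groups([0, 1]))  # marine-adaptation subspace: dolphin groups with the fish
print(groups([2, 3]))  # physiology subspace: dolphin groups with the cow
```

Same points, different subspaces, different clusterings; there is no contradiction because the two clusterings answer different prediction questions.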

What's So Bad About Ad-Hoc Mathematical Definitions?

This is related to something I never quite figured out in my cognitive-function-of-categorization quest. How do we quantify how good a category is at "carving reality at the joints"?

Your first guess would be "mutual information between the category-label and the features you care about" (as suggested in the Job parable in April 2019's "Where to Draw the Boundaries?"), but that actually turns out to be wrong, because information theory has no way to give you "partial credit" for getting close to the right answer, which we want. Learning whether a number bet... (read more)

(This is a good example. I'm now going to go on a tangent mostly unrelated to the post.)

I think you were on the right track with mutual information. The key insight here is not an insight about what metric to use; it's an insight about the structure of the world and our information about the world.

Let's use this example:

Learning whether a number between 1 and 10 inclusive is even or odd gives you the same amount of information (1 bit) as learning whether it's over or under 5½, but if you need to make a decision whose goodness depends continuously on the m

... (read more)
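The even/odd versus over/under-5½ example quoted above can be made concrete (a sketch of my own; squared error of the best guess stands in for "a decision whose goodness depends continuously on the magnitude"):

```python
import math
from collections import Counter

numbers = range(1, 11)

def entropy_bits(label):
    # Entropy of the answer to the question defined by `label`.
    counts = Counter(label(x) for x in numbers)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def parity(x):
    return x % 2 == 0

def halves(x):
    return x > 5.5

def expected_sq_error(label):
    # Guess the mean of the cell the answer puts you in; average squared error.
    cells = {}
    for x in numbers:
        cells.setdefault(label(x), []).append(x)
    total = 0.0
    for cell in cells.values():
        m = sum(cell) / len(cell)
        total += sum((x - m) ** 2 for x in cell)
    return total / len(numbers)

print(entropy_bits(parity), entropy_bits(halves))  # 1.0 1.0: same information
print(expected_sq_error(parity))  # 8.0: parity says almost nothing about magnitude
print(expected_sq_error(halves))  # 2.0: over/under-5.5 localizes the number
```

Both questions carry exactly one bit, yet one gives far more "partial credit" toward the magnitude, which is the thing plain mutual information fails to distinguish.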
Trapped Priors As A Basic Problem Of Rationality

Maybe it's unfortunate that the same word is overloaded to cover "prior probability" (e.g., probability 0.2 that dogs are bad), and "prior information" in the sense of "a mathematical object that represents all of your starting information plus the way you learn from experience."

Where does the phrase "central example" come from?

Implied by "the noncentral fallacy"? (I'm surprised at the search engine results (Google, DuckDuckGo); I didn't realize this was a Less Wrong-ism.)

Defending the non-central fallacy

And a more natural clustering would reflect that.

What subspace are you doing your clustering in, though? Both the pro-capital-punishment and anti-capital-punishment side should be able to agree that capital punishment and "central" murder are similar in the "intentional killing of a human" aspects, but differ in the "motives and decision mechanism of the killer" aspects (where the "central" murderer is an individual, rather than a judicial institution). Each side has an incentive to try to bind the murder codeword in their shared language to a subspace ... (read more)

Unconvenient consequences of the logic behind the second law of thermodynamics

if entropy is decreasing maybe your memory is just working "backwards"

I think the key to the puzzle is likely to be here: there's likely to be some principled reason why agents embedded in physics will perceive the low-entropy time direction as "the past", such that it's not meaningful to ask which way is "really" "backwards".

[+1] ForensicOceanography (3mo): No. As I said in this comment [https://www.lesswrong.com/posts/3eb5skzBJnJkrYb6M/unconvenient-consequences-of-the-logic-behind-the-second-law?commentId=3ohgCnPJMjzMdaYFv], this cannot be true, otherwise in the evening you would be able to make prophecies about the following morning. Your brain cannot measure the entropy of the universe, and its own entropy is not monotone with time.
[+5] Adele Lopez (3mo): Yes! As Jaynes [https://bayes.wustl.edu/etj/articles/theory.2.pdf] teaches us: "[T]he order of increasing entropy is the order in which information is transferred, and has nothing to do with any temporal order."

Here's the way I understand it: A low-entropy state takes fewer bits to describe, and a high-entropy state takes more. Therefore, a high-entropy state can contain a description of a low-entropy state, but not vice-versa. This means that memories of the state of the universe can only point in the direction of decreasing entropy, i.e. into the past.
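The "fewer bits to describe" claim above can be illustrated with a general-purpose compressor standing in for description length (a rough sketch of my own, not a thermodynamic argument):

```python
import random
import zlib

# A low-entropy arrangement versus a shuffled (high-entropy) arrangement of
# the same parts: same "material", very different description lengths.
ordered = b"0" * 500 + b"1" * 500
cells = list(ordered)
random.Random(0).shuffle(cells)
shuffled = bytes(cells)

print(len(zlib.compress(ordered)))   # short description
print(len(zlib.compress(shuffled)))  # much longer description
```

The ordered state compresses to a far shorter description than any shuffled arrangement of the same parts, so a record of the ordered state fits inside a description of the disordered one but not vice versa.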

Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think

Cade Metz hadn't had this much trouble with a story in years. Professional journalists don't get writer's block! Ms. Tam had rejected his original draft focused on the subject's early warnings of the pandemic. Her feedback hadn't been very specific ... but then, it didn't need to be.

For contingent reasons, the reporting for this piece had stretched out over months. He had tons of notes. It shouldn't be hard to come up with a story that would meet Ms. Tam's approval.

The deadline loomed. Alright, well, one sentence at a time. He wrote:

In one post, he align

... (read more)
[+4] Ben Pace (4mo): Good point. In Eliezer's defense I'll note that the original proposal took pains to say "At least as honest as an unusually honest person AND THEN also truthful in communicating about your meta-level principles about when you'll lie", so the above isn't a literal following of what Eliezer said (because I don't think an unusually honest person would write that). But I think that was not a very natural idea, and I mostly think of meta-honesty as about being honest on the meta level, and that it's important, but I don't think of it as really tied up with the object level being super honest.
Anna and Oliver discuss Children and X-Risk

being the-sort-of-person-who-chooses-to-have-kids

What years were most of these biographies about? Sexual marketplace and family dynamics have changed a lot since, say, 1970ish. (Such that a lot of people today who don't think of themselves as the-sort-of-person-who-chooses-to-have-kids would absolutely be married with children had someone with their genotype grown up in an earlier generation.)

[+3] David Hornbein (4mo): At conservative estimates, I've looked into dozens of significant pre-industrial people, dozens of significant people between the Industrial Revolution and 1970, and >100 significant post-1970 people. Among historically significant people and leaders-of-fields who get articles and books written about them, there has not been any change in who has kids large enough to jump out at me, except that in the past ~20 years there have been somewhat more openly gay entrepreneurs in the West.
Anna and Oliver discuss Children and X-Risk

Two complementary pro-natalist considerations I'd like to see discussed:

... (read more)
Above the Narrative

Consider adapting this into a top-level post? I anticipate wanting to link to it (specifically for the "smaller audiences offer more slack" moral).

4Viliam4moAh, no thanks. It's just a rewording of (my understanding of) Jacob's article, plus an attempt to preempt the obvious question: "why does the outgroup 'follow the narrative', but the ingroup 'speaks their mind freely', ain't that a bit too convenient?". Also, as Kaj's link suggests, my idea of "eigen-opinion" may be mathematically elegant, but it's not how things actually happened. Unless we take it one level up and say that the NYT was constrained in their choice of narrative. Maybe, dunno. But the proximate cause of NYT reporters writing as they do is "being ordered to do so by their boss", which is quite a boring explanation, so perhaps the real lesson here is not to skip boring explanations in favor of looking for mathematically elegant ones. And... although I am not sure whether this is a good lesson... maybe also not to try too hard to be charitable to assholes. (Of course, it is difficult to find the right amount of charity in situations where I already take sides.) I mean, in some sense my explanation was an attempt to partially excuse the NYT as victims or maybe collaborators of a stronger force, as opposed to being an uncaused cause of bad things. But they had more agency than I attributed to them, and they knowingly used it for evil.
Google’s Ethical AI team and AI Safety

people are afraid to engage in speech that will be interpreted as political [...] nobody is actually making statements about my model of alignment deployment [...] try to present the model at a further disconnect from the specific events and actors involved

This seems pretty unfortunate insofar as some genuinely relevant real-world details might not survive the obfuscation of premature abstraction.

Example of such an empirical consideration (relevant to the "have some members that keep up with AI Safety research" point in your hopeful plan): how much over... (read more)

“PR” is corrosive; “reputation” is not.

Thanks for the detailed reply! I changed my mind; this is kind of interesting.

This is not about "tone policing." This is about the fundamental thrust of the engagement. "You're wrong, and I'mm'a prove it!" vs. "I don't think that's right, can we talk about why?"

Can you say more about why this distinction seems fundamental to you? In my culture, these seem pretty similar except for, well, tone?

"You're wrong" and "I don't think that's right" are expressing the same information (the thing you said is not true), but the former names the speaker rather than... (read more)

7Raemon4moI want to quickly flag that I think the default way for this conversation to go in its current public form isn't very useful. I think giant meta discussions about culture can be good, but require some deliberate buy-in and expectation setting, which I haven't seen here yet. Zack and Duncan each have their own preferred ways of conducting these sorts of conversations (which are both different from my own preferred way), so I don't know that my own advice would be useful to either of them. But my suggestion, if the conversation is to continue, is to first ask "how much do we both endorse having this conversation, what are we trying to achieve, and how much time/effort does it make sense to put into it?". (i.e. have a mini kickstarter for "is this actually worth doing?") (It seemed to me that each comment-exchange in this thread, both from Duncan and Zack, introduced more meta concepts that took the conversation from a simple object-level dispute to "what is the soul of ideal truthseeking culture." I actually have some thoughts on the original exchange and how it probably could have been resolved without trying to tackle The Ultimate Meta, which I think is usually better practice, but I'm not sure that'd help anyone at this point)
“PR” is corrosive; “reputation” is not.

I also object to "would be very bad" in the subjunctive ... I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of "I apologize IF I offended anybody," when one clearly did offend.

So, I think it's important to notice that the bargaining problem here really is two-sided: maybe the one giving offense should be nicer, but maybe the one taking offense shouldn't have taken it personally?

I guess I just don't believe that thoughts end up growing better than they woul... (read more)

5Duncan_Sabien4moSo, by framing things as "taking offense" and "tone policing," I sense an attempt to invalidate and delegitimize any possible criticism on the meta level. The hypothesis "Actually, Zack's doing a straightforwardly bad thing on the regular with the adversarial slant of their pushback" starts out already halfway to being dismissed. I'm not "taking offense." I'm not pointing at "your comment made me sad and therefore it was bad," or "gosh, why did you use these words instead of these slightly different words which I'm arbitrarily declaring are better." I'm pointing at "your comment was exhausting, and could extremely easily have contained 100% of its value and been zero exhausting, and this has been true for many of the times I've engaged with you." You have a habit of choosing an unnecessarily exhaustingly combative method of engagement when you could just as easily make the exact same points and convey the exact same information cooperatively/collaboratively; no substantial emotional or interpretive labor required. This is not about "tone policing." This is about the fundamental thrust of the engagement. "You're wrong, and I'mm'a prove it!" vs. "I don't think that's right, can we talk about why?" Eric Rogstad (who's my mental exemplar of the virtue I'm pointing to here, though other people like Julia Galef and Benya Fallenstein also regularly exhibit it) could have pushed back every bit as effectively, and on every single detail, without being a dick. Eric Rogstad and Julia Galef and Benya Fallenstein are just as good as you at noticing wrongness that needs to be attacked, and they're better than you at not alienating the person who produced the mostly-right thought in the first place, and at not disincentivizing them from bothering to share their thoughts in the future. 
(I do not for one second buy your implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you're being adversarial because you genuinely believe that's t
8Ikaxas4moI think both are true, depending on the stage of development the thought is at. If the thought is not very fleshed out yet, it grows better by being nurtured and midwifed (see e.g. here [https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide#3__Models_of_social_dynamics] ). If the thought is relatively mature, it grows best by being intelligently attacked. I predict Duncan will agree.
“PR” is corrosive; “reputation” is not.

there are humans who do not laugh [...] humans who do not shiver when cold

Are there? I don't know! Part of where my comment was coming from is that I've grown wary of appeals to individual variation that are assumed to exist without specific evidence. I could easily believe, with specific evidence, that there's some specific, documented medical abnormality such that some people never develop the species-typical shiver, laugh, cry, &c. responses. (Granted, I am relying on the unstated precondition that, say, 2-week-old embryos don't count.) If you sh... (read more)

“PR” is corrosive; “reputation” is not.

Oh. I agree that introducing a burden on saying anything at all would be very bad. I thought I was trying to introduce a burden on the fake precision of using the phrase "many orders of magnitude" without being able to supply numbers that are more than 100 times larger than other numbers. I don't think I would have bothered to comment if the great-grandparent had said "a sign that you're wrong" rather than "a sign that you are many orders of magnitude more likely to be wrong than right".

The first paragraph was written from an adversarial perspective, but, ... (read more)

5Duncan_Sabien4moI mean the willful misunderstanding of the actual point I was making, which I still maintain is correct, including the bit about many orders of magnitude (once you include the should-be-obvious hidden assumption that has now been made explicit). The adversarial pretending-that-I-was-saying-something-other-than-what-I-was-clearly-saying (if you assign any weight whatsoever to obvious context) so as to make it more attackable and let you thereby express the performative incredulity you seemed to want to express, and needed more license for than a mainline reading of my words provided you. I also object to "would be very bad" in the subjunctive ... I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of "I apologize IF I offended anybody," when one clearly did offend. This interaction has certainly taken my barely-sufficient-to-get-me-here motivation to "try LessWrong again" and quartered it. This thread has not fostered a sense of "LessWrong will help you nurture and midwife your thoughts, such that they end up growing better than they would otherwise." I would probably feel more willing to believe that your nitpicking was principled if you'd spared any of it for the top commenter, who made an even more ambitious statement than I (it being absolute/infinite).
“PR” is corrosive; “reputation” is not.

if you find yourself typing a sentence about some behavioral trait being universal among humans with that degree of absolute confidence, you can take this as a sign that you are many orders of magnitude more likely to be wrong than right.

"Many orders of magnitude"? (I assume that means we're working in odds rather than probabilities; you can't get more than two orders of magnitude more probability than 0.01.) So if I start listing off candidate behavioral universals like "All humans shiver when cold", "All humans laugh sometimes", "All humans tell stori... (read more)
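(The arithmetic behind this objection can be made concrete with a small sketch; the `odds_ratio` helper is mine, not from the thread. Probability is bounded by 1, so "many orders of magnitude" only parses as a claim about the odds ratio P(wrong)/P(right), and large odds ratios require probabilities extremely close to 1.)

```python
import math

def odds_ratio(p_wrong: float) -> float:
    """Odds of being wrong vs. right, given P(wrong)."""
    return p_wrong / (1.0 - p_wrong)

# P(wrong) = 0.99 gives odds of 99:1 -- only about two orders of magnitude.
print(round(odds_ratio(0.99)))

# Reaching five orders of magnitude requires P(wrong) of about 0.99999.
print(round(math.log10(odds_ratio(0.99999))))
```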

7Duncan_Sabien4moYou're neglecting the unstated precondition that it's the type of sentence that would be generated in the first place, by a discussion such as this one. You've leapt immediately to an explicitly adversarial interpretation and ruled out meaning that would have come from a cooperative one, rather than taking a prosocial and collaborative approach to contribute the exact same information. (e.g. by chiming in to say "By the way, it seems to me that Duncan is taking for granted that readers will understand him to be referring to the set of such sentences that people would naturally produce when talking about culture and psychology. I think that assumption should be spelled out rather than left implicit, so that people don't mistake him for making a (wrong) claim about genuine near-universals like 'humans shiver when cold' that are only false when there are e.g. extremely rare outlier medical conditions." Or by asking something like "hey, when you say 'a sign' do you mean to imply that this is ironclad evidence, or did you more mean to claim that it's a strong hint? Because your wording is compatible with both, but I think one of those is wrong.") The adversarial approach you chose, which was not necessary to convey the information you had to offer, tends to make discourse and accurate thinking and communication more difficult, rather than less, because what you're doing is introducing an extremely high burden on saying anything at all. "If you do not explicitly state every constraining assumption in advance, you will be called out/nitpicked/met with performative incredulity; there is zero assumption of charity and you cannot e.g. trust people to interpret your sentences as having been produced under Grice's maxims (for instance)." The result is an overwhelming increase in the cost of discourse, and a substantial reduction in its allure/juiciness/expected reward, which has the predictable chilling effect. 
I absolutely would not have bothered to make my comment if I'd
Making Vaccine

I think another John Wentworth post is applicable here. It's not hard to invent reasons why any given post might increase existential risk by some amount. (What if your comment encourages pro-censorship attitudes that hamper the collective intellectual competence we need to reduce existential risk?) In order to not function as trolling, you need to present a case for the risk being plausible, not just possible.

1billmei4moTIL, thanks for the information on that. I'm not trying to troll, my apologies if my comment comes across that way. It's just interesting to me that this specific scenario was written about before, yet wasn't surfaced in the discussion.
Open & Welcome Thread – February 2021

Archived. (My guess is that no one bothered to preserve all content/links from the old Singularity Institute website when moving to the new post-MIRI-rebranding website; your intelligence.org link was presumably the product of a search-and-replace operation and probably never worked.)

1Yoav Ravid4moThanks! Perhaps this essay should be crossposted to LW to have a home (and so the links can lead somewhere).
Preface

Does this really have 534 legitimate votes (almost 400 more than the next-highest karma post dated in 2015), or was there a bug in the voting system? I could see "Preface" getting the most exposure (and therefore upvote "surface area") from people following links from an AI to Zombies ebook starting from the beginning, but I'd be surprised if that alone could account for the massive karma here.

7habryka4moNope, this is correct. For all logged-out users this was the top post on the frontpage for over 3 years, creating a lot more exposure, with a ton of new users voting on this post as their first action on the site.
2019 Review: Voting Results!

If it's easy, any chance we could get a variance (or standard deviation) column on the spreadsheet? (Quadratic voting makes it expensive to create outliers anyway, so throwing away the most passionate 50 percent of voters (as the interquartile range does) discards a lot of the actual dispersion signal.)
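(A minimal sketch of the point, with made-up vote data: under quadratic voting, strong votes are rare and expensive outliers, so the middle 50% of votes can look nearly flat. The interquartile range then reports almost no dispersion, while the standard deviation still registers the outliers.)

```python
import statistics

# Hypothetical vote tallies on one post: mostly small votes, plus two voters
# who paid the quadratic cost for strong votes (4 points, 9 points).
votes = [0, 0, 0, 1, 1, 1, 1, 1, 1, 4, 9]

q1, _, q3 = statistics.quantiles(votes, n=4)  # default 'exclusive' method
iqr = q3 - q1                                 # ignores the strong votes entirely
stdev = statistics.stdev(votes)               # still reflects them

print(iqr, round(stdev, 2))  # the IQR is far smaller than the standard deviation
```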

Open & Welcome Thread - January 2021

You may be thinking of Crystal Society? Best wishes, Less Wrong Reference Desk

2Richard_Kennaway5moThanks. While I'm here, if someone likes Ted Chiang and Greg Egan, who might they read for more of the same? "Non-space-opera rationalist SF that's mainly about the ideas" would be the simplest characterisation. The person in question is not keen on "spaceship stories" like (in his opinion) Iain M. Banks, and was unimpressed by Cixin Liu's "Three-Body Problem". I've suggested HPMoR, of course, but I don't think it took.
Richard Ngo's Shortform

If we can quantify how good a theory is at making accurate predictions (or rather, quantify a combination of accuracy and simplicity), that gives us a sense in which some theories are "better" (less wrong) than others, without needing theories to be "true".
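One way to cash out "accuracy plus simplicity" as a single number is an MDL-flavored score: the total log-probability a theory assigned to observed outcomes, minus a penalty for its description length. (This sketch is illustrative only; the function name and penalty weight are mine, not from the shortform.)

```python
import math

def theory_score(predicted_probs, complexity_bits, penalty_per_bit=1.0):
    """Higher is better: log2-likelihood of the observations, minus a
    penalty proportional to the theory's description length in bits."""
    accuracy = sum(math.log2(p) for p in predicted_probs)
    return accuracy - penalty_per_bit * complexity_bits

# A sharper theory (assigns 0.9 to each observed outcome) beats a maximally
# vague one (assigns 0.5), even though it costs more bits to write down.
sharp = theory_score([0.9, 0.9, 0.9, 0.9], complexity_bits=3)
vague = theory_score([0.5, 0.5, 0.5, 0.5], complexity_bits=1)
print(sharp > vague)
```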
