
gjm

Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated for several years. I live near Cambridge (UK) and work for Hewlett-Packard (who acquired the company that acquired what remained of the small company I used to work for, after they were acquired by someone else). My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.

If you're wondering why some of my very old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.

Wikitag Contributions

Crux · 2y · (+27/-22)

Comments (sorted by newest)
Do “adult developmental stages” theories have any pre-theoretical motivation?
gjm · 4h · 80

A nitpick:

Some sort of developmental-stage theory could be true without there being anything that quite looks like a strong correlation with age.

(Detailed example follows; feel free to skip if this is obvious once pointed out.)

Suppose everyone has some determinable number associated with them at any given time; let's call it their "quality". Everyone starts at 1. No one ever goes above 6. For a given person, quality only ever increases. But, tragically, "the good die young", and if we classify people by the highest quality they ever reach it looks like this:

  • Group 1: Q1 from age 0 to age 60.
  • Group 2: Q1 from age 0 to age 40, Q2 from age 40 to age 50.
  • Group 3: Q1 from age 0 to age 20, Q2 from age 20 to age 30, Q3 from age 30 to age 40.
  • Group 4: Q1 from age 0 to age 3, Q2 from age 3 to age 10, Q3 from age 10 to age 20, Q4 from age 20 to age 30.
  • Group 5: Q1 from age 0 to age 2, Q2 from age 2 to age 4, Q3 from age 4 to age 6, Q4 from age 6 to age 10, Q5 from age 10 to age 20.
  • Group 6: Q1 from age 0 to age 1, Q2 from age 1 to age 2, Q3 from age 2 to age 3, Q4 from age 3 to age 4, Q5 from age 4 to age 5, Q6 from age 5 to age 10.

Although the good die young, the gods smile on virtue, so the virtuous are more numerous: the number of people in each group is inversely proportional to the age they will attain.

So we might suppose a population with

  • 6000 group-1 people, 100 each at ages 0.5 .. 59.5, all at quality 1
  • 7200 group-2 people, 144 each at ages 0.5 .. 49.5, transitioning from quality 1 to quality 2 at age 40.
  • 9000 group-3 people, 225 each at ages 0.5 .. 39.5, transitioning at ages 20 and 30.
  • 12000 group-4 people, 400 each at ages 0.5 .. 29.5, transitioning at ages 3, 10, 20.
  • 18000 group-5 people, 900 each at ages 0.5 .. 19.5, transitioning at ages 2, 4, 6, 10.
  • 36000 group-6 people, 3600 each at ages 0.5 .. 9.5, transitioning at ages 1, 2, 3, 4, 5.

And this turns out to give a negative correlation between age and quality.
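The toy numbers above can be checked directly. Here is a stdlib-only sketch (ages taken at the half-year midpoints listed in the bullets) that computes the count-weighted Pearson correlation between age and quality for this hypothetical population:

```python
# Weighted correlation between age and "quality" for the toy population
# described above. All numbers come from the bullet lists; nothing else
# is assumed.

def quality(transitions, age):
    """Quality = 1 + number of transition ages already passed."""
    return 1 + sum(age > t for t in transitions)

# (count per half-year age bin, age attained, transition ages), groups 1..6
groups = [
    (100, 60, []),
    (144, 50, [40]),
    (225, 40, [20, 30]),
    (400, 30, [3, 10, 20]),
    (900, 20, [2, 4, 6, 10]),
    (3600, 10, [1, 2, 3, 4, 5]),
]

triples = []  # (weight, age, quality)
for count, max_age, transitions in groups:
    for i in range(max_age):
        age = i + 0.5
        triples.append((count, age, quality(transitions, age)))

n = sum(w for w, _, _ in triples)
mean_a = sum(w * a for w, a, _ in triples) / n
mean_q = sum(w * q for w, _, q in triples) / n
cov = sum(w * (a - mean_a) * (q - mean_q) for w, a, q in triples) / n
var_a = sum(w * (a - mean_a) ** 2 for w, a, _ in triples) / n
var_q = sum(w * (q - mean_q) ** 2 for w, _, q in triples) / n
r = cov / (var_a * var_q) ** 0.5

print(f"population = {n}, correlation(age, quality) = {r:.3f}")
```

With these numbers the correlation comes out clearly negative (around -0.2), even though quality never decreases for any individual.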

For the avoidance of doubt, I don't think it's terribly likely, conditional on some Kegan-like thing being true, that the distributions would look anything like this. Most likely, if something Kegan-like is true, age will correlate with stage. But it's not guaranteed.

Critic Contributions Are Logically Irrelevant
gjm · 1d · 151

"Maximizing X", in a vacuum, does indeed mean making X as large as possible while ignoring everything else. But we are not always in a vacuum. There is such a thing as "constrained optimization"; much of the time when someone refers to "maximizing X" it's in a context like "maximizing X while satisfying constraint C". There is such a thing as "multi-objective optimization" where you're trying to maximize X and also trying to maximize Y and you have to trade them off somehow.

So even in the technical language of mathematics "maximizing X" need not imply ignoring everything except X.
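A minimal illustration of the distinction, with a made-up objective, constraint, and penalty weight chosen purely to make the point concrete:

```python
# Three readings of "maximizing X" over the same candidate set.
# The objective, the constraint, and the trade-off weight are all
# hypothetical; only the distinction between them matters.

candidates = range(101)  # X ranges over 0..100

# Plain maximization: make X as large as possible, ignore everything else.
unconstrained = max(candidates)

# Constrained optimization: maximize X subject to a constraint C
# (here, hypothetically, "X must be a multiple of 7").
constrained = max(x for x in candidates if x % 7 == 0)

# Multi-objective flavour: maximize X while also caring about a second
# concern, traded off against X via a penalty term.
tradeoff = max(candidates, key=lambda x: x - 0.02 * (x - 30) ** 2)

print(unconstrained, constrained, tradeoff)  # prints: 100 98 55
```

All three are "maximizing X" in the mathematical sense, but only the first ignores everything except X.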

And, of course, someone writing or commenting or moderating on an internet forum is not literally solving mathematical optimization problems, and if you talk about them "caring about maximizing X" then it would literally never occur to me to interpret that as "caring about maximizing X and literally about nothing else".

... Having said which, I just polled a couple of other people of my acquaintance, both mathematicians and hence presumably more than averagely aware of the technical meaning of "maximizing", and they both said that they would interpret "X doesn't care about maximizing Y" as being consistent with X preferring Y to be bigger but also having other concerns.

I have therefore deleted the bit of this comment where I indignantly proclaim that I see no possible reason for writing the heading the way you did other than rhetorical sleight-of-hand :-), but it still seems to me that "... if you care about other things besides correctness" would be very much less liable to mislead or misdirect readers than "... if you don't care about maximizing correctness". (And I think the wording you ended up using at the end of the text of that section indicates that it's more natural to phrase things that way.)

Critic Contributions Are Logically Irrelevant
gjm · 2d · 232

The article has a heading of the form "X, if you don't care about maximizing correctness" followed by a (correct) discussion showing that "X, if you don't care exclusively about maximizing correctness".

Those two things are very different. In particular, around here almost everybody cares about maximizing correctness, but I would guess that very few people care about literally nothing else.

That discussion also makes the decision to take "ad revenue", something readers here may reasonably be expected to have some contempt for, as its working example of something other than correctness. I think this is part of the same rhetorical move: Zack acknowledges, formally, that it can be reasonable to care about things other than correctness (he also mentions, but decides not to use, the example of "total number of interesting ideas" which I would guess almost everyone here would agree is a thing worth wanting), but then he (1) chooses to instead emphasize something that most of us will feel icky about the idea of optimizing for[1] and (2) heads the section with a title that accuses people of not caring about correctness, which is extremely very utterly different from caring about other things besides correctness.

[1] Also something that, so far as I know, cannot possibly be a concern here on Less Wrong where there is no advertising revenue to try to maximize.

(I remark that this is not the first time I have had a discussion with Zack of much the same shape, where Zack implicitly or explicitly claims that epistemic virtue means caring only about one thing even when there seem (at least to me) to be other things a would-be clear thinker might reasonably care about. I think it might be the third or fourth time.)

[EDITED to add:] I realise that that last paragraph might be misleading; this is an instance of "me having a discussion with Zack" only in the sense that this comment here is by me and it's replying to something Zack wrote. I don't mean to imply that what Zack wrote had anything at all to do with me; it didn't.

Critic Contributions Are Logically Irrelevant
gjm · 2d · 132

I don't think this is an important factor that makes a big difference, but the identity of the author can make a difference to the value of the comment in the following way.

Consider a correction like the one Zack received on his post about discontinuous linear functions. A reader who sees such a correction from someone they know is good at mathematics can (at least in the absence of further argument about the correction) trust that they got it right, and update their opinions accordingly. If they see such a correction from some rando they know nothing about then they can't do that, and need to work through the mathematics themself, or wait for someone else to do it, or just put up with not knowing who's right.

(This is a separate point from the one, already acknowledged by Zack in the OP, that our estimate of comment value may be higher if we know the commenter is expert; I am saying that the actual value of a correct comment is higher if readers know the commenter is expert.)

the jackpot age
gjm · 8d · 40

They show up properly for me now.

the jackpot age
gjm · 8d · 70

I think readers using Firefox (or forks thereof) with its "enhanced tracking protection" turned on will not see images here; at any rate, they are blocked for me with the note "Socialtracking" in the network tab of the devtools, which I think means that ETP blocked them.

I don't know whether there's some other place to host images for LW articles that won't provoke such issues.

Subway Particle Levels Aren't That High
gjm · 11d · 131

I'm curious as to whether the "pretend it's a completely different person" schtick was just for fun or whether there was a deeper purpose to it (e.g., encouraging yourself to think about past-you as an entirely separate person to make it easier to rethink independently).

Raemon's Shortform
gjm · 11d · 81

This sounds like maybe the same phenomenon as reported by Douglas Hofstadter, as quoted by Gary Marcus here: https://garymarcus.substack.com/p/are-llms-starting-to-become-a-sentient

Kaj's shortform feed
gjm · 15d · 106

Could you please clarify what parts of the making of the above comment were done by a human being, and what parts by an AI?

Kabir Kumar's Shortform
gjm · 15d · 21

Sure, but plausibly that's Scott being unusually good at admitting error, rather than Tyler being unusually bad.

Posts (sorted by new)

  • "AI achieves silver-medal standard solving International Mathematical Olympiad problems" (133 points, 1y, 38 comments)
  • Humans, chimpanzees and other animals (21 points, 2y, 18 comments)
  • On "aiming for convergence on truth" (67 points, 2y, 55 comments)
  • Large language models learn to represent the world [Ω] (101 points, 2y, 20 comments)
  • Suspiciously balanced evidence (50 points, 5y, 24 comments)
  • "Future of Go" summit with AlphaGo (8 points, 8y, 3 comments)
  • Buying happiness (63 points, 9y, 34 comments)
  • AlphaGo versus Lee Sedol (30 points, 9y, 183 comments)
  • [LINK] "The current state of machine intelligence" (8 points, 10y, 3 comments)
  • Scott Aaronson: Common knowledge and Aumann's agreement theorem (23 points, 10y, 4 comments)