
gjm

Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated for several years. I live near Cambridge (UK) and work for Hewlett-Packard (who acquired the company that acquired what remained of the small company I used to work for, after they were acquired by someone else). My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.

If you're wondering why some of my very old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.

Comments (sorted by newest)

Subway Particle Levels Aren't That High
gjm · 1h · 20

I'm curious as to whether the "pretend it's a completely different person" schtick was just for fun or whether there was a deeper purpose to it (e.g., encouraging yourself to think about past-you as an entirely separate person to make it easier to rethink independently).

Raemon's Shortform
gjm · 1h · 20

This sounds like maybe the same phenomenon as reported by Douglas Hofstadter, as quoted by Gary Marcus here: https://garymarcus.substack.com/p/are-llms-starting-to-become-a-sentient

Kaj's shortform feed
gjm · 4d · 106

Could you please clarify what parts of the making of the above comment were done by a human being, and what parts by an AI?

Kabir Kumar's Shortform
gjm · 4d · 21

Sure, but plausibly that's Scott being unusually good at admitting error, rather than Tyler being unusually bad.

Kabir Kumar's Shortform
gjm · 5d · 41

It's still pretty interesting if it turns out that the only clear example to be found of T.C. admitting to error is in a context where everyone involved is describing errors they've made: he'll admit to concrete mistakes, but apparently only when admitting mistakes makes him look good rather than bad.

(Though I kinda agree with one thing Joseph Miller says, or more precisely implies: perhaps it's just really rare for people to say publicly that they were badly wrong about anything of substance, in which case it could be that T.C. has seldom done that but that this shouldn't much change our opinion of him.)

Every Major LLM Endorses Newcomb One-Boxing
gjm · 23d · 116

The language used by some of the LLMs in answering the question seems like pretty good evidence for the "they one-box at least partly because Less Wrong is in their training data" theory. E.g., if you asked a random philosopher for their thoughts on the Newcomb problem, I don't think most of them would call the predictor "Omega" and (less confidently) I don't think most of them would frame the question in terms of "CDT" and "EDT".

The Value Proposition of Romantic Relationships
gjm · 1mo · 82
  1. Unfortunately "love" means a lot of different things. If you answer "what is the best thing about romantic relationships?" with "love" then you haven't done anything to distinguish e.g. "feelings of gooey happiness in one another's presence" from "fierce commitment to giving the other person as good a life as possible" from "accepting the other person no matter what they do or what happens to them" from etc., etc.
  2. I think it's highly debatable whether what John is describing is "love" in any of the usual senses of that word. It's clearly related to it, just as it's related to (say) trust, but it's not the same thing.
  3. Even if it is, John has identified (or at least claims to have identified) something about the phenomenon that is much more specific than "love".
o3 Will Use Its Tools For You
gjm · 3mo · 71

Pedantic note: there are many instances of "syncopathy" that I am fairly sure should be "sycophancy".

(It's an understandable mistake -- "syncopathy" is composed of familiar components, which could plausibly be put together to mean something like "the disease of agreeing too much", which is, at least in the context of AI, not far off what sycophancy in fact means. Whereas if you can parse "sycophancy" at all you might work out that it means "fig-showing", which obviously has nothing to do with anything. So far as I can tell, no one actually knows how "fig-showing" came to be the term for servile flattery.)

Review: Planecrash
gjm · 6mo · 154

The Additional Questions Elephant (first image in article, "image credit: Planecrash") is definitely older than Planecrash; see e.g. https://knowyourmeme.com/photos/1036583-reaction-images for an instance from 2015.

Review: Planecrash
gjm · 6mo · 40

They're present on the original for which this is a linkpost. I don't know what the mechanism was by which the text was imported here from the original, but presumably whatever it was it didn't preserve the images.

Wikitag Contributions

Crux · 2y · (+27/-22)
133"AI achieves silver-medal standard solving International Mathematical Olympiad problems"
1y
38
21Humans, chimpanzees and other animals
2y
18
67On "aiming for convergence on truth"
2y
55
101Large language models learn to represent the world
Ω
2y
Ω
20
50Suspiciously balanced evidence
5y
24
8"Future of Go" summit with AlphaGo
8y
3
63Buying happiness
9y
34
30AlphaGo versus Lee Sedol
9y
183
8[LINK] "The current state of machine intelligence"
10y
3
23Scott Aaronson: Common knowledge and Aumann's agreement theorem
10y
4