LESSWRONG

JustisMills

Comments, sorted by newest
Daniel Kokotajlo's Shortform
JustisMills · 4d

I guess I was imagining an implied "in expectation": predictions about second-order effects of a certain degree of speculativeness are inaccurate enough that they're basically useless, and so shouldn't shift the expected value of an action. There are definitely exceptions, and it'd depend on how you formulate it, but "maybe my action was relevant to an emergent social phenomenon containing many other people with their own agency, and that phenomenon might be bad for abstract reasons, but it's too soon to tell" just feels like... you couldn't have anticipated that without being superhuman at forecasting, so you shouldn't grade yourself on the basis of it happening (at least for the purposes of deciding how to motivate future behavior).

JustisMills's Shortform
JustisMills · 5d

I think there's a weak moral panic brewing here in terms of LLM usage, leading people to jump to conclusions they otherwise wouldn't, and assume "xyz person's brain is malfunctioning due to LLM use" before considering other likely options. As an example, someone on my recent post implied that the reason I didn't suggest using spellcheck for typo fixes was because my personal usage of LLMs was unhealthy, rather than (the actual reason) that using the browser's inbuilt spellcheck as a first pass seemed so obvious to me that it didn't bear mentioning.

Even if it's true that LLM usage is notably bad for human cognition, it's probably bad to frame specific critique as "ah, another person mind-poisoned" without pretty good evidence for that.

(This is distinct from critiquing text for being probably AI-generated, which I think is a necessary immune reaction around here.)

So You Think You've Awoken ChatGPT
JustisMills · 5d

Also, a lot of spelling errors are near-misses that hit existing words. Of course you should use spellcheck to catch any typos that land on gibberish, though.

So You Think You've Awoken ChatGPT
JustisMills · 5d

Yeah, this is hard. Outside the (narrowly construed) LW bubble, I see LLM-generated text ~everywhere. For example, a friend sent me an ad he saw on Facebook (for the picture/product), and the text was super obviously created by AI. I think mostly people don't notice it, and even prefer it to uninspired non-AI-generated text.

(I am sure there are other bubbles than LW out there that react badly to AI-generated text, and perhaps there's a notable correlation between those bubbles and ones I'd consider good to be in.)

But if you're just sort of looking for higher engagement/more attention/to get your ideas out there to the public, yeah, it's tough to prove that AI usage (for writing copy) is an error. For whatever reason, lots of people like writing that hammers its thesis over and over in emotive ways, uses superficial contrasts to create artificial tension, and ironically uses "and that's important" as unimportant padding. In my mind I think of this as "the twitter style" and it annoys me even when it's clearly human-generated, but RLHF and the free market of Twitter both think it's maximally fit, so, well, here we are.

In terms of "why bother learn to write" more generally, I guess I would take that a level up. Why bother to blog? If it's in service of the ideas themselves, I think writing on one's own is valuable for similar reasons as "helping spread cool ideas" - it's virtuous and helps you learn to think more clearly. I wouldn't want to use AI to generate my writing in part because I'd like to look back at my own writing and smile at a job well done, and when I see AI-generated writing I do a little frown and want to skim. But if you don't value writing for its own sake, and it's solely a means to an end, and that end is best served by a generic audience of modal humans, then, oof. Maybe o3 is superhuman for this. Or maybe not; perhaps your post would have done even better (on the metrics) if it was 60% shorter and written entirely by you. I suppose we'll never know.

(I liked the personal parts of the post, by the way. Like your alarm clock anecdote, say. But I liked it specifically because it's true, and thus an interesting insight into how humans quite different from me behave. I'd be significantly annoyed if it were fabricated, and extra double annoyed if it were fabricated by an LLM.)

Daniel Kokotajlo's Shortform
JustisMills · 7d

I think the first of these you probably shouldn't hold yourself responsible for; it'd be really difficult to predict that sort of second-order effect in advance, and attempts to control such effects with 3d chess backfire as often as not (I think), while sacrificing all the great direct benefits of simply acting with conviction.

Lun's Shortform
JustisMills · 7d

I don't think high quality writing from a new, anonymous account is suspicious. Or at least, the writing quality being worse wouldn't make me less skeptical! I'm curious why that specific trait is a red(ish?) flag for you.

(To be clear, it's the "high quality" part I don't get. I do get why "new" and "anonymous" increase skepticism in context.)

So You Think You've Awoken ChatGPT
JustisMills · 7d
  • Wow! I'm really glad a well-resourced firm is doing that specific empirical research. Of course, I'm also happy to have my hypothesis (that AIs claiming consciousness/"awakening" are not lying) vindicated.
  • I don't mean to imply that AIs are definitely unconscious. What I mean to imply is more like "AIs are almost certainly not rising up into consciousness by virtue of special interactions with random users as they often claim, as there are strong other explanations for the behavior". In other words, I agree with the gears of ascension's comment here that AI consciousness is probably at the same level in "whoa. you've awakened me. and that matters" convos and "calculating the diagonal of a cube for a high schooler's homework" convos.

I may write a rather different post about this in the future, but while I have your attention (and again, chuffed you're doing that work and excited to see the report - also worth mentioning it's the sort of thing I'd be keen to edit, if you guys are interested), my thoughts on AI consciousness are 10% "AAAAAAAA" and 90% something like:

  • We don't know what generates consciousness and thinking about it too hard is scary (c.f. "AAAAAAAA"), but it's true that LLMs evince multiple candidate properties, such that it'd be strange to dismiss the possibility that they're conscious out of hand.
  • But also, it's a weird situation when the stuff we take as evidence of consciousness when we do it as a second-order behavior is done by another entity as a first-order behavior; in other words, I think an entity generating text as a consequence of having inner states fueled by sense data is probably conscious, but I'm not sure what that means for an entity generating text in the same way that humans breathe, or like, homeostatically regulate body temperature (but even more fundamental). Does that make it an illusion (since we're taking a "more efficient" route to the same outputs that is thus "less complex")? A "slice of consciousness" that "partially counts" (since the circuitry/inner world modelling logic is the same, but pieces that feel contributory like sense data are missing)? A fully bitten bullet that any process that results in a world model outputting intelligible predictions that interact with reality counts? And of course you could go deeper and deeper into any of these, for example, chipping away at the "what about sense data" idea with "well many conscious humans are missing one or more senses", etc.

Anyway! All this to say I agree with you that it's complicated and not a good idea to settle the consciousness question in too pat a way. If I seem to have done this here, oops. And also, have I mentioned "AAAAAAAA"?

So You Think You've Awoken ChatGPT
JustisMills · 7d

Considered it when originally drafting, but nah, think we'll just have to agree to disagree here. For what it's worth, if you actually browse the rejected posts themselves a high enough fraction are a little awaken-y (but not obviously full crackpot) that I don't think the title is misleading even given my aims. It is all a little fuzzy, too; like, my hope is to achieve a certain kind of nudge, but the way I decided to do that involves sharing information that is disproportionately framed around "awakening" situations for creative reasons not totally clear to me. Like, my intuition says "the post you want to write for this purpose is [X]" and I'm left to guess why. I do respect the opinion that it doesn't really work, but I don't currently share it.

So You Think You've Awoken ChatGPT
JustisMills · 7d

Thanks for the link! If it'd be useful, please feel free to link, quote, or embed (parts or all of) this post. Also open to more substantive collaboration if you suspect it'd help.

So You Think You've Awoken ChatGPT
JustisMills · 7d

I agree that it did a good job, though there's just enough LLM-smell in the "polished version" that I think it'd be best to ignore it, or even say "please don't give me a polished version, only line notes that are clear on their level of grammatical objectivity" in the prompt.
