johnswentworth

Comments
johnswentworth's Shortform
johnswentworth · 1d

Agreed, that's basically how I use them.

johnswentworth's Shortform
johnswentworth · 1d

I was a relatively late adopter of the smartphone: I was still using a flip phone until around 2015 or 2016. From 2013 to early 2015, I worked as a data scientist at a startup whose product was a mobile social media app; my determination to avoid smartphones became something of a joke there.

Even back then, developers talked about UI design for smartphones in terms of attention. Like, the core "advantages" of the smartphone were the "ability to present timely information" (i.e. interrupt/distract you) and always being on hand. Also it was small, so anything too complicated to fit in like three words and one icon was not going to fly.

... and, like, man, that sure did not make me want to buy a smartphone. Even today, I view my phone as a demon which will try to suck away my attention if I let my guard down. I have zero social media apps on it, and no app gets permission to push notifications when it's not open, except vanilla phone calls and SMS.

People would sometimes say something like "John, you should really get a smartphone, you'll fall behind without one" and my gut response was roughly "No, I'm staying in place, and the rest of you are moving backwards".

And in hindsight, boy howdy do I endorse that attitude! Past John's gut was right on the money with that one.

I notice that I have an extremely similar gut feeling about LLMs today. Like, when I look at the people who are relatively early adopters, making relatively heavy use of LLMs... I do not feel like I'll fall behind if I don't leverage them more. I feel like the people using them a lot are mostly moving backwards, and I'm staying in place.

Habryka's Shortform Feed
johnswentworth · 1d

> We are absolutely, with no ambiguity, in the "most rapid adoptions of any technology in US history branch". Every single corporation in the world is trying to adopt AI into their products.

Disagree with your judgement on this one. Agree that everyone is trying to adopt AI into their products, but that's extremely and importantly different from actual successful adoption. It's especially importantly different because part of the core value proposition of general AI is that you're not supposed to need to retool the environment around it in order to use it.

Habryka's Shortform Feed
johnswentworth · 1d

> reasoning models [...] seem like a bigger deal than GPT-5 to me.

Strong disagree. Reasoning models do not make every other trick work better, the way a better foundation model does. (Also, I'm somewhat skeptical that reasoning models are actually importantly better at all; for the sorts of things we've tried, they seem shit in basically the same ways and to roughly the same extent as non-reasoning models. But I'm not sure how cruxy that is.)

Qualitatively, my own update from OpenAI releasing o1/o3 was (and still is) "Altman realized he couldn't get a non-disappointing new base model out by December 2024, so he needed something splashy and distracting to keep the investor money fueling his unsustainable spend. So he decided to release the reasoning models, along with the usual talking points of mostly-bullshit evals improving, and hope nobody notices for a while that reasoning models are just not that big a deal in the long run."

> Also, I don't believe you that anyone was talking in late 2023 that GPT-5 was coming out in a few months [...] End of 2024 would have been a quite aggressive prediction even just on reference class forecasting grounds

When David and I were doing some planning in May 2024, we checked the prediction markets, and at that time the median estimate for GPT-5's release was December 2024.

johnswentworth's Shortform
johnswentworth · 2d

> I've been working on getting more out of lower percentile conversations. The explanation is fairly woo-ey but might also relate to your interest around flirting.

I'd be interested to hear that.

johnswentworth's Shortform
johnswentworth · 3d

I have done it a few times, found it quite interesting, and would happily do it again. It feels like the sort of thing which is interesting mainly because I learned a lot, but marginal learnings would likely fall off quickly, and I don't know how interesting it would be after doing it a few more times.

johnswentworth's Shortform
johnswentworth · 4d

> Like, when I hear you say "your instinctive plan-evaluator may end up with a global negative bias" I'm like, hm, why not just say "if you notice everything feels subtly heavier and like the world has metaphorically lost color"?

Because everything did not feel subtly heavier or like the world had metaphorically lost color. It was just, specifically, that most nontrivial things I considered doing felt like they'd suck somehow, or maybe that my attention was disproportionately drawn to the ways in which they might suck.

And to be clear, "plan predictor predicts failure" was not a pattern of verbal thought I noticed, it's my verbal description of the things I felt on a non-verbal level. Like, there is a non-verbal part of my mind which spits out various feelings when I consider doing different things, and that part had a global negative bias in the feelings it spit out.

I use this sort of semitechnical language because it allows a more accurate description of my underlying feelings and mental motions than vague poetry would, not because it's a crutch.

johnswentworth's Shortform
johnswentworth · 4d

> Do group conversations count?

Yes.

johnswentworth's Shortform
johnswentworth · 5d

Question I'd like to hear people's takes on: what are some things which are about the same amount of fun for you as (a) a median casual conversation (e.g. at a party), or (b) a top-10% casual conversation, or (c) the most fun conversations you've ever had? In all cases I'm asking about how fun the conversation itself was, not about value which was downstream of the conversation (like e.g. a conversation with someone who later funded your work).

For instance, for me, a median conversation is about as fun as watching a mediocre video on YouTube or reading a mediocre blogpost. A top-10% conversation is about as fun as watching a generic-but-fun movie, like e.g. a Jason Statham action flick. In both cases, the conversation drains more energy than the equal-fun alternative. I have probably had at most a single-digit number of conversations in my entire life which were as fun in their own right as e.g. a median night out dancing, or a median escape room, or median sex, or a median cabaret show. Maybe zero, unsure.


The rest of this is context on why I'm asking, which you don't necessarily need to read in order to answer the question...

So I recently had a shortform asking "hey, that thing where people send mutually escalating signals of sexual intent during a conversation, is that a thing which typical people actually do?" and a horde of people descended to say "YES, obviously, how the fuck have you not noticed that???". So naturally I now wonder exactly how this apparently-obvious-to-everyone-else thing has remained approximately invisible to me, and what else I might be missing nearby. What exactly is the shape of my blindspot here?

And a leading hypothesis for the shape of the blindspot is that I generally find casual conversation way more boring than most people do, and therefore have not noticed some things which happen during casual conversation.

Some supporting evidence for this:

  • Back in fourth grade a school psychologist observed me for reasons, and in her report said that I would sit alone at lunch with a book, and if anyone came over to chat I would put the book down and talk to them and generally seemed friendly in normal ways, and then once they left I would pick the book back up. I certainly recall finding the books more interesting than conversation with my classmates.
  • Notably, plenty of people have said that I am pretty good at casual conversation, at least when I'm bothering. (The people who know me best eventually realize that I have a mental switch for this, and can intentionally toggle it.) I can make it a relatively fun conversation. But, like, I personally still find it kind of mid as far as entertainment goes.
  • When I think of conversations which stand out as really great for me, they're cases where either I learned some technical thing I didn't previously know, or they led into more fun things later (and most of the fun was from the later things). I can drive the sort of playful conversations which IIUC lots of people like, but... they don't stand out as especially fun in my recollection. Fun relative to other conversation, sure, but conversation just isn't a particularly fun medium.

So anyway, I'm trying to get a bead on whether this hypothesis is correct, or whether I have a differently-shaped blindspot, or whether I'm missing something else entirely. Thank you all in advance for your data!

johnswentworth's Shortform
johnswentworth · 5d

In response to the Wizard Power post, Garrett and David were like "Y'know, there's this thing where rationalists get depression, but it doesn't present like normal depression because they have the mental habits to e.g. notice that their emotions are not reality. It sounds like you have that."

... and in hindsight I think they were totally correct.

Here I'm going to spell out what it felt/feels like from inside my head, my model of where it comes from, and some speculation about how this relates to more typical presentations of depression.

Core thing that's going on: on a gut level, I systematically didn't anticipate that things would be fun, or that things I did would work, etc. When my instinct-level plan-evaluator looked at my own plans, it expected poor results.

Some things which this is importantly different from:

  • Always feeling sad
  • Things which used to make me happy not making me happy
  • Not having energy to do anything

... but importantly, the core thing is easy to confuse with all three of those. For instance, my intuitive plan-evaluator predicted that things which used to make me happy would not make me happy (like e.g. dancing), but if I actually did the things they still made me happy. (And of course I noticed that pattern and accounted for it, which is how "rationalist depression" ends up different from normal depression; the model here is that most people would not notice their own emotional-level predictor being systematically wrong.) Little felt promising or motivating, but I could still consciously evaluate that a plan was a good idea regardless of what it felt like, and then do it, overriding my broken intuitive-level plan-evaluator.

That immediately suggests a model of what causes this sort of problem.

The obvious way a brain would end up in such a state is if a bunch of very salient plans all fail around the same time, especially if one didn't anticipate the failures and doesn't understand why they happened. Then a natural update for the brain to make is "huh, looks like the things I do just systematically don't work, don't make me happy, etc; let's update predictions on that going forward". And indeed, around the time this depression kicked in, David and I had a couple of significant research projects which basically failed for reasons we still don't understand, and I went through a breakup of a long relationship (and then dove into the dating market, which is itself an excellent source of things not working and not knowing why), and my multi-year investments in training new researchers failed to pay off for reasons I still don't fully understand. All of these things were highly salient, and I didn't have anything comparably-salient going on which went well.

So I guess some takeaways are:

  • If a bunch of salient plans fail around the same time for reasons you don't understand, your instinctive plan-evaluator may end up with a global negative bias.
  • If you notice that, maybe try an antidepressant. Bupropion has been helpful for me so far, though it's definitely not the right tool for everyone (especially bad if you're a relatively anxious person; I am the opposite of anxious).
Sequences

From Atoms To Agents
"Why Not Just..."
Basic Foundations for Agent Models
Framing Practicum
Gears Which Turn The World
Abstraction 2020
Gears of Aging
Model Comparison
Posts

Fictional Thinking and Real Thinking · 54 points · 13d · 11 comments
The Value Proposition of Romantic Relationships · 192 points · 1mo · 37 comments
That's Not How Epigenetic Modifications Work · 67 points · 1mo · 12 comments
Orienting Toward Wizard Power · 532 points · 1mo · 142 comments
$500 + $500 Bounty Problem: Does An (Approximately) Deterministic Maximal Redund Always Exist? · 73 points · 2mo · 16 comments
Misrepresentation as a Barrier for Interp (Part I) · 113 points · 2mo · 12 comments
$500 Bounty Problem: Are (Approximately) Deterministic Natural Latents All You Need? · 89 points · 2mo · 17 comments
So You Want To Make Marginal Progress... · 294 points · 4mo · 42 comments
Instrumental Goals Are A Different And Friendlier Kind Of Thing Than Terminal Goals · 181 points · 5mo · 61 comments
The Case Against AI Control Research · 356 points · 5mo · 81 comments