johnswentworth
Loki zen's Shortform
johnswentworth · 5h

I'm not sure what it would even mean to teach something substantive about ML/AI to someone who lacks the basic concepts of programming. Like, if someone with zero programming experience and median-high-school level math background asked me how to learn more about ML/AI, I would say "you lack the foundations to achieve any substantive understanding at all, go do a programming 101 course and some calculus at a bare minimum".

For instance, I could imagine giving such a person a useful and accurate visual explanation of how modern ML works, but without some programming experience they're going to go around e.g. imagining ghosts in the machine, because that's a typical mistake people make when they have zero programming experience. And a typical ML expert trying to give an explain-like-I'm-5 overview wouldn't even think to address a confusion that basic. I'd guess there are quite a few things like that, as is typical. Inferential distances are not short.

JustisMills's Shortform
johnswentworth · 2d

The base rate for acute psychosis is so high

Do you happen to know the number? Or is this a vibe claim?

Aspiring to Great Solstice Speeches: Mostly-Obvious Advice
johnswentworth · 3d

Man, I have conflicting feelings about this post. This entire approach to speeches is... probably the right choice for someone with typical public speaking skills, but puts a ceiling on how good it can get.

For comparison, here is my general approach for basically all of my public speaking:

  • Write the speech/presentation/whatever
  • Run through it many times mentally in the course of writing, maybe once or twice out loud
  • As with any good plan, throw it away and then go do what makes sense in the moment (which usually mostly means following the outline and using load-bearing word choices of the written version, but definitely not matching every little detail of the plan)

The whole strategy of "write speech, practice doing that exact speech, then deliver it exactly as practiced" leaves no room to match the audience's energy on the fly. It rules out most forms of audience interaction as part of the speaking, because real audience interaction introduces the possibility of surprises. When I watch other people use the "follow the plan" style, it feels like it's not engaging with the audience (because, well, it isn't).

And the entire concept of holding a written script while on-stage would just be complete anathema to engaging speaking, at least in the style I usually use. You're stuck at the podium, which immediately rules out basically half of good public speaking: you can't use most really expressive body language, and can't use space and movement to communicate context switches or direct attention flow.

... but the flip side is that the style I usually rely on requires being completely comfortable on stage, and requires a deep understanding of the plan such that one can generalize off-distribution as surprises come up. It would be totally nonviable for lots of people.

Why is LW not about winning?
johnswentworth · 3d

If you want to solve alignment and want to be efficient about it, it seems obvious that there are better strategies than researching the problem yourself, like don't spend 3+ years on a PhD (cognitive rationality) but instead get 10 other people to work on the issue (winning rationality). And that 10x's your efficiency already.

Alas, approximately every single person entering the field has either that idea, or the similar idea of getting thousands of AIs to work on the issue instead of researching it themselves. We have thus ended up with a field in which nearly everyone is hoping that somebody else is going to solve the hard parts, and the already-small set of people who are just directly trying to solve it has, if anything, shrunk somewhat.

It turns out that, no, hiring lots of other people is not actually how you win when the problem is hard.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
johnswentworth · 5d

It sounds like both the study authors themselves and many of the comments are trying to spin this study in the narrowest possible way for some reason, so I'm gonna go ahead and make the obvious claim: this result in fact generalizes pretty well. Beyond the most incompetent programmers working on the most standard cookie-cutter tasks with the least necessary context, AI is more likely to slow developers down than speed them up. When this happens, the developers themselves typically think they've been sped up, and their brains are lying to them.

And the obvious action-relevant takeaway is: if you think AI is speeding up your development, you should take a very close and very skeptical look at why you believe that.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
johnswentworth · 5d

Apologies for the impoliteness, but... man, it sure sounds like you're searching for reasons to dismiss the study results. Which sure is a red flag when the study results basically say "your remembered experience is that AI sped you up, and your remembered experience is unambiguously wrong about that".

Like, look, when someone comes along with a nice clean study showing that your own brain is lying to you, that has got to be one of the worst possible times to go looking for reasons to dismiss the study.

tlevin's Shortform
johnswentworth · 5d

Their top pick for best air conditioner $50 off

Y'know, I got one of those same u-shaped Midea air conditioners, two or three years ago. Just a few weeks ago I got a notice that it was recalled. Poor water drainage, which tended to cause mold (and indeed I encountered that problem). Though the linked one says "updated model", which makes me suspect that it's deeply discounted because the market is flooded with recalled air conditioners which were modified to fix the problem.

... which sure does raise some questions about exactly what methodology led Wirecutter to make it a top pick.

On thinking about AI risks concretely
johnswentworth · 6d

Speaking for myself: I don't talk about this topic because my answers route through things which I do not want in the memetic mix, do not want to upweight in an LLM's training distribution, and do not want more people thinking about right now.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
johnswentworth · 6d

Agreed, I don't think it's actually that rare. The rare part is the common knowledge and normalization, which makes it so much easier to raise as a hypothesis in the heat of the moment.

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
johnswentworth · 6d

If you want a post explaining the same concepts to a different audience, then go write a post explaining the same concepts to a different audience. I am well aware of the tradeoffs I chose here. I wrote the post for a specific purpose, and the tradeoffs chosen were correct for that purpose.
