dr_s (LessWrong profile)

Comments
Sorted by Newest
No77e's Shortform
dr_s · 13h

He mentions he's just learned coding, so I guess he had the AI build the scaffolding. But the experiment itself seems like a pretty natural idea; he literally likens it to a king's council. I'm sure that once you have the concept, having an LLM code it up is no big deal.

LLM-generated text is not testimony
dr_s · 13h

I think not passing off LLM text as your own words is common good manners for a number of reasons - including that you are taking responsibility for words you didn't write and possibly didn't even read closely enough, so it's on you if someone reads too much into them. But it doesn't really require any assumptions about LLMs themselves, their theory of mind, etc. Nearly the same would apply to hiring a human ghostwriter to expand on your rough draft; it's just that this has never been a problem until now, because ghostwriters cost a lot more than a few LLM tokens.

LLM-generated text is not testimony
dr_s · 13h

> However, the plausible assumption has begun to tremble since we had a curated post whose author admitted to generating it by using Claude Opus 4.1 and substantially editing the output.

TBF "being a curated post on LW" doesn't exclude anything from being also a mix and match of arguments already said by others. One of the most common criticisms of LW I've seen is that it's a community reinventing a lot of already said philosophical wheels (which personally I don't think is a great dunk; exploring and reinventing things for yourself is often the best way to engage with them at a deep level).

Musings on Reported Cost of Compute (Oct 2025)
dr_s · 4d

Thanks! I guess my original statement came off a bit too strong, but what I meant is that while there is a frontier of trade-offs (maybe the GPUs' greater flexibility is worth the 2x energy cost?), I didn't expect the gap to be orders of magnitude. That's good enough for me, with the understanding that any such estimates will never be particularly accurate anyway and just give us a rough idea of how much compute these companies are actually fielding. What they squeeze out of that compute will depend on a bunch of other details anyway, so scale is the best we can estimate.

LLM robots can't pass butter (and they are having an existential crisis about it)
dr_s · 4d

I mean, we do this too! If you were doing a very boring, simple task, you would probably seek outlets for your mental energy (e.g. little additional self-imposed challenges, humming, fiddling, etc.).

Musings on Reported Cost of Compute (Oct 2025)
dr_s · 4d

Well, within reason that can happen - I'm not saying the metric is going to be perfect. But it's probably a decent first-order approximation, because that logic can't stretch forever. If instead of a factor of 2 it were a factor of 10, the trade-off would probably not be worth it.
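As a toy version of that arithmetic (the numbers here are purely illustrative): suppose the more flexible hardware buys you a utilization gain $g$ at an energy-cost factor $c$, so the trade is worth it roughly when $g/c > 1$. With, say, $g = 3$:

$$\frac{g}{c} = \frac{3}{2} = 1.5 > 1 \quad \text{(worth it)}, \qquad \frac{g}{c} = \frac{3}{10} = 0.3 < 1 \quad \text{(not worth it)}.$$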

Uncommon Utilitarianism #3: Bounded Utility Functions
dr_s · 5d

> This is an argument from absurdity against infinite utility functions, but not quite against unbounded ones.

Can you elaborate on the practical distinction? My impression is that if your utility function is unbounded, then you can always devise paths that lead to infinite utility - even just via an infinite sequence of finite utility gains. So I don't know if the difference matters that much.
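As a sketch of the kind of construction I have in mind (St. Petersburg-style; the specific payoffs are just for illustration): if $u$ is unbounded, you can pick outcomes $x_n$ with $u(x_n) = 2^n$ and consider the lottery that awards $x_n$ with probability $2^{-n}$. Then

$$\mathbb{E}[u] = \sum_{n=1}^{\infty} 2^{-n}\, u(x_n) = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,$$

so unboundedness alone already seems to buy you the infinite-expected-utility pathologies.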

On Fleshling Safety: A Debate by Klurl and Trapaucius.
dr_s · 6d

A dialogue that references Stanislaw Lem's Cyberiad, no less. But honestly, Lem was a lot more concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).

On Fleshling Safety: A Debate by Klurl and Trapaucius.
dr_s · 6d

The counterpoint to that is that as the scale of humanity's power grows, so does the scale of those bad events. Many bad events were not in fact prevented. Wars were lost, famines happened, empires fell. But none of those were world-ending bad events because we simply lacked the ability to do anything that big; even our mistakes couldn't possibly be big enough. And that's changed.

Trying to understand Hanson's Cultural Drift argument
dr_s · 9d

That's not an outside view, though - we think losing our culture would be bad because we value the things it preserves, and our valuing those things is one and the same as our being part of that culture.

A broader argument is that because this culture is so global, its fall would likely be widely destructive.

Posts (karma · title · age · comments)

6 · dr_s's Shortform · 6mo · 5
12 · An N=1 observational study on interpretability of Natural General Intelligence (NGI) · 1mo · 3
51 · A quantum equivalent to Bayes' rule · 2mo · 17
16 · Great responsibility requires great power · 2mo · 0
36 · Plato's Trolley · 3mo · 11
24 · The absent-minded variations · 6mo · 13
25 · Review: The Lathe of Heaven · 9mo · 1
10 · Ethics and prospects of AI related jobs? [Question] · 1y · 8
31 · Good Bings copy, great Bings steal · 2y · 6
56 · The predictive power of dissipative adaptation · 2y · 14