I think not passing off LLM text as your own words is common good manners for a number of reasons - including that you are taking responsibility for words you didn't write and possibly didn't even read closely enough, so it's on you if someone reads too much into them. But it doesn't really require any assumptions about the LLMs themselves, their theory of mind, etc. Nearly the same would apply to hiring a human ghostwriter to expand on your rough draft; it just has never been a problem until now because ghostwriters cost a lot more than a few LLM tokens.
However, that plausible assumption has started to look shaky since we had a curated post whose author admitted to generating it with Claude Opus 4.1 and then substantially editing the output.
TBF "being a curated post on LW" doesn't exclude anything from being also a mix and match of arguments already said by others. One of the most common criticisms of LW I've seen is that it's a community reinventing a lot of already said philosophical wheels (which personally I don't think is a great dunk; exploring and reinventing things for yourself is often the best way to engage with them at a deep level).
Thanks! I guess my original statement came off a bit too strong, but what I meant is that while there is a frontier of trade-offs (maybe the GPUs' greater flexibility is worth the 2x energy cost?), I didn't expect the gap to be orders of magnitude. That's good enough for me, with the understanding that any such estimates will never be particularly accurate anyway and just give us a rough idea of how much compute these companies are actually fielding. What they squeeze out of that will depend on a bunch of other details anyway, so scale is the best we can estimate.
I mean, we do this too! Like, if you were doing a very boring, simple task you would probably seek outlets for your mental energy (e.g. little additional self-imposed challenges, humming, fiddling, etc.).
Well, within reason that can happen - I am not saying the metric is going to be perfect. But it's probably a decent first-order approximation, because that logic can't stretch forever. If instead of a factor of 2 it were a factor of 10, the trade-off would probably not be worth it.
This is an argument from absurdity against infinite utility functions, but not quite against unbounded ones.
Can you elaborate on the practical distinction? My impression is that if your utility function is unbounded, then you should always be able to devise paths that lead to infinite utility - even if only by stringing together an infinite series of finite utility gains. So I don't know if the difference matters that much.
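To make concrete what I have in mind (a toy example of my own, not something from the original argument): take outcomes $n = 1, 2, 3, \dots$ with utilities $u(n) = 2^n$, so $u$ is unbounded but finite on every single outcome. A St. Petersburg-style lottery that assigns probability $2^{-n}$ to outcome $n$ then has

\[
\mathbb{E}[u] \;=\; \sum_{n=1}^{\infty} 2^{-n}\, u(n) \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty ,
\]

i.e. the unboundedness alone already lets you construct gambles with infinite expected utility, without any single outcome being infinitely valuable.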
A dialogue that references Stanisław Lem's The Cyberiad, no less. But honestly Lem was a lot more concise in making his points. I agree this is probably not very relevant to any discourse at this point (especially here on LW, where everyone would be familiar with the arguments anyway).
The counterpoint to that is that as the scale of humanity's power grows, so does the scale of those bad events. Many bad events were not in fact prevented. Wars were lost, famines happened, empires fell. But none of those were world-ending bad events because we simply lacked the ability to do anything that big; even our mistakes couldn't possibly be big enough. And that's changed.
That's not an outside view though - we think losing our culture would be bad because we value things it preserves, and our valuing those things is one and the same as our being part of that culture.
A broader argument is that, since this culture is so global, its fall would likely be destructive on a similarly global scale.
He mentions he's only just learned to code, so I guess he had the AI build the scaffolding. But the experiment itself seems like a pretty natural idea; he literally likens it to a King's council. I'm sure that once you have the concept, having an LLM code it up is no big deal.