I think those things can be generally interpreted as "trades" in the broadest sense. Sometimes trades of favour, reputation, or knowledge.
Of course, human-based entities are superintelligent in a different way than ASI probably will be, but I think that difference is irrelevant in many discussions involving ASI.
I think that while the analogy absolutely makes sense and is worth taking seriously, this is wrong. The main reason the analogy is worth taking seriously is that partial evidence is still generally better than no evidence at all; but the evidence here is partial because a corporation is ultimately still made of people, which means there are tons of values already etched into it from the get-go, characteristic ways it can fail at coordinating itself, and so on, which makes it a rather different case from an ASI.
If anything, I guess the argument would be "obviously aligning a corporation should be way easier than aligning an ASI, and look at our track record there!".
He mentions he's only just learned coding, so I guess he had the AI build the scaffolding. But the experiment itself seems like a pretty natural idea; he literally likens it to a King's council. I'm sure that once you have the concept, having an LLM code it is no big deal.
I think not passing off LLM text as your own words is common good manners for a number of reasons - including that you are taking responsibility for words you didn't write, and possibly didn't even read closely enough, so it's on you if someone reads too much into them. But it doesn't really require any assumptions about LLMs themselves, their theory of mind, etc. Nearly the same would apply to hiring a human ghostwriter to expand on your rough draft; it's just that this was never a problem until now, because ghostwriters cost a lot more than a few LLM tokens.
However, that plausible assumption has begun to waver since we had a curated post whose author admitted to generating it using Claude Opus 4.1 and then substantially editing the output.
TBF, "being a curated post on LW" doesn't preclude something from also being a mix-and-match of arguments already made by others. One of the most common criticisms of LW I've seen is that it's a community reinventing a lot of philosophical wheels (which personally I don't think is a great dunk; exploring and reinventing things for yourself is often the best way to engage with them at a deep level).
Thanks! I guess my original statement came off a bit too strong. What I meant is that while there is a frontier of trade-offs (maybe the GPUs' greater flexibility is worth a 2x energy cost?), I didn't expect the gap to be orders of magnitude. That's good enough for me, with the understanding that any such estimates will never be particularly accurate anyway; they just give us a rough idea of how much compute these companies are actually fielding. What they squeeze out of that compute will depend on a bunch of other details, so scale is the best we can guess.
I mean, we do this too! If you were doing a very boring, simple task, you would probably seek outlets for your mental energy (e.g. little additional self-imposed challenges, humming, fiddling, etc.).
Well, within reason that can happen - I'm not saying the metric is going to be perfect. But it's probably a decent first-order approximation, because that logic can't stretch forever. If instead of a factor of 2 it were a factor of 10, the trade-off would probably not be worth it.
This is an argument from absurdity against infinite utility functions, but not quite against unbounded ones.
Can you elaborate on the practical distinction? My impression is that if your utility function is unbounded, then you should always be able to devise paths that lead to infinite expected utility - even just by stacking an infinite number of finite utility gains. So I don't know if the difference matters that much.
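To make concrete what I mean by stacking finite gains (a toy construction of my own, essentially the St. Petersburg lottery, not anything from the original post): take an unbounded utility function where every individual outcome is still finite, say

```latex
% Unbounded utility: every outcome n has finite utility, but values grow without limit.
U(n) = 2^{n}, \qquad n \in \mathbb{N}

% A lottery assigning outcome n probability 2^{-n} then has infinite expected utility,
% even though no single outcome is worth infinite utility:
\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty
```

So an agent with an unbounded (but everywhere-finite) utility function can still end up chasing infinite expected utility, which is why I'm unsure the bounded/unbounded distinction buys much in practice.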
Part of the reason this would be beneficial is also that killing all mosquitoes is really hard and could have side effects for us (like loss of pollination). One could hope that humans might have a similarly niche usefulness to an ASI despite the difference in power, but there's no guarantee of that.