dr_s
Comments
Comparative advantage & AI
dr_s · 5h

Part of the reason why this would be beneficial is also that killing all mosquitoes is really hard and could have side effects for us (like loss of pollination). One could hope that maybe humans would have similar niche usefulness to the ASI despite the difference in power, but it's not a guarantee.

Nina Panickssery's Shortform
dr_s · 14h

I think those things can be generally interpreted as "trades" in the broadest sense. Sometimes trades of favour, reputation, or knowledge.

Reason About Intelligence, Not AI
dr_s · 3d

> Of course, human-based entities are superintelligent in a different way than ASI probably will be, but I think that difference is irrelevant in many discussions involving ASI.

I think this is wrong, even though the analogy absolutely makes sense and is worth taking seriously. The main reason it's worth taking seriously is that partial evidence is still generally better than no evidence at all. But the evidence is partial: the fact that a corporation is ultimately still made of people means there are tons of values etched into it from the get-go, ways it can fail at coordinating itself, and so on, which makes it a rather different case from an ASI.

If anything, I guess the argument would be "obviously aligning a corporation should be way easier than aligning an ASI, and look at our track record there!".

No77e's Shortform
dr_s · 4d

He mentions he's only just learned coding, so I guess he had the AI build the scaffolding. But the experiment itself seems like a pretty natural idea; he literally likens it to a King's council. I'm sure that once you have the concept, having an LLM code it is no big deal.

LLM-generated text is not testimony
dr_s · 4d

I think not passing off LLM text as your own words is common good manners for a number of reasons - including that you are taking responsibility for words you didn't write, and possibly didn't even read in enough depth, so it's on you if someone reads too much into them. But this doesn't really require any assumptions about LLMs themselves, their theory of mind, etc. Nearly the same would apply to hiring a human ghostwriter to expand on your rough draft; it's just that this has never been a problem until now, because ghostwriters cost a lot more than a few LLM tokens.

LLM-generated text is not testimony
dr_s · 4d

> However, the plausible assumption has begun to tremble since we had a curated post whose author admitted to generating it by using Claude Opus 4.1 and substantially editing the output.

TBF, "being a curated post on LW" doesn't prevent something from also being a mix-and-match of arguments already made by others. One of the most common criticisms of LW I've seen is that it's a community reinventing a lot of already-invented philosophical wheels (which personally I don't think is a great dunk; exploring and reinventing things for yourself is often the best way to engage with them at a deep level).

Musings on Reported Cost of Compute (Oct 2025)
dr_s · 7d

Thanks! I guess my original statement came off a bit too strong. What I meant is that while there is a frontier of trade-offs (maybe the GPUs' greater flexibility is worth the 2x energy cost?), I didn't expect the gap to be orders of magnitude. That's good enough for me, with the understanding that any such estimates will never be particularly accurate anyway and just give us a rough idea of how much compute these companies are actually fielding. What they squeeze out of that compute will depend on a bunch of other details anyway, so scale is the best we can guess at.

LLM robots can't pass butter (and they are having an existential crisis about it)
dr_s · 7d

I mean, we do this too! If you were doing a very boring, simple task, you would probably seek outlets for your mental energy (e.g. little additional self-imposed challenges, humming, fiddling, etc.).

Musings on Reported Cost of Compute (Oct 2025)
dr_s · 7d

Well, within reason that can happen - I'm not saying the metric is going to be perfect. But it's probably a decent first-order approximation, because that logic can't stretch forever: if instead of a factor of 2 it were a factor of 10, the trade-off would probably not be worth it.

Uncommon Utilitarianism #3: Bounded Utility Functions
dr_s · 8d

> This is an argument from absurdity against infinite utility functions, but not quite against unbounded ones.

Can you elaborate on the practical distinction? My impression is that if your utility function is unbounded, you should always be able to devise paths that lead to infinite utility, even just through an infinite sequence of finite utility gains. So I don't know if the difference matters that much.
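To make the intuition concrete, here is a toy example of my own (not from the original post): with an unbounded utility function, an infinite sequence of finite gains can diverge, while a bounded function forces the same kind of sequence to converge below its bound.

```latex
% Unbounded utility: each step adds a finite gain of 1, yet the total diverges.
\[
  U(x_n) = n
  \quad\Longrightarrow\quad
  \lim_{n\to\infty} U(x_n) = \infty.
\]
% Bounded utility (bound = 1): each step still adds a finite positive gain,
% but the gains shrink and the total stays strictly below the bound.
\[
  U(x_n) = 1 - 2^{-n}
  \quad\Longrightarrow\quad
  U(x_n) < 1 \ \ \text{for all } n,
  \qquad
  \lim_{n\to\infty} U(x_n) = 1.
\]
```

So "infinitely many finite gains" only adds up to infinite utility if the function is unbounded; boundedness caps the total even over an infinite sequence of improvements.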

Posts

12 · An N=1 observational study on interpretability of Natural General Intelligence (NGI) · 1mo · 3 comments
51 · A quantum equivalent to Bayes' rule · 2mo · 17 comments
16 · Great responsibility requires great power · 2mo · 0 comments
36 · Plato's Trolley · 4mo · 11 comments
24 · The absent-minded variations · 6mo · 13 comments
6 · dr_s's Shortform · 6mo · 5 comments
25 · Review: The Lathe of Heaven · 9mo · 1 comment
10 · Ethics and prospects of AI related jobs? [Q] · 1y · 8 comments
31 · Good Bings copy, great Bings steal · 2y · 6 comments
56 · The predictive power of dissipative adaptation · 2y · 14 comments