Following up to add:
Just said to someone that I would by default read anything they wrote to me in a positive light, and that if they wanted to be mean to me in text, they should put '(mean)' after the statement.
Then realized that, if I had to put '(mean)' after everything I wrote on the internet that I wanted to be read as slightly abrupt or confrontational, I would definitely be abrupt and confrontational on the internet less often.
I am somewhat more confrontational than I endorse, and having to actually say to myself and the world that I was intending to be harsh, rather than simply demonstrating it, would lead to me being harsh somewhat less often.
I do basically know when I'm being confrontational in public, and am Doing It On Purpose, and also expect others to know it, and almost never say "I wasn't being confrontational" when I was being confrontational. And yet somehow, having to signal it even more overtly than it already is (that is to say, very overtly) would encourage me to tone it down.
In person, I even sometimes say 'I am being difficult in a way that I endorse/feels correct/feels important' and just go on being difficult, and considering in advance that I'm about to do that doesn't discourage me from being difficult.
Might start running the internal simulation "you have to put a '(mean)' tag on something when you're intending to be a little mean on the internet" to see how it changes things.
(I am not going to defend the position 'sometimes it is correct to be a little mean'; I'm more interested in the other phenomenon.)
Not to worry; by the end of the decade we'll be able to neatly point to either trillionaires or billionaires, enabling specificity without much shift in vocabulary.
I think the populist and establishment wings of each side are discrete entities: we have an institutional right (e.g. Cheney, Romney), an institutional left (e.g. Obama, the Clintons), a populist right (Trump, MTG, DeSantis), and a populist left (almost only Bernie, but AOC, Zohran, Ilhan Omar, etc. are directionally this thing).
Populist-left citizens do things like assassination attempts, and both the institutional and populist right blame the institutional left (the more plausible versions of the accusation look like 'your extreme rhetoric emboldened these crazies').
Populist-right politicians do things that are directionally authoritarian, and both the institutional and populist left blame the institutional right (because they ceded power to Trump, either deliberately or by accident).
Things like Ray's post seem to be advocating for the institutional wings of the two parties to come together electorally and beat out the populists on either side.
I take your post to be somewhat conflating the institutional and populist left.
In conversations about this that I've seen, the crux is usually:
Do you expect greater capability increases per dollar from [continued scaling of ~the current* techniques] or from some [scaffolding/orchestration scheme/etc.]?
The latter just isn't very dollar-efficient, so I think we'd have to see the existing [ways to spend money to get better performance] get more expensive / hit a serious wall before sufficient resources are put into this kind of approach. It may be cheap to try, but verifying performance on relevant tasks and iterating on the design gets really expensive really quickly. On the scale-to-schlep spectrum, this is closer to schlep. I think you're right that something like this could be important at some point in the future, conditional on much less efficient returns from other methods.
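(To make 'expensive really quickly' concrete, a toy back-of-envelope sketch; every number in it is hypothetical and chosen only to show the multiplicative structure of the cost, not to describe any real lab or model.)

```python
# Toy back-of-envelope: why iterating on a scaffolding/orchestration design
# gets expensive fast. All numbers are hypothetical illustrations.
design_iterations = 50        # scaffold variants tried during design search
eval_tasks = 200              # relevant tasks used to verify performance
runs_per_task = 5             # repeats needed for a stable success rate
tokens_per_run = 2_000_000    # long-horizon agent runs burn a lot of tokens
dollars_per_million_tokens = 10

eval_sweep_cost = (eval_tasks * runs_per_task * tokens_per_run / 1e6
                   * dollars_per_million_tokens)
design_search_cost = design_iterations * eval_sweep_cost

print(f"one verification sweep: ${eval_sweep_cost:,.0f}")     # $20,000
print(f"full design search:     ${design_search_cost:,.0f}")  # $1,000,000
```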
This is a bit of a side note, but I think your human analogy for time horizons doesn't quite work, as Eli said. The question is 'how many coherent and purposeful person-minute-equivalents of work can an LLM execute before it fails n percent of the time?' Many person-years of human labor can be coherently oriented toward a single outcome (whether one or many people are involved). That the humans get sleepy or distracted in between is an efficiency concern, not a coherence concern; it affects the rate at which the work gets done, but it doesn't put an upper bound on the total amount of purposeful labor that can hypothetically be directed, since humans just pick up where they left off, pursuing the same goals for years and years at a time, while LLMs seem to lose the plot once they've started to nod off.
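(For concreteness, a minimal sketch of how that question becomes a measurable quantity: fit success probability against task length, then read off the length at which failure hits n percent. The data, the logistic-in-log-length functional form, and all numbers below are my own illustrative assumptions, not anything from the thread.)

```python
import math

# Hypothetical outcomes: (task length in minutes, 1 = succeeded, 0 = failed).
observations = [(1, 1), (2, 1), (4, 1), (8, 1), (15, 1), (15, 0),
                (30, 1), (30, 0), (60, 1), (60, 0), (120, 0), (240, 0)]

def fit_horizon(obs, fail_rate=0.5, steps=20_000, lr=0.1):
    """Fit P(success) = sigmoid(a + b * log(minutes)) by gradient ascent on
    the log-likelihood, then solve for the task length at which the model
    fails `fail_rate` of the time."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for minutes, y in obs:
            x = math.log(minutes)
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += y - p          # d(log-likelihood)/da for this point
            grad_b += (y - p) * x    # d(log-likelihood)/db for this point
        a += lr * grad_a / len(obs)
        b += lr * grad_b / len(obs)
    # sigmoid(a + b*log(m)) = 1 - fail_rate  =>  log(m) = (logit - a) / b
    target_logit = math.log((1 - fail_rate) / fail_rate)
    return math.exp((target_logit - a) / b)

# Roughly the 30-minute mark on this toy data.
print(f"~50%-failure time horizon: {fit_horizon(observations):.0f} minutes")
```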
The Coefficient technical grant-making team should pitch some people on doing this and just Make It Happen (although I'm obviously ignorant of their other priorities).
Well, yeah, AI developers will maybe succeed at shaping the AI along a bunch of specific dimensions. But they will not succeed at exhaustively shaping the AI along all the dimensions that turn out to matter.
Now, what say you to this clever rejoinder?
Putting '(mean)' at the end of a statement is the exact mechanism of saying 'no' that is (jokingly!) offered in this scenario, to (again, jokingly) address the exact thing you are describing, with the exact post you have linked as the referent that appeared in my mind as I made the joke! The joke being 'that is obviously not a sufficient mechanism to solve the problem of offering a real valence, and it thereby robs my interlocutor of some agency/expressive power.' You are making the same point I am making as though you are correcting me.
You are repeating my own joke back to me as if you originated it, and ignoring the substantive reflective point that I thought might be of value (which does not itself importantly hinge on the original Garrabrant-adjacent context).