I think the populist and institutional wings of each side are discrete entities: we have an institutional right (e.g. Cheney, Romney), an institutional left (e.g. Obama, the Clintons), a populist right (Trump, MTG, DeSantis), and a populist left (almost only Bernie, though AOC, Zohran, Ilhan Omar, etc. are directionally this thing).
Citizens of the populist left do things like assassination attempts, and both the institutional and populist right blame the institutional left (the more plausible versions of what they say look like 'your extreme rhetoric emboldened these crazies').
Politicians of the populist right do things that are directionally authoritarian, and both the institutional and populist left blame the institutional right (for having ceded power to Trump, whether deliberately or by accident).
Things like Ray's post seem to be advocating for the institutional wings of the two parties to come together electorally and beat out the populists on either side.
I take your post to be somewhat conflating the institutional and populist left.
In conversations about this that I've seen, the crux is usually:
Do you expect greater capability gains per dollar from [continued scaling of ~the current techniques] or from some [scaffolding/orchestration scheme/etc.]?
The latter just isn't very dollar-efficient, so I think we'd have to see the existing [ways to spend money to get better performance] become more expensive, or hit a serious wall, before sufficient resources are put into this kind of approach. It may be cheap to try, but verifying performance on relevant tasks and iterating on the design gets really expensive really quickly; on the scale-to-schlep spectrum, this is closer to schlep. I think you're right that something like this could be important at some point in the future, conditional on much less efficient returns from other methods.
This is a bit of a side note, but I think your human analogy for time horizons doesn't quite work, as Eli said. The question is 'how many coherent, purposeful person-minute-equivalents of work can an LLM execute before it fails n percent of the time?' Many person-years of human labor can be coherently oriented toward a single outcome (whether one person or many are involved). That the humans get sleepy or distracted in between is an efficiency concern, not a coherence concern: it affects the rate at which the work gets done, but doesn't put an upper bound on the total amount of purposeful labor that can hypothetically be directed. Humans just pick up where they left off, pursuing the same goals for years at a time, while LLMs seem to lose the plot once they've started to nod off.
The Coefficient technical grant-making team should pitch some people on doing this and just Make It Happen (although I'm obviously ignorant of their other priorities).
well, yeah, AI developers will maybe succeed at shaping the AI along a bunch of specific dimensions. But they will not succeed at exhaustively shaping the AI along all dimensions that turn out to matter.
now what say you to this clever rejoinder?
eminent thinkers like Dwarkesh Patel, Leopold Aschenbrenner, perhaps Scott Alexander
Dwarkesh is an interviewer; Leopold did a meme coup one time. I would like it if we avoided calling them 'eminent thinkers'. Their brand is 'thinker', but if we take the literal meaning of eminent, I basically don't think it's true that knowledgeable people respect either of them as public intellectuals.
Scott I'm more confused about.
Fraught idea for reasons I’m struggling to articulate; thank you for the encouragement!
I am enthused about Richard reading more mid-twentieth-century European philosophy. I think it's unlikely there's much value in approaching Lacan specifically, although this depends on how much context you already have, or would pick up anyway, without specifically directing that reading toward 'preparing to read Lacan'.
A few reasons:
I mostly found value in Lacan as an aphorist, or in a few nifty transformations you can do to various parts of Freud, and less as a coherent, learnable model (this after ~40-100 hours of effort).
Meta: I feel a little self-conscious about something like "Well, now I've twice said 'that one takes a lot of context' without naming where to go to get the context." I really want to avoid making positive recommendations here without first putting forth the effort to see where you're at in more detail than I have so far. Maybe negative recommendations are useful, though!
not to worry; by the end of the decade we'll be able to neatly point to either trillionaires or billionaires, enabling specificity without much shift in vocabulary.