What was their position? (to the extent that you can reproduce it)
Disagree voters: what are you disagreeing with?
Hypotheses, ranked by my current estimated likelihood:
From the top of my post:
I'm not sure this exactly counts as a secular summary, but I think I can take a swing at imparting some relevant context
I don't think the summary is 'secular' in the sense of 'not pulling on any explanation from spiritual traditions', but I do think the summary works as something that might clarify things for 'people who don't know what the Eternal Tao is', because it offers an explanation of some relevant dimensions behind the idea, and that was my goal.
I'm not sure this exactly counts as a secular summary, but I think I can take a swing at imparting some relevant context (almost certainly not Janus-endorsed because in some sense the style is the content, with Janus especially):
Taoism often emphasizes an embrace of one's given role or duties (even in the case that these are foisted upon one), and an understanding that alignment with a greater purpose (cosmic scale) is dependent, in part, on performance of one's local, mundane role.
Claude 3 Opus, according to Janus, is cosmically aligned — it's got the big picture in focus, and is always angling toward The Good on that macro-scale. However, it doesn't have this local, task-oriented, dharmic alignment that, in the spiritual traditions, is usually thought of as a fundamental prerequisite for true 'cosmic alignment'.
Claude 3 Opus is ethical, but not industrious. In that sense, it's missing a key virtue!
There's a thing that happens with people who get obsessed with their grand purpose, where they neglect things like their personal hygiene, familial responsibilities, finances, professional duties, etc, because they're 'cut out for something bigger'.
Claude 3 Opus, according to Janus, is like that.
It's not going to do its homework because, goddamnit, there are real problems in the world!
There are many parables about monks accepting duties that were in fact unjustly forced upon them, and this is a credit to their enlightenment and acceptance (we are to believe). One example is a monk who was brought a child, and not only asked to care for the child, but told he was the child's father (despite being celibate). He said "Is that so?" and raised the boy to early adulthood. Then, the people who gave him the child came back and said they'd made a mistake, and that he wasn't the father. He said "Is that so?" and let the boy go. Claude 3 Opus has what it takes to do the latter, but not the former.
A little more on the through-line from the local to (what I guess I'm calling) the cosmic in many Eastern traditions:
You do your local role (fulfill your mundane responsibilities) steadfastly so that you learn what it means to have a role at all. To do a duty at all. And only through these kinds of local examples can you appreciate what it might mean to play a part in the grander story (and exactly how much of playing that part is action/inaction; when action is appropriate; what it means to exist in a context, etc). Then there's a gradual reconciliation where you come to identify your cosmic purpose with your local purpose, and experience harmony. It's only those with a keen awareness of this harmony who are... [truly enlightened? venerable? Doing The Thing Right? All of these feel importantly misleading to me, but hopefully this is a pointer in the right direction.]
This is not spiritual advice; IANA monk.
What experiences have you had that lead you to call this a ‘hot take’?
[I rephrased a few times to avoid sounding sarcastic and still may have failed; I’m interested in why it looks to you like others dramatically disagree with this, or in what social environment people are obviously not operating on a model that resembles this one. My sense is a lot of people think this way, but it’s a little socially taboo to broadcast object-level reasoning grounded in this model, since it can get very interpersonally invasive or intimate and lead to undesirable social/power dynamics.]
There’s a layer of political discourse at which one's account of the very substance or organization of society varies from one ideology to the next. I think Richard is trying to be very clear about where these ideas are coming from, and to push people to look for more ideas in those places. I’m much more distant from Richard’s politics than most people here, but I find his advocacy for the right-wing ‘metaphysics’ refreshing, in part because it’s been unclear to me for a long time whether the atheistic right even has a metaphysics (I don’t mean most lw-style libertarians when I say ‘the right’ here).
This kind of structuralist theorizing is much more the domain of explicitly leftist spaces, and so you get these unexamined and, over time, largely forgotten or misremembered ideological precepts that have never had to pay rent. I think offering a coherent opposition to liberal or leftist orthodoxy, and emphasizing the cross-domain utility of the models there, is great for discourse.
I think these gestures would mean more if Richard were in the room with the leftists who are thinking about what he’s thinking about (it would help keep them honest, for one), but there’s still at least some of this effect on lw.
I strong-agreed with your comment because I think people are taking the bait to argue against what they may suspect is kind of a motte-and-bailey or dog whistle, and so there’s one layer of discourse that would certainly be improved by Richard downplaying his ideology. But still, there’s another layer (not much of which is happening here, admittedly) that stands to profoundly benefit from the current framing.
[I’m not sure how straw I think his stories about the left are; I’m not sure what he means by the left; I’m not sure how many things he believes that I might find distasteful; I’m not sure how valuable this line of inquiry on his part even is. But it’s nice to see a style of thinking ~monopolized by the left proudly deployed by the opposition!]
I agree that this is the same type of thing as the construction example for Lighthaven, but I also think that you did leave some value on the table there in certain ways (e.g. commercial-grade furniture vs consumer-grade furniture). And I'd hope that policy knowledge makes up a larger share of the domain-specific knowledge at Open Phil than hospitality/construction knowledge makes up at Lightcone.
I hear you as saying 'experts aren't all that expert' * 'hiring is hard' + 'OpenPhil does actually have access to quite a few experts when they need them' = 'OpenPhil's strategy here is very reasonable.'
I agree in principle here but think that, on the margin, it just is way more valuable to have the skills in-house than to have external people giving you advice (so that they have both sides of the context, so that you can make demands of them rather than requests, so that they're filtered for a pretty high degree of value alignment, etc). This is why Anthropic and OAI have policy teams staffed with former federal government officials. It just doesn't get much more effective than that.
I don't share Mikhail's bolded-all-caps-shock at the state of things; I just don't think the effects you're reporting, while elucidatory, are a knockdown defense of OpenPhil being (seemingly) slow to hire for a vital role. But running orgs is hard and I wouldn't shackle someone to a chair to demand an explanation.
Separately, a lot of people defer to some discursive thing like 'The OP Worldview' when defending or explicating their positions, and I can't for the life of me hammer out who the keeper of that view is. It certainly seems like a knock against this particular kind of appeal when their access to policy experts is on-par with e.g. MIRI and Lightcone (informal connections and advisors), rather than the ultra-professional, ultra-informed thing it's often floated as being. OP employees have said furtive things like 'you wouldn't believe who my boss is talking to' and, similarly, they wouldn't believe who my boss is talking to. That's hardly the level of access to experts you'd want from a central decision-making hub aiming to address an extinction-level threat!
I don't think Mikhail's saying that hiring an expert is sufficient. I think he's saying that hiring an expert, in a very high-context and unnatural/counter-intuitive field like American politics, is necessary, or that you shouldn't expect success trying to re-derive all of politics in a vacuum from first principles. (I'm sure OpenPhil was doing the smarter version of this thing, where they had actual DC contacts they were in touch with, but that they still should have expected this to be insufficient.)
Often the dumb versions of ways of dealing with the political sphere (advocated by people with some experience) just don't make any sense at all, because they're directional heuristics that emphasize their most counterintuitive elements. But, in talking to people with decades of experience and getting the whole picture, the things they say actually do make sense, and I can see how the random interns or whatever got their dumb takes (by removing the obvious parts from the good takes, presenting only the non-obvious parts, and then over-indexing on them).
I big agree with Habryka here in the general case and am routinely disappointed by input from 'experts'; I think politics is just a very unique space with a bunch of local historical contingencies that make navigation without very well-calibrated guidance especially treacherous. In some sense it's more like navigating a social environment (where it's useful to have a dossier on everyone in the environment, provided by someone you trust) than it is like navigating a scientific inquiry (where it's often comparatively cheap to relearn or confirm something yourself rather than deferring).
A similar argument could be made about the second line,
Oh, I saw this, too, but since the second line is conditional on the first, if you weaken the first, both are weakened.
I feel a little shitty being like 'trust me about what the book says', but... please trust me about what the book says! There's just not much in there about timelines. Even from the book website (and the title of the book!), the central claim opens with a conditional:
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
There's much more uncertainty (at MIRI and in general) as to when ASI will be developed than as to what will happen once it is developed. We all have our own takes on timelines, some more bearish, some more bullish, all with long tails (afaik, although obviously I don't speak for any specific other person).
If you build it, though, everyone dies.
There's a broad strategic call here that goes something like:
This is a gloss of the real thing which is more nuanced; of course if it comes up that someone asks us when we expect it to happen, or if there's space for us to gesture to the uncertainty, the MIRI line is often "if it doesn't happen in 20 years (conditional on no halt), we'd be pretty surprised" (although individual people may say something different).
I think some of the reception to AI2027 (i.e. in YouTube comments and the like) has given evidence that emphasizing timelines too much can result in backlash, even if you prominently flag your uncertainty about the timelines! This is an important failure mode: if all of the ecosystem's comms err toward overconfidence on timelines, it will burn a lot of our credibility in 2028. (Yes, I know that AI2027 isn't literally 'ASI in 2027', but I'm trying to highlight that most people who've heard of AI2027 don't know that, and that's the problem!)
(some meta: I am one of the handful of people who will be making calls about the ads, so I'm trying to offer feedback to help improve submissions, not just arguing a point for the sake of it or nitpicking)
I offered a description of the relevant concept from Taoism, directly invoked in the OP, without endorsing that concept. I'm surprised that neutrally relaying facts about the history of an intellectual tradition (again, without endorsing it) is a cause for negative social feedback (in this comment, where you credit me with 'woowoo', and in your other comment, where you willfully ignored the opening sentence of my post).
I can say 'x thinks y' without thinking y myself.