At a meta level, "publishing, in 2025, a public complaint about OpenPhil's publicly promoted timelines and how those may have influenced their funding choices" does not seem like it serves any defensible goal.
Let's suppose the underlying question is "why did OpenPhil give money to OpenAI in 2017?" (Or, alternatively, why did they not give money to some other venture in a similar timeframe?) Why is this significantly important right now? What plausible goal is served by trying to answer this question more precisely?
If it's because they had long timelines, it tells you that short-timeline arguments were not effective, which hopefully everyone already knows. This has been robustly demonstrated across most meaningful groups of people controlling either significant money or government clout. It is not new information. I would not update on this.
If they did this because they had short timelines, then they believed whatever Sam was selling at the time. I would not update on this either. It is hopefully well understood, by now, that Sam is good at selling things. "You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king."
If they did this for non-timeline reasons, I might update on, idk, some nebulous impression about how OpenPhil's bureaucracy worked before the year 2020 or so, or on how good Sam (or another principal) was at convincing people to give them money. I don't see how either of those is an important fact about the world.
Generally my model is that when people do not seem to be behaving optimally, they are behaving close to optimally for something, but that something is not the goal I imagine they are pursuing. I am imagining a goal like "being able to influence future events more effectively", but I can't see how that's served here, so I imagine we're optimizing for something else.
I have a very simple opinion about why pushes for AI regulation fail. Ready?
Because nobody knows what they are asking for, or what they are asking for is absurd.
Here are the absurdities:
Once we get past absurd asks, we get into easily coopted asks, like "restrict compute", which turns into "declare an arms race with the largest industrial power on the planet", and "monitor training runs", which turns into "tell every potential competitor you are going to have the government crush them".
What will convince me there is a sane bloc pushing for AI-related regulations is when they propose a regulation that is sane. I cannot emphasize enough how much these things have failed at the drafting stage. The part everyone is allegedly good at, that part, where you write down an idea that would actually be good to do? That's the part where this has failed.
This is just accurate and I have nothing interesting to add to it. I basically think that considering the thing in the abstract, as an idea, as opposed to the thing in the real world, as a social grouping, scene, place, group of people, set of practices, etc., is kind of a wasted exercise. As soon as you make a point of not just taking some part of the ideas seriously but identifying yourself as a rationalist and participating in any rationalist space (including this one), the thing you are doing is not, primarily, an abstraction that lives entirely in your head.
It's "notably and especially" part OF YUDKOWSKY'S WORK AND WHAT HE PROMOTES. It is a minor part of the characterization of rationalism.
This is legitimately such poor reading comprehension that I assume malice.
This is a category argument that I explicitly avoid making and don't think is meaningful: the word itself does not mean anything and arguing over it is pointless. You seem to really want to do it anyway, because you can support your argument better on definitional grounds than on factual ones.
This is also being supported by a category of argument from "things people say about stuff", which is both not a method by which you will ever find the truth ("what do people say the most often? i dunno, i can go find some things I think are similar and then decide how similar they are by how similar the things people say sometimes are") and cherry-picked to support your definition.
Like: this method cannot even in principle tell us anything about rationalism, and even if it could, choosing Islam, Judaism, etc. as comparison points instead of Scientology, Mormonism, or any other smaller, newer, or more modern movement is just assuming your conclusion. If you compare the things people say for or against new, modern, small religions (i.e., cults) to the things they say about rationalism, then on the terms of "what's in the discourse is the definition of the thing", which is in any case trash, rationalism is clearly a new age California cult. So I don't know why you think that (completely logically invalid, nonsensical) criterion for truth is the one you'd want to use, since it very clearly indicates rationalism is a standard California cult.
edit: and this style of discourse around cults is SO COMMON THERE IS A MEME FORMAT ABOUT IT. It is a King of the Hill joke about cults, because this discourse surrounding cults was so common that King of the Hill made a joke about it, and it then became a meme because enough people found the joke funny AND expected to have occasion to use it in the future that they clipped a meme format from it. You do not want "what does discourse around the thing look like? that clearly defines what it is" to be your source of truth here, and it's insane to me that you imagine you would.
True! It is very often used pejoratively. I am mostly uninterested in whether or not it's pejorative. I think it's descriptively accurate. I think the place in thingspace where rationalism lives is close to religion.
How many people have DENIED that transhumanism, social justice, liberalism, conservatism, libertarianism, communism, capitalism, objectivism, Apple, or Unix is a religion? Is this a common feature for people interested in those things?
If the standard is "very prominently and loudly denies being a religion" I think you will find that like 10:1 loud denials of being a religion come from cults, not from non-religions.
As I see it, that you have to argue that it's a religion is a bad sign. It means that rationality isn't passing the smell test. That is, you needed to write a post to argue that rationality is a religion, which I view as evidence against it being a religion, since most religions are clearly religions and no one writes posts arguing that Islam or Hinduism is a religion (if anything, people sometimes write the opposite for various reasons!). If it's in the category of religions, it's a marginal case at best.
How many non-religions have had people write thousand-word posts denying they are religions? Does it come up often? "The fact that someone even wrote <this thing>" does not prove the point you think it does. It proves the opposite point.
This is so common, in fact, that there's a meme format for it.
I think these are blind guesses and relying on the benchmarks is the streetlight effect, as I think we talked about in another thread. I am mostly explaining, in as much detail as I can, the parts I think are relevant to Neel's objection, since it is substantively the most common objection, i.e., that paying attention to financial incentives or work history is irrelevant to anything. I am happy that I have addressed the scenario itself in enough detail.
It's an incentive problem.
There is no way to discuss something being dangerous that does not also render it valuable. People are incentivized to seek out value; our entire economy is based on it. It works beautifully, but it is terrible at mitigating externalities. We only dial back from dangerous or bad things after the disaster; so long as doing things is profitable, rational economic actors will seek out high-risk activities as far as permitted, because they alone get the profit while the majority of the risk falls on other people.
In my view Yudkowsky's body of work has had two main effects, which run in opposite directions: