Interesting! That does align better with the survey data than what I see on e.g. Twitter.
Out of curiosity, is "around you" a rationalist-y crowd, or a different one?
Apologies if I'm being naive, but it doesn't seem like an oracle AI is logically or practically impossible, and a good oracle should be able to perform well at long-horizon tasks without "wanting things" in the behaviorist sense, or bending the world in consequentialist ways.
The most obvious exception is if the oracle's own answers are causing people to bend the world in the service of hidden behaviorist goals that the oracle has (e.g. making the world more predictable to reduce future loss), but I don't have strong reasons to believe that this is very likely.
This is especially the case since at training time, the oracle doesn't have any ability to bend the training dataset to fit its future goals, so I don't see why gradient descent would find cognitive algorithms for "wanting things in the behaviorist sense."
 in the sense of being superhuman at prediction for most tasks, not in the sense of being a perfect or near-perfect predictor.
 e.g. "Here's the design for a fusion power plant, here's how you acquire the relevant raw materials, here's how you do project management, etc." or "I predict that your polio eradication strategy will have the following effects with probability p, and the following unintended side effects, which you should be aware of, with probability q."
Someone privately messaged me this whistleblowing channel for people to give their firsthand accounts of board members. I can't verify the veracity/security of the channel but I'm hoping that having an anonymous place to post concerns might lower the friction or costs involved in sharing true information about powerful people:
In the last 4 days, they were probably running on no sleep (and were less used to that/had less access to the relevant drugs than Altman and Brockman), and had approximately zero external advisors, while Altman seemed to be tapping into half of Silicon Valley and beyond for help/advice.
Apologies, I've changed the link.
I think it was clear from context that Lukas' "EAs" was intentionally meant to include Ben, and is also meant as a gentle rebuke re: naivete, not a serious claim re: honesty.
I feel like you misread Lukas, and his words weren't particularly unclear.
Yes, here is GPT-4's reasoning.
Besides his most well-known controversial comments re: women, Summers also has, at least according to my read of his Wikipedia page, a poor track record re: being able to identify and oust sketchy people specifically.