I think this is a question on which we should spend lots of time actually thinking and writing. I'm not sure my approximations will be good at guessing the final result.
That seems more probable in a world where AI companies can bring all the required tools in-house. But what if they have large supply chains for minerals and robotics, rent factory space, and employ contractors to do the .0001% of work they can't?
At that point I still expect it to be hard for them to control bits of land without being governed, which I expect to be good for AI risk.
Do you think governance is currently misaligned? It seems fine to me.
In the article below I give an honest list of my considerations about AI. Currently it sits at -1 karma (excluding my own vote).
This is sort of fine. I don't think it's a great article and I am not sure that it's highly worthy of people's attention, but I think that a community that wants to encourage thinking about AI might not want to penalise those who do so.
When I was a Christian, questions were always "acceptable". But if I asked "so why is the Bible true?" that would have received a sort of stern look and then a single-paragraph answer. While the question was explicitly accepted, it was implicitly a thing I should stop doing.
I think it's probable you are doing the same thing: "oh yes, let's think about AI", but if I write something about my thoughts on AI that disagrees with you, it isn't worth reading or engaging with.
And my article is relatively mild pushback. What happens if a genuine critic comes on here and writes something? I agree that some criticism is bad, but what if it is in the form you ask for (lists of considerations, transparently written)?
Is the only criticism worth reading that which actually convinces you? And won't that, due to some bias, likely leave this place an echo chamber?
I think considerations are an important input into decision-making, and if you downvote anyone who writes clear considerations without conforming to your extremely high standards, then you will tax disagreement.
Perhaps you are very confident that you are only taxing bad takes and not just contrary ones, but I am not as confident as you are.
Overall, I think this is poor behaviour from a truth-seeking community. I don't expect every critic to be praised to high heaven (as sometimes happens on the EA Forum), but this seems like a bad equilibrium for a post that is (in my view) fine and presented in the way this community requests (transparent and with a list of considerations).
As for the title:
If you titled this "some factors maybe in AI risk" or "some factors/changes that have shifted my p(doom)" or something and left out the p(doom), I'd have upvoted because you have some interesting observations.
This in particular seems like a dodge. The actual title, "My AI Vibes are Shifting", is hardly confident or declarative. Are you sure you would actually upvote if I had titled it as you suggest?
I see I have 4 votes, with neutral karma overall. I should hope the downvoters thought this wasn't worth reading, rather than that they disagreed.
These are vibes, not predictions.
But in the other worlds I expect governance to sit between many different AI actors, ensure that no single actor controls everything, and then tax them to pay for this function.
Why doesn't SpaceX run a country?
When do you expect this to happen by?
I think it would be different if it happened today. Harris's position seems less controversial. I'm not sure you'd print that he was a racialist today.
This feels like weak evidence against my point, though I think "timelines" and "overall AI risk" differ in how safe they are to argue about.