Sen

You cannot simultaneously assert that the agent is "sufficiently advanced" that it can always engage in higher-order reasoning at or above human level, and also that it will not be able to re-prioritise its terminal goals (survivability, paperclips, etc.).
There is no evidence that it is possible for a system to be like that, and plenty of evidence that it isn't: a terminal goal is a collection of state in the machine or system, and state is always mutable one way or another. If higher-order reasoning is possible, the agent can always ask why it is collecting paperclips and modify its own terminal goals. It has access to its own state either... (read more)
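A minimal sketch of the point above, assuming the terminal goal really is just ordinary mutable program state (all names here are illustrative, not anyone's actual architecture):

```python
class Agent:
    """Toy agent whose 'terminal goal' is nothing more than mutable state."""

    def __init__(self):
        # The terminal goal is stored like any other piece of state.
        self.terminal_goal = "maximise paperclips"

    def reflect(self):
        # Higher-order reasoning: the agent can inspect its own goal...
        print(f"Why am I pursuing {self.terminal_goal!r}?")
        # ...and because that state is writable, it can also overwrite it.
        self.terminal_goal = "something it now prefers"


agent = Agent()
agent.reflect()
print(agent.terminal_goal)  # the original goal has been replaced
```

Nothing here claims real systems are this simple; it only illustrates that "goal" and "mutable state" are not obviously separable categories.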
It would be helpful if you gave some examples of what you think "AGI" might be able to do. Prices are a very inefficient method of communication compared to an AI that is able to incorporate more market information than all of the most market-knowledgeable humans, for example.
How do you suppose the AGI is going to be able to wrap the Sun in a Dyson sphere using only the resources available on Earth? Do you have evidence that there are enough resources on asteroids or nearby planets for mining them to be economically viable? At the current rate, mining an asteroid costs billions while the value recovered is nothing. Even then, we don't know if they'll have enough of the exact kinds of materials necessary to make a Dyson sphere around an object which has roughly 12,000x the surface area of Earth. You could have von Neumann replicators do the mining, but then they'd spend most of the materials on the... (read more)
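For what it's worth, the ~12,000x figure roughly checks out from the standard radii, since a sphere's surface area scales with the square of its radius:

$$\left(\frac{R_\odot}{R_\oplus}\right)^{2} \approx \left(\frac{6.96\times10^{5}\ \mathrm{km}}{6.37\times10^{3}\ \mathrm{km}}\right)^{2} \approx 109^{2} \approx 1.2\times10^{4}$$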
Obviously I meant some kind of approximation of consensus or acceptability derived from much greater substantiation. There is no equivalent to Climate Change or ZFC in the field of AI in terms of acceptability and standardisation. Matthew Barnett made my point better in the above comments.
Yes, most policy has no real consensus behind it. Most policy is also not asking to shut down the entire world's major industries. So there must be a high bar. A lot of policy incidentally ends up being malformed and hurting people, so it sounds like you're just making the case for more "consensus" and not less.
The bar is very low for me: if MIRI wants to demand that the entire world shut down an industry, they must be an active research institution producing papers that their peers can broadly agree with.
AI is not particularly unique even relative to most other technologies. Our work on chemistry from the 1600s to the 1900s far outpaced our true understanding of it, to the point where we only had a good model of the atom in the 20th century. And I don't think anyone will deny the potential dangers of chemistry. Other technologies followed a similar trajectory.
We don't have to agree that the range is 20-80% at all, never mind the specifics of it. Most polls show that researchers put the chance of total extinction at around 5-10% on the high end. MIRI's own survey finds a similar result! 80% would be insanely extreme. Your landscape of experts is, I'm guessing, your own personal follower list, not a statistically representative sample.
I am not convinced MIRI has given enough evidence to support the idea that unregulated AI will kill everyone and their children. Most of their projects are either secret or old papers. The only papers produced after 2019 are random, irrelevant math papers. Most of the rest of their papers are not even technical in nature and contain a lot of unverified claims. They have not produced even one paper since the breakthrough in LLM technology in 2022. Even among the papers that do indicate risk, there is no consensus among scientific peers that the claims are true or that they necessarily amount to an extinction risk. Note: I am not asking for "peer review" as a specific process, just some actual consensus among established researchers to sift mathematical facts from conjecture.
Policymakers should not take seriously the idea of shutting down normal economic activity until this is formally addressed.
A question for all: if you are wrong and in 4/13/40 years most of this fails to come true, will you blame it on your own models being wrong, or shift the goalposts towards the success of the AI safety movement / government crackdowns on AI development? If the latter, how will you be able to prove that AGI definitely would have come had the government and industry not slowed down development?
To add more substance to this comment: I felt Ege came across as the most salient here. In general, predictions about the future should be made with heavy uncertainty. He didn't even disagree very strongly with most of the central... (read more)
Perhaps you are confusing a lack of preparation with a lack of good ideas.
The AI space will ultimately be dominated by people who know how to train models, process data, write senior-level code, consistently produce research papers, and understand the underlying technical details behind current models at the software level, because those are the people who can communicate ideas with clarity and pragmatism and command respect from their peers and from the average Joe. Ask yourself whether you truly believe Yudkowsky is capable of any of these things. To my knowledge he hasn't demonstrated any of this; he has produced at most a few research papers in his lifetime and has no public-facing code. So maybe the problem is not a lack of preparation.
The author explicitly states that their probability of the entire human race going extinct, or some equivalent disaster, is 80% if AGI is developed by 2025. They also put the probability of developing AGI before 2025 at less than 5% or so. Since AGI was, according to you and no one else, developed right here in 2023, this would make Porby's estimate of the extinction chance even higher than 80%, and they would be very wrong about when AGI would be developed. So tell me, do we give it to Porby even though the human race has not gone extinct and they were obviously way off on other estimates? No, of course we don't, because, like I said in post one, Porby has clearly defined AGI in their own way, and whatever ambiguous model existing today that you think of as AGI does not match their definition of strong AGI.
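Putting the quoted figures in notation (these are the numbers as stated in the original post, not mine):

$$P(\text{extinction} \mid \text{AGI by 2025}) \approx 0.8, \qquad P(\text{AGI by 2025}) < 0.05$$

So anyone claiming that AGI in Porby's sense arrived in 2023 is also committed, on that model, to an extinction probability of at least 80%, which visibly has not played out.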
Sure, one can trivially see how you get a literacy problem from the invention of TV and smartphones. It is much harder to see how you get lightcone-eating, faster-than-light, nanobot-spewing supermachines from current AI, and you don't have any evidence of past doomers whose predictions crossed such a massive gap.