So, I agree p(doom) has a ton of problems. I've really disliked it for a while. I also really dislike the way it tends towards explicitly endorsed evaporative cooling, in both directions; i.e., if your p(doom) is too [high / low] then someone with a [low / high] p(doom) will often say the correct thing to do is to ignore you.
But I also think "What is the minimum necessary and sufficient policy that you think would prevent extinction?" has a ton of problems of its own that would tend to make it pretty bad as a centerpiece of discourse, and not useful as a method of exchanging models of how the world works.
(I know this post does not really endorse this alternative; I'm noting, not disagreeing.)
So some problems:
Whose policy? A policy enforced by treaty at the UN? The policy of regulators in the US? An international treaty policy -- enforced by which nations? A policy (in the sense of mapping from states to actions) that is magically transferred into the brains of the top 20 people at the top 20 labs across the globe? ...a policy executed by OpenPhil??
Why a single necessary and sufficient policy? What if the most realistic way of helping everyone is several policies that are by themselves insufficient, but together sufficient? Doesn't this focus us on dramatic actions unhelpfully, in the same way that a "pivotal act" arguably so focuses us?
The policy necessary to save us will -- of course -- be downstream of whatever model of AI world you have going on, so this question seems -- like p(doom) -- to focus you on things that are downstream of whatever actually matters. It might be useful for coalition formation -- which does seem now to be MIRI's focus, so that's maybe intentional -- but it doesn't seem useful for understanding what's really going on.
So yeah.
Why a single necessary and sufficient policy? What if the most realistic way of helping everyone is several policies that are by themselves insufficient, but together sufficient? Doesn't this focus us on dramatic actions unhelpfully, in the same way that a "pivotal act" arguably so focuses us?
I agree the phrasing here is maybe bad, but I think it's generally accepted that "X and Y" is a policy when "X" and "Y" are independently policies, so I would expect a set of policies which are together sufficient to be an appropriate answer.
IME, a good way to cut through thorny disagreements on values or beliefs is to discuss concrete policies. Example: a guy and I were arguing about the value of "free-speech" and getting nowhere. I then suggested the kind of mechanisms I'd like to see on social media. Suddenly, we were both on the same page and rapidly reached agreement on what to do. Robustly good policies/actions exist. So I'd bet that shifting discussion from "what is your P(doom)?" to "what are your preferred policies for x-risk?" would make for much more productive conversations.
(First, my background assumptions for this discussion: I fear AGI is reachable, the leap from AGI to ASI is short, and sufficiently robust ASI alignment is impossible in principle.)
Whose policy? A policy enforced by treaty at the UN? The policy of regulators in the US? An international treaty policy -- enforced by which nations?
Given the assumptions above, and assuming AGI becomes imminent, then:
Why a single necessary and sufficient policy? What if the most realistic way of helping everyone is several policies that are by themselves insufficient, but together sufficient?
Since my upstream model is "If we succeed in building AGI, then the road to ASI is short, and ASI very robustly causes loss of human control," the core of my policy proposals is a permanent halt. How we would enforce a halt might be complicated or impossible. But at the end of the day, either someone builds it or nobody builds it. So the core policy is essentially binary.
The big challenges I see are:
The central challenge is that nothing like AGI or ASI has ever existed. And building consensus around even concrete things with clear scientific answers (e.g., cigarettes causing lung cancer) can be very difficult once incentives are involved. And we currently have low agreement on how AGI might turn out, for both good and bad reasons. Humans (very reasonably) decline to follow long chains of hypotheticals; refusing to do so is almost always a good heuristic.
So trying to optimize rhetorical strategies for multiple groups with very different basic opinions is difficult.
Seems reasonable but this feels like an object level answer to what I assumed was a more meta question. (Like, this answers what you would want in a policy, and I read 1a3orn's question as why this question isn't Typed in a clear and flexible enough way)
Yeah, that's absolutely fair. I mostly gave my personal answers on the object level, and then I tried to generalize to the larger issue of why there's no simple communication strategy here.
Re "What is the minimum necessary and sufficient policy that you think would prevent extinction?"
This is phrased in a way that implies preventing extinction is a binary, when in reality for any given policy there is some probability of extinction. I actually don't know what Eliezer means here, is he asking for a policy that leads to 50%, 10%, 1%, or 0% P(extinction)?
(I think that it's common for AI safety people to talk too much about totally quashing risks rather than reducing them, in a way that leads them into unproductive lines of reasoning.)
I think that if you want to work on AI takeover prevention, you should probably have some models of the situation that give you a rough prediction for P(AI takeover). For example, I'm constantly thinking in terms of the risks conditional on scheming arising, scheming not arising, a leading AI company being responsible and having three years of lead, etc.
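(As a toy illustration of the kind of conditional number I mean -- with S standing for "scheming arises," and everything implicitly conditioned on which risk mitigation strategy is actually employed; the symbols are just for exposition, not estimates:)

\[
P(\text{AI takeover}) \;=\; P(\text{takeover}\mid S)\,P(S) \;+\; P(\text{takeover}\mid \neg S)\,P(\neg S).
\]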
As another note: my biggest problem with talking about P(doom) is that it's ambiguous between various different events like AI takeover, human extinction, human extinction due to AI takeover, the loss of all value in the future, and it seems pretty important to separate these out because I think the numbers are pretty different.
(I think that it's common for AI safety people to talk too much about totally quashing risks rather than reducing them, in a way that leads them into unproductive lines of reasoning.)
Especially because we need to take into account non-AI X-risks. So maybe "What is the AI policy that would most reduce X-risks overall?" For people with lower P(X-risk|AGI) (if you don't like P(doom)), longer timelines, and/or more worry about other X-risks, the answer may be to do nothing or even to accelerate AI (harkening back to Yudkowsky's "Artificial Intelligence as a Positive and Negative Factor in Global Risk").
"p(Doom) if we build superintelligence soon under present conditions using present techniques"
This has the issue that superintelligence won't be developed using present techniques, and a lot of the question is about whether it'll be okay to develop superintelligence using the different techniques that will be used at the time, and so this number will potentially drastically overestimate the risk.
I of course have no problem with asking people questions about situations where the risk is higher or lower than you believe actual risk to be. But I think that focusing on this question will lead to repeated frustrating interactions where one person quotes the answer to the stated question, and then someone else conflates that with their answer to what will happen if we continue on the current trajectory, which for some people is extremely different and for some people isn't.
I guess same question: "do you think there is a better question here one should ask instead? or do you think people should really stop trying to ask questions like this in a systematic way?".
(Mostly I don't particularly think there should be a common p(Doom)-ish question, but insofar as I was object-level answering your question here, an answer that feels right-ish is "ensure AI x-risk is no higher than around the background risk of nuclear war")
Seems somewhat surprising for that to be what Eliezer had in mind, given the many episodes of people saying stuff like "MIRI's perspective makes sense if we wanted to guarantee that there wasn't any risk" and Eliezer saying stuff like "no I'd take any solution that gave <=50% doom". (At least that's what I remember, though I can't find sources now.)
I do agree that's enough evidence to be confused and dissatisfied with my guess. I'm basing my guess more on the phrasing of the question, which sounds more like it's just meaning to be "what a reasonable person would think 'prevent extinction' means", and the fact that Eliezer said that-sort-of-thing in another context doesn't necessarily mean it's what he meant here.
I almost included a somewhat more general disclaimer of "also, this question is much more opinionated on framing", and then didn't end up including it in the original post. But I just edited that in.
Broadly, do you think there should be any question that's filling the niche that p(Doom) is filling right now? ("no" seems super reasonable, but wondering if there is a question you actually think is useful for some kind of barometer-of-consensus that carves reality at the joints?)
I think "What's your P(AI takeover)" is a totally reasonable question and don't understand Eliezer's problem with it (or his problem with asking about timelines). Especially given that he is actually very (and IMO overconfidently) opinionated on this topic! I think it would be crazy to never give bottom-line estimates for P(AI takeover) when talking about AI takeover risk.
(I think talking about timelines is more important because it's more directly decision relevant (whereas P(AI takeover) is downstream of the decision-relevant variables).)
In practice the numbers I think are valuable to talk about are mostly P(some bad outcome|some setting of the latent facts about misalignment, and some setting of what risk mitigation strategy is employed). Like, I'm constantly thinking about whether to try to intervene on worlds that are more like Plan A worlds or more like Plan D worlds, and to do this you obviously need to think about the effect on P(AI takeover) of your actions in those worlds.
This comment reads as kind of unhelpfully intellectual to me? "What would it take to fix the door?" "Well it would involve replacing the broken hinge." "Ah, that's framing the door being fixed as a binary, really you should claim that replacing the broken hinge has a certain probability of fixing the door."
I get that in life we don't have certainties, and many good & relevant policies mitigate different parts of the risk, but I think the point is to move toward a mechanistic model of what the problem is and what its causes are, and often talking about probabilities isn't appropriate for that and essentially just adds cognitive overhead.
"What is a policy that could be implemented by a particular government, or an international treaty signed by many governments, such that you would no longer think that working on preventing extinction-or-similar from AGI was the top priority (or close to the top priority) for our civilization?"
Another phrasing: "such that you would no longer be concerned that, this century, humanity would essentially lose control over the future by getting outcompeted by an AGI with totally different values?"
I think I could understand you not getting what the question means if, in your model of the future, all routes are crazy and pass through AGI takeover of some sort, and our full-time job regardless of what happens is to navigate that. Like, there isn't cleanly a 'safe' world, all the worlds involve our full-time job being dealing with alignment problems of AGIs.
(Edited down to cut extraneous text.)
I'm surprised that's the question. I would guess that's not what Eliezer means, because he says Dath Ilan is responding sufficiently to AI risk but also hints at Dath Ilan still spending a significant fraction of its resources on AI safety (I've only read a fraction of the work here, maybe wrong). I have a background belief that the largest problems don't change that much, that it's rare for a problem to go from #1 problem to not-in-top-10, and that most things have diminishing returns such that it's not worthwhile to solve them so thoroughly. An alternative definition that's spiritually similar, and that I like more, is: "What policy could governments implement such that improving the AI x-risk policy would no longer be the #1 priority, if the governments were wise?" This isolates AI / puts it in context of other global problems, such that the AI solution doesn't need to prevent governments from changing their minds over the next 100 years, or whatever needs to happen for the next 100 years to go well.
Fixing doors is so vastly easier than predicting the future that analogies and intuitions don't transfer.
Compare someone asking in 1875, 1920, 1945, or 2025, "What is the minimum necessary and sufficient policy that you think would prevent Germany invading France in the next 50 years?". The problem is non-binary, there are no guarantees, and even definitions are treacherous. I wouldn't ask the question that way.
Instead I might ask "what policies best support peace between France and Germany, and how?". So we can talk mechanistically without the distraction of "minimum", "necessary", "sufficient", and "prevent".
Separately, I do not want anyone to be thinking of minimum policies here. There is no virtue in doing the minimum necessary to prevent extinction.
these are pretty rough log-odds and it may do violence to your own mind to force itself to express its internal intuitions in those terms which is why I don't go around forcing my mind to think in those terms myself,
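(For reference, a minimal sketch of the conversion being referred to -- the log-odds of a probability p is just the log of the odds ratio:

\[
\operatorname{logit}(p) \;=\; \log\frac{p}{1-p}, \qquad \text{e.g. } p = 0.9 \;\Rightarrow\; \ln\frac{0.9}{0.1} \approx 2.2 \text{ nats} \approx 9.5\ \text{decibels.}
\]
)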
I've seen this take from Eliezer a lot. I don't think I've read any longer defense of it, explaining why to believe that it gets harder to think about things if you put numbers on them. I'd especially like a defense that engages with the obvious upside in clarity, ability to check for consistency between multiple different numbers, etc, and argues that the downsides outweigh the upsides. (I currently think the upsides are larger.)
I can't speak for Eliezer, but I can make some short comments about how I am suspicious of thinking in terms of numbers too quickly. I warn you beforehand my thoughts on the subject aren't very crisp (else, of course, I could put a number on them!)
Mostly I feel like emphasizing the numbers too much fails to respect the process by which we generate them in the first place. When I go as far as putting a number on it, the point is to clarify my beliefs on the subject; it is a summary statistic about my thoughts, not the output of a computation (I mean it technically is, but not a legible computation process we can inspect and/or maybe reverse). The goal of putting a number on it, whatever it may be, is not to manipulate the number with numerical calculations, any more than the goal of writing an essay is to grammatically manipulate the concluding sentence, in my view.
Through the summary statistic analogy, I think that I basically disagree with the idea of numbers providing a strong upside in clarity. While I agree that numbers as a format are generally clear, they are only clear as far as that number goes - they communicate very little about the process by which they were reached, which I claim is the key information we want to share.
Consider the arithmetic mean. This number is perfectly clear, insofar as it means there are some numbers which got added together and then divided by how many numbers were summed. Yet this tells us nothing about how many numbers there were, or what the values of the numbers themselves were, or how wide their range was, or what the possible values were; there are infinitely many variations behind just the mean. It is also true that going from no number at all to a mean screens out infinitely many possibilities, and I expect that infinity is substantially larger than the number of possibilities behind any given average. I feel like the crux of my disagreement with the idea of emphasizing numbers is that people who endorse them strongly look at the number of possibilities eliminated in the step of going from nothing to an average and think "Look at how much clarity we have gained!" whereas I look at the number of possibilities remaining and think "This is not clear enough to be useful."
The problem gets worse when numbers are used to communicate. Suppose two people meet at a Bay Area house party and tell each other their averages. If they both say "seven," they'll probably assume they agree, even though it is perfectly possible for the numbers behind their averages to have literally zero overlap. This is the point at which numbers turn actively misleading, in the literal sense that before they exchanged averages they at least knew they knew nothing, and after exchanging averages they wrongly conclude they agree.
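A minimal sketch of that house-party scenario, with invented numbers chosen purely for illustration:

```python
# Two invented datasets with the same mean but no values in common.
a = [7, 7, 7, 7]            # tightly clustered: every value is 7
b = [-93, 107, 1, 13]       # widely scattered: none of the values is 7

mean = lambda xs: sum(xs) / len(xs)
print(mean(a), mean(b))     # 7.0 7.0 -- identical summaries, zero overlap underneath
```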
Contrast this with a more practical and realistic case where we might get two different answers on something like probabilities from a data science question. Because it's a data science question we are already primed to ask questions about the underlying models and the data to see why the numbers are different. We can of course do the same with the example about averages, but in the context of the average even giving the number in the first place is a wasted step because we gain basically nothing until we have the data information (where the sum-of-all-n-divided-by-n is the model). By contrast, in the data science question we can reasonably infer that the models will be broadly similar, and that if they aren't that information by itself likely points to the cruxes between them. As a consequence, getting the direct numbers is still useful; if two data science sources give very similar answers, they likely do agree very closely.
In sum, we collectively have gigantic uncertainty about the qualitative questions of models and data for whether AI can/will cause human extinction. I claim the true value of quantifying our beliefs, the put-a-number-on-it mental maneuver, is clarifying the qualitative questions. This is also what we really want to be talking about with other people. The trouble is the number we have put on all of this internally is what we communicate but does not contain the process for generating the number, and then the conversation invariably becomes about the numbers, and in my experience this actively obscures the key information we want to exchange.
Thanks, I think I'm sympathetic to a good chunk of this (though I think I still put somewhat greater value on subjective credences than you do). In particular, I agree that there are lots of ways people can mess up when putting subjective credences on things, including "assuming they agree more than they do".
I think the best solution to this is mostly to teach people about the ways that numbers can mislead, and how to avoid that, so that they can get the benefits of assigning numbers without getting the downside. (E.g.: complementing numerical forecasts with scenario forecasts. I'm a big fan of scenario forecasts.)
My impression is that Eliezer holds a much stronger position than yours. In the bit I quoted above, I think Eliezer isn't only objecting to putting too much emphasis on subjective credences, but is objecting to putting subjective credences on things at all.
I also think he objects to putting numbers on things, and I also avoid doing it. A concrete example: I explicitly avoid putting numbers on things in LessWrong posts. The reason is straightforward - if a number appears anywhere in the post, about half of the conversation in the comments will be on that number to the exclusion of the point of the post (or the lack of one, etc). So unless numbers are indeed the thing you want to be talking about, in the sense of detailed results of specific computations, they are positively distracting from the rest of the post for the audience.
I focused on the communication aspect in my response, but I should probably also say that I don't really track what the number is when I actually go to the trouble of computing a prior, personally. The point of generating the number is clarifying the qualitative information, and then the point remains the qualitative information after I got the number; I only really start paying attention to what the number is if it stays consistent enough after doing the generate-a-number move that I recognize it as being basically the same as the last few times. Even then, I am spending most of my effort on the qualitative level directly.
I make an analogy to computer programs: the sheer fact of successfully producing an output without errors weighs much more than whatever the value of the output is. The program remains our central concern, and continuing to improve it using known patterns and good practices for writing code is usually the most effective method. Taking the programming analogy one layer further, there's a significant chunk of time where you can be extremely confident the output is meaningless; suppose you haven't even completed what you already know to be minimum requirements, and compile the program anyway, just to test for errors so far. There's no point in running the program all the way to an output, because you know it would be meaningless. In the programming analogy, a focus on the value of the output is a kind of "premature optimization is the root of all evil" problem.
I do think this probably reflects the fact that Eliezer's time is mostly spent on poorly understood problems like AI, rather than on stable well-understood domains where working with numbers is a much more reasonable prospect. But it still feels like even in the case where I am trying to learn something that is well-understood, just not by me, trying for a number feels opposite the idea of hugging the query, somehow. Or in virtue language: how does the number cut the enemy?
I’m confused by the “necessary and sufficient” in “what is the minimum necessary and sufficient policy that you think would prevent extinction?”
Who’s to say there exists a policy which is both necessary and sufficient? Unless we mean something kinda weird by “policy” that can include a huge disjunction (e.g. “we do any of the 13 different things I think would work”) or can be very vague (e.g “we solve half A of the problem and also half B of the problem”).
It would make a lot more sense in my mind to ask “what is a minimal sufficient policy that you think would prevent extinction”?
I don't think this engages with a realistic model of why "p(doom)" was so memetically fit in the first place. I would point to Val's model as something which is probably closer, specifically for purposes of understanding the memetic fitness of "p(doom)".
Does that mean you think it's more like "a way for anxiety to manifest" than "a way for tribal impulses to manifest?"
Yes, though perhaps a mildly gearsier frame would be that anxiety (or something like it) is providing a kind of raw fuel for memeticity, and various memes eat up that fuel and redirect it in other ways, including sometimes toward tribalism. Like, the tribalism is more like one kind of engine which can take anxiety-type fuel.
I'm inclined to look at the blunt limitations of bandwidth on this one. The first hurdle is that p(doom) can pass through tweets and shouted conversations in bay area house parties.
Eliezer in 2008, in When (Not) To Use Probabilities, wrote:
To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers.
As a minor anecdatal addition: people (at least in the bay) were annoyingly and unproductively asking about p(doom) before Death with Dignity was published.
If you want someone to compress and communicate their views on the future, whether they anticipate everyone will be dead within a few decades because of AI seems like a pretty important thing to know. And it's natural to find your way from that to asking for a probability. But I think that shortcut isn't actually helpful, and it's more productive to just ask something like "Do you anticipate that, because of AI, everyone will be dead within the next few decades?". Someone can still give a probability if they want, but it's more natural to give a less precise answer like "probably not" or a conditional answer like "I dunno, depends on whether <thing happens>" or to avoid the framing like "well, I don't think we're literally going to die, but".
I overall agree with the points outlined above. It's for those reasons that I don't have a p(doom), at least if we're only talking about AI. What one person considers "doom" may be very different from what another person does. Furthermore, I've seen people give a p(doom) of, say, 45%, which seems to be a rough ballpark when their p(extinction) is 20% and their p(takeover) is 50%. A casual observer might assume that they see extinction as the only bad outcome, with a probability of 45%.
My P(doom) is 100% - Ɛ to Ɛ:
100% - Ɛ: Humanity has already destroyed the solar system (and the AGI is rapidly expanding to the rest of the galaxy) on at least some of the quantum branches, due to an independence-gaining AGI accident. I am fairly certain this was possible by 2010, and possibly as soon as around 1985.
Ɛ: At least some of the quantum branches where independence-gaining AGI happens will have a sufficiently benign AGI that we survive the experience, or we will actually restrict computers enough to stop accidentally creating AGI.
It is worth noting that if AGI has a very high probability of killing people, what it looks like in a world with quantum branching is that there will periodically be AI winters when computers and techniques approach being capable of AGI, because many of the "successful" uses of AI result in deadly AGI, and so we just don't live long enough to observe those.
And if I am just talking with an average person, I say > 1%, and make the side comment that if a passenger plane had a 1% probability of crashing on its next flight, it would not take off.
Edit: And to really explain this would take a lot longer post, so I apologize for that. That said, if I had to choose, to prevent accidental AGI, I would be willing to restrict myself and everyone else to computers on the order of 512 KiB of RAM, 2 MiB of disk space as part of a comprehensive program to prevent existential risk.
For a while, I kinda assumed Eliezer had basically coined the concept of p(Doom). Then I was surprised one day to hear him complaining about it being an antipattern he specifically thought was unhelpful and wished people would stop using.
He noted: "If you want to trade statements that will actually be informative about how you think things work, I'd suggest, "What is the minimum necessary and sufficient policy that you think would prevent extinction?"
Complete text of the corresponding X Thread:
I spent two decades yelling at nearby people to stop trading their insane made-up "AI timelines" at parties. Just as it seemed like I'd finally gotten them to listen, people invented "p(doom)" to trade around instead. I think it fills the same psychological role.
If you want to trade statements that will actually be informative about how you think things work, I'd suggest, "What is the minimum necessary and sufficient policy that you think would prevent extinction?"
The idea of a "p(doom)" isn't quite as facially insane as "AGI timelines" as marker of personal identity, but
(1) you want action-conditional doom,
(2) people with the same numbers may have wildly different models,
(3) these are pretty rough log-odds and it may do violence to your own mind to force itself to express its internal intuitions in those terms which is why I don't go around forcing my mind to think in those terms myself,
(4) most people haven't had the elementary training in calibration and prediction markets that would be required for them to express this number meaningfully and you're demanding them to do it anyways,
(5) the actual social role being played by this number is as some sort of weird astrological sign and that's not going to help people think in an unpressured way about the various underlying factual questions that ought finally and at the very end to sum to a guess about how reality goes.
Orthonormal responds:
IMO "p(doom)" was a predictable outgrowth of the discussion kicked off by Death with Dignity, specifically saying you thought of success in remote log-odds. (I did find it a fair price to pay for the way the post made the discourse more serious, ironically for April Fools.)
Eliezer responds:
the post was about fighting for changes in log-odds and at no point tried to give an absolute number
(There is some further back-and-forth about this)
I kinda agree with Orthonormal (on this being a fairly natural outgrowth of Death with Dignity's framing), although I think it's more generally downstream of "be a culture that puts probabilities on things."
I'm posting this partly because I had vaguely been attributing "p(Doom)" discourse to Eliezer, and it seemed like maybe other people were too, and it seemed good to correct the record for anyone else who also thought that.
I know at least a few other prominent x-risky thinkers who also think p(Doom) is a kind of bad way to compress worldviews. I'm posting this today because Neel Nanda recently tweeted about generally hating being asked for his p(Doom), noting a time an interviewer recently asked him about it and he replied:
I decline to answer that question because I am not particularly confident in my answer, and I think that the social norm of asking for off-the-cuff numbers falsely implies that they matter more than they do.
Originally, I wanted to flesh out this post with some thinking about what people are trying to get out of "saying your p(Doom)", and how to best achieve that. Spelling that out nicely turned out to be a lot of work, but, it still seemed nice to crosspost this mini-essay.
I guess I will include some takes without justifying them yet, and hash things out in the comments:
0. It's not that discussing p(Doom) has zero use; the thing that is weird/bad is how much people fixate on it relative to other things.
1. It's not useful to articulate more precise p(Doom) than you have a credibly calibrated belief about.
i.e. most people probably should be saying things more like "idk somewhere between X and Y%?" than "Z%."
I think it's better to give ranges than to just use words like "probably?" "probably not" "very unlikely", because not everyone means the same thing by those words, and having common units is helpful for avoiding misunderstandings.
2. You should at least factor out "p(Doom) if we build superintelligence soon under present conditions using present techniques" vs "p(Doom) all-things-considered."
"How hard is navigating superintelligence" and "how competently will humanity navigate superintelligence" seem like fairly different questions.
In particular, this helps prevent p(Doom) being more like a measure of mood-affiliation of vibes than an actual prediction.
I like Eliezer's alternate question of "What is the minimum necessary and sufficient policy that you think would prevent extinction?" (with "nothing" being an acceptable answer), but, it does seem noticeably harder to answer and at least one of the nice things about p(Doom) is you probably have an implicit thing you already believe. (It's also a much more opinionated framing)
3. It's useful to track some kind of consensus about p(Doom)-ish questions, but not for the reasons most people think.
It's not good for leaning into tribal camps about how doomy to be.
It's also not good for figuring out what sort of ideas or questions are acceptable to talk about.
I think it's kinda reasonable for people who are mostly not going to think about x-risk anyways, and who don't trust themselves or don't want to put in the time to evaluate the arguments themselves, to defer to some vague consensus of people they trust.
(I do think it's pretty silly to do that re: mainstream AI experts, because so many of them are clearly not paying much attention to the arguments at all. But if you don't trust anyone in the x-risk community, idk, I don't have a better suggestion. But you are here reading LessWrong, so probably you aren't doing that.)
It does seem kinda useful to track what mainstream experts believe, for purposes of modeling mainstream society so you can then make predictions about interventions on mainstream society. But, a) it still seems better to separate "doom-if-prosaic-AI-under-present-conditions" from "overall doom" b) I think it's easy to fall into some tribal dynamics here. Please don't.
It also seems kinda useful to track what the x-risk-thinkers consensus is, for purposes of modeling the x-risk conversation and how to make intellectual progress on it, but, again, don't fall into the attractor of overdoing it or doing it in a tribal way, and don't overindex on p(Doom) as opposed to all the other questions that are worth asking.
Some alternate ways of breaking down this question:
Rob Bensinger has made some previous attempts at more nuanced things. In 2021 he sent this 2-question survey to ~117 people working on long-term AI risk:
1. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?
2. How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?
And in 2023 made this much more multifaceted view-snapshot chart.
So insofar as you wanted something like "a barometer of what some people think", I think this last one is too complex to be useful except as a one-time highish effort survey.