## LessWrong

And then on top of that there are significant other risks from the transition to AI. Maybe a total of more like 40% total existential risk from AI this century? With extinction risk more like half of that, and more uncertain since I've thought less about it.

40% total existential risk, and extinction risk half of that? Does that mean the other half is some kind of existential catastrophe / bad values lock-in but where humans do survive?

4evhub1mo
Fwiw, I would put non-extinction existential risk at ~80% of all existential risk from AI. So maybe my extinction numbers are actually not too different from Paul's (seems like we're both ~20% on extinction specifically).

This is a temporary short form, so I can link people to Scott Alexander's book review post. I'm putting it here because Substack is down, and I'll take it down / replace it with a Substack link once it's back up. (Also, it hasn't been archived by the Wayback Machine yet; I checked.)

The spice must flow.

Edit: It's back up, link: https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future

It isn't down for me right now [https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future]. It was already on archive.today [http://archive.ph/2022.08.23-095051/https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future] and on Internet Archive [https://web.archive.org/web/20220823060835/https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future].

I hope to write a longer response later, but wanted to address what might be my main criticism, the lack of clarity about how big of a deal it is to break your pledge, or how "ironclad" the pledge is intended to be.

I think the biggest easy improvement would be amending the FAQ (or preferably something called "pledge details" or similar) to present the default norms for pledge withdrawal. People could still choose different norms if they preferred, but it would make it clearer what people were agreeing to, and how strong the commitment was intended to be, without adding more text to the main pledge.

1lukefreeman2mo
Thanks! Will have a think about how any existing language could be updated or what should be added. The TL;DR version of my ideal is that people take it seriously enough, with enough foresight, that there is only a small (5-20%) chance of withdrawal (it's best to keep promises), but not so seriously that they wouldn't take it at all (worried that there's some tiny chance they'll break it) or wouldn't resign if it were truly the best thing for the world (counting not just their direct impact but also the impact on their own wellbeing and the impact their resignation/follow-through has on the norm).

I see a few problems with having default norms for withdrawal. It can (a) be hard to universalise; (b) provide licence for some; (c) devalue the efforts of others. For the purpose of looking at (a), (b) and (c), let's imagine we made an explicit default norm of developing a chronic health issue (something I can imagine being a good reason to withdraw for some people, after careful consideration of their exact circumstances):

(a) Chronic health issues vary significantly in how much they'd change someone's ability to follow through, and the place someone lives (e.g. public/private healthcare) and their employment situation can also change that. For example, I've had chronic back pain and headaches/migraines since I was 14 years old, but I don't see that as a dealbreaker.

(b) We all fall prey to motivated reasoning, and pre-commitment is meant to help you avoid that to some extent. If chronic health issues were listed as a norm, I might have looked at the pledge and thought "Oh, that's for healthy people, I'm all good, I should keep 100% of my money." Or say I took the pledge before my chronic pain started, and then at some point reflected and thought "I guess I'll just stop giving, because that's just for the healthy people."

(c) If some people with the same situation have worked through it then it can devalue their efforts and/or make th

I'm a little surprised that I don't see more discussion of ways that higher-bandwidth brain-computer interfaces might help, e.g. Neuralink or equivalent. It sounds difficult, but do people feel really confident it won't work? Seems like if it could work, it might be achievable on much faster timescales than superbabies.

Oh cool. I was thinking about writing some things about private non-ironclad commitments but this covers most of what I wanted to write. :)

I cannot recommend this approach on the grounds of either integrity or safety 😅

7Elizabeth2mo
It worked for Jon Snow; I don't know what your problem is.

Yeah, I think it's somewhat boring without more. Solving the current problems seems very desirable to me, very good, and also really not complete / compelling / interesting. That's what I'm intending to try to get at in part II. I think it's the harder part.

Rot13: Ab vg'f Jbegu gur Pnaqyr

1UnderTruth3mo
Rot13: V gubhtug vg jbhyq or Znaan ol Znefunyy Oenva

This could mitigate financial risk to the company but I don't think anyone will sell existential risk insurance, or that it would be effective if they did

I think that's a legit concern. One mitigating factor is that people who seem inclined to rash destructive plans tend to be pretty bad at execution, e.g. Aum Shinrikyo


If you have specific examples where you think I took something too seriously that was meant to be a joke, I'd be curious to see those.

1. Recently Eliezer has used the dying with dignity frame a lot outside his April 1st post. So while some parts of that post may have been a joke, the dying with dignity part was not. For example: https://docs.google.com/document/d/11AY2jUu7X2wJj8cqdA_Ri78y2MU5LS0dT5QrhO2jhzQ/edit?usp=drivesdk

2. I think you're right that dying with dignity is a better frame specifically for recommending against doing unethical stuff. I agree with everything he said about not doing unethical stuff, and tried to point to that (maybe if I have time I will add some more em

2johnlawrenceaspden6mo
Not clear to me. Why not?
That makes sense. And thank you for emphasizing this. I think both of our points stand. My point is about the title of this specific April Fools Day post. If it's gonna be an April Fools Day post, "playing to your outs" isn't very April Fools-y. And your point stands I think as well, if I'm interpreting you correctly, that he's chosen the messaging of "death with dignity" outside of the context of April Fools Day as well, in which case "it's an April Fools Day post" isn't part of the explanation. I hear ya for sure. I'm not sure what to think about how necessary it is either. The heuristic of "be more cynical about humans" comes to mind though, and I lean moderately strongly towards thinking it is a good idea.

Just a note on confidence, which seems especially important since I'm making a kind of normative claim:

I'm very confident "dying with dignity" is a counterproductive frame for me. I'm somewhat confident that "playing to your outs" is a really useful frame for me and people like me. I'm not very confident "playing to your outs" is a good replacement for "dying with dignity" in general, because I don't know how much people will respond to it like I do. Seeing people's comments here is helpful.

So, in my mind, the thing that "dying with dignity" is supposed to do is that when you look at plan A and B, you ask yourself: "which of these is more dignified?" instead of "which of these is less likely to lead to death?", because your ability to detect dignity is more sensitive than your ability to detect likelihood of leading to death on the present margin. [This is, I think, the crux; if you don't buy this then I agree the framing doesn't seem sensible.]

This lets you still do effective actions (that, in conjunction with lots of other things, can still... (read more)

"It also seems to encourage #3 (and again the vague admonishment to "not do that" doesn't seem that reassuring to me.)"

I just pointed to Eliezer's warning, which I thought was sufficient. I could write more about why I think it's not a good idea, but I currently think a bigger portion of the problem is people not trying to come up with good plans, rather than people coming up with dangerous plans, which is why my emphasis is where it is.

Eliezer is great at red teaming people's plans. This is great for finding ways plans don't work, and I think it's very impor... (read more)

4Joe_Collman6mo
I largely agree with that, but I think there's an important asymmetry here: it's much easier to come up with a plan that will 'successfully' do huge damage, than to come up with a plan that will successfully solve the problem. So to have positive expected impact you need a high ratio of [people persuaded to come up with good plans] to [people persuaded that crazy dangerous plans are necessary]. I'd expect your post to push a large majority of readers in a positive direction (I think it does for me - particularly combined with Eliezer's take). My worry isn't that many go the other way, but that it doesn't take many.

I currently don't know of any outs. But I think I know some things that outs might require and am working on those, while hoping someone comes up with some good outs - and occasionally taking a stab at them myself.

I think the main problem is the first point and not the second point:

• Do NOT assume that what you think is an out is certainly an out.
• Do NOT assume that the potential outs you're aware of are a significant proportion of all outs.

The current problem, if Eliezer is right, is basically that we have 0 outs. Not that the ones we have might be less... (read more)

3Joe_Collman6mo
Oh sure - I don't mean to imply there's no upside in this framing, or that I don't see a downside in Eliezer's. However, whether you know of outs depends on what you see as an out. E.g. buying much more time to come up with a solution could be seen as an out by some people. It's easy to imagine many bad plans to do that, with potentially hugely negative side-effects. Some of those bad plans would look rational, conditional on an assumption that there was no other way to avoid losing the future. Of course making such an assumption is poor reasoning, but the trouble is that it happens implicitly: nobody needs to say to themselves "...and here I assume that no-one on earth has or will come up with approaches I've missed", they only need to fail to ask themselves the right questions. Conditional on being very clear on not knowing the outs, I think this framing may well be a good one for many people - but I'm serious about the mental exercise.

I agree finding your outs is very hard, but I don't think this is actually a different challenge than increasing "dignity". If you don't have a map to victory, then you probably lose. I expect that in most worlds where we win, some people figured out some outs and played to them.

I donated:

• $100 to Zvi Mowshowitz for his post "Covid-19: My Current Model", but really for all his posts. I appreciated how Zvi kept posting Covid updates long after I had the energy to do my own research on this topic. I also appreciate how he called the Omicron wave pretty well.
• $100 to Duncan Sabien for his post "CFAR Participant Handbook now available to all". I'm glad CFAR decided to make it public, both because I have been curious for a while what was in it and because in general I think it's pretty good practice for orgs like CFAR to publish more of what they do. So thanks for doing that!

I've edited the original to add "some" so it reads "I'm confident that some nuclear war planners have..."

It wouldn't surprise me if some nuclear war planners had dismissed these risks while others had thought them important.

I'm fairly confident that at least some nuclear war planners have thought deeply about the risks of climate change from nuclear war because I've talked to a researcher at RAND who basically told me as much, plus the group at Los Alamos who published papers about it, both of which seem like strong evidence that some nuclear war planners have taken it seriously. Reisner et al., "Climate Impact of a Regional Nuclear Weapons Exchange: An Improved Assessment Based On Detailed Source Calculations" is mostly Los Alamos scientists I believe.

Just because some of t... (read more)

Thanks, this is helpful! I'd be very curious to see where Paul agrees / disagrees with the summary / implications of his view here.

4Rob Bensinger10mo
(I'll emphasize again, by the way, that this is a relative comparison of my model of Paul vs. Eliezer. If Paul and Eliezer's views on some topic are pretty close in absolute terms, the above might misleadingly suggest more disagreement than there in fact is.)

After reading these two Eliezer <> Paul discussions, I realize I'm confused about what the importance of their disagreement is.

It's very clear to me why Richard & Eliezer's disagreement is important. Alignment being extremely hard suggests AI companies should work a lot harder to avoid accidentally destroying the world, and suggests alignment researchers should be wary of easy-seeming alignment approaches.

But it seems like Paul & Eliezer basically agree about all of that. They disagree about... what the world looks like shortly before the end... (read more)

I would frame the question more as 'Is this question important for the entire chain of actions humanity needs to select in order to steer to good outcomes?', rather than 'Is there a specific thing Paul or Eliezer personally should do differently tomorrow if they update to the other's view?' (though the latter is an interesting question too).

Some implications of having a more Eliezer-ish view include:

• In the Eliezer-world, humanity's task is more foresight-loaded. You don't get a long period of time in advance of AGI where the path to AGI is clear; nor do yo

Another way to run this would be to have a period of time before launches are possible for people to negotiate, and then to not allow retracting nukes after that point. And I think next time I would make it so that the total payoff with no nukes would be greater than the total if only one side nuked, though I did like that this time people had the option of a creative solution that "nuked" a side but led to higher EV for both parties than not nuking.

1Idan Arye1y
You also need to only permit people who took part in the negotiations to launch nukes. Otherwise newcomers could just nuke without anyone having had a chance to establish a precommitment to retaliate against them.

I think the fungibility is a good point, but it seems like the randomizer solution is strictly better than this. Otherwise one side clearly gets less value, even if they are better off than they would have been had the game not happened. It's still a mixed motive conflict!

I'm not sure that anyone exercised restraint in not responding to the last attack, as I don't have any evidence that anyone saw the last response. It's quite possible people did see it and didn't respond, but I have no way to know that.

Oh, I should have specified that I would consider the coin flip a cooperative solution! Seems obviously better to me than any other solution.

I think there are a lot of dynamics present here that aren't present in the classic prisoner's dilemma, and some dynamics that are present (and some that are present in various iterated prisoner's dilemmas). The prize might be different for different actors, since actors place different value on "cooperative" outcomes. If you can trust people's precommitments, I think there is a race to commit OR precommit to an action.

E.g. if I wanted the game to settle with no nukes launched, then I could pre-commit to launching a retaliatory strike to either side if an attack was launched.

I sort of disagree. Not necessarily that it was the wrong choice to invest your security resources elsewhere--I think your threat model is approximately correct--but I disagree that it's wrong to invest in that part of your stack.

My argument here is that following best practices is a good principle, and that you can and should make exceptions sometimes, but Zack is right to point it out as a vulnerability. Security best practices exist to help you reduce attack surface without having to be aware of every attack vector. You might look at this instance and r... (read more)

6habryka1y
I do not know what the difference here is. Presumably one implies the other?

Furthermore, it is not inconceivable to me that an adversary might be able to use the hash itself without cracking it. For example, the sha256 hash of some information is commonly used to prove that someone has that information without revealing it. So an adversary, using the hash, could credibly lie that he already possesses a launch code, and (in a counterfactual world where no one but this adversary found out about the client side leaking the hash) use this lie to acquire an actual code with some social engineering.
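For readers unfamiliar with the pattern being described, here is a minimal sketch (not from the thread; the secret value is made up) of how a SHA-256 digest works as a commitment to a secret:

```python
import hashlib

def commit(secret: bytes) -> str:
    # Publishing only the digest commits you to the secret without revealing it.
    return hashlib.sha256(secret).hexdigest()

def reveal_matches(secret: bytes, commitment: str) -> bool:
    # Later, anyone holding the published digest can verify a revealed secret.
    return hashlib.sha256(secret).hexdigest() == commitment

code = b"0000-0000"  # hypothetical launch code, illustration only
c = commit(code)
assert reveal_matches(code, c)
assert not reveal_matches(b"1111-1111", c)
```

The comment's point is the flip side of this: the digest itself is public, so someone who merely copied a leaked digest can pretend the commitment is theirs, even without ever cracking the preimage.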

Like:

I agree that this is a correct application of security mindset; exposures like these can compound with, for example, someone's automatic search of the 100 most common ways to screw up secure random number generation such as by using the current time as a seed. Deep security is about reducing the amount of thinking you have to do and your exposure to wrong models and stuff you didn't think of.
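One of the classic screw-ups mentioned above, seeding a generator with the current time, can be sketched in a few lines (a generic illustration, not code from the thread):

```python
import random
import secrets
import time

# Insecure: seeding with the current time makes the output reproducible by
# anyone who can guess roughly when the generator was seeded.
seed = int(time.time())
random.seed(seed)
insecure_code = random.randint(0, 9999)

# An attacker who brute-forces a small window of plausible timestamps will
# hit the same seed and replay the exact same "random" value.
random.seed(seed)
assert random.randint(0, 9999) == insecure_code

# Safer: an OS-backed CSPRNG with no caller-supplied seed to guess.
secure_code = secrets.randbelow(10000)
assert 0 <= secure_code < 10000
```

This is the kind of exposure that compounds: a leaked hash plus a guessable seed can together break a system that either flaw alone might not.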

I don't think it's super clear, but I do think it's the clearest that we are likely to get that's more than 10% likely. I disagree that SARS could take 15 years, or at least I think that one could have been called within a year or two. My previous attempt to operationalize a bet had the bet resolve if, within two years, a mutually agreed upon third party updated to believe that there is >90% probability that an identified intermediate host or bat species was the origin point of the pandemic, and that this was not a lab escape.

Now that I'm writing this ... (read more)

1Donald Gislason1y
Given that full transparency from Chinese authorities is unlikely, assessing the probabilities is the best we can do. Fortunately, that has been done with impressive scientific rigour by DRASTIC member Dr. Steven Quay MD, PhD in his technically detailed 193-page Bayesian analysis of 26 known facts about the outbreak: https://zenodo.org/record/4477081#.YNAFry0ZNE4 which he explains in layman's terms in his interview with Julius Killerby (cited in my comment above).

The advantage of this approach is that it follows the scientific method: laying out its premises and calculations clearly so that they can be challenged and tested by experts in the field. The evidence is so convincing that, along with his influential piece in the Wall Street Journal (co-authored by astrophysicist Richard Muller) https://www.wsj.com/articles/the-science-suggests-a-wuhan-lab-leak-11622995184 his Bayesian analysis, made available to both the WHO and the Biden administration, likely represents 'the writing on the wall' for public decision-makers. It was the 'nudge' indicating that keeping the story low-key was no longer an option, given the amount of technical expertise weighing in on the subject in public discussion.

In my view, given the dramatic quality of the statistical evidence, the Biden administration now finds itself the dog that caught the car. The three-month time period for a report from the intelligence community is likely only a breather to assess how to handle the truth of the matter politically with China, and no longer an attempt to establish what is actually true.
2ChristianKl1y
Having looked more into it, it's quite plausible that we will have confirmation that it's a lab leak in a few months or years. The US intelligence community is currently tasked with looking for evidence, and it's quite plausible that someone in China actually knows that it's a lab leak and the US intelligence community manages to intercept clearcut information that goes beyond the reduced cell phone traffic and possible road closures around the WIV in October 2019 and the 3 researchers from the WIV who went to the hospital with symptoms matching flu and COVID-19 in November 2019.

It seems like an interesting hypothesis, but I don't think it's particularly likely. I've never heard of other viruses becoming well adapted to humans within a single host. Though, I do think that's the explanation for how several variants evolved (since some of them emerged with a bunch of functional mutations rather than just one or two). I'd be interested to see more research into the evolution of viruses within human hosts, and what degree of change is possible & how this relates to spillover events.

Thanks! I'm still wrapping my mind around a lot of this, but this gives me some new directions to think about.

I have an intuition that this might have implications for the Orthogonality Thesis, but I'm quite unsure. To restate the Orthogonality Thesis in the terms above, "any combination of intelligence level and model of the world, M2". This feels different than my intuition that advanced intelligences will tend to converge upon a shared model / encoding of the world even if they have different goals. Does this make sense? Is there a way to reconcile these intuitions?

3johnswentworth1y
Important point: neither of the models in this post is really "the optimizer's model of the world". M1 is an observer's model of the world (or the "God's-eye view"); the world "is being optimized" according to that model, and there isn't even necessarily "an optimizer" involved. M2 says what the world is being-optimized-toward. To bring "an optimizer" into the picture, we'd probably want to say that there's some subsystem which "chooses"/determines θ′, in such a way that E[−log P[X|M2] | M1(θ′)] ≤ E[−log P[X|M2] | M1(θ)], compared to some other θ-values. We might also want to require this to work robustly, across a range of environments, although the expectation does that to some extent already. Then the interesting hypothesis is that there's probably a limit to how low such a subsystem can make the expected description length without making θ′ depend on other variables in the environment. To get past that limit, the subsystem needs things like "knowledge" and a "model" of its own; the basic purpose of knowledge/models for an optimizer is to make the output depend on the environment. And it's that model/knowledge which seems likely to converge on a similar shared model/encoding of the world.

A random observation I want to note here is the relative lack of good disagreement I've seen around questions of SARS-CoV-2 origin. I've mostly seen people arguing past each other or trying to immediately dismiss each other. This seems true of experts in the space in addition to non-experts. I'd love to see better structured disagreement, i.e. back and forth in journals or other public forums. This might be a good topic for adversarial collaboration.

There have even been claims of SARS-CoV-2 in March 2019, which I think are almost certainly false positives:

https://www.lesswrong.com/posts/fzAGNMeL7a4J8k7im/was-sars-cov-2-actually-present-in-march-2019-wastewater

I should have worded that better. I copied that sentence from a Facebook post where I had a claim above it saying something like, "I think this article is basically correct in its interpretation of the literature". The disagreement is about claims the NY Mag article made that weren't backed up by sources / were the author's original speculation. I meant to convey: "I think the NY Mag article did a decent job summarizing the articles it cited; that being said, while I agree with the general thrust of the article, I think there are some points the author speculated about that are likely wrong."

2ESRogs2y
Got it, thanks for the clarification.

For context, I have a background in evolutionary theory (though nothing specific to viruses or pathogens) and have recently transitioned from part time to full time research in the longtermist biosecurity space.

When investigating this question, I found researchers' arguments pretty easy to follow, but found some of the claims about ease of engineering hard to follow because they often relied on tacit knowledge like "how hard / expensive is it to make an infectious clone of a new coronavirus". And some of the more technical molecular phylogenetics wer... (read more)

There is a group of researchers concerned with CoV19 origins who frequent Twitter and use the moniker #DRASTIC. They count a number of geneticists / microbiologists in their number. See this list: https://twitter.com/i/lists/1344953249334513666. @ydeigin, @__ice9, @MonaRahalkar, @Rossana38510044, @Ayjchan and @AntGDuarte may be good candidates for your questions. Note that they consider RATG13 to be a chimera designed to obfuscate research.
3Gerald Monroe2y
Ok, so here is what I read (my knowledge may be out of date): There is a way to set up a chain of animals in a "gain of function" experiment. You start with the wild-type virus and infect the first animal. You put a gap to the next animal, where the viral particles able to bridge that gap (out of the very large number of copies, some mutated in the infected animal) are more probable to be capable of bridging gaps. Eventually, in the last animal, the gap is a large air gap, and the virus is now airborne in lab animals.

All it takes for a leak after that is a single mistake by a laboratory employee, such as a faulty seal or air filter or procedure error, and they become infected. They then spread it to someone in Wuhan, and that's your outbreak. This same 'gap bridging' is what is creating these mutant variants of Covid that are more infectious.

Anyways, this chain of events can occur 'naturally', in the same way that enough U-235 can concentrate itself naturally to form a nuclear reactor; it just isn't very likely. Part of this is the exactness of the setup. In a laboratory gain-of-function experiment, you carefully control each gap to force the virus to evolve to bridge it (each gap is a little bit more difficult, in a smooth progression). In some random cave or mineshaft, conditions are chaotic and the virus is not under as much pressure. Sort of like the difference between using pure neutron reflectors versus pure uranium ore to make a nuclear pile, and relying on nature to make one by accident.

I've done over 200 hours of research on this topic and have read basically all the sources the article cites. That said, I don't agree with all of the claims. I do not think the SARS-CoV-2 virus is very likely to have been created using the RATG13 virus, because of the genetic differences spread out throughout the genomes. However, there are many other paths that could have led to a lab escape, and I'm somewhat agnostic between several of them.

I don't have a lot of time to investigate this further, but if someone was going to spend serious time on it, then... (read more)

1Richard Dixon1y
This is fairly convincing that it's plausible and even likely: https://nicholaswade.medium.com/origin-of-covid-following-the-clues-6f03564c038
3CTVKenney2y
With regard to the rootclaim link, I agree that it would be good to try to adapt what they've done to our own beliefs. However, I want to urge some caution with regard to the actual calculation shown on that website. The event to which they give a whopping 81% probability, "the virus was developed during gain-of-function research and was released by accident," is a conjunction of two independent theses. We have to be very cautious about such statements, as pointed out in the Rationality A-Z, here https://www.lesswrong.com/s/5g5TkQTe9rmPS5vvM/p/Yq6aA4M3JKWaQepPJ
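The caution about conjunctions can be made concrete with a toy calculation (the numbers below are purely illustrative, not an endorsement of anyone's figures):

```python
# Two independent claims: each individually probable, their conjunction less so.
p_engineered = 0.9           # hypothetical credence in the first thesis
p_accidental_release = 0.9   # hypothetical credence in the second thesis

# Under independence, the joint probability is the product, and it can never
# exceed the probability of the least likely conjunct.
p_both = p_engineered * p_accidental_release
assert p_both <= min(p_engineered, p_accidental_release)
print(f"{p_both:.2f}")  # 0.81
```

So a stated 81% for a conjunction implicitly requires each conjunct to be held with very high confidence, which is exactly the kind of claim that deserves extra scrutiny.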


4ESRogs2y
This may be a bit of a pedantic comment, but I'm a bit confused by how your comment starts: The "That said, ..." part seems to imply that what follows is surprising. As though the reader expects you to agree with all the claims. But isn't the default presumption that, if you've done a whole bunch of research into some controversial question, that the evidence is mixed? In other words, when I hear, "I've done over 200 hours of research ... and have read ... all the sources", I think, "Of course you don't agree with all the claims!" And it kind of throws me off that you seem to expect your readers to think that you would agree with all the claims. Is the presumption that someone would only spend a whole bunch of hours researching these claims if they thought they were highly likely to be true? Or that only an uncritical, conspiracy theory true believer would put in so much time into looking into it?
3River2y
How do you reconcile the hypothesis that it escaped from a lab in China with the reports that covid-19 antibodies were found in more than a dozen blood samples taken in Italy in early October 2019, and therefore must have been circulating in Italy in September 2019?


Huh, I haven't heard this. Or, rather, this was definitely the case early in the Cold War re China, a good account of which is in Daniel Ellsberg's The Doomsday Machine. The US war plans considered China hostile even though it wasn't closely aligned with the Soviet Union, and planned to nuke its major cities in the event of a US-Soviet war. I would expect nuclear war plans to sometimes include military allies of target countries, but not usually neutral countries. Though I'd be very interested to see a source to the contrary!

I think what you're saying makes sense, but it's not clear to me how prominent diplomatic objectives of nuclear war are in current nuclear war plans. I wrote on this some here: https://www.lesswrong.com/posts/rn2duwRP2pqvLqGCE/does-the-us-nuclear-policy-still-target-cities If you have sources for this I'd be interested!

Thanks Alex! Yeah, I agree with you that adding approximate numbers or likelihood ratios would improve this, as would comparing my credences with Toby Ord's. I might do a followup post with some of this if I get time. Originally I was going to find a co-author and go in more depth on some of these things, especially the nuclear winter literature, but I keep starting and not finishing posts and I figured it was finally time to just put up what I had.

It would be good to separate "kill everyone with acute radiation right away" and "kill everyone with radiatio... (read more)

Yeah, the point that risks from nuclear war would be coupled with risks from great power conflict is a good one. I expect this to be more of a problem in the future, but there could be some risks at present from secret bioweapon systems or other kinds of WMDs.

My mainline expectation is that in a nuclear war scenario, chemical, biological, and conventional weapon effects would be dwarfed by the effects of nuclear weapons. This is based on my understanding of the major powers' deterrence strategies, but might be wrong if there are secret weapons I'm not... (read more)

9maximkazhenkov2y
I would classify biological weapons as more dangerous than nuclear ones, but that's a different topic. Besides, biological and nuclear warfare don't mix well: without commercial air travel and trade, biological agents don't spread well.

Yeah, you're right that I'm assuming a "nuclear war" refers to a scenario involving current arsenals or those of the near future.

1) Future nuclear weapons, especially if they're designed to kill everyone, could greatly increase the risk. Poseidon / Status-6 aside, I don't think states are likely to invest in omnicide capabilities, for several reasons. One is that it's a really hard optimization problem, and it's easier to just crush your enemy with standard hydrogen bombs. So why pay far more for something that doesn't provide... (read more)

2avturchin2y
Thanks for your reply. One thing in play here is that Doomsday and geophysical weapons are the last resort of the weaker side. If a stronger side has effective anti-missile tech and/or first-strike capability, then having nuclear missiles becomes useless. This is the situation for Russia now, and the reason they are building Poseidon. Given this mindset, a country like North Korea may invest in a Doomsday weapon, but a Western country would not. Russia and China could also do it.

I really don't think fossil fuel depletion is very likely to permanently curtail humanity's potential in a nuclear or other collapse situation. I've seen this point argued a bunch though, so I think it's worth taking the hypothesis seriously. I'd love to see an in depth analysis of this question.

Good point re Australia hosting missile-tracking capabilities; I agree that it might be targeted given that. I'm less worried about carrier groups and similar targets being hit. I don't disagree that some of these might be hit, and while this may result in some fallout in the southern hemisphere, it doesn't seem like enough to move the dial. The ocean has a lot of area.

I didn't cover sterilization or birth defects from either the initial fallout or from ingested radionuclides later on. These are both problems, but I would be quite surprised if they killed a large pe... (read more)

1Stuart Anderson2y
-

Was the quality not up to par? If the reading / recording was good, it's surprising that it's not up on intelligence.org

4mingyuan2y
It is there! See https://intelligence.org/rationality-ai-zombies/, scroll to the bottom, and click "R:AZ Audio Book" > "Links and Details".

I would be very excited about this. I'm constantly looking for more high quality (both from a content-level and sound-level) audio content. Good readers really go a long way towards creating a pleasant listening experience, so I also would be excited about people skilling up in this area within our community.

My guess is that most LessWrong members don't listen to much audio content, but that a substantial minority listen to a lot of it. I expect that for this group, more audio content would be quite valuable. I also expect the majority group to underrate the importance of this if they haven't spoken with many people in the smaller audio-listening category.

I think their claim is that labs only (or usually) work with viruses that have been described / that they have published the sequences for. And furthermore that they would have published such GoF work if they had done it (?). Like I said, not very compelling claims, especially because they're general and unclear.

I found this post by Googling: "how to include images in lesswrong posts"

Based on the advice, I tried to upload my photo to Google Drive and share it, but it looks like Google Drive doesn't support this kind of URL-embeddable sharing anymore, if I understand correctly. Next time I will try Dropbox, but if you can confirm that Google Drive no longer supports this, updating the post to say so would be helpful to others. Including a link explaining how to upload an image and then share a link to it would also save time for future readers who find this post via the same Google query.
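For anyone landing here with the same question: my understanding is that a Dropbox share link (which normally ends in `?dl=0`) can be turned into a directly embeddable URL by swapping that suffix for `?raw=1`, so Dropbox serves the file contents instead of a preview page. The URL below is a made-up example, and Dropbox could change this behavior, so treat this as a sketch rather than gospel:

```python
def dropbox_direct_link(share_url: str) -> str:
    """Convert a Dropbox share link into a direct, embeddable URL.

    Dropbox share links typically end in `?dl=0`; replacing that with
    `?raw=1` asks Dropbox to serve the raw file, which image embeds
    in posts can then point at. (The example URL below is hypothetical.)
    """
    return share_url.replace("?dl=0", "?raw=1")

# Hypothetical share link -> embeddable URL
print(dropbox_direct_link("https://www.dropbox.com/s/abc123/photo.png?dl=0"))
# https://www.dropbox.com/s/abc123/photo.png?raw=1
```

You'd then paste the resulting URL into the post editor's image field (or markdown image syntax) rather than the original share link.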

2habryka2y
Fixed. Independently of whether it might be possible if you use the right settings, it seems easy enough to shoot yourself in the foot that it seems like a bad recommendation.
2Raemon2y
I'm not yet sure about this, but last I checked you had to make sure the image on Google Drive was shared fully publicly. Double-checking: did you try that?