gbear605

gbear605's Comments

An alarm bell for the next pandemic
Is anybody feeling hunky dory about the way world governments are behaving now, or how they will change over time?

I suspect that this will push governments to care a lot more about potential pandemics in the future, since no one is going to forget this one for the next twenty years. If anything, I'd put extra resources into other major dangers that aren't pandemics.

What to draw from Macintyre et al 2015?

How would contamination get onto the mask, other than from you? It seems like a strict improvement, even if it isn't much of an improvement. Also, the contamination should at least be only on one side of the mask.

Discussion about COVID-19 non-scientific origins considered harmful

Your evidence supports the claim that it's generally bad to discuss this conspiracy theory. But it doesn't show that it's helpful to ban the discussion on LessWrong, or in other similar spaces.

Virus As A Power Optimisation Process: The Problem Of Next Wave
Biological global catastrophic risks were neglected for years, while AGI risks were on the top. The main reason for this is that AGI was presented as a powerful superintelligent optimizer, and germs were just simple mindless replicators.

I think that that is an inaccurate description of why people on LessWrong have focused on AI risk over pandemic risk.

A pandemic certainly could be an existential risk, but the chance of that seems low. COVID-19 is a once-in-a-century event, and its worst case is killing ~2% of the human population. Completely horrible, yes, but not at all an existential threat to humanity. Given that no pandemic in recorded history has posed an existential threat, it seems unlikely that one would in the next few hundred years. On the other hand, AI risk is relevant in the coming century, perhaps within decades. It at least seems plausible to me that the dangers from the two are on the same order of magnitude, and that humans should pay roughly equal attention to the x-risk from both.

However, while there are many people working very hard on pandemic control, there aren't many who focus on AI risk. The WHO has many researchers specializing in pandemics, along with scientists across nations, while the closest equivalents for AI safety might be MIRI or FHI. That means an individual on LW can have an impact on AI risk in a way that an individual couldn't on pandemic risk. On top of that, the crowd on LW tends to be geared towards working on AI (knowledge of software and philosophy) and not so much towards pandemic risk (knowledge of biology and epidemiology).

Finally, while it wasn't the top priority, LW has definitely discussed pandemic risk over the years. See the results at https://duckduckgo.com/?q=pandemic+risk+site%3Alesswrong.com+-coronavirus+-covid&t=ffab&ia=web

A practical out-of-the-box solution to slow down COVID-19: Turn up the heat

One problem with this theory is Iran. While your graph places Iran's cases in the yellow area, much of the country lies outside of it. Many of the diagnosed cases are in that yellow area, but many are not, and it is suspected that thousands more cases in the outer regions of the country are currently undiagnosed.

In addition, Germany and New York are in entirely different climate zones and are facing outbreaks, so that's a bad sign as well.

Rationalist prepper thread

The death rate should perhaps be higher than that 3%, since it takes time for the illness to kill, and most of the people currently infected have not been infected long enough to die of it.
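As a rough illustration (all numbers below are invented, and the five-day lag is an assumption, not a measured value), dividing today's deaths by today's cases understates the eventual fatality rate while case counts are still growing:

```python
# Hypothetical numbers illustrating why a naive death rate is an
# underestimate while an epidemic is still growing.
cases = [100, 130, 170, 220, 290, 380, 490, 640]  # cumulative confirmed cases
deaths = [2, 3, 4, 5, 7, 9, 12, 16]               # cumulative deaths
lag = 5  # assumed average days between confirmation and death

naive_cfr = deaths[-1] / cases[-1]
# Compare today's deaths with the cases confirmed `lag` days ago,
# since only those patients have had time to resolve.
lagged_cfr = deaths[-1] / cases[-1 - lag]

print(f"naive fatality rate:  {naive_cfr:.1%}")   # 2.5%
print(f"lagged fatality rate: {lagged_cfr:.1%}")  # ~9.4%
```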

Rationalist prepper thread

It has been doubling over this time frame, but that's because of its unique early circumstances. Many other illnesses have shown a similar rise when they first appeared, but epidemics tend to peak and then decline. Perhaps this one will be different, but the data does not indicate that yet.
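To illustrate (with assumed parameters, not fitted data): in the early phase, pure exponential growth and a logistic curve that eventually levels off look almost identical, so steady doubling now says little about the eventual shape of the outbreak.

```python
import math

# Assumed parameters for illustration only.
r = math.log(2) / 3   # growth rate: doubling every 3 days
n0 = 100              # initial case count
K = 1_000_000         # eventual ceiling of the logistic curve

for t in range(0, 22, 7):
    exponential = n0 * math.exp(r * t)
    logistic = K / (1 + (K / n0 - 1) * math.exp(-r * t))
    # The two curves stay within ~1% of each other through day 21.
    print(f"day {t:2d}: exponential {exponential:8.0f}, logistic {logistic:8.0f}")
```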

An Emergency Fund for Effective Altruists

The recipients of the fund's donations would have to be constrained to known-ethical effective charities. Otherwise you could "donate" through it to your personal trust and then withdraw from the emergency fund; or, less likely, the people running an effective charity could "donate" through the fund to themselves and then withdraw from the emergency fund.

Another problem is that if you care only about your chosen effective charity and yourself, you could donate $100 through the fund and then withdraw $95, effectively paying only $5 to donate $80 to your chosen charity. Someone doing this could use the emergency fund as a 16:1 match on donations, which certainly isn't its intended goal.
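A minimal sketch of that arithmetic. The 80% pass-through to the charity and the 95% withdrawal cap are back-solved from the numbers above to make the example work, not mechanics stated in the proposal:

```python
donation = 100.00
to_charity = 0.80 * donation     # assumed: fund forwards 80% to the charity
withdrawn = 0.95 * donation      # assumed: up to 95% can be withdrawn back
net_cost = donation - withdrawn  # $5 out of pocket

match_ratio = to_charity / net_cost
print(f"net cost ${net_cost:.2f}, charity receives ${to_charity:.2f}")
print(f"effective match {match_ratio:.0f}:1")  # 16:1
```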

An Emergency Fund for Effective Altruists

It would still help people (in the US) whose donations fall below the standard deduction ($12k/yr; assuming donations of roughly 10% of income, that's those earning less than $120k per year), and those are the people most likely to be in the relevant risky situation. It might still be a killer problem though.

Two-headed Go

Did you talk out loud while doing this? If so, was your family friend able to make use of what you said?

I wonder what the Elo difference would be for doing this with chess.
