William_S

PhD student at the University of Toronto, studying machine learning and working on AI safety problems.

Comments

Learning the prior and generalization

Okay, this makes more sense now. My understanding: for question X, answer Y from the ML system, and amplification system A, the verification in your quote asks A to answer "Would A(Z) output answer Y to question X?", as opposed to asking A to answer X and then checking whether the result equals Y. This can be at most as hard as running the original system, and might be much more efficient.
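For concreteness, here's a minimal sketch of the distinction as I understand it (all names are placeholders I'm introducing, not anything from the post; `amplify(Z, question)` stands in for the expensive amplification system A):

```python
# Hypothetical sketch of the two verification modes described above.
# None of these names correspond to a real API.

def verify_by_rerunning(amplify, Z, X, Y):
    """Re-answer X from scratch and compare: exactly as hard as the
    original run of the system."""
    return amplify(Z, X) == Y

def verify_directly(amplify, Z, X, Y):
    """Ask A the verification question itself: "Would A(Z) output answer
    Y to question X?". At most as hard as re-running A on X, and possibly
    much cheaper if checking an answer is easier than producing one."""
    verification_question = f"Would A(Z) output answer {Y!r} to question {X!r}?"
    return amplify(Z, verification_question) == "yes"
```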

Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19?

https://institute.global/policy/long-covid-reviewing-science-and-assessing-risk

From the COVID Symptom Study in the UK (an app-based questionnaire): "10 per cent of those taking part in the survey had symptoms of long Covid for a month, with between 1.5 and 2 per cent still experiencing them after three months", and they claim "long Covid is likely a bigger issue than excess deaths as a result of Covid, which are between 0.5 per cent and 1 per cent".

App-based survey, so not necessarily representative of the population. It's not clear how severe the 3-month cases are, though they state "The most common reported symptom has been described by doctors as “profound fatigue”". The article also summarizes other related studies.

Learning the prior and generalization

Right, but in the post the implicitly represented Z is used by an amplification or debate system because it contains more information than a human can quickly read and use. (So are you assuming it's simple to verify the results of amplification/debate systems?)

Learning the prior and generalization

for extremely large Z which are represented only implicitly as in Paul's post, we might not always check whether the model matches the ground truth by actually generating the ground truth, and instead just ask the human to verify the answer given Z

I'm not sure what "just ask the human to verify the answer given Z" looks like for an implicitly represented Z.
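To make my confusion concrete, here's a hedged sketch of the two checking procedures the quote contrasts (every name is a hypothetical placeholder; `human_verify` stands in for whatever human-feedback channel is available):

```python
# Hypothetical sketch of the two checking procedures in the quote above.
# All functions are placeholders, not proposals from the post.

def check_by_generating_ground_truth(model, ground_truth, Z, X):
    """Generate the ground-truth answer from Z and compare. For a Z that
    is only represented implicitly, generating this may be intractable."""
    return model(X) == ground_truth(Z, X)

def check_by_human_verification(model, human_verify, Z, X):
    """Skip generating the ground truth: show a human the model's answer
    and ask whether it's right "given Z". My question is what access to
    an implicitly represented Z even looks like here."""
    answer = model(X)
    return human_verify(Z, X, answer)
```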

Have the lockdowns been worth it?

I'm skeptical of this.

  • Wuhan needed two months of lockdown: https://en.wikipedia.org/wiki/COVID-19_pandemic_lockdown_in_Hubei
  • I'd expect that imposing China-style lockdowns in the West would require significant force and might end up causing a large-scale panic in and of itself.
  • I'd expect that any lockdown in the West wouldn't have been effective enough to stamp out 100% of cases, and if you don't eradicate the virus you need ongoing measures or it will just flare up again later, so one strictly enforced lockdown wouldn't cut it. (Though maybe you could do very rigorous contact tracing and lock down just the people who might have been in contact with cases, which could be less costly than a full lockdown but would probably still need significant enforcement.)
Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19?

https://www.microcovid.org/paper/2-riskiness#fn6 discusses https://covid.joinzoe.com/post/covid-long-term, an app-based survey claiming 1 in 10 people still have symptoms after 3 weeks (but since people can just sign up for the app, I'd guess it's harder to interpret than the telephone survey). Microcovid.org uses this 1-in-10 figure as the estimated chance of some ongoing health consequence, and claims the risk of ongoing health problems from a 1% chance of catching COVID is equivalent to the risk from 1 year of driving (though this comparison involves even more assumptions).
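As a back-of-the-envelope check on how those figures combine (the 1% and 1-in-10 numbers are the ones quoted above; the driving equivalence is microcovid.org's claim, not something derived here):

```python
# Back-of-the-envelope arithmetic for the microcovid.org claim above.
# The inputs are the figures quoted in the linked sources, not my estimates.

p_covid = 0.01                  # a 1% chance of catching COVID
p_long_term_given_covid = 0.10  # app survey: 1 in 10 still symptomatic at 3 weeks

p_ongoing = p_covid * p_long_term_given_covid
print(f"chance of some ongoing health consequence: {p_ongoing:.1%}")  # 0.1%
# microcovid.org compares this ~0.1% to the risk from 1 year of driving,
# a comparison that (as noted) rests on further assumptions.
```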

microCOVID.org: A tool to estimate COVID risk from common activities

https://www.cdc.gov/mmwr/volumes/69/wr/mm6930e1.htm found that ~1 in 5 of 18–34-year-olds with no underlying health conditions still had symptoms 3 weeks later (a telephone survey of people who'd been symptomatic and had a positive test).

Other discussion in comments of https://www.lesswrong.com/posts/ahYxBHLmG7TiGDqxG/do-we-have-updated-data-about-the-risk-of-permanent-chronic

Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19?

Not addressing fatigue, and just a study in progress, but this study is looking for long-term neurological problems and might be another weak bit of evidence when it releases results: https://www.cambridgebrainsciences.com/studies/covid-brain-study

Competition: Amplify Rohin’s Prediction on AGI researchers & Safety Concerns

Seems like it could be helpful if people who've thought about this would also predict what the survey value would be today (e.g., via Elicit snapshots).

OpenAI announces GPT-3

Google's Meena (2.6 billion parameters, February 2020) creates original puns in "Conversation D", and I think "Cross-turn Repetition Example 2" is absurdist comedy, though maybe more as a result of the way the model fails.

Conversation D
Human: I like how cute cows are
Meena: Ah, interesting. You like how they are cute, right?
Human: yeah, and they’re realy smart
Meena: I heard they go to college
Human: cows go to college?
Meena: I heard that a cow went to Harvard.
Human: what did the cow study?
Meena: Bovine sciences.
Human: do horses go to Harvard?
Meena: Horses go to Hayvard.
Human: that’s a pretty good joke, I feel like you led me into it
Meena: You were trying to steer it elsewhere, I can see it.