Michael Wiebe

Comments
Michael Wiebe's Shortform
Michael Wiebe · 2y

Should you "trust literatures, not papers"?
I replicated the literature on meritocratic promotion in China, and found that the evidence is not robust.

https://twitter.com/michael_wiebe/status/1750572525439062384

Michael Wiebe's Shortform
Michael Wiebe · 2y

Do vaccinated children have higher income as adults?
I replicate a paper on the 1963 measles vaccine, and find that it is unable to answer the question.

https://twitter.com/michael_wiebe/status/1750197740603367689

Michael Wiebe's Shortform
Michael Wiebe · 2y

New replication: I find that the results in Moretti (AER 2021) are caused by coding errors. The paper studies agglomeration effects for innovation (do bigger cities cause technological progress?), but the results supporting a causal interpretation don't hold up.

https://twitter.com/michael_wiebe/status/1749462957132759489

I'm a Former Israeli Officer. AMA
Michael Wiebe · 2y

What was the effect of reservists joining the protests? This says: "Some 10,000 military reservists were so upset, they pledged to stop showing up for duty." Does that mean they were actively 'on strike' from their duties? It looks like they're now doing grassroots support (distributing aid).

What social science research do you want to see reanalyzed?
Michael Wiebe · 2y

Yeah, I do reanalysis of observational studies rather than rerunning experiments.

What social science research do you want to see reanalyzed?
Michael Wiebe · 2y

Do you have any specific papers in mind?

AGI Safety FAQ / all-dumb-questions-allowed thread
Michael Wiebe · 3y

But isn't it problematic to start the analysis at "superhuman AGI exists"? Then we need to make assumptions about how that AGI came into being. What are those assumptions, and how robust are they?

AGI Safety FAQ / all-dumb-questions-allowed thread
Michael Wiebe · 3y

Why start the analysis at superhuman AGI? Why not solve the problem of aligning AI for the entire trajectory from current AI to superhuman AGI?

AGI safety from first principles: Conclusion
Michael Wiebe · 3y

Also came here to say that 'latter' and 'former' are mixed up.

AGI safety from first principles: Control
Michael Wiebe · 3y

"In particular, we should be interested in how long it will take for AGIs to proceed from human-level intelligence to superintelligence, which we'll call the takeoff period."

Why is this the right framing? Why not focus on the duration between 50% human-level and superintelligence? (Or p% human-level for general p.)

Posts

Michael Wiebe's Shortform · 2y
What social science research do you want to see reanalyzed? · 2y
How to calibrate your political beliefs · 12y