Liam Donovan

Liam Donovan's Comments

Criticism as Entertainment

I wonder if it negatively impacts the cohesiveness/teamwork ability of the resulting AI safety community by disproportionately attracting a certain type of person? It seems unlikely that everyone would enjoy this style

romeostevensit's Shortform

.

[This comment is no longer endorsed by its author]
2020's Prediction Thread

FWIW you can bet on some of these on PredictIt -- for example, PredictIt assigns only a 47% chance that Trump wins in 2020. That's not a huge difference, but it's still worth betting 5% of your bankroll (after fees) if you bet half-Kelly. (If you want to bet with me for whatever reason, I'd also be willing to bet up to $700 that Trump doesn't win, at PredictIt odds, if I don't have to tie up capital.)
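The 5% figure can be sanity-checked with the standard Kelly formula for a binary contract. A minimal sketch: the comment doesn't state the bettor's own probability, so `p_belief` below is a hypothetical assumption chosen only to illustrate the arithmetic (and it happens to land near the quoted half-Kelly stake).

```python
def kelly_fraction(p_belief: float, price: float) -> float:
    """Full-Kelly fraction of bankroll to stake on a binary contract
    trading at `price`, given subjective win probability `p_belief`.
    Uses f* = (b*p - q) / b with net odds b = (1 - price) / price."""
    b = (1 - price) / price   # net payout per dollar staked if the contract resolves YES
    q = 1 - p_belief          # subjective probability of losing
    return (b * p_belief - q) / b

price = 0.47        # PredictIt's implied probability, per the comment
p_belief = 0.52     # hypothetical subjective probability (assumption)
half_kelly = kelly_fraction(p_belief, price) / 2
print(f"half-Kelly stake: {half_kelly:.1%} of bankroll")
```

With a belief of 52% against a 47% market price, the half-Kelly stake comes out to roughly 5% of bankroll, consistent with the comment's figure (fees ignored here).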

2010s Predictions Review

We can test whether the most popular books & music of 2019 sold fewer copies than the most popular books & music of 2009 (I may or may not look into this later)

Programmers Should Plan For Lower Pay
GDP is 2x higher than in 2000

Why not use real GDP per capita (+25% since 2000)?

How’s that Epistemic Spot Check Project Coming?

I'm thinking that if there were liquid prediction markets for amplifying ESCs, people could code bots to do exactly what John suggests and potentially make money. This suggests to me that there's no principled difference between the two ideas, though I could be missing something (maybe you think the bot would be unlikely to beat the market?)
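The bot idea can be sketched abstractly. Everything below is hypothetical: there is no real market API here, and `decide_trade` just captures the core logic of trading only when an automated check's probability diverges from the market price by more than some edge threshold (to cover fees and noise).

```python
from typing import Optional

def decide_trade(p_model: float, market_price: float,
                 edge: float = 0.05) -> Optional[str]:
    """Compare an automated checker's probability that a claim holds up
    (`p_model`) against the market's implied probability (`market_price`).
    Trade only when the divergence exceeds `edge`; otherwise abstain."""
    if p_model - market_price > edge:
        return "buy"    # model thinks the claim is underpriced
    if market_price - p_model > edge:
        return "sell"   # model thinks the claim is overpriced
    return None         # no edge worth paying fees for

# Example: model says 80%, market says 60% -> buy
print(decide_trade(0.80, 0.60))
```

The point of the sketch is that whatever procedure John suggests for checking claims slots in as the source of `p_model`; if it reliably beats the market price, the bot profits, which is what makes the two ideas hard to distinguish in principle.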

Funk-tunul's Legacy; Or, The Legend of the Extortion War

Based on the quote from Jessica Taylor, it seems like the FDT agents are trying to maximize their long-term share of the population rather than their absolute payoffs in a single generation? If I understand the model correctly, that means the FDT agents should try to maximize the ratio of FDT payoff to 9-bot payoff (which maximizes the ratio of FDT to 9-bot in the next generation). The algebra then shows that they should refuse to submit to 9-bots once the population of FDT agents gets high enough (Wolfram|Alpha link), without needing to drop the random-encounters assumption.


It still seems like CDT agents would behave the same way given the same goals, though?

How’s that Epistemic Spot Check Project Coming?

What's the difference between John's suggestion and amplifying ESCs with prediction markets? (not rhetorical)

2019 AI Alignment Literature Review and Charity Comparison

I was somewhat confused by the discussion of LTFF grants being rejected by CEA; is there a public writeup of which grants were rejected?

Embedded World-Models

In order to do this, the agent needs to be able to reason approximately about the results of their own computations, which is where logical uncertainty comes in
