This mostly fits with my personal experience. The part where I differ the most is this:
> The correct play is usually to take advantage of the isolation and lack of distractions. That makes some activities actively great to do.
Because of the way my brain works, planes are a distraction-heavy environment for me:
- I can listen to music or "light" podcasts on a plane, but I have a hard time focusing on intellectually demanding podcasts, books, movies, or work: basically anything that requires sustained attention.
- For the same reason, I can't really enjoy being at the airport, because I'm anticipating that I will have to get on a plane soon. But I still tend to leave for the airport early, because if I'm at home, I'm anticipating that I will have to leave for the airport soon, which isn't any better.
This is cool. Interesting to see how some models are wrong in particular ways: Qwen 72B is mostly right, but it thinks Australia is huge; Llama 3 has a skinny South America and a bunch of patches of ocean in Asia.
It sounds like you and I are fairly politically aligned. My libertarian streak is not just about preventing authoritarianism in government, but about preventing authoritarianism anywhere. Both your and Aella's proposed policies increase government power, but (arguably) decrease authoritarianism overall by restricting the power of monopolies.
More broadly, I think restricting monopoly power is one of the most defensible uses of government authority.
(Right now I think the US government, and most developed-country governments for that matter, are simultaneously too powerful and too unwilling to take antitrust action.)
He's not depriving himself of sleep; he's just following an unusual sleep schedule, right? Which I would guess is not a big deal for longevity (AFAIK the research is agnostic on this question, but I'm not too familiar with it).
Not exercising is a big deal, though.
Also by "Trump doesn't do anything completely insane", I don't really mean "Trump behaves incompetently." I was thinking more along the lines of "Trump does something out-of-band that makes no rational sense and makes the situation much worse."
Overall I think AI 2027 is really good. It has received plenty of (mostly wrong IMO) criticism for being too pessimistic, but there are some ways that it might be too optimistic.
Even in the Bad Ending, some lucky things happen:
In the Good Ending:
To be clear, I'm not saying any of these events are particularly implausible. I'm just saying I wouldn't be surprised if real life turned out even worse than the Bad Ending, e.g. because the real-life equivalent of Agent-4 never gets caught, or OpenBrain succeeds at covering up the misalignment, or maybe the risk from ASI becomes abundantly clear but the government is still too slow-moving to do anything in time.
A quick search found this chart from a 2019 study on how couples meet. It looks like the fraction of couples who met at a bar has actually been going up in recent decades, which is not what I would have predicted. But I don't know how reliable this study is.
This was also my impression, but I didn't read much of the actual text of the plan so I figured Zvi knew better than me. But now my Aumann-updating-toward-Zvi and Aumann-updating-toward-Habryka have canceled out and I am back to my initial belief that the plan is bad.
I am also confused by the praise from Dean Ball, who apparently worked on this plan. I thought he was pretty x-risk-pilled?
- AI lab monitor
There are a few orgs doing things like this:
The first two of these are solo projects by Zach Stein-Perlman.