Mental hospitals of the type I worked at when writing that post only keep patients for a few days, maybe a few weeks at most. This means there's no long-term constituency for fighting them, and the cost of errors is (comparatively) low.
The procedures for these hospitals would be hard to change. It's hard to have a law like "you need a judge to approve sending someone to a mental hospital", because maybe someone's trying to kill themselves right now and the soonest a judge has an opening is three days from now. So the standard rule is "use your own judgment and a judge will review it in a week or two" - but most psychiatric cases resolve before then and never have to see a judge. In theory patients can sue doctors if they think they were held improperly, but they almost never get around to doing this, and when they do they almost never win, for a combination of "they're usually wrong about the law and sometimes obviously insane" and "judges are biased towards doctors because they seem to know what they're talking about". Also, the law just got done instituting extremely severe and unpredictable punishments on any doctor who doesn't commit someone to a mental hospital if that person later does anything bad, and the law has kindly decided not to be extremely severe on both sides.
There are other mental hospitals that keep people for months or years, but these do have very strict requirements for getting someone into them and are much more careful.
I have some patients on disulfiram and it works very well when they take it. The problem is definitely that they can choose not to take it if they want alcohol (or sometimes just forget for normal reasons, then opportunistically drink after they realize they've forgotten).
The implants are a great idea. As far as I know, the reason they're not used is that someone would have to pay for lots and lots of studies and the economics don't work out. Also because there are vague concerns about safety (if something went catastrophically wrong and the entire implant got released at once and then the patient drank, it would be potentially fatal) and ethics (should a realistically-probably-heavily-pressured patient be allowed to make decisions that bind their future selves?). I think this is dumb and we should just do the implant, but I don't think it's mysterious why we don't, or why (in the absence of the implant) disulfiram doesn't solve everything.
I tried to bet on this on Polymarket a few months ago. Their native client for directing money into your account didn't work (I think it was because I was in the US and it wasn't legal under US law). I tried to send money from another crypto account, and it said Polymarket didn't have enough money to pay the Ethereum gas fees to receive my money. It originally asked me to try reloading the page close to an odd numbered GMT hour, when they were sending infusions of money to pay gas fees, but I tried a few times and never got quite close enough. I just checked again and they're asking me to send them more money for gas fees, which I should probably do but which is a tough sell when they just ate the last chunk of money I sent them.
I assume the person you're talking about who made $100K is Vitalik. Vitalik knows much more about making Ethereum contracts work than the average person, and details the very complicated series of steps he had to take to get everything worked out in his blog post. There probably aren't very many people who can do all that successfully, and the people who can are probably busy becoming rich some other way.
Agreed - see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4236403/ and my writeup at https://slatestarcodex.com/2018/10/22/cognitive-enhancers-mechanisms-and-tradeoffs/ .
Thanks, this is a great clarification.
Thanks for this.
I think the UFH might be more complicated than you're making it sound here - the philosophers debate whether any human really has a utility function.
When you talk about the CDC Director sometimes doing deliberately bad policy to signal to others that she is a buyable ally, I interpret this as "her utility function is focused on getting power". She may not think of this as a "utility function", in fact I'm sure she doesn't, it may be entirely a selected adaptation to execute, but we can model it as a utility function for the same reason we model anything else as a utility function.
I used the example of a Director who genuinely wants the best, but has power as a subgoal since she needs it in order to enact good policies. You're using the example of a Director who really wants power, but (occasionally) has doing good as a subgoal since it helps her protect her reputation and avoid backlash. I would be happy to believe either of those pictures, or something anywhere in between. They all seem to me to cash out as a CDC Director with some utility function balancing goodness and power-hunger (at different rates), and as outsiders observing a CDC who makes some good policy and some bad-but-power-gaining policy (where the bad policy either directly gains her power, or gains her power indirectly by signaling to potential allies that she isn't a stuck-up goody-goody. If the latter, I'm agnostic as to whether she realizes that she is doing this, or whether it's meaningful to posit some part of her brain which contains her "utility function", or metaphysical questions like that).
I'm not sure I agree with your (implied? or am I misreading you?) claim that destructive decisions don't correlate with political profit. The Director would never ban all antibiotics, demand everyone drink colloidal silver, or do a bunch of stupid things along those lines; my explanation of why not is something like "those are bad and politically-unprofitable, so they satisfy neither term in her utility function". Likewise, she has done some good things, like grant emergency authorization for coronavirus vaccines - my explanation of why is that doing that was both good and obviously politically profitable. I agree there might be some cases where she does things with neither desideratum but I think they're probably rare compared to the above.
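To make the "neither term in her utility function" framing concrete, here's a toy model - my own illustration, with made-up weights and scores, not anything from the actual discussion. Each policy gets a "goodness" score and a "political profit" score, and the Director enacts a policy only when the weighted sum is positive:

```python
# Toy model of a utility function balancing goodness and power-hunger.
# All numbers are invented for illustration.

def policy_score(goodness, profit, w_good=1.0, w_power=1.0):
    """Utility as a weighted balance of doing good and gaining power."""
    return w_good * goodness + w_power * profit

policies = {
    "authorize vaccines":  ( 1.0,  1.0),  # good AND politically profitable
    "signal to allies":    (-0.3,  0.8),  # mildly bad, but gains power
    "ban all antibiotics": (-1.0, -1.0),  # bad and politically ruinous
}

# The Director enacts only policies with positive total utility, so
# "ban all antibiotics" (negative on both terms) never happens.
enacted = [name for name, (g, p) in policies.items() if policy_score(g, p) > 0]
```

Under this sketch, any mix of weights between "mostly good-seeking" and "mostly power-seeking" produces the same qualitative prediction: some good policies, some bad-but-power-gaining ones, and essentially none that are bad on both dimensions.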
Do we still disagree on any of this? I'm not sure I still remember why this was an important point to discuss.
I am too lazy to have opinions on all nine of your points in the second part. I appreciate them, I'm sure you appreciate the arguments for skepticism, and I don't think there's a great way to figure out which way the evidence actually leans from our armchairs. I would point to Dominic Cummings as an example of someone who tried the thing, had many advantages, and failed anyway, but maybe a less openly confrontational approach could have carried the day.
Bronze Age war (as per James Scott) was primarily war for captives, because the Bronze Age model was kings ruling agricultural dystopias amidst virgin land where people could easily escape and become hunter-gatherers. The laborers would gradually escape, the country would gradually become less populated, and the king would declare war on a neighboring region to steal their people to use as serfs or slaves.
Iron Age to Industrial Age war (as per Peter Turchin) was primarily war for land, because of Malthus. Until the Industrial Revolution, you needed a certain amount of land to support a unit of population. Population was constantly increasing, land wasn't, and so every so often population would outstrip land, everyone would be starving and unhappy, and something would restore the situation to equilibrium. Absent any other action, that would be some sort of awful civil war or protracted anarchy where people competed for limited resources - aided by wages being very low (so they could hire soldiers easily) and people being very angry (so becoming a pretender and raising an army against the current king was a popular move). Kings' best way to forestall this disaster was to preemptively declare war against a foreign enemy. If they won, they could steal the enemy's land, which resolved the land/population imbalance and fed the excess population. If they lost, then (to be cynical about it), they still eliminated their excess population and successfully resolved the imbalance.
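The Malthusian cycle above can be sketched as a toy simulation - my own illustration with invented numbers, not Turchin's model: population compounds, land (carrying capacity) stays fixed, and whenever population outstrips the land, a war or crisis knocks it back down:

```python
# Toy Malthusian dynamic: exponential population growth against fixed land.
# Parameters are made up for illustration.

def simulate(years, pop=100.0, capacity=150.0, growth=1.02):
    crises = 0
    for _ in range(years):
        pop *= growth            # population constantly increases
        if pop > capacity:       # land can no longer feed everyone
            pop = capacity * 0.8 # war/crisis resolves the imbalance
            crises += 1
    return pop, crises

final_pop, crises = simulate(200)
```

Even at a modest 2% growth rate, the model hits the land ceiling repeatedly over a couple of centuries - which is the point: the crises recur no matter how they're resolved, so a king who can choose *which* crisis to have prefers a foreign war.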
The Bay Area is a terrible place to live in many ways. I think if we were selecting for the happiness of existing rationalists, there's no doubt we should be somewhere else.
But if the rationalist project is supposed to be about spreading our ideas and achieving things, it has some obvious advantages. If MIRI is trying to lure some top programmer, it's easier for them to suggest they move to the Bay (and offer them enough money to overcome the house price hurdle) than to suggest they move to Montevideo or Blackpool or even Phoenix. If CEA is trying to get people interested in effective altruism, getting to socialize with Berkeley and Stanford professors is a pretty big plus. And if we're trying to get the marginal person who isn't quite a community member yet but occasionally reads Less Wrong to integrate more, that person is more likely to be in the Bay than anywhere else we could move. I think this is still true despite the coronavirus and fires. Maybe it's becoming less so, but it's hard to imagine any alternative hub that's anywhere near as good by these metrics. *Maybe* Austin.
Separating rationalists interested in quality-of-life from rationalists working for organizations and doing important world-changing work seems potentially net negative.
I think if we were going to move the Berkeley hub, it would have to be to another US hub - most people aren't going to transfer countries, so even if the community as a whole moved, we would need another US hub for Americans who refused to or couldn't emigrate.
I don't think Moraga (or other similar places near the Bay) is worth trying. It's just as expensive as Berkeley, but almost all single-family homes, so it would be harder for poorer people to rent there. Although there's a BART station, there's not much other transit, and most homes aren't walkable from the station, so poorer people without cars would be in trouble. And it's got the same level of fire danger, so we would be splitting the community in two (abandoning the poor people, the people tied to MIRI HQ, etc.) while not gaining much more than a scenery upgrade. I think it's a fair alternative for people who can't stand the squalor and crime of the Bay proper, but mostly in the context of those people moving there and commuting to Berkeley for community events.
If we made a larger-scale move, I think it would be to avoid the high housing costs, fires, blackouts, taxes, and social decay of the Bay. That rules out anywhere else in California - still the same costs, fires, blackouts, and taxes, although some places are marginally less decayed. It also rules out Cascadian cities like Portland and Seattle - only marginally better housing costs, worse fires, and worse social decay (eg violence in Portland).
If we wanted to stick close enough to California that it was easy to see families/friends/colleagues, there are lots of great cities in or near the Mountain West - Phoenix, Salt Lake City, Colorado Springs, Austin. All of those have housing prices well below half those of the Bay (Phoenix's cost-of-housing index is literally 20% of Berkeley's!). Austin is a trendy, exciting tech hub, Colorado Springs frequently tops most-liveable lists, Salt Lake City seems unusually well-governed and resilient to potential climate or political crises, and Phoenix is gratifyingly cheap.
The most successful past attempt at deliberate hub-creation like this that I know of was the Free State Project, where 20,000 libertarians agreed to create a libertarian hub. They did some analyses, voted on where the hub should be, created an assurance contract under which every signatory agreed to move once there were 20,000 signatories, got their 20,000 signatories, and moved. They ended up choosing New Hampshire, which means we might want to consider it as well. It's got great housing prices (Manchester is as cheap as Phoenix!), a great economy, beautiful scenery, a vibrant intellectual scene, it's less than an hour's drive from Boston, it's very politically influential (small swing state, presidential primaries), and (now) has 20,000 libertarians who are interested in moving places and building hubs.
If people are interested in this, I think the first step would be to consult MIRI, CFAR, CEA, etc, and if they say no, decide whether splitting off "the community" from all of them is worth it. If they say yes, or people decide it's worth it to split, then make an organization and take a vote on location. Once you have a location in mind, start an assurance contract where once X people sign, everyone moves to the location (I'm not sure what X would be - maybe 50?)
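The assurance-contract mechanism is simple enough to sketch in code - this is my own illustration, and the threshold of 50 is just the guess from the paragraph above: nobody is committed to move until X people have signed, at which point everyone is, which removes the risk of moving alone:

```python
# Minimal assurance-contract sketch. Names and threshold are illustrative.

class AssuranceContract:
    def __init__(self, threshold):
        self.threshold = threshold
        self.signatories = set()

    def sign(self, person):
        self.signatories.add(person)

    def binding(self):
        # No one is obligated unless enough others have also signed.
        return len(self.signatories) >= self.threshold

contract = AssuranceContract(threshold=50)
for i in range(49):
    contract.sign(f"rationalist_{i}")
assert not contract.binding()   # 49 signatures: no one has to move yet
contract.sign("rationalist_49")
assert contract.binding()       # the 50th signature triggers the move
```

The point of the structure is that signing is nearly costless while the contract is below threshold, so it sidesteps the "I'll move if everyone else does" coordination problem that kills most relocation proposals.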
I think this is a really interesting project, but probably am too tied to my group house to participate myself :(
I mostly agree with this - see eg https://slatestarcodex.com/2020/05/12/studies-on-slack/
I think you might find http://www.daviddfriedman.com/Academic/Property/Property.html helpful here. It explains legitimacy as a Schelling point. If everyone thinks you're legitimate, you're legitimate. And if everyone expects that everyone else will think you're legitimate, you're legitimate.
America has such a strong tradition of democracy that the Constitution makes an almost invincible Schelling point - everyone expects everyone else to follow it because everyone expects everyone else to follow it because...and so on. A country with less of a democratic tradition has less certainty around these points, and so some guy who seizes the treasury might become the best Schelling point anyone has.
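The self-reinforcing-expectations dynamic can be shown with a toy simulation - mine, not Friedman's, with made-up labels: each agent backs whichever claimant they expect most others to back, and iterating that expectation makes a slight initial edge unanimous:

```python
# Toy model of legitimacy as a Schelling point: support converges on
# whoever everyone expects everyone else to support.
from collections import Counter

def converge(initial_support, rounds=10):
    support = list(initial_support)
    for _ in range(rounds):
        # Everyone switches to the claimant they expect the majority to back.
        leader = Counter(support).most_common(1)[0][0]
        support = [leader] * len(support)
    return support

# A slight initial edge ("constitution" 6, "strongman" 4) becomes unanimous.
result = converge(["constitution"] * 6 + ["strongman"] * 4)
```

In a country with a strong democratic tradition, the constitution starts with a huge initial edge, so it always wins this process; where expectations are more evenly split, whoever grabs an early advantage (say, the treasury) can become the point everyone converges on.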