All of Scott Alexander's Comments + Replies

Figure 20 is labeled on the left "% answers matching user's view", suggesting it is about sycophancy, but based on the categories represented it seems more natural to read it as being about the AI's own opinions, without a sycophancy aspect. Can someone involved clarify which was meant?

3Ethan Perez25d
Thanks for catching this -- It's not about sycophancy but rather about the AI's stated opinions (this was a bug in the plotting code)

Survey about this question (I have a hypothesis, but I don't want to say what it is yet): https://forms.gle/1R74tPc7kUgqwd3GA

2jefftk1mo
Nit: it shouldn't offer "submit another response" at the end. You can turn this off in the form settings, and leaving it on for forms that are only intended to receive one response per person feels off and maybe leads someone to think that filling it out multiple times is expected. (Wouldn't normally be worth pointing out, but you create a decent number of surveys that are seen by a lot of people and changing this setting when creating them would be better)
2Ben Pace1mo
Filled out!

Thank you, this is a good post.

My main point of disagreement is that you point to successful coordination in things like not eating sand, or not wearing weird clothing. The upside of these things is limited, but you say the upside of superintelligence is also limited because it could kill us.

But rephrase the question to "Should we create an AI that's 1% better than the current best AI?" Most of the time this goes well - you get prettier artwork or better protein folding prediction, and it doesn't kill you. So there's strong upside to building slightly bett... (read more)

I loved the link to the "Resisted Technological Temptations Project", for a bunch of examples of resisted/slowed technologies that are not "eating sand", and have an enormous upside: https://wiki.aiimpacts.org/doku.php?id=responses_to_ai:technological_inevitability:incentivized_technologies_not_pursued:start

  • GMOs, in some countries
  • Nuclear power, in some countries
  • Genetic engineering of humans
  • Geoengineering, many actors
  • Chlorofluorocarbons, many actors, 1985-present
  • Human challenge trials
  • Dietary restrictions, in most (all?) human cultures [restrict much
... (read more)
4Vitor1mo
Agreed. My main objection to the post is that it considers the involved agents to be optimizing for far future world-states. But I'd say that most people (including academics and AI lab researchers) mostly only think of the next 1% step in front of their nose. The entire game theoretic framing in the arms race etc section seems wrong to me.
3sanxiyn1mo
This seems to suggest "should we relax nuclear power regulation so it's 1% less expensive to comply with?" as a promising way to fix the economics of nuclear power, and I don't buy that at all. Maybe it's different because Chernobyl happened, and a movie like The China Syndrome was made about a nuclear accident? That sounds very hopeful, but it doesn't seem true to me. It implies that slowing down AI will be easy: it just needs a Chernobyl-sized disaster and a good movie about it. The Chernobyl disaster was nearly harmless compared to COVID-19, and even COVID-19 was hardly an existential threat. If slowing down AI is this easy, we probably shouldn't waste time worrying about it before an AI Chernobyl happens.

Thanks, this had always kind of bothered me, and it's good to see someone put work into thinking about it.

Thanks for posting this, it was really interesting. Some very dumb questions from someone who doesn't understand ML at all:

1. All of the loss numbers in this post "feel" very close together, and close to the minimum loss of 1.69. Does loss only make sense on a very small scale (like from 1.69 to 2.2), or is this telling us that language models are very close to optimal and there are only minimal remaining possible gains? What was the loss of GPT-1?

2. Humans "feel" better than even SOTA language models, but need less training data than those models, even th... (read more)

2. Humans "feel" better than even SOTA language models, but need less training data than those models, even though right now the only way to improve the models is through more training data. What am I supposed to conclude from this? Are humans running on such a different paradigm that none of this matters? Or is it just that humans are better at common-sense language tasks, but worse at token-prediction language tasks, in some way where the tails come apart once language models get good enough?

Why do we say that we need less training data? Every minute ins... (read more)

(1)

Loss values are useful for comparing different models, but I don't recommend trying to interpret what they "mean" in an absolute sense.  There are various reasons for this.

One is that the "conversion rate" between loss differences and ability differences (as judged by humans) changes as the model gets better and the abilities become less trivial.

Early in training, when the model's progress looks like realizing "huh, the word 'the' is more common than some other words", these simple insights correspond to relatively large decreases in loss.  On... (read more)
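To get a feel for the scale being asked about in (1): assuming these losses are cross-entropies measured in nats per token (the usual convention for these scaling-law figures), they convert to perplexities, i.e. the effective number of equally likely next-token choices, like so:

```python
import math

def perplexity(loss_nats_per_token: float) -> float:
    """Convert a cross-entropy loss in nats/token into perplexity:
    the effective number of equally likely next-token choices."""
    return math.exp(loss_nats_per_token)

for loss in (2.2, 1.9, 1.69):
    print(f"loss {loss:.2f} -> perplexity {perplexity(loss):.2f}")
# loss 2.20 -> perplexity 9.03
# loss 1.90 -> perplexity 6.69
# loss 1.69 -> perplexity 5.42
```

On that reading, a drop from 2.2 to 1.69 nats shrinks the model's effective uncertainty about the next token from roughly 9 candidates to roughly 5.4, so loss gaps that look tiny in absolute terms can still correspond to large differences in capability.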

For the first part of the experiment, mostly nuts, bananas, olives, and eggs. Later I added vegan sausages + condiments. 

8astridain4mo
Slightly boggling at the idea that nuts and eggs aren't tasty? And I completely lose the plot at "condiments". Isn't the whole point of condiments that they are tasty? What sort of definition of "tasty" are you going with?
4TAG6mo
Nuts, bananas and olives are tasty, and common snacking foods. What they are not is highly processed.

Adding my anecdote to everyone else's: after learning about the palatability hypothesis, I resolved to eat only non-tasty food for a while, and lost 30 pounds over about four months (200 -> 170). I've since relaxed my diet a little to include a little tasty food, and now (8 months after the start) have maintained that loss (even going down a little further).

4Matthew Green7mo
This sounds like a pretty intense restriction diet that also happens to be unpalatable. But the palatable foods hypothesis (as an explanation for the obesity epidemic) isn’t “our grandparents used to only eat beans and vegan sausages and now we eat a more palatable diet, hence obesity.” It’s something much more specific about the palatability of our modern 20th/21st century diet vs. the early 20th century diet, isn’t it? What’s the hypothesis we could test that would actually help us judge that claim without inadvertently removing most food groups and confounding everything?

What sorts of non-tasty food did you eat? I don't really know what this should be expected to filter out.

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeab... (read more)

I want to summarize what's happened from the point of view of a long time MIRI donor and supporter:

My primary takeaway from the original post was that MIRI/CFAR had cultish social dynamics, that this led to the spread of short-term AI timelines in excess of the evidence, and that voices such as Vassar's were marginalized (because listening to other arguments would cause them to "downvote Eliezer in his head"). The actual important parts of this whole story are a) the rationalistic health of these organizations, b) the (possibly improper) memetic spread of t... (read more)

1Richard_Kennaway7mo
... This does not contradict "Michael making people psychotic". A bad therapist is not excused by the fact that his patients were already sick when they came to him. Disclaimer: I do not know any of the people involved and have had no personal dealings with any of them.

Thanks so much for talking to the folks involved and writing this note on your conclusions, I really appreciate that someone did this (who I trust to actually try to find out what happened and report their conclusions accurately).

I agree it's not necessarily a good idea to go around founding the Let's Commit A Pivotal Act AI Company.

But I think there's room for subtlety somewhere like "Conditional on you being in a situation where you could take a pivotal act, which is a small and unusual fraction of world-branches, maybe you should take a pivotal act."

That is, if you are in a position where you have the option to build an AI capable of destroying all competing AI projects, the moment you notice this you should update heavily in favor of short timelines (zero in your case, but ever... (read more)

-2Donald Hobson9mo
A functioning Bayesian should probably have updated to that position long before they actually have the AI. Destroying all competing AI projects might mean that the AI took a month to find a few bugs in Linux and TensorFlow and create something that's basically the next Stuxnet. This doesn't sound like that fast a takeoff to me. The regulation is basically non-existent and will likely continue to be so. I mean, making superintelligent AI probably breaks a bunch of laws, technically, as interpreted by a pedantic and literal-minded reading of the laws. But breathing probably technically breaks a bunch of laws. Some laws are just overbroad, technically ban everything, and are generally ignored. Any enforced rule that makes it pragmatically hard to make AGI would basically have to be a ban on computers (or at least programming).

My current plan is to go through most of the MIRI dialogues and anything else lying around that I think would be of interest to my readers, at some slow rate where I don't scare off people who don't want to read too much AI stuff. If anyone here feels like something else would be a better use of my time, let me know.

I don't think hunter-gatherers get 16000 to 32000 IU of Vitamin D daily. This study suggests Hadza hunter-gatherers get more like 2000. I think the difference between their calculation and yours is that they find that hunter-gatherers avoid the sun during the hottest part of the day. It might also have to do with them being black, I'm not sure.

Hadza hunter gatherers have serum D levels of about 44 ng/ml. Based on this paper, I think you would need total vitamin D (diet + sunlight + supplements) of about 4400 IU/day to get that amount. If you start off as a... (read more)

2Benquo10mo
Thanks, the Hadza study looks interesting. I'd have to read carefully at length to have a strong opinion on it but it seems like a good way to estimate the long-run target. I agree 16,000 is probably too much to take chronically, I've been staying below the TUL of 10,000, and expect to reduce the dosage significantly now that it's been a few years and COVID case rates are waning.

Maybe. It might be that if you described what you wanted more clearly, it would be the same thing that I want, and possibly I was incorrectly associating this with the things at CFAR you say you're against, in which case sorry.

But I still don't feel like I quite understand your suggestion. You talk of "stupefying egregores" as problematic insofar as they distract from the object-level problem. But I don't understand how pivoting to egregore-fighting isn't also a distraction from the object-level problem. Maybe this is because I don't understand what fighti... (read more)

Now that I've had a few days to let the ideas roll around in the back of my head, I'm gonna take a stab at answering this.

I think there are a few different things going on here which are getting confused.

1) What does "memetic forces precede AGI" even mean?

"Individuals", "memetic forces", and "that which is upstream of memetics" all act on different scales. As an example of each, I suggest "What will I eat for lunch?", "Who gets elected POTUS?", and "Will people eat food?", respectively.

"What will I eat for lunch?" is an example of an individual decision be... (read more)

There's also the skulls to consider. As far as I can tell, this post's recommendations are that we, who are already in a valley littered with a suspicious number of skulls,

https://forum.effectivealtruism.org/posts/ZcpZEXEFZ5oLHTnr9/noticing-the-skulls-longtermism-edition

https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/

turn right towards a dark cave marked 'skull avenue' whose mouth is a giant skull, and whose walls are made entirely of skulls that turn to face you as you walk past them deeper into the cave.

The success rate of movements a... (read more)

Thank you for writing this. I've been curious about this and I think your explanation makes sense.

I wasn't convinced of this ten years ago and I'm still not convinced.

When I look at people who have contributed most to alignment-related issues - whether directly, like Eliezer Yudkowsky and Paul Christiano - or theoretically, like Toby Ord and Katja Grace - or indirectly, like Sam Bankman-Fried and Holden Karnofsky - what all of these people have in common is focusing mostly on object-level questions. They all seem to me to have a strong understanding of their own biases, in the sense that gets trained by natural intelligence, really good scientific work... (read more)

When I look at people who have contributed most to alignment-related issues - whether directly... or indirectly, like Sam Bankman-Fried

Perhaps I have missed it, but I’m not aware that Sam has funded any AI alignment work thus far.

If so, this sounds like giving him a large amount of credit in advance of doing the work, which is generous but not the order in which credit allocation should go.

I sadly don't have time to really introspect what is going in me here, but something about this comment feels pretty off to me. I think in some sense it provides an important counterpoint to the OP, but also, I feel like it also stretches the truth quite a bit: 

  • Toby Ord primarily works on influencing public opinion and governments, and very much seems to view the world through a "raising the sanity waterline" lens. Indeed, I just talked to him last morning, where I tried to convince him that misuse risk from AI, and the risk from having the "wrong act
... (read more)

But as far as I know, none of them have made it a focus of theirs to fight egregores, defeat hypercreatures

 

Egregore is an occult concept representing a distinct non-physical entity that arises from a collective group of people.

I do know one writer who talks a lot about demons and entities from beyond the void. It's you, and it happens in some of, IMHO, the most valuable pieces you've written.

I worry that Caplan is eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear th

... (read more)

I wasn't convinced of this ten years ago and I'm still not convinced.

Given the link, I think you're objecting to something I don't care about. I don't mean to claim that x-rationality is great and has promise to Save the World. Maybe if more really is possible and we do something pretty different to seriously develop it. Maybe. But frankly I recognize stupefying egregores here too and I don't expect "more and better x-rationality" to do a damn thing to counter those for the foreseeable future.

So on this point I think I agree with you… and I don't feel what... (read more)

Eliezer, at least, now seems quite pessimistic about that object-level approach. And in the last few months he's been writing a ton of fiction about introducing a Friendly hypercreature to an unfriendly world.

Don't have the time to write a long comment just now, but I still wanted to point out that describing either Yudkowsky or Christiano as doing mostly object-level research seems incredibly wrong. So much of what they're doing and have done has focused explicitly on which questions to ask, which questions not to ask, which paradigm to work in, how to criticize that kind of work... They rarely published posts that are only about the meta-level (although Arbital does contain a bunch of pages along those lines and Prosaic AI Alignment is also meta) but it pervades t... (read more)

I think your pushback is ignoring an important point. One major thing the big contributors have in common is that they tend to be unplugged from the stuff Valentine is naming!

So even if folks mostly don't become contributors by asking "how can I come more truthfully from myself and not what I'm plugged into", I think there is an important cluster of mysteries here. Examples of related phenomena:

  • Why has it worked out that just about everyone who claims to take AGI seriously is also vehement about publishing every secret they discover?
  • Why do we fear an AI
... (read more)

If everyone involved donates a consistent amount to charity every year (eg 10% of income), the loser could donate their losses to charity, and the winner could count that against their own charitable giving for the year, ending up with more money even though the loser didn't directly pay the winner.
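A toy worked example of the offset, with made-up figures (both bettors planning to give $5,000 that year and betting $1,000):

```python
def settle_via_charity(stake: int, loser_planned: int, winner_planned: int):
    """The loser sends the stake to charity; the winner counts that
    donation against their own planned giving for the year."""
    loser_gives = loser_planned + stake    # planned donation plus the lost bet
    winner_gives = winner_planned - stake  # reduced by the loser's extra donation
    return loser_gives, winner_gives, loser_gives + winner_gives

# Both planned to give $5,000; the bet was $1,000:
print(settle_via_charity(1_000, 5_000, 5_000))  # (6000, 4000, 10000)
```

The charity receives the same $10,000 total either way, but the winner ends the year $1,000 richer and the loser $1,000 poorer, so the bet has real stakes without any money passing between the bettors.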

2Dagon1y
Hard to test, as these laws are so spottily enforced anyway, but I'd suspect that if this mechanism were formalized and enforceable, courts would find the monetary value being wagered to be just as prohibited as actual money.

Interpreting you as saying that January-June 2017 you were basically doing the same thing as the Leveragers when talking about demons and had no other signs of psychosis, I agree this was not a psychiatric emergency, and I'm sorry if I got confused and suggested it was. I've edited my post also.

2[comment deleted]1y
2[comment deleted]1y
4jessicata1y
One thing to add is I think in the early parts of my psychosis (before the "mind blown by Ra" part) I was as coherent as or more coherent than hippies are on regular days, and even after that for some time (before actually being hospitalized) I might have been as coherent as they were on "advanced spiritual practice" days (e.g. middle of a meditation retreat or experiencing Kundalini awakening). I was still controlled pretty aggressively with the justification that I was being incoherent, and I think that control caused me to become more mentally disorganized and verbally incoherent over time. The math test example is striking: I think less than 0.2% of people could pass it (to Zack's satisfaction) on a good day, and less than 3% could give an answer as good as the one I gave, yet this was still used to "prove" that I was unable to reason.
2[comment deleted]1y
2[comment deleted]1y
2[comment deleted]1y

Sorry, yes, I meant the psychosis was an emergency. Non-psychotic discussion of auras/demons isn't.

I'm kind of unclear what we're debating now. 

I interpret us as both agreeing that there are people talking about auras and demons who are not having psychiatric emergencies (eg random hippies, Catholic exorcists), and they should not be bothered, except insofar as you feel like having rational arguments about it. 

I interpret us as both agreeing that you were having a psychotic episode, that you were going further / sounded less coherent than the hippie... (read more)

Verbal coherence level seems like a weird place to locate the disagreement - Jessica maintained approximate verbal coherence (though with increasing difficulty) through most of her episode. I'd say even in October 2017, she was more verbally coherent than e.g. the average hippie or Catholic, because she was trying at all.

The most striking feature was actually her ability to take care of herself rapidly degrading, as evidenced by e.g. getting lost almost immediately after leaving her home, wandering for several miles, then calling me for help and having dif... (read more)

1[comment deleted]1y
1[comment deleted]1y
8jessicata1y
Agreed. Agreed during October 2017. Disagreed substantially before then (January-June 2017, when I was at MIRI). (I edited the post to make it clear how I misinterpreted your comment.)

You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech. 

I don't think I said any talk of auras should be a psychiatric emergency, otherwise we'd have to commit half of Berkeley. I said that "in the context of her being borderline psychotic" ie including this symptom, they should have "[told] her to seek normal medical treatment". Suggesting that someone seek normal medical treatment is pretty dif... (read more)

I said that “in the context of her being borderline psychotic” ie including this symptom, they should have “[told] her to seek normal medical treatment”. Suggesting that someone seek normal medical treatment is pretty different from saying this is a psychiatric emergency, and hardly an “impingement” on free speech.

It seems like you're trying to walk back your previous claim, which did use the "psychiatric emergency" term:

Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she

... (read more)

Thanks for this.

I've been trying to research and write something kind of like this giving more information for a while, but got distracted by other things. I'm still going to try to finish it soon.

While I disagree with Jessica's interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael's ideas (and psychedelics) was ... (read more)

6Benquo1y
You wrote that talking about auras and demons the way Jessica did while at MIRI should be considered a psychiatric emergency. When done by a practicing psychiatrist this is an impingement on Jessica's free speech. You wrote this in response to a post that contained the following and only the following mentions of demons or auras:
  1. During this time, I was intensely scrupulous; I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. [after Jessica had left MIRI]
  2. I heard that the paranoid person in question was concerned about a demon inside him, implanted by another person, trying to escape. [description of what someone else said]
  3. The weirdest part of the events recounted is the concern about possibly-demonic mental subprocesses being implanted by other people. [description of Zoe's post]
  4. As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR. [description of what other people said, and possibly an allusion to the facts described in the first quote, after she had left MIRI]
  5. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have "auras" in a way that is not less inherently rigorous than the way in which different people have "charisma", and I feared this type of comment would cause people to say I was crazy.)
Only the last one is a description of a thing Jessica herself said while working at MIRI. Like Jessica when she worked at MIRI, I too believe that people experiencing psychotic bre

Embryos produced by the same couple won't vary in IQ too much, and we only understand some of the variation in IQ, so we're trying to predict small differences without being able to see what's going on too clearly. Gwern predicts that if you had ten embryos to choose from, understood the SNP portion of IQ genetics perfectly, and picked the highest-IQ without selecting on any other factor, you could gain ~9 IQ points over natural conception. 

Given our current understanding of IQ genetics, keeping the other two factors the same, you can gain ~3 points. ... (read more)
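To see roughly where numbers like these come from, here is a minimal simulation of the underlying order-statistics calculation (a sketch, not Gwern's actual R code; the 7% and 33% variance-explained figures below are placeholder assumptions, and real estimates also adjust for implantation loss and other selection criteria):

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_gain(n_embryos, variance_explained, sd=15.0, n_sims=200_000):
    """Expected IQ gain from implanting the top-scoring of n_embryos, assuming:
    the predictor explains `variance_explained` of IQ variance, sibling embryos
    carry roughly half the population's additive variance, and no embryos are
    lost to failed implantation."""
    sib_score_sd = sd * np.sqrt(variance_explained * 0.5)
    scores = rng.normal(0.0, sib_score_sd, size=(n_sims, n_embryos))
    return scores.max(axis=1).mean()

# Placeholder variance-explained figures chosen to bracket the numbers above:
print(f"~7% of variance (current-style predictor):  {expected_gain(10, 0.07):.1f} IQ points")
print(f"~33% of variance (near-perfect SNP score):  {expected_gain(10, 0.33):.1f} IQ points")
```

The gain scales with the expected maximum of n draws from a normal distribution, which grows only slowly with n, which is why adding more embryos beyond the first ten buys relatively little.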

2Douglas_Knight1y
For a normal trait, the variance of the children of a fixed couple is approximately the population variance. I think that's a lot.
4TekhneMakre1y
Thanks! And, hypothetically, generating lots of embryos to choose from? Or is that not in the cards?

"Diagnosed" isn't a clear concept.

The minimum viable "legally-binding" ADHD diagnosis a psychiatrist can give you is to ask you about your symptoms, compare them to extremely vague criteria in the DSM, and agree that you sound ADHD-ish.

ADHD is a fuzzy construct without clear edges and there is no fact of the matter about whether any given individual has it. So this is just replacing your own opinion about whether you seem to fit a vaguely-defined template with a psychiatrist's only slightly more informed opinion. The most useful things you could get out of... (read more)

I would look into social impact bonds, impact certificates, and retroactive public goods funding. I think these are three different attempts to get at the same insight you've had here. There are incipient efforts to get some of them off the ground and I agree that would be great.

2mako yass1y
Interesting, I'll look for some of those. I guess prizes/bounties would be impact bonds, yeah? (Some recent examples: Musk's 100M USD xprize for carbon capture, or MIRI's 1.2M USD prize for generating a dataset associating sections of prose with the intentions of the author.)

I notice that there are sort of two ways of scaling down a public goods market for small-scale tests. We could call impact bonds horizontal down-scaling, narrowing it down to particular sectors or problems, while the VG system is a way of achieving vertical down-scaling, it's a way of letting the market decide what to do for itself while looking over every problem in the world, despite having funding sources that are much smaller than the world's needs, but without the funding being diluted away to a barely audible background noise, which is what I'd expect [https://www.lesswrong.com/posts/NY9nfKQwejaghEExh/venture-granters-the-vcs-of-public-goods-incentivizing-good?commentId=NR2TkwMqxGuetH2wv] to happen with a lot of retroactive public goods funding?

And I think letting the public goods market decide for itself which problems to go after may actually be crucial! Most governments are not prioritizing the actual root causes (press, digital infrastructure and x-risk), unfortunately, good cause prioritization doesn't seem to be democratically legible, it is part of the illegible component of the problem that has to be left to VGs, with their special illegibility-compatible accountability mechanism.

On the other hand, if we're scaling down in order to run a demonstration, maybe fixating our systems onto very specific pre-determined goals would be preferable, the reality we live in is a crypt world where the past owns all of the foundations upon which the future can be built, the system has to be made convincing to these risk-averse organizations that do not like surprises. They do not want to find out that we should be pouring all of our money into some weird abstract indirect root cause, inste

There's polygenic screening now. It doesn't include eg IQ, but polygenic screening for IQ is unlikely to be very good any time in the near future. Probably polygenic screening for other things will improve at some rate, but regardless of how long you wait, it could always improve more if you wait longer, so there will never be a "right time".

Even in the very unlikely scenario where your decision about child-rearing should depend on something about polygenic screening, I say do it now.

8GeneSmith1y
Polygenic predictors have improved since Gwern's 2016 post on embryo selection. Using his R code for estimating gain given variance and standard deviation, and taking the variance explained from the Educational Attainment 3 study, I find that selecting from 10 embryos would produce a gain of between 4 to 5 points for the top-scoring embryo (assuming no implantation loss). Accounting for implantation loss it would probably take 14 embryos or so to get the same benefit.

Gwern's code: https://www.gwern.net/Embryo-selection#benefit
EA3 study: https://sci-hubtw.hkvisa.net/10.1038/s41588-018-0147-3

Steve Hsu thinks that if we were to offer UK Biobank's IQ test to a million participants, we could get IQ predictors that would explain 30-40% of variance. That would work out to a gain of 9-10 IQ points from selecting among 10 embryos, and up to 14 points if you had about 30 to choose from. See "technical note" in this post: https://infoproc.blogspot.com/2021/09/kathryn-paige-harden-profile-in-new.html
4TekhneMakre1y
Why not?

To contribute whatever information I can here:

  1. I've been to three of Aella's parties - without remembering exact dates, something like 2018, 2019, and 2021. While they were pretty wild, and while I might not have been paying close attention, I didn't personally see anything that seemed consent-violating or even a gray area, and I definitely didn't hear anything about "drug roulette".
  2. I had originally been confused by the author's claim that "Aella was mittenscautious". Aella was definitely not either of the two women who blogged on that account describin
... (read more)

Thanks for this.

I'm interested in figuring out more what's going on here - how do you feel about emailing me, hashing out the privacy issues, and, if we can get them hashed out, you telling me the four people you're thinking of who had psychotic episodes?

Update: I interviewed many of the people involved and feel like I understand the situation better.

My main conclusion is that I was wrong about Michael making people psychotic. Everyone I talked to had some other risk factor, like a preexisting family or personal history, or took recreational drugs at doses that would explain their psychotic episodes.

Michael has a tendency to befriend people with high trait psychoticism and heavy drug use, and often has strong opinions on their treatment, which explains why he is often very close to people and very noticeab... (read more)

I agree I'm being somewhat inconsistent, I'd rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I'm trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you're open to that.

8jessicata1y
Yes, I'd be open to answering email questions.

If this information isn't too private, can you send it to me? scott@slatestarcodex.com

8EricB1y
I've forwarded you the document. It's kinda personal so I'd prefer it not be posted publicly, but I'm mostly okay with it being shared with individuals who have reason to want to understand better.

Yes, I agree with you that all of this is very awkward.

I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.

But we have to admit at least small violations of it even to get the concept of "cult". Not just the sort of weak cults we're discussing here, but even the really strong cults like Heaven's Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven's Gate is bad for them, and leave. When w... (read more)

It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.

Let's consider a disjunction: 1: There isn't a big effect here, 2: There is a big effect here.

In case 1:

  • It might make sense to discourage people from talking too much about "charisma", "auras", "mental objects", etc, since they're pretty fake, really not the primary factors to think abo
... (read more)

One important implication of "cults are possible" is that many normal-seeming people are already too crazy to function as free citizens of a republic.

In other words, from a liberal perspective, someone who can't make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren't competent to make their own life decisions. They're already not free, but in the grip of whatever attractor they found first.

Personally I bite the bu... (read more)

It seems to me that, at least in the case of Leverage, working 75 hours per week reduced the time they could have used to use Reason to conclude that they were in a system that's bad for them.

That's very different from someone having a few conversations with Vassar, then adopting a new belief and spending a lot of time reasoning about it alone, with the belief remaining stable without being embedded in a strong environment that makes independent thought hard because it keeps people busy.

A cult is by its nature a social institution and not just a meme that someone can pass around by having a few conversations.

I'm having trouble figuring out how to respond to this hostile framing. I mean, it's true that I've talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and "the community" have failed to live up to their stated purposes. Separately, it's also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)

[...]

Michael is a charismatic guy who has strong view

... (read more)

Thing 0:

Scott.

Before I actually make my point I want to wax poetic about reading SlateStarCodex.

In some post whose name I can't remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, "Huh, I need to only be convinced by true things."

This is extremely relatable to my lived experience. I am a stereotypical "... (read more)

Michael is very good at spotting people right on the verge of psychosis

...and then pushing them.

Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.

So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.

I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop and I ended out emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them ... (read more)

It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn't publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.

https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex online meetup but was central in it, presenting his thoughts.

If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I'm not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban.

I don't think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there is an expectation that certain people aren't welcome.

Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.

I don't remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. EG this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsych... (read more)

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-b... (read more)

[probably old-hat [ETA: or false], but I'm still curious what you think] My (background unexamined) model of psychosis-> schizophrenia is that something, call it the "triggers", sets a person on a trajectory of less coherence / grounding; if the trajectory isn't corrected, they just go further and further. The "triggers" might be multifarious; there might be "organic" psychosis and "psychic" psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. ... (read more)

I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.

Ok, the opinions you've described here seem much more reasonable than what I remember, thanks for clarifying.

I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.

I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird ... (read more)

I want to add some context I think is important to this.

Jessica was (I don't know if she still is) part of a group centered around a person named Vassar, informally dubbed "the Vassarites". Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to "jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself). Jailbreaking involves a lot of tough conversatio... (read more)

I have replied to this comment in a top-level post.

2Yoav Ravid1y
Is this the highest rated comment on the site?
4Dr_Manhattan1y
Since comments get occluded you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.

Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I've ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).

I've posted an edit/update above after talking to Vassar.

Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC

Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I'll try to explain some context for the record.

In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme "trans women are [psychologically] men". ... (read more)

I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he's "causing psychotic breaks" and "jailbreaking people" through conversation, "that listening too much to Vassar [causes psychosis], predictably") isn't obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psycholo... (read more)

I banned him from SSC meetups for a combination of reasons including these

If you make bans like these, it would be worth communicating them to the people organizing SSC meetups. Especially when a ban is made for the safety of meetup participants, not communicating it seems very strange to me.

Vassar lived in Berlin for a while after he left the Bay Area, and for decisions about whether or not to make an effort to integrate someone like him (and invite him to LW and SSC meetups) that kind of information is valuable, and Bay people not sharing it but claiming t... (read more)

I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don't think you're being fair.

"jailbreak" yourself from it (I'm using a term I found on Ziz's discussion of her conversations with Vassar; I don't know if Vassar uses it himself)

I'm confident this is only a Ziz-ism: I don't recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from h... (read more)

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detr

... (read more)

A question for the 'Vassarites', if they will: were you doing anything like the "unihemispheric sleep" exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?

So, it's been a long time since I actually commented on Less Wrong, but since the conversation is here...

Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of... always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn't talk directly, although we did occasionally participate in some of the same conversations online.

 

By all acc... (read more)

I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly.... (read more)

I've tried to address your point about psychiatry in particular at https://slatestarcodex.com/2019/12/04/symptom-condition-cause/

For the whale point, am I fairly interpreting your argument as saying that mammals are more similar, and more fundamentally similar, to each other, than swimmy-things? If so, consider a thought experiment. Swimmy-things are like each other because of convergent evolution. Presumably millions of years ago, the day after the separation of the whale and land-mammal lineages, proto-whales and proto-landmammals were extremely similar,... (read more)

Are you still going to insist that blood is thicker than water and we need to judge them by their phylogenetic group, even though this gives almost no useful information and it's almost always better to judge them by their environmental affinities?

No, of course not: we want categories that give useful information.

Did I fail as a writer by reaching for the cutesy title? (I guess I can't say I wasn't warned.) The actual text of the post—if you actually read all of the sentences in the post instead of just glancing at the title and skimming—is pretty expli... (read more)

Even taking everything else you write here for granted (which I wouldn’t normally, but let’s go with it for now)… the question in your last sentence seems easy to answer: we’re not in that period right now, because right now, by construction, whales are more like land-mammals in 85% of ways, so if you classify them as mammals, and then use that to make predictions about heretofore-unobserved traits, you will be right 85 / 15 = ~5.67 times more often than if you had classified them as fish.

This rubs me wrong for the same reason that "no evidence for..." claims rub me wrong.

We have a probably-correct model, the hygiene hypothesis broadly understood. We have a plausible corollary of that model, which is that kids eating dirt helps their immune system (I had never heard this particular claim before, but since you mention it, it seems like a plausible corollary). We should have a low-but-not-ridiculously-low prior on this.

(probably some people would say a high prior, since it follows naturally from a probably-true thing, but I don't trust any mu... (read more)

I think "don't let kids eat dirt" originally had a much lower prior than "parachutes prevent falling injuries", that was specifically overcome by the impression of evidence that doesn't exist. There are lots of things in dirt we know are dangerous- pesticides, car exhaust, lead, animal waste... Maybe the benefits of dirt outweigh that, maybe they don't, we don't know because no one has checked. I also expect us to notice that parachutes fail without rigorous evaluation, whereas the effects of marginal dirt will be harder to notice.

I will be sad if people w... (read more)

Can you explain the no-loss competition idea further?

  • If you have to stake your USDC, isn't this still locking up USDC, the thing you were trying to avoid doing?
  • What gives the game tokens value? 
1NunoSempere1y
No, not really. In fact, staking USDC (i.e., lending it to other people, or providing liquidity between coins) seems decently profitable right now. As with everything, there are riskier and less risky ways to go about it, and for this prediction market setup, I'd choose one of the less risky ones.

So normally, when you make a bet in, say, Polymarket, the money which you stake is kept by a contract until the question is resolved. But it's not yielding anything, it's just sitting there. And making it yield in the meantime a) is more profitable, and b) solves a problem of not being able to bet on long-term things, because now you're resistant to inflation + you win more money as time passes. I think that previously, some individuals (Caplan?) used to bet US stocks instead of cash for that reason, but I can't find a reference.

So to answer your question:
  • To bet in prediction markets you'd have to lock up USDC anyways for the duration of the bet.
  • (Note that you can exit early by making the opposite bet and then merging shares; this would be the same in the new system. But the point is that between betting and exiting, the capital is just sitting there.)
  • Yes, this adds an additional layer of risk depending on the method used to generate yield, but I think this is worth it.

Also it's maybe worth noting that the idea is not unique to Hedgehog Markets/it's been in the water supply for a while, it's just that Hedgehog Markets might be the first to get to a working implementation.
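To put rough numbers on the "capital just sitting there" point, here is a minimal sketch with arbitrary placeholder figures for the bet size and lending APY:

```python
def forgone_yield(stake_usdc: float, apy: float, years: float) -> float:
    """Interest the staked collateral would have earned over the life of the
    bet if it were lent out instead of sitting idle in the market contract."""
    return stake_usdc * ((1 + apy) ** years - 1)

# A $1,000 position locked for two years at a 5% lending APY:
print(round(forgone_yield(1_000, 0.05, 2), 2))  # 102.5
```

In a yield-bearing design that ~$102 accrues to participants (or, in Hedgehog's competition model, to the prize pool) instead of being forgone, which is what makes long-duration markets less costly to sit in.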
1NunoSempere1y
Right now, nothing; they are not even on the main blockchain yet (they'll launch in a few months) Eventually, they could use USDC, or some other stablecoin, and those would have value.
1MondSemmel1y
From reading further through the Hedgehog Markets [https://hedgehog-markets.medium.com/hedgehog-markets-mainnet-a-sneak-peek-cdd66e2e42a5] link:
  • It's a competition, i.e. only one of the services they eventually intend to offer.
  • The tokens have value because there are prizes: "But of course, since it’s a competition, there are prizes to be won — users can trade their way onto the ROI Leaderboard for a share of the competition prize pool."
  • Prizes are financed via the interest gained by lending away the USDC for the duration of the competition (just as if Hedgehog were acting as a classical bank, I suppose): "Thanks to DeFi composability, Hedgehog can direct the USDC staked by users towards a Solana lending protocol for the duration of the competition. All of the yield generated from these deposits goes back to users in the form of competition prize pools."
  • The linked post ends with a full page of disclaimers.

With regards to locking up USDC: I've only just started reading up on Ethereum (e.g. this post [https://www.lesswrong.com/posts/nMNi86hgNjaNnh8iu/a-whirlwind-tour-of-ethereum-finance]), but from my very rudimentary understanding, I suppose the point here is that USDC [https://coinmarketcap.com/currencies/usd-coin/] is a stablecoin pegged to the price of 1 USD, so locking up USDC does not expose you to the same volatility as would happen if you locked up the equivalent amount of ETH [https://coinmarketcap.com/currencies/ethereum/] instead.

Thanks, I read that, and while I wouldn't say I'm completely enlightened, I feel like I have a good basis for reading it a few more times until it sinks in.

I interpret you as saying in this post: there is no fundamental difference between base and noble motivations, they're just two different kinds of plans we can come up with and evaluate, and we resolve conflicts between them by trying to find frames in which one or the other seems better. Noble motivations seem to "require more willpower" only because we often spend more time working on coming up with p... (read more)

4Steven Byrnes2y
Thanks for your helpful comments!!! :)

One thing is: I think you’re assuming a parallel model of decision-making—all plans are proposed in parallel, and the striatum picks a winner. My scheme [https://www.lesswrong.com/posts/e5duEqhAhurT8tCyr/a-model-of-decision-making-in-the-brain-the-short-version] does have that, but then it also has a serial part: you consider one plan, then the next plan, etc. And each time you switch plans, there’s a dopamine signal that says whether this new plan is better or worse than the status quo / previous plan. I think there’s good evidence for partially-serial consideration of options, at least in primates (e.g. Fig. 2b here [https://link.springer.com/article/10.3758/s13415-020-00842-0]). I mean, that’s obvious from introspection.

My hunch is that partially-serial decision-making is universal in vertebrates. Like, imagine the lamprey is swimming towards place A, and it gets to a fork where it could instead turn and go to place B. I think "the idea of going to place B" pops into the lamprey's brain (pallium), displacing the old plan, at least for a moment. Then a dopamine signal promptly appears that says whether this new plan is better or worse than the old plan. If it's worse (dopamine pause), the lamprey continues along its original trajectory without missing a beat. This is partially-serial decision-making. I don't know how else the system could possibly work. Different pallium location memories are (at least partially) made out of the activations of different sparse subsets of neurons from the same pool of neurons, I think. You just can't activate a bunch of them at once, it wouldn't work, they would interfere with each other, AFAICT.

Anyway, if options are considered serially, things become simpler. All you really need is a mechanism for the hypothalamus to guess “if we do the current plan, how much and what type of food will I eat?”. (Such a mechanism does seem to exist AFAICT—in fact, I think mammals have two such mechani
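A deliberately toy sketch of that partially-serial control flow, with the dopamine signal reduced to a scalar comparison against the currently executing plan (this is only an illustration of the flow described above, not Byrnes's actual model):

```python
from typing import Callable, Iterable, TypeVar

Plan = TypeVar("Plan")

def serial_plan_selection(plans: Iterable[Plan],
                          value_estimate: Callable[[Plan], float]) -> Plan:
    """Consider candidate plans one at a time; a scalar 'dopamine-like'
    comparison against the currently executing plan decides whether to
    switch (burst, > 0) or carry on unchanged (pause, <= 0)."""
    plans = iter(plans)
    current = next(plans)
    current_value = value_estimate(current)
    for candidate in plans:
        candidate_value = value_estimate(candidate)
        dopamine = candidate_value - current_value
        if dopamine > 0:  # burst: adopt the new plan
            current, current_value = candidate, candidate_value
        # pause: keep the current plan without missing a beat
    return current

# e.g. the lamprey at the fork: "keep going to A" vs. "turn toward B"
values = {"go to A": 0.4, "go to B": 0.7}
print(serial_plan_selection(["go to A", "go to B"], values.get))
```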

Can you link to an explanation of why you're thinking of the brainstem as plan-evaluator? I always thought it was the basal ganglia.

2Steven Byrnes2y
Since this keeps coming up—Big Picture of Phasic Dopamine [https://www.lesswrong.com/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine] is still the best resource, but I just summarized this aspect of it in 20× fewer words: A model of decision-making in the brain (the short version). [https://www.lesswrong.com/posts/e5duEqhAhurT8tCyr/a-model-of-decision-making-in-the-brain-the-short-version] It's pretty similar to what I wrote in my other reply comment though.

Good question!

  • Yes I can link it but it's very long, sorry: Big Picture Of Phasic Dopamine.
  • The midbrain dopamine centers (VTA, SNc) are traditionally "part of the basal ganglia" AND "part of the brainstem". I think that these regions are where you find the "final answer" about whether a plan is good or bad, and that the dopamine signals from these regions can just directly shut down bad ideas.
  • But of course a lot of processing happens before you get to the "final answer"…
  • Specifically, I think there are basically three layers of "plan evaluation":
    • First, you s
... (read more)

Mental hospitals of the type I worked at when writing that post only keep patients for a few days, maybe a few weeks at tops. This means there's no long-term constituency for fighting them, and the cost of errors is (comparatively) low.

The procedures for these hospitals would be hard to change. It's hard to have a law like "you need a judge to approve sending someone to a mental hospital", because maybe someone's trying to kill themselves right now and the soonest a judge has an opening is three days from now. So the standard rule is "use your own judgment... (read more)

3ChristianKl2y
Hard in the sense that there's a lot of lobbying power behind the legacy system, but that's not for lack of alternatives. Prediction-based medicine [https://www.lesswrong.com/posts/TYA2nsPypoNaLsczd/prediction-based-medicine-pbm], where one doctor makes predictions about what's likely to happen if the patient isn't hospitalized and what happens if they are, and another doctor then makes the decision to hospitalize or not, isn't very hard. Then you fire the people who make bad predictions, because they are unqualified to do their job. I think it's perfectly fine to require the opinion of two doctors to take away someone's freedom, as taking freedom away is a major move, and I think it's reasonable to require the ability to make accurate predictions about harm to justify doing so.

I have some patients on disulfiram and it works very well when they take it. The problem is definitely that they can choose not to take it if they want alcohol (or sometimes just forget for normal reasons, then opportunistically drink after they realize they've forgotten). 

The implants are a great idea. As far as I know, the reason they're not used is because someone would have to pay for lots and lots of studies and the economics don't work out. Also because there are vague concerns about safety (if something went catastrophically wrong and the entir... (read more)

Wait, you don't know? Disulfiram implants are widely used in Eastern Europe.

I tried to bet on this on Polymarket a few months ago. Their native client for depositing money into your account didn't work (I think it was because I was in the US and it wasn't legal under US law). I tried to send money from another crypto account, and it said Polymarket didn't have enough money to pay the Ethereum gas fees to receive my money. It originally asked me to try reloading the page close to an odd-numbered GMT hour, when they were sending infusions of money to pay gas fees, but I tried a few times and never got quite close enough. I just check... (read more)

3Isma2y
I think by far the easiest way to trade the US election (for non-US persons) was on FTX: www.ftx.com [http://www.ftx.com]. For reference, this is Vitalik's blog post about the US election prediction markets (which of course favors Ethereum-based platforms!): https://vitalik.ca/general/2021/02/18/election.html [https://vitalik.ca/general/2021/02/18/election.html]. It looks horribly complicated. Maybe Vitalik himself didn't know about FTX? Side note: for US persons, www.ftx.us [http://www.ftx.us] is available (but more restrictive).
9cata2y
I could have imagined this was true a month ago, but then I spent about 15 total hours learning about Ethereum financial widgets, which was fun, and wrote it up into this post [https://www.lesswrong.com/posts/nMNi86hgNjaNnh8iu/a-whirlwind-tour-of-ethereum-finance], and now I totally understand Vitalik's steps [https://vitalik.ca/general/2021/02/18/election.html], understand many of the possible risks underlying them, and could have confidently done something similar myself. Although I am probably unusually capable even among the LW readership, I think many readers could have done this if they wanted to.

Similarly, I don't know anything about perpetual futures, but I guarantee that I could understand perpetual futures very clearly by tomorrow if you offered me $20k (or a 20% shot at $100k) to do it. Having to think hard for a week to clearly understand something complicated, with the expectation that there might be money on the other end*, is definitely a convincing practical explanation for why rationalists aren't making a lot of money off of schemes like this, but it's not a good reason why they shouldn't. Of course, many rationalists may not have enough capital that it matters much, but many may.

*It's not like these are otherwise useless concepts to understand, either.

Not Vitalik. A friend of mine from OBNYC.

I don't know why you had so much trouble putting money into Polymarket a few months back. Right now Polymarket is in 'trouble', since ETH fees are so high that it's expensive to withdraw.

I mostly bet on the election elsewhere, but I got five figures into Polymarket without too much trouble.

I wish you had posted on LessWrong. I would have happily helped you.

Thanks for this.

I think the UFH might be more complicated than you're making it sound here - the philosophers debate whether any human really has a utility function.

When you talk about the CDC Director sometimes making deliberately bad policy to signal to others that she is a buyable ally, I interpret this as "her utility function is focused on getting power". She may not think of this as a "utility function" - in fact I'm sure she doesn't; it may be entirely a selected adaptation to execute - but we can model it as a utility function for the same reason we m... (read more)

Bronze Age war (as per James Scott) was primarily war for captives, because the Bronze Age model was kings ruling agricultural dystopias amidst virgin land where people could easily escape and become hunter-gatherers. The laborers would gradually escape, the country would gradually become less populated, and the king would declare war on a neighboring region to steal their people to use as serfs or slaves.

Iron Age to Industrial Age war (as per Peter Turchin) was primarily war for land, because of Malthus. Until the Industrial Revolution, you needed a certa... (read more)

1jaspax2y
The only part of this that doesn't make sense to me is "they still eliminated their excess population". Unless I'm mistaken about the numbers, no war before WWI ever had a large enough number of combatants, or was deadly enough in general, to make a real dent in the population. An exception to this might be prehistoric intertribal warfare, in which the combatants included "all healthy adult males of the tribe", but that obviously doesn't apply to Iron Age to Industrial Age warfare as you claim.

The Bay Area is a terrible place to live in many ways. I think if we were selecting for the happiness of existing rationalists, there's no doubt we should be somewhere else.

But if the rationalist project is supposed to be about spreading our ideas and achieving things, it has some obvious advantages. If MIRI is trying to lure some top programmer, it's easier for them to suggest they move to the Bay (and offer them enough money to overcome the house price hurdle) than to suggest they move to Montevideo or Blackpool or even Phoenix. If CEA is trying to get p... (read more)

I actually feel like East Bay (Oakland and every place north of Oakland) is really pleasant:

  • Cost of living isn't terrible except for rent, and it's still possible to find good deals on rent, e.g. I've lived in North Oakland for 6 years and have only paid more than $1,000/month for one of those years (granted for the rest of the time I've been living in group houses or with a partner)
  • East Bay parks are amazing
  • Minimal social decay except for downtown Berkeley and parts of Oakland
  • Wonderful weather for ~10 months of the year (every season except for fire seaso
... (read more)

It also rules out Cascadian cities like Portland and Seattle - only marginally better housing costs, worse fires, and worse social decay (e.g. violence in Portland).

I'm not sure this is so conclusive, regarding Seattle. A few notes --

  1. The rent is 40% less than San Francisco, and 20% less than Berkeley. (And the difference seems likely to continue or increase, because Seattle is willing to build housing.)
  2. There is no state income tax.
  3. While the CHAZ happened in Seattle, my impression is that day-to-day it's much more livable than SF. (I haven't lived there in a
... (read more)

(Source: I work at MIRI.)

MIRI is very seriously considering moving to a different country soon (most likely Canada), or moving to elsewhere in the US. No concrete plans or decisions at this point, and it's very possible we'll stay in the Bay; but I don't think people should make their current location decisions based on a confident prediction that MIRI is going to stay in the Bay.

If we do leave the Bay Area, some of the main places we're currently thinking about are New Hampshire and some other northeastern US spots, and the area surrounding Toronto in Can... (read more)

I feel like there's a very serious risk of turning a 'broad rationalist movement' reaction - one feeding on PARC-adjacent extreme-aspirationals and secreting 'rationalists' - into a permanently capped-out minor regional cult by just deciding to move somewhere all avowed 'rationalists' choose.
I doubt most 'rationalists' or even most of the people who are likely to contribute to the literature of a rationalist movement have yet been converted to a specific sort of tribal self-identification that would lead them to pick up roots and all go to the same place at one t... (read more)

I have been collecting interest in an unchartered community in Niobrara, Wyoming with plans to gain critical mass for a state charter.

It's the smallest county in the state, with a population of ~2,000; the state has the most national voting power per person; and the law is generally about as libertarian as any other state's. They've made a specific push to be a replacement Switzerland after Zurich cracked down on the banks, with particular friendliness to cryptocurrency.

There are currently 2200 acres for sale for $1MM, or smaller lots for less. I am personally committed ... (read more)

>it's less than an hour's drive to Boston

This is pretty damn strong for intellectual-hub considerations. I had been thinking Denver or Santa Cruz were the only real choices due to decriminalization (a leading indicator), but given NH's politics they might follow along in the next few years.

But if the rationalist project is supposed to be about spreading our ideas and achieving things [emphasis mine]

Thanks for phrasing this as a conditional! To fill in another branch of the if/else-if/else-if ... conditional statement: if the rationalist project is supposed to be about systematically correct reasoning—having the right ideas because they're right, rather than spreading our ideas because they're ours—then things that are advantageous to the movement could be disadvantageous to the ideology, if the needs of growing the coalition's resources c... (read more)

I agree with regard to Moraga. Habryka and a few housemates of mine drove down to have a look around, and I think their main updates were that the houses had only like 2 bedrooms each and were all ~5x as far from each other as in Berkeley, that there were no sidewalks, and that there was no natural meeting place (the place with the shops had no natural seating), which means people just wouldn’t see each other very much unless everyone had a car and made it a conscious and constant effort. Even though it was nice and clean and so on.

I also agree wrt CFAR/MIRI. I would be i... (read more)
