This is a special post for quick takes by Ben Pace. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

The comments here are a storage of not-posts and not-ideas that I would rather write down than not.

Benito's Shortform Feed
246 comments
Some comments are truncated due to high volume.
Ben Pace

I often wish I had a better way to concisely communicate "X is a hypothesis I am tracking in my hypothesis space". I don't simply mean that X is logically possible, and I don't mean I assign even 1-10% probability to X, I just mean that as a bounded agent I can only track a handful of hypotheses and I am choosing to actively track this one.

  • This comes up when a substantially different hypothesis is worth tracking but I've seen no evidence for it. There's a common sentence like "The plumber says it's fixed, though he might be wrong" where I don't want to communicate that I've got much reason to believe he might be wrong, and I'm not giving it even 10% or 20%, but I still think it's worth tracking, because strong evidence is common and the importance is high.
  • This comes up in adversarial situations when it's possible that there's an adversarial process selecting on my observations. In such situations I want to say "I think it's worth tracking the hypothesis that the politician wants me to believe that this policy worked in order to pad their reputation, and I will put some effort into checking for evidence of that, but to be clear I haven't seen any positive evidence for that hypothesi
... (read more)
6Chris_Leong
Maybe just say that you're tracking the possibility?
4Richard_Kennaway
"Trust, but verify."
4Dagon
Standard text in customer-facing outage recovery notices: "all systems appear to be operating correctly, and we are actively monitoring the situation". In more casual conversations, I sometimes say "cautiously optimistic" when stating that I think things are OK, but I'm paying more attention than normal for signs I'm wrong. Mostly, I talk about my attention and what I'm looking for, rather than specifying the person who's making claims. Instead of "the plumber says it's fixed, though he might be wrong", I'd say "The plumber fixed it, but I'm keeping an eye out for further problems". For someone proposing something I haven't thought about, "I haven't noticed that, but I'll pay more attention for X and Y in the future".
3jam_brand
Before I read the aphoristic three-word reply to you from Richard Kennaway (admittedly a likely even clearer-cut way to indicate the following sentiment), I was thinking that, to downplay any unintended implications about the magnitude of your probabilities, you could maybe say something about your tracking being for mundane-vigilance or intermittent-map-maintenance or routine-reality-syncing / -surveying / -sampling reasons. For any audience you anticipate being familiar with this essay, though, another idea might be to use a version of something like: "The plumber says it's fixed, which I'm splitting on [by default][and {also} tracking <for posterity>]." (spoilered section below just corrals a ~dozen expansions / embellishments of the above)
3Valdes
Adapted from the French "j'envisage que X", I propose "I am considering the possibility that X" or in some contexts "I am considering X". "The plumber says it's fixed, but I am considering he might be wrong".
2Richard_Kennaway
What's wrong with your original sentence, "X is a hypothesis I am tracking in my hypothesis space"? Or more informal versions of that, like "I'll be keeping an eye on that", "We'll see", etc.?
2Ben Pace
I guess it's just that I don't feel mastery over my communication here, I still anticipate that I will find it clunky to add in a whole chunk of sentences to communicate my epistemic status. I anticipate often in the future that I'll feel a need to write a whole paragraph, say in the political case, just to clarify that though I think it's worth considering the possibility that the politician is somehow manipulating the evidence, I've seen no cause to believe it in this case. I feel like bringing up the hypothesis with a quick "though I'm tracking the possibility that Adam is somehow manipulating the evidence for political gain" pretty commonly implies that the speaker (me) thinks it is likely enough to be worth acting on, and so I feel I have to explicitly rule that out as why I'm bringing it up, leaving me with my rather long sentence from above.
2jmh
In the plumbing context I generally say or think, "The repair/work has been completed and I'll see how it lasts." or sometimes something like, "We've addressed the immediate problem so let's see if that was a fix or a bandage."
2Steven Byrnes
“The plumber says it’s fixed, but I’ll keep an eye out for evidence of more problems.” (ditto Dagon) also “The politician seems to be providing sound evidence that her policy is working, but I’ll remain vigilant to the possibility that she’s being deceptive.”
1mattmacdermott
"Bear in mind he could be wrong" works well for telling somebody else to track a hypothesis. "I'm bearing in mind he could be wrong" is slightly clunkier but works ok.
1Mateusz Bagiński
"The hypothesis/possibility that 'X' is mindworthy" ("worth being mindful about it"). Maybe the nicest solution would be to coin a one-syllable modal verb like "may" or "can" to communicate exactly this.
2Ben Pace
"Keep in mind that X".
1FlorianH
Maybe "I'm interested in the hypothesis/possibility..."
1NineDimensions
In some cases something like this might work: "The plumber says it's fixed, so hopefully it is" or "The plumber says it's fixed, so it probably is", which I think conveys "there's an assumption I'm making here, but I'm just putting a flag in the ground to return to if things don't play out as expected".
-9Shankar Sivarajan

Yesterday I noticed that I had a pretty big disconnect from this: There's a very real chance that we'll all be around, business somewhat-as-usual in 30 years. I mean, in this world many things have a good chance of changing radically, but automation of optimisation will not cause any change on the level of the industrial revolution. DeepMind will just be a really cool tech company that builds great stuff. You should make plans for important research and coordination to happen in this world (and definitely not just decide to spend everything on a last-ditch effort to make everything go well in the next 10 years, only to burn up the commons and your credibility for the subsequent 20).

Only yesterday when reading Jessica's post did I notice that I wasn't thinking realistically/in-detail about it, and start doing that.

Related hypothesis: people feel like they've wasted some period of time e.g. months, years, 'their youth', when they feel they cannot see an exciting path forward for the future. Often this is caused by people they respect (/who have more status than them) telling them they're only allowed a small few types of futures.

6lc
How do you feel about this today?
4Ben Pace
  • The next 30 years seem really less likely to be 'relatively normal'. My mainline world-model is that nation states will get involved with ML in the next 10 years, and that many industries will be really changed up by ML.
  • One of my personal measures of psychological health is how many years ahead I feel comfortable making trade-offs for today. This changes over time, I think I feel like I'm a bit healthier now than I was when I wrote this, but still not great. Not sure how to put a number to this, I'll guess I'm maybe able to go up to 5 years at the minute (the longest ones are when I think about personal health and fitness)? Beyond that feels a bit foolish.
  • I still resonate a bit with what I wrote here 4 years ago, but definitely less. My guess is if I wrote this today the number I would pick would be "8-12 years" instead of "30".
2Ben Pace
Nation states got involved with ML faster than I expected when I wrote this!
2Ben Pace
Epistemic status: Thinking out loud some more.

Hm, I notice I'm confused a bit about the difference between "ML will blow up as an industry" and "something happens that affects the world more than the internet and smartphones have done so far". I think honestly I have a hard time imagining ML stuff that's massively impactful but isn't, like, "automating programming", which seems very-close-to-FOOM to me. I don't think we can have AGI-complete things without being within like 2 years (or 2 days) of a FOOM.

So then I get split into two worlds, one where it's "FOOM and extinction" and another world which is "a strong industry that doesn't do anything especially AGI-complete". The latter is actually fairly close to "business somewhat-as-usual", just with a lot more innovation going on, which is kind of nice (while unsettling). Like, does "automated drone warfare" count as "business-as-usual"? I think maybe it does, it's part of general innovation and growth that isn't (to me) clearly more insane than the invention of nukes was.

I think I am expecting massive innovation and that ML will be shaking up the world like we've seen in the 1940's and 1950's (transistors, DNA, nukes, etc etc). I'm not sure whether to expect 10-100x more than that before FOOM. I think my gut says "probably not" but I do not trust my gut here, it hasn't lived through even the 1940's/50's, never mind other key parts of the scientific and industrial and agricultural and eukaryotic revolutions.

As we see more progress over the next 4 years I expect we'll be in a better position to judge how radical the change will be before FOOM. The answer to lc's original question is then:
2eigen
Hey, I think you should also consider the out-of-nowhere, narrative-breaking nature of COVID, which also happened after you wrote this. It's not necessarily a proof that the narrative can "break," but it sure is an example. And, while I think I read the sequences way longer than 4 years ago, one thing I remember them giving me is a sense of "everything can change very, very fast."

Privacy: a tool for thinking for yourself

As part of some recent experiments with debates, today I debated Ronny Fernandez on the topic of whether privacy is good or bad, and I was randomly assigned the “privacy is good” side. I’ve cut a few excerpts together that I think work as a standalone post, and put them below.

At the start I was defending privacy in general, and then we found that our main disagreement was about whether it was helpful for thinking for yourself, so I focus even more on that after the opening statement.

This is an experiment. I'm down for feedback on whether to do more of this sort of thing (it only takes me ~2 hours), how I could make it better for the reader, whether to make it a top-level post, etc.

Epistemic status: soldier mindset. I will here be exaggerating the degree to which I believe my conclusions.

Opening Statement

My core argument is that, in general, the pressures for conformity amongst humans are crazy.

This is true of your immediate circle, your local community, and globally. Each one of these has sufficiently strong pressures that I think it is a good heuristic to actively keep secrets and things you think about and facts about your life separate fr... (read more)

6Elizabeth
I'm surprised that the Crazy Eye Travis example is framed around whether his friends will pressure him or his own psychology, instead of the environmental consequences. Most opera houses are not going to let you in if your ass is hanging out. Nor will banks or bigtech jobs. Failure to conform with institutions typically results in losing access to those institutions.
4sunwillrise
I agree with your perspective almost entirely (for reasons basically building on top of what Zvi has written about at length before), so I would be a lot more curious to see what Ronny's argument was during the debate (if he is okay with sharing it, of course). I know you have referenced and quoted part of his reasoning, but it's a bit weird to read a rebuttal to someone's argument without first reading the argument itself, in their own words.
3abergal
Thanks for writing this up-- at least for myself, I think I agree with the majority of this, and it articulates some important parts of how I live my life in ways that I hadn't previously made explicit for myself.

Did anyone else feel that when the Anthropic Scaling Policies doc talks about "Containment Measures" it sounds a bit like an SCP, just replaced with the acronym ASL?

Item #: ASL-2-4

Object Class: Euclid, Keter, and Thaumiel

Threat Levels:

ASL-2... [does] not yet pose a risk of catastrophe, but [does] exhibit early signs of the necessary capabilities required for catastrophic harms

ASL-3... shows early signs of autonomous self-replication ability... [ASL-3] does not itself present a threat of containment breach due to autonomous self-replication, because it is both unlikely to be able to persist in the real world, and unlikely to overcome even simple security measures... 

...an early guess (to be updated in later iterations of this document) is that ASL-4 will involve one or more of the following... [ASL-4 has] become the primary source of national security risk in a major area (such as cyberattacks or biological weapons), rather than just being a significant contributor. In other words, when security professionals talk about e.g. cybersecurity, they will be referring mainly to [ASL-4] assisted... attacks. A related criterion could be that deploying an ASL-4 system without safeguards

... (read more)

I Changed My Mind on Prison Sentencing.

I used to have the opinion that prison sentencing should be a disincentive proportional to the upside of the crime. The question I'd ask was "How much of a prison sentence would make the crime not worth it to people?" I tried to estimate the upside to the criminal, and what length of sentence would make the expected utility reliably turn out negative. I would sometimes discuss with people how long of a prison sentence they'd risk in order to get the upside of a crime, and use this as an input into how long I thought sentencing should be. However, I now think this was an error, for two reasons:

  1. Many criminals or norm-breakers are not doing something as clear-headed as an explicit expected-value calculation, they are often thinking with part of their mind that is relatively uncivilized (and perhaps desperate or hidden from much of their conscious mind). For instance, it is done impulsively, or it is a crime of passion.
  2. An Astral Codex Ten blogpost reiterated the common wisdom that length of sentencing has little effect on crime rate, and instead reliability of punishment has a much larger effect. The conclusion reads "Deterrence effects are so wea
... (read more)

I recall hearing it claimed that a reason why financial crimes sometimes seem to have disproportionately harsh punishments relative to violent crimes is that financial crimes are more likely to actually be the result of a cost-benefit analysis.

5Ben Pace
As a concrete example, I previously thought that Sam Bankman-Fried should be sentenced to 20-40 years in prison for his fraud, because this was the sort of time that I think most people would no longer be willing to trade for an even shot at getting $10Bs (e.g. when I asked my personal trainer, he said he would accept 15 years in prison for an even shot at $10B; I think many would take more, and also the true upside was higher). From the above I've updated that the diff between expecting 5 years or 50 years in prison wasn't a primary input into SBF's repeated decisions to do fraud. However, I do think he is sufficiently sociopathic that I never expect him to not be a danger to society, so my new position is that life in-prison is probably best for him. (This is not meant as punishment, I would not mind him going to those pleasant Swedish prisons I've heard about, I just otherwise expect him to continue to competently do horrendous things with zero moral compunction.)
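For concreteness, here's a minimal sketch of the expected-utility framing from the original take. The payoff figure is derived from the "15 years for an even shot at $10B" trade described above; the catch probabilities are hypothetical numbers I've made up for illustration.

```python
# Minimal sketch of the naive deterrence model described above.
# Payoff is derived from the "15 years for an even shot at $10B" trade in the
# comment; the catch probabilities are hypothetical.

def crime_expected_utility(p_caught, sentence_years, payoff_utility):
    """Expected utility of a crime, measured in prison-year units:
    upside if you get away with it, minus prison time if caught."""
    return (1 - p_caught) * payoff_utility - p_caught * sentence_years

# If an even (50%) shot at $10B is worth ~15 prison-years to someone,
# then the full payoff is worth roughly 30 prison-year units.
payoff = 30.0

for sentence in (5, 20, 50):
    for p_caught in (0.5, 0.9):
        eu = crime_expected_utility(p_caught, sentence, payoff)
        print(f"sentence={sentence}y, p(caught)={p_caught}: EU={eu:+.1f} prison-years")

# Raising p(caught) from 0.5 to 0.9 flips the sign at much shorter sentences
# than lengthening the sentence does -- which is the point of reason 2 above.
```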
6Lukas_Gloor
I like all the considerations you point out, but based on that reasoning alone, you could also argue that a con man who ran a lying scheme for 1 year and stole only like $20,000 should get life in prison -- after all, con men are pathological liars and that phenotype rarely changes all the way. And that seems too harsh? I'm in two minds about it: On the one hand, I totally see the utilitarian argument of just locking up people who "lack a conscience" forever the first time they get caught for any serious crime. On the other hand, they didn't choose how they were born, and some people without prosocial system-1 emotions do in fact learn how to become a decent citizen.  It seems worth mentioning that punishments for financial crime often include measures like "person gets banned from their industry" or them getting banned from participating in all kinds of financial schemes. In reality, the rules there are probably too lax and people who got banned in finance or pharma just transition to running crypto scams or sell predatory online courses on how to be successful (lol). But in theory, I like the idea of adding things to the sentencing that make re-offending less likely. This way, you can maybe justify giving people second chances. 
7Ben Pace
Good point. I can imagine things like "permanent parole" (note that probation and parole are meaningfully different) or being under house arrest or having constraints on your professional responsibilities or finances or something, being far better than literal incarceration.
7Noosphere89
One of the missing considerations is that crime is done mostly by young people, and the rate of crimes goes down the older you get. A lot of this IMO is that the impulsiveness/risk-taking behavior behind crimes decreases a lot with age, but the empirical fact of crime going down with age, especially reoffending, is a big reason why locking people up for life is less good than Ben Pace said, since the reoffending rate itself goes down with age.
4Ben Pace
I agree there are people who do small amounts of damage to society, are caught, and do not reoffend. Then there are other people whose criminal activities will be most of their effect on society, will reliably reoffend, and for whom the incapacitation strongly works out positive in consequentialist terms. My aim would be to have some way of distinguishing between them. The amount of evidence we have about Bankman-Fried's character is quite different than that of most con men, including from childhood and from his personal diary, so I hope we can have more confidence based on that. But a different solution is to not do any psychologizing, and just judge based on reoffending. See this section from the ACX post: I should add that Scott has lots of concerns about doing this in the US, and argues that properly doing this in the US would massively increase the incarcerated population. I didn't quite follow his concerns, but I was not convinced that something like this would be a bad idea on consequentialist grounds, even if the incarcerated population were to massively increase. (Note that I would support improving the quality of prisons to being broadly as nice as outside of prisons.)
4Mark Xu
This is in part the reasoning used by Judge Kaplan: from https://time.com/6961068/sam-bankman-fried-prison-sentence/
4Ben Pace
I want to try out the newly updated claims feature! Here are some related claims, I invite you to vote your probabilities. (This can be for whatever reason you think, such as because his expected value calculations would've changed, or because he would've taken more care around these particular behaviors, or any other reason you please.)
4Nick_Tarleton
I can easily imagine an argument that: SBF would be safe to release in 25 years, or for that matter tomorrow, not because he'd be decent and law-abiding, but because no one would trust him and the only crimes he's likely to (or did) commit depend on people trusting him. I'm sure this isn't entirely true, but it does seem like being world-infamous would have to mitigate his danger quite a bit. More generally — and bringing it back closer to the OP — I feel interested in when, and to what extent, future harms by criminals or norm-breakers can be prevented just by making sure that everyone knows their track record and can decide not to trust them.
4Ben Pace
I think having an easily-findable reputation makes it harder to do crimes, but being famous makes it easier. Many people are naive & gullible, or are themselves willing to do crime, and would like to work with him. I expect him to get opportunities for new ventures on leaving prison, with unsavory sorts. I definitely support track-records being more findable publicly. Of course there's some balance in that the person who publishes it has a lot of power over the person being written about, and if they exaggerate it or write it hyperbolically then they can impose a lot of inappropriate costs on the person that they're in a bad position to push back on.
2lc
I voted 75% because taken literally I think in 25 years AI will be so advanced that he won't have much of an ability to impact the world at all 🤓 (Otherwise 40%)
4Nathan Helm-Burger
I've been mad about the inefficiencies of the criminal justice system for many years. It does wasteful harm to perpetrators while also not doing a great job at prevention or helping victims. One possible reform I'm excited about is having more AI monitoring of public spaces. For instance, if a woman could wear a sign saying "monitored by live camera" she might feel safer walking at night. Or a store might post a sign about AI security cameras. Another possible innovation is AI parole officers (perhaps housed in a cell-service neckband with a camera), and a reformed parole system intended to be a long-term nanny rather than an excuse to jail the parolee. If you had to wear your AI nanny anytime you were out of the house, and it would scold you if it looked like you were about to commit a crime... This might substitute entirely for prison for the majority of crimes.
4jmh
I agree with the view that punishment is not really a great deterrent as many crimes are not committed from a calculated cost-benefit perspective. I do think we need to apply that type of thinking towards what we might do with that insight/fact of things. On that point, I would like to see more on your claim that we would get better bang for the buck as it were from more investment in preventing crimes. In this regard I'm thinking about the contrast between western legal views and places like China, as well as the estimates on the marginal pecuniary costs of prevention to the marginal pecuniary savings from reduced punishment. Clearly two (among many) different margins along which trade-offs will need to be made.

Another aspect that seems worth exploring (and I'm sure it has been, but not sure where the literature stands on the question) is how, at least in my understanding of USA criminal law, victims of crimes are not often compensated (white collar, fraud, financial crimes are something of an exception) but the victims, as a part of society, are then paying costs to punish the criminal. Full prevention is not a reasonable assumption (not sure what level is a reasonable assumption) but we might find a better solution even at the current rate of prevention if more of those harmed by crimes were actually compensated for the harms rather than just imposing the punishment of the criminal actor. A primary reason for preventing the event of a crime is the prevention of the harm. But if the harm can be largely mitigated after the fact there is a degree of equivalence between it never having occurred and its compensation (perhaps here we might think of law and punishment as a type of insurance).

I also think there is something to look at in terms of prevention of incidence of crimes due to incarceration -- a type of exile. There might be scope for approaches there that maintain that type of prevention for repeat offenders (those demonstrating a propensity for some bad beh
1ZY
For "prison sentencing" here, do you mean some time in prison, but not life sentencing? Also instead of prison sentencing, after increasing "reliability of being caught", would you propose alternative form of sentencing? Some parts of 1) and most of 2) made me feel educating people on the clear consequences of the crime is important. For people who frequently go in and out of prison - I would guess most legal systems already make it more severe than previous offenses typically, but for small crimes they may not be. I do think other types of punishments that you have listed there (physical pain, training programs, etc) would be interesting depending on the crime.

Hypothesis: power (status within military, government, academia, etc) is more obviously real to humans, and it takes a lot of work to build detailed, abstract models of anything other than this that feel as real. As a result people who have a basic understanding of a deep problem will consistently attempt to manoeuvre into powerful positions vaguely related to the problem, rather than directly solve the open problem. This will often get defended with "But even if we get a solution, how will we implement it?" without noticing that (a) there is no real effort by anyone else to solve the problem and (b) the more well-understood a problem is, the easier it is to implement a solution.

5Benquo
I think this is true for people who've been through a modern school system, but probably not a human universal.
4Ben Pace
My, that was a long and difficult but worthwhile post. I see why you think it is not the natural state of affairs. Will think some more on it (though can't promise a full response, it's quite an effortful post). Am not sure I fully agree with your conclusions.
6Benquo
I'm much more interested in finding out what your model is after having tried to take those considerations into account, than I am in a point-by-point response.
8Raemon
This seems like a good conversational move to have affordance for.
2Kaj_Sotala
This might be true, but it doesn't sound like it contradicts the premise of "how will we implement it"? Namely, just because understanding a problem makes it easier to implement, doesn't mean that understanding alone makes it anywhere near easy to implement, and one may still need significant political clout in addition to having the solution. E.g. the whole infant nutrition thing.
2Ruby
Seems related to Causal vs Social Reality.
1Eli Tyre
Do you have an example of a problem that gets approached this way? Global warming? The need for prison reform? Factory Farming?
4Ben Pace
AI.
4Eli Tyre
It seems that AI safety has this issue less than every other problem in the world, by proportion of the people working on it. Some double digit percentage of all of the people who are trying to improve the situation, are directly trying to solve the problem, I think? (Or maybe I just live in a bubble in a bubble.) And I don’t know how well this analysis applies to non-AI safety fields.

I'd take a bet at even odds that it's single-digit.

To clarify, I don't think this is just about grabbing power in government or military. My outside view of plans to "get a PhD in AI (safety)" seems like this to me. This was part of the reason I declined an offer to do a neuroscience PhD with Oxford/DeepMind. I didn't have any secret for why it might be plausibly crucial.

4Ben Pace
Strong agree with Jacob.

Er, Wikipedia has a page on misinformation about Covid, and the first example is Wuhan lab origin. Kinda shocked that Wikipedia is calling this misinformation. Seems like their authoritative sources are abusing their positions. I am scared that I'm going to stop trusting Wikipedia soon enough, which is leaving me feeling pretty shook.

7Dagon
Wikipedia has beaten all odds for longevity of trust - I remember pretty heated arguments circa 2005 whether it was referenceable on any topic, though it was known to be very good on technical topics or niches without controversy where nerds could agree on what was true (but not always what was important).   By 2010, it was pretty widely respected, though the recommendation from Very Serious People was to cite the underlying sources, not the articles themselves.  I think it was considered pretty authoritative in discussions I was having  by 2013 or so, and nowadays it's surprising and newsworthy when something is wrong for very long (though edit wars and locking down sections happens fairly often).   I still take it with a little skepticism for very recently-edited or created topics - it's an awesome resource to know the shape of knowledge in the area, but until things have been there for weeks or months, it's hard to be sure it's a consensus.
5Viliam
Could it be a natural cycle? Wikipedia is considered trustworthy -> people with strong agenda get to positions where they can abuse Wikipedia -> Wikipedia is considered untrustworthy -> people with strong agenda find better use of their time and stop abusing Wikipedia, people who care about correct information fix it -> Wikipedia is considered trustworthy...
8ChristianKl
The agenda is mainly to follow the institutions like the New York Times. In a time where the New York Times isn't worth much more than sawdust, that's not a strategy to get to truth.
6Steven Byrnes
"No safe defense, not even Wikipedia" :-P I suggest not having a notion of "quality" that's supposed to generalize across all wiki pages. They're written by different people, they're scrutinized to wildly different degrees. Even different sections of the same article can be obviously different in trustworthiness ... Or even different sentences in the same section ... Or different words in the same sentence :)
4ChristianKl
Wikipedia unfortunately threw out their neutral point of view policy on COVID-19. Besides that page, the one on ivermectin ignores the meta-analyses in favor of using it for COVID-19. There's also no page for "patient zero" (who was likely employed at the Wuhan Institute of Virology).
2Pattern
Fix it. (And let us know how long that sticks for.)
2Ben Pace
You fix it! If you think it's such a good idea :) I am relatively hesitant to start doing opinionated fixes on Wikipedia, I think that's not the culture of page setup that they want. My understanding is that the best Wikipedia editors write masses of pages that they're relatively disinterested in, and that being overly interested in a specific page mostly leads you to violating all of their rules and getting banned. This sort of actively political editing is precisely the sort of thing that they're trying to avoid.
2Viliam
By saying "Wuhan lab origin", you can roughly mean three things: * biological weapon, intentionally released, * natural virus collected, artificially improved, then escaped, * natural virus collected, then escaped in the original form. The first we can safely dismiss: who would drop a biological weapon of this type on their own population? We can also dismiss the third one, if you think in near mode what that would actually mean. It means the virus was already out there. Then someone collected it -- obviously, not all existing particles of the virus -- which means that most of the virus particles that were already out there, have remained out there. But that makes the leak from Wuhan lab an unnecessary detail; "virus already in the wild, starts pandemic" is way more likely than "virus already in the wild, does not start pandemic, but when a few particles are brought into a lab and then accidentally released without being modified, they start pandemic"... what? This is why arguing for natural evolution of the virus is arguing against the lab leak. (It's just not clearly explained.) If you do not assume that the virus was modified, then the hypothesis that the pandemic started by Wuhan lab leak, despite the virus already being out there before it was brought to the Wuhan lab, is privileging the hypothesis. If the virus is already out there, you don't need to bring it to a lab and let it escape again in order to... be out there, again. Now here I agree that the artificial improvement of the virus cannot be disproved. I mean, whatever can happen in the nature, probably can also happen in the lab, so how would you prove it didn't? I guess I am trying to say that in the Wikipedia article, the section "gain of function research" does not deserve to be classified as misinformation, but the remaining sections do.

Responding to Scott's response to Jessica.

The post makes the important argument that if we have a word whose boundary is around a pretty important set of phenomena that are useful to have a quick handle to refer to, then

  • It's really unhelpful for people to start using the word to also refer to a phenomenon with 10x or 100x more occurrences in the world because then I'm no longer able to point to the specific important parts of the phenomena that I was previously talking about
    • e.g. Currently the word 'abuser' describes a small number of people during some of their lives. Someone might want to say that technically it should refer to all people all of the time. The argument is understandable, but it wholly destroys the usefulness of the concept handle.
  • People often have political incentives to push the concept boundary to include a specific case in a way that, if it were principled, would indeed make most of the phenomena in the category useless to talk about. This allows for selective policing by the people with the political incentive.
  • It's often fine for people to bend words a little bit (e.g. when people verb nouns), but when it's in the class of terms w
... (read more)

I will actually clean this up and into a post sometime soon [edit: I retract that, I am not able to make commitments like this right now]. For now let me add another quick hypothesis on this topic whilst crashing from jet lag.

A friend of mine proposed that instead of saying 'lies' I could say 'falsehoods'. Not "that claim is a lie" but "that claim is false".

I responded that 'falsehood' doesn't capture the fact that you should expect systematic deviations from the truth. I'm not saying this particular parapsychology claim is false. I'm saying it is false in a way where you should no longer trust the other claims, and expect they've been optimised to be persuasive.

They gave another proposal, which is to say instead of "they're lying" to say "they're not truth-tracking", i.e. suggesting that their reasoning process (perhaps in one particular domain) does not track truth.

I responded that while this was better, it still seems to me that people won't have an informal understanding of how to use this information. (Are you saying that the ideas aren't especially well-evidenced? But they so... (read more)

3Pattern
Is this "bias"?
3Ben Pace
Yeah good point I may have reinvented the wheel. I have a sense that’s not true but need to think more.

The definitional boundaries of "abuser," as Scott notes, are in large part about coordinating around whom to censure. The definition is pragmatic rather than objective.*

If the motive for the definition of "lies" is similar, then a proposal to define only conscious deception as lying is therefore a proposal to censure people who defend themselves against coercion while privately maintaining coherent beliefs, but not those who defend themselves against coercion by simply failing to maintain coherent beliefs in the first place. (For more on this, see Nightmare of the Perfectly Principled.) This amounts to waging war against the mind.

Of course, as a matter of actual fact we don't strongly censure all cases of conscious deception. In some cases (e.g. "white lies") we punish those who fail to lie, and those who call out the lie. I'm also pretty sure we don't actually distinguish between conscious deception and e.g. reflexively saying an expedient thing, when it's abundantly clear that one knows very well that the expedient thing to say is false, as Jessica pointed out here.

*It's not clear to me that this is a good kind of concept to ... (read more)

2Ben Pace
Note: I just wrote this in one pass when severely jet lagged, and did not have the energy to edit it much. If I end up turning this into a blogpost I will probably do that. Anyway, I am interested in hearing via PM from anyone who feels that it was sufficiently unclearly written that they had a hard time understanding/reading it.

Two recent changes to LessWrong that I made!

  1. I have added two new reacts: "I'd bet this is false" and "I'd bet this is true".
  2. You can now sort a given user's comments by 'top' i.e. by karma.

For the first one, if you want to let someone know you'd be willing to take a bet (or if you want to call someone out on their bullshit) you can now highlight the claim they made and use the react. The react is a pair of dice, because we're never certain about propositions we're betting on (and because 'a hand offering money' was too hard to make out at the small scale). Hopefully this will increase people's affordance to take more bets on the site!

These reacts replaced "I checked it's true" and "I checked it's false", which didn't get that much use, but were some of the most abused reacts (often used on opinions or statements-of-positions that were simply not checkable).

For the second, if you go to a user profile and scroll down to the comments, you can now sort by 'top', 'newest', 'oldest', and 'recent replies'. I find that 'top' is a great way to get a sense of a person's thoughts and perspective on the world, and I used to visit greaterwrong a lot for this feature. Now you can do it on LessWrong!

7Zac Hatfield-Dodds
This feels pretty nitpick-y, but whether or not I'd be interested in taking a bet will depend on the odds - in many cases I might take either side, given a reasonably wide spread. Maybe append at p >= 0.5 to the descriptions to clarify? The shorthand trading syntax "$size @ $sell_percent / $buy_percent" is especially nice because it expresses the spread you'd accept to take either side of the bet, e.g. "25 @ 85/15 on rain tomorrow" to offer a bet of $25, selling if you think probability of rain is >85%, buying if you think it's <15%. Seems hard to build this into a reaction though!
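A minimal sketch of that shorthand in Python, just following the description above (the parsing and function names are illustrative assumptions, not an existing LessWrong feature or trading library):

```python
import re

# Sketch of the "$size @ $sell_percent / $buy_percent" shorthand described
# above. Parsing and names are illustrative assumptions, not a real API.

def parse_quote(quote):
    """Parse e.g. '25 @ 85/15 on rain tomorrow' into (size, sell, buy, topic)."""
    m = re.match(r"(\d+)\s*@\s*(\d+)\s*/\s*(\d+)\s+on\s+(.+)", quote)
    size, sell, buy, topic = m.groups()
    return int(size), int(sell), int(buy), topic

def side_to_take(my_probability, sell_percent, buy_percent):
    """Per the comment above: sell if your probability is above the first
    number, buy if it is below the second, otherwise pass."""
    if my_probability * 100 > sell_percent:
        return "sell"
    if my_probability * 100 < buy_percent:
        return "buy"
    return "pass"

size, sell, buy, topic = parse_quote("25 @ 85/15 on rain tomorrow")
for p in (0.90, 0.50, 0.10):
    print(f"p({topic}) = {p:.0%}: {side_to_take(p, sell, buy)} ${size}")
```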
7Ben Pace
The only reason I've not already done this is that there's already a lot of text in the hover-over. Another option is that you could pair the "I'd bet on this" react with a probability react. There could be a single react that says "I'm willing to bet on this" and then you also react with one of <1%, 10%, 25%, 40%, 50%, 60%, 75%, 90%, 99+%, so that people know what odds you'd take.
2Raemon
lol we did have a debate about this internally before shipping. Right now we’re trying to get a rough sense of ‘would people make more bets on LW if they had more affordance to?’, and an easy thing to try was just making some reacts. But a) I encourage people to reply with followup details if they are interested in betting, and b) if it turns out reasonably popular we may make a more dedicated feature.
2Raemon
(FYI I was happy to see your recent bet with BenG, and am hoping more things like that happen)
4the gears to ascension
I like #2. On a similar thread: would be nice to have a separate section for pinned comments. I looked into pull requesting it at one point but looks like it either isn't as trivial as I hoped, or I simply got lost in the code. I feel like folks having more affordance to say, "Contrary to its upvotes or recency, I think this is one of the most representative comments from me, and others seeing my page should see it" would be helpful - pinning does this already but it has ui drawbacks because it simply pushes recent comments out of the way and the pinned marker is quite small (hence why I edit my pinned comments to say they're pinned).
4Shankar Sivarajan
Suggestion: the two dice should have different numbers on top, maybe a ⚅ on the "true" bet instead of a ⚀.
2Yoav Ravid
The comment option sorting is amazing! Thanks! The new reacts are also cool, though I also liked the "I checked" reacts and would have liked to have both.
3Rana Dexsin
I was pretty sad about the ongoing distortion of “I checked” in what's meant to be an epistemics-oriented community. I think the actual meanings are potentially really valuable, but without some way of avoiding them getting eaten, they become a hazard. My first thought is to put a barrier in the way, but I don't know if that plays well with the reactions system being for lower-overhead responses, and it might also give people unproductive bad feelings unless sold the right way.

Okay, I’ll say it now, because there’s been too many times.

If you want your posts to be read, never, never, NEVER post multiple posts at the same time.

Only do that if you don’t mind none of the posts being read. Like if they’re all just reference posts.

I never read a post if there’s two or more to read, it feels like a slog and like there’s going to be lots of clicking and it’s probably not worth it. And they normally do badly on comments and karma so I don’t think it’s just me.

Even if one of them is just meant as reference, it means I won’t read the other one.

I recently circled for the first time. I had two one-hour sessions on consecutive days, with 6 and 8 people respectively.

My main thoughts: this seems like a great way to get to know my acquaintances, connect emotionally, and build closer relationships with friends. The background emotional processing happening in individuals is repeatedly brought forward as the object of conversation, for significantly enhanced communication/understanding. I appreciated getting to poke and actually find out whether people's emotional states matched the words they were using. I got to ask questions like:

When you say you feel gratitude, do you just mean you agree with what I said, or do you mean you're actually feeling warmth toward me? Where in your body do you feel it, and what is it like?

Not that a lot of my circling time was spent being skeptical of people's words; a lot of the time I trusted the people involved to be accurately reporting their experiences. It was just very interesting - when I noticed I didn't feel like someone was honest about some micro-emotion - to have the affordance to stop and request an honest internal report.

It felt like there was a constant tradeoff betw... (read more)

Here's a quick list of 7 things people sometimes do instead of losing arguments, when it would be too personally costly to change their position.

  1. They pick a variable in the argument that is hard for the debaters to get closer to, and state that their raw-intuition is simply different to the person they're talking to. ("I just believe that human ingenuity will overcome problems like this, and you don't, and this isn't something we're easily going to be able to resolve." or "I think we're just not going to be able to resolve whether this city-wide intervention works or not without a good RCT.")
  2. They undermine your ability to have an opinion in the argument. ("I'm afraid there's very advanced research papers you'd need to read to have an understanding of this" or "You have been wrong so many times on so many issues, I don't think you're worth trusting on this one")
  3. They performatively don't understand basic ideas, or weaponize confusion. ("I'm sorry, could you explain your theory of 'ownership', I don't really understand what you mean when you say you own your company" or "I'm sorry, I'm not sure what it means to 'provoke'? I was just saying what I thought.")
  4. They get emotionally irate i
... (read more)

Schopenhauer's sarcastic essay The Art of Being Right (a manuscript published posthumously) goes in this direction. In it he suggests 38 rhetorical strategies to win a dispute by any means possible. E.g. your 3 corresponds to his 31, your 5 is similar to his 30, and your 7 is similar to his 18. Though he isn't just focusing on avoiding losing arguments, but on winning them.

3Ben Pace
It's quite funny, thanks for the link! Quoting from number 31: Of course it's harder now given that people do have the internet at-hand, but I think I still see this tactic employed.
6Viliam
Yeah, there are 8 billion people like that, so that's probably the most frequent case. I think the list of "things people sometimes do instead of losing arguments" would be very long. Did you pick these 7 because they are most frequent (in your bubble) or just because they irritate you most? I can add an example of a thing that irritates me (but maybe that's just 1 person I talked to too much recently); it's when after explaining some crucial fact that was missing in their 'edgy' perspective, the person suddenly goes: "and why is this topic so important to you? why do you care so much?" as if it's weird to know facts about something, but just a while ago it was okay for them to spend fifteen minutes talking platitudes about it
2Ben Pace
I just picked the first ones that came to mind, so I guess they're probably the ones I run into most frequently.
4Nathan Helm-Burger
Worth keeping in mind is that seeing one of these things happen doesn't prove that what is going on is that subconsciously or consciously they believe they are losing the argument. It also shouldn't be taken as evidence from the universe that your argument is correct. People are fallible and run on noisy hacky biological computers. So yeah, it's worth noticing these sorts of argument breakdowns, but also worth being careful about reading too much into them.

Here's a hypothetical: Suppose instead that the person you are arguing with is very knowledgeable about specific topic X. You have dabbled in X and know more than the average layperson, but lack a deep knowledge of the subject. The two of you have a disagreement about X. The two of you begin arguing about it, but after a few minutes it becomes clear to them that you don't have a deep knowledge of the subject. They think through what they'd need to teach you in order for you to make a valid argument which would potentially change their mind. They estimate it would require at least 6 hours of lecturing, maybe divided up as 3 lectures, with lots of dense technical sources assigned as prerequisite reading before each lecture. They realize that that investment of time and energy, even if you agreed that you needed it and committed to wholeheartedly tackling it, wouldn't be worth it to them in order to continue the argument. They give up on you as a conversational partner on the subject of X, and seek a convenient social out to end the conversation as soon as possible.

Here's another hypothetical: Someone begins having an argument with you, on a topic which for you is mostly emotionally neutral. You are arguing politely for your point of view in a fair and objective way. They, however, do not have an emotionally neutral stance on this. They start out arguing in a fair and objective way, thinking rationally and logically about each point. Soon however, the debate touches on deep powerful feelings they have,

Good posts you might want to nominate in the 2018 Review

I'm on track to nominate around 30 posts from 2018, which is a lot. Here is a list of about 30 further posts I looked at that I think were pretty good but didn't make my top list, in the hopes that others who did get value out of the posts will nominate their favourites. Each post has a note I wrote down for myself about the post.

... (read more)

I was just re-reading the classic paper Artificial Intelligence as a Positive and Negative Factor in Global Risk. It's surprising how well it holds up. The following quotes seem especially relevant 13 years later.

On the difference between AI research speed and AI capabilities speed:

The first moral is that confusing the speed of AI research with the speed of a real AI once built is like confusing the speed of physics research with the speed of nuclear reactions. It mixes up the map with the territory. It took years to get that first pile built, by a small group of physicists who didn’t generate much in the way of press releases. But, once the pile was built, interesting things happened on the timescale of nuclear interactions, not the timescale of human discourse. In the nuclear domain, elementary interactions happen much faster than human neurons fire. Much the same may be said of transistors.

On neural networks:

The field of AI has techniques, such as neural networks and evolutionary programming, which have grown in power with the slow tweaking of decades. But neural networks are opaque—the user has no idea how the neural net is making its decisions—and cannot easily be rendered
... (read more)

Reviews of books and films from my week with Jacob:

Films watched:

  • The Big Short
    • Review: Really fun. I liked certain elements of how it displays bad Nash equilibria in finance (I love the scene with the woman from the ratings agency - it turns out she’s just making the best of her incentives too!).
    • Grade: B
  • Spirited Away
    • Review: Wow. A simple story, yet entirely lacking in cliche, and so seemingly original. No cliched characters, no cliched plot twists, no cliched humour, all entirely sincere and meaningful. Didn’t really notice that it was animated (while fantastical, it never really breaks the illusion of reality for me). The few parts that made me laugh, made me laugh harder than I have in ages.
    • There’s a small visual scene, unacknowledged by the ongoing dialogue, between the mouse-baby and the dust-sprites which is the funniest thing I’ve seen in ages, and I had to rewind for Jacob to notice it.
    • I liked how by the end, the team of characters are all a different order of magnitude in size.
    • A delightful, well-told story.
    • Grade: A+
  • Stranger Than Fiction
    • Review: This is now my go-to film of someone trying something original and just failing. Filled with new ideas, but none executed well, a
... (read more)

"Slow takeoff" at this point is simply a misnomer.

Paul's position should be called "Fast Takeoff" and Eliezer's position should be called "Discontinuous Takeoff".

2Vladimir_Nesov
Slow takeoff doesn't imply absence of discontinuous takeoff a bit later, it just says that FOOM doesn't happen right away and thus there is large AI impact (which is to say, things are happening fast) even pre-FOOM, if it ever happens.
2lc
Why not drop "Fast vs Slow" entirely and just use "continuous" vs. "discontinuous" takeoff to refer to the two ideas?
6Ben Pace
I guess it helps remind everyone that both positions are relatively extreme compared to how most other people have been expecting that the future will go. But continuous vs discontinuous also seems pretty helpful.

I don't normally just write-up takes, especially about current events, but here's something that I think is potentially crucially relevant to the dynamics involved in the recent actions of the OpenAI board, that I haven't seen anyone talk about:

The four members of the board who did the firing do not know each other very well.

Most boards meet a few times per year, for a couple of hours. Only Sutskever works at OpenAI. D'Angelo has worked in senior roles at tech companies like Facebook and Quora, Toner is in EA/policy, and MacAulay has worked at other tech companies (I'm not aware of any overlap with D'Angelo).

It's plausible to me that MacAulay and Toner have spent more than 50 hours in each others' company, but overall I'd probably be willing to bet at even odds that no other pair of them had spent more than 10 hours together before this crisis.

This is probably a key factor in why they haven't written more publicly about their decision. Decision-by-committee is famously terrible, and it's pretty likely to me that everyone pushes back hard on anything unilateral by the others in this high-tension scenario. So any writing representing them has to get consensus, and they're focused on firefighting and ge... (read more)

4Ben Pace
In this mess, Altman and Helen should not be held to the same ethical standards, because I believe one of them has been given a powerful career in substantial part based on her commitments to higher ethical standards (a movement that prided itself on openness and transparency and trying to do the most good). If Altman played deceptive strategies, and insofar as Helen played back the same deceptive strategies as Altman, then she did not honor the EA name. (The name has a lot of dirt on it these days already, but still. It is a name that used to mean something back when it gave her power.) Insofar as you got a position specifically because you were affiliated with a movement claiming to be good and open and honest and to have unusually high moral standards, and then when you arrive you become a standard political player, that's disingenuous.
2ryan_greenblatt
I think Holden being added to the board shouldn't be mostly attributed to his affiliation with EA. And the Helen board seat is originally from this. (The relevant history here is that this is the OpenAI grant that resulted in a board seat while here is a post from just earlier about Holden's takes on EA.)
4Ben Pace
Some historical context.

Holden in 2013 on the GiveWell blog:

Holden in 2015 on the EA Forum (talking about GiveWell Labs, which grew into OpenPhil):

Holden in April 2016 about plans for working on AI: (Dewey who IIRC had worked at FHI and CEA ahead of this, and Beckstead from FHI.)

Holden in 2016 about why they're making potential risks from advanced AI a priority:

Holden about the OpenAI grant in 2017:

As a negative datapoint: I looked through a bunch of the media articles linked at the bottom of this GiveWell page, and most of them do not mention Effective Altruism, only effective giving / cost-effectiveness. So their Effective Altruist identity has had less awareness amongst folks who primarily know of Open Philanthropy through their media appearances.
2Ben Pace
I think this is accurately described as "an EA organization got a board seat at OpenAI", and the actions of those board members reflect directly on EA (whether internally or externally). Why did OpenAI come to trust Holden with this position of power? My guess is Holden and Dustin's personal reputations were substantial factors here, along with Open Philanthropy's major funding source, but also that many involved people's excitement about and respect for the EA movement were a relevant factor in OpenAI wanting to partner with Open Philanthropy, and that Helen's and Tasha's actions have directly and negatively reflected on how the EA ecosystem is viewed by OpenAI leadership.

There's a separate question about why Holden picked Helen Toner and Tasha MacAulay, and to what extent they were given power in the world by the EA ecosystem. It seems clear that these people have gotten power through their participation in the EA ecosystem (as OpenPhil is an EA institution), and to the extent that the EA ecosystem advertises itself as more moral than other places, if they executed the standard level of deceptive strategies that others in the tech industry would in their shoes, then that was false messaging.
4Ben Pace
I'm not quite sure in the above comment how to balance between "this seems to me like it could explain a lot" and also "might just be factually false". So I guess I'm leaving this comment, lampshading it.
3Ben Pace
The most important thing right now: I still don't know why they chose to fire Altman, and especially why they chose to do it so quickly. That's an exceedingly costly choice to make (i.e. with the speed of it), and so when I start to speculate on why, I only come up with commensurately worrying states of affairs, e.g. he did something egregious enough to warrant it, or he didn't and the board acted with great hostility. Them going back on their decision is Bayesian evidence for the latter — if he'd done something egregious, they'd just be able to tell relevant folks, and Altman wouldn't get his job back. So many people are asking this (e.g. everyone at the company). I'll be very worried if the reason doesn't come out.
3Ben Pace
In brief: I'm saying that once you condition on:
  1. The board decided the firing was urgent.
  2. The board does not know each other very well and defaults to making decisions by consensus.
  3. The board is immediately in a high-stakes high-stress situation.
Then you naturally get:
  4. The board fails to come to consensus on public comms about the decision.
2Ben Pace
Also, I don't know that I've said this, but from reading enough of his public tweets, I had blocked Sam Altman long ago. He seemed very political in how he used speech, and so I didn't want to include him in my direct memetic sphere. As a small pointer to why: he would commonly choose not to share object-level information about something, but instead share how he thought social reality should change. I think I recall him saying that the social consensus was wrong about fusion energy, and pushed for it to move in a specific direction; he did this rather than just plainly say what his object level beliefs about fusion were, or offer a particular counter-argument to an argument that was going around. It's been a year or two since I blocked him, so I don't recall more specifics, but it seemed worth mentioning, as a datapoint for folks to include in their character assessments.
2Ben Pace
My current guess is that most of the variance in what happened is explained by a board where 3 out of 4 people don't know the dynamics of upper management in a multi-billion-dollar company, where the board members don't know each other well, and where (for some reason) the decision was made very suddenly. Pretty low expectations given that situation. Seems like Shear was a pretty great get as a replacement, given the hand they were dealt. Assuming that they had a legit reason to fire the CEO, it's probably primarily through lack of skill and competence that they failed, more so than as a result of Altman's superior deal-making skill and leadership abilities (though that was what finished it off).

There's a game for the Oculus Quest (that you can also buy on Steam) called "Keep Talking And Nobody Explodes".

It's a two-player game. When playing with the VR headset, one of you wears the headset and has to defuse bombs in a limited amount of time (either 3, 4 or 5 mins), while the other person sits outside the headset with the bomb-defusal manual and tells you what to do. Whereas with other collaboration games you're all looking at the screen together, in this game the substrate of communication is solely conversation: the other person provides all of your information about how their half is going (i.e. it's not shown on a screen).

The types of puzzles are fairly straightforward computational problems but with lots of fiddly instructions, and require the outer person to figure out what information they need from the inner person. It often involves things like counting numbers of wires of a certain colour, or remembering the previous digits that were being shown, or quickly describing symbols that are not any known letter or shape.

So the game trains you and a partner in efficiently building a shared language for dealing with new problems.

More than that, as the game gets harder, often

... (read more)
6Matt Goldenberg
There's a similar free game for Android and iOS called Space Team that I highly recommend.
4Gordon Seidoh Worley
I use both this game and Space Team as part of training people in the on-call rotation at my company. They generally report that it's fun, and I love it because it usually creates the kind of high-pressure feelings people may experience when on call, so it creates a nice, safe environment for them to become more familiar with those feelings and how to work through them.

On a related note, I'm generally interested in finding more cooperative games with asymmetric information and a need to communicate. Lots of games meet one or two of those criteria, but very few games meet all three simultaneously. For example, Hanabi is cooperative and asymmetric, but lacks much communication (you're not allowed to talk), and many games are asymmetric and communicative but not cooperative (Werewolf, Secret Hitler, etc.) or cooperative and communicative but not asymmetric (Pandemic, Forbidden Desert, etc.).
1ioannes
+1 – this game is great. It's really good with 3-4 people giving instructions and one person in the hot seat. Great for team bonding.

I talked with Ray for an hour about Ray's phrase "Keep your beliefs cruxy and your frames explicit".

I focused mostly on the 'keep your frames explicit' part. Ray gave a toy example of someone attempting to communicate something deeply emotional/intuitive, or perhaps a buddhist approach to the world, and how difficult it is to do this with simple explicit language. It often instead requires the other person to go off and seek certain experiences, or practise inhabiting those experiences (e.g. doing a little meditation, or getting in touch with their emotion of anger).

Ray's motivation was that people often have these very different frames or approaches, but don't recognise this fact, and end up believing aggressive things about the other person e.g. "I guess they're just dumb" or "I guess they just don't care about other people".

I asked for examples that were motivating his belief - where it would be much better if the disagreers took to heart the recommendation to make their frames explicit. He came up with two concrete examples:

  • Jim v Ray on norms for shortform, where during one hour they worked through the same reasons
... (read more)
[-]Zvi280

I find "keep everything explicit" to often be a power move designed to make non-explicit facts irrelevant and non-admissible. This often goes along with burden of proof. I make a claim (real example of this dynamic happening, at an unconference under Chatham house rules: That pulling people away from their existing community has real costs that hurt those communities), and I was told that, well, that seems possible, but I can point to concrete benefits of taking them away, so you need to be concrete and explicit about what those costs are, or I don't think we should consider them.

Thus, the burden of proof was put upon me to show (1) that people central to communities were being taken away, (2) that those people being taken away hurt those communities, (3) in particular measurable ways, (4) that would then impact direct EA causes. And then we would take the magnitudes of effect I could prove using only established facts and tangible reasoning, and multiply them together, to see how big this effect was.

I cooperated with this because I felt like the current estimate of this cost for this person was zero, and I could easily raise that, and that was better than nothing,... (read more)

To complement that: Requiring my interlocutor to make everything explicit is also a defence against having my mind changed in ways I don't endorse but that I can't quite pick apart right now. Which kinda overlaps with your example, I think.

I sometimes will feel like my low-level associations are changing in a way I'm not sure I endorse, halt, and ask for something that the more explicit part of me reflectively endorses. If they're able to provide that, then I will willingly continue making the low-level updates, but if they can't then there's a bit of an impasse, at which point I will just start trying to communicate emotionally what feels off about it (e.g. in your example I could imagine saying "I feel some panic in my shoulders and a sense that you're trying to control my decisions"). Actually, sometimes I will just give the emotional info first. There are a lot of contextual details that lead me to figure out which one I do.

One last bit is to keep in mind that most things (or at least many things) can be power moves.

There's one failure mode, where a person sort of gives you the creeps, and you try to bring this up and people say "well, did they do anything explicitly wrong?" and you're like "no, I guess?" and then it turns out you were picking up something important about the person-giving-you-the-creeps and it would have been good if people had paid some attention to your intuition.

There's a different failure mode where "so and so gives me the creeps" is something you can say willy-nilly without ever having to back it up, and it ends up being its own power move.

I do think during politically charged conversations it's good to be able to notice and draw attention to the power-move-ness of various frames (in both/all directions).

(i.e. in the "so and so gives me the creeps" situation, it's good to note both that you can abuse "only admit explicit evidence" and "wanton claims of creepiness" in different ways. And then, having made the frame of power-move-ness explicit, talk about ways to potentially alleviate both forms of abuse)

6Raemon
Want to clarify here: "explicit frames" and "explicit claims" are quite different, and it sounds here like you're mostly talking about the latter. The point of "explicit frames" is specifically to enable this sort of conversation – most people don't even notice that they're limiting the conversation to explicit claims, or where they're assuming the burden of proof lies, or whether we're having a model-building sharing of ideas or a negotiation.

Also worth noting (which I hadn't really stated, but is perhaps important enough to deserve a whole post to avoid accidental motte/bailey by myself or others down the road): My claim is that you should know what your frames are, and what would change* your mind. *Not* that you should always tell that to other people.

Ontological/Framework/Aesthetic Doublecrux is a thing you do with people you trust about deep, important disagreements where you think the right call is to open up your soul a bit (because you expect them to be symmetrically opening their soul, or that it's otherwise worth it), not something you necessarily do with every person you disagree with (especially when you suspect their underlying framework is more like a negotiation or threat than honest, mutual model-sharing).

*also, not saying you should ask "what would change my mind" as soon as you bump into someone who disagrees with you. Reflexively doing that also opens yourself up to power moves, intentional or otherwise. Just that I expect it to be useful on the margin.
6Zvi
Interesting. It seemed in the above exchanges like both Ben and you were acting as if this was a request to make your frames explicit to the other person, rather than a request to know what the frame was yourself and then share it if that seemed like a good idea. I think for now I still endorse that making my frame fully explicit even to myself is not a reasonable ask, slash is effectively a request to simplify my frame in ways that are likely to be unhelpful. But it's a lot more plausible as a hypothesis.
4Raemon
I've mostly been operating (lately) within the paradigm of "there does in fact seem to be enough trust for a doublecrux, and it seems like doublecrux is actually the right move given the state of the conversation. Within that situation, making things as explicit as possible seems good to me." (But, this seems importantly only true within that situation.)

But it also seemed like both Ben (and you) were hearing me make a more aggressive ask than I meant to be making (which implies some kind of mistake on my part, but I'm not sure which one). The things I meant to be taking as a given are:

1) Everyone has all kinds of implicit stuff going on that's difficult to articulate. The naively Straw Vulcan failure mode is to assume that if you can't articulate it, it's not real.

2) I think there are skills to figuring out how to make implicit stuff explicit, in a careful way that doesn't steamroll your implicit internals.

3) Resolving serious disagreements requires figuring out how to bridge the gap of implicit knowledge. (I agree that in a single-pair doublecrux, doing the sort of thing you mention in the other comment can work fine, where you try to paint a picture and ask them questions to see if they got the picture. But, if you want more than one person to be able to understand the thing, you'll eventually probably want to figure out how to make it explicit without simplifying it so hard that it loses its meaning.)

4) The additional, not-quite-stated claim is "I nowadays seem to keep finding myself in situations where there are enough longstanding serious disagreements worth resolving that it's worth Stag Hunting on Learning to Make Beliefs Cruxy and Frames Explicit, to facilitate those conversations."

I think maybe the phrase "*keep* your beliefs cruxy and frames explicit" implied more of an action of "only permit some things" rather than "learn to find extra explicitness on the margin when possible."
4Raemon
As far as explicit claims go: My current belief is something like: If you actually want to communicate an implicit idea to someone else, you either need 1) to figure out how to make the implicit explicit, or 2) to figure out the skill of communicating implicit things implicitly... which I think actually can be done. But I don't know how to do it and it seems hella hard. (Circling seems to work via imparting some classes of implicit things implicitly, but depends on being in person.)

My point is not at all to limit oneself to explicit things, but to learn how to make implicit things explicit (or otherwise communicable). This is important because the default state often seems to be failing to communicate at all. (But it does seem like an important, related point that trying to push for this ends up sounding very similar, from the outside, to 'only explicit evidence is admissible', which is a fair thing to have an instinctive resistance to.)

But the fact that this is real hard is because the underlying communication is real hard. And I think there's some kind of grieving necessary to accept the fact that "man, why can't they just understand my implicit things that seem real obvious to me?" and, I dunno, they just can't. :/
4Zvi
Agreed it's a learned skill and it's hard. I think it's also just necessary. I notice that the best conversations I have about difficult-to-describe things definitely don't involve making everything explicit; they involve a lot of 'do you understand what I'm saying?' and 'tell me if this resonates' and 'I'm thinking out loud, but maybe'. And then I have insights that I find helpful, and I can't figure out how to write them up, because they'd need to be explicit, and they aren't, so damn. Or even, I try to have a conversation with someone else (in some recent cases, you) and share these types of things, and it feels like I have zero idea how to get into a frame where any of it will make any sense or carry any weight, even when the other person is willing to listen, by even what would normally be strong standards. Sometimes this turns into a post or sequence that ends up explaining some of the thing? I dunno.
6Raemon
FWIW, upcoming posts I have in the queue are:

* Noticing Frame Differences
* Tacit and Explicit Knowledge
* Backpropagating Facts into Aesthetics
* Keeping Frames Explicit

(Possibly, in light of this conversation, adding a post called something like "Be secretly explicit [on the margin]")

I'd been working on a sequence explaining this all in more detail (I think there are a lot of moving parts and inferential distance to cover here). I'll mostly respond in the form of "finish that sequence."

But here's a quick paragraph that more fully expands what I actually believe:

  • If you're building a product with someone (metaphorical product or literal product), and you find yourself disagreeing, and you explain "This is important because X, which implies Y", and they say "What!? But, A, therefore B!" and then you both keep repeating those points over and over... you're going to waste a lot of time, and possibly build a confused frankenstein product that's less effective than if you could figure out how to successfully communicate.
    • In that situation, I claim you should be doing something different, if you want to build a product that's actually good.
    • If you're not building a product, this is less obviously important. If you're just arguing for fun, I dunno, keep at it I guess.
  • A separate, further claim is that the reason you're miscommunicating is because you have a bunch of hidden assumptions in yo
... (read more)
every time you disagree with someone about one of your beliefs, you [can] automatically flag what the crux for the belief was

This is the bit that is computationally intractable.

Looking for cruxes is a healthy move, exposing the moving parts of your beliefs in a way that can lead to you learning important new info.

However, there are an incredible number of cruxes for any given belief. If I think that a hypothetical project should accelerate its development 2x in the coming month, I could change my mind if I learn some important fact about the long-term improvements from spending the month refactoring the entire codebase; I could change my mind if I learn that the current time we spend on things is required for models of the code to propagate and become common knowledge among the staff; I could change my mind if my models of geopolitical events suggest that our industry is going to tank next week and we should get out immediately.

4Raemon
I'm not claiming you can literally do this all the time. [Ah, an earlier draft of the previous comment emphasized that this was all "things worth pushing for on the margin", and explicitly not something you were supposed to sacrifice all other priorities for. I think I then rewrote the post and forgot to emphasize that clarification.]

I'll try to write up better instructions/explanations later, but to give a rough idea of the amount of work I'm talking about: I'm saying "spend a bit more time than you normally do in 'doublecrux mode'". [This can be, like, an extra half hour sometimes when having a particularly difficult conversation.] When someone seems obviously wrong, or you seem obviously right, ask yourself "which cruxes are most load-bearing", and then:

* Be mindful as you do it, to notice what mental motions you're actually performing that help. Basically, do Tuning Your Cognitive Strategies to the double crux process, to improve your feedback loop.
* When you're done, cache the results. Maybe by writing them down, or maybe just by thinking harder about them so you remember them better.

The point is not to have fully mapped out cruxes of all your beliefs. The point is that you have generally practiced the skill of noticing what the most important cruxes are, so that a) you can do it easily, and b) you keep the results computed for later.