Valentine

I was a co-founder of CFAR in 2012. I'd been actively trying to save the world for about a decade at that point. I left in 2018 to seriously purify my mind & being. I realized in 2020 that I'd been using the fear of the end of the world like an addictive drug and did my damnedest to quit cold-turkey. I'm now doing my best to embody an answer to the global flurry in a way that's something like a fusion of game theory and Buddhist Tantra.

Find my non-rationalist writing, social media, and projects at my Linktree.

Comments

Valentine

I think breastfeeding is different because… public health people decided it should be, and we’ve internalized their messaging.

I haven't gone around and checked much, but my gut impression isn't that this is about public health people. I think it's more like a Chesterton's Fence backlash against previous generations' experts claiming that formula was obviously better. IIRC, mothers were warned against using breastmilk and told to go to formula instead, because it's Scientific™. So it took some cultural pushback to reclaim evolution's solution to feeding newborns.

Valentine

I haven't read OP yet, just a quick translation note: 

The Sanskrit word "tanha" shares an etymology with English words like "tenacious", "tendency", and "tenet". The PIE root means "grip" or "hold".

I think most folk in my social circles who use "tanha" these days are referencing Romeo's "(mis)Translating the Buddha":

Tanha is usually translated as desire or craving but this is wrong and misleading. Tanha is more literally translated as 'fused to' or 'welded to'. It immediately follows the mental moment that you zoom in with the attentional aperture on something. It could be that a flower or an item on the shelf at the supermarket captures your attention, or you turn your head to catch more detail as you pass by an accident on the road. Many hundreds of thousands of such events take place in the course of a single day. With most of them attention then relaxes and makes space for the next thing. But with some small proportion you find the mind doesn't quite 'unclench' from the object or some aspect of the object. This tension aspect is why it is sometimes translated as ‘grasping’ which is closer. Imagine something you aren’t finished with being pulled out of your hand and you tensing your fingers to resist.

That seems maybe true. What's the problem you see with that?

I consider ultra-BS a primarily 'central route' argument, as the practitioner uses explicit reasoning to support explicit narrative arguments. […]

Putting someone off balance, on the other hand, is more 'peripheral route' persuasion. There's far more emphasis on the implicit messaging.

Ah! This distinction helped clarify a fair bit for me. Thank you!

 

…I think I might conclude that your implicit primers and vibes are very good at detecting implicit persuasion, which typically but not always has a correlation with dark artsy techniques.

I agree on all counts here. I think I dumped most of my DADA skill points into implicit detection. And yes, the vibes thing isn't a perfect correlation to Dark stuff, I totally agree.

 

Is this example satisfying?

It's definitely helpful! The category still isn't crisp in my mind, but it's a lot clearer. Thank you!

 

Thanks for the response in any case, I really enjoy these discussions! Would you like to do a dialogue sometime? 

I've really enjoyed this exchange too. Thank you!

And sure, I'd be up for a dialogue sometime. I don't have a good intuition for what kind of thing goes well in dialogues yet, so maybe take the lead if & when you feel inspired to invite me into one?

Can you spell this out a little more? Did Brent and LaSota employ baloney-disclaimers and uncertainty-signaling in order to bypass people's defenses?

I think Brent did something different from what I'm describing — a bit more like judo plus DoS attacks.

I'm not as familiar with LaSota's methods. I talked with them several times, but mostly before I learned to detect the level of psychological impact I'm talking about with any detail. Thinking back to those interactions, I remember it feeling like LaSota was confidently asserting moral and existential things that threatened to make me feel inadequate and immoral if I didn't go along with what they were saying and seek out the brain hemisphere hacking stuff they were talking about. And maybe even then I'd turn out to be innately "non-good".

(Implied here is a type of Dark hack I find most folk don't have good defenses against other than refusing to reason and blankly shutting down. It works absurdly well on people who believe they should do what they intellectually conclude makes sense to do.)

The thing I was referring to is something I personally stumbled across. IME rationalists on the whole are generally more likely to take in something said in a low-status way. It's like the usual analyze-and-scrutinize machinery kind of turns off.

One of the weirder examples is, just ending sentences as though they're questions? I'm guessing it's because ending each thing with confidence as a statement is a kind of powerful assertion. But, I mean, if the person talking is less confident then maybe what they're saying is pretty safe to consider?

(I'm demoing back & forth in that paragraph, in case that wasn't clear.)

I think LaSota might have been doing something like this too, but I'm not sure.

(As a maybe weird example: Notice how that last sentence is in fact caveated, but it's still confident. I'm quite sure this is my supposition. I'm sure I'm not sure of the implied conclusion. I feel solid in all of this. My impression is, this kind of solidity is a little (sometimes a lot) disturbing to many rationalists (with some exceptions I don't understand very well — like how Zvi and Eliezer can mostly get away with brazen confidence without much pushback). By my models, the content of the above sentence would have been easier to receive if rewritten along the lines of, "I'm really not sure, but based on my really shaky memories, I kinda wonder if LaSota might have been doing something like this too — but don't believe me too much!")

Does that answer what you'd hoped?

Yep, I think you're basically right on all counts. Maybe a little off about the atheist fellow, but only because of context I didn't think to share until reading your analysis, and what you said is close enough!

It's funny, I'm pretty familiar with this level of analysis, but I still notice myself thinking a little differently about the bookstore guy in light of what you've said here. I know people do the unbalancing thing you're talking about. (Heck, I used to quite a lot! And probably still do in ways I haven't learned to notice. Charisma is a hell of a drug when you're chronically nervous!) But I didn't think to think of it in these terms. Now I'm reflecting on the incident and noticing "Oh, yeah, okay, I can pinpoint a bunch of tiny details when I think of it this way."

The fact that I couldn't tell whether any of these were "ultra-BS" is more the central point to me.

If I could trouble you to name it: Is there a more everyday kind of example of ultra-BS? Not in debate or politics?

I'm gonna err on the side of noting disagreements and giving brief descriptions of my perspective rather than writing something I think has a good chance of successfully persuading you of my perspective, primarily so as to actually write a reply in a timely fashion.

Acknowledged.

 

I don't see this as showing that in all domains one must maintain high offensive capabilities in order to have good defenses.

Oh, uh, I didn't mean to imply that. I meant to say that rejecting attention to military power is a bad strategy for defense. A much, much better defensive strategy is to study offense. But that doesn't need to mean getting good at offense!

(Although I do think it means interacting with offense. Most martial arts fail spectacularly on this point, for instance. Pragmatically speaking, you have to practice actually defending yourself in order to get skillful at defense. And in cases like MMA, that does translate to getting skilled at attack! But that's incidental. I think you could design good self-defense training systems where most people never practice offense.)

 

I think these problems aren't that hard once you have community spaces that are willing to enforce boundaries. Over the last few years I've run many events and spaces, and often gotten references for people who want to enter the spaces, and definitely chosen to not invite people due to concerns about ethics and responsible behavior. I don't believe I would've accepted these two people into the spaces more than once or twice at most.

Nice. And I agree, boundaries like this can be great for a large range of things.

I don't think this helps the Art much though.

And it's hard to know how much your approach doesn't work.

I also wonder how much this lesson about boundaries arose because of the earlier Dark exploits. In which case it's actually, ironically, an example of exactly the kind of thing I'm talking about! Only with the lessons learned much more painfully than I think was necessary, because they weren't sought out.

But also, maybe this is good enough for what you care about. Again, I don't mean to pressure that you should do anything differently.

I'm mostly pushing back against the implication I read that "Nah, our patches are fine, we've got the Dark Arts distanced enough that they're not an issue." You literally can't know that.

 

My position is that most thinking isn't really about reality and isn't truth-tracking, but that if you are doing that thinking then a lot of important questions are surprisingly easy to answer.

Totally agree. And this is a major defense against a lot of the stuff that bamboozles most folk.

 

I think there's a ton of adversarial stuff going on as well, but the primary reason that people haven't noticed that AI is an x-risk isn't because people are specifically trying to trick them about the domain, but because the people are not really asking themselves the question and checking.

I agree — and I'm not sure why you felt this was relevant to say? I think maybe you thought I was saying something I wasn't trying to.

 

(I think there's some argument to be made here that the primary reason people don't think for themselves is because civilization is trying to make them go crazy, which is interesting, though I still think the solution is primarily "just make a space where you can actually think about the object level".)

This might be a crux between us. I'm not sure. But I think you might be seriously underestimating what's involved in that "just" part ("just make a space…"). Attention on the object-level is key, I 100% agree there. But what defines the space? What protects its boundaries? If culture wants to grab you by the epistemic throat, but you don't know how it tries to do so, and you just try to "make a space"… you're going to end up way more confident of the clarity of your thinking than is true.

 

I acknowledge that there are people who are very manipulative and adversarial in illegible ways that are hard to pin down. […] …I think probably there are good ways to help that info rise up and get shared…. I don't think it requires you yourself being very skilled at engaging with manipulative people.

I think there's maybe something of a communication impasse happening here. I agree with what you're saying here. I think it's probably good enough for most cases you're likely to care about, for some reasonable definition of "most". It also strikes me as obvious that (a) it's unlikely to cover all the cases you're likely to care about, and (b) the Art would be deeply enriched by learning how one would skillfully engage with manipulative people. I don't think everyone who wants to benefit from that enrichment needs to do that engagement, just like not everyone who wants to train in martial arts needs to get good at realistic self-defense.

I've said this several times, and you seem to keep objecting to my implied claim of not-that. I'm not sure what's going on there. Maybe I'm missing your point?

 

I do sometimes look at people who think they're at war a lot more than me, and they seem very paranoid and to spend so many cognitive cycles modeling ghosts and attacks that aren't there. It seems so tiring!

I agree. I think it's dumb.

 

I suspect you and I disagree about the extent to which we are at war with people epistemically.

Another potentially relevant point here is that I tend to see large groups and institutions as the primary forces deceiving me and tricking me, and much less so individuals.

Oh! I'm really glad you said this. I didn't realize we were miscommunicating about this point.

I totally agree. This is what I mean when I'm talking about agents. I'm using adversarial individuals mostly as case studies & training data. The thing I actually care about is the multipolar war going on with already-present unaligned superintelligences. Those are the Dark forces I want to know how to be immune to.

I'm awfully suspicious of someone's ability to navigate hostile psychofauna if literally their only defense against (say) a frame controller is "Sus, let's exclude them." You can't exclude Google or wokism or collective anxiety the same way.

Having experienced frame control clawing at my face, and feeling myself become immune without having to brace… and noticing how that skill generalized to some of the tactics that the psychofauna use…

…it just seems super obvious to me that this is really core DADA. Non-cognitive, very deep, very key.

 

  • Personally I would like to know two or three people who have successfully navigated being manipulated, and hopefully have them write up their accounts of that.

Ditto!

 

  • I think aspiring rationalists should maneuver themselves into an environment where they can think clearly and be productive and live well, and maintain that, and not try to learn to survive being manipulated without a clear and present threat that they think they have active reason to move toward rather than away from.

Totally agree with the first part. I think the whole thing is a fine choice. I notice my stance of "Epistemic warriors would still be super useful" is totally unmoved thus far though. (And I'm reminded of your caveat at the very beginning!)

I'm reminded of the John Adams quote: "I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine."

 

I note that when I read your comment I'm not sure whether you're saying "this is an important area of improvement" or "this should be central to the art", which are very different epistemic states.

Oh, I don't know what should or shouldn't be central to the Art.

It just strikes me that rationality currently is in a similar state as aikido.

Aikido claims to be an effective form of self-defense. (Or at least it used to! Maybe it's been embarrassed out of saying that by now?) It's a fine practice, it has immense value… it's just not what it says on the tin.

If it wanted to be what it claims, it would need to do things like add pressure testing. Realistic combat. Going into MMA tournaments and coming back with refinements to what it's doing.

And that could be done in a way that honors its spirit! It can add the constraints that are key to its philosophy, like "Protect everyone involved, including the attacker."

But maybe it doesn't care about that. Maybe it just wants to be a sport and discipline.

That's totally fine!

It does seem weird for it to continue claiming to be effective self-defense though. It's as if it needs its practitioners to keep believing the fake claim.

I think rationality is in a similar state. It has some really good stuff in it. Really good. It's a great domain.

But I just don't see it mattering for the power plays. I think rationalists don't understand power, the same way aikido practitioners don't understand fighting. And they seem to be in a similar epistemic state about it: they think they basically do, but they don't pressure-test their understanding to check, best as I can tell.

So of your two options, it's more like "important area of improvement"… roughly like pressure-testing could be an important area of improvement for aikido. It'd probably become kind of central if it were integrated! But I don't know.

And, I think the current state of rationality is fine.

Just weak in one axis it sometimes claims to care about.

Well, that particular comment had a lot of other stuff going on…

That's really not a central example of what I meant. I meant more like this one. Or this one.

 

But also, yeah, I do kinda feel like "downvoting people when they admit they did something bad" is a thing we sometimes do here and that's not great incentives. If someone wants to avoid that kind of downvote, "stop admitting to the bad thing" seems like an obvious strategy. Oops! And like, I remember times when I asked someone a question and they got downvoted for their answer, and I did think it was a bad answer that in a vacuum deserved downvotes, but I still upvoted as thanks for answering.

Yep. This is messy and unfortunate, I agree.

 

Someone might not have realized the thing they did was bad-according-to-LW, and the downvotes help signal that.

It's not possible to take the downvotes as a signal of this if downvotes get used for a wide range of things. If the same signal gets used for

"This was written in bad form, but if you'd written it differently it would have been welcome"

and

"Your attitude doesn't belong on this website, and you should change it or leave"

and

"I don't like your vibe, so I'm just gonna downvote"

then the feedback isn't precise enough to be helpful in shaping behavior.

 

If someone did a bad thing and doesn't care, maybe we just don't want them here.

True.

Although if the person disagrees with whether it was bad, and the answer to that disagreement is to try to silence them… then that seems to me like a pretty anti-epistemic norm. At least locally.

 

I'd also really like to see a return of the old LW cultural thing of, if you downvote then you explain why. There are some downvotes on my comments that I'm left scratching my head about and going "Okay, whatever." It's hard for downvotes to improve culture if the feedback amounts to "Bad."

I think there's currently too many things that deserve downvotes for that to be realistic.

I have a hard time believing this claim. It's not what I see when I look around.

The dynamic would be pretty simple:

  • After I downvote, I skim the replies to see if someone else already explained what had me do the downvote. If so, I upvote that explanation and agree-vote it too.
  • If there's no such explanation, I write one.

Easy peasy. I seriously doubt the number of things needing downvotes on this site is so utterly overwhelming that this approach is untenable. The feedback would be very rich, the culture well-defined and transparent.

I don't know why LW stopped doing this. Once upon a time it used to cost karma to downvote, so people took downvotes more seriously. I assume there was some careful thought put into changing that system to the current one. I haven't put more than a sum total of maybe ten minutes of thinking into this. So I'm probably missing something.

But without knowing what that something is, and without a lot of reason for me to invest a ton more time into figuring it out… my tentative but clear impression is that what I'm describing would be way better for culture here by a long shot.

Valentine

…I think another pretty good option is "a master rationalist would definitely avoid surrounding themselves with con artists and frauds and other adversarial actors".

I think that's a great option. I'd question a "master rationalist's" skills if they couldn't avoid such adversarial actors, or notice them if they slip through the cracks.

 

I do think there are real skills you are pointing to, but to some extent I prefer the world where I don't have those skills and in place of that my allies and I coordinate to identify and exclude people who are using the dark arts.

I like your preference. I'll say some things, but I want to start by emphasizing that I don't think you're making a wrong or bad choice.

I want to talk about what I think the Art could be, kind of for aesthetic reasons. This isn't to assert anything about what you or any given individual should or shouldn't be doing in any kind of moral sense.

So with that said, here are three points:

 

(1) I think there's a strong analogy here to studying combat and war. Yes, if you can be in a pacifist cluster and just exclude folk who are really into applied competitive strategy, then you have something kind of like a cooperate/cooperate equilibrium. But if that's the whole basis of your culture, it's extremely vulnerable, the way cooperate-bot is vulnerable in prisoners' dilemmas. You need military strength, the way a walled garden needs walls. Otherwise folk who have military strength can just come take your resources, even if you try to exclude them at first.

At the risk of using maybe an unfair example, I think what happened with FTX last year maybe illustrates the point.

Clearer examples in my mind are Ziz and Brent. The point isn't "These people are bad!" Rather, these people were psychologically extremely potent, and lots of folk in the community could neither (a) adequately navigate their impact (myself included!) nor (b) rally ejection/exclusion power until well after they'd already had their impact.

Maybe, you might hope, you can make the ejection/exclusion sensitivity refined enough to work earlier. But if you don't do that by studying the Dark Arts, and becoming intimately familiar with them, then what you get is a kind of naïve allergic response that Dark Artists can weaponize.

Again, I don't mean that you in particular or even rationalists in general need to address this. There's nothing wrong with a hobby. I'm saying that as an Art, it seems like rationality is seriously vulnerable if it doesn't include masterful familiarity with the Dark Arts. Kind of like, there's nothing wrong with practicing aikido as a sport, but you're not gonna get the results you hope for if you train in aikido for self-defense. That art is inadequate for that purpose and needs exposure to realistic combat to matter that way.

 

(2) …and I think that if the Art of Rationality were to include intimate familiarity with the Dark Arts, it would work way way better.

Things like the planning fallacy or confirmation bias are valuable to track. I could stand to improve my repertoire here for sure.

But the most potent forms of distorted thinking aren't about sorting out the logic. I think they look more like reaching deep down and finding ways to become immune to things like frame control.

Frame control is an amazing example in my mind precisely because of the hydra-like nature of the beast. How do you defend against frame control without breaking basic things about culture and communication and trust? How do you make it so your cultural and individual defenses don't themselves become the manual that frame controllers use to get their desired effects?

And this barely begins to touch on the kind of impact that I'd want to call "spiritual". By which I don't mean anything supernatural; I'm talking about the deep psychological stuff that (say) conversing with someone deep in a psilocybin trip can do to the tripper. That's not just frame control. That's something way deeper, like editing someone's basic personality operating system code. And sometimes it reaches deeper even than that. And it turns out, you don't need psychedelics to reach that deep; those chemical tools just open a door that you can open other ways, voluntarily or otherwise, sometimes just by having a conversation.

The standard rationalist defense I've noticed against this amounts to mental cramping. Demand everything go through cognition, and anything that seems to try to route around cognition gets a freakout/shutdown/"shame it into oblivion" kind of response. The stuff that disables this immune response is really epistemically strange — things like prefacing with "Here's a fake framework, it's all baloney, don't believe anything I'm saying." Or doing a bunch of embodied stuff to act low-status and unsure. A Dark Artist who wanted to deeply mess with this community wouldn't have to work very hard to do some serious damage before getting detected, best as I can tell (and as community history maybe illustrates).

If this community wanted to develop the Art to actually be skillful in these areas… well, it's hard to predict exactly what that'd create, but I'm pretty sure it'd be glorious. If I think of the Sequences as retooling skeptical materialism, I think we'd maybe see something like a retooling of the best of Buddhist psychotechnology. I think folk here might tend to underestimate how potent that could really be.

(…and I also think that it's maybe utterly critical for sorting out AI alignment. But while I think that's a very important point, it's not needed for my main message for this exchange.)

 

(3) It also seems relevant to me that "Dark Arts" is maybe something of a fake category. I'm not sure it even forms a coherent cluster.

Like, is being charismatic a Dark Art? It certainly can be! It can act as a temptation. It seems to be possible to cultivate charisma. But the issue isn't that charisma is a Dark Art. It's that charisma is mostly symmetric. So if someone has a few slightly anti-epistemic social strategies in them, and they're charismatic, this can have a net Dark effect that's even strategic. But this is a totally normal level of epistemic noise!

Or how about something simpler, like someone using confirmation bias in a way that benefits their beliefs? Astrology is mostly this. Is astrology a Dark Art? Is talking about astrology a Dark Art? It seems mostly just epistemically hazardous… but where's the line between that and Dark Arts?

How about more innocent things, like when someone is trying to understand systemic racism? Is confirmation bias a helpful pattern-recognizer there, or a Dark Art? Maybe it's potentially in service to the Dark Arts, but a necessary risk for learning the patterns?

I think Vervaeke makes this point really well. The very things that allow us to notice relevance are precisely the things that allow us to be fooled. Rationality (and he explicitly cites this — even the Keith Stanovich stuff) is a literally incomputable practice of navigating both Type I and Type II errors in this balancing act between relevance realization and being fooled.

When I think of central examples of Dark Arts, I think mostly of agents who exploit this ambiguity in order to extract value from others.

…which brings me back to point (1), about this being more a matter of skill in war. The relevant issue isn't that there are "Dark Arts". It's that there are unaligned agents who are trying to strategically fool you. The skill isn't to detect a Dark toolset; it's to detect intelligent intent to deceive and extract value.

 

All of which is to say:

  • I think a mature Art of Rationality would most definitely include something like skillful navigation of manipulation.
  • I don't think every practitioner needs to master every aspect of a mature Art. Much like not all cooks need to know how to make a roux.
  • But an Art that has detection, exclusion, & avoidance as its only defense against Dark Artists is a much poorer & more vulnerable Art. IMO.

The unspoken but implicit argument is that Russia doesn't need a reason to nuke us. If we give them the Arctic there's no question, we will get nuked.

Ah, interesting, I didn't read that assumption into it. I read it as "The power balance will have changed, which will make Russia's international bargaining position way stronger because now it has a credible threat against mainland USA."

I see the thing you're pointing out as implicit though. Like an appeal to raw animal fear.

 

For a successful nuclear first strike to be performed Russia must locate all of our military assets (plus likely that of our NATO allies as well), take them all out at once, all while the CIA somehow never gets wind of a plan.

That makes a lot of sense. I didn't know about the distributed and secret nature of our nuclear capabilities… but it's kind of obvious that that's how it'd be set up, now that you say so. Thank you for spelling this out.

 

Reactions like yours are thus part of what I was counting on when making the argument. It works because in general I can count on people not having prior knowledge. (don't worry, you're not alone)

Makes sense!

And I wasn't worried. I'm actually not concerned about sounding like (or being!) an idiot. I'm just me, and I have the questions I do! But thank you for the kindness in your note here.

 

It also seems rather incongruous with most people's model of the world […]. Suppose Russia was prepared to nuke the US, and had a credible first strike capability. Why isn't Uncle Sam rushing to defend his security interests? Why haven't pundits and politicians sounded the alarm? Why has there been no diplomatic incidents? A second Cuban missile crisis? A Russian nuclear attack somewhere else?

I gotta admit, my faith in the whole system is pretty low on axes like this. The collective response to Covid was idiotic. I could imagine the system doing some stupid things simply because it's too gummed up and geriatric to do better.

That's not my main guess about what's happening here. I honestly just didn't think through this level of thing when I first read your Arctic argument from your debate. But collective ineptitude is plausible enough to me that the things you're pointing out here just don't land as damning.

But they definitely are points against. Thank you for pointing them out!

 

I hope that answers your question! Is everything clear now?

For this instance, yes!

There's some kind of generalization that hasn't happened for me yet. I'm not sure what to ask exactly. I think this whole topic (RE what you're saying about Dark Arts) is bumping into a weak spot in my mind that I wasn't aware was weak. I'll need to watch it & observe other examples & let it settle in.

But for this case: yes, much clearer!

Thank you for taking the time to spell all this out!
