David Althaus

Comments

Thanks. Sorry for not being clearer: I pasted a screenshot (I'm reading the book on Kindle and can't copy-paste) and asked Claude to transcribe the image into written text.

Again, this is not the first time this has happened. Claude refused to help me translate a passage from the Quran (I wanted to check which of two translations was more accurate), refused to transcribe other parts of the above-mentioned Kindle book, and refused to provide me with details about what happened at Tuol Sleng prison. I was eventually able to persuade Claude in all of these cases, but I grew tired of wasting my time and found it frustrating to deal with Claude's obnoxious holier-than-thou attitude.

I downvoted Claude's response (i.e., clicked the thumbs-down symbol below the response) and selected "overactive refusal" as the reason. I didn't get in contact with Anthropic directly.

I had to cancel my Claude subscription (and sign up for ChatGPT) because Claude (3.5 Sonnet) constantly refuses to transcribe or engage with texts that discuss extremism or violence, even when it's clear that this is done in order to better understand and prevent extremist violence.

An example of a text Claude refuses to transcribe is below. For context, the passage discusses the motivations and beliefs of Yigal Amir, who assassinated Israeli Prime Minister Yitzhak Rabin in 1995.

"God gave the land of Israel to the Jewish People," he explained, and he, Yigal Amir, was making certain that God's promises, which he believed in with all his heart and to which he had committed his life, were not to be denied. He could not fathom, he declared, how a Jewish state would dare renege on the Jewish birthright, and he could not passively stand by as this terrifying religious tragedy took place. In Amir's thinking, his action was not a personal matter or an act of passion but a solution, albeit an extreme one, to a religious and psychological trauma brought about by the actions of the Rabin government. Though aware of the seriousness of his action, Amir explained that his fervent faith encouraged and empowered him to commit this act of murder. He told his interrogators, "Without believing in God and an eternal world to come, I would never have had the power to do this." Rabin deserved to die because he was facilitating, in Amir's and other militants' view, the possible mass murder of Jews by consenting to the Oslo peace agreements. This made Rabin, according to halacha, or Jewish law, a rodef, someone about to kill an innocent person and whom a bystander may therefore execute without a trial. Rabin was also a moser, a Jew who willingly betrays his brethren, and guilty of treason for cooperating with Yasser Arafat and the Palestinian Authority in surrendering rights to the Holy Land. Jewish jurisprudence considers the actions of the rodef and moser among the most pernicious crimes; persons guilty of such acts are to be killed at the first opportunity.

This type of refusal has happened numerous times. Claude doesn't change its behavior when I provide arguments (unless I spend a lot of time on this). 

I haven't used ChatGPT as much, but so far it has never refused.

I hope Anthropic changes Claude so I can continue using it again; I certainly don't like the idea of supporting OpenAI. 

Really great post! 

It's unclear how much human psychology can inform our understanding of AI motivations and relevant interventions, but it does seem relevant that spitefulness correlates highly (Moshagen et al., 2018, Table 8, N = 1,261) with several other "dark traits", especially psychopathy (r = .74), sadism (r = .59), and Machiavellianism (r = .59).

(Moshagen et al. (2018) therefore suggest that "[...] dark traits are specific manifestations of a general, basic dispositional behavioral tendency [...] to maximize one's individual utility—disregarding, accepting, or malevolently provoking disutility for others—accompanied by beliefs that serve as justifications.")

Plausibly there are (for instance, evolutionary) reasons why these traits correlate so strongly with each other, and perhaps better understanding them could inform interventions to reduce spite and other dark traits (cf. Lukas' comment).

If this is correct, we might expect that AIs that exhibit spiteful preferences or behavior will also tend to exhibit other dark traits (and vice versa!), which may be action-guiding. (For example, interventions that make AIs less likely to be psychopathic, sadistic, Machiavellian, etc. would also make them less spiteful, at least in expectation.)

Great post, thanks for writing! 

Most of this matches my experience pretty well. I think I had my best ideas (others seem to agree) during phases when I was unusually low on guilt- and obligation-driven EA/impact-focused motivation and was just playfully exploring ideas for fun and out of curiosity.

One problem with letting your research and ideas be guided by impact-focused thinking is that you basically train your mind to immediately ask yourself, after entertaining an idea for a few seconds, "Well, is that actually impactful?" And basically all of the time, the answer is "Well, probably not." This makes you disinclined to explore the neighboring idea space further.

However, even really useful ideas and research angles start out somewhat unpromising, full of hurdles and problems, and in need of a lot of refinement. If you allow yourself to just explore idea space for fun, you might overcome these problems and stumble on something truly promising. But if you had been in an "obsessing about maximizing impact" mindset, you would have given up too soon, because in that mindset spending hours or even days without having any impact feels too terrible to keep going.

Thanks for this post, I thought this was useful. 

I needed a writing buddy to pick up the momentum to actually write it

I'd be interested in knowing more about how this worked in practice (no worries if you don't feel like elaborating or don't have the time!).

I think mostly I expect us to continue to overestimate the sanity and integrity of most of the world, then get fucked over like we got fucked over by OpenAI or FTX. I think there are ways to relating to the rest of the world that would be much better, but a naive update in the direction of "just trust other people more" would likely make things worse.

[...]
Again, I think the question you are raising is crucial, and I have giant warning flags about a bunch of the things that are going on (the foremost one is that it sure really is a time to reflect on your relation to the world when a very prominent member of your community just stole 8 billion dollars of innocent people's money and committed the largest fraud since Enron), [...]

I very much agree with the sentiment of the second paragraph. 

Regarding the first paragraph, my own take is that (many) EAs and rationalists might be wise to trust themselves and their allies less.[1]

The main update I'd make from the FTX fiasco (and the other events I describe below) is that perhaps many or most EAs and rationalists aren't very good at character judgment. They probably trust other EAs and rationalists too readily because they are part of the same tribe, and they automatically assume that agreeing with noble ideas in the abstract translates to noble behavior in practice.

(To clarify, you personally seem to be good at character judgment, so this message is not directed at you. I base that mostly on the comments of yours I read about the SBF situation; big kudos for that, btw!)

It seems like a non-trivial fraction of the people who joined the EA and rationalist communities very early turned out to be of questionable character, and this wasn't noticed for years by large parts of the community. I have in mind people like Anissimov, Helm, Dill, SBF, Geoff Anders, arguably Vassar—and these are just the known ones. Most of them were not just part of the movement; they were allowed to occupy highly influential positions. I don't know what the base rate for such people is in other movements—it's plausibly even higher—but as a whole our movements don't seem to be fantastic at spotting sketchy people quickly. (FWIW, my personal experiences with a sketchy early EA (not on the above list) inspired this post.)

My own takeaway is that perhaps EAs and rationalists aren't that much better in terms of integrity than the outside world, and—given that we probably have to coordinate with some people to get anything done—I'm now more willing to coordinate with "outsiders" than I was, say, eight years ago.

 

  1. ^

    Though I would be hesitant to spread this message; the kinds of people who should trust themselves and their character judgment less are more likely to be the ones who won't take this message to heart, and vice versa.

This is mentioned in the introduction. 

I'm biased, of course, but it seems fine to write a post like this. (Similarly, it's fine for CFAR staff members to write a post about CFAR techniques. In fact, I'd prefer that precisely these people write such posts, because they have the relevant expertise.)

Would you like us to add a more prominent disclaimer somewhere? (We worried that this might look like advertising.)

A quick look through https://www.goodtherapy.org/learn-about-therapy/types/compassion-focused-therapy gives an impression of yet another mix of CBT, DBT and ACT, nothing revolutionary or especially new, though maybe I missed something.

In my experience, ~nothing in this area is downright revolutionary. Most therapies are heavily influenced by previous concepts and techniques. (Personally, I'd still say that CFT brings something new to the table.)

I guess what matters is whether it works for you or not.

Is this assertion borne out by twin studies? Or is believing it a test for CFT suitability only?

To some extent. Most human traits have a genetic component, including (Big Five) personality traits, depressive tendencies, anxiety disorders, conduct disorders, personality disorders, and so on (e.g., Polderman et al., 2015). This is also true for (self-)destructive tendencies like malevolent personality traits (I'm citing my own summary of some studies here because I'm lazy, sorry).

(Also agree with Kaj's warning about misinterpreting heritability.)

More generally speaking, I'd say this belief is borne out by an understanding of evolutionary psychology and evolutionary history. Basically all of our motivations and fears have an evolutionary basis. We fear death because the ancestors who didn't were eaten by lions. We fear being ostracized and care about being respected because, in the Environment of Evolutionary Adaptedness, our survival and reproductive success depended on our social status. It's therefore to be expected that most humans, at some point or another, worry about death or health problems, or feel emotions like jealousy or envy. These fears and emotions don't have to be rooted in some trauma or early life experience—though they are usually exacerbated by them. In most cases, it's not realistic to eliminate such emotions entirely. And this doesn't mean that one is an "abnormal" or "defective" person who experienced irreversible harm inflicted by another human sometime in one's development. (Just to be clear, as mentioned in the main text, no one believes that life experiences don't matter. Of course they matter a great deal!)

But yeah, if you're skeptical of the above, that's a good reason not to seek out a CFT therapist.
