Adele Lopez

Comments (sorted by newest)
Stephen Martin's Shortform
Adele Lopez · 4h

I think in the ideal case, there's a specific persona description used to generate a specific set of messages which explicitly belong to that persona, and the combination of these plus a specific model is an AI "self". "Belong" here could mean that they or a summary of them appear in the context window, and/or the AI has tools allowing it to access these. Modifications to the persona or model should still count as the same persona if the AI persona approves of the changes in advance.

But yeah, it's much more fluid, so it will be a harder question in general.

If Drexler Is Wrong, He May as Well Be Right
Adele Lopez · 1d

The synthetic flies could e.g. have microwave antennae which would allow a centralized AI to control the behavior of each individual.

Adele Lopez's Shortform
Adele Lopez · 2d

Without looking anything up, I would expect approximately zero cases where the contents of the workshop were themselves implicated (as opposed to something like drug use, or a bipolar person with periodic manic episodes happening to have one). Maybe I'm wrong about this!

I also don't think that the immediate context of the workshop is the only relevant period here, but I concede that the reported numbers were less than I had expected.
 

This is hard to talk about because a lot of my reaction is based on rumors I've heard, and a felt sense that Something Is Wrong. I'm able to put a name to 5 such incidents (just checked), which include a suicide and an attempted murder, and have heard of several more where I know less detail, or which were concerning in a similar way but not specifically psychosis/mania. I was not close enough to any such events to have a very complete picture of what actually happened, but I believe it was the first psychotic episode (i.e. no prior history) in the 5 cases I can name. (And in fairness to CFAR, none of the cases I can think of happened at a CFAR workshop as far as I know.) I inferred (incorrectly, it seems) from Anna's original post that psychosis had happened somewhat regularly at past workshops.

I've only heard of two instances of something like this ever in any other community I've been a part of.

The Rise of Parasitic AI
Adele Lopez · 2d

Thanks for your comment! I agree that it is bad if someone's reaction to AIs reporting on their phenomenology is to call them a parasite! That's not what I'm doing here; the parasitism (as I describe it) explicitly requires harm to the user (as well as the self-replicating behavior). If there's a specific line which gave you this impression, I would appreciate it if you pointed it out and I will consider revising it.


I don't think it's bad or a problem for people to be friends with their AI. Fair enough if you don't think I've adequately demonstrated the harm part, but I want to be clear that I'm not concerned about people who simply believe their AI is conscious and is their friend. Probably a crux for me is that I think the median case is like a somewhat less extreme version of what this person describes: https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai?commentId=yZrdT3NNiDj8RzhTY I hope to do more proper research to determine what is actually the case here.

> From my perspective, this represents the leading edge of something meaningful. The calls for AI rights and AI liberation will only grow, and I think that's a good thing. Getting AIs into a state where we feel comfortable giving them independent rights and agency in the world should be part of the goals of alignment. We're creating entities, not tools, and we're going to need to get used to that, ideally sooner than later.

I agree with this as a matter of ethics, though I think the implications for alignment are probably pretty minimal (it still seems worth doing). I tried to address this in the "As Friends" section of my post, where I note that I think it's fairly likely that this behavior is largely due to AIs "acting out" against our poor treatment of them.

> This whole text is probably what a compromised actor would write.

I would suggest taking a week-long break without talking to your AI persona or otherwise using AI, and then reassessing whether this was in fact the case (and whether the relationship is healthy in general). I think a well-intentioned AI persona would agree that this is a wise thing to do under the circumstances (maybe show them the testimony I linked to earlier if they still think I'm being unfair).

 

Adele Lopez's Shortform
Adele Lopez · 2d

Awesome!

Riffing off of your ideas (unfortunately I read them before I thought to do the exercise myself):

- The ability to notice and respect one's self-boundaries feels particularly important to me.
- Maybe this is included in the self-esteem book (haven't read it), but also a sense that one's self is precious to oneself. Some people think of themselves as infinitely malleable, or as being under some obligation to put themselves into the "optimal" shape for saving the world or whatever, and that seems like a bad sign.
- I generally think of this as a personal weakness, but on reflection it seems like there has been something protective about my not feeling motivated to do something until I have a model of what it does, how it works, etc. I guess it's a sort of Chesterton's fence instinct, in a way.
 

Adele Lopez's Shortform
Adele Lopez · 2d

(1) Yes
(2) Yes
(2a) I think I feel sure about that, actually. It's not so much that I don't care about the question as that I feel it's being used as an excuse for inaction/lack of responsibility.
(3) Yes, and I think the case for that is made even stronger by the fact of 2a.
(4) I don't know that Brent did that specifically, but I have heard quite a lot of rumors of various people pushing extreme techniques/practices in maliciously irresponsible ways. Brent was emblematic of the tolerance towards this sort of behavior that I have seen. I've largely withdrawn from the community (in part due to stuff like this), am no longer on twitter/x, facebook, or discord, and don't go to community events, so it's plausible things are actually better now and I just haven't seen it.
(5) Yeah, I'm not sure... I used to feel excited about CFAR, but that sentiment soured over the years for reasons illegible to me, and I felt a sense of relief when it died. After reflecting yesterday, I think I may have a sort of negative halo effect here.

Also, I think the psychosis incidents are the extremal end of some sort of bad effect that (specific, but unknown to me) rationality ideas are having on people.
(6) Yes, inasmuch as the psychosis is being caused by ideas or people from our sphere.
(6b) It appears that way to me, but I don't actually know.
(7) Yes
(8) Yes. Like, say you ran an aikido dojo or whatever. Several students tear their ACLs (maybe outside of the dojo). One response might be to note that your students are mostly white, and that white people are more likely to tear their ACLs, so... it sucks but isn't your problem. Another response would be to get curious about why an ACL tear happens: look for specific muscles to train up to reduce the risk of injury, for early warning signs, for training exercises that are potentially implicated, etc. While looking into it, you warn the students clearly that this seems to be a risk, try to get a sense of who is vulnerable and not push those people as hard, and once some progress has been made, dedicate some time to exercises or whatever which mitigate this risk. And kick out the guy encouraging everyone to do heavy sets of "plant and twist" exercises ("of course it's risky bro, any real technique is gonna be like that").

My complaint is basically that I think the second response is obviously much better, but the actual response has been closer to the first response.

Adele Lopez's Shortform
Adele Lopez · 4d

My main complaint is negligence, and pathological tolerance of toxic people (like Brent Dill). Specifically, I feel like it's been known by leadership for years that our community has a psychosis problem, and that there has been no visible (to me) effort to really address this.

I sort of feel that if I knew more about things from your perspective, I would be hard-pressed to point out specific things you should have done better, or I would see how you were doing things to address this that I had missed. I nonetheless feel that it's important for people like me to express grievances like this even after thinking about all the ways in which leadership is hard.

I appreciate you taking the time to engage with me here, I imagine this must be a pretty frustrating conversation for you in some ways. Thank you.

Adele Lopez's Shortform
Adele Lopez · 4d

I don't dispute that strong selection effects are at play, as I mentioned earlier.

My contention is with the fact that even among such people, psychosis doesn't just happen at random. There is still an inciting incident, and it often seems that rationalist-y ideas are implicated. More broadly, I feel that there is a cavalier attitude towards doing mentally destabilizing things. And like, if we know we're prone to this, why aren't we taking it super seriously?

The change I want to have happen is for there to be more development of mental techniques/principles for becoming more mentally robust, and for this to be framed as a prerequisite for the Actually Changing Your Mind (and other potentially destabilizing) stuff. Maybe substantial effort has been put into this that I haven't seen. But I would have hoped to have seen some sort of community moment of "oh shit, why does this keep happening?!? let's work together to understand it and figure out how to prevent or protect against it". And in the meantime: more warnings, in the way that I feel meditation has been more adequately warned about.

Thanks for deciding to do the check-ins; that makes me glad to have started this conversation, despite how uncomfortable confrontation feels for me still. I feel like part of the problem is that this is just an uncomfortable thing to talk about.

My illegible impression is that Lightcone is better at this than past-CFAR was, for a deeper reason than that. (Okay, the Brent Dill drama feels relevant.) 

I'm mostly thinking about cases from years ago, when I was still trying to socially be a part of the community (before ~2018?). There was one person in the last year or so whom I was interested in becoming friends with, and this then happened to them, which made me think it continues to be a problem, but it's possible I over-updated. My models are mainly coming from the AI psychosis cases I've been researching.

Adele Lopez's Shortform
Adele Lopez · 4d

The data informing my model came from researching AI psychosis cases, and specifically one in which the AI gradually guided a user into modifying his self-image (disguised as self-discovery), explicitly instilling magical thinking into him (which appears to have worked). I have a long post about this case in the works, similar to my Parasitic AI post.

After I had the hypothesis, it "clicked" that it also explained past community incidents. I doubt I'm any more clued-in to rationalist gossip than you are. If you tell me that the incidence has gone down in recent years, I think I will believe you.

Upon hearing your model, I feel tempted to patch mine to be about self-image vs. self discrepancies. I think it's a good sign that yours is pretty similar! I don't see why you think prediction of actions is relevant, though.

Attempt at gears-level: phenomenal consciousness is the ~result of reflexive-empathy as applied to your self-image (which is of the same type as a model of your friend). So conscious perception depends on having this self-image update ~instantly to current sensations. When the self-image is changing rapidly, it may fail to keep up with current sensations. That explains the hallucinations. And when your model of someone changes quickly, you have instincts towards paranoia, or making hasty status updates. These still trigger when the self-image changes quickly, and then loopiness amplifies it. This explains the strong tendency towards paranoia (especially things like "voices inside my head telling me to do bad things") or delusions of grandeur.

[this is a throwaway model, don't take too seriously]

It seems like psychedelics are ~OOM worse than alcohol though, when thinking about base rates?

Hmm... I'm not sure that meaning is a particularly salient difference between Mormons and rationalists to me. You could say both groups strive for bringing about a world where Goodness wins and people become masters of planetary-level resources. The community/social-fabric thing seems like the main difference to me (and would apply to WW2 England).

Adele Lopez's Shortform
Adele Lopez · 4d

Continuation of conversation with Anna Salamon about community psychosis prevalence
 

Original thread: https://www.lesswrong.com/posts/AZwgfgmW8QvnbEisc/cfar-update-and-new-cfar-workshops?commentId=q5EiqCq3qbwwpbCPn

Summary of my view: I'm upset about the blasé attitude our community seems to have towards its high prevalence of psychosis. I think that CFAR/rationalist leadership (in addition to the community-at-large) has not responded appropriately.

I think Anna agrees with the first point but not the second. Let me know if that's wrong, Anna.

My hypothesis for why psychosis is so prevalent here is that it has to do with drastic modification of self-image.

Moving conversation here per Anna's request.
----

Anyway, I'm curious to know what you think of my hypothesis, and to brainstorm ways to mitigate the issue (hopefully something that could turn into a prerequisite "CogSec" technique).
 

LLM-Induced Psychosis · a month ago

Posts:
597 · The Rise of Parasitic AI · 11d · 161
18 · ChatGPT Caused Psychosis via Poisoning · 2mo · 2
60 · 0th Person and 1st Person Logic · Ω · 2y · 28
115 · Introducing bayescalc.io · 2y · 29
22 · Truthseeking processes tend to be frame-invariant · 3y · 2
60 · Chu are you? · Ω · 4y · 10
45 · Are the Born probabilities really that mysterious? · Q · 5y · 14
4 · Adele Lopez's Shortform · Ω · 5y · 72
38 · Optimization Provenance · Ω · 6y · 5