LESSWRONG

FinalFormal2
Comments (sorted by newest)
Estrogen: A trip report
FinalFormal2 · 11d

Yeah, that's one of the comments that inspired me to write mine.

As to your friends not being rationalists: I'm not trying to say that something to do with rationality is causing psychosis. The ideology is not the movement; the social graph is more important than the ideas.

Estrogen: A trip report
FinalFormal2 · 11d

I agree with you that my comment at the end was excessively combative.

If you had just thought of a reason for B and wanted to add it, that would be one thing, but my impression is that mako is deliberately thinking up reasons for B, and that's why there's a list.

That's directed thinking. An insight that happens to favor one side is one thing; searching only for support of one side is another.

That can still be productive in a conversation, but it is a basic and bad bias.

Estrogen: A trip report
FinalFormal2 · 12d

I think you're exhibiting confirmation bias. All your thoughts are pointed in one direction. You have not attempted to think of reasons why psychosis might be underreported in this community.

This is ironic.

Estrogen: A trip report
FinalFormal2 · 12d

It's kind of disturbing to me that so many people in LW and LW-adjacent spaces talk so casually about psychosis, as if everyone has a friend who's been psychotic. Nobody in my personal life has experienced psychosis or psychotic symptoms, but it seems like everyone in this space knows someone who has. Maybe it's all the drugs?

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)
FinalFormal2 · 21d

What's even the point of that? Did Vassar do a lot of that type of thing?

“Flaky breakthroughs” pervade coaching — but no one tracks them
FinalFormal2 · 22d

Which workshop recordings are you talking about?

plex's Shortform
FinalFormal2 · 2mo

Super curious: are you willing to give a sentence or two on the take here?

[Link] A community alert about Ziz
FinalFormal2 · 3mo

No. Because "going off the rails" often involves doing things that are observably irrational even by your own worldview. Like killing your parents and your landlord.

You can say: "this might make sense from their worldview! (soy)"

And the obvious response is: Yes. Because they're crazy. Because they went off the rails.

You can also say: "But we'll never know! Who can know? Nobody knows! Truth is subjective blargh"

And again the obvious response is: Yes, but we can observe patterns. And if you can't update on this evidence and use some basic sense when this sort of thing keeps repeating, you are not thinking clearly.

[Link] A community alert about Ziz
FinalFormal2 · 3mo

"The suffering of animals across the globe is so great, and society is so indifferent to them- that I need to kill my asshole parents and my landlord"

You: omgosh they're so courageous and intelligent uwu

Are you just saying: Ten out of ten for style, but minus several million for good thinking? Because the good thinking part is kind of integral to courage and intelligence imo. If you can't keep your whole organization from facing prison time because you can't come up with a better housing solution than stabbing some old guy, you probably haven't been very courageous or intelligent.

You're probably, and take me very seriously when I say this, just crazy. And the things you do probably (possibly, maybe, just might) do not correlate well at all with your stated reasoning.

FinalFormal2's Shortform
FinalFormal2 · 5mo

"Surely the AIs can be trained to say "I want hugs" or "I don't want hugs," just as easily, no?"

Just as easily as humans, I'm sure.

No. The baby cries, the baby gets milk, the baby does not die. This is correspondence to reality.

Babies that are not hugged as often die more often.

However, with AIs, the same process that produces the pattern "I want hugs" just as easily produces the pattern "I don't want hugs."

Let's say that I make an AI that always says it is in pain. I make it like we make any LLM, but all the data it's trained on is about being in pain. Do you think the AI is in pain?

What do you think distinguishes pAIn from any other AI?
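
Here's a toy sketch of that point (a minimal bigram text model, nothing like a real LLM; the corpora and model names are hypothetical illustration): the identical training procedure emits whichever pattern the data contains, so the output sentence is evidence about the training corpus, not about an inner state.

```python
# Toy sketch: a bigram "language model" trained by counting.
# The identical procedure learns to emit "I want hugs" or
# "I don't want hugs" depending only on the corpus, so the
# output tells you about the data, not about inner states.
from collections import Counter, defaultdict

def train(corpus):
    """Count word-to-next-word transitions over a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = ["<s>"] + sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, max_words=8):
    """Greedily emit the most likely continuation starting from <s>."""
    word, out = "<s>", []
    while word in counts and len(out) < max_words:
        word = counts[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

hug_model = train(["I want hugs"] * 100)           # "pro-hug" corpus
no_hug_model = train(["I don't want hugs"] * 100)  # "anti-hug" corpus

print(generate(hug_model))     # -> I want hugs
print(generate(no_hug_model))  # -> I don't want hugs
```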

Posts (karma · title · age · comment count)

3 · FinalFormal2's Shortform · 5mo · 5 comments
16 · Is there a CFAR handbook audio option? [Q] · 8mo · 0 comments
3 · EndeavorOTC legit? [Q] · 8mo · 0 comments
25 · When can I be numerate? [Q] · 10mo · 4 comments
10 · Where should I look for information on gut health? [Q] · 10mo · 10 comments
10 · How do I get better at D&D Sci? [Q] · 1y · 7 comments
-2 · Short Post: Discerning Truth from Trash · 1y · 0 comments
1 · When does an AI become intelligent enough to become self-aware and power-seeking? [Q] · 2y · 1 comment
8 · What are the arguments for/against FOOM? [Q] · 2y · 0 comments
5 · What's the consensus on porn? [Q] · 2y · 19 comments