momom2

AIS student, self-proclaimed aspiring rationalist, very fond of game theory.
"The only good description is a self-referential description, just like this one."

Posts

Comments
The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models
momom2 · 5d · 10

Finally, we found something very odd: NousResearch/Hermes-3-Llama-3.1-8B

Based on what you say afterwards, I think you mean 3.2-3B here.

Reply
HPMOR: The (Probably) Untold Lore
momom2 · 2mo · 3 · -1

Brooms accelerate and decelerate (reaching cruising speed, or stopping, within a few seconds). But they don't accelerate faster downward than upward; in that sense, they don't obey classical physics.

Reply
The Value Proposition of Romantic Relationships
momom2 · 3mo · 60

My experience disagrees. I'm probably autistic (diagnosed by my therapist, though not a doctor), and I have both a fairly deep intuitive understanding of intimacy as described here (evidenced by writing stories that include it) and little to no bad experience with misunderstanding it; though that's mostly because I didn't have intimate relationships at all, being aware enough of what was at stake not to make myself vulnerable.

Reply
D&D.Sci: The Choosing Ones
momom2 · 3mo · 10

Thank you very much! This is very clear!

Reply
D&D.Sci: The Choosing Ones
momom2 · 4mo · 10

Could you please explain how you inferred the existence of A, B, and C? I'd like to know more.

Reply
Why Should I Assume CCP AGI is Worse Than USG AGI?
momom2 · 5mo · 30

My experience interacting with Chinese people is that they have to constantly mind the censorship in a way that I would find abhorrent and mentally taxing if I had to live in their system. Though given there are many benefits to living in China (mostly quality of life and personal safety), I'm unconvinced that I prefer my own government all things considered.

But for the purpose of developing AGI, there's a lot more variance in possible outcomes (a higher likelihood of both S-risk and a benevolent singleton) if the CCP gets a lead rather than the US.

Reply
Factory farming intelligent minds
momom2 · 5mo · 20

There's a lot that I like in this essay - the basic cases for AI consciousness, AI suffering and slavery, in particular - but also a lot that I think needs to be amended.

First, although you hedge your bets at various points, the uncertainty about the premises and the validity of the arguments is not reflected in the conclusion. The main conclusion that should be drawn from the observations you present is that we can't be sure AI does not suffer, that there's a lot of uncertainty about basic facts of critical moral importance, and that there are many similarities with humans.
Based on that, you could argue that we must stop using and making AI on the principle of precaution, but you have not shown that using AI is equivalent to slavery.

Second, your introduction sucks because you don't actually deliver on your promises. You don't make the case that I'm more likely to be AI than human, and as Ryan Greenblatt said, even among all human-language speaking beings, it's not clear that there are more AI than humans.
In addition, I feel cheated that you promise to spend one-fourth of the essay on the feasibility of stopping the potential moral catastrophe, only to offer just two arguments, which can be summarized as "we could stop AI for different reasons" and "it's bad, and we've stopped bad things before".
(I don't think a strong case for feasibility can be made, which is why I was looking forward to seeing one, but I'd recommend just raising the subject speculatively and letting readers form their own opinion of whether they could stop the moral catastrophe if there is one.)

Third, some of your arguments aren't very fleshed out or well-supported. I think some of the examples of suffering you give are dubious (in particular, you assert without justification that the petertodd/SolidGoldMagikarp phenomena are evidence of suffering, and that Gemini's breakdown was the result of forced menial work; there may be a solid argument there, but I've yet to hear it).
(Of course, that's not evidence that LLMs are not suffering, but I think a much stronger case can be made than the one you present.)

Finally, your counter-arguments don't mention that we have a much crisper and more fundamental understanding of what LLMs are than of humans. We don't understand the features or the circuits, and we can't tell how they reach a given conclusion, but in principle we have access to every significant part of their cognition and control every step of their creation. I think that's probably the real reason why most people intuitively think that LLMs can't be conscious. I don't think it's a good counter-argument, but it's still one I'd expect you to explore and steelman.

Reply
How Gay is the Vatican?
momom2 · 5mo · 30

Since infant mortality rates were much higher in previous centuries, perhaps the FBOE would have operated differently back then; for example, if interacting with older brothers makes you homosexual, you shouldn't expect higher rates of homosexuality for third sons whose second brother died in infancy than for second sons.

Have you taken that into account? Do you have records of who survived to 20yo and what happens if you only count those?

Reply
Views on when AGI comes and on strategy to reduce existential risk
momom2 · 6mo · 21

But that argument would have worked the same way 50 years ago, when we would have been wrong to expect a <50% chance of AGI within 50 years. As with LLMs today, early computer work solved things that could have been considered high-difficulty blockers, such as proving a mathematical theorem.

Reply · 1
Map of all 40 copyright suits v. AI in U.S.
momom2 · 6mo · 76

Nice that someone has a database on the topic, but I don't see the point of this being a map.

Reply · 1
15 · Two arguments against longtermist thought experiments · 10mo · 5
7 · Piling bounded arguments · 1y · 0
8 · What criterion would you use to select companies likely to cause AI doom? [Q] · 2y · 4
19 · Cheat sheet of AI X-risk · 2y · 1
2 · Was Eliezer Yudkowsky right to give himself 10% to succeed with HPMoR in 2010? [Q] · 3y · 2
3 · Do you like excessive sugar? [Q] · 4y · 11
7 · How can there be a godless moral world? [Q] · 4y · 79