cubefox

Comments, sorted by newest
Against asking if AIs are conscious
cubefox · 18d

To say that Eliezer is a moral realist is deeply, deeply misleading.

No, it is not at all misleading. He is quite explicit about that in the linked Arbital article. You might want to read it.

Eliezer’s ethical theories correspond to what most philosophers would identify as moral anti-realism (most likely as a form of ethical subjectivism, specifically).

They definitely would not. His theory immediately qualifies as moral realist. Helpfully, he makes that very clear himself:

Within the standard terminology of academic metaethics, "extrapolated volition" as a normative theory is:

  • Cognitivist. Normative propositions can be true or false. You can believe that something is right and be mistaken.

He explicitly classifies his theory as a cognitivist theory, which means it ascribes truth values to ethical statements. Since it is a non-trivial cognitivist theory (it doesn't make all ethical statements false, or all true, and your ethical beliefs can be mistaken, in contrast to subjectivism), it straightforwardly counts as a "moral realist" theory in metaethics.

He does argue against moral internalism (the thesis that having an ethical belief is inherently motivating), but this is not considered a requirement for moral realism. In fact, most moral realist theories are not internalist. His theory also implies moral naturalism, which is again common among moral realist theories (though not required). In summary, his theory not only qualifies as moral realist, it does so straightforwardly. So yes, according to metaethical terminology, he is a moral realist, and not even an unusual one.

Additionally, he explicitly likens his theory to Frank Jackson's Moral Functionalism (which is indeed very similar to his own!), an uncontroversial case of a moral realist theory.

Against asking if AIs are conscious
cubefox · 18d

An illusion is a perception that doesn't accurately represent external reality. So a perception by itself cannot be an illusion, since an illusion is a relation (a mismatch) between perception and reality. The Müller-Lyer illusion is a mismatch between the perception "line A looks longer than line B" (which is true) and the state of affairs "line A is longer than line B" (which is false). The physical line on the paper is not longer, but it looks longer. The reason is that sense information is already preprocessed before it arrives in the part of the brain that creates conscious perception. We don't perceive the raw pixels, so to speak, but something that has been enhanced in various ways, which leads to optical illusions in edge cases.

Viliam's Shortform
cubefox · 18d

I assume the minimum number was put in place to prevent another method of gaming the system.

Maybe Social Anxiety Is Just You Failing At Mind Control
cubefox · 18d

I think it's trainable mainly in an indirect sense: if someone has a lot of social grace, they can pull off things (e.g. a risqué joke in front of a woman) that would be perceived as cringe or creepy coming from people with significantly less social grace. The latter can become less cringe/creepy by learning not to attempt things that are beyond their social grace capabilities. (This reduces their extraversion, in contrast to treating anxiety, which boosts extraversion.)

I think children already show significant differences in social grace. I remember a kid in elementary school who got bullied a lot because of his often obnoxious behavior. He had both very low social grace and very low social anxiety. I assume that with time he learned to become less outgoing, because being outgoing wasn't working in his favor. Outgoing behavior can be scaled down at will, but social grace can't easily be scaled up. Even people with a lot of social grace don't explicitly know how they do it. It's a form of nonverbal body/intonation language that seems to be largely innate and unconscious, perhaps controlled by a (phylogenetically) older part of the brain, e.g. the cerebellum rather than the cortex.

Of course, that's all anecdote and speculation, but I would hypothesize that, statistically, a person's level of social grace tends to stay largely the same over the course of their life. The main reason is that I think lack of social grace is strongly related to ASD, which is relatively immutable. It seems people can't change their natural "EQ" (which includes social grace) much beyond learning explicit rules about how to behave according to social norms, similar to how people can't change their natural IQ much beyond acquiring more knowledge.

Against asking if AIs are conscious
cubefox · 18d

The issue is that most people have only read what he wrote in the Sequences and didn't read Arbital.

Against asking if AIs are conscious
cubefox · 18d

Human moral judgement seems easily explained as an evolutionary adaptation for cooperation and conflict resolution,

That's not true: You can believe that what you do or did was unethical, which doesn't need to have anything to do with conflict resolution.

and very poorly explained by perception of objective facts. If such facts did exist, this doesn't give humans any reason to perceive

Beliefs are not perceptions. Perceptions are infallible, beliefs are not, so this seems like a straw man.

or be motivated by them.

Moral realism only means that moral beliefs, like all other contingent beliefs, can be true or false. It doesn't mean that we are necessarily or fully motivated to be ethical. In fact, some people don't have any altruistic motivation at all (people with psychopathy), but that only means they don't care to behave ethically, and they can be perfectly aware that they are behaving unethically.

Maybe Social Anxiety Is Just You Failing At Mind Control
cubefox · 19d

Unfortunately, I think social grace can only be trained to a small degree, for reasons similar to why ASD isn't curable. Some people just have a natural social grace, others are much more socially awkward, and removing the latter's social inhibitions too much may make them come across as "cringe" or even "creepy".

Kajus's Shortform
cubefox · 19d

Current LLMs can already do this, e.g. when implementing software in agentic coding environments like Cursor.

reallyeli's Shortform
cubefox · 19d

This is also discussed here. LLMs seem to do a form of imitation learning (evidence: the plateau of base LLM capabilities at roughly human level), while humans, and animals in general, predict reality more directly. The latter is not a form of imitation and is therefore not bounded by the abilities of some imitated system.
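As a toy sketch of that bound (my own illustration, not from the linked discussion): a learner fit to the outputs of an imperfect "human" approximation of reality inherits that approximation's error floor, while the same learner fit to reality directly does not. The functions `reality` and `human` below are hypothetical stand-ins, and polynomial least squares is just a stand-in learner.

```python
# Toy illustration: imitation learning is bounded by the imitated system.
# "reality" is a hypothetical ground-truth process; "human" is an imperfect
# approximation of it (the system being imitated). Both are made up here.
import numpy as np

rng = np.random.default_rng(0)

def reality(x):
    return np.sin(3 * x)  # hypothetical ground truth

def human(x):
    return reality(x) + 0.3 * np.sin(11 * x)  # imitated system: systematic error

# The same simple learner (degree-11 polynomial least squares) for both.
x_train = rng.uniform(-1, 1, 2000)
X_train = np.vander(x_train, 12)
w_imitate, *_ = np.linalg.lstsq(X_train, human(x_train), rcond=None)
w_direct, *_ = np.linalg.lstsq(X_train, reality(x_train), rcond=None)

# Evaluate both models against reality on fresh data.
x_test = rng.uniform(-1, 1, 2000)
X_test = np.vander(x_test, 12)
mse_imitate = np.mean((X_test @ w_imitate - reality(x_test)) ** 2)
mse_direct = np.mean((X_test @ w_direct - reality(x_test)) ** 2)
print(f"MSE vs reality, trained on human output: {mse_imitate:.2e}")
print(f"MSE vs reality, trained on reality:      {mse_direct:.2e}")
# The imitation-trained model's error vs reality plateaus at roughly the
# human's own error; the directly-trained model has no such floor.
```

LLM pretraining is of course vastly more complex than least squares; the sketch only shows the structural point about the bound, not anything about actual LLM training dynamics.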

Maybe Social Anxiety Is Just You Failing At Mind Control
cubefox · 20d

Thanks, that was an interesting post; it seems like an overall plausible theory. In fact, it's more plausible than the recent one by Chipmonk you linked to, as your theory is much broader and somewhat subsumes Chipmonk's (per point two in your list).

I think one common reason for social anxiety is still missing from this list, though: fear of being humiliated. A rejection, or a cringe comment, can feel excessively humiliating to someone with social anxiety, even if they don't believe the other person will feel awkward or will dislike them.

I think that's indeed something exposure therapy can help with. Just thinking something like "this fear of humiliation is clearly exaggerated, let's not worry about it" is like thinking "this fear of spiders is clearly exaggerated, let's not worry about it". It won't help much, because the fear itself isn't really what's exaggerated; it's a consequence of something that is. The fear comes from correctly predicting that a spider touching you would freak you out excessively, just as you correctly predict that you would feel excessively humiliated if a social faux pas or a rejection were to occur. It's more a phobia than an anxiety proper. I don't think you can reason yourself out of a phobia without some form of "exposure therapy".

Though again, that's only one additional potential cause of social anxiety, and it doesn't apply to every case.

Posts

cubefox's Shortform (1y)
Is LLM Translation Without Rosetta Stone possible? [Question] (1y)
Are Intelligence and Generality Orthogonal? (3y)