
cubefox

Comments

Shorter Tokens Are More Likely
cubefox · 6d

I'm probably also misunderstanding, but wouldn't this predict that large production models prefer words starting with "a" and names starting with "I" (capital "i")? These letters are, simultaneously, frequently used words in English, which makes it likely that the tokenizer includes the tokens " a" and " I" and that the model is incentivized to use them.
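
As a quick sanity check, here is a minimal sketch of how one could verify this, assuming the Python tiktoken package is installed and using the cl100k_base vocabulary as a stand-in for a production model's tokenizer:

```python
# Minimal sketch (assumption: tiktoken is installed; cl100k_base stands in
# for a production model's tokenizer). Checks whether " a" and " I" exist
# as single tokens in the vocabulary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for s in [" a", " I", " and", " The"]:
    ids = enc.encode(s)
    print(f"{s!r} -> {len(ids)} token(s): {ids}")
```

If " a" and " I" each come back as a single token, the incentive described above would apply to them.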

Hidden Reasoning in LLMs: A Taxonomy
cubefox · 6d

This is an excellent clarification of an important topic!

Banning Said Achmiz (and broader thoughts on moderation)
cubefox · 9d

I have also noticed in the past his sometimes unusually hostile/gaslighting/uncharitable/unproductive war-of-attrition discussion style when he disagrees with someone, described here in detail by habryka. This includes his aggressive/escalating voting behavior in simple one-on-one disagreements, also mentioned by habryka. (I also wondered whether sock puppet accounts or specific "voting friends" were involved, but as far as I can see habryka didn't mention that these exist, which is some evidence that they don't.) I have not seen anyone else act like that, so I don't think this is a case of simply "banning people who voice criticism". There are countless people posting outspoken criticisms without remotely employing an unconstructive style like that.

Reading now that this has been going on for many years, including temporary bans, I believe this is a psychological property of his personality, and likely not something he can really control. It is similar to how some people have a natural tendency to voice disagreements in a friendly and productive manner, just the other way round.

Against asking if AIs are conscious
cubefox · 3mo

> To say that Eliezer is a moral realist is deeply, deeply misleading.

No, it is not at all misleading. He is quite explicit about that in the linked Arbital article. You might want to read it.

> Eliezer’s ethical theories correspond to what most philosophers would identify as moral anti-realism (most likely as a form of ethical subjectivism, specifically).

They definitely would not. His theory would immediately qualify as moral realist. Helpfully, he makes that very clear himself:

> Within the standard terminology of academic metaethics, "extrapolated volition" as a normative theory is:
>
> • Cognitivist. Normative propositions can be true or false. You can believe that something is right and be mistaken.

He explicitly classifies his theory as a cognitivist theory, which means it ascribes truth values to ethical statements. Since it is a non-trivial cognitivist theory (it doesn't make all ethical statements false, or all true, and your ethical beliefs can be mistaken, in contrast to subjectivism), it straightforwardly counts as a "moral realist" theory in metaethics.

He does argue against moral internalism (the claim that having an ethical belief is inherently motivating), but this is not considered a requirement for moral realism. In fact, most moral realist theories are not moral internalist. His theory also implies moral naturalism, which is again common for moral realist theories (though not required). In summary, his theory not only qualifies as a moral realist theory, it does so straightforwardly. So yes, according to metaethical terminology, he is a moral realist, and not even an unusual one.

Additionally, he explicitly likens his theory to Frank Jackson's Moral Functionalism (which is indeed very similar to his theory!), and Moral Functionalism is considered an uncontroversial case of a moral realist theory.

Against asking if AIs are conscious
cubefox · 3mo

An illusion is a perception not accurately representing external reality. So a perception by itself cannot be an illusion, since an illusion is a relation (mismatch) between perception and reality. The Müller-Lyer illusion is a mismatch between the perception "line A looks longer than line B" (which is true) and the state of affairs "line A is longer than line B" (which is false). The physical line on the paper is not longer, but it looks longer. The reason is that sense information is already preprocessed before it arrives in the part of the brain that creates conscious perception. We don't perceive the raw pixels, so to speak, but something that is enhanced in various ways, which leads to various optical illusions in edge cases.

Viliam's Shortform
cubefox · 3mo

I assume the minimum number was put in place to prevent another method of gaming the system.

Maybe Social Anxiety Is Just You Failing At Mind Control
cubefox · 3mo

I think it's trainable mainly in the indirect sense: if someone has a lot of social grace, they can pull off things (e.g. a risqué joke in front of a woman) that would be perceived as cringe or creepy coming from people with significantly less social grace. These latter people can become less cringe/creepy by learning not to attempt things that are beyond their social grace capabilities. (This reduces their extraversion, in contrast to treating anxiety, which boosts extraversion.)

I think children already show significant differences in social grace. I remember a kid in elementary school who got bullied a lot because of his often obnoxious behavior. He had both very low social grace and very low social anxiety. I assume with time he learned to become less outgoing, because that wasn't working in his favor. Outgoing behavior can be scaled down at will, but social grace can't easily be scaled up. Even people who are good at social grace don't explicitly know how they do it. It's a form of nonverbal body/intonation language that seems to be largely innate and unconscious, perhaps controlled by a (phylogenetically) older part of the brain, e.g. the cerebellum rather than the cortex.

Of course that's all anecdote and speculation, but I would hypothesize that, statistically, a person's level of social grace tends to stay largely the same over the course of their life. The main reason is that I think lack of social grace is strongly related to ASD, which is relatively immutable. It seems people can't change their natural "EQ" (which includes social grace) much beyond learning explicit rules about how to behave according to social norms, similar to how people can't change their natural IQ much beyond acquiring more knowledge.

Against asking if AIs are conscious
cubefox · 3mo

You might be surprised to learn that the most prototypical LessWrong user (Eliezer Yudkowsky) is a moral realist. The issue is that most people have only read what he wrote in the Sequences and didn't read Arbital.

Against asking if AIs are conscious
cubefox · 3mo

> Human moral judgement seem easily explained as an evolutionary adaptation for cooperation and conflict resolution,

That's not true: you can believe that what you do or did was unethical, which doesn't need to have anything to do with conflict resolution.

> and very poorly explained by perception of objective facts. If such facts did exist, this doesn't give humans any reason to perceive

Beliefs are not perceptions. Perceptions are infallible, beliefs are not, so this seems like a straw man.

> or be motivated by them.

Moral realism only means that moral beliefs, like all other contingent beliefs, can be true or false. It doesn't mean that we are necessarily or fully motivated to be ethical. In fact, some people don't have any altruistic motivation at all (people with psychopathy), but that only means they don't care to behave ethically, and they can be perfectly aware that they are behaving unethically.

Maybe Social Anxiety Is Just You Failing At Mind Control
cubefox · 3mo

Unfortunately, I think social grace can only be trained to a small degree, for reasons similar to why ASD is not curable. Some people just have natural social grace, others are much more socially awkward, and removing too many of their social inhibitions may make them "cringe" or even "creepy".

Wikitag Contributions

Eurisko · 5mo
Eurisko · 5mo · (+13/-56)
Eurisko · 5mo
Eurisko · 5mo · (+252/-29)
Message to future AI · 6mo · (+91)
Scoring Rule · 1y · (+15/-12)
Litany of Gendlin · 2y
Posts

cubefox's Shortform · 1y · 5 karma · 9 comments
Is LLM Translation Without Rosetta Stone possible? [Question] · 1y · 30 karma · 15 comments
Are Intelligence and Generality Orthogonal? · 3y · 18 karma · 16 comments