DanArmak's Comments

Conversational Cultures: Combat vs Nurture (V2)

This post is well written and not over-long. If the concepts it describes are unfamiliar to you, it serves as a good introduction. If you're already familiar with them, you can skim it quickly for a warm feeling of validation.

I think the post would be even better with a short introduction describing its topic and scope, though I'm aware that other people have different preferences. Two further observations:

  • There are more than two 'cultures' or styles of discussion, perhaps many more. The post calls this out towards the end (apparently this is new in v2).
  • The post gives two real examples of Combat Culture, and only one made-up scenario of Nurture Culture. It does not attempt to ground the discussion in anything quantitative - how common these cultures are, what they correlate with, how to recognize or test for them, how gradually they may shade into each other or into something else altogether.

I don't want to frame these as shortcomings; the post is still useful and interesting without them!

Is Clickbait Destroying Our General Intelligence?

This post raises some reasonable-sounding and important-if-true hypotheses. There seems to be a vast open space of possible predictions, relevant observations, and alternative explanations. Much of that space has been treated well elsewhere, but not on LW, as far as I know.

I would recommend this post as an introduction to some ideas and a starting point, but not as a good argument or a basis for any firm conclusions. I hope to see more content about this on LW in the future.

On alien science
Firstly, on a historical basis, many of the greatest scientists were clearly aiming for explanation not prediction.

In all of your examples, the new theory allowed making predictions, either more correct than previous ones (relativity, astronomy) or in situations that were previously completely un-predictable (evolution). Scientists expected good predictions to follow from good explanations, and they were in large part motivated by this.

Wiener, on the other hand, is saying it doesn't matter what explanation you choose if all explanations yield the same prediction, in a particular field of study or experiment. And you don't need explanations at all if they can't ever yield different predictions (in any possible experiment). That's a different statement.

I think that taking prediction to be the point of doing science is misguided in a few ways.

This seems to be just a matter of definitions. Scientists are human beings; they have a wide variety of interests and goals. You can label a narrower subset of their activity "science" and then say that some of what they're doing "isn't science", or you can label everything they tend to do as "science", because it tends to come together. Either way, the question "what is the real point of doing science?" is settled by the definition you choose.

Is value drift net-positive, net-negative, or neither?

I'll take the broadest definition of 'value drift': 'changes in values over time'.

'Good' and 'bad' are only defined relative to some set of values.

A simplistic (but technically correct) answer: if you had values A, and then changed to have different values B, from the viewpoint of A this is bad *by definition*, no matter what A and B actually are. And from the viewpoint of B it's good by definition. Values are always optimal according to themselves. (If they're in conflict with one another, there should be a set of optimal balances defined by some meta-values you also need to hold.)
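To put the definitional point in symbols (the notation is mine and purely illustrative): write $E_A[\cdot]$ for how good an outcome is according to values A. Assuming you act optimally on whichever values you hold,

$$E_A[\text{act on } A] \ge E_A[\text{act on } B] \quad\text{and}\quad E_B[\text{act on } B] \ge E_B[\text{act on } A],$$

so each set of values (weakly) prefers itself, regardless of what A and B actually say.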

A more complex and human-like scenario: you're not perfectly rational. Knowing this, and wanting to achieve a certain goal, it might be useful to "choose" a set of values other than the trivial set "this goal is good", to influence your own future behavior. Just as it can be instrumentally rational to choose some false beliefs (or to omit some true ones), so it can be instrumentally rational to choose a set of values in order to achieve something those values don't actually claim to promote.

A contrived example: you value donating to a certain charity. If you join a local church and become influential, you could convince others to donate to it. You don't actually value the church. If you were perfectly rational, you could perfectly pretend to value it and act to optimize your real values (the charity). But humans tend to be bad at publicly espousing values (or beliefs) without coming to really believe them to some degree. So you'll get value drift towards really caring about the church. But the charity will get more donations than if you hadn't joined. So from the point of view of your original values (charity above all), the expected value drift to (charity + church) is an instrumentally good choice.
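A back-of-the-envelope version of that tradeoff (the quantities $d_0$, $d_1$, $p$, and $f$ are illustrative assumptions of mine, not from the example): suppose staying out of the church yields charity donations $d_0$, while joining yields $d_1 > d_0$ through your influence, but with probability $p$ you drift and end up diverting a fraction $f$ of your effort to the church itself. Joining then wins by your original values whenever

$$(1 - pf)\,d_1 > d_0.$$

For instance, if joining doubles donations ($d_1 = 2d_0$), then even certain drift ($p = 1$) that diverts a quarter of your effort ($f = 0.25$) still leaves the charity ahead.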

Meaning and Moral Foundations Theory

Loyalty, authority, and fairness are also about other people. A lone person can't be loyal, authoritative, or fair; you have to be those things to someone else.

And, as I've been saying, Harm/Care is also about the conduct of the individual: do you harm others or care for them?

Meaning and Moral Foundations Theory
"I can't learn the material for you" as opposed to "if you want to climb Mt Everest, you have to do it for yourself rather than for someone else".

I'm not sure I understand the difference, can you make it more explicit?

"I can't learn the material for you": if I learn it, it won't achieve the goal of you having learned it, i.e. you knowing the material.

"I can't climb the mountain for you": if I climb it, the prestige and fun will be mine; I can't give you the experience of climbing the mountain unless you climb it yourself.

The two cases seem the same...

if people care about others being pure, it seems they can just as easily care about others being caring. And that we should think about people trying to observe the norm of caring and making sure others do, rather than trying to care effectively. Is that right?

Yes, that's what I think is happening: people observing norms and judging others on observing them, rather than on achieving goals efficiently or achieving more. On consequentialist grounds, we want to save everyone. But morally, we don't judge people harshly for not saving everyone as long as they're doing their best - and we don't expect them to make an extraordinary effort.

And so, I don't see a significant difference between Harm/Care and the other foundations.

Is Rhetoric Worth Learning?

I did not mean to misrepresent what lawyers do (or are allowed to do). I noted they are restricted by lawyer ethics, but that was in a different comment than the one you replied to. Yes, absolutely, they are not supposed to lie or even deliberately mislead, and a lawyer's reputation would suffer horribly if they were caught in a lie.

I'm not sure I understand people who aren't OK with the concept of ethical lawyers. Is there something they would like instead of lawyers? (See my other comment.) Or do they feel that lawyers are immoral by association with injustice - the intuition of "moral contagion" (I forget the correct term) that someone who only partially fixes a moral wrong is worse than someone who doesn't try to fix it at all?

Meaning and Moral Foundations Theory
Harm/Care is unusual among the foundations in that it's other-directed. The goal is to help other people, and it does not especially matter how that occurs. [...] In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you.

It seems to me that Harm/Care isn't as different as you say. Native (evolved) morality is mostly deontological. The object of moral feelings is the act of helping, not the result of other people being better off. "The goal is to help other people" sounds like a consequentialist reformulation. Helping a second party to help a third party may not be efficient, but morality isn't concerned with efficiency.

In contrast, the other foundations centre on the moral actor themselves. I cannot be just, loyal, a good follower, or pure for you.

I could say: yes, I can be just *to* you, loyal *to* you, a good follower *of* you. And pure *for* you too - think about purity pledges, aka "save it *for* your future spouse".

In all these cases, morality is about performance - deontology - rather than about accomplishing a goal. But each case does have an apparent goal, so our System 2 can apply consequentialist logic to it. Why do you treat Harm/Care differently?

Is Rhetoric Worth Learning?

I think my definition of rhetoric is the same as the OP's: the art of shaping words or a speech to be beautiful, moving, convincing, or otherwise effective - in short, how best to verbally convince others of an idea. I think that's a useful thing to have a term for.

In particular, the OP referred to dispositio (being concise and addressing the right points) and pronuntiatio (body language and delivery).

I’m not convinced this is true.

I'm not sure what exactly you're not convinced of. That speech is much more effective when its form is liked, and not just its object-level claims?

Is Rhetoric Worth Learning?
I don’t think that’s true. Lots of people are bothered by this. Maybe you’re right, maybe a majority is unbothered, but this is interesting only to the extent that it doesn’t embody a larger pattern of what proportion of people care about injustice.

I agree that most people are bothered by anything they perceive as injustice. But if they don't know a way to make things better, or what things being better would look like, then they tend not to blame e.g. lawyers for participating in the system and being good at it.

Is there a better way of doing things, one that lots of people would prefer to be the case? Not just "I wish judges applied the law fairly and for Justice" - then you might as well wish for people not to commit crimes in the first place. But a system that would keep working even when gamed by people desperate not to go to jail?

Alternatively, is there a relevant moral principle that people can follow unilaterally that would make the world a better place (other than deontologically)? If we tell a defendant not to hire a lawyer, or a lawyer not to argue as well as they can (while keeping to lawyer ethics), or the jury not to listen to the lawyers - then the side that doesn't cooperate will win the trial, or the jury will ignore important claims, and justice won't be better served on average.
