Pattern

Interested in math, Game Theory, etc.


Comments

Troy Macedon's Shortform
"we ought to avoid using ought statements"
 If anyone thinks that statement isn't paradoxical, please enlighten me.

We shouldn't be using should statements. (And yet we are.) The statement can only be made if it isn't being followed - where's the paradox?

For comparison:

A library has a sign which says "No talking in the library." Someone talks in the library. Someone goes "Shhh!". "Why?" A librarian says "No talking in the library."

MikkW's Shortform

If a group has standards which provide value, then while it isn't a 'costly signal', it sorts out people who aren't willing to invest effort.*

Just because your organization wants to be strong and get things done doesn't mean it has to spread like cancer* or cocaine**.


And even something that provides 'positive value' is still a cost. Living under a flat 40% income tax levied by one government has the same effect as living under 40 governments which each levy a flat 1% income tax. You don't have to go straight to 'members of this group must smoke'. (In a different time and place, 'members of this group must not smoke' might have been regarded as an enormous cost, and worked as such!)
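
To spell out the arithmetic (with the assumption, mine rather than the comment's, that each government taxes gross income, so the rates add rather than compound):

$$\underbrace{1\% + 1\% + \cdots + 1\%}_{40\ \text{governments}} = 40\%$$

(If each government instead taxed only what the previous ones left, the combined burden would compound to $1 - 0.99^{40} \approx 33.1\%$ - smaller, but the point about small costs accumulating stands.)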


*Bigger isn't necessarily better if you're sacrificing quality for quantity.

**This might mean that strong and healthy people avoid your group.

Should I do it?

Can you run it for a while and then stop it?

Using false but instrumentally rational beliefs for your career?

Maybe your work is important, maybe it isn't. You can decide to invest a lot into it, which might affect how important it is, but you can also invest in more than one thing.


If a 'side hustle' doesn't undermine the work (or even improves it!), then having two options and being able to pick the better of the two doesn't seem like a bad thing - it enables you to pick the most important option.

Competitive Ethics

Competitive Ethics, or (the study of) the Competition of Ethics?


If antinatalists are right that having children is wrong, does it matter once the antinatalists go extinct?

Does it matter if you win, if you sacrifice your highest values along the way?


If we're going to think hard about what's right, shouldn't we also think hard about what wins?

What does it even mean to win?


From https://en.wikipedia.org/wiki/Pyrrhic_victory

If we are victorious in one more battle with the Romans, we shall be utterly ruined.

— Plutarch[3]

[3] Plutarch. "The Life of Pyrrhus". Parallel Lives. IX (1920 ed.). Loeb Classical Library. p. 21.8. Retrieved January 26, 2017.


Competitive ethics (I'd be happy to find a better term) is the study of ethics as strategies or phenotypes competing for mindshare rather than as statements about right and wrong.

The word 'mindshare' there completely changes the piece from what it seemed to be building toward up to that point.


Competitive ethics is to morality as FiveThirtyEight is to politics. FiveThirtyEight doesn't tell us which candidate's positions are correct, and we don't expect them to. We expect them to tell us who will win.

And this is another, completely different thing that doesn't have anything to do with ethics.


There are many lines of thinking relevant to this question, but I can't find any that address it directly.

Win and you get to pretend you were good. Everyone else will erase you, change you, or paint you as a terrible thing completely unrelated to what you were - an ugly scarecrow bearing your name but their visage.


neoevolution

neo or neuro?


The most straightforward way ethical systems compete is by the degree of natalism and heritability they entail: how many offspring do they lead to in their believers, and how effectively are they passed from parents to children?

It's important not to neglect fitness here. Naively maximizing the number of offspring doesn't just result in deaths; it also results in malnourishment. Maximum population, all weak and starved, does not an army make.
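
As a toy model (my construction, not from the post; the function and numbers are purely illustrative), a purely inherited ethical system grows roughly as fertility times transmission fidelity, discounted by how many children survive and thrive:

```python
# Hypothetical toy model: expected adherents in the next generation
# per adherent in this one. All parameters are illustrative.
def growth_factor(fertility: float, transmission: float, child_survival: float) -> float:
    return fertility * transmission * child_survival

# Naive fertility-maximization: many children, weak and starved.
print(growth_factor(fertility=8, transmission=0.9, child_survival=0.4))  # ~2.88

# Moderate fertility with healthy, well-fed children grows faster.
print(growth_factor(fertility=4, transmission=0.9, child_survival=0.9))  # ~3.24
```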


deride fertility (e.g. certain environmentalist ethics) and encourage freedom of thought.

Encouraging freedom of thought conflicts with other values: you are free to think - except for that thought. It's not an absolute; it just doesn't work with absolutes. Unless it's a lie.


I hear more about people leaving fundamentalist religions than joining them. But ethics of free thought combined with low fertility may not be sustainable.

Unless free thought outlaws systems that don't support it (or that work against it)?


Going further, it may be that selfish ethical systems (e.g. Ayn Rand, Gordon Gekko)

Selfish people don't have time to read Ayn Rand; they're busy doing what they want. What about hedonism?


Causality and correlation are hard to tease apart here, but doing so isn't necessary. An ethical system can win either by granting success to its holders or by being adopted by successful individuals.

And that's where you've lost me. 'These systems of belief lead to success among their holders' is interesting. 'Successful people happen to believe this because they're successful, not the other way around', not so much.


Eliezer Yudkowsky is purported to have said "You are personally responsible for becoming more ethical than the society you grew up in." This quotation is interesting in that (1) it's a normative claim about normative claims, and (2) it assumes that ethics has a direction.

Or it assumes ethics has a magnitude.


But if you build an “ethical” AI that keeps getting deleted by its “unethical” AI peers, have you accomplished your mission of building ethical AI?
I'm not able to join the AI alignment discussion until AI alignment researchers start putting competitive ethical questions more front and center.

AI peers? It seems more likely that an 'ethical AI' will be less powerful or move slower, and won't 'keep getting deleted' but will get deleted once. And then it's game over.


Consider a meta-ethics - call it ethical consistentism, maybe - where the probability of a moral statement being correct is proportional to its survival.

That's not a meta-ethic; that's a strategy. And 'ethical' doesn't belong in this statement.


The only relation to 'ethical' your approach has is that it asks 'will this survive?' Arguably, the ability to survive affects the ability to bring about the 'ethical' ends (or whatever is valued). It also might affect how much* the system itself survives - if the 'more pragmatic ethics' eventually drops the 'ethics', what's left isn't an ethic.

*/how long/how likely

MikkW's Shortform
In particular, costly signalling must be negative-value for an individual

That's one way to do things, but I don't think it's necessary. A group which requires members to exercise (for continued membership), for instance, imposes a cost - but arguably not one that is necessarily* negative-value for the individuals.

*Exercise isn't supposed to destroy your body.

Should I do it?
My dilemma is that I cannot ensure that the process can be stopped once it has started. I'm curious.

What is the setup where you can't switch it off? Is it that it might find a way to disable that capability, or are you worried about something else?

Simpson's paradox and the tyranny of strata

So you just need enough data that the number of events involving entities is much greater than the number of parameters.
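
For concreteness, a minimal sketch of the reversal in question (the numbers are classic kidney-stone-style illustrations, not anything from the post): one treatment wins within every stratum while the other wins in the aggregate.

```python
# Illustrative numbers (not from the post). Treatment A is better within
# each stratum, yet B looks better overall, because A got the hard cases.
strata = {
    "hard cases": {"A": (192, 263), "B": (55, 80)},   # (successes, trials)
    "easy cases": {"A": (81, 87),   "B": (234, 270)},
}

for name, groups in strata.items():
    for treatment, (s, n) in groups.items():
        print(f"{name} {treatment}: {s / n:.0%}")     # A wins each stratum

for treatment in ("A", "B"):
    s = sum(g[treatment][0] for g in strata.values())
    n = sum(g[treatment][1] for g in strata.values())
    print(f"overall {treatment}: {s / n:.0%}")        # B wins the aggregate
```

With too few events per stratum relative to the parameters being estimated, those within-stratum rates are too noisy to trust - hence wanting events to greatly outnumber parameters.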

Propinquity Cities So Far
but low-cost outcomes might be

This thought isn't completed.

Sunzi's《Methods of War》- Introduction
This is a translation of the first chapter of The Art of War by Sunzi. No English sources were used. The original text and many of the interpretations herein come from 古诗文网.

Who is the translator?
