unparadoxed

Comments

Adam Smith's Shortform

Is being able to copy a system necessary for that system to be deterministic? 

Maybe unrelated, but I am thinking of infinite series as an example. Imagine a "system" that consists of the sum of inverse powers of 2. This "system" has infinitely many terms, and is "deterministic" in that the value of each term of the series is well-defined and the infinite sum is equal to 1. It would be impossible to "copy" this system as it involves enumerating an infinite number of terms, but the behavior of this system could be argued to be "deterministic".
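To make that concrete, here is the sum in question as a minimal sketch (assuming the series starts at n = 1):

$$S_N = \sum_{n=1}^{N} \frac{1}{2^n} = 1 - \frac{1}{2^N}, \qquad \lim_{N \to \infty} S_N = 1.$$

Every partial sum is exactly determined, yet no finite procedure enumerates all of the terms, which is the sense in which the system is deterministic but uncopyable.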

Your Cheerful Price

I can see scenarios where both participants in a trade would benefit from interacting via Cheerful Prices. I'm trying to work out whether it's a concept that still works even if one party does not fully buy into it. If I don't feel comfortable thinking about a Cheerful Price to give you, would I be spending some of the social / friendship capital that I have with you?

TurnTrout's shortform feed

Hmm, maybe it would be easier if we focused on one kind/example of craziness. Is there a particular one you have in mind?

unparadoxed's Shortform

Yes, Markets are Efficient, but only when they conform to my biases. If not, they are clearly fraudulent and incorrectly valued.

Yoav Ravid's Shortform

Yeah, that makes sense. The way I came to think of it is that person A commits a crime, then faints and remains unconscious. Afterwards, a separate nefarious cloner clones person A in a black box, so one person A goes in and two persons A come out. Person(s!) A awake and, being strongly conscious of their crime, turn themselves in. Since they have exactly the same memories and conscience, they are indistinguishable, both internally and externally, from the point of view of being the person who committed the crime.

This is actually a good question. I feel that both persons should be declared guilty, since cloning oneself (whether intentionally or not) should not give one an automatic out from moral judgement. I am less sure about whether the punishment should be equal or shared.

Yoav Ravid's Shortform

It seems to me that you are thinking about some "stronger" form of cloning. The framework I was thinking in was that the "clone" is a similar-but-distinct entity, something like a Twin materialized out of thin air instantaneously. But it seems that you are thinking of a stronger form where we should treat the two entities as exactly the same.

I have difficulty conceptualizing this, since in my mind a clone still occupies a time, space, and consciousness distinct from the original's, and so is treated distinctly in my eyes (in terms of being judged for the morality of actions that the original committed).

I will try to think of a situation / framework where this "stronger" form of cloning makes sense to me.

crl826's Shortform

If you have some feedback loop based on those metrics, then the wiser amongst them might (will?) eventually figure out that 1) you were not honest about your metrics and 2) they are being evaluated against some metric that was not disclosed to them. Now we are in Simulacrum Level 3, which in a way is the same level that would be reached via Goodhart's Law.

unparadoxed's Shortform

I want to join/create a society of people who do not judge others at all, but how will they decide who to let in?

Yoav Ravid's Shortform

On first thought, it does not seem to me that (im)morality is something that is commonly ascribed to atoms. Just as bits do not actually have a color, so it seems to me that atoms do not have morality. But I'm not a moral philosopher, so that's just my feeling.

On second thought, consider a thought experiment where we judge the clone. Was the clone a direct / proximate cause of the immorality? It would seem not; the original was. Did the clone have the intention to cause the immorality? It would seem not; the original did. So I don't think I would hold the clone liable for the committed immorality.

A more interesting scenario to me would be - We have two clones, we know one of them committed an immorality, but we do not know which one. How do we proceed?

unparadoxed's Shortform

More thoughts on Simulacrum.

Assume that the setting is such that Agents can make statements about Reality.

Level 0 : Reality

Level 1 : Agents are concerned about Reality, and with making statements about Reality that are True / Honest. Agents in Level 1 seek to understand and exploit Level 0 - Reality. All Agents in Level 1 trust each other. As Level-0 Reality asserts its constraints and agents face scarcity, some thus shift to...

Level 2 : Agents are concerned about perceptions (theirs and others') of Reality, and with making statements about Reality that induce perceptions of Reality that are beneficial to them. By making potentially False / Dishonest statements, Level-2 Agents destroy the Value created in Level 1. All Agents in Level 2 are parasitic on Agents in Level 1. As enough Level-1 agents wise up from being exploited and become Level-2 agents, some thus shift to...

Level 3 : Agents are concerned about statements about Reality. Yes, the statements themselves. Agents are concerned with making statements about Reality whose value is implicit in their having been made. This value cannot be derived from the explicit content of statements, because all agents distrust each other's statements about Reality due to Level-2 actions. Thus, the value of statements about Reality does not lie in what they state about Reality, but merely in that they are stated. Note that even though Agents focus on the statements themselves, agents cannot simply state any random statement about Reality, as the implicit value of making the statement has to be at least partially derived from the substance of the statement as it would be interpreted in Level 1 (and 2?). As enough Level-3 agents gradually start focusing on the statements themselves, some thus shift to...

Level 4 : Agents are concerned with everything and nothing. As every statement can be and is interpreted in a potentially infinite number of implicit subtexts, agents make statements without regard to what they might or might not imply. No practical meaning or value can be derived from the object of statements or from the statements themselves.

In the context of agents in a resource-constrained setting, one could see it as follows :

Abundant resources : Level 1 - Agents cooperate to map and exploit resources.

Constrained resources : Level 2 - Due to constraints, Agents trick each other to get a comparative advantage.

Scarce resources : Level 3 - Agents have to band together to fight other bands for resources. Agents make statements to signify their Tribe as an implicit focusing point.

Dying resources : Level 4 - Resources are too scarce to sustain a band, so it is every agent for themselves.
