

The best explanation I have found to explain this discrepancy is that ... RLACE ... finds ... a direction where there is a clear separation,

You could test this explanation with a support vector machine, which finds the direction that gives the maximum separation.
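
A minimal sketch of that test with scikit-learn. The two synthetic clusters stand in for the activations under discussion, and the known separating direction is made up for illustration:

```python
# Sketch: recover the maximum-margin separating direction with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Two classes separated along a known (made-up) direction.
true_dir = np.array([1.0, 0.0])
X = np.vstack([
    rng.normal(loc=+2 * true_dir, scale=0.5, size=(100, 2)),
    rng.normal(loc=-2 * true_dir, scale=0.5, size=(100, 2)),
])
y = np.array([1] * 100 + [0] * 100)

svm = LinearSVC(C=1.0).fit(X, y)
# The weight vector is normal to the separating hyperplane,
# i.e. the direction of maximum separation.
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])
print(direction)
```

You could then compare this direction (e.g. by cosine similarity) with the one RLACE finds.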

(This is a drive-by comment. I'm trying to reduce my external obligations, so I probably won't be responding.)

A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3.

(Note: I won't respond to anything you write here. I have too many things to respond to right now. But I saw the negative vote total and no comments, a situation I'd find frustrating if I were in it, so I wanted to give you some idea of what someone might disagree with/consider sloppy/wish they hadn't spent their time reading.)

Feature request: some way to keep score. (Maybe a scoring mode that outlines the black box on hover, with clicks meaning: right = unscored, left-right = correct, left-left-right = incorrect. Or: mouse-out = unscored, left = incorrect, right = correct.)

I haven't finished reading this; I read the first few paragraphs and skimmed the rest of the article to see whether it would be worth reading. But I want to point out that starting with Harsanyi's Utilitarianism Theorem (a.k.a. Harsanyi's Impartial Observer Theorem) implies that you assume "independence of irrelevant alternatives," because the theorem assumes that its agents obey[1] the Von Neumann–Morgenstern utility theorem, whose fourth axiom (as listed in Wikipedia) is the independence of irrelevant alternatives. Since, from the previous article,

The Nash Bargaining Solution is the only one that fulfills the usual three desiderata, and the axiom of Independence of Irrelevant Alternatives.

I am not surprised that this yields the Nash Bargaining solution as the solution to Bargaining Games. The last article also points out that the independence of irrelevant alternatives is not an obvious axiom, so I do not find the Nash Bargaining solution more plausible just because it is a generalization of the CoCo Equilibria.[2]

  1. From the abstract of "Harsanyi's 'Utilitarian Theorem' and Utilitarianism" and the introduction to "Generalized Utilitarianism and Harsanyi's Impartial Observer Theorem" ↩︎

  2. This is a bit inconsistent on my part because I usually make formal decisions according to Rule Utilitarianism, and most forms of utilitarianism assume Von Neumann–Morgenstern expected utility. However, in my defense, I'm not firmly attached to Rule Utilitarianism; it is just the current best I've found. ↩︎
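
For reference, the Nash Bargaining Solution singled out by those desiderata is the point of the feasible set that maximizes the product of the players' gains over the disagreement point (the BATNA):

```latex
u^* = \arg\max_{u \in F,\; u_i \ge d_i} \prod_i (u_i - d_i)
```

where \(F\) is the feasible set of utility profiles and \(d\) is the disagreement point.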

You're trying to bake your personal values (like happy humans) into the rules.

My point is that this has already happened. The underlying assumptions bake in human values. The discussion so far did not convince me that an alien would share these values. I list instances where a human might object to these values. If a human may object to "a player which contributes absolutely nothing ... gets nothing," an alien may object too; if a human may object to "the only inputs are the set of players and a function from player subsets to utility," an alien may object too; and so forth. These are assumptions baked into the rules of how to divide the resources. So, I am not convinced that these rules allow all agents with conflicting goals to reach a compromise because I am not convinced all agents will accept these rules.[1]

I brought up the "happy humans term" as a way to point out that maybe aliens wouldn't object to the rule of "contribute nothing ... get nothing" because they could always define the value functions so that the set of participants who contribute nothing is empty.

  1. This sets up a meta-bargaining situation where we have to agree on which rules to accept to do bargaining before we can start bargaining. This situation seems to be a basic "Bargaining Game." I think we might derive the utilities of each rule set from the utilities the participants receive from a bargain made under those rules + a term for how much they like using that rule set[2]. Unfortunately, except for "Choose options on the Pareto frontier whose utilities exceed the BATNA," this game seems underdetermined, so we'll have trouble reaching a consensus. ↩︎

  2. To understand why I think there should be a term for how much they like using the rule set, imagine aliens who value self-determination and cooperative decision-making for all sentient beings and can wipe us out militarily. Imagine we want to split the resources in an asteroid both of us landed on. Consider the rule set of "might makes right." Under this set, they can unilaterally dictate how the asteroid is divided. So they get maximum utility from the asteroid's resources. However, they recognize that this is the opposite of self-determination and cooperative decision making; so getting all of the resources this way is of less utility to them than getting all the resources under another set of rules. ↩︎

Quite a few of the assumptions used to pin down solutions seem to restrict the solution space for bargaining strategies unnecessarily. For example,

  1. "A player which contributes absolutely nothing to the project and just sits around, regardless of circumstances, should get 0 dollars."

    We might want solutions that benefit players who cannot contribute. For example, in an AGI world, a large number of organic humans may not be able to contribute because overhead swamps gains from trade in comparative advantage. We still want to give these people a slice of the pie. We want to value human life, not just production.

    Maybe you could reconceive the project as including a "has more happy humans" term. This makes all participants contributors.

  2. Related is the implicit assumption that the players' inputs are what should determine the "chaa" result. I'd rather divide up the pie on consequentialist terms: which division maximizes the utility of the worst-off person, or the median person, or the mean utility. A Marxist would want to distribute the gains according to the players' "needs." If our fellow humans come up with such different notions, an alien or AI can scarcely be expected to be more similar. Unfortunately, the inputs to the problem are missing terms for "need" and long-term population utility.

  3. The assumption that if the total pile is k times as big, everyone should get k times as much is also unwarranted. The utility arising from 500,000,000 pieces of candy is less than 100,000,000 times the utility of 5 pieces. We get more mean and median utility when the extra gains go disproportionately to those who would have been allotted less.

  4. The CoCo solution has its share of assumptions. For example, payoff dominance: if player A gets more money than player B in all cells, then player A will leave the game with more money than player B.

    I don't see why this is the way we want to design an allocation method. We may need this to make an incentive structure for certain types of behavior, but for arbitrary situations, I don't think this is a requirement.
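
Point 3 above is easy to illustrate numerically. A minimal sketch, assuming some concave (diminishing-marginal) utility function — log1p here is an arbitrary stand-in, and the allocations are made up:

```python
# With concave utility, directing the surplus toward those allotted less
# yields higher mean utility than scaling everyone proportionally.
import numpy as np

base = np.array([5.0, 100.0])    # initial allocations
extra = 95.0                     # surplus to distribute

proportional = base + extra * base / base.sum()    # scale everyone equally
redistributive = base + np.array([extra, 0.0])     # surplus to the worse-off

u = np.log1p    # stand-in concave utility function
print(u(proportional).mean(), u(redistributive).mean())
```

The redistributive split comes out ahead on mean (and median) utility, even though total candy is identical.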

I use the "Bearable" app for very rough time logging. It has a system of toggles for "factors," where you can mark which factor was present in a 6-hour interval of your day. Since I am mainly interested in correlations with other things I measure (a primary purpose of "Bearable"), this low resolution is a good compromise. It also makes it easy to log after the fact: "Did I do this activity in this 6-hour period?" is a much easier question than remembering down to an hour or quarter-hour granularity. The downside is I can't tell how much time I've invested in a particular category.

I do much more detailed time logging at work with Jira and the "Tempo" plugin. I can then look back when I create my monthly reports. And I can use the per-ticket data to estimate the effort required for future tickets.

I think learning is likely to be a hard problem in general (for example, the "learning with rounding problem" is the basis of some cryptographic schemes). I am much less sure whether learning the properties of the physical or social worlds is hard, but I think there's a good chance it is. If an individual AI cannot exceed human capabilities by much (e.g., we can get an AGI as brilliant as John von Neumann but not much more intelligent), is it still dangerous?

You may want to look at what happens with test data never shown to the network or used to make decisions about its training. Pruning often improves generalization when data are abundant compared to the complexity of the problem space because you are reducing the number of parameters in the model.
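
A minimal sketch of that evaluation discipline, using scikit-learn with decision-tree cost-complexity pruning as a stand-in for network pruning (the dataset and the pruning strength are made up):

```python
# Hold out test data that never influences training decisions, then compare
# a pruned model against an unpruned one on that held-out set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

unpruned = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_train, y_train)

# The pruned model has far fewer parameters (nodes); compare generalization
# on data neither model has ever seen.
print(unpruned.tree_.node_count, pruned.tree_.node_count)
print(unpruned.score(X_test, y_test), pruned.score(X_test, y_test))
```

Whether pruning helps on the test set depends on how abundant the data are relative to the problem's complexity, which is the point above.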

Going from "Parts" to "Self," you said the Self might be all the Parts processing together. (Capitalized "Self" means the IFS "Core Self.") How likely is the hypothesis that the Self is an artifact of the therapeutic procedure? When someone says they feel angry at a Part and claims the anger does not come from a Part but is their self, the therapist doesn't accept it; the therapist tells them they need to unblend. But when they describe the 8 C's and say that is their self, the therapist does not ask them to unblend, instead accepting that as their Self.
