itaibn0

itaibn0's Comments

Preview On Hover

I don't like the fact that the preview doesn't disappear when I stop hovering. I find the preview visually jarring enough that I would prefer to spend most of my reading time without a spurious preview window. At the very least, there should be a way to manually close the preview. Otherwise I would want to avoid hovering over any links, and to refresh the page whenever I accidentally do, which is a bad reading experience.

A non-mystical explanation of "no-self" (three characteristics series)

My main point of disagreement is the way you characterize these judgements as feelings. With minor quibbles I agree with your paragraph after substituting "it feels" with "I think". In your article you distinguish between abstract intellectual understanding which may believe that there is no self in some sense and some sort of lower-level perception of the self which has a much harder time accepting this; I don't follow what you're pointing to in the latter.

To be clear, I do acknowledge experiencing mental phenomena that are about myself in some sense, such as a proprioceptive distinction between my body and other objects in my mental spatial model, an introspective ability to track my thoughts and feelings, and a sense of the role I am expected to play in my community. However, the forms of these pieces of mental content are wildly different, and it is only through an abstract mental categorization that I recognize them as all being about the same thing. Moreover, I believe these senses are imperfect but broadly accurate, so I don't know what it is that you're saying is an illusion.

itaibn0's Shortform

Crossposted on my blog:

Lightspeed delays lead to multiple technological singularities.

By Yudkowsky's classification, I'm assuming the Accelerating Change Singularity: As technology gets better, the characteristic timescale at which technological progress is made becomes shorter, so that the time until this process reaches physical limits is short from the perspective of our timescale. At a short enough timescale the lightspeed limit becomes important: When information cannot traverse the diameter of civilization in the time remaining until the singularity, further progress must be made independently in different regions. The subjective time from then on may still be large, and without communication the different regions can develop different interests and, after their singularities, compete. As the characteristic timescale becomes shorter, the independent regions split further.
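A rough way to make the decoupling condition quantitative (the notation here is mine, purely for illustration):

```latex
% Sketch of the decoupling condition; all notation is illustrative.
% Let T(t) be the remaining time until the singularity as judged at
% time t, c the speed of light, and d the distance between two regions.
% The regions can still coordinate their progress only while a signal
% can cross between them before the singularity arrives:
\[
  c \, T(t) \ge d .
\]
% Once T(t) < d/c, progress in the two regions proceeds independently.
% As T(t) shrinks toward 0, this threshold is crossed at ever smaller
% separations d, which is why the independent regions keep splitting.
```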

A non-mystical explanation of "no-self" (three characteristics series)

I'm still not sure what you mean by the feeling of having a self. Your exercise of being aware of looking at an object reminds me of the bouba/kiki effect: The words "bouba" and "kiki" are meaningless, but you ask people to label which shapes are bouba and which are kiki in spite of that. The fact that they answer does not mean they deep down believe that "bouba" and "kiki" are real words. In the same way, when you ask me to be aware of being someone looking at an object, I may have a response -- observing that the proposition "I am looking at my phone" is true, contemplating the simpleminded self-evidence of this fact, thinking about how this relates to the points Kaj is trying to make -- and there may even be some regularities in this response I can't rationally justify. Nonetheless this response is not a feeling of a self, nor is it something I am mistakenly confusing with a self -- any conflation is only being made in my attempt to interpret an unclear instruction, and is not a mistake I would make in regular thought.

A related point is how rarely the word "self" is used in ordinary language. The suffix "-self", as in "myself" or "yourself", yes, but not "self" on its own. That's only said when people are doing philosophy.

TurnTrout's shortform feed

This map is not a surjection, because not every map from the rational numbers to the real numbers is continuous, and so not every sequence represents a continuous function. It is injective, and so it shows that a basis for the latter space is at least as large in cardinality as a basis for the former space. One can construct an injective map in the other direction, showing that the two spaces have bases of the same cardinality, and so they are isomorphic.
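To spell out the argument (assuming, as I read it, that the map under discussion is restriction to the rationals):

```latex
% Sketch, assuming the map in question is restriction to \mathbb{Q}.
\[
  \rho : C(\mathbb{R}, \mathbb{R}) \to \mathbb{R}^{\mathbb{Q}},
  \qquad \rho(f) = f|_{\mathbb{Q}} .
\]
% \rho is injective, since a continuous function on \mathbb{R} is
% determined by its values on a dense subset; it is not surjective,
% since most elements of \mathbb{R}^{\mathbb{Q}} extend to no
% continuous function. Injectivity gives
\[
  \dim C(\mathbb{R}, \mathbb{R}) \le \dim \mathbb{R}^{\mathbb{Q}} ,
\]
% an injection in the other direction gives the reverse inequality,
% and Cantor–Schröder–Bernstein on the basis cardinalities makes the
% two dimensions equal, so the spaces are isomorphic.
```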

Open question: are minimal circuits daemon-free?

This may be relevant:

Imagine a computational task that breaks up into solving many instances of problems A and B. Each instance reduces to at most n instances of problem A and at most m instances of problem B. However, these two maxima are never achieved both at once: The sum of the number of instances of A and instances of B is bounded above by some r. One way to compute this with a circuit is to include n copies of a circuit for computing problem A and m copies of a circuit for computing problem B. Another approach for solving the task is to include r copies of a circuit which, with suitable control inputs, can compute either problem A or problem B. Although this approach requires more complicated control circuitry, if r is significantly less than n + m and the size of the combined circuit is significantly less than the sum of the sizes of the circuits for A and for B (which may occur if problems A and B have common subproblems X and Y which can use a shared circuit), then this approach will use fewer logic gates overall.
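A toy gate-count comparison, with all the concrete numbers made up purely for illustration:

```python
# Toy comparison of the two circuit layouts described above.
# All concrete numbers are made up for illustration.

n, m = 10, 10            # max instances of problem A, of problem B
r = 12                   # bound on (instances of A) + (instances of B)
size_A, size_B = 1000, 1200   # gates in dedicated circuits for A and B
size_AB = 1500           # gates in a switchable A-or-B circuit; less than
                         # size_A + size_B thanks to shared subproblems
control = 2000           # extra control circuitry for the shared layout

dedicated = n * size_A + m * size_B   # n copies of A plus m copies of B
shared = r * size_AB + control        # r switchable copies plus control

print(dedicated)  # 22000
print(shared)     # 20000 -- the shared layout wins in this toy case
```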

More generally, consider some complex computational task that breaks down into a heterogeneous set of subproblems which are distributed in different ways depending on the exact instance. Analogous reasoning suggests that the minimal circuit for solving this task will involve a structure akin to emulating a CPU: There are many instances of optimized circuits for low-level tasks, connected by a complex dependency graph. In any particular instance of the problem the relevant data dependencies are only a small subgraph of this graph, with connections decided by some control circuitry. A particular low-level circuit need not have a fixed purpose, but is used in different ways in different instances.

So, our circuit has a dependency tree of low-level tasks optimized for solving our problem in the worst case. Now, at an early stage of this hierarchy it has to process information about how a particular instance separates into subproblems and generate the control information for solving that particular instance. The control information might need to be recomputed as new information about the structure of the instance is made manifest, and sometimes a part of the circuit may perform this recomputation without full access to potentially conflicting control information calculated in other parts.

Against the Linear Utility Hypothesis and the Leverage Penalty

Yes, this is the refutation of Pascal's mugger that I believe in, although I never got around to writing it up like you did. However, I disagree with you that it implies that our utilities must be bounded. All the argument shows is that ordinary people never assign events enormous utility values without also assigning them commensurately low probabilities. That is, normative claims (i.e., claims that certain events have certain utilities assigned to them) are judged fundamentally differently from factual claims, and require more evidence than merely the complexity prior. In a moral intuitionist framework, this is the fact that anyone can say that 3^^^3 lives are suffering, but it would take living 3^^^3 years and getting to know 3^^^3 people personally to feel the 3^^^3 times utility associated with this event.

I don't know how to distinguish the scenario where our utilities are bounded from the one where our utilities are unbounded but regularized (or whether our utilities are sufficiently well-defined to distinguish the two). Still, I want to emphasize that the latter situation is possible.
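For concreteness, here is one toy way the unbounded-but-regularized case could look (my own illustration, not a worked-out proposal):

```latex
% Toy regularization; the specific form of the penalty is made up.
% Suppose a claim involving utility magnitude U can never gain
% probability faster than
\[
  P(\text{utility-}U\text{ claim}) \le \frac{C}{U^{1+\epsilon}},
  \qquad C, \epsilon > 0 .
\]
% Then its expected-utility contribution is bounded:
\[
  U \cdot P \le \frac{C}{U^{\epsilon}} \longrightarrow 0
  \quad\text{as } U \to \infty ,
\]
% so the utility function itself can be unbounded while a mugger
% quoting 3^^^3 lives still gets no purchase on decisions.
```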

Changing habits for open threads

Quick thought: I think you are relying too much on your own experience, which I don't expect to generalize well. Different people will have different habits for how much thought they put into their comments, and I expect some put in too much thought and some too little. We should put more effort into identifying the aggregate tendencies of people on this forum before we make recommendations.

Then again, perhaps you are just offering the idea casually, so it's okay. Still, I worry that the most likely future pathways for posts like this are "get ignored" and "get cited uncritically", and there's no clear place for this more thorough investigation.

Living in an Inadequate World
What's the fallacy you're claiming?

First, to be clear, I am referring to things such as this description of the prisoner's dilemma and EY's claim that TDT endorses cooperation. The published material has been careful to say only that these decision theories endorse cooperation among identical copies running the same source code, but as far as I can tell some researchers at MIRI still believe the stronger claim, and it has been a major part of the public perception of these decision theories (example here; see section II).

The problem is that when two FDT agents with different utility functions and different prior knowledge face a prisoner's dilemma with each other, their decisions are actually two different logical variables X0 and X1. The argument for cooperating is that X0 and X1 are sufficiently similar to one another that in the counterfactual where X0 = C we also have X1 = C. However, you could just as easily take the opposite premise, where X0 and X1 are sufficiently dissimilar that counterfactually changing X0 has no effect on X1. Then you are left with the usual CDT analysis of the game. Given the vagueness of logical counterfactuals, it is impossible to distinguish these two situations.
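To make the dependence on the choice of counterfactual premise explicit, take standard prisoner's-dilemma payoffs (the numbers are mine, for illustration):

```latex
% Standard PD payoffs for agent 0 (numbers are illustrative):
% u_0(C,C) = 3, u_0(D,D) = 1, u_0(D,C) = 5, u_0(C,D) = 0.
%
% Premise 1 (linked variables): counterfactually X_1 = X_0, so agent 0
% effectively chooses between
\[
  (C, C) \mapsto 3 \qquad\text{and}\qquad (D, D) \mapsto 1 ,
\]
% and cooperation wins.
%
% Premise 2 (independent variables): changing X_0 counterfactually
% leaves X_1 fixed, so
\[
  u_0(D, X_1) > u_0(C, X_1) \quad\text{for each value of } X_1
\]
% (5 > 3 against a cooperator, 1 > 0 against a defector), and defection
% wins, exactly as in the CDT analysis. The formalism itself does not
% say which premise applies.
```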

Here's a related question: What does FDT say about the centipede game? There's no symmetry between the players, so I can't just plug in the formalism. I don't see how you can give an answer that's in the spirit of cooperating in the prisoner's dilemma without reaching the conclusion that FDT involves altruism among all FDT agents through some kind of veil-of-ignorance argument. And that conclusion is counter to the affine-transformation invariance of utility functions.

Living in an Inadequate World

Some meta-level comments and questions:

This discussion has moved far away from the topic of EY's general rationality lessons. I'm pleased with this, since these are topics that I want to discuss, but I want to mention it explicitly since constant topic changes can be bad for a productive discussion by preventing the participants from going into any depth. In addition, lurkers might be annoyed at reading yet another AI argument. Do you think we should move the discussion to a different venue?

My motivations for discussing this are the chance to talk about criticisms of MIRI that I haven't gotten down in writing in detail before, the chance to get a rough impression of how MIRI supporters respond to these explanations, and more generally an opportunity to practice intellectually honest debate. I don't expect the discussion to go on far enough to resolve our disagreements, but I am trying anyway to get practice. I'm currently enthusiastic about continuing the discussion, but it's the sort of enthusiasm that could easily wane in a day. What is your motivation?
