MinusGix

Programmer.

Comments

I draw the opposite conclusion from this: the fact that the decision theory posts seem to work on the basis of a computationalist theory of identity makes me think worse of the decision-theory posts.

Why? If I try to guess, I'd point at the posts not often treating indexicality as a consideration, instead thinking of it as having a single utility function, which simplifies coordination. (But still, a lot of decision theory doesn't need to take indexicality into account.)
I see the decision theory posts less as giving new intuitions and more as breaking old ones that are ill-adapted, though that's partially framing/semantics.

Can you link to some of these? I do not recall seeing anything like this here.

I'll try to find some, but they're more likely to be side parts of comment chains rather than posts, which does make them more challenging to search for. I doubt they're as in-depth as we'd like, but I think there is work done there, even if I do think the assumption that QM doesn't matter much is likely correct.

The basic idea is: what would it give you? If the brain uses it for a random component, why can't that be replaced with something pseudorandom? (Which is fine from an angle of not seeing determinism as a problem.) If the brain utilizes entangled atoms/neurons/whatever for efficiency, why can't those be replaced with another method — possibly an impractically inefficient one? If the brain functionally depends on an arbitrary-precision real for a calculation, why would it, and what would it matter if it were cut off to N digits?

There's certainly more, but finding specific comments I've read over the years is a challenge.

Everything was determined in the initial configuration of quantum waveforms in the distant past of your lightcone. The experience of time and change is just a side-effect of your embeddedness in this giant static many-dimensional universe."

I'm not sure I understand the distinction. Even if the true universe is a bunch of freeze-frame slices, time and change still functionally act the same. Given that I don't remember random nonsense in my past, there's some form of selection over which freeze-frames are constructed. Or, rather, they're constructed with differing measure. Thus most of my 'future' measure is concentrated on freeze-frames that are consistent with what I've observed, as that has held true in the past.

Like, what you seem to be saying is Timeless Physics, and I'd agree more with this statement of it:

An unchanging quantum mist hangs over the configuration space, not churning, not flowing. But the mist has internal structure, internal relations; and these contain time implicitly. The dynamics of physics—falling apples and rotating galaxies—is now embodied within the unchanging mist in the unchanging configuration space.

So I'd agree that computation only makes sense with some notion of time; that there has to be some way it is being stepped forward. (To me this is an argument in favor of not privileging spatial position in the common teleportation example, but we seem to have moved down a level, to whether the brain can be implemented at all.)

(bits about CEV) conceptually incoherent

I misworded what I said, sorry. I more meant that you consider it to say/imply nothing meaningful, but you can certainly still argue against it (such as arguing that it isn't coherent).

I think it would be non-physicalist if (to slightly modify the analogy, for illustrative purposes) you say that a computer program I run on my laptop can be identified with the Python code it implements, because it is not actually what happens.

I would say that the running computer program can be considered an implementation of the abstract python code. I agree that this model is missing details, such as the exact behavior of the transistors, how fast they switch, the exact positions of the atoms, etcetera. That this is dependent on the mind considering it, I agree. The cosmic ray event would make it so that it is no longer an implementation of the abstract python program. You could expand the consideration to include more of the universe, just as you could expand your model to consider the computer program as an implementation of the python program with some constraints: that if this specific transistor gets flipped one too many times it will fry, that there's a slight possibility of a race condition we didn't consider at all in our abstract implementation, that there's a limit to the speed and heat it can operate at, that a cosmic ray could come from these areas of space and hit it with 0.0x% probability, disrupting functionality...

It still seems quite reasonable to say it is an implementation of the python program. I'm open to the argument that there isn't a completely natural, privileged point of consideration from which the computer is implementing the same pattern as another computer, and from which that pattern is this python program. But as I said before, even if this is ultimately somewhat subjective, it still seems to capture quite a lot of the possible ideas?

Like in mathematics, I can have an abstract implementation of a sorting algorithm and prove that a python program for a more complicated algorithm (bubblesort, whatever) is equivalent. This is missing a lot of details, but that same sort of move is what I'm gesturing at.
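As a minimal sketch of that kind of move (a hypothetical example of mine, not anything from the thread, and a property check rather than a proof): a concrete bubblesort can be compared against the abstract specification "produce the sorted version of the input", here stood in for by Python's built-in sorted().

```python
import random

def bubblesort(xs):
    """A concrete, messier implementation of the abstract idea 'sort the list'."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

# Not a formal proof, only a check: on many random inputs the concrete program
# agrees with the abstract specification (here represented by sorted()).
for _ in range(1000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert bubblesort(data) == sorted(data)
```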

It is merely part of a mathematical model that, as I've described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate

I can understand why you think that just the neurons/connections are too lossy, but I'm very skeptical of the idea that we'd need all of the amplitudes related to the brain/mind. A priori that seems unlikely, what with how little fundamentally turns on the specifics of QM, and the parts that do can all be implemented specially, as I discussed somewhat above.

(That also reminds me of another reason why people sometimes just mention neurons/connections, which I forgot in my first reply: because they assume you've got the basic brain architecture that is shared, and just need to plug in the components that vary.)

I disagree that this distinction between our model and reality has been lost; rather, it has been deemed not too significant, or something you'd study in-depth when actually performing brain uploads.

What is "the computation"? Can we try to taboo that word?

As I said in my previous comment, and earlier in this one, I'm open to the idea of computation being subjective instead of a purely natural concept, though I'd expect that there aren't that many free variables in pinning down the meaning. As for tabooing, I think that is kind of hard, as one very simple way of viewing computation is "doing things according to rules".

You have an expression in mind, say the product of two numbers. This is in your mind and relies on subjective interpretations of what the symbols mean. You implement that abstract program (that abstract doing-things, a chain of rules of inference, a way that things interact) in a computer. The transistors were utilized because they matched the conceptual idea of how switches should function, but they have more complexities than the abstract switch, which introduces design constraints throughout the entire chip. The chip's ALU implements this through a bunch of transistors, which are more fundamentally made up of silicon arranged in specific ways that regulate how electricity moves. There are layers and layers of complexities even as it processes the specific binary representations of the two numbers and shifts them in the right way. But, despite all this, all that fundamental behavior, all the quantum effects like tunneling which restrict size and positioning, it is computing the answer. You see the result of the multiplication and are pretty confident that no differences between your simple model of the computer and reality occurred.

This is where I think arguments about the subjectivity of computation can be made. Introduce a person who is talking about a different abstract concept; they encode it as binary because that's what you do, and they have an operation that looks like multiplication and produces the same answer for that binary encoding. Then the interpretation of that final binary output is dependent on the mind, because the mind has a different idea of what it is computing (the abstract idea being different, even if those parts match up). But I think a lot of those cases are non-natural, which is part of why I think that even if computation doesn't make sense as a fundamental thing or a completely natural concept, it still covers a wide area of concern and is a useful tool, similar to how the distinction between values and beliefs is a useful tool even when strictly discussing humans, but even more so.

So then, the two calculators are implementing the same abstract algorithm in their silicon, and we fall back to two questions: 1) is the mind within the edge-cases, such that it is not entirely meaningful to talk about an abstract program it is implementing? 2) okay, but even if they share the same computation, what does that imply?

I think there could and should be more discussion of the complications around computation, with the easy-to-confuse interaction between levels: 'completely abstract idea' (platonism?); 'abstract idea represented in the mind' (what I'm talking about with abstract; subjective); 'the physical way that all the parts of this structure behave' (excessive detail but as accurate as possible; objective); and 'the way these rules do a specific abstract idea' (chosen because of abstract ideas, like a transistor being chosen because it functions like a switch, and the computer program being compiled in such a way because it matches the textual code you wrote, which matches the abstract idea in your own mind; objective in that it is behaving in such a way, with a possibly subjective interpretation of the implications of that behavior).
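As a toy illustration of the interpretation-dependence described above (my own hypothetical example, with function names invented for the sketch): the same low-level bit pattern coming out of a multiplier can be read as two different "answers", depending on which abstract encoding the observer has in mind.

```python
# Two observers watch the same 8-bit multiply circuit. The wires carry the
# same bits; only the abstract encoding each observer has in mind differs.
def multiply_8bit(a_bits: int, b_bits: int) -> int:
    """The 'physical' operation: multiply and keep the low 8 bits."""
    return (a_bits * b_bits) & 0xFF

def as_unsigned(bits: int) -> int:
    return bits

def as_signed(bits: int) -> int:
    """Interpret the same bits as an 8-bit two's-complement number."""
    return bits - 256 if bits >= 128 else bits

result = multiply_8bit(0xFE, 0x02)   # the circuit outputs the bit pattern 0xFC
print(as_unsigned(result))           # observer A reads 252  (254 * 2 mod 256)
print(as_signed(result))             # observer B reads -4   (-2 * 2)
```

Both readings are consistent multiplications within their own encoding; the bits alone don't pick one.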

We could also view computation through the lens of Turing Machines, but then that raises the argument of "what about all these quantum shenanigans, those are not computable by a Turing machine". I'd say that finite approximations get you almost all of what you want. Then there's the objection that "Turing machines aren't available as a fundamental thing", which is true, and that "Turing machines assume a privileged encoding", which is part of what I was trying to discuss above.

(I got kind of rambly in this last section; hopefully I haven't left any branch of the conversation that I forgot to jump back to and complete.)

the lack of argumentation or discussion of this particular assumption throughout the history of the site means it's highly questionable to say that assuming it is "reasonable enough"

While personal identity has mostly not received a single overarching post arguing all the details, its possible points of contention have been discussed to varying degrees: Thou Art Physics, which focuses on getting the idea that you are made up of physics into your head; Identity Isn't in Specific Atoms, which tries to dissolve the common intuition that the specific basic atoms matter; and Timeless Identity, which builds on elements of those posts toward the idea that even if you duplicate a person, both are still 'you'. There is also more, some of which you've linked, but I consider it strange to say that there's a lack of discussion. The sequence those posts are part of has other discussions; I agree they often argue against a baseline of dualism, but I believe they have many points relevant to an argument for computationalism. I think there is a lack of discussion about the very specific points you tend to raise, but as I'll discuss, I find myself confused about their relevance to varying degrees.

There's also the facet of decision-theory posting that LW enjoys, which encourages this class of view, with decision problems like Newcomb's Paradox or Parfit's hitchhiker emphasizing the idea that "you can be instantiated inside a simulation to predict your actions, and you should act as if you, roughly, control their actions because of the similarity of your computational implementations". Of course, this works even without assuming the simulations are conscious, but I do think it has led to clearer consideration because it helps break past people's intuitions. Those intuitions are not made for the scenarios that we face, or will potentially have to face.

Bensinger yet again replied in a manner that seemed to indicate he thought he was arguing against a dualist who thought there was a little ghost inside the machine, an invisible homunculus that violated physicalism

Because most often the people suggesting such are dualists, or have a lot of similar ideas even if they discuss them in an "I am uncertain" manner. I agree Rob could've given a better reply, but it was a reasonable assumption. (I personally found Andesolde's argument confused, with the later parts having a focus on first-person subjective experience that I think is not really useful to consider. There are uncertainties in there, but besides the idea that the mind could be importantly quantum in some way, it didn't seem that relevant.)

That's perfectly fine, but "souls don't exist and thus consciousness and identity must function on top of a physical substrate" is very different from "the identity of a being is given by the abstract classical computation performed by a particular (and reified) subset of the brain's electronic circuit," and the latter has never been given compelling explanations or evidence.

I agree it hasn't been argued in depth — but there have definitely been arguments about the extent to which QM affects the brain. The usual conclusion was that the effect is minor, and/or that we have no evidence for believing it necessary. I would need a decently strong argument that QM is in some way computationally essential.

the entire brain structure in favor of (a slightly augmented version of) its connectome, and the entire chemical make-up of it in favor of its electrical connections.

More than just the electrical signals matter; this is understood by most. There's plenty of uncertainty about the level of detail needed to simulate/model the brain. Computationalism doesn't imply that only the electrical signals matter; it implies that whatever makes up the computation matters, which can be done via tiny molecules & electrons, water pipes, or circuitry. Simplifying a full molecular simulation down to its functional implications is just one example of how far we can simplify, which I believe should extend pretty far.

"your mind is a pattern instantiated in matter"

I agree that people shouldn't assume that just neurons/connections are enough, but I doubt that is a strongly held belief; nor is it a required sub-belief of computationalism. You assume too much about Bensinger's reply when he didn't respond, especially as he was responding to a subargument in the whole chain.
As well, the quoted sentence by Herd is very general — allowing both the neuron connections and the molecular behavior. (There's also the fact that people often handwave over the specifics of what part of the brain you're extracting, because they're talking about the general idea through some specific example that people often think about, such as a worm's neurons.)

For example, for two calculators, wouldn't you agree with a description of them as having the same 'pattern' even if all the atoms aren't in the same position relative to a table? You agree-reacted on one of dirk's comments:

https://www.lesswrong.com/posts/zPM5r3RjossttDrpw/when-is-a-mind-me?commentId=wziGLYTwM4Nb9gd6E

I disagree that your mind is "a pattern instantiated in matter." Your mind is the matter. It's precisely the assumption that the mind is separable from the matter that I would characterize as non-physicalist.

Would the idea that a calculator has some pattern, some logical rules that it is implementing via matter, thus be non-physicalist about calculators? A brain follows the rules of reality, with many implications about how certain molecules constrain movement, how these neuron spikes cause hunger, etcetera. There is a logical/computational core to this that can be reimplemented.

The basic concept of computation at issue here is a feature of the map you could use to approximate reality (i.e., the territory). It is merely part of a mathematical model that, as I've described in response to Ruby earlier, represents a very lossy compression of the underlying physical substrate

Why shouldn't we decide based on a model/category? Just as there are presumably edge-cases to what counts as a 'human' or 'person', there very well may be strange setups for which we can't reasonably determine, to our liking, whether we consider them to be computably implementing a person, a chihuahua, or the weather of Jupiter.
We could try to develop a theory of identity down to the last atom, still operating on a model but at least an extremely specific one, which would presumably force us to narrow in on confusing edge-cases. This would be interesting to do once we have the technology, though I expect there to be edge-cases no matter what, where our values aren't perfectly defined, which might mean preserving option value. I'm also skeptical that most methods present a very lossy compression even if we assume classical circuits. Why would they? (Or, if you're going to raise the idea of only getting some specific sub-class of neuron information, then sure, that probably isn't enough, but I don't care about that.)

From this angle where you believe that computation is not fundamental or entirely well-defined, you can simplify the computationalist proposal as "merely" applying in a very large class of cases. Teleporters have no effect on personal identity due to similarity in atomic makeup up to some small allowance for noise (whether simple noise, or because we can't exactly copy all the quantum parts; I don't care if my lip atoms are slightly adjusted). Cloning does not have a strictly defined "you" and "not-you". Awakening from cryogenics counts as a continuation of you. A simulation implementing all the atomic interactions of your mind is very very likely to be you, and a simulation that has simplified many aspects of that down is also still very likely to be you.

Though there are definitely people who believe that the universe can fundamentally be considered computation, which I find plausible, especially due to a lack of other lenses that aren't just "reality is". With them, your objection does not work without further argumentation.

Going back to the calculator example, you would need to provide argumentation for why the essential parts of the brain can't be implemented computationally.

(You link https://www.lesswrong.com/posts/zPM5r3RjossttDrpw/when-is-a-mind-me#5DqgcLuuTobiKqZAe)

What I value about me is the pattern of beliefs, memories, and values.

The attempted mind-reading of others is (justifiably) seen as rude in conversations over the Internet, but I must nonetheless express very serious skepticism about this claim, as it's currently written. For one, I do not believe that "beliefs" and "values" ultimately make sense as distinct, coherent concepts that carve reality at the joints. This topic has been talked about before on LW a number of times, but I still fully endorse Charlie Steiner's distillation of it in his excellently-written Reducing Goodhart sequence

Concepts can still be useful categorizations even if they aren't hard and fast. Beliefs are often distinct from values in humans. They are vague and intertwine with each other: a belief forming a piece of value that doesn't fade away even once the belief is proven false, a value endorsing a belief for no reason... They are still not one and the same. I also don't see what relevance this has to the statement. I agree with what they said: I value my pattern of beliefs, memories, and values. I don't care about my specific spatial position for identity (except insofar as I don't want to be in a star), or whether I'm solely in baseline reality. Beliefs and values are vague and intertwined, but they do behave differently.

Your objections to CEV also seem to me to follow a similar pattern, where you go from "this does not have a perfect foundational backing" to imply "it has no meaning, and there's nothing to be said about it". The consideration of path-dependency in CEV has been raised before, and it is an area that would be great to understand more. My values would say that I meta-value my beliefs being closer to the truth. There are ambiguities in this area. What about beliefs affecting my values? There's more uncertainty in that region about what I wish to allow.

In any case, the rather abstract "beliefs, memories and values" you solely purport to value fit the category of professed ego-syntonic morals much more so than the category of what actually motivates and generates human behavior, as Steven Byrnes explained in an expectedly outstanding way:

I'd need a whole extra long comment to respond to all the various other parts of your comment chain, such as indexicality, or the part that goes along the lines of saying "professed values are not real". That seems decently false, overly cynical, and also not what Byrnes' linked post tries to imply. I'd say professed values are often what you tend towards, but your basic drives are often strong enough to stall out methods like "spend long hours solving some problem" due to many small opportunities. If you were given a big button to do something you profess to value, then you'd press it.

This also raises the question: why should I care that the human motivational system has certain basic drives driving it forward? Give me a big button and I'd alter my basic drives to be more in line with my professed values; the basic drives are short-sighted. (Well, I'd prefer to wait for superintelligent help, because there are lots of ways to mess that up.) Of course, my not having the big button has practical implications, but I'm primarily arguing against the cynical denial of having any values other than what these basic drives allow.


(I don't entirely like my comment; it could be better. I'd suggest breaking the parent question-post up into a dozen smaller questions if you want discussion, as the many facets could have long comments dedicated to each. Which is part of why there's no single post! You're touching on everything from the theory of how the universe works, to how real the preferences we profess are, to whether our models of reality are useful enough for theories of identity, indexicality, and whether it makes sense to talk about a logical pattern, etc. Then there are things like andesolde's posts, which you cite but which I'm not sure I rely on, where I'd have various objections to their idea of reality as subjective-first. I'll probably find more I dislike about my comment, or realize that I could have worded or explained things better, once I come around to reading back over it with fresh eyes.)

it fits with that definition

Ah, I rewrote my comment a few times and lost what I was referencing. I was originally referencing the geometric meaning (as an alternative to your statistical definition): two vectors at a right angle to each other.

But the statistical understanding works, from what I can tell? You have your initial space with extreme uncertainty, and the orthogonality thesis simply states that (intelligence, goals) are not related — you can pair some intelligence with any goal. They are independent of each other at this most basic level; this is the orthogonality thesis. Then, in practice, you condition your probability distribution over that space with your more specific knowledge about what minds will be created, and how they'll be created. You can consider this as giving you a new space, moving probability around. As an absurd example: even if the height/weight of creatures were uncorrelated in principle, once we update on "this is an athletic human", in that new distribution they are correlated! This is what I was trying to get at with my R^2 example, but apologies that I was unclear, since I was still coming at it from a frame of normal geometry. (Think: each axis is an independent normal distribution, but then you condition on some knowledge that restricts them such that they become correlated.)
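A rough numerical sketch of that conditioning effect (my own toy example; the "athletic" constraint below is an arbitrary stand-in): two variables that are independent in the unconditioned space become correlated once you condition on knowledge that couples them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two variables that are independent "in principle" (the unconditioned space).
height = rng.normal(0.0, 1.0, size=100_000)
weight = rng.normal(0.0, 1.0, size=100_000)
print(np.corrcoef(height, weight)[0, 1])  # ~0: independent before conditioning

# Condition on extra knowledge that couples them (an arbitrary "athletic" band:
# weight roughly tracking height). The restricted distribution is now correlated.
athletic = np.abs(weight - height) < 0.5
print(np.corrcoef(height[athletic], weight[athletic])[0, 1])  # clearly positive
```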

I agree that it is an informal argument and that pinning it down to very detailed specifics isn't necessary or helpful at this level; I'm merely attempting to explain why orthogonality works. It is a statement about the basic state of minds before we consider details, and they are orthogonal there, because it is an argumentative response to assumptions of the form "smart -> not dumb goals".

I'm skeptical that the naming is bad; it fits with that definition and the common understanding of the word. The Orthogonality Thesis is saying that the two qualities of intelligence and goals/values are not necessarily related, which may seem trivial nowadays, but there used to be plenty of people going "if the AI becomes smart, even if it is weird, it will be moral towards humans!" through reasoning of the form "smart -> not dumb goals like paperclips". There's structure imposed on what minds actually get created, based on what architectures are used, what humans train the AI on, etc. Just as two vectors can be orthogonal in R^2 while the actual points you plot in the space are correlated.

I agree, though I haven't seen many proposing that. But also see So8res' Decision theory does not imply that we get to have nice things, though that is coming from the opposite direction (with the start being about people invalidly assuming too much out of LDT cooperation).

Though for our morals, I do think there's an active question of which pieces we feel better replacing with the more formal understanding, because there isn't a sharp distinction between our utility function and our decision theory. Some values trump others when given better tools. Though I agree that replacing all the altruism components goes many steps farther than the best solution in that regard.

Suffering is already on most readers' minds, as it is the central motivating reason behind euthanasia — and for good reason. I agree that policies which cause or ignore suffering, when they could very well avoid it with more work, are unfortunately common. However, those are often not utilitarian policies; and similarly, many objections to various implementations of utilitarianism, and even to the classic "do what seems the obviously right action", are that they ignore significant second-order effects. Policies that don't quantify what unfortunate incentives they create are common, and often originators of much suffering. What form society/culture is allowed/encouraged to take shapes itself further for decades to come, and so can be a very significant cost to many people if we roll straight ahead like in the possible scenario you originally quoted.

Suffering is not directly available to external quantification, but that holds true for ~all pieces of what humans value/disvalue, like happiness, experiencing new things, etcetera. We can quantify these, even if it is nontrivial. None of what I said obviates suffering; rather, it compares suffering to other costs and pieces of information that make euthanasia less valuable (like advancing medical technology).

This doesn't engage with the significant downsides of such a policy that Zvi mentions. There are definite questions about the costs/benefits of allowing euthanasia, even though we wish to allow it, especially when we as a society are young in our ability to handle it. Glossing the only significant feature as 'torturing people' ignores:

  • the very significant costs of people dying, which is compounded by the question of what equilibrium the mental/social availability of euthanasia is like
  • the typical LessWrong beliefs about how good technology will get in the coming years/decades. Once we have a better understanding of humans, massively improving whatever is causing them to suffer, whether through medical, social, or other means, becomes more and more actionable
  • what the actual distribution of suffering is; I expect most cases are not at the level we/I would call torture, even though they are very unpleasant (there's a meaningful difference between someone suicidally depressed and someone who has a disease that causes them pain every waking moment, and variations within those)

Being allowed to die is an important choice to let people make, but it does require a considered look at how much harm the easy availability of such an option causes. If it is disputed how likely society is to end up in a bad equilibrium like the post describes, then that's notable, but it would be good to see arguments for/against instead.

(Edit: I don't entirely like my reply, but I think it is important to push back against trivial rounding off of important issues. Especially on LW.)

Any opinions on how it compares to Fun Theory? (Though that's less about all of utopia, it is still a significant part)

I think that is part of it, but a lot of the problem is just humans being bad at coordination. Like the government doing regulations. If we had an idealized free-market society, then the way to get your views across would 'just' be to sign up for a filter (etc.) that down-weights buying from said company based on your views. Then they have more of an incentive to alter their behavior. But it is hard to manage that; there's a lot of friction to doing anything like that, much of it natural. Thus government serves as our essential way to coordinate on important enough issues, but of course government has a lot of problems in accurately throwing its weight around. Companies that are top-down find it a lot easier to coordinate behavior. As well, a company has a smaller problem than an entire government would have in trying to plan its internal economy.

I definitely agree that it doesn't give reason to support a human-like algorithm; I was focusing in on the part about adding numbers reliably.
