Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI right unless we get the ontology of consciousness right.
Followed by: Does functionalism imply dualism?
Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it, full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and what resources they have. Also, there has been progress.
I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'." That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI subculture.
Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?
I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody another instance of the same mind.
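The coarse-graining picture can be made concrete with a toy sketch (entirely my own illustration, not from the post): two substrates with completely different micro-dynamics realize the same abstract two-state machine under their respective coarse-graining maps, which is the sense in which functionalism would identify them as "the same mind".

```python
# Toy illustration of multiple realizability: two different "microphysical"
# substrates embody the same abstract two-state machine under a
# coarse-graining (equivalence-class) map. All names here are hypothetical.

# Substrate A: micro-states are integers 0..5; parity is the macro-state.
def step_a(micro):
    return (micro + 1) % 6          # deterministic micro-dynamics

def coarse_a(micro):
    return "S0" if micro % 2 == 0 else "S1"

# Substrate B: micro-states are strings; the dynamics are entirely different.
def step_b(micro):
    return micro[::-1] + "x" if len(micro) < 4 else micro[:1]

def coarse_b(micro):
    return "S0" if len(micro) % 2 == 1 else "S1"

def macro_trace(step, coarse, micro, n):
    """The macro-level state sequence seen through the coarse-graining."""
    trace = []
    for _ in range(n):
        trace.append(coarse(micro))
        micro = step(micro)
    return trace

# Both substrates, suitably initialized, produce the same macro sequence
# S0, S1, S0, S1, ...: one abstract "machine", two physical realizations.
```

The microscopic causality differs completely between the two substrates; only the coarse-grained state sequence coincides, which is all the functionalist identification requires.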
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.
But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as an artificial neuroscientist, use its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then somehow extract from that model the criteria which the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.
It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational outsourcing.
Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical model on the basis of which it could then attempt conceptual and volitional extrapolation.
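The line-fitting point can be made literal. A minimal sketch (my own illustration, not from the post, stdlib Python only): ordinary least squares always returns a "best" line, even for data that no line can describe, just as a classical-model-fitting protocol always returns *some* classical model.

```python
# Toy illustration: least squares always yields a "best" line, even when
# the data are plainly non-linear - the fit exists regardless of how bad it is.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b via the normal equations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Data generated by a parabola: no line fits well, but a line is returned.
xs = [-2, -1, 0, 1, 2]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)   # slope 0, intercept 2 - a "best fit" that misses
                          # the structure of the data entirely
```

The returned line y = 2 has enormous residuals, but nothing in the fitting procedure itself flags that; only an external goodness-of-fit check would. That is the analogue of the reality checks discussed below.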
Clearly you can try to guard against comparably wrong outcomes by adding reality checks and second opinions to your protocol for FAI development. At a more down-to-earth level, these exact mistakes could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? More ambitiously, is there really a valid a priori argument against functionalism regarding consciousness - the identification of consciousness with a class of computational process?
I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion of dynamism.
The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin, really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this ought to be especially clear for color, but it applies equally to everything else.
In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.
Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel to the detailed microphysical reality. The idea is somewhat absurd.
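As a minimal numerical sketch of that claim (hypothetical numbers, stdlib Python only): the "pressure" of an ideal gas is nothing over and above a statistic of the particle microstates, P = N m ⟨v²⟩ / 3V, computed directly from them rather than existing alongside them.

```python
# Sketch: "pressure" as a nicely behaved average over microstates.
# For an ideal gas, P = N * m * <v^2> / (3 * V); there is no separate
# "pressure entity" beyond this statistic of the particle velocities.
import random

random.seed(0)
N, m, V = 10_000, 1.0, 1.0   # particle count, mass, volume (arbitrary units)

# Each particle's squared speed: sum of three unit-variance velocity
# components, so <v^2> is about 3 in these units.
vsq = [sum(random.gauss(0, 1) ** 2 for _ in range(3)) for _ in range(N)]

pressure = N * m * (sum(vsq) / N) / (3 * V)   # just an average, nothing more
```

Delete the particles and "pressure" has no residue; it is a summary of the microphysical detail, which is exactly the status the functionalist assigns to mental states.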
Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the functionalist approach to mind.
Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment. The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.
This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.
Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.
Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is, they may even know about path integrals and the various arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality. What I want to say, is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are with physicists who are happy to be vague about what a "world" is. It's really not so different to Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many Worlds theorist to seek an exact idea of what a world is, as you see Robin Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.
One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space), still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the gist of what I'm talking about.
I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of" something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there, observing all those parts, then you won't get very far.
My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in which these aspects of reality are regarded as unreal, as appearances only, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more imagination about what science will say when it's more advanced.
I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.
So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that, it may be the best option for a day job. But what it means for the investigations detailed in this essay, is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.
My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.
As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.
It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success. Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of investigation that we've hardly begun to follow.
Everything computable by a quantum computer is computable by a classical computer (only slower, in some cases). Even if the human brain does in fact do some quantum calculations, a corresponding classical brain could be made. If you really believe that functionalism requires dualism, then I do not see how quantum mechanics can possibly help.
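The commenter's premise can be sketched concretely (my own illustration, stdlib Python only): a classical program can track a quantum state vector exactly, at the cost of overhead that grows exponentially with qubit count. Here, a single qubit evolving under the Hadamard gate:

```python
# Sketch: classically simulating a quantum computation. A one-qubit state
# is a pair of complex amplitudes; gates are linear maps on that pair.
# Classical simulation is always possible - just exponentially costly as
# the number of qubits grows.
import math

def hadamard(state):
    """Apply the Hadamard gate H to a 1-qubit state [amp0, amp1]."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

state = [1 + 0j, 0 + 0j]             # |0>
state = hadamard(state)              # equal superposition of |0> and |1>
state = hadamard(state)              # H is its own inverse: back to |0>
probs = [abs(a) ** 2 for a in state] # Born-rule outcome probabilities
```

Nothing here exceeds classical computation; the question at issue in the post is not computability but ontology, which is the distinction the reply below draws.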
I'm bothered by the fact that you speak of modeling brains with fMRI. fMRI tracks blood flow, not neural activity (they are correlated). It will not be useful AFAIK for scanning a brain at the neuronal level, and we will (most likely) have to map every neural connection before we'd be able to emulate a brain. Speaking of "coarse-grained neurocomputational states" may be nonsensical; we don't know how much of the brain we'll have to emulate to get it right.
Lastly, my recollection from back when I went searching for evidence that the brain was a quantum computer - in a feeble and ultimately doomed attempt to maintain my belief in dualism - is that it was very unlikely that the brain used quantum computation.
The role of quantum mechanics in this argument is not to transcend Turing-equivalence. The role of quantum mechanics is to provide a rationale for an ontology containing entities which are fundamental yet have complex states, something which is necessary if you don't want to think of the mind as a non-fundamental state machine. Entangled states as actual states is not absolutely the only way to do this - e.g. there are plenty of topological structures, potentially playing a role in physics, which are complex unities - but it's a natural candidate.
I see I way overestimated the resolution of fMRI: it's of the order of a cubic millimeter, and a cubic millimeter contains about a billion synapses! So even with a really long time series, any model you make is probably going to be pretty crappy - it'll reproduce what your experimental subjects did in the precise situations under which they were scanned, but any other situation is liable to produce something bad.
I meant neuronal states described in a way that is coarse-grained with respect to fundamental physical degrees of freedom.
The standard argument against quantum biology of any sort is that living matter is at room temperature and so everything decoheres. One of the reasons that microtubules are attractive, for someone interested in quantum biology, is that they have some of the right properties to be storing topological quantum entanglement, which is especially robust. It would still be a huge leap from that to what I'm talking about, because I'm saying the whole conscious mind is a single quantum entity, so if it's based on collective excitations of electrons in microtubules (for example), we would still require some way for electrons in different microtubules to be coherently coupled. Any serious attempt in this direction will also have to study the cellular and intercellular medium from a condensed-matter perspective, to see if there are any collective quantum effects, e.g. in the ambient electromagnetic field on subcellular scales, or in the form of phonons in the fibrous intra- and intercellular matrix, which could help to mediate such a coupling.
Why don't you want to think of the mind as non-fundamental? Sounds like rationalization.
I don't understand how a quantum computer satisfies this requirement, but not a classical computer.
This seems like such a necessary component of your argument that I think it was a bad place to skimp on the explanation. The outline you gave did little to convince me, I'm afraid. I could be wrong, but my perception is that I won't be alone here in that position. Split the post in two if it makes it too long...
Maybe it's me, but I really didn't get the "why" of your "functionalism implies dualism" thesis. The qualia issue has been addressed at length by people like Douglas Hofstadter in a (to me) quite convincing way, or indirectly by Eliezer Yudkowsky in the Sequences (in How An Algorithm Feels From Inside, for example), and the "borderline cases" issue is just a very classical issue in a reductionist view: a multi-layered map of a single-layered reality. It's the same kind of "borderline case" you get when you try to say, at the quark level, exactly what is part of a given object and what is not, and I really don't see how it implies dualism.
Well, let's review what it is that we're trying to explain. Consider what you are, from the subjective perspective. You're a locus of awareness, experiencing some texture of sensations organized into forms. Then, we have what is supposed to be the physical reality of you: roughly 10^26 atoms, executing an intricate nested dance of systems within systems, inside a skull somewhere. An individual sensation - the finest grain of that texture of sensation you're always experiencing, e.g. some pixel of red in your visual field - is supposed to be the very same thing as a particular massed movement of billions of atoms somewhere in your visual cortex. Even if you say that the redness is "how this movement feels" or "how it feels to be this movement" or some similar turn of phrase, you're still tending towards a type of dualism, property dualism, because you're saying that along with its physically recognizable properties, this flow of atoms has a property that otherwise plays no role in physics, the property of "how it feels".
For macroscopic concepts like chair, we can get away with vagueness about borderline cases, because there's no reason to believe that "chair" is anything more than a heuristic concept for talking about certain large clusters of atoms. The experience of a chair is a collaboration between a world of atoms and a mind of perceptions and concepts. But you can't reduce the mind itself in this way, because of the circularity involved.
So what part of the answer do you disagree with?
As near as I can tell, you say our models look too imprecise to explain consciousness. You must know the argument that consciousness ain't that precise - how do you respond? Because when I put this together with the first link, I don't see what you have left to explain. (But I may be slightly drunk.)
No, not the very same thing. Many kinds of "massed movement of billions of atoms" can generate the same sensation. Sure, exactly the same movement of the whole brain will always generate the same sensation, but in real life that won't happen; a brain will never be in exactly the same state twice.
The configuration of atoms on my hard disk has a property of being an ext4 filesystem, while being an ext4 filesystem plays no role in physics - so I believe in property dualism? Property is part of the map, not of the territory. The property of that hard disk is that it holds that movie file. The same movie file (for me, at the level of the map which is useful to me) exists on my USB key, and on that DVD. The physical configuration of the two is totally different; for me it's the same file.
It's exactly the same with "feeling" or "seeing red". And it doesn't matter that my DVD is slightly damaged so some DVD players will be able to read it, but others won't, making it a "borderline case".
I don't see the problem with that kind of circularity (but maybe I read too much Hofstadter, so "strange loops" have become a normal fundamental concept to me). Also, you seem to forget that perception involves vagueness. Our perceptions aren't a binary "red" or "orange". When required to classify something as "red" or "orange", we'll end up with one (one will get slightly higher activation), but overall, the "red" or "orange" symbols in our brain are more-or-less strongly activated and can be activated at the same time for borderline cases. So the borderline cases aren't even that problematic.
Downvoted for many wishy-washy unfounded statements, but mostly for mentioning the word tensor in the synopsis and never again in the rest of the post. Have you considered taking a basic technical writing course?
That's a genuine oversight; I added the synopsis at the last minute. The mind as tensor factor was discussed in a previous post. I've added a comment to the essay, but maybe I should just change the synopsis.
Upvoted for presenting an (IMO outlandish and perhaps confused) view in a reasonable and potentially productive way.
You are obviously smart and dedicated; the fact that I can't perfectly understand what you are talking about is just as likely to be my fault as it is to be yours. I want there to be more smart people pursuing eccentric philosophical quests. I would fund you if I had substantial excess financial capacity; I hope someone else does.
Thanks for saying this.
I think if you did become a professional physicist, your chances of finding funding for this project would increase significantly. Even people who are sympathetic to your approach have no good reason for thinking you have the ability to make significant progress in this field. If you do have some qualifications or experience that would lead your readers to see that you aren't just a wackjob, you should really include it in your pitches.
Certainly, if I had a physics job already, this post would not exist; I would just be invisibly getting on with the project. Then there's the option of writing a paper. Last time around, the plan was to do something small but solid, like prove a minor conjecture. But I ended up taking the ambitious path, and now I have some ideas to develop which are very cool, but not yet validated (or invalidated). I might talk about them on a physics forum, but talking about them here would prove nothing, since the audience isn't qualified to judge them.
So yes, in terms of obtaining practical assistance, this post was a long shot. I had to actually run out of money before I was willing to sit down and write it. Afterwards I felt this immense relief at making a relatively uncompromising statement of my agenda. Hopefully a few other people got something from it as well.
So is it possible you'll end up with a conclusion that whenever people go unconscious, the quantum state process gets scrambled and they die for real, and then later a new process gets started and some brand new person with their memories wakes up?
Your agenda strikes me as potentially fruitful but insufficiently meta. There are many philosophical problems an FAI would need to be able to solve, and I certainly agree that consciousness is a huge one. But this would seem to me to indicate that we need to find a way to automate philosophical progress generally, rather than find a way to algorithmicize our human-derived intuitions about consciousness. No? Are you of the opinion that we need to understand how brains do their magic if we're to be sure that our seed AI will be able to figure out how to do similar magic?
Wheeler talks about quantum mechanics as statistically describing the behavior of masses of logical operations. Goertzel is like 'well logical operations are just a rather rigid and unsatisfying form of thought, maybe you get quantum from masses of Mind operations'. As far as crackpot theories go it seems okay, and superficially looks like what you're trying to do in a much more technical way by unifying physics and experience.
Anyway, I wish you good luck on your journey.
(I apologize if this comment is unclear, I am highly distracted.)
The CEV idea there would be to create an AI which is optimizing for expected satisfaction of the utility function that would be output by such a process. If the AI's other functionality is good, it will start with reasonable guesses about what such a process would output, and rapidly improve those guesses. As it further improved, gathered more data, etc., it would approximate that output better and better.
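The "reasonable guesses, then refinement" dynamic can be sketched as maximizing expected utility under a distribution over candidate utility functions. This is my own toy construction, not an actual CEV algorithm: the candidate utilities, actions, and likelihoods are all invented for illustration, and a simple Bayes update stands in for whatever inference the real AI would do.

```python
# Toy sketch: act under uncertainty about the true utility function,
# then refine the belief as evidence arrives.

# Hypothetical candidate utility functions over two hypothetical actions:
candidates = {
    "u1": {"action_a": 1.0, "action_b": 0.0},
    "u2": {"action_a": 0.2, "action_b": 0.9},
}
belief = {"u1": 0.5, "u2": 0.5}  # initial reasonable guess

def best_action(belief):
    """Pick the action maximizing expected utility under the current belief."""
    actions = ["action_a", "action_b"]
    def expected(a):
        return sum(p * candidates[u][a] for u, p in belief.items())
    return max(actions, key=expected)

# Initially the guess favors action_a:
assert best_action(belief) == "action_a"

# Evidence arrives favoring u2 (hypothetical 4:1 likelihood ratio); Bayes update:
likelihood = {"u1": 0.2, "u2": 0.8}
unnorm = {u: belief[u] * likelihood[u] for u in belief}
z = sum(unnorm.values())
belief = {u: w / z for u, w in unnorm.items()}

# The refined belief flips the recommended action:
assert best_action(belief) == "action_b"
```

The point is only structural: the AI never needs the finished output of the extrapolation process up front, just a belief about it that improves with data.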