Building on Tononi's earlier work on Integrated Information Theory, Maguire et al. have apparently formulated consciousness as the lossless integration of information, which they argue requires noncomputable functions and thus implies that consciousness cannot be modeled computationally.

I'm personally skeptical of this, but their paper contains some impressive-looking formal mathematical proofs, and I'll admit I lack the mathematical competence to judge their veracity. Anyone with greater mathematical acumen want to take a look?


These authors argue that consciousness must be based on lossless integration, since otherwise memory traces would be altered each time we remember something. The problem is, that is exactly what happens.

- Jay

So they basically tried to model an infinite memory system and it turned out that it wasn't consistent with physically plausible computational systems. Big deal.

As with most papers in this genre, the trouble probably won't be the math. The trouble will be in the interface between the math and the thing it's supposed to be describing.

Yup. In particular, most people I know don't take Tononi's theory of consciousness seriously, and neither do I.

What are the problems with Tononi's theory? Can you even call it a theory? I thought it was more of a paradigm.

This is the kind of thing where, when I take the outside view of my response, it looks bad. There is a scholarly paper refuting one of my strongly-held beliefs, a belief I arrived at through armchair reasoning. And without reading it, or even trying to understand their argument indirectly, I'm going to brush it off as wrong. I'm doing so merely based on the kind of bad argument I expect it to be (bad philosophy doing all the work, wrapped in a little bit of correct math to prove some minor point once the bad assumptions are made), because that is what I think it would take to make a mathematical argument against my strongly-held belief, and because other people who share that belief say that this is exactly the mistake the paper makes.

Still not wasting my time on this though.

To be fair to yourself, would you reject it if it were a proof of something you agreed with?

If they had gone out and 'proven' mathematically that sentient robots ARE possible, I'd be equally skeptical - not of the conclusion, but of the validity of the proof, because the core of the question is not mathematical in nature.

None of the words in that paper's abstract mean what they conventionally mean, nor do they mean what they sound like they mean, nor do they mean things that are useful. This is an esoteric point about math, couched in words that falsely resemble philosophy of mind.

The paper does not use the word "sentient" anywhere. That was added by New Scientist.

From the paper: "In particular, memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay." I wonder if they've ever met a human being? That's pretty much how we work. Memories don't so much decay as get influenced slightly every time we remember them. That's one reason why they get witnesses to crimes to write stuff down straight away, rather than waiting till a trial, etc.

Sigh. To go from 'brains are pretty good at storing information' to 'therefore brains must never lose data in an information-theoretic sense' is so misleading that it makes me wonder if it's deliberate.

They seem to be doing this to build up the argument that a non-lossy consciousness isn't computable. (And therefore, humans are special.) The irony is that they're trying to make human consciousness special by making it more stereotypically robot-like, by implying it can't lose information.
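The point these comments make, that each retrieval slightly perturbs the stored trace rather than reading it back losslessly, can be sketched as a toy simulation. This is my own illustration, not anything from the paper: the blending model of recall is an assumption chosen for simplicity.

```python
import random

def recall(trace, context, blend=0.1):
    """Return the recalled trace, nudged slightly toward the retrieval context.

    This models lossy, reconstructive memory: each recall blends the stored
    trace with whatever context it is retrieved in.
    """
    return [(1 - blend) * t + blend * c for t, c in zip(trace, context)]

def drift(a, b):
    """Mean absolute difference between two traces."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

random.seed(0)
original = [random.random() for _ in range(8)]  # the initial memory trace
trace = original[:]

for _ in range(20):
    # Each recall happens in a fresh, random context.
    context = [random.random() for _ in range(8)]
    trace = recall(trace, context)

# After repeated recall the trace has drifted away from the original,
# i.e. information about the original has been lost.
print(drift(original, trace) > 0)
```

Under the paper's lossless-integration assumption this drift would have to be exactly zero, which is the behaviour the commenters above are objecting to as empirically false.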

As Douglas_Knight's link points out, the lossless integration model has been falsified experimentally, so the "formulation of consciousness as a lossless integration of information" does not apply to the actual observed consciousness. The math may or may not be interesting in itself, but it has no bearing on whether "sentient robots" are possible. Fortunately, the authors' conclusion is suitably qualified (emphasis mine):

This result implies that *if* unitary consciousness exists, it cannot be modelled computationally.

In other words, well, unitary consciousness doesn't exist. Score one more for the Buddhists.

This is terrible even by New Scientist standards. I do wish they'd stop writing up random arXiv ramblings as if they'd even passed basic peer review. (Conference proceedings generally haven't.)

I'm not sure what conference proceedings you've been looking at, but the ones I've managed to publish in generally do have basic peer review (I know because I got comments back from the reviewers).

Though, whether or not the quality of peer review done by conferences is any good is another matter entirely.