All of NxGenSentience's Comments + Replies

No changes that I'd recommend, at all. SPECIAL NOTE: please don't interpret the drop in the number of comments over the last couple of weeks as a drop in interest by forum participants. The issues of these weeks are the heart of the reason for existence of nearly all the rest of the Bostrom book, and many of the auxiliary papers and references we've seen have ultimately also been context for confronting and brainstorming about the issue now at hand. I myself, just as one example, have a number of actual ideas that I've been working on for two weeks, but I'v... (read more)

Thanks for posting this link, and for the auxiliary comments. I try to follow these issues, as viewed from this sector of thinkers, pretty closely (the web site Defense One often has some good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often), but I had missed this paper until now. Grateful, as I say, for your posting of this.

Glad it's of interest to you. I found it while checking the sources of this motherboard article []. There's another document [] linked from there which you may or may not have seen, but it lacks mention of strong AI, instead focusing on automating war in general. After reading that I feel like in the near future it'll be much easier to justify concrete AI takeover mechanisms [] to the public.

Before we continue, one more warning. If you're not already doing most of your thinking at least half-way along the 3 to 4 transition (which I will hereon refer to as reaching 4/3), you will probably also not fully understand what I've written below because that's unfortunately also about how far along you have to be before constructive development theory makes intuitive sense to most people. I know that sounds like an excuse so I can say whatever I want, but before reaching 4/3 people tend to find constructive development theory confusing and probably no

... (read more)

I didn't exactly say that, or at least, didn't intend to exactly say that. It's correct of you to ask for that clarification.

When I say "vindicated the theory", that was, admittedly, pretty vague.

What I should have said was that the recent experiments removed what has been, statistically, the most common and continuing objection to the theory, by showing that quantum effects in microtubules, under the kind of environmental conditions that are relevant, can indeed be maintained long enough for quantum processes to "run their course"... (read more)

Hi, Yes, for the kickstarter option, that seems to be almost a requirement. People have to see what they are asked to invest in.

The kickstarter option is somewhat my second-choice plan, or I'd be further along on that already. I have several things going on that are pulling me in different directions.

To expand just a bit on the evolution of my YouTube idea: originally – a couple months before I recognized more poignantly the value to the HLAI R & D community of doing well-designed, issue-sophisticated, genuinely useful (to other than a naïve audienc... (read more)

Same question as Luke's. I probably would have jumped at it. I have a standing offer to make hi-def (1080) video interviews, documentaries, etc., and competent, penetrating Q and A sessions, with people like Bostrom, Google-ites setting up the AI laboratories, and other vibrant, creative, contemporary AI-relevant players.

I have knowledge of AI, general comp sci, deep and broad neuroscience, the mind-body problem (philosophically understood in GREAT detail -- college honors thesis at UCB was on that) and deep, detailed knowledge of all the big neurophilosophy play... (read more)

Just to be certain I understand you correctly, you say that it's likely that the brain uses quantum effects for decision making?
If you already have the equipment, what's stopping you from setting up the relevant YouTube channel right now? It's probably much easier to seek funding from the position of having already something to show.

Same question as Luke's. I probably would have jumped at it, if only to make seed money to sponsor other useful projects, like the following.

I have a standing offer to make hi-def (1080) video interviews, documentaries, etc and competent, penetrating Q and A sessions and documentaries with key, relevant players and theoreticians in AI and related work. This includes individual thinkers, labs, Google's AI work, the list is endless.

I have knowledge of AI, general comp sci, considerable knowledge of neuroscience, the mind-body problem (philosophically und... (read more)

It's nice to hear a quote from Wittgenstein. I hope we can get around to discussing the deeper meaning of this, which applies to all kinds of things... most especially, the process by which each kind of creature (bats, fish, homo sapiens, and potential embodied artifactual (n.1) minds -- and also minds not embodied in the contemporaneously most often used sense of the term; Watson was not embodied in that sense) *constructs its own ontology* (or ought to, by virtue of being imbued with the right sort of architecture.)

That latter sense, and the incommensurabil... (read more)

People do not behave as if we have utilities given by a particular numerical function that collapses all of their hopes and goals into one number, and machines need not do it that way, either.

I think this point is well said, and completely correct.


Why not also think about making other kinds of systems?

An AGI could have a vast array of hedges, controls, limitations, conflicting tendencies and tropisms which frequently cancel each other out and prevent dangerous action.

The book does scratch the surface on these issues, but it is not all about fail-saf

... (read more)
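The design the comment above gestures at (many hedges, controls, and conflicting tendencies that cancel dangerous action) can be sketched in code. This is a purely illustrative toy, not anything from the book: the names and the lexicographic scoring rule are my own assumptions. The key idea is that hard "tripwire" constraints veto actions outright, instead of being traded off inside a single scalar utility.

```python
# Hypothetical sketch of a veto-based action filter (illustrative names).
# Instead of collapsing all goals into one number, the agent keeps several
# independent objectives plus hard tripwire constraints, any one of which
# can cancel an action outright.

def choose_action(actions, objectives, tripwires):
    """Pick the best non-vetoed action; return None if everything is vetoed."""
    permitted = [a for a in actions if not any(trip(a) for trip in tripwires)]
    if not permitted:
        return None  # fail safe: do nothing rather than act dangerously
    # Compare tuples of objective scores, so no single objective can
    # silently buy off the others by sheer magnitude on one axis.
    return max(permitted, key=lambda a: tuple(obj(a) for obj in objectives))

# Toy usage: "self_modify" scores highest but is cancelled by a tripwire.
actions = ["explore", "self_modify", "idle"]
objectives = [lambda a: {"explore": 2, "self_modify": 9, "idle": 0}[a]]
tripwires = [lambda a: a == "self_modify"]
print(choose_action(actions, objectives, tripwires))  # expected: explore
```

The point of the sketch is structural: a scalar utility would happily pick "self_modify" at score 9, while the veto layer removes it from consideration before any scoring happens.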

My general problem with "utilitarianism" is that it's sort of like Douglas Adams' "42." An answer of the wrong type to a difficult question. Of course we should maximize, that is a useful ingredient of the answer, but is not the only (or the most interesting) ingredient.

Taking off from the end of that point, I might add (but I think this was probably part of your total point, here, about "the most interesting" ingredient) that people sometimes forget that utilitarianism is not a theory itself about what is normatively desi... (read more)

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is...

This is totally right as well. We live inside our ontologies. I think one of the most distinctive, and important, features of acting, successfully aware minds (I won't call them 'intelligences" because of what I am going to say further down,... (read more)

One way intelligence and goals might be related is that the ontology an agent uses (e.g. whether it thinks of the world it deals with in terms of atoms or agents or objects) as well as the mental systems it has (e.g. whether it has true/false beliefs, or probabilistic beliefs) might change how capable it is, as well as which values it can comprehend.

I think the remarks about goals being ontologically-associated, are absolutely spot on. Goals, and any “values” distinguishing among the possible future goals in the agent's goal space, are built around tha... (read more)

To continue:

If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of "value" knowledge, or experience, when designing a new mind architecture, is called for. To me, that is what makes HLAI (and above) worth doing.

But to step back from this wondrous, limitless potential, and suggest some kind of metric based on the values of the "accounting department", those who are famous for knowing the cost of everything but the value of nothing, and ... (read more)

Thanks, I'll have a look. And just to be clear, watching *The Machine* wasn't driven primarily by prurient interest -- I was drawn in by a reviewer who mentioned that the backstory for the film was a near-future world-wide recession pitting the West against China, and that intelligent battlefield robots and other devices were the "new arms race" in this scenario.

That, and that the film reviewer mentioned that (i) the robot designer used quantum computing to get his creation to pass the Turing Test (a test I have doubts about as do other res... (read more)

Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.

I am not sure I am very sympathetic with a pattern of thinking that keeps cropping up, viz., as soon as our easy and reflexive intuitions about intelligence become strained, we seem to back down the ladder a notch, and propose just using an economic measure of "success".

Aside from (i) somewhat of a poverty of philosophical imagination (e.g. what about measuring the intrinsic interestingness of... (read more)

To continue: If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of "value" knowledge, or experience, when designing a new mind architecture, is called for. To me, that is what makes HLAI (and above) worth doing.

But to step back from this wondrous, limitless potential, and suggest some kind of metric based on the values of the "accounting department" -- those who are famous for knowing the cost of everything but the value of nothing, and even more famous for, by default, often derisively calling their venal, bottom-line, unimaginative dollars-and-cents worldview a "realistic" viewpoint (usually a constraint based on lack of vision) when faced with pleas for SETI grants, or (originally) money for the National Supercomputing Grid, ..., or any of dozens of other projects that represent human aspiration at its best -- seems, to me, to be shocking. I found myself wondering if the moderator was saying that with a straight face, or (hopefully) putting on the hat of a good interlocutor and firestarter, trying to flush out some good comments, because this week had a diminished activity post level.

Irrespective of that, another defect, as I mentioned, is that economics as we know it will prove to be relevant for an eyeblink in the history of the human species (assuming we endure.) We are closer to the end of this kind of scarcity-based economics than the beginning (assuming even one or more singularity-style scenarios come to pass, like nano.) It reminds me of the TV series Star Trek: The Next Generation, in an episode wherein someone from our time ends up aboard the Enterprise of the future, and is walking down a corridor speaking with Picard. The visitor asks Picard something like "who pays for all this", as the visitor is taking in the impressive technology of the future vessel. Picard replies something like, "The economics of the future are somewhat differen
If you are fine with fiction, I think the Minds from Iain Banks Culture are a much better starting point than dancing naked girls. In particular, the book Excession describes the "Infinite Fun Space" where Minds go to play...

If we could easily see how a rich conception of consciousness could supervene on pure information

I have to confess that I might be the one person in this business who never really understood the concept of supervenience -- either "weak supervenience" or "strong supervenience." I've read Chalmers, Dennett, the journals on the concept... never really "snapped-in" for me. So when the term is used, I have to just recuse myself and let those who do understand it, finish their line of thought.

To me, supervenience seems like a fu... (read more)

Supervenience is not a claim like epiphenomenalism; it is a set of constraints that represent some broad naturalist conclusions.

Three types of information in the brain (and perhaps other platforms), and (coming soon) why we should care

Before I make some remarks, I would recommend Leonard Susskind’s (for those who don’t know him already – though most folks in here probably do -- he is a physicist at the Stanford Institute for Theoretical Physics) very accessible 55 min YouTube presentation called “The World as Hologram.” It is not as corny as it might sound, but is a lecture on the indestructibility of information, black holes (which is a convenient lodestone for him to discuss the ... (read more)

Well, I ran several topics together in the same post, and that was perhaps careless planning. And, in any case I do not expect slavish agreement just because I make the claim.

And, neither should you, just by flatly denying it, with nary a word to clue me in about your reservations about what has, in the last 10 years, transitioned from a convenient metaphor in quantum physics, cosmology, and other disciplines, to a growing consensus about the actual truth of things. (Objections to this growing consensus, when they actually are made, seem to be mostly argu... (read more)

A growing consensus isn't a done deal. It's a matter of fact that information ontology isn't the established consensus in the way that evolution is. You are entitled to opinions, but not to pass off opinions as fact. There is enough confusion about physics already.

You bring in the issue of objections to information ontology. The unstated argument seems to be that since there are no valid objections, there is nothing to stop it becoming the established consensus, so it is as good as established. What would a universe in which information is not fundamental look like, as opposed to one where it is? I would expect a universe where information is not fundamental to look like one where information always requires some physical, material or energetic, medium or carrier -- a sheet of paper, a radio wave, a train of pulses going down a T1 line. That appears to be the case.

I am not sure why you brought Bostrom in. For what it's worth, I don't think a Bostrom-style mathematical universe is quite the same as a single-universe information ontology. I don't know who you think is doing that, or why you brought it in.

Do you think IO helps with the mind-body problem? I think you need to do more than subtract the stuffiness from matter. If we could easily see how a rich conception of consciousness could supervene on pure information, we would easily be able to see how computers could have qualia, which we can't. We need more in our ontology, not less.

A cell can be in a huge number of internal states. Simulating a single cell in a satisfactory way will be impossible for many years. What portion of this detail matters to cognition, however? If we have to consider every time a gene is expressed or protein gets phosphorylated as an information processing event, an awful lot of data processing is going on within neurons, and very quickly.

I agree not only with this sentence, but with this entire post. Which of the many, many degrees of freedom of a neuron, are "housekeeping" and don't contribut... (read more)

No, information ontology isn't a done deal.

Will definitely do so. I can see several upcoming weeks when these questions will fit nicely, including perhaps the very next one. Regards....

Intra-individual neuroplasticity and IQ - Something we can do for ourselves (and those we care about) right now

Sorry to get this one in at the last minute, but better late than..., and some of you will see this.

Many will be familiar with the Harvard psychiatrist, neuroscience researcher, and professor of medicine, John Ratey, MD, from seeing his NYT bestselling books in recent years. He excels at writing for the intelligent lay audience, yet without dumbing down his books to the point where they are useless to those of us who read above the layman's level in... (read more)

Single-metric versions of intelligence are going the way of the dinosaur. In practical contexts, it's much better to test for a bunch of specific skills and aptitudes and to create a predictive model of success at the desired task.

I thought that this had become a fairly dominant view, over 20 years ago. See this PDF:

I first read the book in the early nineties, though Howard Gardner had published the first edition in 1982. I was at first a bit extra skeptical that it would be ba... (read more)

I am a little curious that the "seven kinds of intelligence" (give or take a few, in recent years) notion has not been mentioned much, if at all, even if just for completeness.... Has that been discredited by some body of argument or consensus, that I missed somewhere along the line, in the last few years?

Particularly in many approaches to AI, which seem to view, almost a priori (I'll skip the italics and save them for emphasis) the approach of the day to be: work on (ostensibly) "component" features of intelligent agents as we conceive... (read more)

Bring these questions back up in later discussions!


Thanks for the excellent post ... both of them, actually. I was just getting ready this morning to reply to the one from a couple days ago about Damasio et al., regarding human vs machine mechanisms underneath the two classes of beings' reasoning "logically" -- even when humans do reason logically. I read that post at the time and it had sparked some new lines of thought - for me at least - that I was considering for two days. (Actually kept me awake that night thinking of an entirely new way -- different from any I have seen mentioned -- in... (read more)

I'll have to weigh in with Bostrom on this one, though I think it depends a lot on the individual brain-mind, i.e., how your particular personality crunches the data.

Some people are "information consumers", others are "information producers". I think Einstein might have used the obvious terms supercritical vs subcritical minds at some point -- terms that in any case (Einstein or not) naturally occurred to me (and probably lots of people), and that I've used since my teenage years, just in talking to my friends, to describe different people's m... (read more)

A nice paper, as are the others this article's topic cloud links with.

Would you consider taking a one extra week pause, after next week's presentation is up and live (i.e. give next week a 2 week duration)? I realize there is lots of material to cover in the book. You could perhaps take a vote late next week to see how the participants feel about it. For me, I enjoy reading all the links and extra sources (please, once again, do keep those coming.) But it exponentially increases the weekly load. Luke graciously stops in now and then and drops off a link, and usually that leads me to downloading half a dozen other PDFs tha... (read more)

Please keep the links coming at the same rate (unless the workload for you is unfairly high.) I love the links... enormous value! It may take me several days to check them out, but they are terrific! And thanks to Caitlin Grace for putting up her/your honors thesis. Wonderful reading! Summaries are just right, too. "If it ain't broke, don't fix it." I agree with Jeff Alexander, above. This is terrific as-is. -Tom

Hi everyone!

I'm Tom. I attended UC Berkeley a number of years ago, double-majored in math and philosophy, graduated magna cum laude, and wrote my Honors thesis on the "mind-body" problem, including issues that were motivated by my parallel interest in AI, which I have been passionately interested in all my life.

It has been my conviction since I was a teenager that consciousness is the most interesting mystery to study, and that, understanding how it is realized in the brain -- or emerges therefrom, or whatever it turns out to be -- will also alm... (read more)


I remember reading Jeff Hawkins' On Intelligence 10 or 12 years ago, and found his version of the "one learning algorithm" extremely intriguing. I remember thinking at the time how elegant it was, and the multiple fronts on which it conferred explanatory power. I see why Kurzweil and others like it too.

I find myself, ever since reading Jeff's book (and hearing some of his talks later), sometimes musing -- as I go through my day, noting the patterns in my expectations and my interpretations of the day's events -- about his memory-prediction... (read more)

Why ‘WB’ in “WBE” is not well-defined and why WBE is a worthwhile research paradigm, despite its nearly fatal ambiguities.

Our community (in which I include cognitive neurobiologists, AI researchers, philosophers of mind, research neurologists, behavioral and neuro-zoologists and ethologists, and anyone here) has, for some years, included theorists who present various versions of “extended mind” theories.

Without taking any stances about those theories (and I do have a unique take on those) in this post, I’ll outline some concerns about extended brain issues... (read more)

edited out by author... citation needed, I'll add later

One’s answer depends on how imaginative one wants to get. One situation is if the AI were to realize we had unknowingly trapped it in too deep a local-optimum fitness valley for it to progress upward significantly w/o significant rearchitecting. We might ourselves be trapped in a local optimality bump or depression, and have transferred some resultant handicap to our AI progeny. If it, with computationally enhanced resources, can "understand" indirectly that it is missing something (analogy: we can detect "invisible" celestial objects ... (read more)

I love this question. As it happens, I wrote my honors thesis on the mind-body problem (while I was a philosophy and math double-major at UC Berkeley), and have been passionately interested in consciousness, brains (and also AI) ever since (a couple decades.)

I will try to be self-disciplined and remain as agnostic as I can – by not steering you only toward the people I think are more right (or “less wrong”.) Also, I will resist the tendency to write 10 thousand word answers to questions like this (which in any case would still barely scratch the surface ... (read more)

That sort of confirms my suspicion - that it's a very active topic. And it's not necessarily easy to break into. I was hoping there was a good pop-sci summary book that laid things out real nicely. Like what The Selfish Gene does for evolution. But I read the book Blindsight, and am now reading Metzinger's The Ego Tunnel, just because it seemed incredibly interesting. So who knows how deep this will go for me :)

Yes, many. Go to PubMed and start drilling around; make up some search combinations and you will get immediately onto lots of interesting research tracks. Cognitive neurobiology, systems neurobiology, and the many areas and journals you'll run across will keep you busy. There is some really terrific, amazing work. Enjoy.

I'd also point out that any forecast that relies on our current best guesses about the nature of general intelligence strike me as very unlikely to be usefully accurate--we have a very weak sense of how things will play out, how the specific technologies involved will relate to each other, and (more likely than not) even what they are.

It seems that many tend to agree with you, in that, on page 9 of the Muller - Bostrom survey, I see that 32.5 % of respondents chose "Other method(s) currently completely unknown."

We do have to get what data we c... (read more)


I agree completely with your opening statement, that if we, the human designers, understand how to make human level AI, then it will probably be a very clear and straightforward issue to understand how to make something smarter. An easy example to see is the obvious bottleneck human intellects have with our limited "working" executive memory.

The solutions for lots of problems by us are obviously heavily encumbered by how many things one can keep in mind at "the same time" and see the key connections, all in one act of synthesis. ... (read more)

If I were an ASIC-implemented AI why would I need an ASIC factory? Why wouldn't I just create a software replica of myself on general purpose computing hardware, i.e. become an upload? I know next to nothing about neuroscience, but as far as I can tell, we're a long way from the sort of understanding of human cognition necessary to create an upload, but going from an ASIC to an upload is trivial. I'm also not at all convinced that I want a layover at humanville. I'm not super thrilled by the idea of creating a whole bunch of human level intelligent machines with values that differ widely from my own. That seems functionally equivalent to proposing a mass-breeding program aiming to produce psychologically disturbed humans.

Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on... all the collateral reading and sources you have to monitor, in order to make sure you don't miss something that would be good to add in to the list of links and thinking points.

We are still in the getting-acquainted, discovery phase, as a group and with the book. I am sure it will get more interesting yet as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work.... Tom

Not so much from the reading, or even from any specific comments in the forum -- though I learned a lot from the links people were kind enough to provide.

But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human level AI, and still less, mere "intelligence."

Despite the verbal drawing of the distinction between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized by reading between the lines that GOFAI is still alive and well. Maybe it ... (read more)

There is a way to arrive at this through Damasio's early work, which I don't think is highlighted by saying that emotion is needed for human-level skill. His work in the 1980s was on "convergence zones". These are hypothetical areas in the brain that are auto-associative networks (think a Hopfield network) with bi-directional connections to upstream sensory areas. His notion is that different sensory (and motor? I don't remember now) areas recognize sense-specific patterns (e.g., the sound of a dog barking, the image of a dog, the word "dog", the sound of the word "dog", the movement one would make against an attacking dog), and the pattern these create in the convergence zone represents the concept "dog".

This makes a lot of sense and has a lot of support from studies, but a consequence is that humans don't use logic. A convergence zone is set there, in one physical hunk of brain, with no way to move its activation pattern around in the brain. That means that the brain's representations do not use variables the way logic does. A pattern in a CZ might be represented by the variable X, and could take on different values such as the pattern for "dog". But you can't move that X around in equations or formulas. You would most likely have a hard-wired set of basic logic rules, and the concept "dog" as used on the left-hand side of a rule would be a different concept than the concept "dog" used on the right-hand side of the same rule.

Hence, emotions are important for humans, but this says nothing about whether emotions would be needed for an agent that could use logic.
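For readers unfamiliar with the "think a Hopfield network" reference above, here is a minimal sketch of such an auto-associative network: Hebbian weights store a pattern, and a noisy cue settles back to the stored pattern. This is just the textbook Hopfield construction as an analogy aid, not a model of Damasio's actual convergence zones; the 8-unit "dog" pattern is invented for illustration.

```python
import numpy as np

# Minimal Hopfield-style auto-associative network. Patterns are +/-1
# vectors; Hebbian learning stores them; a corrupted cue is attracted
# back to the nearest stored pattern.

def train(patterns):
    """Hebbian outer-product learning over an array of +/-1 row patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, cue, steps=10):
    """Iterate synchronous sign updates until (hopefully) a fixed point."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store one pattern (standing in for the concept "dog"), then recover it
# from a cue with one flipped unit -- one degraded "sensory" input.
dog = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(dog[None, :])
cue = dog.copy()
cue[0] = -cue[0]
print(np.array_equal(recall(W, cue), dog))  # expected: True
```

The relevant property for the argument above: the memory lives in one fixed weight matrix W. You can complete a pattern in place, but you cannot detach the activation pattern and move it around as a logical variable.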

It may have been a judgement call by the writer (Bostrom) and editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people's (policymakers, decision makers, basically "the Suits" who run the world) attention?

Talk about the money. Most of even educated humanity sees the world in one color (can't say green anymore, but the point is made.)

Try to motivate people about global warming? (", but.... well, it might cost JOBS next month, if we try to... (read more)

Watson's Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.

One could read that comment on a spectrum of charitableness. I will speak for myself, at the risk of ruffling some feathers, but we are all here to bounce ideas around, not toe any party lines, right? To me, Watson's win means very little, almost nothing. Expert systems have been around for years, even decades. I exp... (read more)

An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in >order to obtain a necessary or desirable usefulness?

I had a not unrelated thought as I read Bostrom in chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville?

The idea that we cannot make human-level AGI without automatically opening Pandora's box to superintelligence "without even slowing down at the Humanville station" was suddenly not so obvious to me.

I asked myself after read... (read more)

Because by the time you've managed to solve the problem of making it to humanville, you probably know enough to keep going. There's nothing preventing us from learning how to self-modify. The human situation is strange because evolution is so opaque. We're given a system that no one understands and no one knows how to modify and we're having to reverse engineer the entire system before we can make any improvements. This is much more difficult than upgrading a well-understood system. If we manage to create a human-level AI, someone will probably understand very well how that system works. It will be accessible to a human-level intelligence which means the AI will be able to understand it. This is fundamentally different from the current state of human self-modification.

This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA), and was going to post it up top on the outer layer of LW, based on language.

I recall many years ago, there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of human hand ergonomics in executing movements for the most frequently seen combinations of letters (probably was limited to English, given American parochialism of those days, but still, some language has to be chosen.)
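The "statistically more efficient" claim about keyboard layouts rests on the kind of analysis sketched below: counting the most frequent adjacent letter pairs (digrams) in a text sample. This is an illustrative toy with an invented sample sentence; a real ergonomic study would go on to score each digram by finger travel and hand alternation on a candidate layout.

```python
from collections import Counter

# Count adjacent letter pairs (digrams) in a text sample -- the raw
# statistic behind frequency-optimized keyboard layouts.

def digram_counts(text):
    letters = [c for c in text.lower() if c.isalpha()]
    return Counter(zip(letters, letters[1:]))

sample = "the quick brown fox jumps over the lazy dog the end"
top = digram_counts(sample).most_common(3)
print(top[0][0])  # expected: ('t', 'h') -- 'th' is English's most common digram
```

A layout designer would then place the letters of high-frequency digrams on strong, alternating fingers, which is essentially what the QWERTY alternatives attempted.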

Becau... (read more)

I have dozens, some of them so good I have actually printed hardcopies of the PDFs-- sometimes misplacing the DOIs in the process.

I will get some though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. I have a particularly interesting one (74 pages, but it's a page turner) that I will try to find the original computer record of. Found it and most of them on PubMed.

If we are in a different thread string in a couple days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.


Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.

If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.

A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.

I also worry about the legacy problem... all the critical documents in RSA, PGP, etc, sitting on hard drives, server... (read more)


Thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.

Hi, and thanks for the link. I just read the entire article, which was good for a general news piece, and correspondingly not definitive (therefore, I'd consider it journalistically honest) about the time frame. "...might be decades away..." and "...might not really see them in the 21st century..." come to mind as lower and upper estimates.

I don't want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.

But I still say I have found a sign... (read more)

Part of the danger of reading those articles as someone who is not actively involved in the research is that one gets an overly optimistic impression. They might say they achieved X, without saying they didn't achieve Y and Z. That's not a problem from an academic integrity point of view, since not being able to do Y and Z would be immediately obvious to someone versed in the field. But every new technique comes with a set of tradeoffs, and real progress is much slower than it might seem.

What do you mean by artificial consciousness to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers would be helpful?

The claim wasn't that artifactual consciousness wasn't (likely to be) sufficient for a kind of intelligence, but that they are not coextensive. It might have been clearer to say that consciousness is (closer to being) sufficient for intelligence than intelligence (the way computer scientists often use it) is to being a sufficient condition for consciousness (which it is not at all.)

I needn't have... (read more)

Is there an academic paper that makes that argument? If so, could you reference it?

From what I have read in open-source science and tech journals and news sources, general quantum computing seems to be coming faster than the time frame you had suggested. I wouldn't be surprised to see it as soon as 2024 -- prototypical, alpha or beta testing -- and think it a safe bet by 2034 for wider deployment. As to very widespread adoption, perhaps a bit later; and with respect to efforts to control the tech for security reasons by governments, perhaps also ... later here, earlier there.

Scott Aaronson seems to disagree: [] FTA: "The problem is decoherence... In theory, it ought to be possible to reduce decoherence to a level where error-correction techniques could render its remaining effects insignificant. But experimentalists seem nowhere near that critical level yet... useful quantum computers might still be decades away"
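Aaronson's point about reducing decoherence "to a level where error-correction techniques could render its remaining effects insignificant" is a threshold claim. The simplest way to see why such a threshold exists is the classical analogue below: a repetition code with majority voting. This sketch is classical (quantum codes are more subtle), and the numbers are illustrative, but the shape of the argument carries over: below a critical per-component error rate, redundancy suppresses errors; above it, redundancy makes things worse.

```python
from math import comb

# Probability that a majority vote over n independent noisy copies
# (per-copy error rate p, n odd) gives the wrong answer: the vote fails
# iff more than half the copies are flipped.

def logical_error_rate(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Below the p = 1/2 threshold, redundancy helps (for n=3: 3p^2 - 2p^3 < p):
print(logical_error_rate(0.01, 3) < 0.01)  # expected: True
# Above threshold, redundancy actively hurts:
print(logical_error_rate(0.6, 3) > 0.6)    # expected: True
```

For p = 0.01, the three-copy logical error rate is about 0.0003, a thirty-fold improvement; increasing n drives it down exponentially. The experimental difficulty Aaronson describes is getting physical decoherence below the (much stricter, quantum) analogue of that threshold in the first place.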