No changes that I'd recommend, at all. SPECIAL NOTE: please don't interpret the drop in the number of comments over the last couple of weeks as a drop in interest by forum participants. The issues of these weeks are the heart of the reason for existence of nearly all the rest of the Bostrom book, and many of the auxiliary papers and references we've seen have ultimately also been context for confronting and brainstorming about the issue now at hand. I myself, just as one example, have a number of actual ideas that I've been working on for two weeks, but I've been trying to write them up in white-paper form because they seem a bit longish. Also, I've talked to a couple of people off site who are busy thinking about this as well and have much to say. Perhaps taking a one-week intermission would give some of us a chance to organize our thoughts more efficiently for posting. There is a lot of untapped incubating coming to a head right now in the participants' minds, and we would like a chance to say something about these issues before moving on. ("Still waters run deep," as the cliche goes.) We're at the point of greatest intellectual depth now. I could speak for hours were I commenting orally and trying to be complete -- as opposed to making a skeleton of a comment that would, without context, raise more requests for clarification than be useful. I'm sure I'm not unique. Moderation is fine, though, be assured.

Thanks for posting this link, and for the auxiliary comments. I try to follow these issues, as viewed from this sector of thinkers, pretty closely (the web site Defense One often has some good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often), but I had missed this paper until now. Grateful, as I say, for your posting of this.

Before we continue, one more warning. If you're not already doing most of your thinking at least half-way along the 3 to 4 transition (which I will hereon refer to as reaching 4/3), you will probably also not fully understand what I've written below because that's unfortunately also about how far along you have to be before constructive development theory makes intuitive sense to most people. I know that sounds like an excuse so I can say whatever I want, but before reaching 4/3 people tend to find constructive development theory confusing and probably not useful...

I understand this kind of bind. I am over in the AI - Bostrom forum, which I like very much. As it happens, I have been working on a theory with numerous parts that is connected to, and an extension of, ideas and existing theories drawn from several scientific and philosophical subdisciplines. And I often find myself trying to meaningfully answer questions within the forum with replies that I cannot really make transparent and compelling cases for without having the rest of my theory on the table, to establish context, motivation, justification and so on, because the whole theory (and its supporting rationale) is, size-wise, outside the word limit and scope of any individual post.

Sometimes I have tried, then squeezed it down, and my comments have ended up looking like cramped word salad because of the lack of context -- in the sense I presume you caution applies to your remarks.

So I will have a look at the materials you counsel as prerequisite concepts, before I go on reading the rest of your remarks.

It is "with difficulty" I am not reading further down, because agency is one of my central preoccupations, both in general mind-body considerations, and in AI most particularly (not narrow AI, but human equivalent and above, or "real AI" as I privately think of it.

In general, I have volumes to say about agency, and have been struggling to carve out a meaningful and coherent, scientifically and neuroscientifically informed set of concepts relating to "agency" for some time.

You also refer to "existential" issues of some kind, which can of course mean many things to many people. But this also makes me want to jump in whole hog and read what you are going to say, because I also have been giving detailed consideration to the role of "existential pressure" (and trying to see what it might amount to, in both ordinary and more unconventional terms by trying to see it through different templates -- some more, some less -- humanly phenomenological) in the formation of features of naturalistic minds and sentience (i.e. in biological organisms, the idea being of course to then abstract this to more general systems.)

A nice route, or stepping-stone path, for examining existential pressure principles is to go from the human, to the general terrestrial-biological, to the exobiological (so far as we can reasonably speculate), and then finally to move on to AIs, when we have educated our intuitions a little.

The results emerging from those considerations may or may not suggest what we need to include, at least by suitable analogy, in AIs, to make them "strivers", or "agents", or systems that deliberately do anything, and have "motives" (as opposed to behaviors), desires, and so on...

We must have some theory or theory cluster about what this may or may not contribute to the overall functionality of the AI -- its "understanding" of the world, which is (we hope) to be shared by us -- so it is also front and center among my key preoccupations.

A timely clarifying idea I use frequently in discussing agency -- when reminding people that not everything that exhibits behavior automatically qualifies for agency -- is this: do Google's autopilot cars have "agency"? Do they have "goals"? My view is: "obviously not -- that would be using 'goal' and 'agency' metaphorically."

Going up the ladder of examples, we might consider someone sleepwalking, or a person acting-out a sequence of learned, habituated behaviors while in an absence seizure in epilepsy. Are they exhibiting agency?

The answers might be slightly less clear, and invite more contention, but given the pretty good evidence that absence seizures are not post-ictal failures to remember agency states, but are really automatisms (modern neurologists are remarkably subtle, open-minded to these distinctions, and clever in setting up scenarios which discriminate the difference satisfactorily), it seems, here too, that lack of attention, intention, praxis -- i.e. missing agency -- is the most accurate characterization.

Indeed, it is apparently satisfactory enough for experts who understand the phenomena that, in the contemporary legal environment in which "insanity"-style defenses are out of fashion with judges and the public, a veridical establishment of sleepwalking and/or absence-seizure status (different cases, of course) while committing murder or manslaughter has nonetheless, even in recent years, gotten some people "innocent" verdicts.

In short, most neurologists who are not in the grip of any dictums of behavioristic apologetics would say -- here too -- no agency, though information processing behavior occurred.

Indeed, in the case of absence seizures, we might further ask about metacognition vs. just cognition. But experimentally this is also well understood. Metacognition, or metaconsciousness, or self-awareness, are all, by a large consensus, now understood as correlated with "Default Mode Network" activity.

Absence seizures under witnessed, lab conditions are not just departures from DMN activity. Indeed, all consciously, intentionally directed activity of any complexity that involves conscious attention on external activities or situations involves shutting down the DMN systems. (Look up Default Mode Network on PubMed, if you want more.)

So absence-seizure behavior, which can be very complex and can involve driving across town, etc., is not agency misplaced or mislaid. It is actually unconscious, "missing-agent" automatism: a brain in a temporary zombie state, the way philosophers of mind use the term zombie.

But back to the autopilot cars, or autopilot Boeing 777s, automatic anythings... even the ubiquitous anti-virus daemons running in the background which are automatically "watching" to intercept malware attacks. It seems clear that, while some of the language of agency might be convenient shorthand, it is not literally true.

Rather, these cases are those of mere mechanical, Newtonian-level, deterministic causation from conditionally activated, preprogrammed behavior sequences. The activation conditions are deterministic. The causal chains thereby activated are deterministic, just as the interrupt service routines in an ISR jump table are all deterministic.
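
(A minimal sketch of what I mean by "conditionally activated preprogrammed behavior sequences" -- purely illustrative Python, with made-up handler names, in the spirit of an ISR jump table: each "decision" is a table lookup followed by a fixed routine, with no agent anywhere in the loop.)

```python
# Purely illustrative "behavior dispatch table" in the spirit of an ISR jump
# table. Each condition maps to a fixed, preprogrammed routine; nothing here
# is an agent with goals -- it is lookup plus execution.

def brake_for_obstacle():
    return "apply brakes"

def quarantine_file():
    return "move file to quarantine"

def resume_cruise():
    return "hold speed"

# The "jump table": condition -> routine. Activation is fully determined by
# which condition fires; the routines themselves are fixed in advance.
DISPATCH = {
    "obstacle_detected": brake_for_obstacle,
    "malware_signature_match": quarantine_file,
    "clear_road": resume_cruise,
}

def step(condition: str) -> str:
    # Deterministic: same condition in, same behavior out, every time.
    handler = DISPATCH.get(condition, resume_cruise)
    return handler()

if __name__ == "__main__":
    for event in ["clear_road", "obstacle_detected", "malware_signature_match"]:
        print(event, "->", step(event))
```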

Anyway... agency is intimately at the heart of AGI-style AI, and we need to be as attentive and rigorous as possible about using the term literally vs. metaphorically.

I will check out your references and see if I have anything useful to say after I look at what you mention.

I didn't exactly say that, or at least, didn't intend to exactly say that. It's correct of you to ask for that clarification.

When I say "vindicated the theory", that was, admittedly, pretty vague.

What I should have said was that the recent experiments removed what has been, more or less, statistically the most common and continuing objection to the theory, by showing that quantum effects in microtubules, under the kind of environmental conditions that are relevant, can indeed be maintained long enough for quantum processes to "run their course" in a manner that, according to Hameroff and Penrose, makes a difference that can propagate causally to a level that is of significance to the organism.

Now, as to "decision making". I am honestly NOT trying to be coy here, but that is not entirely a transparent phrase. I would have to take a couple thousand words to unpack that (not obfuscate, but unpack), and depending on this and that, and which sorts of decisions (conscious or preconscious, highly attended or habituated and automatic), the answer could be yes or no... that is, even given that consciousness "lights up" under the influence of microtubule-dependent processes like Orch OR suggests -- admittedly something that, per se, is a further condition, for which quantum coherence within the microtubule regime is a necessary but not sufficient condition.

But the latter is plausible to many people, given a pile of other suggestive evidence. The deal breaker has always been whether quantum coherence can or cannot be maintained in the stated environs.

Orch OR is a very multifaceted theory, as you know, and I should not have said "vindicated" without very careful qualification. Removing a stumbling block is not proof of the truth of a theory with so many moving parts.

I do think, as a physiological theory of brain function, it has a lot of positives (some from vectors of increasing plausibility coming in from other directions, theorists and experiments) and the removal of the most commonly cited objection, on the basis of which many people have claimed Orch OR is a non-starter, is a pretty big deal.

Hameroff is not a wild-eyed speculator (and I am not suggesting that you are claiming he is.)

I find him interesting and worthy of close attention, in part because he has accumulated an enormous amount of evidence for microtubule effects, and he knows the math, and presents it regularly.

I first read his Biomolecular Mind hardback book back in the early 90's; he actually wrote it in the late 80's, at which time he had already amassed quite a bit of empirical study regarding the role of microtubules in neurons, and in creatures without neurons, possessing only microtubules, that exhibit intelligent behavior.

Other experiments in various quarters over quite a few recent years (though there are still neurobiologists who disagree) have on the whole seemed to validate Hameroff's claim that it is quantum effects -- not "ordinary" synapse-level effects that can be described without use of the quantum level of description -- that are responsible for anaesthesia's effects on consciousness in living brains.

Again, not a proof of Orch OR, but an indication that Hameroff is, perhaps, on to some kind of right track.

I do think that evidence is accumulating, from what I have seen in PubMed and elsewhere, that microtubule effects at least partially modulate dendritic computations, and seem to mediate the rapid remodeling of the dendritic tree (spines come and go with amazing rapidity), making it likely that the "integrate and fire" mechanism involves microtubule computation, at least in some cases.

I have seen, for example, experiments that administer microtubule-corrupting enzymes to some neurons, but not to controls, and observe dendritic tree behavior. Microtubules are in the loop in learning, attention, etc. Quantum effects in MTs... the evidence seems to grow by the month.

But, to your ending question, I would have to say what I said... which amounts to "sometimes yes, sometimes no," and in the 'yes' cases, not necessarily for the reasons that Hameroff thinks, but maybe partly, and maybe for a hybrid of additional reasons. Stapp's views have a role to play here, I think, as well.

One of my "wish list" items would be to take SOME of Hameroff's ideas and ask Stapp about them, and vice versa, in interviews, after carefully preparing questions and submitting them in advance. I have thought about how the two theories might compliment each other, or which parts of each might be independently verifyable and could be combined in a rationally coherent fashion that has some independent conceptual motivation (i.e. is other than ad hoc.)

I am in the process of preparing and writing a lengthy technical question for Stapp, to clarify (and see what he thinks of a possible extension of) his theory of the relevance of the quantum Zeno effect.

I thought of a way the quantum Zeno effect, the way Stapp conceives of it, might be a way to resolve (with caveats) the simulation argument... i.e. to assess whether we are at the bottom level in the hierarchy, or are up on a sim. At least it would add another stipulation to the overall argument, which is significant in itself.

But that is another story. I have said enough to get me in trouble already, for a Friday night (grin).

Hi. Yes, for the Kickstarter option, that seems to be almost a requirement. People have to see what they are being asked to invest in.

The Kickstarter option is somewhat my second-choice plan, or I'd be further along on that already. I have several things going on that are pulling me in different directions.

To expand just a bit on the evolution of my YouTube idea: originally -- a couple months before I recognized more poignantly the value to the HLAI R & D community of doing well-designed, issue-sophisticated, genuinely useful (to other than a naïve audience) interviews with other thinkers and researchers -- I had already decided to create a YouTube (hereafter, 'YT') channel of my own. This one will have a different, though complementary, emphasis.

This (first) YT channel will present a concentrated video course (perhaps 20 to 30 presentations in the plan I have, with more to be grown in as audience demand or reaction dictates). The course presentations -- with myself at the whiteboard, graphics, video clips, whatever can help make it both enjoyable and more comprehensible -- will consist of what are essential ideas and concepts that are not only of use to people working on creating HLAI (and above), but are so important that they constitute essential background without which, I believe, people creating HLAI are at least partly floundering in the dark.

The value added by this course comes from several things. I do have a gift for exposition. My time as a tutor and writer has demonstrated to me (from my audiences) that I have a good talent for playing my own devil's advocate, listening and watching through audience ears and eyes, and getting inside the intuitions likely to occur in the listener. When I was a math tutor in college, I always did that from the outset, and was always complimented for it. My experience with studying this for decades and debating it, metabolizing all the useful points of view on the issues that I have studied -- while always trying to push forward to find what is really true -- allows me to gather many perspectives together, anticipate the standard objections or misunderstandings, and help people with less experience navigate the issues. I have an unusual mix of accumulated areas of expertise -- software development, neuroscience, philosophy, physics -- which contributes to the ability to see and synthesize productive paths that might be (and have been) missed elsewhere.

And perspective: enough time seeing intellectual fads come and go to recognize how they worked, even "before my time." Unless one sees -- and can critique or free oneself from -- contextual assumptions, one is likely to be entrained within conceptual externalities that define the universe of discourse, possibly pruning away preemptively any chance for genuine progress and novel ideas. Einstein, Crick and Watson, Heisenberg and Bohr, all were able to think new thoughts and entertain new possibilities.

As someone just posted on Less Wrong: you have a certain number of weirdness points, spend them wisely. People in the grips of an intellectual trance who don't even know they are pruning away anything cannot muster either the courage, or the creativity, to have any weirdness points to spend.

For example: apparently, very few people understand the context and intellectual climate... the formative "conceptual externalities" that permeated the intellectual ether at the time Turing proposed his "imitation game."

I alluded to some of these contextual elements of what – then – was the intellectual culture, without providing any kind of exposition (in other words, just making the claim in passing), in my dual message to you and Luke, earlier today (Friday.)

That kind of thing -- were it to be explained rigorously, articulately, engagingly -- is a mild eye-opening moment to a lot of people (I have explained it before to people who are very sure of themselves, who went away changed by the knowledge.) I can open the door to questioning what seems like such a "reasonable dogma", i.e. that an "imitation game" is all there is, and all there rationally could be, to the question of, and criteria for, human-equivalent mentality.

Neuroscience, as I wrote in the Bostrom forum a couple weeks ago (perhaps a bit too stridently in tone, and not to my best credit, in that case) is no longer held in the spell of the dogma that being “rational” and “scientific” means banishing consciousness from our investigation.

Neither should we be. Further, I am convinced that if we dig a little deeper, we CAN come up with a replacement for the Turing test (but first we have to be willing to look!) … some difference that makes a difference, and actually develop some (at least probabilistic) test(s) for whether a system that behaves intelligently, has, in addition, consciousness.

So, this video course will be a combination of selected topics in scientific intellectual history that are essential to understand in order to see where we have come from, and will then develop current and new ideas, to see where we might go.

I have a developing theory with elements that seem very promising. It is more than elements, it is becoming, by degrees, a system of related ideas that fit together perfectly, are partly based on accepted scientific results, and are partly extensions that a strong, rational case can be made for.

What is becoming interesting and exciting to me about the extensions, is that sometime during the last year (and I work on this every day, unless I am exhausted from a previous day and need to rest), the individual insights, which were exciting enough individually, and independently arguable, are starting to reveal a systematic cluster of concepts that all fit together.

This is extremely exciting, even a little scary at times. But suddenly, it is as if a lifetime of work and piecemeal study, with a new insight here, another insight there, a possible route of investigation elsewhere... all are fitting into a mosaic.

So, to begin with the point I began with, my time is pulling me in various directions. I am in the Bostrom forum, but on days that I am hot on the scent of another layer of this theory that is being born, I have to follow that. I do a lot of dictation when the ideas are coming quickly.

It is, of course, very complicated. But it will also be quite explainable, with systematic, orderly presentation.

So, that was the original plan for my own YT channel. It was to begin with essential intellectual history in physics, philosophy of mind, early AI, language comprehension, knowledge representation, formal semantics.... and that ball of interrelated concepts that set, to an extent, either correct or incorrect boundary conditions on what a theory has to look like.

Then my intent was to carefully present and argue for (and play devil's advocate against) my new insights, one by one, then as a system.

I don't know how it will turn out, or whether I will suddenly discover a dead end. But assuming no dead end, I want it out there where interested theorists can see it and judge it on its merits, up or down, or modify it.

I am going to run out of word allowance any moment. But it was after planning this that I thought of the opportunity to do interviews of other thinkers for possibly someone else's YT channel. Both projects are obviously compatible. More later as interest dictates; I have to make dinner. Best, Tom NxGenSentience

Same question as Luke's. I probably would have jumped at it. I have a standing offer to make hi-def (1080) video interviews, documentaries, etc., and competent, penetrating Q and A sessions, with people like Bostrom, Google-ites setting up the AI laboratories, and other vibrant, creative, contemporary AI-relevant players.

I have knowledge of AI, general comp sci, deep and broad neuroscience, the mind-body problem (philosophically understood in GREAT detail -- my college honors thesis at UCB was on that) and deep, detailed knowledge of all the big neurophilosophy players' theories.

These players include but are not limited to Dennett, Searle, Dreyfus, Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.

I have also closely followed Stuart Hameroff and Roger Penrose's "Orch OR" theory -- which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu) -- an objection that Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web) and others have mathematically dealt with for years, but which has lacked -- until just this last year -- empirical support.

Said support is now there -- and with some fanfare, I might add, in the niche scientific and philosophical mind-body and AI-theoretic community that follows this -- and vindicates core aspects of this theory (although it doesn't, of course, confirm the Platonic qualia aspect).

Worth digressing, though, for those who see this... just as a physiological, quantum computational-theoretic account of how the brain does what it does -- particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap junction transience, etc.), which is by consensus the locus of the bulk of the neuronal integrate-and-fire decision making -- this Orch OR theory is amazing in its implications. (Essentially it squares the estimate of the entire synaptic-level information processing of the brain as a whole, to begin with. I think this is destined to be a Nobel-prize-level theory eventually.)

I know Hameroff as a former first-name-basis contact, and could, though it's been a few years, rapidly trigger his memory and get an on-tape detailed interview with him at any time.

Point is... I have a standing offer to create detailed and theoretically competent -- thus relevant -- interviews, discussions, and documentaries, edit them professionally, make them available on DVD, or transcode them for someone's branded YouTube channel (like MIRI, for example).

No one has taken me up on that yet, either. I have a six-thousand-dollar digital camera and professional editing software to do this with, but more importantly, I have 25 years of detailed study I can draw upon to make interviews that COUNT, are unique, and relevant.

No takers yet. So maybe I will go Kickstarter and do them myself, on my own branded YouTube channel. It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I'd also like to cover the AAAI Turing test conference in January in Texas, and do this, but I need sponsorship at this point, because I am not independently wealthy.)

Same question as Luke's. I probably would have jumped at it, if only to make seed money to sponsor other useful projects, like the following.

I have a standing offer to make hi-def (1080) video interviews, documentaries, etc., and competent, penetrating Q and A sessions and documentaries with key, relevant players and theoreticians in AI and related work. This includes individual thinkers, labs, Google's AI work; the list is endless.

I have knowledge of AI, general comp sci, considerable knowledge of neuroscience, the mind-body problem (philosophically understood in GREAT detail -- my college honors thesis at UCB was on that) and deep, long-term evolutionary knowledge of all the big neurophilosophy players' theories.

These players include but are not limited to Dennett, Searle, Dreyfus, Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.

I have also closely followed Stuart Hameroff and Roger Penrose's "Orch OR" theory -- which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu) -- an objection that Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web) and others have mathematically dealt with for years, but which has lacked -- until just this last year -- empirical support.

Said support is now there -- and with some fanfare, I might add, within the niche scientific and philosophical mind-body and AI-theoretic community that follows this work. The experiments vindicate core aspects of this theory (although they do not confirm the Platonic qualia aspect).

Worth digressing, though, for those who see this message... so I will mention that, just as a physiological, quantum computational-theoretic account of how the brain does what it does -- particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap junction transience, etc.), the dendritic tree being by consensus the neuronal locus of the bulk of neurons' integrate-and-fire decision making -- this Orch OR theory is amazing in its implications. (Essentially it squares the aggregate estimate of the synaptic-level information processing of the brain as a whole, for starters! I think this is destined to be a Nobel-prize-level theory eventually.)

I knew Hameroff on a first-name basis, and could, though it's been a couple years, rapidly trigger his memory of who I am -- he held me in good stead -- and I could get an on-tape detailed interview with him at any time.

Point is... I have a standing offer to create detailed and theoretically competent -- thus relevant -- interviews, discussions, and documentaries, edit them professionally, make them available on DVD, or transcode them for someone's branded YouTube channel (like MIRI, for example).

I got this idea when I was watching an early interview at Google with Kurzweil, by some 2x-year-old bright-eyed Google-ite employee, who was asking the most shallow, immature, clueless questions! (I thought at the time -- "jeeze, is this the best they can find to plumb Kurzweil's thinking on the future of AI at Google, or in general?")

Anyway, no one has taken me up on that offer to create what could be terrific documentary-interviews, either. I have a six-thousand-dollar digital camera and professional editing software to do this with, not some pocket camera.

But more importantly, I have 25 years of detailed study of the mind body problem and AI, and I can draw upon that to make interviews that COUNT, are unique, and relevant, and unparalleled.

AI is my life's work (that, and the co-entailed problem of mind-body theory generally). I have been working hard to supplant the Turing test with something that tests for consciousness, instead of relying on the positivistic denial of the existence of consciousness qua consciousness, beyond behavior. That test came out of an intellectual soil that was dominated by positivism, which in turn was based on a mistaken and defective attempt to metabolize the Newtonian-to-quantum physical transition.

It's partly based on a scientific ontology that is fundamentally false, and has been demonstrably so for 100 years -- Newton's deterministic clockwork-universe model, which has no room for "consciousness", only physical behavior -- and partly based on an incomplete attempt to intellectually metabolize the true lessons of quantum theory (please see Henry Stapp's papers, on his "stapp files" LBL website, for a crystal-clear set of expositions of this point).

No takers yet. So maybe I will have to go Kickstarter too, and do these documentaries myself, on my own branded YouTube channel. (It will be doing a great service to countless thinkers to have GOOD Q and A with their peers. I am not without my own original questions about their theories that I would like to ask, as well.)

It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I'd also like to cover the AAAI Turing test conference in January in Texas, and do this, but I need sponsorship at this point, because I am not independently wealthy. I am forming a general theory from which I think the keynote speaker's Turing Test 2, "Lovelace 2.0", might actually be a derivable correlate.)

It's nice to hear a quote from Wittgenstein. I hope we can get around to discussing the deeper meaning of this, which applies to all kinds of things... most especially, the process by which each kind of creature -- bats, fish, homo sapiens, and potential embodied artifactual (n.1) minds (and also minds not embodied in the contemporaneously most often used sense of the term -- Watson was not embodied in that sense) -- constructs its own ontology (or ought to, by virtue of being imbued with the right sort of architecture).

That latter sense, and the incommensurability of competing ontologies in competing creatures (where 'creature' is defined as a hybrid, an N-tuple, of cultural legacy constructs, endemic evolutionarily bequeathed physiological sensorium, its individual autobiographical experience...) -- though not (in my view, in the theory I am developing) opaque to enlightened translatability, since the conceptual scaffolding for translation involves the nature of, purpose of, and boundaries, both logical and temporal, of the "specious present", the quantum Zeno effect, and other considerations, so it is more subtle than meets the eye -- is more of what Wittgenstein was thinking about, considering Kant's answer to skepticism, and lots of other issues.

Your more straightforward point bears merit, however. Most of us have spent a good deal of our lives battling not issue opacity, so much as human opacity to new, expanded, revised, or unconventional ideas.

Note 1: By the way, I occasionally write 'artifactual' as opposed to 'artificial' because of the sense in which, as products of nature, everything we do -- including building AIs -- is, ipso facto, a product of nature, and hence 'artificial' is an adjective we should be careful about.

People do not behave as if we have utilities given by a particular numerical function that collapses all of their hopes and goals into one number, and machines need not do it that way, either.

I think this point is well said, and completely correct.


Why not also think about making other kinds of systems?

An AGI could have a vast array of hedges, controls, limitations, conflicting tendencies and tropisms which frequently cancel each other out and prevent dangerous action.

The book does scratch the surface on these issues, but it is not all about fail-safe mind design and managed roll-out. We can develop a whole literature on those topics.

I agree. I find myself continually wanting to bring up issues in the latter class of issues... so copiously so that frequently it feels like I am trying to redesign our forum topic. So I have deleted numerous posts-in-progress that fall into that category. I guess those of us who have ideas about fail-safe mind design that are more subtle -- or, to put it more neutrally, that do not fit the running paradigm in which the universe of discourse is that of transparent, low-dimensional (low-dimensional function range space, not low-dimensional function domain space) utility functions -- need to start writing our own white papers.
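
(To make the contrast concrete, here is a toy sketch -- my own illustration, not Bostrom's or MIRI's framework, with made-up action names and scores -- of a single collapsed utility function versus a chooser with several separate objectives and hard veto constraints that can cancel an action outright.)

```python
# Toy contrast, purely illustrative: a scalar "collapsed" utility chooser
# versus a chooser with several separate objectives plus hard veto
# constraints that can rule actions out entirely.

from typing import Callable, Dict, List

Action = str

def scalar_choice(actions: List[Action], utility: Callable[[Action], float]) -> Action:
    # Everything the system "cares about" has been collapsed into one number.
    return max(actions, key=utility)

def constrained_choice(
    actions: List[Action],
    objectives: Dict[str, Callable[[Action], float]],
    vetoes: List[Callable[[Action], bool]],
) -> Action:
    # Hard constraints first: a vetoed action is removed outright,
    # no matter how well it scores on any objective.
    admissible = [a for a in actions if not any(veto(a) for veto in vetoes)]
    if not admissible:
        return "do_nothing"  # conservative default when every option is vetoed
    # Then a crude compromise among separate objectives (here, maximize the
    # worst objective score) rather than summing them into a single scalar.
    return max(admissible, key=lambda a: min(f(a) for f in objectives.values()))

if __name__ == "__main__":
    actions = ["aggressive_plan", "cautious_plan", "do_nothing"]
    # Hypothetical scores, chosen only for the example.
    speed = {"aggressive_plan": 1.0, "cautious_plan": 0.5, "do_nothing": 0.0}
    safety = {"aggressive_plan": 0.1, "cautious_plan": 0.8, "do_nothing": 1.0}

    print(scalar_choice(actions, utility=lambda a: speed[a] + safety[a]))
    print(constrained_choice(
        actions,
        objectives={"speed": lambda a: speed[a], "safety": lambda a: safety[a]},
        vetoes=[lambda a: safety[a] < 0.3],  # anything too unsafe is simply off the table
    ))
```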

When I hear that Bostrom claims only 7 people in the world are thinking full time and productively about (in essence) fail-safe mind design, or that someone at MIRI wrote that only FIVE people are doing so (though in the latter case, the author of that remark did say that there might be others doing this kind of work "on the margin", whatever that means), I am shocked.

It's hard to believe, for one thing. Though, the people making those statements must have good reasons for doing so.

But maybe the derivation of such low numbers would be more understandable if one stipulates that "work on the problem" is to be counted if and only if candidate people belong to the equivalence class of thinkers restricting their approach to this ONE, very narrow conceptual and computational vocabulary.

That kind of utility-function-based discussion (remember when they were called 'heuristics' in the assigned projects in our first AI courses?) has its value, but it's a tiny slice of the possible conceptual, logical and design pie... about like looking at the night sky through a soda straw. If we restrict ourselves to such approaches, no wonder people think it will take 50 or 100 years to do AI of interest.

Outside of the culture of collapsing utility functions and the like, I see lots of smart (often highly mathematical, so they count as serious) papers on whole-brain chaotic resonant neurodynamics; new approaches to the foundations of mental health issues and disorders of subjective empathy (even some application of deviant neurodynamics to deviant cohort value theory, and defective cohort "theory of mind" -- in the neuropsychiatric and mirror neuron sense) that are grounded in, say, pathologies with transient Default Mode Network coupling... and disturbances of phase-coupled equilibria across the brain.

If we run out of our own ideas to use from scratch (which I don't think is at all the case ... as your post might suggest, we have barely scratched the surface), then we can go have a look at current neurology and neurobiology, where people are not at all shy about looking for "information processing" mechanisms underlying complex personality traits, even underlying value and aesthetic judgements.

I saw a visual-system neuroscientist's paper the other day offering a theory of why abstract (i.e. non-representational) art is so intriguing to (not all, but some) human brains. It was a multi-layered paper, discussing some transiently coupled neurodynamical mechanisms of vision (the authors' specialties), some reward-system neuromodulator concepts, and some traditional concepts expressed at a phenomenological, psychological level of description. An ambitious paper, yes!

But ambition is good. I keep saying, we can't expect to do real AI on the cheap.

A few hours or days reading such papers is good fertilizer, even if we do not seek to translate wetware brain research in any direct way (like copying "algorithms" from natural brains) into our goal, which presumably is to do dryware mind design -- and to do it in a way where we choose our own functional limits, rather than have nature's 4.5 billion years of accidents choose boundary conditions on substrate platforms for us.

Of course, not everyone is interested in doing this. I HAVE learned in this forum that "AI" is a "big tent". Lots of uses exist for narrow AI, in thousands of industries and fields. Thousands of narrow AI systems are already in play.

But, really... aren't most of us interested in this topic because we want the more ambitious result?

Bostrom says "we will not be concerned with the metaphysics of mind..." and "...not concern ourselves whether these entities have genuine self-awareness...."

Well, I guess we won't be BUILDING real minds anytime soon, then. One can hardly expect to create that which one won't even openly discuss. Bostrom is writing and speaking using the language of "agency" and "goals" and "motivational sets", but he is only using those terms metaphorically.

Unless, that is, everyone else in here (other than me) actually is prepared to deny that we -- who spawned those concepts to describe rich, conscious, intentionally entrained features of the lives of self-aware, genuinely conscious creatures -- are different, i.e., to deny that we are conscious and self-aware.

No one here needs a lesson in intellectual history. We all know that people did deny that, back in the behaviorism era. (I have studied the reasons -- philosophical and cultural -- and continue to uncover, in great detail, the mistaken assumptions out of which that intellectual fad grew.)

Only if we do THAT again will we NOT be using "agent" metaphorically when we apply it to machines with no real consciousness, because ex hypothesi WE'd possess no minds either, in the sense we all know we do possess, as conscious humans.

We'd THEN be using it ("agent", "goal", "motive"... the whole equivalence class of related nouns and predicates) in the same sense for both classes of entities (ourselves, and machines with no "awareness", where the latter is defined as anything other than public, 3rd-person observable behavior).

Only in this case would it not be a metaphor to use 'agent', 'motive', etc. in describing intelligent (but not conscious) machines, which evidently is the astringent conceptual model within which Bostrom wishes to frame HLAI -- proscribing considerations, as he does, of whether they are genuinely self-aware.

But, well, I always thought that that excessively positivistic attitude had more than a little something to do with the "AI winter" (just as it is widely acknowledged to have been responsible for the neuroscience winter that paralleled it).

Yet neuroscientists are not embarrassed to now say, "That was a MISTAKE, and -- fortunately -- we are over it. We wasted some good years, but are no longer wasting time denying the existence of consciousness, the very thing that makes the brain interesting and so full of fundamental scientific interest. And now, the race is on to understand how the brain creates real mental states."

NEUROSCIENCE has gotten over that problem with discussing mental states qua mental states, clearly.

And this is one of the most striking about-faces in the modern intellectual history of science.

So, back to us. What's wrong with computer science? Either AI-ers KNOW that real consciousness exists, just like neuroscientists do, and AI-ers just don't give a hoot about making machines that are actually conscious.

Or, AI-ers are afraid of tackling a problem that is a little more interesting, deeper, and harder (a challenge that gets thousands of neuroscientists and neurophilosophers up in the morning).

I hope the latter is not true, because I think the depth and possibilities of the real thing -- AI with consciousness -- are what gives it all the attraction (and holds, in the end, for reasons I won't attempt to describe in a short post, the only possibility of making the things friendly, if not beneficent).

Isn't that what gives AI its real interest? Otherwise, why not just write business software?

Could it be that Bostrom is throwing out the baby with the bathwater, when he stipulates that the discussion, as he frames it, can be had (and meaningful progress made), without the interlocutors (us) being concerned about whether AIs have genuine self awareness, etc?

My general problem with "utilitarianism" is that it's sort of like Douglas Adams' "42." An answer of the wrong type to a difficult question. Of course we should maximize, that is a useful ingredient of the answer, but is not the only (or the most interesting) ingredient.

Taking off from the end of that point, I might add (though I think this was probably part of your total point here, about "the most interesting" ingredient) that people sometimes forget that utilitarianism is not itself a theory about what is normatively desirable, or at least not much of one. For Bentham-style "greatest good for the greatest number" to have any meaning, it has to be supplemented with a view of what property, state of being, action type, etc., counts as a "good" thing to begin with. Once this is defined, we can then go on to maximize that -- seeking to achieve the most of it, for the most people (or relevant entities).

But greatest good for the greatest number means nothing until we figure out a theory of normativity, or meta-normativity that can be instantiated across specific, varying situations and scenarios.

IF the "good" is maximizing simple total body weight, then adding up the body weight of all people in possible world A, vs in possible world B, etc, will allow us a utilitarian decision among possible worlds.

IF the "good" were fitness, or mental healty, or educational achievement... we use the same calculus, but the target property is obviously different.

Utilitarianism is sometimes a person's default answer, until you remind them that this is not an answer at all about what is good. It is just an implementation standard for how that good is to be divided up. Kind of a trivial point, I guess, but worth reminding ourselves from time to time that utilitarianism is not a theory of what is actually good, but of how that might be distributed, if it admits of scarcity.
