This is great work. Glad that folks here take these Ryle-influenced ideas seriously and understand what it means for a putative problem about mind or agency to dissolve. Bravo.

To take the next (and, I think, final) step towards dissolution, I would recommend reading and reacting to a 1998 paper by John McDowell called "The Content of Perceptual Experience", which is critical of Dennett's view and even more Rylean and Wittgensteinian in its spirit (Gilbert Ryle was one of Dennett's teachers).

I think it's the closest you'll get to demystification and "de-confusion" of psychological and agential concepts. Understanding the difference between personal and subpersonal states, explanations, etc., as well as the difference between causal and constitutive explanations, is essential to avoiding confusion when talking about what agency is and what enables agents to be what they are. After enough time reading McDowell, pretty much all of these questions about the nature of agency, mind, etc. lose their grip and you can get on with doing sub-personal causal investigation of the mechanisms which (contingently) enable psychology and agency (here on earth, in humans and similar physical systems).

For what it's worth, one thing that McDowell does not address (and doesn't need to for his criticism to work) but which is nonetheless essential to Dennett's theory is the idea that facts about design in organisms can reduce to facts about natural selection. To understand why this can't be done so easily, check out the argument from drift. The sheer possibility of evolution by drift (non-selective forces) confounds any purely statistical reduction of fitness facts to frequency facts. Despite the appearance of consensus, it's not at all obvious that the core concepts that define biology have been explained in terms of (reduced to) facts about maths, physics, and chemistry.
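
To make the statistical point vivid (this is just my own illustrative sketch, not anything from McDowell or the drift literature), here is a minimal neutral Wright-Fisher simulation in Python; the function name and parameters are invented for the example. There is no fitness term anywhere in the model, yet allele frequencies wander and often fix, which is why a frequency trajectory taken by itself underdetermines whether selection or drift produced it.

```python
import random

def neutral_wright_fisher(pop_size=100, p0=0.5, generations=200, seed=0):
    """Minimal neutral Wright-Fisher model: two alleles and NO fitness
    differences. Each generation resamples 2N alleles from the current
    frequency, so every change is pure sampling noise (drift)."""
    random.seed(seed)
    p, trajectory = p0, [p0]
    for _ in range(generations):
        # Binomial draw of 2N alleles at probability p -- no selection term.
        count = sum(random.random() < p for _ in range(2 * pop_size))
        p = count / (2 * pop_size)
        trajectory.append(p)
    return trajectory

if __name__ == "__main__":
    for s in range(3):
        traj = neutral_wright_fisher(seed=s)
        print(f"run {s}: start {traj[0]:.2f} -> end {traj[-1]:.2f}")
```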

Here's a link to Roberta Millstein's SEP entry on drift (she believes drift can be theoretically and empirically distinguished from selection, so it's also worth reading some folks who think it can't be).

https://plato.stanford.edu/entries/genetic-drift/

Here's the jstor link to the McDowell paper:

https://www.jstor.org/stable/2219740

Here are some summary papers of the McDowell-Dennett debate:

https://philarchive.org/archive/DRATPD-2v1

https://mlagflup.files.wordpress.com/2009/08/sofia-miguens-c-mlag-31.pdf

Yeah, I agree with a lot of this. Especially:

If you want to have some fun, you can reach for Rice's theorem (basically following from Turing's halting problem) which shows that you can't logically infer any semantic properties whatsoever from the code of an undocumented computer program. Various existing property inferrer groups like hackers and biologists will nod along and then go back to poking the opaque mystery blobs with various clever implements and taking copious notes of what they do when poked, even though full logical closure is not available.

I take it that this is how most progress in artificial intelligence, neuroscience, and cogsci has proceeded (and will continue to proceed). My caution - and my whole point in wading in here - is just that we shouldn't expect progress by trying to come up with a better theory of mind or agency, even with more sophisticated explanatory tools.
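
For anyone who wants the Rice's-theorem point in the quote made concrete, here is a purely illustrative Python sketch of the standard reduction (the theorem concerns non-trivial semantic properties). Every name in it (semantic_oracle, run_program, halting_decider) is hypothetical and invented for the example; the whole point is that the oracle cannot actually be implemented.

```python
def semantic_oracle(program_source: str) -> bool:
    """Hypothetical total decider for one non-trivial semantic property,
    e.g. 'this program eventually prints hello'. Rice's theorem says no
    such decider can exist, which is what the reduction below shows."""
    raise NotImplementedError("no such decider can exist")

def halting_decider(program_source: str, input_data: str) -> bool:
    """If semantic_oracle existed, we could decide the halting problem:
    wrap the target program so that it exhibits the property exactly
    when it halts on the given input."""
    wrapper = (
        "def wrapped():\n"
        f"    run_program({program_source!r}, {input_data!r})  # hypothetical runner\n"
        "    print('hello')  # reached iff the call above halts\n"
    )
    # Asking the oracle about `wrapper` answers the halting question --
    # but halting is undecidable, so the oracle is impossible.
    return semantic_oracle(wrapper)
```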

I think it's totally coherent - and even likely - that future artificial agents (generally intelligent or not) will be created without a general theory of mind or action.

In this scenario, you get a complete causal understanding of the mechanisms that enable agents to become minded and intentionally active, but you still don't know what that agency or intelligence consists in beyond our simple, non-reductive folk-psychological explanations. A lot of folks in this scenario would be inclined to say "who cares, we got the gears-level understanding," and I guess the only people who would care would be those who wanted to use the reductive causal story to tell us what it means to be minded. The philosophers I admire (John McDowell is the best example) appreciate the difference between causal and constitutive explanations when it comes to facts about minds and agents, and urge that progress in the sciences is hindered by running these together. They see no obstacle to technical progress in neuroscientific understanding or artificial intelligence; they just see themselves as sorting out what these disciplines are and are not about. They don't think they're in the business of giving constitutive explanations of what minds and agents are; rather, they're in the business of discovering what enables minds and agents to do their minded and agential work.

I think this distinction is apparent even with basic biological concepts like life. Biology can give us a complete account of the gears that enable life to work as it does without shedding any light on what makes it the case that something is alive, functioning, fit, goal-directed, successful, etc. But that's not a problem at all if you think the purpose of biology is just to enable better medicine and engineering (like making artificial life forms or agents). As for a task like "given a region of physical space, identify whether there's an agent there," I don't think we should expect any theory, philosophical or otherwise, to yield solutions to that problem. I'm sure we can build artificial systems that can do it reliably (we probably already have some), but it won't come by way of understanding what makes an agent an agent.

Insofar as one hopes to advance certain engineering projects by "sorting out fundamental confusions about agency," I just wanted to offer (1) that there's a rich literature in contemporary philosophy, continuous with the sciences, about different approaches to doing just that; and (2) that there are interesting arguments in this literature which aim to demonstrate that any causal-historical theory of these things will face an apparently intractable dilemma: either beg the question or be unable to make the distinctions needed to explain what agency and mentality consist in.

To summarize the points I've been trying to make (meanderingly, I'll admit): On the one hand, I applaud the author for prioritizing that confusion-resolution; on the other hand, I'd urge them not to fall into the trap of thinking that confusion-resolution must take the form of stating an alternative theory of action or mind. The best kind of confusion-resolution is the kind that Wittgenstein introduced into philosophy, the kind where the problems themselves disappear - not because we realize they're impossible to solve with present tools and so we give up, but because we realize we weren't even clear about what we were asking in the first place (so the problems fail to even arise). In this case, the problem that's supposed to disappear is the felt need to give a reductive causal account of minds and agents in terms of the non-normative explanatory tools available from maths and physics. So, go ahead and sort out those confusions, but be warned about what that project involves, who has gone down the road before, and the structural obstacles they've encountered both in and outside of philosophy so that you can be clear-headed about what the inquiry can reasonably be expected to yield.

That's all I'll say on the matter. Great back and forth; I don't think there's really much distance between us here. And for what it's worth, mine is a pretty niche view in philosophy, because taken to its conclusion it means that the whole pursuit of trying to explain what minds and agents are is just confused from the gun - not limited by the particular set of explanatory tools presently available - just conceptually confused. Once that's understood, one stops practicing or funding that sort of work. It is totally possible and advisable to keep studying the enabling gears so we can do better medicine and engineering, but we should get clear on how that medical or engineering understanding will advance and what those advances mean for those fundamental questions about what makes life, agents, and minds what they are. Good philosophy helps to dislodge us from the grip of expecting anything non-circular and illuminating in answer to those questions.

Good - though I'd want to clarify that there are some reductionists who think that there must be a reductive explanation for all natural phenomena, even if some will remain unknowable to us (for practical or theoretical reasons).

Other non-reductionists believe that the idea of giving a causal explanation of certain facts is actually confused - it's not that there is no such explanation, it's that the very demand for that kind of explanation shows we don't fully understand the propositions involved. E.g. if someone were to ask why certain mathematical facts are true, hoping for a causal explanation in terms of brain-facts or historical-evolutionary facts, we might wonder whether they understood what math is about.

If you think there's some impossible gap between the human and the nonhuman worlds, then how do you think actual humans got here?

 

There are many types of explanatory claims in our language. Some are causal (how did something come to be), others are constitutive (what is it to be something), others still are normative (why is something good or right). Most mathematical explanation is constitutive, most action explanation is rational, and most material explanation is causal. It's totally possible to think there's a plain causal explanation about how humans evolved (through a combination of drift and natural selection, in which proportion we will likely never know) - while still thinking that the prospects for coming up with a constitutive explanation of normativity are dim (at best) or outright confused (at worst).

A common project shape for reductive naturalists is to try to use causal explanations to form a constitutive explanation of the normative aspects of biological life. If you spend enough time studying the many historical attempts that have been made at these explanations, you begin to see a pattern emerge where a would-be reductive theorist will either smuggle in a normative concept to fill out their causal story (thereby begging the question), or fail to deliver a theory with the explanatory power to make basic normative distinctions which we intuitively recognize and which the theory should be able to account for (there are several really good tests out there for this - see the various takes on rule-following problems developed by Wittgenstein). Terms like "information," "structure," "fitness," "processing," "innateness," and the like are all subject to this sort of dilemma if you really put them under scrutiny. Magic non-natural stuff (like souls or spirit or that kind of thing) is often what people have reached for when forced onto this dilemma. Postulating that kind of thing is just the other side of the coin, and makes exactly the same error.

So I guess I'd say, I find it totally plausible that normative phenomena could be sui generis in much the same way that mathematical phenomena are, without finding it problematic that natural creatures can come to understand those phenomena through their upbringing and education. Some people get wrapped up in bewilderment about how this could even be possible, and I think there's good reason to believe that bewilderment reflects deep misunderstandings about the phenomena themselves, the recourse for which is sometimes called philosophical therapy.

Another point I want to be clear on:

right now people are in a race to feed ginormous input sets to deep learning systems and probably aren't stopping anytime soon

I don't think it's in-principle impossible to get from non-intelligent physical stuff to intelligent physical stuff by doing this - and I'm actually sympathetic to the biological anchors approach described here, which was recently discussed on this site. I just think that the training runs will need to pay the computational costs for evolution to arrive at human brains, and for human brains to develop to maturity. I tend to think - and good research in child development backs this up - that the structure of our thought is inextricably linked to our physicality. If anything, I think that'd push the development point out past Karnofsky's 2093 estimate. Again, it's clearly not in-principle impossible for a natural thing to get the right amount of inputs to become intelligent (it clearly is possible; every human does it when they go from babies to adults); I just think we often underestimate how deeply important our biological histories (evolutionary and ontogenetic) are in this process. So I hope my urgings don't come across as advocating for a return to some kind of pre-Darwinian darkness; if anything, I hope they can be seen as advocating for an even more thoroughgoing biological understanding. That must start with taking very seriously the problems introduced by drift, and the problems with attempts to derive the normative aspects of life from a concept like genetic information (one which is notoriously subject to the dilemma above).

Thanks for the tip on the Basic AI Drives paper. I'll give it a read. My suspicion is that once the "basic drives" are specified comprehensively enough to yield an intelligible picture of the agent in question, we'll find that they're so much like us that the alignment problem disappears; they can only be aligned. That's what someone argues in one of the papers I linked above. A separate question I've wondered about - and please point me to any good discussion of this - is how our thinking about AI alignment compares to our thinking about the alignment of intelligent aliens.

Finally, to answer this:

So basically normative concepts are concepts in everyday language ("life", "health"), which get messy if you try to push them too hard?

No - normative concepts are a narrower class than the messy ones, though many find them messy. Normative concepts are those which structure our evaluative thought and talk (about the good, the bad, the ugly, etc.).

Anyway, good stuff. Keep the questions coming, happy to answer.

Totally get it. There are lots of folks practicing philosophy of mind and technology today in that Aussie tradition who I think take these questions seriously and try to cash out what we mean when we talk about agency, mentality, etc. as part of their broader projects.

I'd resist your characterization that I'm insisting words shouldn't be used a particular way, though I can understand why it might seem that way. I'm rather hoping to shed more light on the idea raised by this post that we don't actually know what many of these words even mean when they're used in certain ways (hence the author's totally correct point about the need to clarify confusions about agency while working on the alignment problem). My whole point in wading in here is just to point out to a thoughtful community that there's a really long, rich history of doing just this, and even if you prefer the answers given by the Aussie materialists, it's even better to understand those positions vis-a-vis their present and past interlocutors. If you understand those who disagree with them, and can articulate those positions in terms they'd accept, you understand your preferred positions even better. I wouldn't say I deplore it, but I am always mildly amused when cogsci, compsci, and stats people start wading into plainly philosophical waters ("sort out our fundamental confusions about agency") and talk as if they're the first ones to get there - or the only ones presently splashing around. I guess I would have thought (perhaps naively) that on a site like this people would be at least curious to see what work has already been done on these questions so they can accelerate their inquiry.

Re: ruling out hard problems - lots of philosophy is the attempt to better understand a problem's framing such that it either reduces to a different problem or disappears altogether. I'd urge you to see this as an example of that kind of thing, rather than as ruling out certain questions from the gun.

And on anthropocentrism - I'm not sure what the point is supposed to be here, but perhaps it's directed at the "difference in kind" statements I made above. If so, I'd hope we can see daylight between treating humans as if they were the center of the universe and recognizing that there are at least putatively qualitative differences between the type of agency rational animals enjoy and the type of agency enjoyed by non-rational animals and artifacts. Even the Aussie materialists do that - and then set about trying to form a theory of mind and agency in physical terms, because they rightly see those putatively qualitative differences as a challenge to their particular form of metaphysical naturalism.

So look, if the author of this post is really serious about (1), they will almost certainly have to talk about what we mean when we use agential words. There will almost certainly be disagreements about whether their characterizations (A) fit the facts, and (B) are coherent with the rest of our beliefs. I don't want to come even close to implying that folks in compsci, cogsci, stats, etc. can't do this - they certainly can. I'm just saying that it's really, really conspicuous not to do so in dialogue with those whose entire discipline is devoted to that task. Philosophers are really good at testing our accounts of an agential concept by saying things like, "okay, let's run with this idea of yours that we can define agency and mentality in terms of some Bayesian predictive processing, or in terms of planning states, or whatever, but to see if that view really holds up, we have to be able to use your terms or some innocent others to account for all the distinctions we recognize in our thought and talk about minds and agents." That's the bulk of what philosophers of mind and action do nowadays - they take someone's proposal about a theory of mind or action and test whether it can give an account of some region of our thought and talk about minds and agents. If it can't, they either propose addenda, push the burden back to the theorist, or point out structural reasons why the theory faces general obstacles that seem difficult to overcome.

Here's some recent work on the topic, just to make it plain that there are philosophers working on these questions:

https://link.springer.com/article/10.1007%2Fs10676-021-09611-0

https://link.springer.com/article/10.1007/s11023-020-09539-2

And a great article by a favorite philosopher of action on three competing theories of human agency:

https://onlinelibrary.wiley.com/doi/10.1111/nous.12178

Hope some of that is interesting, and appreciate the response.

Cheers

Naturalizing normativity just means explaining normative phenomena in terms of other natural phenomena whose existence we accept as part of our broader metaphysics. E.g. explaining biological function in terms of evolution by natural selection, where natural selection is explained by differential survival rates and other statistical facts. Or explaining facts about minds, beliefs, attitudes, etc., in terms of non-homuncular goings-on in the brain. The project is typically aimed at humans, but it shows up as soon as you get to biology and the handful of normative concepts (life, function, health, fitness, etc.) that constitute its core subject matter.

Hope that helps. 

No - but perhaps I'm not seeing how they would make the case. Is the idea that somehow their existence augurs a future in which tech gets more autonomous to the point where we can no longer control it? I guess I'd say, why should we believe that's true? It's probably uncontroversial to believe many of our tools will get more autonomous - but why should we think that'll lead to the kind of autonomy we enjoy?

Even if you believe that the intelligence and autonomy we enjoy exist on a kind of continuum - from single-celled organisms through chess-playing computers to us - we'd still need reason to believe that progress along this continuum will continue at a rate necessary to close the gap between where we sit and where our best artifacts currently sit. I don't doubt that progress will continue; but even if the continuum view were right, I think we sit way further out on the continuum than most people who hold that view think. Also, the continuum view itself is very, very controversial. I happen to accept the arguments which aim to show that it faces insurmountable obstacles. The alternative view, which I accept, is that there's a difference in kind between the intelligence and autonomy we enjoy and the kind enjoyed by non-human animals and chess-playing computers. Many people think that if we accept that, we have to reject a certain form of metaphysical naturalism (e.g. the view that all natural phenomena can be explained in terms of the basic conceptual tools of physics, maths, and logic).

Some people think that this form of metaphysical naturalism is bedrock stuff; that if we don't accept it, the theists win, blah blah blah, so we must naturalize mentality and agency, it must exist on a continuum, and we just need a theory which shows us how. Other people think we can have a non-reductive naturalism which takes as primitive the normative concepts found in biology and psychology. That's the view I hold. So no, I don't think the existence of those things makes a case for worries about AGI. Things which enjoy the kind of mentality and autonomy we enjoy must be like us in many, many ways - that is, after all, what enables us to recognize them as having mentality and autonomy like ours. They probably need to have bodies, be mortal, have finite resources, have an ontogenesis period where they go from not like-minded to like-minded (as all children do), have some language, etc.

Also, I think we have to think really carefully about what we mean when we say "human kind of intelligence" - if you read Jim Conant's logically alien thought paper you come to understand why that extra bit of qualification amounts to plainly nonsensical language. There's only intelligence simpliciter; insofar as we're justified in recognizing it as such, it's precisely in virtue of its bearing some resemblance to ours. The very idea of other kinds of intelligence which we might not be able to recognize is conceptually confused (if it bears no resemblance to ours, in virtue of what are we supposed to call it intelligent? Ex hypothesi? If so, I don't know what I'm supposed to be imagining).

The person who wrote this post rightfully calls attention to the conceptual confusions surrounding most casual pre-formal thinking about agency and mentality. I applaud that, and am urging that the most rigorous, well-trodden paths exploring these confusions are to be found in philosophy as practiced (mostly but not exclusively) in the Anglophone tradition over the last 50 years.

That this should be ignored or overlooked out of pretension by very smart people who came up in cogsci, stats, or compsci is intelligible to me; that it should be ignored on a blog that is purportedly about investigating all the available evidence to find quicker pathways to understanding is less intelligible. I would encourage everyone with an interest in this stuff to read the Stanford Encyclopedia of Philosophy entries on different topics in philosophy of action and philosophy of mind, then go off their bibliographies for more detailed treatments. This stuff is all explored by philosophers really sympathetic to - even involved in - the projects of creating AGI. But more importantly, it is equally explored by those who think the project is theoretically and practically possible but prudentially mistaken, and by those who think it is theoretically and practically impossible, let alone prudentially advisable.

Most mistakes here are made in the pre-formal thinking. Philosophy is the discipline of making that thinking more rigorous.

I'm perpetually surprised by the amount of thought that goes into this sort of thing, coupled with the lack of attention to the philosophical literature on theories of mind and agency over the past, let's just say, 50 years. I mean, look at the entire debate around whether or not it's possible to naturalize normativity - most of the philosophical profession has given up on this, or accepts that the question was at best too hard to answer and at worst ill-conceived from the start.

These literatures are very aware of, and conversant with, the latest and greatest in cogsci and stats. They're not just rehashing old stuff. There is a lot of good work done there on how to make those fundamental questions around agency tractable. There's also an important strain in that literature which claims there are in-principle problems for the very idea of a generalized theory of mind or agency (e.g. Putnam, McDowell, Wittgenstein, the entire University of Chicago philosophy department, etc.).

I entered a philosophy PhD program convinced that there were genuine worries here about AGI, machine ethics, etc. I sat in the back of MIRI conferences quietly nodding along. Then I really started absorbing Wittgenstein and what's sometimes called the "resolute reading" of his corpus, and I have become convinced that what we call cognition, intelligence, and agency are a family of concepts which have a really unique foothold in biological life - that naturalizing even basic concepts like life turns out to be notoriously tricky (because of their normativity), and that the intelligence we recognize in human beings and other organisms is so bound up in our biological forms of life that it becomes very difficult to imagine something without the desire to evade death, nourish itself, and protect a physical body having any of the core agential concepts required to even be recognized as intelligent. Light dawns gradually over the whole. Semantic and meaning holism. Embedded biology. If a lion could speak, we couldn't understand it. All that stuff.

A great place to start is Jim Conant's "The Search for the Logical Alien"; then get into Wittgenstein's discussions of rule-following and ontogenesis. Then have a look at some of the challenges to naturalizing normativity in biology. This issue runs deep.

In the end, this idea that intelligence is a kind of isolatable property, independent of the particular forms of life in which it is manifest, is a really, really old one. It goes back at least to the Gnostics. Every generation recapitulates it in some way. AGI just is that worry re-wrought for the software age.

If anything, this kind of thing may be worth studying just because it calls into question the assumptions of programs like MIRI and their earnest hand-wringing over AGI. At a minimum, it's convinced me that we underestimate by many orders of magnitude the volume of inputs needed to shape our "models." It starts before we're even born, and we can't discount the centrality of, e.g., the experience of touching things, having fragile bodies, having hunger, etc. in shaping the overall web of beliefs and desires that constitutes our agential understanding.

Basically, if you're looking for the foundational issues confronting any attempt to form a gears-level understanding of the kind of goal-directed organization that all life-forms exhibit (e.g. much of biological theory), you would do well to read some philosophy of biology. Peter Godfrey-Smith has an excellent introduction that's pretty even-handed. Natural selection really doesn't bear as much weight as folks in other disciplines would like it to - especially when you realize that the possibility of evolution by drift confounds any attempt at a statistical reduction of biological function.

Hope something in there is interesting for you.