I don't think so. Compare the following two requests:
(1) Describe a refrigerator without using the word refrigerator or near-synonyms.
(2) Describe the structure of a refrigerator in terms of moving parts and/or subprocesses.
The first request demands the tabooing of words; the second request demands an answer of a particular (theory-laden) form. I think the OP's request is like request 2. What's more, I expect submitting request 2 to a random sample of people would license the same erroneous conclusion about "refrigerator" as it did about "consciousnes...
Section 1.6 is another appendix about how this series relates to Philosophy Of Mind. My opinion of Philosophy Of Mind is: I’m against it! Or rather, I’ll say plenty in this series that would be highly relevant to understanding the true nature of consciousness, free will, and so on, but the series itself is firmly restricted in scope to questions that can be resolved within the physical universe (including physics, neuroscience, algorithms, and so on). I’ll leave the philosophy to the philosophers.
At the risk of outing myself as a thin-skinned philosop...
I strongly believe that step 1 is sufficient or almost sufficient for step 2, i.e., that it's impossible to give an adequate account of human phenomenology without figuring out most of the computational aspects of consciousness.
Apologies for nitpicking, but your strong belief that step 1 is (almost) sufficient for step 2 would be more faithfully rephrased as: it will (probably) be possible/easy to give an adequate account of human phenomenology by figuring out most of the computational aspects of consciousness. The way you phrased it (viz., "impossible......
I agree with the thrust of this comment, which I read as saying something like "our current physics is not sufficient to explain, predict, and control all macroscopic phenomena". However, this is a point which Sean Carroll would agree with. From the paper under discussion (p.2): "This is not to claim that physics is nearly finished and that we are close to obtaining a Theory of Everything, but just that one particular level in one limited regime is now understood."
The claim he is making, then, is totally consistent with the need to find further appro...
I see. I'm afraid I don't have much great literature to recommend on computational semantics (though Josh Tenenbaum's PhD dissertation seems relevant). I still wonder whether, even if you disagree with the approaches you have seen in that domain, those might be the kind of people well-placed to help with your project. But that's your call of course.
Depending on your goals with this project, you might get something out of reading work by relevance theorists like Sperber, Wilson, and Carston (if you haven't before). I find Carston's reasoning about how...
Thanks for the response. Personally, I think your opening sentence as written is much, much too broad to do the job you want it to do. For example, I would consider "natural language semantics as studied in linguistics" to include computational approaches, including some Bayesian approaches which are similar to your own. If I were a computational linguist reading your opening sentence, I would be pretty put off (presumably, these are the kind of people you are hoping not to put off). Perhaps including a qualification that it is classical semantics you are talking about (with optional explanatory footnote) would be a happy medium.
I enjoyed the content of this post; it was nicely written, informative, and interesting. I also realise that the "less bullshit" framing is just a bit of fun that shouldn't be taken too seriously. Those caveats aside, I really dislike your framing and want to explain why! Reasons below.
First, the volume of work on "semantics" in linguistics is enormous and very diverse. The suggestion that all of it is bullshit comes across as juvenile, especially without providing further indication as to what kind of work you are talking about (the absence of a signal th...
Fair enough if literally any approach using symbolic programs (e.g. a python interpreter) is considered neurosymbolic, but then there isn't any interesting weight behind the claim "neurosymbolic methods are necessary".
If somebody achieved a high-score on the ARC challenge by providing the problems to an LLM as prompts and having it return the solutions as output, then the claim "neurosymbolic methods are necessary" would be falsified. So there is weight to the claim. Whether it is interesting or not is obviously in the eye of the beholder.
I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more. ...
I actually think what you are going for is closer to JL Austin's notion of an illocutionary act than anything in Wittgenstein, though as you say, it is an analysis of a particular token of the type ("believing in"), not an analysis of the type. Quoting Wikipedia:
"According to Austin's original exposition in How to Do Things With Words, an illocutionary act is an act:
In Leibniz’ case, he’s known almost exclusively for the invention of calculus.
Was this supposed to be a joke (if so, consider me well and truly whooshed)? At any rate, it is most certainly not the case. Leibniz is known for a great many things (both within and without mathematics) as can be seen from a cursory glance at his Wikipedia page.
Rather, they might be mere empty machines. Should you still tolerate/respect/etc them, then?"
My sense is that I'm unusually open to "yes," here.
I think the discussion following from here is a little ambiguous (perhaps purposefully so?). In particular, it is unclear which of the following points are being made:
1: Sufficient uncertainty with respect to the sentience (I'm taking this as synonymous with phenomenal consciousness) of future AIs should dictate that we show them tolerance/respect etc...
2: We should not be confident that sentience is a good c...
Apologies, I had thought you would be familiar with the notion of functionalism. Meaning no offence at all but it's philosophy of mind 101, so if you're interested in consciousness, it might be worth reading about it. To clarify further, you seem to be a particular kind of computational functionalist. Although it might seem unlikely to you, since I am one of those "masturbatory" philosophical types who thinks it matters how behaviours are implemented, I am also a computational functionalist! What does this mean? It means that computational functionalism is...
I am sorry that you got the impression I was trolling. Actually, I was trying to communicate with you. None of the candidate criteria I suggested were conjured out of a hat or based on anything I just made up. Unfortunately, collecting references for all of them would be pretty time consuming. However, I can say that the global projection phrase was gesturing towards global neuronal workspace theory (and related theories). Although you got the opposite impression, I am very familiar with consciousness research (including all of the references y...
I think you're missing something important.
Obviously I can't speak to the reason there is a general consensus that LLM-based chatbots aren't conscious (and therefore don't deserve rights). However, I can speak to some of the arguments that are still sufficient to convince me that LLM-based chatbots aren't conscious.
Generally speaking, there are numerous arguments which essentially have the same shape to them. They consist of picking out some property that seems like it might be a necessary condition for consciousness, and then claiming that LLM-based...
Enjoyable post; I'll be reading the rest of them. I especially appreciate the effort that went into warding off the numerous misinterpretations that one could easily have had (but I'm going to go ahead and ask something that may signal I have misinterpreted you anyhow).
Perhaps this question reflects poor reading comprehension, but I'm wondering whether you are thinking of valence as being implemented by something specific at a neurobiological level or not? To try and make the question clearer (in my own head as much as anything), let me lay out two al...
In other words, you think that even in a world where the distribution of mathematical methods were very specific to subject areas, this methodology would have failed to show that? If so, I think I disagree (though I agree the evidence of the paper is suggestive, not conclusive). Can you explain in more detail why you think that? Just to be clear, I think the methodology of the paper is coarse, but not so coarse as to be unable to pick out general trends.
Perhaps to give you a chance to say something informative, what exactly did you have in mind by "united around methodology" when you made the original comment I quoted above?
Ok, I do really like that move, and generally think of fields as being much more united around methodology than they are around subject-matter. So maybe I am just lacking a coherent pointer to the methodology of complex-systems people.
The extent to which fields are united around methodologies is an interesting question in its own right. While there are many ways we could break this question down which would probably return different results, a friend of mine recently analysed it with respect to mathematical formalisms (paper: https://link.springer.com/arti...
I don't have an answer for your question about how you might become confident that something really doesn't exist (other than a generic 'reason well about social behaviour in general, taking all possible failure modes into account'). However, I would point out that the example you give is about your group of friends in particular, which is a very different case from society at large. Shapeshifting lizardmen are almost certainly not evenly distributed across friendship groups such that every group of a certain size has one, but rather clumped together as we would expect due to homophily.
Edit: I see this point was already addressed in Bezzi's response on filter bubbles.
I think you are very confused about how to interpret disagreements around which mental processes ground consciousness. These disagreements do not entail a fundamental disagreement about what consciousness is as a phenomenon to be explained.
Regardless of that, though, I just want to focus on one of your "referents of consciousness" here, because I also think the reasoning you provide for your particular claims is extremely weak. You write the following:
...