An Introduction to Probability and Inductive Logic by Ian Hacking

Have any of you read this book?

I have been invited to join a reading group based around it for the coming academic year and would like the opinions of this group as to whether it's worth it.

I may join in just for the section on Bayes. I might even finally discover the correct pronunciation of "Bayesian". ("Bay-zian" or "Bye-zian"?)


You're missing my point somewhat. I'm not saying you can't get better at conversation, nor that there aren't tips or instruction you can give; you can see me doing exactly that elsewhere on this very page. Further, I just said above that this is exactly how people normally develop their conversational abilities.

My point is simply that decision procedures/algorithms are not the way to go, because they will not produce natural-sounding conversation. In fact, using them to teach someone conversation would be counter-productive, because it would give them a false idea of what conversation is like. It represents conversation as mechanical, and if a person approaches a conversation as if it were mechanical, then they will not succeed in having a genuine conversation.


I think the idea of learning conversational social norms and so forth by practice/instruction is a very different issue to consciously using a decision procedure to dictate your conversation.

The instruction you describe is pretty much a description of what most people experience growing up, through a combination of what their parents teach them and experience/trial and error.

This is not the same thing as standing next to someone and going through a mental flow chart, or list of "dos and don'ts" every time it's your turn to say something.

The former is genuinely learning conversation; the latter is trying to fake it.


Um, thanks, but I think wrong thread.


I assume you mean of my reply to HughRistik.

No statistical data, if that's what you want.

However, I think that in this case it isn't needed. It seems clear that conducting a conversation by rules and algorithms cannot replicate genuine conversation. Very little of a conversation is about what is actually said. You have to read body language, you have to read into what isn't said, and you have to use intuition, because you register these things unconsciously, not consciously.

I can't be bothered to find it at the moment, or in the foreseeable future, because this topic just isn't worth hours of my time, but I do recall studies in which people's ability to register body language consciously was compared with their ability to read it intuitively, sub-consciously. The results were something like this: the conscious mind could only spot 2 or 3 body-language signs, whereas the unconscious mind was able to pick up on as many as 15.


Certainly there are patterns in social interaction.

However, I think that if you go into social interaction aware of these patterns and meaning to act on them, then this very awareness will in fact ruin your social interaction, because one of the rules of genuine social interaction is that it's free-flowing and natural-feeling. If you treat it like a formula, you'll break it.


Is self-ignorance a prerequisite of human-like sentience?

I present here some ideas I've been considering recently with regard to philosophy of mind, though I suppose the answer to this question would have significant implications for AI research.

Clearly, our instinctive perception of our own sentience/consciousness is one which is inaccurate and mostly ignorant: we do not have knowledge or sensation of the physical processes occurring in our brains which give rise to our sense of self.

Yet I take it as true that our brains - like everything else - are purely physical. No mysticism here, thank you very much. If they are physical, then everything that occurs within them is causally deterministic. I avoid here any implications regarding free will (a topic I regard as mostly nonsense anyway). I simply point out that our brain processes follow a causal narrative: input leads to brain state A, which leads to brain state B, which leads to brain state C, and so on. These processes are entirely physical, and therefore theoretically (not yet practically) entirely predictable.

Now, ask yourself this question: what would our self-perception be like, if it was entirely accurate to the physical reality? If there was no barrier of ignorance between our consciousness and the inner workings of our brains?

With every idea, thought, emotion, plan, memory and action we had, we would be aware of the brainwave that accompanied it - the specific pattern of neuronal firings, and how they built up to create semantically meaningful information. Further, we'd see how this brain state led to the following brain state, and so on. We would perceive ourselves as purely mechanical.

In addition, as our brain is not a single entity but a massive network of neurons, collected into different systems (or modules) that work together while having separate functions, we would not think of our mental processes as unified - at least nowhere near as much as we do now. We would no longer attribute our thoughts and mental life to an "I", but to the totality of mechanical processes that - when we were ignorant - built up to create a unified sense of "I".

I would tentatively suggest that such a sense of self is incompatible with our current sense of self. That how we act and behave and think, how we see ourselves and others, is intrinsically tied to the way we perceive ourselves as non-mechanical, possessing a mystical will - an I - which goes where it chooses (of course academically you may recognise that you're a biological machine, but instinctually we all behave as if we weren't). In short, I would suggest that our ignorance of our neural processes is necessary for the perception of ourselves as autonomous sentient individuals.

The implications of this, were it true, are clear. It would be impossible to create an AI which was both able to perceive and alter its own programming, while maintaining a human-like sentience. That's not to say that such an AI would not be sentient - just that it would be sentient in a very different way to how we are.

Secondly, we would possibly not even be able to recognise this other sentience, such would be the difference. For every decision or proclamation the AI made, we would simply see the mechanical programming at work and say, "It's not intelligent like we are; it's just following mechanical principles." (Think, for example, of Searle's Chinese Room, which I take to show only that if we can fully comprehend every stage of an information-manipulation process, most people will intuitively judge it not to be sentient.) We would think our AI project unfinished, and keep trying to add that "final spark of life", unaware that we had completed the project already.


When considering the initial probability question regarding Linda, it strikes me that it isn't really a choice between a single possibility and two conjoined possibilities.

Giving a person an exclusive choice between "bank teller" OR "bank teller and feminist" leads people to infer that "bank teller" means "bank teller and not feminist".

So both choices are conjoined items; it's just that one of them is hidden.

Given this, people may not be so incorrect after all.
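As an illustrative sketch (the numbers here are hypothetical, chosen only to make the point), if respondents do read "bank teller" as "bank teller and not feminist", then ranking the explicit conjunction higher need not violate probability theory:

```python
# Hypothetical probabilities, assuming independence for simplicity.
p_teller = 0.05      # P(Linda is a bank teller)
p_feminist = 0.90    # P(Linda is a feminist), given her description

# The two readings the respondent is implicitly choosing between:
p_teller_and_feminist = p_teller * p_feminist            # = 0.045
p_teller_and_not_feminist = p_teller * (1 - p_feminist)  # ~ 0.005

# Under the pragmatic reading, the "fallacious" answer is the more
# probable of the two conjunctions.
assert p_teller_and_feminist > p_teller_and_not_feminist
```

Of course, this only rescues the respondents if the pragmatic reading is what they actually use; the linked rebuttal mentioned in the edit below disputes exactly that.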

Edit: People should probably stop giving this post points, given Sniffnoy's linking of a complete destruction of this objection :)


I think you brush upon a quite important point here: good conversation is less about being good at conversation and more about not being bad at it. People will talk quite happily with someone who is utterly boring, so long as it's not for too long and they've got nothing better to do.

People are only really put off a conversation when a person does something odd.

Prime among these are non-sequiturs, unusually extreme opinions (especially about topics people normally don't have extreme opinions about), and discussing topics which are generally understood as not being suitable for general conversation (such as topics which are invasive/personal, obscure, or too academic for the context - it's fine to talk intellectually in the appropriate place, but not to strangers at a bar/club).
