There is nothing wrong with wanting to be something you are not. But you should also want to have accurate beliefs about yourself. And being the sort of person who prefers beer over charity doesn't make you a bad person. And I have no idea how you can change your true preferences, even if you want to.
I think the problem isn't that your actions are inconsistent with your beliefs, it's that you have some false beliefs about yourself. You may believe that "death is bad", "charity is good", and even "I want to be a person who would give to charity instead of buying a beer". But it does not follow that you believe "giving to charity is more important to me than buying a beer".
This explanation is more desirable, because if actions don't follow from beliefs, then you have to explain what they follow from instead.
It seems you are no longer ruling out a science of other minds.
No, by "mind" I just mean any sort of information processing machine. I would have said "brain", but you used a more general "entity", so I went with "mind". The question of what is and isn't a mind is not very interesting to me.
I've already told you what it would mean
Where exactly?
Is the first half of the conversation meaningful and the second half meaningless?
First of all, the meaningfulness of words depends on the observer. "Robot pain"...
category error, like "sleeping idea"
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that "bitter purple" (or something) was a category error, and your answer was very underwhelming.
I say that "sleeping idea" is meaningless, because I don't have a procedure for deciding whether an idea is sleeping or not. However, we could easily agree on such procedures. For example, we could say that only animals can sleep, and for every idea, "is this idea sleeping" is ans...
That is a start, but we can't gather data from entities that cannot speak
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain exactly how that would work.
On the other hand, if the mind is so primitive that it cannot form the thought "X feels like Y", then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previo...
We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
We can derive that model by looking at brain states and asking the brains which states are similar to which.
Even if it is an irrational personal peccadillo of someone to not deliberately cause pain, they still need to know about robot pain.
They only need to know about robot pain if "robot pain" is a phrase that describes something. They could a...
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct?
???
Now you imply that they could possibly be detected, in which case I withdraw my original claim.
Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm ...
I doubt that's a good thing. It hasn't been very productive so far.
Well, you used it.
I can also use "ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond.
What happens if a robot pain detector is invented tomorrow?
Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.
You keep saying it is a broken concept.
Yes. I consider that "talking about consciousness". What else is there to say about it?
That anything should feel like anything,
If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I'll need you to paraphrase.
Circular as in
"Everything is made of matter. Matter is what everything is made of."?
Yes, if I had actually said that. By the way, matter exists in your universe too.
...Yes: it's relevant beca
Sure, and if X really is the best approximation of Y that Bob can understand, then again Alice is not dishonest. Although I'm not sure what "approximation" means exactly.
But there is also a case where Alice tells Bob that "X is true", not because X is somehow close to Y, but because, supposedly, X and Y both imply some Z. This is again a very different case. I think this is just pure and simple lying. That is, the vast majority of lies ever told fall into this category (for example, Z could be "you shouldn't jail me", X could ...
Case 1: Alice tells Bob that "X is true", Bob then interprets this as "Y is true"
Case 2: Alice tells Bob that "X is true", because Bob would be too stupid to understand it if she said "Y is true". Now Bob believes that "X is true".
These two cases are very different. You spend the first half of your post in case 1, and then suddenly jump to case 2 for the other half.
<...> then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.
This is fair.
There are certain truths which literally cannot be spoken to some people.
But this is a completely different case. Lies told to stupid people are still lies, the stupid people don't understand the truth behind them, and you have communicated nothing. You could argue that those lies are somehow justified, but there is no parallel between lying to stupid people and things like "You're the best".
Well, I can imagine a post on SSC with 5 statements about the next week, where other users would reply with probabilities of each becoming true, and arguments for that. Then, after the week, you could count the scores and name the winners in the OP. It would probably get a positive reaction. Why not give it a try?
I'm not sure what the 5 statements should be though. I think it must be "next week" not "next year", because you can't enjoy a game if you've forgotten you're playing it. Also, for it to be a game, it has to be repeatable, but ...
There are way too many "shoulds" in this post. If anyone can have fun predicting important events at all, then it would probably be people in this forum. Can we make something like this happen? Would we actually want to participate? I'm not sure that I do.
That is not a fact, and you have done nothing to argue it, saying instead that you don't want to talk about morality
Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?
and also don't want to talk about consciousness.
What?
A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations that are not addressed by talk of brain states and preferences.
What facts am I fail...
It's obvious - we need buzzfeed to create a "which celebrities will get divorced this year" quiz (with prizes?). There is no way people will be interested in predicting next year's GDP.
There is a common mistake in modeling humans: thinking that they are simple. Assuming that "human chose a goal X" implies "human will take actions that optimally reach X" would be silly. Likewise, assuming that humans can accurately observe their own internal state is silly. Humans have a series of flaws and limitations that obscure the simple abstractions of goal and belief. However, saying that goals and beliefs do not exist is a bit much. They are still useful in many cases and for many people.
By the way, it sounds a little like you're referring to some particular set of beliefs. I think naming them explicitly would add clarity.
What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don't.
I'm trying to understand your definitions and how they're different from mine.
I think it is false by Occam's razor, which automatically means it is meaningful, because if it were meaningless I would not know how to apply Occam's razor or anything else to it.
I see that for you "meaningless" is a very narrow concept. But does that agree with your stated definition? In what way is "there is an invisible/undetectable unico...
You're right. Instead it means that he doesn't have the willpower required to become a doctor. Presumably, this is something he didn't know before he started school.