Cognitive psychologists generally make better predictions about human behavior than neuroscientists do.
I grant you that; my assertion was one of type, not of degree. A predictive explanation will generally (yes, I am retracting my 'almost always' quantifier) be reductionist, but this is a very different statement from saying that the most reductionist explanation will be the best.
Here it seems to me like you think about philosophy as distinct from empirical reality.
Less 'distinct' and more 'abstracted'. To put it as pithily (and oversimplified) as possible, empiricis...
Mary is presumed to have all objective knowledge and only objective knowledge. Your phrasing is ambiguous and therefore doesn't address the point.
The behavior of the neurons in her skull is an objective fact, and that is what I was referring to. Apologies for the ambiguity.
When you say Mary will know what happens when she sees red, do you mean she knows how red looks subjectively, or that she knows something objective, like what her behaviour will be?
The latter. The former is purely experiential knowledge, and, as I have repeatedly said, is contained in a ...
I think your post was meant as a reply to me. I'm the one who still accepts physicalism. AncientGreek is the one who rejects it.
Whose idea of reductionism are you criticising? I think your post would be more useful if it were clearer about the idea you want to challenge.
Hmm.
I think this is the closest I get to having a "Definition 3.4.1" in my post:
...the other reductionism I mentioned, the 'big thing = small thing + small thing' one...
Essentially, the claim is that non-reductionist explanations aren't always wrong as accurate explanations of reality.
The confusion, however, as I realized elsewhere in the thread, is that I conflate 'historical explanation' with 'predictiv...
That's what I mean by complexity, yeah.
I don't know if I made this clear, but the point I make is independent of which high-level principles explain the thing, only that they are high level. The ancestors that competed across history to produce the organism of interest are not small parts making up a big thing, unless you subscribe to a causal reductionism where you use causes instead of internal moving parts. But I don't like calling this reductionism (or even a theory, really) because it's, as I said, a species of causality, broadly construed.
[Why You Don't Think You're Beautiful](http://skepticexaminer.com/2016/05/dont-think-youre-beautiful/)
[Why You Don't Think You're Beautiful](http://intentionalinsights.org/why-you-dont-think-youre-beautiful)
Mary's room seems to be arguing that:
[experiencing(red)] =/= [experiencing(understanding([experiencing(red)]))]
(translation: the experience of seeing red is not the experience of understanding how seeing red works)
This is true, when we take those statements literally. But it's true in the same sense that a Gödel encoding of a statement in PA is not literally that statement. It is just a representation, but the representation is exactly homomorphic to its referent. Mary's representation of reality is presumed complete ex hypothesi, therefore she will understan...
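To make the representation-vs-referent point concrete, here's a toy encoding in Python (my own illustration, not anything from the original argument): the code number is not the string, but it determines the string exactly.

```python
# A toy Gödel-style encoding: the number is not the statement, but the
# mapping is faithful (injective), so nothing about the statement is lost.

def godel_encode(statement):
    """Encode a string as a single integer (a crude Gödel numbering)."""
    n = 0
    for ch in statement:
        n = n * 256 + ord(ch)   # base-256 positional encoding
    return n

def godel_decode(n):
    """Recover the original string from its code number."""
    chars = []
    while n:
        chars.append(chr(n % 256))
        n //= 256
    return "".join(reversed(chars))

s = "0 = 0"
g = godel_encode(s)
assert g != s                   # the representation is not the statement...
assert godel_decode(g) == s     # ...but it determines the statement exactly
print(g)
```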
There are two things you could mean when you say 'reductionism is right'. That reality is reductionist in the "big thing = small thing + small thing" sense, or that reductionist explanations are better by fiat.
Reality is probably reductionist. I won't assign perfect certainty, but reductionist reality is simpler than magical reality.
As it currently stands, we don't have a complete theory of reality, so the only criteria by which we can judge theories are that they 1) are accurate and 2) are simple.
I am not arguing about the rightness or wrongness of reductio...
At least this tells me I didn't make a silly mistake in my post. Thank you for the feedback.
As for your objections,
All models are wrong, some models are useful.
exactly captures my conceit. Reductionism is correct in the sense that it is, in some sense, closer to reality than anti- or contra-reductionism. Likely in a similar sense to how machine code is closer to the reality of a physical computation than a .cpp file, though the analogy isn't exact, for reasons that should become clear.
I'm typing this on a laptop, which is an intricate amalgam of various kind...
"I think you're wrong" is not a position.
The way you're saying this, it makes it seem like we're both in the same boat. I have no idea what position you're even holding.
I feel like I'm doing the same thing over and over and nothing different is happening, but I'll quote what I said in another place in this thread and hope I was a tiny bit clearer.
http://lesswrong.com/lw/nnc/the_ai_in_marys_room/day2
...I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box
Arguably it could simulate itself seeing red and replace itself with the simulation.
I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call something inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red) ] and [experiencing ( [understanding (red) ] ) ] are both brought under the header [knowing (red)], and this is really confusing.
The big box is all knowledge, including the vague 'knowledge of experience' that people talk about in this thread. The box-inside-the-box is verbal/declarative/metaphoric/propositional/philosophical knowledge, that is, anything that is fodder for communication in any way.
The metaphor is intended to highlight that people seem to conflate the small box with the big box, leading to confusion about the situation. Inside the metaphor, perhaps this would be people saying "well maybe there are objects inside the box which aren't inside the box at all". W...
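If it helps, here's a minimal sketch of the box metaphor in Python. The set names and their contents are made-up stand-ins for the illustration, not anything formal.

```python
# The big box is everything Mary knows in the broad sense; the small box is
# the verbal/declarative subset. Pre-red Mary's big box contains everything
# in the small box, but "the experience of red" only enters the big box when
# she actually sees red.

verbal_knowledge_of_red = {"wavelength of red", "neuroscience of seeing red"}   # small box
marys_knowledge_before  = set(verbal_knowledge_of_red)                          # big box, pre-red
marys_knowledge_after   = marys_knowledge_before | {"the experience of red"}    # big box, post-red

assert verbal_knowledge_of_red <= marys_knowledge_before       # small box sits inside the big one
assert "the experience of red" not in verbal_knowledge_of_red  # it was never a verbal fact...
assert "the experience of red" in marys_knowledge_after        # ...but it is still "knowledge"
```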
I have no idea what your position even is and you are making no effort to elucidate it. I had hoped this line
I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.
was enough to clue you in to the point of my post.
I'd highly recommend this sequence to anyone reading this: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/
The thrust of the argument, applied to this situation, is simply that 'knowledge' is used to mean two completely different things here. On one hand, we have knowledge as verbal facts and metaphoric understanding. On the other, we have knowledge in the broader sense: the superset containing both verbal and averbal (non-verbal) knowledge.
To put it as plainly as possible: imagine you have a box. Inside this box, there is another, smaller...
I think the argument is asserting that Mary post-brain-surgery is identical to Mary post-seeing-red. There is no difference; the two Marys would both attest to having access to some ineffable quality of red-ness.
To put it bluntly, both Marys say the same things, think the same things, and generally are virtually indistinguishable. I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.
Somewhere I got the impression that ... Sarah Perry of Ribbonfarm were LWers at some point.
She was/is. Her (now dead) blog, The View From Hell, is on the lesswrong wiki list of blogs. She has another blog, at https://theviewfromhellyes.wordpress.com which updates, albeit at a glacial pace.
I'm sorry that I overestimated my achievements. Thank you for being civil.
What do you expect to happen if you feed your code a problem that has no Turing-computable solution?
I'm actually quite interested in this. For something like the busy beaver function, it just runs forever, with the output starting out fuzzy and getting progressively less fuzzy, but never becoming certain.
Although I wonder about something like super-tasks somehow being described in my model. You can definitely get input from arbitrarily far in the future, but you can do even crazier things...
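To make the 'fuzzy but never certain' part concrete, here's a minimal sketch in Python. This is not my actual code or model; the `step`/`halts_in_the_limit` interface is made up for the illustration. The stream of guesses converges to the right answer, but at no finite time can you be sure the current guess is final.

```python
from itertools import count

def halts_in_the_limit(step):
    """Yield a stream of guesses about whether a program halts.
    `step` advances the program one step and returns True once it has halted.
    The guesses eventually settle on the truth, but you can never be
    certain the current guess won't be revised."""
    for _ in count():
        if step():
            while True:          # once halted, the guess is fixed forever
                yield True
        yield False              # provisional guess: "probably doesn't halt"

def make_program(n_steps):
    """Toy program that halts after n_steps steps."""
    state = {"t": 0}
    def step():
        state["t"] += 1
        return state["t"] >= n_steps
    return step

guesses = halts_in_the_limit(make_program(5))
print([next(guesses) for _ in range(8)])   # guesses flip from False to True
```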
For some strange reason, your post wasn't picked up by my RSS feed and the little mail icon wasn't orange. Sorry to keep you waiting for a reply for so long.
The halting-problem proof is for Turing machines. My model isn't a Turing machine; it's supposed to be more powerful.
You'll have to mathematically prove that it halts for all possible problems.
Not to sound condescending, but this is why I'm posting it on a random internet forum and not sending it to a math professor or something.
I don't think this is revolutionary, and I think there is a very good possibili...
It's probably stupid to reply to a comment from more than three years ago, but antisocial personality disorder does not imply violence. There are examples of psychopaths who were raised in good homes and grew up to become successful assholes.
I wrote a hypercomputer in 60-ish lines of Python. It's (technically) more powerful than every supercomputer in the world.
Edit: actually, I spoke too soon. I have written code which outlines a general scheme that can be modified to construct schemes in which hypercomputers could possibly be constructed (including itself). I haven't proven that my scheme allows for hypercomputation, but a scheme similar to it could (probably), including itself.
Edit: I was downvoted for this, which I suppose was justified.
What my code does is simulate a modified version of CGoL (Joh...
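For anyone who wants something concrete, here's a minimal sketch of one generation of standard, unmodified CGoL in Python. My actual code, the modifications, and the hypercomputation scheme itself are not reproduced here; this is only the base simulation the scheme builds on.

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life on an unbounded grid.
    `live_cells` is a set of (x, y) coordinates of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is currently alive and has exactly 2.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider, stepped a few generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))
```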
Edit: I dug through OP's post history and found this thread. The thread gives better context to what /u/reguru is trying to say.
A tip: very little is gained by couching your ideas in this self-aggrandizing, condescending tone. Your over-reliance on second person is also an annoying tic, though some make it work. You don't, however.
You come off as very arrogant and immature, and very few people will bother wading through this. The few that will do it only in hopes of correcting you.
If you're at all interested in improving the quality of your writing, con...