All of ImNotAsSmartAsIThinK's Comments + Replies

Edit: I dug through OP's post history and found this thread. The thread gives better context to what /u/reguru is trying to say.


A tip: very little is gained by couching your ideas in this self-aggrandizing, condescending tone. Your over-reliance on second person is also an annoying tic, though some make it work. You don't, however.

You come off as very arrogant and immature, and very few people will bother wading through this. The few that do will do so only in hopes of correcting you.

If you're at all interested in improving the quality of your writing, con... (read more)

-1 reguru 7y
That's not even following the position, as you have already created a map, or look to create one. Indeed, if I could remove that I would, because it misses the point; or rather, that could be worked out in more detail. But maybe from the context of arational reality, everything is arational. If you, however, are inside it, some actions may seem superior to other actions.

Lack of awareness means, in my opinion, a measurement of awareness, not a separate definition. I don't think you or I can reach "total awareness" while still thinking within the paradox of limited awareness. Because if I realize my awareness is limited, I become a little more aware, yet on top of that I have missed my unawareness. Of course, there are probably those who have done so and know.

How is this refuting any of the arguments made? You either agree or you don't, and then you say why. The usefulness-or-uselessness factor is another discussion, which I didn't even bring up, I think. How is this relevant to any of the ideas in the post? The argument is that it is a human projection, that reality is arational. I never argued that something being a human projection removes any positive connotations. You are just aware that it is a human projection, and that it is fine to be that way (according to the argument). I can't see why that would be the case.

An atom is a map. The void is a map. One can be invited to the void, yet speaking of it, or finding it, was a map. Then one can notice one was there all along. You couldn't find a counter-argument to my claim. Because by silencing thoughts you realize that reality does not cease to exist, even of patterns and such. A map of neural pathways is still a map, for example, because it is a thought. It does not need a map; it simply exists as an experience, which is the point of my post. So maps are not the territory, literally. Of course, you can, by realizing everything is a map, except the territory which cannot have a map, be expla

Cognitive psychologists generally make better predictions about human behavior than neuroscientists.

I grant you that; my assertion was one of type, not of degree. A predictive explanation will generally (yes, I am retracting my 'almost always' quantifier) be reductionist, but this is a very different statement from the claim that the most reductionist explanation will be the best.

Here it seems to me like you think about philosophy as distinct from empirical reality.

Less 'distinct' and more 'abstracted'. To put it as pithily (and oversimplified) as possible, empiricis... (read more)

Mary is presumed to have all objective knowledge and only objective knowledge. Your phrasing is ambiguous and therefore doesn't address the point.

The behavior of the neurons in her skull is an objective fact, and this is what I was referring to. Apologies for the ambiguity.

When you say Mary will know what happens when she sees red, do you mean she knows how red looks subjectively, or that she knows something objective, like what her behaviour will be?

The latter. The former is purely experiential knowledge, and as I have repeatedly said is contained in a ... (read more)

1 TheAncientGeek 7y
You have said that, according to you, stipulatively, subjective knowledge is a subset of objective knowledge. What we mean by objective knowledge is generally knowledge that can be understood at second hand, without being in a special state or having had particular experiences. You say that the subjective subset of objective knowledge is somehow opaque, so that it does not have the properties usually associated with objective knowledge... but why should anyone believe it is objective, when it lacks the usual properties and is only asserted to be objective? I can't see how that has been proven.

You can't prove that redness is physically encoded in the relevant sense just by noting that physical changes occur in brains, because:

1. There's no physical proof of physicalism.
2. An assumption of physicalism is question-begging.
3. You need an absence of non-physical properties, states and processes, not just the presence of physical changes.
4. Physicalism as a meaningful claim, and not just a stipulative label, needs to pay its way in explanation... but its ability to explain subjective knowledge is just what is in question.

It's hard to prove the existence of subjective knowledge on an objective basis. What else would you expect? There is a widespread belief in subjective, experiential knowledge, and the evidence for it is subjective. The alternative is the sort of thing caricatured as 'how was it for me, darling?'.

I think your post seems to have been a reply to me. I'm the one who still accepts physicalism. AncientGeek is the one who rejects it.

0 entirelyuseless 7y
I realize that. A reply doesn't necessarily have to be an argument against the person it is a reply to.

Whose idea of reductionism are you criticising? I think your post could be more useful if it were clearer about the idea you want to challenge.

Hmm.

I think this is the closest I get to having a "Definition 3.4.1" in my post:

...the other reductionism I mentioned, the 'big thing = small thing + small thing' one...

Essentially, the claim is that when it comes to accurately explaining reality, non-reductionist explanations aren't always wrong.

The confusion, however, which I realized elsewhere in the thread, is that I conflated 'historical explanation' with 'predictiv... (read more)

0 ChristianKl 7y
There's the open question of what + means. To me your post didn't feel inaccurate but confused: a mix of saying trivial things and throwing around terms where I don't know exactly what you mean, and I'm not sure whether you have thought about what you mean exactly either. Cognitive psychologists generally make better predictions about human behavior than neuroscientists. Here it seems to me like you think about philosophy as distinct from empirical reality. I get the impression that you try to understand reductionism without seeing how it's actually applied and not applied in reality. You can also make great predictions from the belief that the function of the heart is pumping blood, even if there are no "function-atoms" around.

That's what I mean by complexity, yeah.

I don't know if I made this clear, but the point I make is independent of which high-level principles explain a thing, only that they are high-level. The ancestors that competed across history to produce the organism of interest are not small parts making up a big thing, unless you subscribe to a causal reductionism where you use causes instead of internal moving parts. But I don't like calling this reductionism (or even a theory, really) because it's, as I said, a species of causality, broadly construed.

[Why You Don't Think You're Beautiful](http://skepticexaminer.com/2016/05/dont-think-youre-beautiful/)

[Why You Don't Think You're Beautiful](http://intentionalinsights.org/why-you-dont-think-youre-beautiful)
2 Viliam 7y
This works only in comments. In articles, there is an "Insert/edit link" button in the toolbar. Click the button, paste the link into the "Link URL" field (and leave the remaining fields unmodified). Yes, LW uses two completely different systems for editing articles and editing comments.

Mary's room seems to be arguing that,

[experiencing(red)] =/= [experiencing( [understanding( [experiencing(red)] )] )]

(translation: the experience of seeing red is not the experience of understanding how seeing red works)

This is true, when we take those statements literally. But it's true in the same sense a Gödel encoding of a statement in PA is not literally that statement. It is just a representation, but the representation is exactly homomorphic to its referent. Mary's representation of reality is presumed complete ex hypothesi, therefore she will understan... (read more)
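To make the Gödel-encoding point concrete, here is a minimal sketch (my own illustration, not anything from the thread; the function name `goedel` is made up) of a prime-exponent encoding: the number represents the symbol sequence faithfully and reversibly, yet it is not literally that sequence.

```python
def goedel(symbols):
    """Encode a short symbol sequence as one integer via prime exponents."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # supports up to 10 symbols
    n = 1
    for p, s in zip(primes, symbols):
        n *= p ** (s + 1)  # +1 so that a zero symbol still leaves a trace
    return n

# Distinct sequences map to distinct numbers, and factoring recovers the
# sequence: the representation is faithful, but 2250 is not the list [0, 1, 2].
assert goedel([0, 1, 2]) == 2 * 3**2 * 5**3  # = 2250
```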

-1 TheAncientGeek 7y
Mary is presumed to have all objective knowledge and only objective knowledge. Your phrasing is ambiguous and therefore doesn't address the point. When you say Mary will know what happens when she sees red, do you mean she knows how red looks subjectively, or that she knows something objective, like what her behaviour will be? Further on you mention predicting her reactions. Is that supposed to relate to the objective/subjective distinction somehow?

So? The overall point is about physicalism, and to get to 'physicalism is false', all you need is the existence of subjective knowledge, not its usefulness in making predictions. So again I don't see the relevance.

Maybe. I don't see the problem. There is still an unproblematic sense in which Mary has all objective knowledge, even if it doesn't allow her to do certain things. If that was the point.
0 TheOtherDave 8y
There are three possibilities worth disambiguating here:
1) Mary predicts that she will do X given some assumed set S1 of knowledge, memories, experiences, etc., AND S1 includes Mary's knowledge of this prediction.
2) Mary predicts that she will do X given some assumed set S2 of knowledge, memories, experiences, etc., AND S2 does not include Mary's knowledge of this prediction.
3) Mary predicts that she will do X independent of her knowledge, memories, experiences, etc.

There are two things you could mean when you say 'reductionism is right'. That reality is reductionist in the "big thing = small thing + small thing" sense, or that reductionist explanations are better by fiat.

Reality is probably reductionist. I won't assign perfect certainty, but reductionist reality is simpler than magical reality.

As it currently stands, we don't have a complete theory of reality, so the only criteria by which we can judge theories are that they 1) are accurate, 2) are simple.

I am not arguing about the rightness or wrongness of reductio... (read more)

At least this tells me I didn't make a silly mistake in my post. Thank you for the feedback.

As for your objections,

All models are wrong, some models are useful.

exactly captures my conceit. Reductionism is correct in the sense that it is, in some sense, closer to reality than anti- or contra-reductionism. Likely in a sense similar to that in which machine code is closer to the reality of a physical computation than a .cpp file, though the analogy isn't exact, for reasons that should become clear.

I'm typing this on a laptop, which is an intricate amalgam of various kind... (read more)

0 Vamair0 8y
Isn't that what people mean when they say reductionism is right?

"I think you're wrong" is not a position.

The way you're saying this makes it seem like we're both in the same boat. I have no idea what position you're even holding.

I feel like I'm doing the same thing over and over and nothing different is happening, but I'll quote what I said in another place in this thread and hope I was a tiny bit clearer.

http://lesswrong.com/lw/nnc/the_ai_in_marys_room/day2

I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box

... (read more)
0 TheAncientGeek 8y
I just explained the position I am holding. I have explained elsewhere why the loophole doesn't work. Moving on to your argument: confusing to whom? Let's suppose that person is Frank Jackson. In the Knowledge Argument, Jackson credits Mary with all objective knowledge, and only objective knowledge, precisely because he is trying to establish the existence of subjective knowledge: what Mary doesn't know must be subjective, if there is something Mary doesn't know. So the eventual point is that there is more to knowledge than objective knowledge. So you don't show that Jackson is wrong by agreeing with him. But I don't know that you think Jackson is wrong.

Arguably it could simulate itself seeing red and replace itself with the simulation.

I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call something inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red) ] and [experiencing ( [understanding (red) ] ) ] are both brought under the header [knowing (red)], and this is really confusing.

The big box is all knowledge, including the vague 'knowledge of experience' that people talk about in this thread. The box-inside-the-box is verbal/declarative/metaphoric/propositional/philosophical knowledge, that is, anything that is fodder for communication in any way.

The metaphor is intended to highlight that people seem to conflate the small box with the big box, leading to confusion about the situation. Inside the metaphor, perhaps this would be people saying "well maybe there are objects inside the box which aren't inside the box at all". W... (read more)
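A toy rendering of the box metaphor in code (my own illustration, with made-up set names, not anything from the thread): the small box of communicable knowledge sits inside the big box of all knowledge, and conflating the two is the error described above.

```python
# Big box: all knowledge, including experiential knowledge.
big_box = {"experiencing(red)", "experiencing(understanding(red))"}
# Small box: the verbal/propositional subset that can be communicated.
small_box = {"experiencing(understanding(red))"}

assert small_box <= big_box                  # the small box is inside the big one
assert "experiencing(red)" not in small_box  # experiential knowledge isn't propositional
```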

I have no idea what your position even is and you are making no effort to elucidate it. I had hoped this line

I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.

was enough to clue you in to the point of my post.

-1 TheAncientGeek 8y
I'm disagreeing that you have a valid refutation of the KA. However, I don't know if you even think you have one, since you haven't responded to my hints that you should clarify.

I'd highly recommend this sequence to anyone reading this: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/

The thrust of the argument, applied to this situation, is simply that 'knowledge' is used to mean two completely different things here. On one hand, we have knowledge as verbal facts and metaphoric understanding. On the other, we have averbal knowledge, that is, the superset containing both verbal and non-verbal knowledge.

To put it as plainly as possible: imagine you have a box. Inside this box, there is another, smaller... (read more)

0 TheAncientGeek 8y
I'd highly recommend reading the [original paper](http://home.sandiego.edu/~baber/analytic/Jackson.pdf). I am not following the box analogy. What kinds of knowledge do the boxes represent?

I think the argument is asserting that Mary post-brain-surgery is identical to Mary post-seeing-red. There is no difference; the two Marys would both attest to having access to some ineffable quality of red-ness.

To put it bluntly, both Marys say the same things, think the same things, and generally are virtually indistinguishable. I don't understand what disagreement is occurring here, hopefully I've given someone enough ammunition to explain.

-1 TheAncientGeek 8y
I don't understand what the point of that point is. Do you think you are arguing against the intended conclusion of the Knowledge Argument in some way? If so, you are not... the loophole you have found is quite irrelevant.

Somewhere I got the impression that ... Sarah Perry of Ribbonfarm were LWers at some point.

She was/is. Her (now dead) blog, The View From Hell, is on the lesswrong wiki list of blogs. She has another blog, at https://theviewfromhellyes.wordpress.com which updates, albeit at a glacial pace.

I'm sorry that I overestimated my achievements. Thank you for being civil.

What do you expect to happen if you feed your code a problem that has no Turing-computable solution?

I'm actually quite interested in this. For something like the busy beaver function, it just runs forever, with the output starting out fuzzy and getting progressively less fuzzy, but never becoming certain.

Although I wonder about something like super-tasks somehow being described for my model. You can definitely get input from arbitrarily far in the future, but you can do even crazier things... (read more)
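The "fuzzy answer that gets progressively less fuzzy" behaviour has a classical analogue that may make it concrete: a monotone lower bound on a busy beaver value, computed by brute-force simulation with growing step budgets. A minimal, self-contained sketch (my own illustration with hypothetical helper names like `run` and `all_machines`, not OP's code):

```python
from itertools import product

SYMBOLS = (0, 1)
STATES = ("A", "B")
HALT = "H"

def run(machine, max_steps):
    """Simulate a 2-state, 2-symbol Turing machine on a blank tape.
    Return the number of steps taken if it halts within max_steps, else None."""
    tape, pos, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == HALT:
            return step
    return None

def all_machines():
    """Enumerate every 2-state, 2-symbol transition table."""
    keys = [(s, sym) for s in STATES for sym in SYMBOLS]
    actions = list(product(SYMBOLS, (-1, 1), STATES + (HALT,)))
    for choice in product(actions, repeat=len(keys)):
        yield dict(zip(keys, choice))

best = 0
for budget in (10, 100):  # ever-larger step budgets
    for m in all_machines():
        steps = run(m, budget)
        if steps is not None and steps > best:
            best = steps
    # The bound only improves with budget; no finite budget proves it final.
    print(f"budget={budget}: BB(2) >= {best} steps")
```

No finite budget ever certifies that the bound is exact, which mirrors the "never becoming certain" caveat above.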

For some strange reason, your post wasn't picked up by my RSS feed and the little mail icon wasn't orange. Sorry to keep you waiting for a reply for so long.

The Halting proof is for Turing machines. My model isn't a Turing machine; it's supposed to be more powerful.

You'll have to mathematically prove that it halts for all possible problems.

Not to sound condescending, but this is why I'm posting it on a random internet forum and not sending it to a math professor or something.

I don't think this is revolutionary, and I think there is very good possibili... (read more)
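For context on the Halting problem proof being discussed here, a minimal sketch of the standard diagonal argument; the `halts` oracle below is hypothetical (nothing from OP's code), and deliberately cannot be implemented on ordinary hardware:

```python
# Sketch of the classic halting-problem diagonalization. `halts` is a
# hypothetical oracle, not a real function: no Turing-equivalent program
# can implement it.

def halts(program_source: str, arg: str) -> bool:
    """Hypothetical oracle: True iff the program halts on the given input."""
    raise NotImplementedError("cannot exist on Turing-equivalent hardware")

def diagonal(source: str) -> None:
    """Do the opposite of whatever the oracle predicts about (source, source)."""
    if halts(source, source):
        while True:  # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately

# Feeding diagonal its own source contradicts any answer `halts` could give,
# which is why a genuinely *more powerful* model would be needed to dodge it.
```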

It's probably stupid to reply to a comment from more than three years ago, but antisocial personality disorder does not imply violence. There are examples of psychopaths who were raised in good homes and grew up to become successful assholes.

I wrote a hypercomputer in 60-ish lines of Python. It's (technically) more powerful than every supercomputer in the world.

Edit: actually, I spoke too soon. I have written code which outlines a general scheme that can be modified to construct schemes in which hypercomputers could possibly be constructed (including itself). I haven't proven that my scheme allows for hypercomputation, but a scheme similar to it could (probably) allow it, including itself.

Edit: I was downvoted for this, which I suppose was justified.

What my code does is simulate a modified version of CGoL (Joh... (read more)
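OP's modified CGoL is truncated above, but for reference, here is a minimal sketch of one generation of *standard* Conway's Game of Life (the modification OP describes is not shown, and the function name `step` is my own):

```python
from collections import Counter

def step(live):
    """One generation of standard CGoL over a sparse set of live cells."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours, or 2 and was alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# Example: a "blinker" oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```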

2 gjm 8y
Your scheme may well be more powerful than a Turing machine (i.e., if there were something in the world that behaves according to your model then it could do computations impossible for a mere Turing machine), but much of what you write seems to indicate that you think you have implemented your scheme. In Python. On an actual computer in our universe. Obviously that is impossible (unless Python running on an actual computer in our universe can do things beyond the capabilities of Turing machines, which it can't). Could you clarify explicitly whether you think what you have implemented is "more powerful than every supercomputer in the world" in any useful sense? What do you expect to happen if you feed your code a problem that has no Turing-computable solution? (What I expect to happen: either it turns out that you have a bug and your code emits a wrong answer, or your code runs forever without producing the required output.)
1 PhilGoetz 8y
Could some other people who have read Causal Universes comment on whether EY implies in it that hypercomputation is possible? What is hypercomputation?
1 taryneast 8y
I'm willing to suspend judgement pending actual results. Demonstrate it does what you claim and I'll be very interested. Note you probably already know this, but in case you don't: AFAIK the Halting problem has a mathematical proof... you will require the same to prove that your system solves it, i.e. just showing that it halts on many programs won't be enough (Turing machines do this too). You'll have to mathematically prove that it halts for all possible problems.
I'm willing to suspend judgement pending actual results. Demonstrate it does what you claim and I'll be very interested. Note you probably already know this, but in case you don't: AFAIK the Halting problem has a mathematical proof... you will require the same to prove that your system solves it. ie just showing that it halts on many programs won't be enough (Turing machines do this too). You'll have to mathematically prove that it halts for all possible problems.