KyriakosCH

My studies are in Philosophy (I am a graduate of the University of Essex), and I work as a literary translator (English to Greek). Published translations of mine include works by E.A. Poe, R.L. Stevenson and H.P. Lovecraft. I sometimes post articles at https://www.patreon.com/Kyriakos


Comments

Machine language is a known lower level; neurons are not known to be one. Perhaps in the future more microscopic building blocks will be examined; maybe there is no end to the division itself.

In a computer it would indeed make no sense for a programmer to examine anything below machine language, since that is what you compile to or otherwise act upon. But there is no known isomorphism between this arrangement and the mind.

 

If you'd like a parallel to the above from the history of philosophy, you might be interested in comparing dialectic reasoning with Aristotelian logic. It is not by accident that Aristotle explicitly argued that for any system to include the means to prove something (proof is absent from dialectics past a certain level, precisely because no lower level is built into the system) it has to rest on at least one axiom: that nothing can simultaneously include and not include a quality (in formal notation ¬(A∧¬A), which in classical logic is equivalent to the more familiar A∨¬A). In dialectics (Parmenides, Zeno etc.), this is explicitly argued against, the possibility of infinite division of matter being one of their premises.
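For reference, the principle in question written out formally (a minimal LaTeX rendering of the standard notation; this rendering is my addition, not part of the original comment):

\neg (A \land \neg A)  % non-contradiction: nothing both has and lacks a given quality
A \lor \neg A          % excluded middle; the two are interderivable in classical logic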

How memories are stored certainly matters; it is too much of an assumption that the levels are sealed off from one another. Such an assumption may be implicitly negated within a model, but obviously this does not mean anything has actually changed; material systems have this issue in a way that mathematical ones do not.

Another pertinent property of material systems is that at times there is an observer with a special status toward them. In the case of the mind, you have the consciousness of the person, and while it can certainly be juxtaposed with other instances of consciousness, that relation is different from the one which would allow carefree use of the term "anecdote". Note "special", which in no way means infallible or anything of that class, but it does connote a qualitative difference: apart from the other means of observation - those available to everyone else, like the tool you mentioned - there is also the sense through consciousness itself, which for reasons of brevity I referred to here as intuition.

Of course consciousness itself is problematic as an observer. But it is used - in a different capacity - in all other input procedures, since you need an observer to take those in as well. If one treats consciousness as a block which acts with built-in biases, it is too much to believe those biases are cancelled simply because it is used as an observer of another type of input. It is due to this particular loop that posing a question about intuition is not without merit.

Going by practice, it does seem likely that intertwined memories (nominally separate, as over-categories) will be far easier to recall at will than any loosely related (by stream of consciousness) collection of declarative memories. However, it is not known whether stored memories (of either type) actually are stored individually or not; there are many competing models for how a memory is stored and recalled, down to the lowest - or "lowest", for there may be no lowest in reality - level of neurons.

That said, I was only asking about other people's intuitive sense of what works better. It isn't possible to answer using a definitive model, due to the number of unknowns. 

I mean more cost-effective, so to speak. My sense is that while procedural memory is easier to sustain (for years, or even for the entirety of your life), it really is more suitable for focused projects than for random/general knowledge accumulation. Then again, it is highly likely that procedural memories help with better organization overall, acting as a more elegant system. In that sense, declarative memories are more like axioms, with procedural memories being either rules or just the application of rules, so that far fewer axioms are needed.

I agree, although my question was not whether 3d is real/independent of the observer; I was wondering why, for us, it had to be specifically 3d instead of something else.

For all we know, maybe "3d" isn't 3d either, in that any way of viewing things would end up seeming to be 3d. In a set system, with known axioms, examined from the outside, 3d simply follows 2d. But if as an observer you are 3d-based, it doesn't have to follow that this is a progression from 2d at all; it might just be a different system.

About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic

You are confusing "reason to choose" (which is obviously not there; the optimal strategy is trivial to find) with "happens to be chosen". I.e. you are looking at what is said from an angle which isn't crucial to the point.

Everyone is aware that scissors is not to be chosen at any time if the player has correctly evaluated the dynamic. Try asking a non-sentence in a formal logic system to stop existing because it has evaluated the dynamic, and you'll see why your point is not sensible.

About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic

Thank you, I will have a look!

My own interest in recollecting this variation (an actual thing, from my childhood years) is that, intuitively, it seems to me that this limited setting may be enough for the inherent dynamic of 'a new player will go for the less-than-optimal strategy', and the periodic ripple effect it creates, to mimic (or be made to mimic) some elements of a formal logic system, namely the interactions of non-sentences with sentences.

So I posted this as a possible trigger for more reflection, not for establishing the trivial (optimal strategy in this corrupted variation of the game) ^_^

About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic

Edit (I rewrote this reply, because it was too vague in the original :) )

 

Quite correct as regards every player actually having identified this (indeed, if all players are aware of the new balance, they will pick up that glue is a better type of scissors, so scissors should not be picked). But imagine a player comes in who hasn't picked up this fact, while (for different reasons) they have absorbed an aversion to choosing rock from the previous players. Then scissors still has a chance to win (against paper), and with rock largely out, the triplet scissors-paper-glue has glue as the permanent winner. This in turn (after a couple of games) is picked up and stabilizes the game as having three options for everyone (scissors no longer chosen), until a new player who is unaware joins.

Essentially the dynamic of the 4-choice game allows for periodic returns to a 3-choice game, which is what can be used to trigger ongoing corrections in other systems.
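Not essential to the point above, but a small simulation can illustrate the cycle. The payoff relations used here are my own assumptions, inferred from the description of glue as "a better type of scissors" (glue beating scissors and paper, losing only to rock); the childhood variation's exact rules may have differed.

# Minimal sketch of the assumed 4-choice dynamic; the BEATS table below is an
# inference from the comment, not the original game's confirmed rules.
import random

BEATS = {
    "rock":     {"scissors", "glue"},   # assumed: rock still beats both "scissors-like" picks
    "paper":    {"rock"},
    "scissors": {"paper"},
    "glue":     {"scissors", "paper"},  # assumed: glue strictly dominates scissors
}

def play(a, b):
    """Return 1 if a beats b, -1 if b beats a, 0 on a draw."""
    if b in BEATS[a]:
        return 1
    if a in BEATS[b]:
        return -1
    return 0

def informed_pick():
    # Informed players drop scissors (dominated by glue) and mix over the rest.
    return random.choice(["rock", "paper", "glue"])

def naive_pick():
    # The hypothetical newcomer: unaware that glue dominates scissors, but
    # already averse to rock after watching earlier rounds.
    return random.choice(["scissors", "paper", "glue"])

def win_rates(rounds=100_000):
    """Glue's win rate against an informed opponent vs. a rock-averse naive one."""
    vs_informed = sum(play("glue", informed_pick()) == 1 for _ in range(rounds)) / rounds
    vs_naive = sum(play("glue", naive_pick()) == 1 for _ in range(rounds)) / rounds
    return vs_informed, vs_naive

if __name__ == "__main__":
    informed, naive = win_rates()
    print(f"glue win rate vs informed player: {informed:.2f}")        # ~0.33
    print(f"glue win rate vs rock-averse naive player: {naive:.2f}")  # ~0.67, and it never loses

Under these assumed rules, glue wins about a third of the time against informed opponents, but roughly two thirds of the time (and never loses) once rock drops out of a newcomer's pool; that is the temporary "glue as permanent winner" phase described above, which resolves once the newcomer catches on and the game returns to the three-option equilibrium.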

Reply to Paul Christiano on Inaccessible Information

" Presumably the machine learning model has in some sense discovered Newtonian mechanics using the training data we fed it, since this is surely the most compact way to predict the position of the planets far into the future. "

To me, this seems an entirely unrealistic presumption (and the same holds for any of its parallels, not just the case of planetary positions). Even the claim that NM is "surely the most compact [...]" is questionable: we know from history that models able to predict the positions of the stars have existed since ancient times, and in this hypothetical situation where we somehow have knowledge of the positions of the planets (perhaps through developments in telescopic technology) there is no reason to assume that models analogous to those ancient ones could not apply; thus NM would not specifically need to be part of what the machine was calculating.


Furthermore, I have some issue with the author's sense that when the machine calculates something, it is somehow calculating it in a manner which inherently allows the calculation to be translated in many ways. While a human thinker inevitably thinks in ways which are open to translation and adaptation, this is true because as humans we do not think in a set way: any thinking pattern, or collection of such patterns, can - in theory - consist of a vast number of different neural connections and variations. Only as a finished mental product can it seem to have a very set meaning. For example, if we ask a child whether their food was nice, they may say "yes, it was", and we would take that statement as having a set meaning, but we would never actually be aware of the set neural coding of that reply, for the simple reason that there isn't just one.

For a machine, on the other hand, a calculation is inherently an output on a non-translatable, set basis - which is another way of saying that the machine does not think. This problem isn't likely to be solved just by coding a machine in such a way that it could have many different possible "connections" producing the same output, because with humans this happens naturally, and one can suspect that human thinking itself is in a way just a byproduct of something tied not to actual thinking but to the sense of existence - which is, again, another way of saying that a machine is not alive. Personally, I think AI, in the way it is currently imagined, is not possible. Perhaps some hybrid of machine and DNA may produce a type of AI, but that would again be due to the DNA forcing a sense of existence, and it would still take very impressive work to use that to advance AI itself; I think it could be used to study DNA itself, though, through the machine's interaction with it.
