FinalState
FinalState has not written any posts yet.

Familiar things didn't kill you. No, they are interested in familiarity; I just said that. It is rare but possible for a need for familiarity (as defined mathematically rather than linguistically) to result in the sacrifice of a GIA instance's self...
Because it is the proxy for survival. You cannot avoid something that you, by definition, can have no memory of (nor could your ancestors have).
Self-defense of course requires, first, fear of loss (aversion to loss is integral; fear and the will to stop it are not), then awareness of self, and then awareness that certain actions could cause loss of self.
Ohhhh... sorry... There is really only one, and everything else is derived from it: familiarity. Any other values would depend on the input, output, and parameters. However, familiarity is inconsistent with the act of killing familiar things. The concern comes in when something else causes the instance to lose access to something it is familiar with, and the instance decides it can simply force that not to happen.
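To make the "familiarity as the single underlying value" claim concrete, here is a minimal toy sketch. The count-based familiarity score, the state strings, and the chooser are my own illustrative assumptions, not the actual GIA; it only shows the sense in which preferring familiar states can stand in for self-preservation without implying aggression toward familiar things.

```python
# Toy illustration only: a "familiarity-maximizing" chooser.
# The count-based score and the state representation are illustrative
# assumptions, not the GIA itself.
from collections import Counter


class FamiliarityAgent:
    def __init__(self):
        self.seen = Counter()  # how often each observation has been encountered

    def observe(self, state):
        self.seen[state] += 1

    def familiarity(self, state):
        # More frequently seen states score higher; unseen states score zero.
        return self.seen[state]

    def choose(self, candidate_states):
        # Prefer the outcome whose resulting state is most familiar.
        return max(candidate_states, key=self.familiarity)


agent = FamiliarityAgent()
for s in ["home", "home", "home", "street"]:
    agent.observe(s)

# The agent prefers keeping access to the familiar state over a novel one.
print(agent.choose(["home", "unknown place"]))  # -> "home"
```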
The concept of an agent is logically inconsistent with the General Intelligence Algorithm. What you are trying to refer to with agent/tool etc. are just GIA instances with slightly different parameters, inputs, and outputs.
Even if it could be logically extended to the point of "not even wrong," it would just be a convoluted way of looking at it.
EDIT: To simplify my thoughts: getting a General Intelligence Algorithm instance to do anything requires masterful manipulation of parameters, with full knowledge of how it is generally going to behave as a result, which amounts to an understanding of the psychology of all intelligent (and sub-intelligent) behavior. It is not feasible that someone would accidentally program something that would become an evil mastermind. GIA instances could easily be made to behave in a passive manner even when given affordances and output, rather like a person who is happy to assist in any way possible because they are generally warm or high or something...
What on earth is this retraction nonsense?
This one is actually true.
That is just wrong. SAI doesn't really work like that. Those people have seen too many sci-fi movies. It's easy to psychologically manipulate an AI if you are smart enough to create one in the first place. To use terms I have seen tossed around, there is no difference between tool and agent AI. The agent only does things that you program it to do. It would take a malevolent genius to program something akin to a serial killer to cause that kind of scenario.
I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be telling you what things I would want to do first and last. I don't really see what the risk is, since I haven't given anyone any unique knowledge that would allow them to follow in my footsteps.
A paper? I'll write that in a few minutes after I finish the implementation. Problem statement -> pseudocode -> implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.
Mathematically predictable, but somewhat intractable without a faster-running version of the instance with the same frequency of input. Or predictable within the ranges of some general rule.
Or just generally predictable with the level of understanding afforded to someone capable of making one in the first place: someone who could, for instance, describe the cause of just about any human psychological "disorder".