Comments

esweet · 3y · 10

Thanks for this response. I heard a similar discussion recently, with someone asking whether an algorithm's reward function fired because it got the answer correct or because it knew that was what the programmers wanted. It's hard to tell the two apart, since the decision-making pathways are often opaque, especially in more complex machine learning systems.
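(A toy way to see that ambiguity, with made-up rewards rather than any real system: two reward signals that agree on every training example are indistinguishable from behavior alone, even though they encode different goals.)

```python
# Toy illustration (made-up rewards, not any real system): two reward
# signals that agree on every training example are indistinguishable
# from behavior alone, even though they encode different goals.

def reward_correct(answer, truth):
    """Reward for actually getting the answer right."""
    return 1.0 if answer == truth else 0.0

def reward_approval(answer, expected):
    """Reward for matching what the programmers expected."""
    return 1.0 if answer == expected else 0.0

# In training, the programmers' expected answers match the truth, so the
# two rewards fire identically no matter what the model outputs.
train = [("2+2", "4", "4"), ("3*3", "9", "9")]  # (question, truth, expected)
for _question, truth, expected in train:
    for answer in ("4", "9", "banana"):
        assert reward_correct(answer, truth) == reward_approval(answer, expected)

# Off-distribution they diverge: if the programmers expect a wrong answer,
# an approval-seeking policy is rewarded for being wrong.
truth, expected = "7", "9"             # programmers mistaken here
print(reward_correct("9", truth))      # 0.0, incorrect
print(reward_approval("9", expected))  # 1.0, but approved
```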

The inner optimizer thing is really interesting; I hadn't heard it named like that before. Is it in an AI's interest (a big assumption that it has interests at all, I know) to become so human-specific that it loses its ability to generalize? Variability in the population would decrease, the probability mechanisms of machine learning would approach certainty, and the AI would be rendered basically ineffective.
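(A toy illustration of that collapse-to-certainty worry, with hypothetical numbers: as a model's output distribution sharpens, its entropy, a crude proxy for variability, falls toward zero.)

```python
# Toy illustration (hypothetical logits, NumPy only): as an output
# distribution sharpens toward certainty, its entropy, a crude proxy
# for "variability", falls toward zero.
import numpy as np

def softmax(logits, temperature):
    z = logits / temperature
    z = z - z.max()               # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

logits = np.array([2.0, 1.0, 0.5, 0.1])
for t in (2.0, 1.0, 0.5, 0.1):
    p = softmax(logits, t)
    print(f"temperature={t:3.1f}  entropy={entropy(p):.3f}  p={np.round(p, 3)}")
# As temperature drops, the probability mass concentrates on one option:
# the mechanism approaches certainty and the population loses variability.
```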

esweet · 3y · 10

Does it have to be deterministic, though? Can a program be open-ended, in the sense that the process is optimized while the outcome is left undetermined? (Perhaps navigating the world like that is "intelligence" without the "artificial.") I do think AI is capable of learning on its own, or at least of programming other algorithms without human input. And one of the issues there is that once it learns language, as you point out, it will be able to do things we can't really fathom right now.

Thanks for the sequence rec. I'll check it out!

esweet · 3y · 10

I think you're understanding correctly, and I see your point. So the question becomes: do we intervene before it becomes cyclical, so that the focus is on process and not outcome? That's where the means and the ends remain separate. In effect, can a non-deterministic AI model be written?
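(For what it's worth, here's a minimal sketch, with toy values standing in for a learned policy, of a program whose process is fully specified while each run's outcome is undetermined until sampled.)

```python
# Minimal sketch (toy action set and probabilities standing in for a
# learned policy): the process, a fixed distribution plus a sampling
# rule, is fully specified, yet each run's outcome is undetermined.
import numpy as np

rng = np.random.default_rng()          # unseeded, so runs differ
actions = ["explore", "exploit", "ask_human"]
policy = np.array([0.5, 0.3, 0.2])     # stands in for trained weights

for _ in range(5):
    print(rng.choice(actions, p=policy))
# The code that produces the outputs is deterministic, but the outputs
# themselves are sampled: optimizing the process does not require
# determining the outcome.
```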