In reading your posts over the past couple of days, I've had two recurring thoughts:

  1. In Bayesian terms, how much have your gross past failures affected your confidence in your current thinking? On a side note - it's also interesting that someone as open to admitting failures as you are still writes in the style of someone who has never once admitted a failure. I understand your desire to write with strength - but I'm not sure it's always the most effective way to influence others.

  2. It also seems that your definition of "intelligence" is narrowly tailored - yet your project of Friendly AI would appear to require a deep knowledge of multiple types of human intelligence. Perhaps I'm reading you wrong - but if your view of human intelligence is in fact this narrow, won't that be evident in the robots you one day create?

Just some thoughts.

Thanks again for taking the time to post.

Take care,