All of Oskar Press Mathiasen's Comments + Replies

It seems to me that you believe there is an amount of work in philosophy that would allow us to be certain that, if we can create a superintelligent AI, we can guarantee it is safe, but that this amount of work is impossible, or at least close to impossible, to complete before we develop superintelligent AI. But it doesn't seem obvious to me (and presumably to others) that this is true. It might be a very long time before we create such an AI, or we might succeed in motivating enough people to work on the problem that we achieve this level of philosophical understanding before it is possible to create superintelligent AI.

You can get a better sense of where I'm coming from by reading Some Thoughts on Metaphilosophy. Let me know if you've already read it and still have questions or disagreements.