(This is a subject that seems incredibly important to me, but it has received no discussion on LW that I can find with a brief search. Please do link to relevant posts if I missed them.)
Edit: This is all assuming that the first powerful AIs developed aren't exponentially self-improving; if there's no significant period of time during which powerful AIs exist but the ownership relations between them and their creators still matter, these questions are obviously unimportant.
What are some proposed ownership arrangements between artificial intelligences and their creators? Suppose a group of people creates a powerful artificial intelligence that appears to be conscious in most or every way--who owns it? Should the AI legally have self-ownership, with all the responsibility for its actions, and ownership of the products of its labor, that this implies? Or should strong AI be protected by IP, the way non-strong AI code already can be, and treated as a tool rather than a conscious agent? It seems wise to urge people not to create AIs that want total free agency and generally act like humans, but that's hardly a guarantee that nobody will, and then you face the ethical problem of not being able to simply kill them once they're created (if they "want" to exist and appear genuinely conscious). Are there any proposed tests to determine whether a synthetic agent should be able to own itself or should instead become the property of its creators?
I imagine there aren't yet good answers to all these questions, but surely there's some discussion of the issue somewhere, whether in rationalist/futurist circles or just in sci-fi. Also, please correct me on any poor word choice that needlessly limits the topic; it's broad, and I'm not yet completely familiar with the lingo of this subject.