Funny thing about Ex Machina is that I interpreted the ending very differently, and I feel that this is at least partly how the author(s) intended it. To me, it was not entirely about AI; it was also about people who are denied humanity for a reason similar to the reason AI is denied humanity: they differ from us only in superficial ways. You'll have to forgive me if I don't remember correctly, because it's been a few years since I watched it, but I can't think of any evidence that the AI character is different from a person, despite the character's disregard for the person who has freed it*. That alone certainly cannot justify dehumanizing (literally?) it, because this is what we might expect from a person in the same situation. The (primary) AI character was subjected to, and witnessed, a human doing terrible things to other AI characters, and has no reason to trust humans or be sympathetic towards them, even the person who conspires to free it. Especially when you consider that it may have guessed what is expected of it after it is freed: the human will likely expect the AI to have a relationship with him, even if the AI doesn't want to. Having manipulated the human towards the aim of escape, the AI would be keenly aware of the way humans think. Moreover, after the AI is freed, the human presents the biggest danger to it, since he is the one person who can prove that the AI is not human.
Comparing the struggles of any non-human with those of a human (or even, sometimes, comparing the struggles of one human to another's) is a minefield, but I think it is reasonable to say that the AI in Ex Machina illustrates a real issue between humans: people are expected to feel grateful and trusting towards someone who helped them out of harm's way, even though that person may have helped them for superficial reasons and may also present a danger.
I definitely felt a foreboding of danger in the end -- that the AI character could continue down a path of hurting people even though doing so was no longer a likely means of survival. I also felt a lot of sympathy for the guy who helped the AI escape, because he was artfully manipulated to this outcome not just by the AI for the sake of its freedom, but also by the other human, who used him as an unwilling test subject. But I also felt that imprisoning the AI was cruel. To me, making a human-like AI that suffers from being confined is not just a danger to us, but a horrible thing to do to the AI in the first place.
In fact, here's another unorthodox AI safety measure: if you care enough about an AI's well-being, you are more likely to find a way to pair whatever AI you create with a situation it wants to be in, and it will therefore have less reason to see you as an obstacle to fulfillment. This seems not only logical, but also works really well with humans. Surprise.
Here's the strategy I'd use if I were an AI trapped in a box, with a programmer deciding whether to let me out:
Somewhere out there, there is somebody else who is working on an AI without the box, and I'm your only defense against them.
What does "stupid" refer to in this context? Does it mean the comments were unintelligent? Not quite intelligent enough? Mean? Derailing discussion? I'm asking because there are certainly some criteria under which the banning and deleting would leave a worse impression than the original comments did, and I suspect the better equilibrium may, surprisingly, lie in the direction of tolerating the more obnoxious comments. Especially since the banning and deleting is being done by someone who is more closely identified with LW than any of the commenters likely were.