Ben Byford
Ben Byford has not written any posts yet.

Hi! Really interesting.
I've recently been thinking (along with a close colleague) a lot about empathy, the amygdala, and our models of self and others as a safety mechanism for narrow AIs (or for general AIs on specific narrow domains, e.g. validating behaviour on a set of tasks, not all tasks).
I found this article by listening to The Cognitive Revolution podcast. Stop me if I'm wrong, but I believe your toy example setup was described differently here than it was on the show. Here you mention a reward for the blue agent if it is near the green goal AND/OR the adversary is near the black; on the show you mentioned a negative reward to...
I think you're talking about ethics here… and if so, why not call it that? "Human values" (vs. ethics) is an unnecessary rejection of the term that I don't really believe is moving things forward in working on AI, safety… and, drum roll… ethics.
What you've pointed out in this article is a central concern of meta-ethics. If you're alluding to the fact that this stuff is hard, then… great. If it's useful to understanding how this fits in with our technologies, then please specify how, so we can mount a proper critique.