Anyone have an AI (narrow or AGI) that can learn and respond to quizzes?
There is a new Kaggle competition out that you may be interested in.
The question is whether the AI can test these policies. Even if the AI can find pernicious policies that rank high on its objective function, it will never implement them unless it can ascertain that fact.
I don't know whether the AI should take responsibility for testing its own policies, especially in the initial stages. We should have a range of human-designed tests that the formative AI runs on each iteration, so that we can see how it is progressing.
This is the fundamental problem being researched: the top layer of abstraction would be the difficult-to-define one, "Be Friendly".
Instead of friendly AI, maybe we should look at "don't be an asshole" AI (DBAAAI); this may be simpler to test and monitor.
These are all task-specific problem-definition issues that occurred while fine-tuning algorithms (but yes, they do show how things could get out of hand).
Humans already do this very well; for example, tax loopholes that are exploited but are not in the "spirit of the law".
The ideal (but incredibly difficult) solution would be for AIs to have multiple layers of abstraction, where each decision gets passed up and is then evaluated as "is this really what they wanted?" or "am I just gaming the system?".
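The layered-evaluation idea above could be caricatured in code. This is only a toy sketch, and every name and check in it is invented for illustration: each proposed action bubbles up through increasingly abstract checks, and any layer can veto it before it is executed.

```python
# Toy sketch (all names and checks hypothetical): a decision is passed
# up through abstraction layers, each of which can veto it.

def not_gaming_the_system(action):
    # Hypothetical check: reject actions flagged as loophole exploits.
    return not action.get("exploits_loophole", False)

def matches_stated_intent(action):
    # Hypothetical check: reject actions that diverge from the stated goal.
    return action.get("serves_intent", True)

LAYERS = [not_gaming_the_system, matches_stated_intent]

def evaluate(action):
    """Pass the action up through each abstraction layer in turn."""
    for check in LAYERS:
        if not check(action):
            return False  # vetoed at this layer
    return True  # approved by every layer

print(evaluate({"exploits_loophole": True}))  # → False (vetoed)
print(evaluate({"serves_intent": True}))      # → True (approved)
```

The hard part, of course, is that real versions of these checks are exactly the "difficult to define" layers the thread is discussing; the code only shows the veto structure, not how to implement the checks.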
Actually, I think it is me not seeing them. Some do have the date in the top header, like http://arxiv.org/abs/1401.5577,
but most don't, nor in the footer or at the end of the paper.
I realise now I was looking in the wrong spot: papers like https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf have the date in the bottom left of the first page. Checking other PDFs shows the same thing, so I assume that is one of the standards?
Why is the date or year of publication usually missing from PDF versions of research publications?
Is this a convention, perhaps specific to certain fields? I find it frustrating at times and am curious as to the reason behind it.
In terms of books, there are a couple of recommended lists:
I don't have any recommendations for books, but the course https://www.coursera.org/course/optimization is very good. Even if it has finished, you can join and watch or download the lectures.
Would minimising the number of CPU cycles work as a lazy incentive?
This assumes that fewer CPU cycles will produce an outcome that is satisficed rather than optimised, though in our current state of understanding any optimisation routine takes a lot more computing effort than a "rough enough" solution.
Perhaps getting the AGIs to go green will kill two birds with one stone.
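The "lazy incentive" could be pictured as taxing computation in the objective itself, so a rough-enough policy beats a marginally better but far more expensive one. A minimal sketch, with all numbers and names invented for illustration:

```python
# Toy sketch (hypothetical numbers): score candidate policies by their
# raw objective value minus a penalty on the CPU cycles they would burn.
LAMBDA = 0.001  # how strongly computation is taxed

candidates = [
    {"name": "rough-enough", "objective": 90.0, "cpu_cycles": 1_000},
    {"name": "heavily-optimised", "objective": 100.0, "cpu_cycles": 50_000},
]

def penalised_score(policy):
    # 90 - 0.001 * 1_000 = 89.0 vs 100 - 0.001 * 50_000 = 50.0
    return policy["objective"] - LAMBDA * policy["cpu_cycles"]

best = max(candidates, key=penalised_score)
print(best["name"])  # → rough-enough
```

Whether a real agent would respond to such a penalty by satisficing, rather than by finding some cheap-but-pernicious policy, is exactly the open question raised earlier in the thread.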
I love the taste of bitter flavours (beer, coffee, olives, etc.) but not strong alcohol.
I also love spicy foods and curries. For those with a low tolerance for bitter flavours, do you avoid these foods as well?
Interesting thought experiment. Do we know that an AI would enter a different mental state, though?
I am finding it difficult to imagine the difference between software "knowing all about red" and "seeing red".