Um, it is, isn't it?
I agree. My reason for posting the link here was as a reality check: LW seems to be full of people firmly convinced that brain uploading is the only viable path to preserving consciousness, as if the implementation "details" were an almost-solved problem.
Finally, someone with a clue about biology tells it like it is about brain uploading
http://mathbabe.org/2015/10/20/guest-post-dirty-rant-about-the-human-brain-project/
In reading this, I suggest being on guard against one's own impulse to find excuses to dismiss the arguments presented, because they call into question some beliefs that seem to be deeply held by many in this community.
It depends. Writing a paper is not a real-time activity; answering a free-response question can be. Proving a complex theorem is not a real-time activity; solving a basic math problem can be. It's a matter of calibrating the question difficulty so that it can be answered within the (soft) time limits of an interview. Part of that calibration is letting the applicant "choose their weapon". Another part of it is letting them use the internet to look up anything they need to.
Our lead dev has passed this test, as has my summer grad student. There are two applicants being called back for second interviews (but the position is still open and it is not too late) who passed during their first interviews. Just to make sure, I first gave it to my 14-year-old son, and he nailed it in under half an hour.
Correct, this is a staff programmer posting. Not faculty or post-doc (though when/if we do open a post-doc position, we'll be doing coding tests for that also, due to recent experiences).
It's not strictly an AI problem: any sufficiently rapid optimization process bears the risk of irretrievably converging on an optimum nobody likes before anybody can intervene with an updated optimization target.
Individual and property rights are not rigorously specified enough to be a sufficient safeguard against bad outcomes, even in an economy moving at human speeds.
In other words, the science of getting what we ask for advances faster than the science of figuring out what to ask for.
(Note that transforming a sufficiently well specified statistical model into a lossless data compressor is a solved problem, and the solution is called arithmetic encoding - I can give you my implementation, or you can find one on the web.)
The unsolved problems are the ones hiding behind the token "sufficiently well specified statistical model".
That said, thanks for the pointer to arithmetic encoding, that may be useful in the future.
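For readers unfamiliar with the technique mentioned above, here is a minimal sketch of arithmetic encoding driven by a fixed, fully specified model. The symbol probabilities are illustrative assumptions; a production coder would use scaled integer arithmetic and emit bits incrementally, whereas this sketch uses exact fractions purely to make the interval-narrowing logic transparent:

```python
from fractions import Fraction

# Assumed static model: symbol -> probability (must sum to 1).
MODEL = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}

def cumulative(model):
    """Map each symbol to its cumulative-probability subinterval [low, high)."""
    intervals, low = {}, Fraction(0)
    for sym, p in model.items():
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def encode(message, model):
    """Narrow [0, 1) once per symbol; return a point inside the final interval."""
    low, high = Fraction(0), Fraction(1)
    ivals = cumulative(model)
    for sym in message:
        s_low, s_high = ivals[sym]
        width = high - low
        low, high = low + width * s_low, low + width * s_high
    return (low + high) / 2  # any rational in [low, high) identifies the message

def decode(point, n, model):
    """Invert encode: find which symbol's subinterval contains the point, n times."""
    low, high = Fraction(0), Fraction(1)
    ivals = cumulative(model)
    out = []
    for _ in range(n):
        width = high - low
        for sym, (s_low, s_high) in ivals.items():
            lo, hi = low + width * s_low, low + width * s_high
            if lo <= point < hi:
                out.append(sym)
                low, high = lo, hi
                break
    return ''.join(out)

msg = 'abac'
code = encode(msg, MODEL)
assert decode(code, len(msg), MODEL) == msg
```

The compression comes from the fact that probable symbols shrink the interval less, so the final interval is wider and its midpoint needs fewer bits to specify. All of the hard, unsolved work is exactly where the parent comment puts it: in supplying the model.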
The point isn't understanding Bayes' theorem. The point is methods that use Bayes' theorem. My own statistics prof said that a lot of medical people don't use Bayes because it usually leads to more complicated math.
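For concreteness, the kind of Bayes-theorem calculation at issue in medicine is the diagnostic-test posterior. The prevalence, sensitivity, and specificity below are illustrative assumptions, not data from any real test:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    false_pos_rate = 1 - specificity
    # Total probability of a positive result, over diseased and healthy cases.
    p_positive = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Assumed numbers: 1% prevalence, 90% sensitivity, 95% specificity.
p = posterior(0.01, 0.90, 0.95)  # ≈ 0.154: most positives are false positives
```

The arithmetic itself is trivial; the complications the prof alludes to arrive when the prior is uncertain, tests are correlated, or the model must be fitted and validated against messy clinical data.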
To me, the biggest problem with Bayes' theorem, or any other fundamental statistical concept, frequentist or not, is adapting it to specific, complex, real-life problems and finding ways to test its validity under real-world constraints. This tends to require a thorough understanding of both statistics and the problem domain.
That's not the skill that's taught in a statistics degree.
Not explicitly, no. My only evidence is anecdotal. The statisticians and programmers I've talked to appear, on the whole, to be more rigorous in their thinking than biologists, or at least better able to rigorously articulate their ideas (the Achilles' heel of statisticians and programmers is that they systematically underestimate the complexity of biological systems, but that's a different topic). I found that my own thinking became more organized and thorough over the course of my statistical training.