This relates to:
"freedom to self-alter its own error function"
How? By changing the function alone or by changing the input to that function?
I will (and can) make the edits.
I call it a polemic analogy.
"seems, if not in conflict with"
I think you noticed that there is no contradiction, but I agree that I need to clarify.
Faced with a massive lack of information and the task of predicting the future, it is clear that making the best decision would be pure luck. Operating with that mindset might even be a hindrance.
" I must seek C*(A+B) at a lower cost."
I was trying to get into what to choose / look for in a finite set with competition.
A, B, C, ... are criteria terms that I estimate to be fulfilled to some degree; for simplicity, let them be binary logic terms. Every option I have has more properties than I even know about, and those I do know and find relevant, I either seek or avoid. Any term might contain many such properties.
Knowing what others are looking for and paying for, and that the world is very complicated, I find it more sensible to intentionally not use the same function to assess options. Instead I must design my net to "fish" in other areas of the choice property space. This applies to HR or any other investment.
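The idea can be sketched in a few lines. This is a toy model of my own making, not anything from the discussion above: options are sets of binary criteria terms, an assessor is a weighted sum over the terms it cares about, and by weighting different terms than the crowd does, I end up bidding on different options.

```python
def make_assessor(weights):
    """Score an option by a weighted sum of the binary criteria terms it fulfills."""
    def assess(option):
        return sum(weights[c] for c in option if c in weights)
    return assess

# Hypothetical options: each is the set of criteria terms it fulfills.
options = [{"A", "B"}, {"A", "C"}, {"B", "C", "D"}]

# The crowd's assessor pays for A and B; mine deliberately "fishes" in
# other areas of the property space (C, D), so we rarely compete for
# the same options and I pay less for what I actually seek.
crowd = make_assessor({"A": 2, "B": 2})
mine = make_assessor({"C": 2, "D": 1})

print(max(options, key=crowd))  # the contested, expensive option
print(max(options, key=mine))   # a cheaper option from my niche
```

With these weights, the crowd's best pick is {A, B} while mine is {B, C, D}, so the two assessors never collide on the same option.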
Thank you for commenting.
"is offputting enough"
That would be a sensibility of yours, not a rational argument.
"implication that young women are not competent, and the generalization overall, and the unstated implication that HR has anywhere near the power that you ascribe to it"
I made no such statements. "Many" is not the same as "all". I count employees of headhunting companies as HR workers, and these do have power when it comes to early screening, including the assessment of qualification. I have had plenty of such talks in which I could not even make the other person understand what I do. I also use the term HR in a broader sense, meaning the entire system, including but not restricted to those who work in the HR department.
I lost the formatting when I pasted the text. I managed to switch to the Markdown interpretation and bring the list back.
I came across this:
The New Dawn of AI: Federated Learning
" This [edge] update is then averaged with other user updates to improve the shared model."
I do not know how that is meant, but whenever I hear the word "average", my alarms sound.
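For concreteness, here is what I take the quoted averaging step to mean; this is a minimal sketch of my own, not the actual API of any federated learning framework. Each device computes a local weight update (a delta per weight), and the server folds the mean of those deltas into the shared model:

```python
# Minimal sketch (not a real framework's API) of the averaging step in
# federated learning: each device computes a local weight delta, and the
# server adds the mean of all deltas to the shared model.
def federated_average(global_weights, client_updates):
    """Return the global model after averaging per-client weight deltas into it."""
    n = len(client_updates)
    return [
        w + sum(update[i] for update in client_updates) / n
        for i, w in enumerate(global_weights)
    ]

global_w = [0.0, 0.0]
updates = [[0.25, -0.5], [0.75, 0.0]]  # hypothetical per-device deltas
print(federated_average(global_w, updates))  # [0.5, -0.25]
```

Note what the averaging does: a device whose update disagrees with the majority does not get its hypothesis tested separately, it simply gets diluted.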
Instead of a shared NN, each device should get multiple slightly different NNs/weight sets and report back which set was worst/least fit and which was best/fittest. Each set/model is a hypothesis, and the test in the world is an evolutionary/democratic falsification. Those mutants that fail to satisfy the most customers are dropped.
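The selection scheme just described can be sketched as follows. This is my own toy illustration (the population, scores, and mutation scale are made up): the server keeps a population of weight sets, devices report an aggregated satisfaction score per variant, and the worst variants are dropped and replaced by mutated copies of the fittest.

```python
import random

# Toy sketch of selection instead of averaging: keep the fittest weight
# sets as surviving hypotheses, drop the rest, and refill the population
# with mutants of the survivors.
def evolve(population, scores, survivors=2, scale=0.1):
    """Return a new population: the top scorers plus mutants of them."""
    order = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    best = [population[i] for i in order[:survivors]]

    def mutate(weights):
        return [w + random.uniform(-scale, scale) for w in weights]

    return best + [mutate(random.choice(best))
                   for _ in range(len(population) - survivors)]

population = [[0.0, 0.0], [0.1, -0.1], [0.5, 0.5], [1.0, 1.0]]
scores = [0.9, 0.7, 0.2, 0.4]  # hypothetical aggregated device feedback
new_pop = evolve(population, scores)
# The two fittest hypotheses survive unchanged; the falsified ones are gone.
print(new_pop[0], new_pop[1])  # [0.0, 0.0] [0.1, -0.1]
```

Unlike averaging, a disagreeing hypothesis here is never diluted; it is either falsified and dropped, or it survives intact and seeds the next generation.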
When it comes to intelligence, rationality, depression, and autism, the evolutionary selection aspect is interesting, because we all know that the mentioned mental properties lower one's chances of raising many children today.
Too much good quickly turns bad.
As we know and as you mentioned, humans do learn from small data. We start with priors that are hopefully not too strong and go through the known processes of scientific discovery. NNs do not have that meta-process or any introspection (yet).
"You cannot solve this problem by minimizing your error over historical data. Insofar as big data minimizes an algorithm's error over historical results ... Big data compensates for weak priors by minimizing an algorithm's error over historical results. Insofar as this is true, big data cannot reason about small data."
NNs also do not reduce/idealize/simplify, explicitly generalize, and then run the results as hypothesis forks, or use priors to run checks (BS rejection / specificity). We do.
Maybe there will be an evolutionary process in which huge NNs are reduced to do inference at "the edge", and this turns into human-like learning once feedback from "the edge" is used to select and refine the best nets.
Why did you not go for engineering (like me)? You would still get some math proofs, but no one listens, and they will not test it either.
Twice I made the mistake of asking 'why' it is the way it is. All I got was "look at the proof, it works out". That is why I have little respect for mathematicians in general.
Because the SIMD approach is a poor fit for matrix-matrix multiplication, NVIDIA introduced Tensor Cores in the Volta architecture.
Article about it:
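The primitive a Volta Tensor Core implements is a fused matrix multiply-accumulate, D = A·B + C, on 4×4 tiles; larger matrices are decomposed into such blocks. Here is a plain-Python sketch of that operation (scalar loops standing in for what the hardware does in one step):

```python
# Sketch of the Tensor Core primitive D = A @ B + C on a 4x4 tile,
# written as plain nested loops (the hardware fuses this into one op).
def tensor_core_mma(A, B, C, n=4):
    """Compute D = A @ B + C for n x n tiles."""
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)]
            for i in range(n)]

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
ones = [[1.0] * 4 for _ in range(4)]
D = tensor_core_mma(I4, I4, ones)  # identity times identity, plus ones
print(D[0])  # [2.0, 1.0, 1.0, 1.0]
```

The fused accumulate (the "+ C") is the part that a pure SIMD lane-by-lane view handles awkwardly, which is the point of having a dedicated matrix unit.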