I’ll say that a model linearly represents a binary feature f if there is a linear probe on the model’s latent space that accurately classifies f.
If a model linearly represents features a and b, then it automatically linearly represents a∧b and a∨b.
I think I misunderstand your definition. Let feature a be represented by x_1 > 0.5, and let feature b be represented by x_2 > 0.5, with x_1, x_2 iid uniform on [0, 1]. Isn't that a counterexample to a∧b being linearly representable?
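This failure is easy to check numerically (my own sketch, not from the thread): the positive region for a∧b is the corner {x_1 > 0.5, x_2 > 0.5}, which is not a half-plane. For instance, the negative points (0.49, 0.99) and (0.99, 0.49) have the positive point (0.74, 0.74) as their midpoint, so no linear probe can classify all three correctly. A brute-force search over linear probes finds a perfect probe for a, but for a∧b the best accuracy tops out around 0.92:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, 2))
label_a = X[:, 0] > 0.5
label_and = (X[:, 0] > 0.5) & (X[:, 1] > 0.5)

def best_linear_probe_acc(X, y, n_angles=90, n_offsets=200):
    """Brute-force search over probes sign(w·x - b) for the best accuracy."""
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n_angles):
        proj = X @ np.array([np.cos(theta), np.sin(theta)])
        for b in np.linspace(proj.min(), proj.max(), n_offsets):
            hits = ((proj > b) == y).mean()
            best = max(best, hits, 1.0 - hits)  # allow flipped sign
    return best

acc_a = best_linear_probe_acc(X, label_a)      # ≈ 1.0: a's region is a half-plane
acc_and = best_linear_probe_acc(X, label_and)  # ≈ 0.92: a∧b's region is not
```

The grid over angles and offsets is coarse, but since the best exact linear probe for a∧b (roughly x_1 + x_2 > 4/3) already misclassifies a region of area about 1/12, refining the grid can't close the gap.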
Here is something I'd like to see: You give the machine the formally specified ruleset of a game (go, chess, etc), wait while the reinforcement learning does its job, and out comes a world-class computer player.
Here is one reason, but it's up for debate:
Deep learning courses rush through logistic regression and usually just mention SVMs. Arguably it's important for understanding deep learning to take the time to really understand how these linear models work, both theoretically and practically, both on synthetic data and on high-dimensional real-life data.
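To illustrate the kind of understanding I mean (my own minimal sketch, assuming nothing beyond numpy): logistic regression is just gradient descent on the mean log-loss of a sigmoid applied to a linear score — the same loop a deep learning framework runs, with no layers in the way.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic data: two overlapping Gaussian blobs in 2D
n = 500
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(y = 1 | x)
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of mean log-loss in w
    b -= lr * (p - y).mean()                # ... and in b

acc = ((X @ w + b > 0) == (y == 1)).mean()
```

Because the blobs overlap, even the Bayes-optimal classifier is imperfect here; the point of playing with a toy like this is seeing what the linear decision boundary can and cannot do.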
More generally, there are a lot of machine learning concepts that deep learning courses don't have enough time to introduce properly, so they just mention them, and you might get a mistaken impression about their relative importance.
Another related point: right now, machine learning competitions are dominated by gradient boosting, not deep learning. That says nothing about whether to start with deep learning, but it's a good argument against stopping at deep learning.
In the last two days, working alone, I wrote a prototype that can take a whiteboard photo and automatically turn it into a mindmap-like zoomable chart. Pieces of the chart can then be rearranged and altered individually:
This was part of a company hackathon, and I had some infrastructure to help me with the visualization, but for the shape recognition/extraction it was just me and the nasty Python bindings for OpenCV.
Oh my god, look at the 0-4-year-old assaults, both ED visits and deaths. (Assault is the leading TBI-related cause of death for 0-4-year-olds.) Some of those "falling" 4-year-olds were assaulted.
There are worse fates than not being able to top your own discovery of general relativity.
That's not a top-level comment, so it's excluded by my script from this version. I won't manually edit the output, sorry. There's another version where non-top-level comments are kept, too. Your quote is in there:
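For what it's worth, the filtering is just a parent check, something like this (hypothetical data layout; the real script's format differs):

```python
# hypothetical comment records: a comment is top-level iff it has no parent
comments = [
    {"id": "c1", "parent_id": None, "text": "a top-level comment"},
    {"id": "c2", "parent_id": "c1", "text": "a reply, excluded here"},
    {"id": "c3", "parent_id": None, "text": "another top-level comment"},
]

top_level = [c for c in comments if c["parent_id"] is None]
```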
Top quote contributors by statistical significance level:
Top quote contributors by karma score collected in 2014:
Top quote contributors by total (2009-2014) karma score collected: