All of hold_my_fish's Comments + Replies

Did the Industrial Revolution decrease costs or increase quality?

The detailed examples made this exceptionally interesting.

A minor nitpick: it is more accurate to draw the efficient frontier with axis-aligned line segments. To see why, consider points P=(1,1), Q=(3,2), R=(4,4). These points are all on the efficient frontier, because no point dominates any other in both cost and quality. But the straight line from P to R passes to the upper-left of Q, making it look as if Q is not on the efficient frontier. The solution is to draw the efficient frontier as (1,1)-(3,1)-(3,2)-(4,2)-(4,4). (It's a bit uglier though!)
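For anyone who wants to draw it this way, here's a minimal matplotlib sketch of that staircase for the example points (assuming cost on the x-axis and quality on the y-axis; the library and labels are my own choices, not anything from the post):

```python
import matplotlib.pyplot as plt

# Example frontier points, sorted by cost (x) and quality (y).
costs, quality = [1, 3, 4], [1, 2, 4]

# where="post" holds each quality level until the next cost is reached,
# tracing exactly (1,1)-(3,1)-(3,2)-(4,2)-(4,4).
plt.step(costs, quality, where="post", label="efficient frontier")
plt.scatter(costs, quality, zorder=3)  # mark P, Q, R themselves
plt.xlabel("cost")
plt.ylabel("quality")
plt.legend()
plt.show()
```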

jasoncrawford (1mo): Good point, except in cases where you can create any linear combination of any two solutions. But you can't always do that.
Adele Lopez (2mo): Ah, good catch, thanks!
Young kids catching COVID: how much to worry?

This seems reasonable, but I wonder whether "long-term complications" might be a bit underrated. It seems like there are a lot of viruses that have long-term effects or other non-obvious consequences. (I should add that I'm not a biologist, so this is not an informed opinion.)

The example I'm most familiar with is chicken pox causing shingles, decades after the initial sickness. In that case, shingles is (I think) typically more severe than the original sickness, and is quite common: 1 out of 3 people develop it in their lifetime, according to the CDC.... (read more)

Steven Byrnes (2mo): Yeah, thanks! I guess I was thinking that kids who don't get bad cases at the time are unlikely to have long-term effects. I think polio is like that. In particular, I assume that only the bad COVID cases get into the nervous system, which is where I'm especially concerned. So that's how I got a lower number. But I dunno either :-)
Human instincts, symbol grounding, and the blank-slate neocortex

Thanks for your reply!

A few points where clarification would help, if you don't mind (feel free to skip some):

  • What are the capabilities of the "generative model"? In general, the term seems to be used in various ways, e.g. (see the sketch after this list):
    • Sampling from the learned distribution (analogous to GPT-3 at temp=1)
    • Evaluating the probability of a given point
    • Producing the predicted most likely point (analogous to GPT-3 at temp=0)
  • Is what we're predicting the input at the next time step? (Sometimes "predict" can be used to mean filling in missing information, but that doesn't seem to
... (read more)
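For concreteness, here's a toy sketch of those three query modes on a small categorical distribution (the setup and numbers are purely my own illustration):

```python
import numpy as np

# A toy "learned distribution" over three outcomes.
outcomes = np.array(["red", "green", "blue"])
probs = np.array([0.6, 0.3, 0.1])
rng = np.random.default_rng(0)

# 1. Sample from the learned distribution (analogous to GPT-3 at temp=1).
sample = rng.choice(outcomes, p=probs)

# 2. Evaluate the probability of a given point.
p_green = probs[np.where(outcomes == "green")[0][0]]

# 3. Produce the single most likely point (analogous to GPT-3 at temp=0).
mode = outcomes[np.argmax(probs)]

print(sample, p_green, mode)  # e.g. red 0.3 red
```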
Steven Byrnes (3mo): Think of a generative model as something like "This thing I'm looking at is a red bouncy ball". Just looking at it, you can guess pretty well how much it would weigh if you lifted it, how it would feel if you rubbed it, how it would smell if you smelled it, and how it would bounce if you threw it. Lots of ways to query these models! Powerful stuff!

If a model is trained to minimize a loss function L, that doesn't mean that, after training, it winds up with a very low value of L in every possible case. Right? I'm confused about why you're confused. :-P
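A quick sketch of that last point, on a toy problem of my own choosing: a model can reach a tiny training loss and still score terribly on inputs unlike its training data.

```python
import numpy as np

# Fit a linear model to a quadratic function on a narrow slice of inputs.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 100)
y_train = x_train ** 2
w, b = np.polyfit(x_train, y_train, deg=1)

train_loss = np.mean((w * x_train + b - y_train) ** 2)  # tiny
far_loss = (w * 100 + b - 100 ** 2) ** 2                # enormous

print(f"train: {train_loss:.4f}, far away: {far_loss:.0f}")
# Low loss where the model was trained, huge loss off-distribution.
```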
Human instincts, symbol grounding, and the blank-slate neocortex

I find myself returning to this because the idea of a "common cortical algorithm" is intriguing.

It seems to me that if there is a "common cortical algorithm" then there is also a "common cortical problem" that it solves. I suspect it would be useful to understand what this problem is.

(As an example of why isolating the algorithm and isolating the problem could be quite different tasks, consider linear programming. To solve a linear programming problem, you can choose a simplex algorithm or an interior-point method, and these are fundamentally different approaches that are bot... (read more)
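To make the linear programming example concrete, here's a minimal sketch using SciPy (which exposes both families of methods); the tiny problem itself is my own invention:

```python
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4 and x <= 3, with x, y >= 0.
# linprog minimizes, so negate the objective.
c = [-1, -2]
A_ub = [[1, 1], [1, 0]]
b_ub = [4, 3]

# Two fundamentally different algorithms for the same problem...
simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")    # dual simplex
interior = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")  # interior point

# ...which agree on the answer: x = 0, y = 4.
print(simplex.x, interior.x)
```

Knowing the problem pins down neither the algorithm nor even the family of algorithms.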

Steven Byrnes (3mo): Thanks! The way I'm currently thinking about it is: everywhere but the frontal lobe, the task is something like "predict X given Y". But it's different X & Y for different parts of the cortex, and there can even be cascades where one region needs to predict the residual prediction error from another region (ref [https://arxiv.org/abs/1709.04654]). And there's also a top-down attention mechanism such that not all prediction errors are equally bad.

The frontal lobe is a bit different in that it's choosing what action to take or what thought to think (at least in part). That's not purely a prediction task, because it has more than one right answer. I mean, you can predict that you'll go left, then go left, and that's a correct prediction. Or you can predict that you'll go right, then go right, and that's a correct prediction too! So it's not just predictions; we need reinforcement learning / rewards too. In those cases, the task is "Find a generative model that is making correct predictions AND leading to high rewards," presumably.

But I don't think that's really something that the neocortex is doing, per se. I think it's the basal ganglia (BG), which sends outputs to the frontal lobe. I think the BG looks at what the neocortex is doing, calculates a value function (using TD learning, and storing its information in the striatum), and then (loosely speaking) the BG reaches up into the neocortex and fiddles with it, trying to suppress the patterns of activity that it thinks would lead to lower rewards and trying to amplify the patterns of activity that it thinks would lead to higher rewards.

See my Predictive coding = RL + SL + Bayes + MPC [https://www.lesswrong.com/posts/cfvBm2kBtFTgxBB7s/predictive-coding-rl-sl-bayes-mpc] for my old first-cut attempt to think through this stuff. Meanwhile I've been reading all about the striatum and RL stuff, more posts forthcoming I hope. Happy for any thoughts on that. :-)
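For readers unfamiliar with TD learning, here's a minimal tabular TD(0) sketch of the value-calculation step described above (the three-state chain and all names are illustrative assumptions on my part, not anything from the reply):

```python
# Tabular TD(0) value learning on a toy chain: state 0 -> 1 -> 2 (terminal).
# Arriving in state 2 pays reward 1; everything else pays 0.
ALPHA, GAMMA = 0.1, 0.9
V = [0.0, 0.0, 0.0]                  # value estimate per state
next_state = {0: 1, 1: 2}            # deterministic transitions
reward_on_arrival = {1: 0.0, 2: 1.0}

for _ in range(500):                 # replay the chain many times
    s = 0
    while s in next_state:
        s2 = next_state[s]
        r = reward_on_arrival[s2]
        # TD error: how much better or worse things went than predicted.
        delta = r + GAMMA * V[s2] - V[s]
        V[s] += ALPHA * delta
        s = s2

print([round(v, 2) for v in V])      # approaches [0.9, 1.0, 0.0]
```

Here the value function is just a lookup table; the suggestion in the reply is that the striatum stores something playing this role.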