# All of hold_my_fish's Comments + Replies

## Did the Industrial Revolution decrease costs or increase quality?

The detailed examples made this exceptionally interesting.

A minor nitpick: it is more accurate to draw the efficient frontier with axis-aligned line segments. To see why, consider points P=(1,1), Q=(3,2), R=(4,4). These points are all on the efficient frontier, because no point dominates any other in both cost and quality. But the straight line from P to R passes to the upper-left of Q, making it look as if Q is not on the efficient frontier. The solution is to draw the efficient frontier as (1,1)-(3,1)-(3,2)-(4,2)-(4,4). (It's a bit uglier though!)
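The axis-aligned construction above is mechanical enough to sketch in code. This is just an illustrative helper (not from the post): given frontier points as (cost, quality) pairs sorted by cost, it inserts the horizontal-then-vertical corner between each consecutive pair, reproducing the (1,1)-(3,1)-(3,2)-(4,2)-(4,4) polyline from the example.

```python
def staircase(points):
    """Given efficient-frontier points as (cost, quality) pairs, return the
    axis-aligned polyline through all of them: step right at constant
    quality, then up to the next point's quality, so no frontier point
    appears to be dominated by the line itself."""
    pts = sorted(points)
    path = [pts[0]]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        path.append((x1, y0))  # move right (more cost) at the old quality...
        path.append((x1, y1))  # ...then up to the new quality
    return path

print(staircase([(1, 1), (3, 2), (4, 4)]))
# [(1, 1), (3, 1), (3, 2), (4, 2), (4, 4)]
```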

Reply from jasoncrawford (1mo): Good point, except in cases where you can create any linear combination of any two solutions. But you can't always do that.
## Young kids catching COVID: how much to worry?

This seems reasonable, but I wonder whether "long-term complications" might be a bit underrated. It seems like there are a lot of viruses that have long-term effects or other non-obvious consequences. (I should add that I'm not a biologist, so this is not an informed opinion.)

The example I'm most familiar with is chicken pox causing shingles, decades after the initial illness. In that case, shingles is (I think) typically more severe than the original illness, and is quite common: 1 out of 3 people develop it in their lifetime, according to the CDC.

Reply from Steven Byrnes (2mo): Yeah, thanks! I guess I was thinking that kids who don't get bad cases at the time are unlikely to have long-term effects. I think polio is like that. In particular, I assume that only the bad COVID cases get into the nervous system, which is where I'm especially concerned. So that's how I got a lower number. But I dunno either :-)
## Human instincts, symbol grounding, and the blank-slate neocortex

A few points where clarification would help, if you don't mind (feel free to skip some):

• What are the capabilities of the "generative model"? In general, the term seems to be used in various ways, e.g.:
  • Sampling from the learned distribution (analogous to GPT-3 at temp=1)
  • Evaluating the probability of a given point
  • Producing the predicted most likely point (analogous to GPT-3 at temp=0)
• Is what we're predicting the input at the next time step? (Sometimes "predict" can be used to mean filling in missing information, but that doesn't seem to
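The three senses of "generative model" listed above can be made concrete with a toy sketch. This is purely illustrative and not anything from the post: the "model" here is just an explicit categorical distribution over three tokens, which makes all three operations trivial.

```python
import random

# Hypothetical toy "generative model": an explicit categorical
# distribution over tokens. (A real generative model is far richer.)
model = {"a": 0.6, "b": 0.3, "c": 0.1}

def sample(model):
    """Sense 1: sample from the learned distribution (like GPT-3 at temp=1)."""
    return random.choices(list(model), weights=model.values())[0]

def evaluate(model, x):
    """Sense 2: evaluate the probability of a given point."""
    return model.get(x, 0.0)

def most_likely(model):
    """Sense 3: produce the single most likely point (like GPT-3 at temp=0)."""
    return max(model, key=model.get)

print(evaluate(model, "b"))  # 0.3
print(most_likely(model))    # a
```

For a categorical distribution these operations are all cheap; the interesting question in the post is which of them the neocortex's model supports efficiently.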