All of Chris Beacham's Comments + Replies

Feels like it needed an ending... So you could open up the aperture of attention to get equanimity, but then what about the arguments? Just ignore them since they're from a "hell realm"? (That seems like it may lead to being unable to learn certain distressing information that is nonetheless true.)

PS I'm really enjoying the Opening the Heart of Compassion book!

romeostevensit, 7mo
The factual contents of the arguments become fine once disarmed of the emotional payload, in my experience.

Got this in my email and wanted to leave a note, in addition to my double upvote, that this was a great post. I've been extremely distressed by Cost Disease and have seen it as one of the chief ills of our society. I'm really bugged by a national conversation about forgiving student loans with minimal investigation into what these loans were even for.
Great post, good investigation, and I hope someone does the same for medicine.

When I think of my own college experience, what was awesome was the wide variety of classes I was able to take without slowing down my graduation or impeding my CS major. Here are some of my favorites: 4 semesters of performance art, 3D sculpture, linguistics, constructed languages, scuba diving, skiing, tree-climbing, lesbian fiction, singing tutoring, computer graphics (wrote my own raytracer), religion in the Middle Ages, history of film, judo, modern dance, Alexander Technique… I am sure there were more; I was ravenous through the course catalog. I graduated in... (read more)

I just had Omicron while travelling in Canada for New Year's, and the biggest negative was being isolated during my vacation, away from home. All 3 people in my group ended up getting separate hotels for two weeks; we probably spent $5k on hotels, cancelled flights, and COVID testing. Any enjoyment we hoped to get from the trip was ruined. We had to jump through lots of hoops to get back home too, returning via the land border, since returning by air requires waiting 2+ weeks after a positive PCR (which I wasn't able to get until a week into the illness, due to the holidays).

I recommen... (read more)

I mean, ‘Ratio Breaks’ seems like it’s just lying there.

jmh, 2y
Agree. And I think that name allows for the implication that the ratio need not be fixed for all people or across all tasks.

Are large models like MuZero or GPT-3 trained with these kinds of dropout/modularity/generalizability techniques? Or should we expect that we might be able to make even more capable models by incorporating them?

dkirmani, 2y
Good question! I'll go look at those two papers.

* The GPT-3 paper doesn't mention dropout, but it does mention using Decoupled Weight Decay Regularization, which is apparently equivalent to L2 regularization under SGD (but not Adam!). I imagine something called "Weight Decay" imposes a connection cost.
* The MuZero paper reports using L2 regularization, but not dropout.

My intuition says that dropout is more useful when working with supervised learning on a not-massive dataset for a not-massive model, although I'm not yet sure why this is. I suspect this conceptual hole is somehow related to Deep Double Descent, which I don't yet understand on an intuitive level. (Edit: looks like nobody does.) I also suspect that GPT-3 is pretty modular even without using any of those tricks I listed.
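To make the SGD-vs-Adam point concrete, here's a minimal one-step sketch (toy numbers of my own, not from either paper; Adam's momentum term is omitted because its bias-corrected value at step 1 is just the raw gradient):

```python
import torch

# Toy one-step comparison of L2 regularization vs. decoupled weight decay.
w0 = torch.tensor([2.0])   # current weight
g = torch.tensor([0.5])    # gradient of the *unregularized* loss
lr, lam = 0.1, 0.01        # learning rate, regularization strength

# Under plain SGD the two rules produce the identical update:
sgd_l2 = w0 - lr * (g + lam * w0)      # L2 penalty folded into the gradient
sgd_wd = w0 - lr * g - lr * lam * w0   # decay applied to the weights directly
assert torch.allclose(sgd_l2, sgd_wd)

def adam_step(w, grad, lr, beta2=0.999, eps=1e-8):
    """First Adam step with bias correction (first moment omitted,
    since its bias-corrected value at step 1 equals `grad`)."""
    v_hat = grad ** 2  # (1 - beta2) * grad^2, bias-corrected by /(1 - beta2)
    return w - lr * grad / (v_hat.sqrt() + eps)

# Under Adam the two rules diverge: an L2 term rides through Adam's
# per-parameter rescaling, while decoupled decay (AdamW) bypasses it.
adam_l2 = adam_step(w0, g + lam * w0, lr)       # decay term gets rescaled too
adamw   = adam_step(w0, g, lr) - lr * lam * w0  # decay stays proportional to w
print(adam_l2.item(), adamw.item())  # ~1.9 vs ~1.898: not equivalent
```

At step one, Adam's normalization maps any gradient to roughly its sign, so the L2 term's magnitude gets absorbed into the rescaling; the decoupled term stays proportional to the weight itself, which is also the sense in which weight decay acts like a connection cost.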