wangscarpet

Comments

Bayeswatch 13: Spaceship

I really loved reading this series. Came for the puns, stayed for the story. Thank you for writing!

Jitters No Evidence of Stupidity in RL

In continuous control problems what you're describing is called "bang-bang control": switching between different full-strength actions. In continuous-time systems this is often optimal behavior, because over a short timescale a double-strength action for half as long has the same effect. That holds until you factor in non-linear energy costs, at which point a smoother controller becomes preferred.
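A minimal numeric sketch of the two claims above, using a unit-mass point particle as the plant (my choice of example, not from the comment): equal impulse means equal effect, but a quadratic energy cost favors the smoother action.

```python
# Sketch: over a short window, a double-strength action for half as long
# imparts the same impulse (velocity change) as a half-strength action for
# the full window. Plant and numbers are illustrative.

def integrate_velocity(force, duration, mass=1.0, steps=1000):
    """Euler-integrate dv = (force / mass) dt over `duration`."""
    dt = duration / steps
    v = 0.0
    for _ in range(steps):
        v += (force / mass) * dt
    return v

# Full-strength action for half the time...
dv_bang = integrate_velocity(force=2.0, duration=0.5)
# ...matches half-strength action for the full time.
dv_smooth = integrate_velocity(force=1.0, duration=1.0)
assert abs(dv_bang - dv_smooth) < 1e-9

# But with a quadratic (non-linear) energy cost, the smooth action is cheaper:
energy = lambda force, duration: force**2 * duration
assert energy(2.0, 0.5) > energy(1.0, 1.0)  # 2.0 > 1.0
```

With a cost linear in |force| the two actions would tie, which is why the non-linearity is what breaks the preference for bang-bang.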

The Duplicator: Instant Cloning Would Make the World Economy Explode

Kiln People is a fantastic science fiction story which explores the same question, if the embodied copies are temporary (~24 hours). It explores questions of employment, privacy, life-purpose, and legality in a world where this cloning procedure is common. I highly recommend it to those interested.

Factors of mental and physical abilities - a statistical analysis

"Now, suppose that in addition to g, you learn that Bob did well on paragraph comprehension. How does this change your estimate of Bob's coding speed? Amazingly, it doesn't. The single number g contains all the shared information between the tests."

I don't think this is right if some fraction of the test for g is paragraph comprehension. If g is a weighted average of paragraph comprehension and addition skill, then knowing g and paragraph comprehension gives you addition skill.
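A toy numeric version of that point, assuming (purely for illustration) that g is an equal-weight average of the two subtest scores, with made-up numbers:

```python
# Hypothetical: g as a weighted average of exactly two subtest scores.
# Weights and scores are made up for illustration.
w_pc, w_add = 0.5, 0.5          # assumed weights
pc, add = 80.0, 60.0            # Bob's subtest scores (made up)
g = w_pc * pc + w_add * add     # g = 70.0

# Knowing g and the paragraph-comprehension score pins down addition exactly:
add_recovered = (g - w_pc * pc) / w_add
assert add_recovered == add
```

In practice g is a latent factor estimated from many tests rather than a simple average, so the recovery is noisy rather than exact, but the point stands that g is not independent of its own components.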

DeepMind: Generally capable agents emerge from open-ended play

Yep, they're different. It's just an architecture. Among other things, Chess and Go have different input/action spaces, so the same architecture can't be used on both without some way to handle this.

This paper uses an egocentric input, which allows many different types of tasks to use the same architecture. That would be the equivalent of learning Chess/Go based on pictures of the board.

Covid 6/10: Somebody Else’s Problem

Can you extrapolate the infectiousness ratio between the newest, most virulent strain and the original? I assume the original has all but died out, but maybe the ratio can be recovered by chaining together estimates of intermediate strains?
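The chaining idea in that question is just multiplication of relative estimates: if each variant's transmissibility was estimated relative to the strain it displaced, the ratio to the original is the product along the chain. The ratios below are placeholders, not real epidemiological estimates.

```python
# Each entry is a (displaced_strain, new_strain) pair with an assumed
# relative-transmissibility estimate made at the time of displacement.
ratios = {
    ("original", "variant_1"): 1.5,   # assumed, not a real estimate
    ("variant_1", "variant_2"): 1.4,  # assumed
    ("variant_2", "variant_3"): 1.6,  # assumed
}

# Chain the pairwise ratios to compare the newest strain to the original.
newest_vs_original = 1.0
for pair_ratio in ratios.values():
    newest_vs_original *= pair_ratio

print(round(newest_vs_original, 2))  # 3.36
```

The catch is that the uncertainties multiply too, so the chained estimate is much less precise than any single pairwise comparison.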

Book Review of 5 Applied Bayesian Statistics Books

I'm reading BDA3 right now, and I'm on chapter 6. You described it well. It takes a lot of thinking to get through, but it is very comprehensive. I like how it's explicitly not just a theory textbook. The authors demonstrate each major point by describing a real-world problem (measuring cancer rates across populations, comparing test-prep effectiveness) and attacking it with multiple models, usually a frequentist one to show its limitations and then their Bayesian model more thoroughly. The focus is on learning the tools well enough to apply them to real-world problems.

I plan to start skimming soon. It seems the first two sections are pedagogical, and the remainder covers techniques which I would like to know about but don't need in detail.

Edit: One example I really enjoyed, and which felt very relevant to today, was on estimating lung-cancer hotspots in America. It broke the country down by county, and first displayed a map of the USA highlighting counties in the top 10% of lung-cancer rates. Much of the highlighted region was in the rural southwest and Rocky Mountain region. It asked: what do you think makes these regions have such high rates? It then showed another map, this one of counties in the bottom 10% of lung-cancer rates, and the map focused on the same regions!

Turns out, this was mostly the result of these regions containing many low-population counties, which meant rare-event sampling could skew high very easily, just by chance. If the base rate is 5 per 10,000, and you have 2 cases in a county with 1,000 people, you look like a superfund site. But sample the next year and you might find 0 cases: a county full of young health-freaks.
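The comment's 5-per-10,000 example can be checked directly. Here's a small Poisson calculation (my framing, using the figures from the paragraph above) showing how often a 1,000-person county looks like a hotspot by chance, versus a 100,000-person county with the same proportional excess:

```python
import math

# Base rate from the comment: 5 cases per 10,000 people.
base_rate = 5 / 10_000

def prob_at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam), by summing the pmf below k."""
    pmf, total = math.exp(-lam), 0.0
    for i in range(k):
        total += pmf
        pmf *= lam / (i + 1)
    return 1.0 - total

# Small county: 1,000 people, expected 0.5 cases. Chance of showing >= 2
# cases (four times the base rate, the "superfund site" look):
small = prob_at_least(2, 1_000 * base_rate)

# Large county: 100,000 people, expected 50 cases. Same 4x excess needs
# >= 200 cases:
large = prob_at_least(200, 100_000 * base_rate)

assert small > 0.09   # roughly 9% of small counties, by chance alone
assert large < 1e-8   # essentially never for the large county
```

So with thousands of small counties, the top decile of raw rates is nearly guaranteed to be stuffed with them, no environmental cause required.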

If you model lung-cancer rates hierarchically, with a distribution over county-level cancer rates, each county's rate drawn from that distribution, and cancer events then sampled from its specific rate, you can get a Bayes-adjusted incidence rate for each county that regresses small counties toward the mean.
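A minimal empirical-Bayes sketch of that shrinkage, using a conjugate Beta-Binomial model in place of the book's full hierarchical model, with prior parameters chosen by hand (assumed, not fit from data) to match the base rate above:

```python
# Beta prior with mean 5 per 10,000 and a prior "pseudo-population" of
# 10,000 people. These parameters are assumed for illustration.
prior_a, prior_b = 5.0, 9_995.0

def shrunk_rate(cases, population):
    """Posterior mean rate under a Beta prior and Binomial likelihood."""
    return (prior_a + cases) / (prior_a + prior_b + population)

raw_small = 2 / 1_000                  # 20 per 10,000: looks alarming
adj_small = shrunk_rate(2, 1_000)      # pulled most of the way back to base
raw_large = 100 / 100_000              # 10 per 10,000 in a big county
adj_large = shrunk_rate(100, 100_000)  # barely moves: lots of evidence

assert adj_small < raw_small / 2       # small county heavily shrunk
assert adj_large > 0.9 * raw_large     # large county mostly trusted
```

The same 2x-or-worse raw excess gets treated very differently depending on population, which is exactly the adjustment a county hot-spot map needs.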

This made me read Covid charts that show hot-spot counties much differently. I noticed that the counties they list are frequently small. Right now, for example, all the counties on the NYTimes list have fewer than 20,000 people, which I believe puts them roughly in the bottom 25% of counties by population.

Iterated Trust Kickstarters

I think it comes from a feeling that the proportions of blame need to sum to one, and that by apologizing first you're putting more of the blame on your own actions. You often can't say "I apologize for the 25% of this mess I'm responsible for."

I think the general mindset of apportioning blame (as well as looking for a single blame-target) is a dangerous one. There's a whole world of things that contribute to every conflict outside of the two people having it.
