Wiki Contributions


Hi Trevor! I think you made some good points about the benefits of trying different models for AI safety research, especially to guard against tail risks. I'm excited to test out this relatively low-budget alternative to working in the Bay.

Cavendish is actually under 3 hours from Boston (and 2.5 hours from Harvard or MIT). Rent isn't $1000, but we did find a beautiful place for the summer for under $1/sqft/month.

We've just announced ourselves (Cavendish Labs) here! We plan on having a visiting scholars program that will allow those currently working full-time elsewhere to try out work on alignment for a couple weeks or so; more on that later.

This sounds like a classic example of the bias-variance tradeoff: adding parameters to your model means it can more accurately fit your data (lower bias), but is more sensitive to fluctuations in that data (higher variance). Total error when making predictions on new data is minimized when the bias and variance errors are balanced.
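To make the tradeoff concrete, here is a minimal numpy sketch on synthetic data (the generating function, noise level, and degrees are all illustrative, not from the original comment): a low-degree fit underfits (high bias), while a near-interpolating fit chases the noise (high variance).

```python
import numpy as np

# Illustrative setup: noisy samples from a smooth trend.
rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0.0, 1.0, 12)
y_train = true_fn(x_train) + rng.normal(0.0, 0.1, size=x_train.size)
x_test = np.linspace(0.02, 0.98, 50)
y_test = true_fn(x_test) + rng.normal(0.0, 0.1, size=x_test.size)

def fit_mse(degree):
    """Train/test mean squared error of a least-squares polynomial fit."""
    coefs = np.polyfit(x_train, y_train, degree)
    train = float(np.mean((np.polyval(coefs, x_train) - y_train) ** 2))
    test = float(np.mean((np.polyval(coefs, x_test) - y_test) ** 2))
    return train, test

# Degree 1 underfits (bias), degree 9 on 12 points overfits (variance);
# an intermediate degree balances the two.
errors = {deg: fit_mse(deg) for deg in (1, 3, 9)}
for deg, (tr, te) in errors.items():
    print(f"degree {deg}: train MSE {tr:.4f}, test MSE {te:.4f}")
```

Adding parameters can only shrink the training error (each model class nests the smaller ones), but past some complexity the test error grows again, which is the tradeoff in action.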

Another example: given n data points, you can always draw a polynomial of degree n−1 that fits them perfectly. But the interpolated values between those points may swing wildly under slight perturbations of your measured data, so the perfect fit is unlikely to represent a real trend. Often a simple linear regression is the most appropriate model.
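A quick numpy sketch of this sensitivity, on hypothetical roughly-linear data (the point count and noise scales are made up for illustration): perturb the measurements slightly and compare how much the degree-(n−1) interpolant moves versus a linear fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: n roughly-linear points, then a tiny perturbation.
n = 10
x = np.linspace(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.05, size=n)
y_pert = y + rng.normal(0.0, 0.01, size=n)  # small measurement wiggle

# Degree n-1 polynomials pass through each data set exactly.
p = np.polyfit(x, y, n - 1)
p_pert = np.polyfit(x, y_pert, n - 1)

# Linear fits to the same two data sets.
l = np.polyfit(x, y, 1)
l_pert = np.polyfit(x, y_pert, 1)

# Compare the fits between the sample points.
xs = np.linspace(0.0, 1.0, 200)
interp_shift = float(np.max(np.abs(np.polyval(p, xs) - np.polyval(p_pert, xs))))
linear_shift = float(np.max(np.abs(np.polyval(l, xs) - np.polyval(l_pert, xs))))

print(f"max shift, degree-{n - 1} interpolant: {interp_shift:.4f}")
print(f"max shift, linear fit: {linear_shift:.4f}")
```

The interpolant absorbs the perturbation exactly at the data points and amplifies it between them, while the linear fit averages it away, which is why the simpler model generalizes better here.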

Is it still possible for me to join? Let me know!