Also check out "personalized pagerank", where the rating shown to each user is "rooted" in what kind of content this user has upvoted in the past. It's a neat solution to many problems.
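To make the idea concrete, here is a minimal sketch of personalized PageRank via power iteration. The graph, the seed set, and the damping factor are all illustrative assumptions; the only difference from ordinary PageRank is that the random walk restarts at the user's upvoted items instead of uniformly at random.

```python
# Hypothetical tiny link graph: node -> list of nodes it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def personalized_pagerank(links, seeds, damping=0.85, iters=100):
    """Rank nodes, restarting walks at `seeds` (e.g. content the user
    upvoted) rather than uniformly over all nodes."""
    nodes = list(links)
    # Restart (personalization) vector: uniform over the user's seeds.
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1.0 - damping) * restart[n] for n in nodes}
        for n in nodes:
            out = links[n]
            if out:  # spread this node's rank along its outlinks
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
            else:    # dangling node: send its rank back to the seeds
                for m in nodes:
                    nxt[m] += damping * rank[n] * restart[m]
        rank = nxt
    return rank

scores = personalized_pagerank(links, seeds={"a"})
```

With the seed rooted at "a", scores concentrate on "a" and the nodes it (transitively) endorses, while "d", which nothing links to and which the user never upvoted, gets no rank at all.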
Predicting the future is hard, so it’s no surprise that we occasionally miss important developments.
However, several times recently, in the contexts of Covid forecasting and AI progress, I noticed that I had missed some crucial feature of a development I was trying to get right, and it felt like I could have seen it coming if only I had tried a little harder. (Some others probably did better, but I imagine I wasn't the only one who got things wrong.)
Maybe this is hindsight bias, but if there’s something to it, I want to distill the nature of the mistake.
First, here are the examples that prompted me to take notice:
Predicting the course of the Covid pandemic:
Relatedly, over time as capital demands increase, we might see huge projects which are collaborations between multiple countries.
I also think that investors could plausibly end up with more and more control over time if capital demands grow beyond what the largest tech companies can manage. (At least if these investors are savvy.)
(The things I write in this comment are commonly discussed amongst people I talk to, so not exactly surprises.)
To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left.
It's unclear how to value the equity I gave up, but it was probably at least 85% of my family's net worth. We are doing fine, though, so please don't worry about us.
Suppose Alice and Bob are two Bayesian agents in the same environment. They both basically understand how their environment works, so they generally agree on predictions about any specific directly-observable thing in the world - e.g. whenever they try to operationalize a bet, they find that their odds are roughly the same. However, their two world models might have totally different internal structure, different “latent” structures which Alice and Bob model as generating the observable world around them. As a simple toy example: maybe Alice models a bunch of numbers as having been generated by independent rolls of the same biased die, and Bob models the same numbers using some big complicated neural net.
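The toy example can be made concrete. Everything below (the 0.7 bias, Bob's particular mixture components) is an illustrative assumption: the point is only that two models with different latent structure can assign identical probabilities to every observable, so any bet Alice and Bob operationalize gets the same odds.

```python
# Alice's model: each observation comes from one biased "die",
# simplified here to a Bernoulli(0.7) coin.
def alice_predict():
    return 0.7  # P(next observation = 1)

# Bob's model: a more complicated latent structure -- a hidden switch
# choosing between two coins -- whose *marginal* over observations
# happens to also be Bernoulli(0.7).
def bob_predict():
    p_switch = 0.5            # latent: which coin is active
    p_coin1, p_coin2 = 0.9, 0.5
    return p_switch * p_coin1 + (1 - p_switch) * p_coin2  # = 0.7
```

Both agents quote the same betting odds on any observable outcome, yet Alice's model has one latent parameter where Bob's has three; nothing about their agreement on observables pins down the internal structure.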
Now suppose Alice goes poking around inside of her world model, and somewhere in there...
This model does allow for that. :) We can use this model whenever our two agents agree predictively about some parts of the world X; it's totally fine if our two agents learned their models from different sources and/or make different predictions about other parts of the world.
I am trying to gather a list of answers/quotes from public figures to the following questions:
I am writing them down here if you want to look/help: https://docs.google.com/spreadsheets/d/1HH1cpD48BqNUA1TYB2KYamJwxluwiAEG24wGM2yoLJw/edit?usp=sharing
I previously tried to assemble the most complete collection of quotes here, but it is now very outdated.
Historically, produce shopping mostly happened in open-air markets, but in the US produce is now typically sold in buildings. Most open-air produce sales are probably at farmers markets, but those focus on the high end. I like that Boston's Haymarket is more similar to the historical model: competing vendors selling conventional produce relatively cheaply.
It closes for the weekend at 7pm on Saturdays, and since food unsold by the end of the market will mostly go to waste, vendors start discounting heavily. You can get very good deals, though you need to be cautious: what's left at the end is often past the end of its human-edible life.
Today Lily was off at a scouting trip, and I asked Anna what she wanted to do. She remembered that a previous time Lily was...
I went a few times but eventually got grossed out by all the mold. (At least they don't sell live pangolins there.)