Dave Orr

Google AI PM; Foundation board member


Really cool project! And the write-up is very clear.

In the section about options for reducing the hit to helpfulness, I was surprised you didn't mention scaling the vector you're adding or subtracting -- did you try different weights? I would expect that you can tune the strength of the intervention by weighting the difference in means vector up or down.
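To illustrate what I mean, here's a minimal sketch with made-up shapes and data (`acts_pos`, `acts_neg`, and `steer` are hypothetical names, not from the post) of weighting a difference-in-means vector by a tunable coefficient:

```python
import numpy as np

# Hypothetical activations collected on contrastive prompts;
# sizes and values are illustrative only.
rng = np.random.default_rng(0)
acts_pos = rng.normal(1.0, 1.0, size=(128, 512))  # "positive" prompt activations
acts_neg = rng.normal(0.0, 1.0, size=(128, 512))  # "negative" prompt activations

# Difference-in-means steering vector.
steering_vec = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

def steer(hidden_state, alpha):
    """Apply the steering vector with tunable strength alpha."""
    return hidden_state + alpha * steering_vec

h = rng.normal(size=512)
# Sweep alpha to trade off intervention strength against helpfulness;
# the size of the shift grows linearly with alpha.
for alpha in [0.25, 0.5, 1.0, 2.0]:
    shift = np.linalg.norm(steer(h, alpha) - h)
```

The point is just that alpha gives you a continuous dial rather than an all-or-nothing intervention.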

The usual reason is compounding. If you have an asset that is growing over time, paying taxes from it means not only that you have less of it now, but also that the amount you pulled out won't compound indefinitely into the future. You want to compound growth for as long as possible on as much capital as possible. If you could diversify without paying capital gains you would, but since the choice is something like "get gains on $100 in this one stock" versus "get gains on $70 in this diversified basket of stocks", you might stay with the concentrated position even if you would prefer to be diversified.
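A quick back-of-the-envelope sketch of that tradeoff, with illustrative numbers (the 30% capital gains hit and equal 7% returns are assumptions for the example, not forecasts):

```python
# Hold $100 concentrated vs. pay ~30% capital gains now and diversify $70.
def grow(principal, rate, years):
    return principal * (1 + rate) ** years

years = 20
concentrated = grow(100, 0.07, years)  # keep the single stock
diversified = grow(70, 0.07, years)    # same return, smaller base

# At equal returns the diversified position stays ~30% behind forever,
# so diversifying only wins if it earns enough extra annual return to
# close the gap (or if you expect the concentrated stock to lag).
edge_needed = 1.07 * ((100 / 70) ** (1 / years) - 1)  # ~1.9% per year here
```

So over 20 years the diversified basket needs roughly two extra points of annual return just to break even with the concentrated position, which is why people often stay concentrated.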

This reminds me of a Brin short story which I think exactly discusses what you're talking about: https://www.davidbrin.com/tankfarm.htm

Cool concept. I'm a bit puzzled by one thing though -- presumably every time you use a tether, it slows down and drops to a lower orbit. How do you handle that? Is the idea that it's so much more massive than the rockets it's boosting that its slowdown is negligible? Or do we have to go spin it back up every so often?
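For a sense of scale, here's a momentum-conservation sketch with made-up round numbers (none of these masses come from the post):

```python
# Rough momentum-exchange estimate for a tether boosting a payload.
m_tether = 1_000_000.0  # kg, assumed massive tether station
m_rocket = 10_000.0     # kg, boosted payload
dv_rocket = 2_000.0     # m/s of boost handed to the payload

# Conservation of momentum: the tether loses what the payload gains.
dv_tether = -(m_rocket / m_tether) * dv_rocket  # -20 m/s per boost

# A 100:1 mass ratio means each boost costs the tether ~1% of the
# delta-v it hands out, so the slowdown is small per use but still
# accumulates, and the tether needs periodic re-boosting.
```

So plausibly the answer is "both": the mass ratio makes each use cheap, but you still have to spin it back up eventually.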

"If you are playing with a player who thinks that "all reds" is a strong hand, it can take you many, many hands to figure out that they're overestimating their hands instead of just getting anomalously lucky with their hidden cards while everyone else folds!"

As you guessed, this is wrong. If someone is playing a lot of hands, your first hypothesis is that they are too loose and making mistakes. At that point, each additional hand they play is evidence in favor of fishiness, and you can quickly become confident that they are bad.

Mistakes in the other direction are much harder to detect. If someone folds for 30 minutes, they plausibly just had a bad run of cards. We've all been there. They do have some discipline, but because folding is so common, each additional fold only adds a small bit of Bayesian evidence that the person is a rock.
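A toy Bayes calculation makes the asymmetry concrete (the play/fold frequencies below are assumed for illustration, not real poker statistics):

```python
import math

p_play_normal = 0.25  # a typical player voluntarily plays ~25% of hands
p_play_loose = 0.60   # a loose "fish" plays ~60%
p_fold_normal = 1 - p_play_normal  # 0.75
p_fold_rock = 0.95    # a very tight "rock" folds ~95%

# Log-likelihood ratio (in bits) contributed by a single observation.
bits_per_played_hand = math.log2(p_play_loose / p_play_normal)  # ~1.26 bits
bits_per_fold = math.log2(p_fold_rock / p_fold_normal)          # ~0.34 bits

# Each played hand is strong evidence of looseness; each fold is weak
# evidence of tightness, so identifying a rock takes many more hands.
```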

I wonder if there's a way to give the black-box recommender a different objective function. CTR is bad for the obvious clickbait reasons, but signals for user interaction are still valuable if you can find the right signal to use.

I would propose that returning to the site some time in the future is a better signal of quality than CTR, assuming the future is far enough away. You could try a week, a month, and a quarter.

This is maybe a good time to use reinforcement learning, since the signal is far away from the decision you need to make. When someone interacts with an article, reward the things they interacted with n weeks ago. Combined with karma, I bet that would be a better signal than CTR.
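As a sketch of that delayed-credit idea (the log structure and all names here are hypothetical, just to show the shape of it):

```python
# When a user returns after `delay` weeks, credit the items shown back then.
impression_log = {}  # week -> list of item ids shown that week
value = {}           # item id -> running reward estimate
counts = {}          # item id -> number of rewards received

def record_impressions(week, items):
    impression_log[week] = items

def on_return_visit(week, delay=4, reward=1.0):
    """User came back in `week`; reward items shown `delay` weeks earlier."""
    for item in impression_log.get(week - delay, []):
        counts[item] = counts.get(item, 0) + 1
        v = value.get(item, 0.0)
        value[item] = v + (reward - v) / counts[item]  # incremental mean

record_impressions(0, ["article_a", "article_b"])
on_return_visit(4)  # a return visit 4 weeks later credits week-0 items
```

You'd run this at several delays (a week, a month, a quarter) and see which horizon best predicts quality.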

Children are evidently next word completers.

I would be very unhappy if a non-disparagement agreement were sprung on me when I left the company. And I would be very reluctant to sign one when joining any company.

Luckily we don't have those at Google DeepMind.