I've always been fascinated by decisions that hinge on accepting or declining some continuous value. For instance, you might receive an unexpected email offering to purchase your domain name (e.g. eeegnu.com), and depending on a huge number of factors (how attached am I to the domain name, is the offer enticing enough, do I want to go through the hassle, etc.), you'll arrive at a decision. Intuitively, we'd expect there to be some hypothetical dollar value X such that every offer < X yields No and every offer ≥ X yields Yes. Framing this probabilistically, we'd expect a plot of dollar amount from 0 to infinity against the probability of choosing Yes to be non-decreasing. This might look...
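One simple way to make that "non-decreasing probability of Yes" concrete is a logistic curve. To be clear, the functional form and the `threshold`/`noise` parameters below are my own illustrative assumptions, not anything specified in the post; it's just a minimal sketch of a curve with the claimed monotone shape:

```python
import math

def p_accept(offer, threshold=500.0, noise=100.0):
    """Probability of accepting an offer of `offer` dollars.

    A logistic curve is one simple non-decreasing choice:
    `threshold` plays the role of the hypothetical value X where
    acceptance crosses 50%, and `noise` controls how sharply the
    decision flips. Both parameters are made up for illustration.
    """
    return 1.0 / (1.0 + math.exp(-(offer - threshold) / noise))

# The curve is non-decreasing in the offer amount:
offers = [0, 100, 250, 500, 750, 1000, 5000]
probs = [p_accept(x) for x in offers]
assert all(a <= b for a, b in zip(probs, probs[1:]))
```

As `noise` shrinks toward zero, this curve approaches the hard step function from the deterministic framing (No below X, Yes at or above X).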
These are great points, and ones I did think about when I was brainstorming this idea (if I understand them correctly). I intend to write a more thorough post on this tomorrow with clear examples (I originally imagined this as extracting deeper insights into chess), but to answer these:
- I did think about these as translators from the actions of models into natural language, though I don't get the point about extracting things beyond what's in the original model.
- I mostly glossed over this part in the brief summary, and my motivation for it comes from how (unexpectedly?) it works for GANs to just start with random noise, and
...