by gwern · 1 min read · 24th Apr 2021 · 8 comments
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Normalization-free Bayes: I was musing on Twitter about what the simplest possible still-correct computable demonstration of Bayesian inference is, one that even a middle-schooler could implement & understand. My best candidate so far is ABC Bayesian inference*: simulation + rejection, along with the 'possible worlds' interpretation.
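As a concrete illustration of the simulation + rejection recipe (a minimal sketch; the coin-flip setup and all the names here are my own choices for illustration, not part of the original):

```python
import random

# ABC ("Approximate Bayesian Computation") by rejection, middle-schooler style.
# Setup (my assumption): a coin with unknown bias; we observed 7 heads in 10 flips.
# Hypotheses are "possible worlds" drawn from a uniform prior over the bias.

def simulate(bias, n=10):
    """Flip a coin with the given bias n times; return the number of heads."""
    return sum(random.random() < bias for _ in range(n))

observed = 7
kept = []
for _ in range(100_000):
    bias = random.random()           # sample a possible world from the prior
    if simulate(bias) == observed:   # reject worlds that don't match the data
        kept.append(bias)

# The surviving worlds are approximate samples from the posterior.
print(sum(kept) / len(kept))  # posterior mean; should land near (7+1)/(10+2) ≈ 0.667
```

No likelihood formula, no normalization: simulating and throwing away mismatches does all the work, just slowly.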

Someone noted that rejection sampling is simple but needs normalization steps, which adds complexity back. I recalled that somewhere on LW, many years ago, someone commented on a Bayesian interpretation in which you don't need to renormalize after every likelihood computation: every hypothesis's weight simply decreases at a different rate. As strange as it sounds, it's apparently formally equivalent. I thought the comment was by Wei Dai, but I can't seem to refind it: queries like 'Wei Dai Bayesian decrease' obviously pull up way too many hits, it's probably buried in an Open Thread somewhere, asking on Twitter didn't help, and Wei Dai didn't recall it at all when I asked him. Does anyone remember this?
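My reconstruction of that idea (a hedged sketch, not necessarily the lost comment's formulation): keep one unnormalized weight per hypothesis and just multiply in each likelihood. Every weight shrinks, but at different rates, and the ratios are all Bayes needs, so you normalize only if and when you want actual probabilities.

```python
# Normalization-free updating over three candidate coin biases (uniform prior).
hypotheses = [0.3, 0.5, 0.7]
weights = [1.0, 1.0, 1.0]        # unnormalized; never renormalized mid-stream

data = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails
for flip in data:
    for i, h in enumerate(hypotheses):
        weights[i] *= h if flip else (1 - h)  # every weight only ever shrinks

# Normalize once at the end (or never, if you only compare hypotheses):
total = sum(weights)
posterior = [w / total for w in weights]
print(posterior)  # the 0.7-bias hypothesis should dominate after 6/8 heads
```

The intermediate weights are tiny and meaningless on their own, but dividing by the total at any single point recovers exactly the posterior you'd get from step-by-step renormalization.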

* I've made a point of using ABC in some analyses simply because it amuses me that something so simple still works, even when I'm sure I could've found a much faster MCMC or VI solution with some more work.


Incidentally, I'm wondering if the ABC simplification can be taken further to cover subjective Bayesian decision theory as well: if you have a set of possible worlds/hypotheses (let's say discrete for convenience), and you do only penalty updates, i.e. rejection sampling of worlds that don't match the current observation (like AIXI), can you then implement decision theory normally by defining a loss function and minimizing over it? In which case you can get Bayesian decision theory without probabilities, calculus, MCMC, VI, or anything more complicated than a list of numbers and a few computational primitives like coinflip().
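A sketch of how that might look (only coinflip() comes from the paragraph above; the two-coin setup, the loss, and all other names are my invented toy example): worlds are a plain list, observations delete mismatching worlds, and a decision just scores each action against the survivors.

```python
import random
random.seed(1)  # fixed seed so the sketch is reproducible

def coinflip():
    return random.random() < 0.5

def flip(bias):
    return random.random() < bias

# 1. Possible worlds: am I holding a heads-biased (0.8) or tails-biased (0.2) coin?
worlds = [0.8 if coinflip() else 0.2 for _ in range(50_000)]

# 2. Observation: three heads in a row. Penalty update = rejection:
#    keep only worlds that reproduce the observation when simulated.
surviving = [b for b in worlds if all(flip(b) for _ in range(3))]

# 3. Decision: total loss of each action over surviving worlds
#    (loss 1 for a wrong bet on the next flip, 0 otherwise); pick the minimum.
def total_loss(bet_heads):
    return sum(flip(b) != bet_heads for b in surviving)

best = min([True, False], key=total_loss)
print("bet heads" if best else "bet tails")  # betting heads should win
```

Nothing here is a probability: just a list of surviving worlds, counting, and coinflip()-style primitives, yet the chosen action matches the Bayes-optimal one.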

Doing another search, it seems I made at least one comment that is somewhat relevant, although it might not be what you're thinking of: https://www.greaterwrong.com/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy/comment/kuY5LagQKgnuPTPYZ

Funny that you have your own great LessWrong white whale, as I do, and that you also recall it may be from Wei Dai (while he doesn't recall it).

 https://www.lesswrong.com/posts/X4nYiTLGxAkR2KLAP/?commentId=nS9vvTiDLZYow2KSK

Humanities satirical traditions: I always enjoy the CS/ML/math/statistics satire in the annual SIGBOVIK and the Ig Nobels; physics has arXiv April Fools papers (like "On the Impossibility of Supersized Machines") & journals like Special Topics; and medicine has the BMJ Christmas issue, of course.

What are the equivalents in the humanities, like sociology or literature? (I asked a month ago on Twitter and got zero suggestions...)

Danbooru2021 is out. We've gone from n=3m to n=5m (w/162m tags) since Danbooru2017. Seems like all the anime you could possibly need to do cool multimodal text/image DL stuff, hint hint.

2-of-2 escrow: what is the exploding Nash equilibrium? Did it really originate with NashX? I've been looking for the history & real name of this concept for years now and have failed to refind it. Anyone?
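For readers who haven't seen the mechanism being asked about, here is my stylized understanding (a hedged sketch, not a definitive account of NashX; all parameter names are illustrative): both parties lock deposits alongside the trade, funds move only if both sign, and refusal to sign burns everything locked, so cheating is dominated whenever the deposits are large enough.

```python
# Stylized 2-of-2 escrow payoffs: buyer locks price P plus deposit D,
# seller locks deposit D; releasing funds requires BOTH signatures,
# and a refusal to sign destroys ("explodes") everything still locked.

def buyer_payoff(signs, V=12, P=10, D=5):
    # Honest: receives goods worth V, pays P, deposit returned -> V - P.
    # Cheat (keep goods, refuse to sign): locked P + D is destroyed -> V - P - D.
    return V - P if signs else V - P - D

def seller_payoff(signs, P=10, D=5):
    # Honest: receives P, deposit returned. Refusing to sign burns the deposit.
    return P if signs else -D

# Signing is a dominant strategy for both sides as long as D > 0:
assert buyer_payoff(True) > buyer_payoff(False)
assert seller_payoff(True) > seller_payoff(False)
```

The equilibrium holds without any trusted third party, which is presumably why the "exploding" framing stuck; the open question above is where the idea and its real name actually originated.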

Gwern, I wonder what you think about this question I asked a while ago on causality, in relation to the article you posted on Reddit. Do we need more general causal agents for addressing issues in RL environments?

Apologies for posting here; I didn't know how to mention/tag someone on a post in LW.

https://www.lesswrong.com/posts/BDf7zjeqr5cjeu5qi/what-are-the-causality-effects-of-an-agents-presence-in-a?commentId=xfMj3iFHmcxjnBuqY

