I read his thesis as:

  1. FB use reduces the effectiveness of AI safety researchers, and
  2. the techniques in the CFAR handbook can help people resist attention-hijacking schemes like FB; therefore
  3. a FB group for EAs is a high-leverage place to spread the CFAR handbook.

I got an email that said it will be Dec 17th.

Has a date been set for the 2022 event? Wondering if I'll be in the Bay day-of or traveling for Christmas.

I tried to sketch a toy problem with tunable “factorizability”:

Draw n samples each from two equal-variance normal distributions with different means a and b. The task is to estimate the median of the 2n combined samples.

If a << b the problem factorizes completely - the network just needs to estimate max(A) and min(B) and, in the last step, output their midpoint. (When every sample of A lies below every sample of B, the two middle order statistics of the combined 2n samples are exactly max(A) and min(B), and the median of an even-sized sample is their average.) As we move a and b closer together, more and more comparisons across A and B become important for accuracy, so the problem is only partially factorizable. When a = b the problem doesn’t (naively) factorize at all.

(I assume sufficiently small neural networks can’t find the fancy recursive solution*, so the circuit complexity will behave like the number of comparisons in the naive algorithm.)

*https://en.wikipedia.org/wiki/Median_of_medians
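A minimal numerical sketch of this setup (my own illustration, not from any tested architecture; unit variance and the helper names like `make_task` are assumptions):

```python
import numpy as np

def make_task(n, a, b, sigma=1.0, rng=None):
    """Draw n samples from N(a, sigma^2) and n from N(b, sigma^2);
    the target is the median of the combined 2n samples."""
    rng = rng if rng is not None else np.random.default_rng(0)
    A = rng.normal(a, sigma, n)
    B = rng.normal(b, sigma, n)
    return A, B, np.median(np.concatenate([A, B]))

def factorized_estimate(A, B):
    """Fully factorized shortcut: exact whenever every sample of A
    lies below every sample of B (the a << b regime)."""
    return (A.max() + B.min()) / 2

# The shortcut's error grows as the means approach each other.
rng = np.random.default_rng(0)
for gap in [10.0, 2.0, 0.5, 0.0]:
    errs = []
    for _ in range(1000):
        A, B, target = make_task(50, 0.0, gap, rng=rng)
        errs.append(abs(factorized_estimate(A, B) - target))
    print(f"gap = {gap:4.1f}   mean |error| = {np.mean(errs):.3f}")
```

With a large gap the printed error is essentially zero, since max(A) and min(B) are exactly the two middle order statistics; as the gap shrinks the shortcut degrades toward the fully non-factorized regime.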

Some forms of biased recall are Bayesian. This is because "recall" is actually a process of reconstruction from noisy data, so naturally priors play a role.
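To make that concrete, here's a toy sketch (my own illustration with made-up numbers, assuming a Gaussian prior over sizes and Gaussian noise on the memory trace): the posterior mean is a precision-weighted average, so the "recalled" value is pulled toward the prior mean.

```python
# Toy illustration with made-up numbers: Gaussian prior over object size,
# Gaussian noise on the stored memory trace.
prior_mean, prior_var = 10.0, 4.0   # prior: this kind of fruit is ~10 cm
trace, noise_var = 14.0, 4.0        # noisy trace of the (manipulated) image

# Posterior mean = precision-weighted average of prior and trace.
w = prior_var / (prior_var + noise_var)        # weight on the trace
recalled = w * trace + (1 - w) * prior_mean
print(recalled)  # prints 12.0: pulled from the observed 14 toward the prior 10
```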

Here's a fun experiment showing how people's priors on fruit size (pineapples > apples > raspberries ...) influenced their recollection of synthetic images where the sizes were manipulated: A Bayesian Account of Reconstructive Memory

I think this framework captures about half of the examples of biased recall mentioned in the Wikipedia article.