"We'll probably display this until the New Year"

I'd guess plenty are planning to donate after Jan 1st for tax reasons, so perhaps best to keep highlighting the donation drive through the first week of Jan.

Also, I donated $1,000. Lightcone's work has brought me a lot of direction and personal value over the years, so I'm happy I'm able to lend some support now.


At least in California, pay-for-use toilets are uncommon in part because they're illegal.


I'd add the best-in-class drug testing resource: sending a sample to https://drugsdata.org/

GC/MS equipment can distinguish hundreds of substances and report everything present, even trace contaminants. It's far superior to at-home reagent kits or test strips.

More generally, I find it troubling that you relegated drug testing resources to an appendix, and even there linked only to weak at-home kits and a lab offering infrared spectroscopy (much less sensitive than GC/MS). Relatedly, your description of street ketamine as "usually pretty pure" comes off as flippant. It makes me feel you don't have the reader's safety in mind, which in turn makes me trust your recommendations much less.

I read his thesis as:

  1. FB use reduces the effectiveness of AI safety researchers, and
  2. the techniques in the CFAR handbook can help people resist attention-hijacking schemes like FB; therefore
  3. a FB group for EAs is a high-leverage place to spread the CFAR handbook.

I got an email saying it will be Dec 17th.

Has a date been set for the 2022 event? Wondering if I'll be in the Bay day-of or traveling for Christmas.

I tried to sketch a toy problem with tunable “factorizability”:

Draw n samples each from two equal-variance normal distributions with different means a and b; call the sample sets A and B. The task is to estimate the median of the 2n combined samples.

If a << b, the two groups essentially never overlap and the problem factorizes completely: the two middle order statistics of the combined sample are just max(A) and min(B), so the network only needs to estimate those and output their midpoint in the last step. As we move a and b closer together, more and more comparisons across A and B become important for accuracy, so the problem is partially factorizable. When a = b, the problem doesn't (naively) factorize at all.

(I assume sufficiently small neural networks can't find the fancy recursive solution*, so the circuit complexity should scale like the number of comparisons in the naive algorithm.)

*https://en.wikipedia.org/wiki/Median_of_medians
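
Here's a minimal NumPy sketch of the setup (the helper names make_task and factorized_guess, n = 50, sigma = 1, and the gap values are my own illustrative choices): it checks that the fully factorized guess is exact when the groups are far apart and degrades as the gap shrinks.

```python
import numpy as np

def make_task(n, a, b, sigma=1.0, rng=None):
    """One task instance: n samples from N(a, sigma^2) and n from N(b, sigma^2);
    the target is the median of the combined 2n samples."""
    rng = rng if rng is not None else np.random.default_rng()
    A = rng.normal(a, sigma, n)
    B = rng.normal(b, sigma, n)
    return A, B, np.median(np.concatenate([A, B]))

def factorized_guess(A, B):
    """The fully factorized solution: exact whenever max(A) < min(B)."""
    return (A.max() + B.min()) / 2

rng = np.random.default_rng(0)
for gap in [10.0, 2.0, 0.0]:  # smaller gap => less factorizable
    errs = []
    for _ in range(1000):
        A, B, target = make_task(50, 0.0, gap, rng=rng)
        errs.append(abs(factorized_guess(A, B) - target))
    print(f"gap={gap:4.1f}  mean |error| of factorized guess: {np.mean(errs):.4f}")
```

With gap = 10 the groups essentially never overlap, so the printed error is ~0; at gap = 0 the midrange-style guess is still roughly unbiased but much noisier than the true median.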

Some forms of biased recall are Bayesian: "recall" is really a process of reconstruction from noisy data, so priors naturally play a role.

Here's a fun experiment showing how people's priors on fruit size (pineapples > apples > raspberries ...) influenced their recollection of synthetic images where the sizes were manipulated: A Bayesian Account of Reconstructive Memory
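
A minimal sketch of that mechanism under the usual Gaussian-conjugate assumptions (all numbers and names here are hypothetical, chosen for illustration): the recalled size is the precision-weighted average of the noisy memory trace and the prior, so it's biased toward the typical size.

```python
# Prior over an object's size (hypothetical numbers: "apples are ~8 cm across")
mu_prior, tau = 8.0, 1.0   # prior mean and prior std
sigma = 2.0                # noise std of the stored memory trace

def bayesian_recall(trace):
    """Posterior mean of the size given a noisy trace: the precision-weighted
    average of the trace and the prior, so recall shrinks toward the prior mean."""
    w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)  # weight on the trace
    return w * trace + (1 - w) * mu_prior

# A manipulated image of an unusually large apple (12 cm) is "recalled"
# as 8.8 cm, pulled from the observation toward the typical apple size.
print(bayesian_recall(12.0))  # -> 8.8
```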

I think this framework captures about half of the examples of biased recall mentioned in the Wikipedia article.