AABoyles

Comments

Anti-Aging: State of the Art

Fair point. It does seem like "pandemic" is a more useful category if it doesn't include a whole bunch of "things that happened but didn't kill a lot of people."

Anti-Aging: State of the Art

Without aging, COVID-19 would not be a global pandemic, since the death rate in individuals below 30 years old is extremely low.

A pandemic is an epidemic that occurs across multiple continents. Note that we can accordingly envision a pandemic with a death rate of zero; it would still be a pandemic nonetheless. I think you've somewhat overstated the punchline about aging and COVID-19, though I agree with the broader point that if aging were effectively halted at 30, the death rates would be much, much lower.

D&D.Sci

If I weren't trying to avoid spending time on this, I would fit a Random Forest or a Neural Network (rather than a logistic regression) to capture some non-linear signal, and, once it predicted well, fire up an optimizer to see how much of which stats really helps.
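For concreteness, here's a minimal sketch of what that might look like in R. The randomForest package is my assumption; the data URL and outcome encoding are the same ones used in the logistic-regression comment below.

library(readr)
library(dplyr)
library(randomForest)

# Same data and outcome encoding as in the logistic-regression comment below.
training <- read_csv("https://raw.githubusercontent.com/H-B-P/d-and-d-sci/main/d_and_d_sci.csv")
training <- training %>% mutate(outcome = factor(ifelse(result == "succeed", 1, 0)))

# A random forest can capture non-linear signal and interactions that a
# logistic regression will miss.
rf <- randomForest(outcome ~ cha + con + dex + int + str + wis,
                   data = training, ntree = 500)

# The out-of-bag error gives a quick check of whether it predicts well.
print(rf)

# Predicted success probability for one candidate build.
predict(rf, data.frame(str = 6, con = 14, dex = 13, int = 13, wis = 20, cha = 6),
        type = "prob")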

D&D.Sci

Fun! I wish I had a lot more time to spend on this, but here's a brief and simple basis for a decision:

library(readr)
library(dplyr)
library(magrittr)

# Load the published results and encode success as a binary outcome.
training <- read_csv("https://raw.githubusercontent.com/H-B-P/d-and-d-sci/main/d_and_d_sci.csv")

training %<>%
  dplyr::mutate(outcome = ifelse(result=="succeed", 1, 0))

# Logistic regression of success on the six ability scores.
model <- glm(outcome ~ cha + con + dex + int + str + wis, data = training, family = "binomial")

summary(model)

# Predicted success probability for the starting stats.
start <- data.frame(str = c(6), con = c(14), dex = c(13), int = c(13), wis = c(12), cha = c(4))
predict.glm(model, start, type="response")
# > 0.3701247

# Mostly Wisdom (+1 Con, +8 Wis, +1 Cha).
wise <- data.frame(str = c(6), con = c(15), dex = c(13), int = c(13), wis = c(20), cha = c(5))
predict.glm(model, wise, type="response")
# > 0.7314005

# All ten points into Charisma.
charismatic <- data.frame(str = c(6), con = c(14), dex = c(13), int = c(13), wis = c(12), cha = c(14))
predict.glm(model, charismatic, type="response")
# > 0.6510629

# +8 Wisdom, +2 Charisma.
wiseAndCharismatic <- data.frame(str = c(6), con = c(14), dex = c(13), int = c(13), wis = c(20), cha = c(6))
predict.glm(model, wiseAndCharismatic, type="response")
# > 0.73198

Gonna go with wiseAndCharismatic (+8 Wisdom, +2 Charisma).
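Relatedly, a rough version of the "fire up an optimizer" idea from the other comment: brute-force every way of spending the point budget and score each allocation with the fitted model above. (The 10-point budget is inferred from the candidate builds, not something I've checked against the scenario.)

# Every allocation of 10 points across the six stats (3003 of them).
allocations <- expand.grid(str = 0:10, con = 0:10, dex = 0:10,
                           int = 0:10, wis = 0:10, cha = 0:10)
allocations <- allocations[rowSums(allocations) == 10, ]

# Add each allocation to the starting stats and score it with the glm above.
candidates <- data.frame(str = 6 + allocations$str, con = 14 + allocations$con,
                         dex = 13 + allocations$dex, int = 13 + allocations$int,
                         wis = 12 + allocations$wis, cha = 4 + allocations$cha)
candidates$p <- predict.glm(model, candidates, type = "response")

# The highest-scoring builds under this (linear) model.
head(candidates[order(-candidates$p), ])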

Developmental Stages of GPTs

It would also be very useful to build some GPT feature "visualization" tools ASAP.

Do you have anything more specific in mind? I see the Image Feature Visualization tool, but in my mind it's basically doing exactly what you're already doing by comparing GPT-2 and GPT-3 snippets.

AABoyles's Shortform

If it's not fast enough, it doesn't matter how good it is

Sure! My brute-force bitwise algorithm generator won't be fast enough to reach any algorithm of length 300 bits, and our universe probably can't support a representation of any algorithm longer than the number of atoms in the observable universe, ~ 10^82 bits. (I don't know much about physics, so this could be very wrong, but think of it as a useful bound. If there's a better one (e.g. the number of Planck volumes in the observable universe), substitute that and carry on, and also please let me know!)
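As a quick sanity check on those numbers (taking the 300-bit figure and the ~10^82 atom count above at face value):

# Compare the count of 300-bit programs with ~10^82 atoms, in log10 so the
# comparison is easy to read.
log10_programs <- 300 * log10(2)   # ~90.3, i.e. roughly 2 x 10^90 programs
log10_atoms <- 82
log10_programs - log10_atoms       # ~8.3: about 10^8 more programs than atoms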

Part of the issue with this might be programs that don't work or don't do anything. (Beyond trivial cases, it's not clear how to select against these, outside of something like AlphaGo.)

Another class of algorithms that causes problems is those that don't output anything useful for some number of computations, after which they begin to output something useful. We don't get to know in advance whether they will halt, so if the useful structure only emerges after many steps, we may not be willing or able to run them that long.

AABoyles's Shortform

Anything sufficiently far away from you is causally isolated from you. Because of the fundamental constraints of physics, information from there can never reach here, and vice versa. You may as well be in separate universes.

The performance of AlphaGo got me thinking about algorithms we can't access. In the case of AlphaGo, we implemented an algorithm which discovered strategies we could never have created ourselves. (Go master Ke Jie famously said, "I would go as far as to say not a single human has touched the edge of the truth of Go.")

Perhaps we can imagine a sort of "logical causal isolation" (LCI). An algorithm is logically causally isolated from us if we cannot discover it ourselves (as with the Go strategies that AlphaGo used) and we cannot specify an algorithm to discover it (except by random accident) given finite computation over a finite time horizon (i.e. within the lifetime of the observable universe).

Importantly, we can devise algorithms which search the entire space of algorithms (e.g. generate all possible strings of bits of length less than n, as n approaches infinity), but there's little reason to expect that such a strategy will produce any useful outputs within a feasible length: as noted above, there appear to be only about 10^82 atoms in the observable universe with which to represent them.
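To make the "search the entire space" idea concrete, here's a minimal enumerator sketch in R (the bit strings stand in for candidate algorithms; this is purely illustrative of the blow-up):

# Enumerate every bit string of length 1..n. There are 2^k strings of length k,
# so the total (2^(n+1) - 2) doubles with every extra bit.
enumerate_bitstrings <- function(n) {
  unlist(lapply(seq_len(n), function(k) {
    grid <- expand.grid(rep(list(c(0L, 1L)), k))
    apply(grid, 1, paste0, collapse = "")
  }))
}

head(enumerate_bitstrings(3), 6)   # "0" "1" "00" "10" "01" "11"
length(enumerate_bitstrings(16))   # 131070 strings already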

There's one important weakness in LCI that doesn't exist in physical causal isolation: we can randomly jump to algorithms of arbitrary lengths. This gives us the weird ability to pull stuff from outside our LCI-cone into it. Unfortunately, we cannot do so with any expectation of arriving at a useful algorithm. (There's an interesting question, which I haven't yet thought about, concerning the distribution of useful algorithms of a given length.) Hence the caveat in our definition of LCI: "except by random accident."

We aren't LCI'd from the strategies AlphaGo used, because we created AlphaGo and AlphaGo discovered those strategies (even if human Go masters may never have discovered them independently). I wonder what algorithms exist beyond not just our horizons, but the horizons of all the algorithms which descend from everything we are able to compute.

The Bentham Prize at Metaculus

A second round is scheduled to begin this Saturday, 2020-02-08. New predictors should have a minor advantage in later rounds as the winners will have already exhausted all the intellectual low-hanging fruit. Please join us!
