Lanrian

Comments

I'm curious how much of a concern you think this is now, 1 year later. I haven't heard the "total number of mRNA shots (for any disease)" concern anywhere else, and I'm wondering if that's for good reasons.

Competence does not seem to aggressively overwhelm other advantages in humans: 

[...]

g. One might counter-counter-argue that humans are very similar to one another in capability, so even if intelligence matters much more than other traits, you won't see that by looking at the near-identical humans. This does not seem to be true. Often at least, the difference in performance between mediocre human performance and top-level human performance is large, relative to the space below, iirc. For instance, in chess, the Elo difference between the best and worst players is about 2000, whereas the difference between amateur play and random play is maybe 400-2800 (if you accept Chess StackExchange guesses as a reasonable proxy for the truth here).

The usage of capabilities/competence is inconsistent here. In points a-f, you argue that general intelligence doesn't aggressively overwhelm other advantages in humans. But in point g, the Elo difference between the best and worst players is less determined by general intelligence than by how much practice people have had.

If we instead consistently talk about domain-relevant skills: In the real world, we do see huge advantages from having domain-specific skills. E.g. I expect elected representatives to be vastly better at politics than median humans.

If we instead consistently talk about general intelligence: The chess data doesn't falsify the hypothesis that human-level variation in general intelligence is small. To gather data about that, we'd want to analyse the Elo difference between humans who have practiced similar amounts but who have very different g.

(There are some papers on the correlation between intelligence and chess performance, so maybe you could get the relevant data from there. E.g. this paper says that (not controlling for anything) most measurements of cognitive ability correlate with chess performance at ~0.24 (including IQ iff you exclude a weird outlier where the correlation was -0.51).)
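To make the quoted Elo gaps concrete: the standard Elo expected-score formula converts a rating gap into a per-game win probability. A minimal sketch in Python, using only the gaps quoted above:

```python
# Standard Elo expected-score formula: the weaker player's expected
# score against an opponent rated `gap` points higher.
def expected_score(gap: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (gap / 400.0))

for gap in (400, 2000, 2800):
    print(f"gap {gap:>4}: weaker side scores ~{expected_score(gap):.1e} per game")
# gap  400: ~9.1e-02  (about 1 point in 11 games)
# gap 2000: ~1.0e-05
# gap 2800: ~1.0e-07
```

So a 2000-point gap is not just "large": under the Elo model the weaker player scores roughly once per hundred thousand games.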

Another fairly common argument and motivation at OpenAI in the early days was the risk of "hardware overhang," that slower development of AI would result in building AI with less hardware at a time when they can be more explosively scaled up with massively disruptive consequences. I think that in hindsight this effect seems like it was real, and I would guess that it is larger than the entire positive impact of the additional direct work that would be done by the AI safety community if AI progress had been slower 5 years ago.

Could you clarify this bit? It sounds like you're saying that OpenAI's capabilities work around 2017 was net-positive for reducing misalignment risk, even if the only positive we count is this effect. (Unless you think there's substantial reason that acceleration is bad other than giving the AI safety community less time.) But then in the next paragraph you say that this argument was wrong (even before GPT-3 was released, which roughly corresponds to that same "around 2017" period). I don't see how those are compatible.

(If 1 firing = 1 bit, that should be 34 megabits ~= 4 megabytes.)

This random article (which I haven't fact-checked in the least) claims a bandwidth of 8.75 megabits ~= 1 megabyte per second. So that's like 2.5 OOMs higher than the number I claimed for Chinchilla. So yeah, it does seem like humans get more raw data.

(But I still suspect that Chinchilla gets more data if you adjust for (un)interestingness. Here, totally random data and easily predictable/compressible data are uninteresting, and data that is hard-but-possible to predict/compress is interesting.)

There are about a billion seconds in 30 years. Chinchilla was trained on 1.4 trillion tokens. So for a human adult to have taken in as much data as Chinchilla, we would need to process the equivalent of ~1400 tokens per second. I think that's something like 2 kilobytes per second.

Inputs to the human brain are probably dominated by vision. I'm not sure how many bytes per second we see, but I don't think it's many orders of magnitude higher than 2 kB/s.
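A quick sanity check of the arithmetic across these two comments, as a minimal Python sketch. The ~2 bytes per token is my own assumption (roughly log2 of a ~32k vocabulary, i.e. ~15 bits); everything else uses the numbers quoted above:

```python
import math

# Seconds in 30 years: ~9.5e8, i.e. roughly a billion.
seconds_30y = 30 * 365.25 * 24 * 3600

# Chinchilla's training set, per the comment above.
chinchilla_tokens = 1.4e12

tokens_per_sec = chinchilla_tokens / seconds_30y  # ~1480 tokens/s
bytes_per_sec = tokens_per_sec * 2                # assumption: ~2 bytes/token

# Neuron-firing estimate from above: 34 Mbit/s -> ~4.25 MB/s.
firing_MBps = 34e6 / 8 / 1e6

# Vision bandwidth claimed by the linked article: 8.75 Mbit/s -> ~1.09 MB/s.
vision_Bps = 8.75e6 / 8

ooms = math.log10(vision_Bps / bytes_per_sec)
print(f"{tokens_per_sec:.0f} tokens/s, {bytes_per_sec/1e3:.1f} kB/s")  # ~1479, ~3.0
print(f"firing: {firing_MBps:.2f} MB/s")                               # ~4.25
print(f"vision vs Chinchilla: ~{ooms:.1f} OOMs")                       # ~2.6
```

With the 2 bytes/token encoding the gap comes out at ~2.6 OOMs, consistent with the ~2.5 OOMs eyeballed above.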

The acronym is definitely used for reinforcement learning. ["RLHF" "reinforcement learning from human feedback"] gets 564 hits on Google, ["RLHF" "reward learning from human feedback"] gets 0.

Reinforcement* learning from human feedback

Ah, I see, it was today. Nope, wasn't trying to join. I first interpreted "next" Thursday as Thursday next week, and then "June 28" was >1 month off, which confused me. In retrospect, I could have deduced that it was meant to say July 28.

Also, next Thursday (June 28) at noon Pacific time is the Schelling time to meet in the Walled Garden and discuss the practical applications of this.

Is the date wrong here?

Some previous LW discussion on this: https://www.lesswrong.com/posts/9W9P2snxu5Px746LD/many-weak-arguments-vs-one-relatively-strong-argument

(author favors weak arguments; plenty of discussion and some disagreements in comments; not obviously worth reading)
