ricraz

Richard Ngo. I'm an AI safety research engineer at DeepMind (all opinions my own, not theirs). I'm from New Zealand and now based in London; I also did my undergrad and master's degrees in the UK (in Computer Science, Philosophy, and Machine Learning). Blog: thinkingcomplete.blogspot.com

ricraz's Comments

How special are human brains among animal brains?

I think whether the additional complexity is mundane or not depends on how you're producing the agent. Humans can scale up human-designed engineering products fairly easily, because we have a high-level understanding of how the components all fit together. But if you have a big neural net whose internal composition is mostly determined by the optimiser, then it's much less clear to me. There are some scaling operations which are conceptually very easy for humans, but hard to do via gradient descent. As a simple example, in a big neural network where the left half is doing subcomputation X and the right half is doing subcomputation Y, it'd be very laborious for the optimiser to swap them so that the left half is doing Y and the right half is doing X - since the optimiser can only change the network gradually, and after each gradient update the whole thing needs to still work. This may be true even if swapping X and Y is a crucial step towards scaling up the whole system, which will later allow much better performance.

In other words, we're biased towards thinking that scaling is "mundane" because human-designed systems scale easily (and to some extent, because evolution-designed systems also scale easily). It's not clear that AIs also have this property; there's a whole lot of retraining involved in going from a small network to a bigger network (and in fact usually the bigger network is trained from scratch rather than starting from a scaled-up version of the small one).

How special are human brains among animal brains?

A couple of intuitions:

  • Koko the gorilla had partial language competency.
  • The ability to create and understand combinatorially many sentences - not necessarily with fully recursive structure, though. For example, if there's a finite number of sentence templates, and then the animal can substitute arbitrary nouns and verbs into them (including novel ones).
  • The sort of things I imagine animals with partial language saying are:
    • There's a lion behind that tree.
    • Eat the green berries, not the red berries.
    • I'll mate with you if you bring me a rabbit.

"Once one species gets a small amount of language ability, they always quickly master language and become the dominant species" - this seems clearly false to me, because most species just don't have the potential to quickly become dominant. E.g. birds, small mammals, reptiles, short-lived species.

How special are human brains among animal brains?

It's not that we'd wipe out another species which started to demonstrate language. Rather, since the period during which humans have had language is so short, it'd be an unlikely coincidence for another species to undergo the process of mastering language during the period in which we already had language.

How special are human brains among animal brains?

+1. It feels like this argument is surprisingly prominent in the post given that it's a n=1 anecdote, with potential confounders as mentioned above.

How special are human brains among animal brains?

Nice post; I think I agree with most of it. Two points I want to make:

Or is this “qualitative difference” illusory, with the vast majority of human cognitive feats explainable as nothing more than a scaled-up version of the cognitive feats of lower animals?

This seems like a false dichotomy. We shouldn't think of scaling up as "free" from a complexity perspective - usually when scaling up, you need to make quite a few changes just to keep individual components working. This happens in software all the time: in general it's nontrivial to roll out the same service to 1000x users.

One possibility is that the first species that masters language, by virtue of being able to access intellectual superpowers inaccessible to other animals, has a high probability of becoming the dominant species extremely quickly.

I think this explanation makes sense, but it raises the further question of why we don't see other animal species with partial language competency. There may be an anthropic explanation here - i.e. that once one species gets a small amount of language ability, they always quickly master language and become the dominant species. But this seems unlikely: e.g. most birds have such severe brain size limitations that, while they could probably have 1% of human language, I doubt they could become dominant in anywhere near the same way we did.

There's some discussion of this point in Laland's book Darwin's Unfinished Symphony, which I recommend. He argues that the behaviour of deliberate teaching is uncommon amongst animals, and doesn't seem particularly correlated with intelligence - e.g. ants sometimes do it, whereas many apes don't. His explanation is that students from more intelligent species are easier to teach, but would also be more capable of picking up the behaviour by themselves without being taught. So there's not a monotonically increasing payoff to teaching as student intelligence increases - but humans are the exception (via a mechanism I can't remember; maybe due to prolonged infancy?), which is how language evolved. This solves the problem of trustworthiness in language evolution, since you could start off by only using language to teach kin.

A second argument he makes is that the returns from increasing fidelity of cultural transmission start off low, because the amount of degradation is exponential in the number of times a piece of information is transmitted. Combined with the previous paragraph, this may explain why we don't see partial language in any other species, but I'm still fairly uncertain about this.

"No evidence" as a Valley of Bad Rationality

I think the fact that chemotherapy isn't a very good example demonstrates a broader problem with this post: that maybe in general your beliefs will be more accurate if you stick with the null hypothesis until you have significant evidence otherwise. Doing so often protects you from confirmation bias, bias towards doing something, and the more general failure to imagine alternative possibilities. Sure, there are some cases where, on the inside view, you should update before the studies come in, but there are also plenty of cases where your inside view is just wrong.

Can crimes be discussed literally?

Yeah, "built on lies" is far from a straightforward summary - it emphasises the importance of lies far beyond what you've argued for.

The system relies on widespread willingness to falsify records, and would (temporarily) grind to a halt if people were to simply refuse to lie.

The hospital system also relies on widespread willingness to take out the trash, and would (temporarily) grind to a halt if people were to simply refuse to dispose of trash. Does it mean that "the hospital system is built on trash disposal"? (Analogy mostly, but not entirely, serious).

everyone says Y and the system wouldn't work without it, so it's not reasonable to call it fraud.

This seems like a pretty reasonable argument against X being fraudulent. If X are making claims that everyone knows are false, then there's no element of deception, which is important for (at least my layman's understanding of) fraud. Compare: a sports fan proclaiming that their team is the greatest. Is this fraud?

Is the coronavirus the most important thing to be focusing on right now?

On 1: How much time do people need to spend reading & arguing about coronavirus before they hit dramatically diminishing marginal returns? How many LW-ers have already reached that point?

On 3a: I'm pretty skeptical about marginal thought from people who aren't specialists actually doing anything - unless you're planning to organise tests or similar. What reason do you have to think LW posts will be useful?

On 3b: It feels like you could cross-apply this logic pretty straightforwardly to argue that LW should have a lot of political discussion; it has many of the same upsides, and also many of the same downsides. The very fact that LW has so much coronavirus coverage already demonstrates that the addictiveness of discussing this topic is comparable to that of politics.

Is the coronavirus the most important thing to be focusing on right now?
Answer by ricraz, Mar 19, 2020

I think LW has way too much coronavirus coverage. It was probably useful for us to marshal information when very few others were focusing on it. That was the "exam" component Raemon mentioned. Now, though, we're stuck in a memetic trap where this high-profile event will massively distract us from things that really matter. I think we should treat this similarly to Slate Star Codex's culture wars, because it seems to have a similar effect: recognise that our brains are built to overengage with this sort of topic, put it in an isolated thread, and quarantine it from the rest of the site as much as possible.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

Paul is implicitly conditioning his actions on being in a world where there's a decent amount of expected value left for his actions to affect. This is technically part of a decision procedure, rather than a statement about epistemic credences, but it's confusing because he frames it as an epistemic credence.
