"Conversely, if gorillas and chimps were capable of learning complex sign language for communication, we'd expect them to evolve/culturally develop such a language."
I haven't read much about the whole Koko situation, but my understanding is that part of the claim was that Koko was *unusually* adept with language.
A priori, if language comes "packaged for free" with some other higher-order cognitive faculties that, for whatever reason, can only be maintained in a small proportion of chimps (maybe calorie availability, increased risk-taking behaviour, or something else), then it seems perfectly plausible that the capability for language is present in some proportion of chimps greater than zero but below the critical threshold needed for a language to form.
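To make the threshold intuition concrete, here's a minimal sketch, with numbers invented purely for illustration: if each individual independently has some modest chance of being language-capable, and a shared language only emerges once a group contains enough capable members, the per-group chance of language formation can be negligible even though capable individuals are reasonably common.

```python
from math import comb

def p_language_forms(group_size: int, p_capable: float, k_needed: int) -> float:
    """Toy model: probability that a group contains at least k_needed
    language-capable individuals, assuming capability is independent."""
    return sum(
        comb(group_size, i) * p_capable**i * (1 - p_capable)**(group_size - i)
        for i in range(k_needed, group_size + 1)
    )

# Hypothetical numbers: 5% of chimps capable, groups of 30, and (say)
# 10 capable members needed before a language can bootstrap itself.
print(p_language_forms(30, 0.05, 10))  # on the order of 1e-6: capable
                                       # individuals exist, languages don't
```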
Alternatively, it also seems possible that inventing a grammar is harder than producing language within an already-constructed grammar. In that case you could have a fairly high proportion of animals capable of producing language after instruction, but incapable of inventing a language from scratch.
I think they probably would, but admit that it's unprovable and people have good reason to disagree.
The difference to my mind is the difference between:
- harming people because it buys you some marginal increase in safety or power, and
- harming people for its own sake, even when there's nothing in it for you.

I think the difference between these two would drive a lot of a dictator's actions.
I don't know as much about China, but you can see the first dynamic pretty clearly in Putin's actions. It'd be hard to argue that it's good for Russian national security for the Gazprom retirement plan to be "falling into Arctic waters in the middle of the night", but it makes Putin something like 0.001% safer.
On the other hand, if there were literally no benefit to doing so, I think Putin would be perfectly content to retire to a personal solar-system-sized dacha.
Maybe this is controversial, but I think that dictators do care about other people, just far less than they care about their own power and safety. It's well known, for example, that Kim Jong Un has a massive soft spot for children.
On the other hand, the only reason democratic leaders don't act like dictators is because they can't.
I might be less concerned if the country leading AI development were a parliamentary democracy rather than a presidential one, but the level of personal power held by the president of the USA will (imo) leave them exactly as prone to malevolent actions as someone like Xi in the CCP.
Like many Americans, Dario seems to me overly rosy about the democratic credentials of the USA and probably overly pessimistic about the CCP.
It was less than a week ago that the president of the US was blustering about invading an allied state, and I have no doubt that Donald Trump would commit worldwide atrocities if he had access to ASI.
On the other hand, it's far from clear to me that autocracies would automatically become more repressive with ASI. It seems plausible that the psychological safety of being functionally unremovable could lead to a more blasé attitude towards dissent. Who gives a shit if they can't dethrone you anyway?
Alternatively, I most often see rote memorization recommended by people studying fields that are inherently somewhat organised.
It's easy to see why Anki might work well for something like "memorizing lots of words in kanji", because the work of organising concepts into buckets is already embedded in the kanji and kanji radicals.
It's less obvious to me how you could, for example, learn optimal riichi mahjong with this kind of method, and probably because of that I've never seen anyone recommend it.
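To illustrate the "buckets" point, here's a minimal sketch (the card fields and decompositions are my own toy example, not Anki's actual data model): kanji cards come with a natural grouping key, their radicals, for free, whereas a mahjong decision point has no comparably obvious key to file it under.

```python
from dataclasses import dataclass, field

@dataclass
class KanjiCard:
    """A flashcard whose organisation comes 'for free' from the writing system."""
    kanji: str
    meaning: str
    radicals: list[str] = field(default_factory=list)  # natural grouping key

deck = [
    KanjiCard("語", "language", ["言", "五", "口"]),
    KanjiCard("話", "speak", ["言", "舌"]),
]

# Cards sort themselves into buckets by shared component, no design work needed:
by_radical: dict[str, list[str]] = {}
for card in deck:
    for r in card.radicals:
        by_radical.setdefault(r, []).append(card.kanji)

print(by_radical["言"])  # ['語', '話'], the 'speech' component bucket
```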
I'd just note that you should be cautious of people "answering" this question in hindsight.
In both of the subjects that I feel most professionally confident in and have had the chance to teach (maths and computer science), you'll see people sharing a common refrain: "If only I'd learnt {complicated method/language/mental model} first, I'd have saved myself so much time."
The most common examples I've seen of this are people who are convinced that teaching kids pointer juggling is gonna give them a stronger foundation for CS, or the cult of "Linear Algebra Done Right" (a book that I love, but that isn't a good introduction to the field imo).
"Lies to children" exist for a reason, and while some might be skippable, many form useful intellectual scaffolds.
This is getting a bit into the weeds, but I find that this blog post mirrors my experience with Turing-incomplete languages: https://neilmitchell.blogspot.com/2020/11/turing-incomplete-languages.html?m=1 (it also has the advantage of discussing a language that I've used in industry and can personally attest to a little).
Even if there's a more sophisticated and accurate way of describing the problem space, practicality can often push you back to a more general description with an extra hacky constraint (resource limits) shoved on top.
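A minimal sketch of the pattern I mean (my own toy example, not from the linked post): keep the fully general recursive description, and bolt an explicit resource limit on top rather than restricting the language so non-termination is impossible by construction.

```python
class OutOfFuel(Exception):
    """Raised when the evaluation budget is exhausted."""

def ackermann(m: int, n: int, fuel: int = 10_000) -> tuple[int, int]:
    """Fully general recursion, tamed by an explicit fuel budget.

    Returns (result, remaining_fuel); the fuel check is the hacky
    constraint shoved on top of the general description."""
    if fuel <= 0:
        raise OutOfFuel
    if m == 0:
        return n + 1, fuel - 1
    if n == 0:
        return ackermann(m - 1, 1, fuel - 1)
    inner, fuel = ackermann(m, n - 1, fuel - 1)
    return ackermann(m - 1, inner, fuel)

print(ackermann(2, 3))  # (9, ...): terminates comfortably within budget
# ackermann(4, 2) raises OutOfFuel instead of running effectively forever
```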
Is this type of course structure typical in US universities? It seems very strange to me that real analysis wouldn't be a first semester class, or that such a large proportion of classes in a maths degree would be on anything but maths.
A world where alignment is impossible should be safer than a world where alignment is very difficult.
Here's why I think this:
Suppose we have two worlds. In world A, alignment is impossible.
In this world, suppose an ASI is invented that wants to scale in power as quickly and thoroughly as possible. Its options are roughly: run more copies of itself, acquire more compute and resources, and act directly in the world with the capabilities it already has.
Notably, the agent can neither retrain itself nor train a more powerful agent to act on its behalf, since it couldn't align the resulting agent. This cuts off the vast majority of potential growth (even if what remains might still easily be enough to overpower humans in a given scenario).
In world B, the ASI can do all of the above but can also train a successor agent; we should expect it to get vastly more intelligent, vastly quicker.
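As a toy illustration of the gap (the growth rates below are invented for the sake of the example, not a forecast): if world A's ASI can only add capability by accumulating resources, while world B's can also compound capability through successively retrained successors, the trajectories diverge almost immediately.

```python
def world_a(steps: int, resource_gain: float = 1.0) -> float:
    """World A: no self-retraining, so capability only grows additively
    as the agent accumulates compute and resources."""
    capability = 1.0
    for _ in range(steps):
        capability += resource_gain
    return capability

def world_b(steps: int, resource_gain: float = 1.0,
            successor_multiplier: float = 1.5) -> float:
    """World B: the same resource accumulation, plus each generation
    trains a modestly better aligned successor (multiplicative growth)."""
    capability = 1.0
    for _ in range(steps):
        capability = (capability + resource_gain) * successor_multiplier
    return capability

for t in (5, 10, 20):
    print(t, world_a(t), round(world_b(t)))
# After 20 steps world A sits at 21 while world B is past 13,000:
# the compounding successor term, not the resources, dominates.
```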