Considering that China seems pretty serious about investing heavily in AI in the near future, it may be quite important that Stuart Russell's AI alignment advocacy book Human Compatible is made as accessible as possible to a Chinese audience, but there doesn't seem to be any Chinese translation.

Perhaps it isn't necessary? Is it possible that the most influential people in the field over there are all necessarily fluent in English, so that they can engage with the research literature and use the tools?
Otherwise, what's getting in the way of producing this? What should be done?

I've talked with someone in EA Hong Kong who follows the progress of translating effective altruism into the Chinese language and culture; doing this optimally is not trivial, and suboptimal translations carry substantial risks. Some excerpts mentioned in the linked post:

Doing mass outreach in another language creates irreversible “lock in” [...] China faces especially high risk of lock in, because you also face the risk of government censorship

Likewise, one of the possible translations of “existential risk” (生存危机) is very close to the name of a computer game (生化危机), so doesn’t have the credibility one might want.

To do this well, we’ll need people who are both experts in the local culture and effective altruism in the West. We’ll also need people who are excellent writers and communicators in the new language.

Initial efforts to expand effective altruism into new languages should focus on making strong connections with a small number of people who have relevant expertise, via person-to-person outreach instead of mass media.

The arguments about EA being niche and difficult to communicate through low-fidelity means apply just as strongly to EA-style AI safety. However, the author also says:

If written materials are used, then it’s better to focus on books, academic articles and podcasts aimed at a niche audience.

The Chinese translation of Human Compatible just came out last month, published by CITIC! The first chapter is here.

Let me know if you would like more information - I'm working at the Center for Human-Compatible AI on this. 

Book translations generally happen because a local publisher decides it would be worth it, so they buy the local-language sales rights (either from the original publisher or the original author, depending on whether the author kept those rights or sold them to the publisher) and hire a translator.

In this case, Human Compatible was published by Viking Press, who are a part of Penguin Group. According to Wikipedia, Penguin has its own division in China. They might or might not already be working on a translation of their own, or possibly negotiating with some other Chinese publisher for the sale of the rights.

If someone wanted to work on this, I would expect the first step would be to get in contact with Viking Press and find out whether any translation effort or rights negotiation is already in the works. If there isn't, getting a Chinese publisher (either Penguin's Chinese division or someone else) interested might be a good bet. That would probably require convincing them that Chinese people are interested in buying the book; I don't know what would persuade them of that.

Maybe there's a way of saying "I'm willing to buy 1000 books and gift them to people" that would persuade them that it makes sense to do the translation?

Any AI they create will be shaped by their values. 

Why? Why should we assume China can simply solve the alignment problem and the AI will follow their values?

purge · 3y
"shaped by their values" != "aligned with their values".  I think Stuart is saying not that China will solve the alignment problem, but that they won't be interested in solving it because they're focused on expanding capabilities, and translating a book won't change that.

If so, I think he's wrong here. The book may lead them to realize that unaligned AGI doesn't actually constitute an improvement in capabilities; it's the creation of a new enemy. A bridge that might fall down is not a useful bridge, and a successful military power, once informed of that, wouldn't want to build one.

It's in no party's interests to create AGI that isn't aligned with at least the people overseeing the research project.

An AGI aligned with a few living humans is generally going to lead to better outcomes than an AGI aligned with nobody at all; there is enough shared between humans to know that, and no one, coherently extrapolated, is as crass or parochial as the people we are now. Alignment theory should be promoted to every party.

ChristianKl · 3y
If you understand that there's an alignment problem, then "shaped by their values" = "aligned with their values". That's especially true in a country with a strong central leadership.
ChristianKl · 3y
Design specs vary, but they all include an AGI that actually values human life, which is the key AI safety consideration and why it's desirable to get the book translated.