Lê Nguyên Hoang

Lê Nguyên Hoang's Comments

Bayes-Up: An App for Sharing Bayesian-MCQ

I promoted Bayes-Up on my YouTube channel a couple of times 😋 (and on Twitter).

https://www.youtube.com/channel/UC0NCbj8CxzeCGIF6sODJ-7A/

Plausibly, almost every powerful algorithm would be manipulative

The YouTube algorithm is arguably an example of a "simple" manipulative algorithm. By now it is probably a combination of some reinforcement learning and a lot of supervised learning; but the following arguments apply even to supervised learning alone.

To maximize user engagement, it may recommend more addictive content (cat videos, conspiracy theories, ...), because it has learned from past examples that users who clicked on such content tended to stay longer on YouTube afterwards. This is user manipulation at a massive scale.
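
To make the mechanism concrete, here is a minimal toy sketch (purely illustrative, not YouTube's actual pipeline; the feature, the data, and all names are made up): a model trained only to predict subsequent session time from logged examples will end up ranking the most "addictive" candidate first.

```python
# Toy sketch of an engagement-maximizing recommender (hypothetical data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Logged training data: a crude "addictiveness" score per watched video,
# assumed (for illustration) to correlate with longer subsequent sessions.
addictiveness = rng.uniform(0, 1, size=1000)
session_minutes = 20 + 30 * addictiveness + rng.normal(0, 5, size=1000)

# Supervised learning step: predict engagement from content features.
model = LinearRegression()
model.fit(addictiveness.reshape(-1, 1), session_minutes)

# Recommendation step: rank candidates by predicted engagement.
candidates = np.array([[0.1], [0.5], [0.9]])  # calm, middling, addictive
predicted = model.predict(candidates)
ranking = candidates[np.argsort(-predicted)]
print(ranking)  # the most addictive candidate comes out on top
```

Nothing in this objective asks the system to manipulate anyone; the manipulation falls out of optimizing predicted engagement on past behavior.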

Is this an existential risk? Well, some of this addictive content radicalizes and angers users. This arguably increases the risk of international tensions, which in turn increases the risk of nuclear war. This may not be the most dramatic increase in existential risk; but it's one that seems to be going on already today!

More generally, I believe that a lot can be learned about complex algorithms, including AGI, by pondering the behavior and impact of the YouTube algorithm much more closely. In a sense, the YouTube algorithm performs so many different tasks (audio, visual, text, preference learning, captioning, translating, recommending, planning...) that it can already be argued to be quite "general".

More on this algorithm here: https://robustlybeneficial.org/wiki/index.php?title=YouTube

A rant against robots

This is probably more contentious, but I believe that the concept of "intelligence" is unhelpful and causes confusion. For instance, Legg-Hutter intelligence does not seem to require any "embodied intelligence".
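
For reference, Legg and Hutter define the intelligence of an agent π roughly as a complexity-weighted sum of its expected rewards across all computable environments; nothing in the definition mentions a body or physical sensors.

```latex
% Legg-Hutter universal intelligence of an agent \pi:
% E is the set of computable environments, K(\mu) the Kolmogorov complexity
% of environment \mu, and V_\mu^\pi the expected cumulative reward of \pi in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```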

I would rather stress two key properties of an algorithm: the quality of the algorithm's world model and its (long-term) planning capabilities. It seems to me (but maybe I'm wrong) that "embodied intelligence" is not very relevant to world model inference and planning capabilities.

A rant against robots

By the way, I've just realized that the Wikipedia page on AI ethics begins with robots. 😤