Agreed. Advocacy seems to me to be tied to bad epistemics very frequently, for a variety of reasons. So what is missing for me in this writeup (and indeed, in most discussions of the issue) is this: why does it make sense to make laypeople even more interested?
The status quo is that the relevant people (ML researchers at large, AI investors, governments, and international bodies like the UN) are already well aware of the safety problem. Institutions are set up, work is being done. What is there to be gained from involving the public to an even greater extent, poisoning and inevitably simplifying the discourse, and adding more hard-to-control momentum? I can imagine a few answers (not enough being done at present, fear of market forces eventually overwhelming governance, a "democratic mindset"), but none of those seem convincing in the face of the above.
To tie this to the environmental movement: wouldn't it be much better for the world if it were an uninspiring issue? It seems to me that this would have prevented the anti-nuclear movement from being solidified by the momentum, Extinction Rebellion from promoting degrowth, etc., and instead semi-sensible policies would get considered somewhere in the bureaucracies of the states.
This is potentially a naive question, but how well would the imaging deal with missing data? Say that 1% (or whatever the base rate is) of tissue samples get destroyed during slicing or expansion - would we be able to interpolate those missing pieces somehow? Do we know any bounds on the error that would introduce in the dynamics later?
I strong-downvoted, because I think public protests are not a good way of pushing for change.
There are some rare situations in which protests are a good choice, but mostly as an option of last resort. A possible counterpoint, that you are mostly advocating for awareness as opposed to specific points, is moot, since pretty much everyone is aware of the problem now - society as a whole, policymakers in particular, and people in AI research and alignment.
FYI, in the answer you linked to, there is another, much easier way of doing it (& it worked for me):
tl;dr:
- have the Android command-line tools installed on a development machine, and USB debugging enabled on your device (the device does not need to be rooted)
- forward the DevTools socket from the device: `adb forward tcp:9222 localabstract:chrome_devtools_remote`
- fetch the list of open tabs: `wget -O tabs.json http://localhost:9222/json/list` (a small sketch for reading the output follows below)
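If it helps, here is a minimal Python sketch for reading the resulting tabs.json. The "title" and "url" field names are what the DevTools /json/list endpoint normally returns, so treat them as an assumption and adjust for your Chrome version if needed.

```python
import json

# Read the file saved by the wget command above and print one line per open tab.
with open("tabs.json") as f:
    tabs = json.load(f)

for tab in tabs:
    # "title" and "url" are the usual DevTools fields; fall back gracefully if absent.
    print(tab.get("title", "<no title>"), "->", tab.get("url", ""))
```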
Interesting point of view. I don't think I agree with the sex triggers section: it seems that applying this logic retroactively would predict that the internet and video games would have been banned by now (it is of course the case that in many instances they are stigmatized, but nowhere near to the extent that would result in a ban).
Also, the essay does not touch on the most important piece of the equation, which is the immense upside of AGI - the metaphor about nuclear weapons spitting out gold, right up until they get large enough. This means there is a huge incentive for private companies to unilaterally improve the tech, plus Moore's law making compute cheaper every year. If you can get the AI to comprehend text a bit better (or do any other sort of "backend" task), this is very different from producing child porn, growing weed, or killing people more effectively, which are very localized sources of profit. I think only human cloning comes close as an example, but still not quite (the gains are very uncertain and would arrive only after a long delay, it's more difficult to hide the experiments, the technology is much more specialised, while compute is needed in every other part of the economy, and 'doing AI' is not as well-defined a category as 'using human stem cells').
Suppose you want to make a binary decision with a specified bias $p$. If, say, $p=1/8$, then you can throw a coin 3 times and, if you get, say, $HHH$, take it as positive, else negative.
But if the denominator of $p$ is big (say $p=1/1000$), or $p$ is a weird number, say $1/\pi$, then this method fails. There is another really beautiful method I learned some time ago, which lets you simulate any biased coin with a constant expected number of throws, namely 2! (I lost the source, unfortunately.)
It works as follows: you throw the coin until the first time you get a head - say this happens on your $n$-th throw. Then you accept if and only if the $n$-th digit in the binary expansion of $p$ is 1. It is easy to show that the bias comes out to exactly $p$: the first head lands on throw $n$ with probability $2^{-n}$, so $P(\text{accept}) = \sum_{n \,:\, b_n = 1} 2^{-n} = p$, where $b_n$ is the $n$-th binary digit of $p$. The expected number of throws is $\sum_{n \ge 1} n \cdot 2^{-n} = 2$.
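Here is a minimal Python sketch of the method (not the source I lost, just my reconstruction); the `pi_inverse_bit` helper is only an illustrative way of getting the binary digits of $1/\pi$.

```python
import math
import random

def biased_coin(p_bit):
    """Simulate a coin with bias p using fair flips.

    p_bit(n) must return the n-th binary digit of p (n = 1, 2, ...),
    i.e. p = sum over n >= 1 of p_bit(n) * 2**(-n).
    """
    n = 0
    while True:
        n += 1
        if random.random() < 0.5:     # fair flip; stop at the first head
            return p_bit(n) == 1      # accept iff the n-th bit of p is 1

def pi_inverse_bit(n):
    # n-th binary digit of 1/pi, i.e. floor(2**n / pi) mod 2.
    # Double precision is fine here, since n rarely exceeds ~20 in practice.
    return int(2**n / math.pi) % 2

samples = [biased_coin(pi_inverse_bit) for _ in range(100_000)]
print(sum(samples) / len(samples), "vs", 1 / math.pi)  # empirical vs target bias
```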
This line of reasoning, of "AGI respecting human autonomy", has the problem that our choices, undertaken freely (to whatever extent it is possible to say so), can be bad - not because of some external circumstances, but because of us being human. It's like in The Great Divorce - given an omnipotent, omnibenevolent God, would a voluntary hell exist? This is to say: if you believe in respecting human autonomy, then how you live your life now very much matters, because you are now shaping your to-be-satisfied-for-eternity preferences.
Of course, the answer is that "AGI will figure this out somehow". Which is equivalent to saying "I don't know". Which I think contradicts the argument "If all goes well, it literally doesn't matter what you do; how you live is essentially up to you from that point on".
The correct argument is, IMO: "there is a huge uncertainty, so you might as well live your life as you are now, but any other choice is pretty much equally defensible".
I was trying to guess what the idea was before reading the post, and my first thought was: in a multi-player game, there is a problem where, say, two players are in a losing position and would like to resign (and go play something else), two other players are in a so-so position and might want to resign, and the final player is clearly winning and wants to continue. But there is no incentive to straight-up resign unilaterally, since then you have to sit and wait idly until the game finishes.
So, we introduce "fractional resignations": we get something like [1, 1, 0.6, 0.6, 0.1], sum it, compare the total to a pre-agreed threshold (say, 3), and end the game if it passes this bar.
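A minimal sketch of the aggregation rule as I imagine it (the function name and the threshold value are just illustrative):

```python
def should_end_game(resignations, threshold):
    """End the game once the summed fractional resignations reach the threshold.

    resignations: one value in [0, 1] per player, where 1 means fully resigned.
    """
    return sum(resignations) >= threshold

# The example from the comment above: two losing players, two so-so, one winner.
votes = [1, 1, 0.6, 0.6, 0.1]
print(should_end_game(votes, threshold=3))  # True, since 3.3 >= 3
```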
Can you please link some of those YouTube channels you mentioned in the comment? I'd like to learn more about the topic - ideally, grasp the big ideas & what-I-don't-know (I'm coming from the pure math angle, so I don't have much grounding in the natural sciences).
For reference, I found Introduction to Biology - The Secret of Life (an MIT course on edX) to be very helpful for this kind of exploration.
Each time, you can also apply this argument in reverse: I don't like X about my city, so I'm happy that in the future the company will relocate me to NYC. And since NYC is presumed to be overall better, there are more instances of the latter than of the former.
It seems to me you are taking the argument seriously, but very selectively.
(I have both kinds of thoughts pretty often, and I'm overall happy about the upcoming move.)