Defense Against The Super-Worms
I am not writing this to give anyone any ideas. However, it seems to me that hackers are going to be able to leverage distributed language models in the near future, and I think we should be creating proactive countermeasures before the tech is powerful enough to cause real damage. I predict that with globally distributed language model training having been proven possible, we will soon see botnet-style distributed inference. If you have enough devices on a home network, it is already feasible to run a very large model with the compute distributed across your phone, iPad, laptop, etc. (I sketch the layer-sharding trick, and one possible countermeasure, at the end of this post).

In my opinion, it is only a matter of time before hackers leverage distributed model inference to create self-propagating Super-Worms. These Super-Worms would function much like a normal botnet, with the added ability to rewrite their own code whenever they notice chunks of their compute being eradicated by new additions to antivirus software.

Because the compute would be distributed, it seems obvious that the virus would not be limited to running a single 405B-parameter model across the entire network. Given enough time, it could run many models in parallel, communicating synchronously (or asynchronously) with one another to improve their ability to spread. The virus, whose purpose would likely be to mine Bitcoin or collect user data, could then adjust its own priorities in response to feedback from its environment. Should it dedicate more compute to self-improvement? Or, if the antivirus developers seem to be lagging behind, should it dedicate more compute to mining?

Of course, hackers won't be the only ones able to launch these distributed Super-Worms into the ether. Nation states, collectives, and anyone with access to the internet will be able to take one of these distributed viruses and tailor its ethos to their own desires. This, to me, seems to be the biggest risk of improving AI technology. Not a singleton deciding its time has come, but countless Super-Worms, each tailored to a different agenda.
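To make the layer-sharding idea concrete, here is a minimal sketch. I'm letting a toy stack of NumPy linear layers stand in for transformer blocks, and in-process `Device` objects stand in for the phone, tablet, and laptop that would each hold one shard; none of this is a real framework's API. The point is that the weights stay put on each device and only the activations travel.

```python
import numpy as np

class Device:
    """Holds one contiguous shard of the model's layers and runs them in order."""
    def __init__(self, name, layers):
        self.name = name
        self.layers = layers  # list of (weight, bias) pairs

    def forward(self, x):
        for w, b in self.layers:
            x = np.maximum(x @ w + b, 0.0)  # toy ReLU block standing in for a transformer layer
        return x

def shard_model(layers, device_names):
    """Split the layer list into one contiguous shard per device."""
    shards = np.array_split(np.arange(len(layers)), len(device_names))
    return [Device(name, [layers[i] for i in idx])
            for name, idx in zip(device_names, shards)]

rng = np.random.default_rng(0)
dim, n_layers = 64, 12
layers = [(rng.normal(scale=0.1, size=(dim, dim)), np.zeros(dim))
          for _ in range(n_layers)]

# Partition 12 layers across 3 "devices". In a real deployment each hop below
# would be a network round trip carrying only the activation tensor; the
# weights never leave the device that holds them.
pipeline = shard_model(layers, ["phone", "tablet", "laptop"])
x = rng.normal(size=(1, dim))
for device in pipeline:
    x = device.forward(x)
print(x.shape)  # (1, 64)
```

Because each device stores only its own shard, no single machine ever needs enough memory to hold, or even see, the whole model. That is what makes the home-network scenario plausible.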
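As for countermeasures, here is one guess at what a defender could key on. Pipelined inference between hijacked devices should show up as long runs of near-identical-size transfers between the same pair of hosts on the LAN, since a serialized activation tensor has a fixed shape. The function, thresholds, and traffic format below are all hypothetical, a sketch of the heuristic rather than a working detector.

```python
from collections import defaultdict
from statistics import mean, pstdev

def flag_inference_like_flows(transfers, min_run=200, max_jitter=0.05):
    """transfers: iterable of (src, dst, nbytes) tuples from a traffic log.
    Flags host pairs whose transfers are numerous and unusually uniform in
    size; an activation tensor has a fixed shape, so its serialized size
    barely changes from token to token."""
    sizes = defaultdict(list)
    for src, dst, nbytes in transfers:
        sizes[(src, dst)].append(nbytes)
    flagged = []
    for pair, run in sizes.items():
        if len(run) >= min_run and pstdev(run) <= max_jitter * mean(run):
            flagged.append(pair)
    return flagged

# Toy log: the phone -> tablet flow repeats a ~16 KB activation transfer 500
# times, while the laptop -> tv flow is ordinary, bursty browsing traffic.
log = [("phone", "tablet", 16384 + (i % 7)) for i in range(500)]
log += [("laptop", "tv", size) for size in (900, 44_000, 3_200, 120)]
print(flag_inference_like_flows(log))  # [('phone', 'tablet')]
```

Of course, a Super-Worm worth the name would eventually pad or jitter its traffic to defeat a heuristic this naive, which is exactly the arms race I am worried about. All the more reason to start building the defenses now.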