As I think more about this, the LLM as a collaborator alone might have a major impact. Just off the top of my head, a kind of Rube Goldberg attack might be <redacted for info hazard>. Thinking about it in one's isolated mind, someone might never consider carrying something like that out. Again, I am trying to model the type of person who carries out a real attack, and I don't estimate that person having above-average self-confidence. I suspect the default is to doubt themselves enough to avoid acting, the same way most people do with their entrepreneurial ideas.
However, if they either presented it to an LLM for refinement, or if the LLM suggested it, there could be just enough of a psychological boost of validation to push them over the edge into trying it. And after the news reports a few "dumb" or "bizarre" or "innovative" attacks succeeding because "AI told these people how to do it," the effect might get even stronger.
To my knowledge, one could have bought an AR-15 since the mid-to-late 1970s. My cousin has a Colt from 1981 he bought when he was 19. Yet people weren't mass shooting each other, even during times when the overall crime/murder rate was higher than it is now. Some confluence of factors has driven the surge, one of them probably being a strong meme: "Oh, this actually tends to *work*." Basically, a type of social proof of efficacy.
And I am willing to bet $100 that the media will report big on the first few cases of "Weird Attacks Designed by AI."
It seems obvious to me that the biggest problems in alignment are going to be the humans, both long before the robots, and probably long after.
Solving for "a viable attack, maximum impact" given an exhaustive list of resources and constraints seems like precisely the sort of thing GPT-4-level AI can solve with aplomb when working hand in hand with a human operator. Take the example of shooting up a substation: humans could probably solve this in a workshop-style discussion with some Operations Research principles applied, but I assume the type of people who want to do those things probably don't operate in such functional and organized ways. When they do, it seems to get very bad.
The LLM can easily supply cross-domain knowledge and think within constraints. With a bit of prompting and brainstorming, one could probably come up with a dozen viable attacks in a few hours. So the lone bad actor doesn't have to assemble a group of five or six people who are intelligent, perhaps educated, and also want to do an attack. I suspect the only reason people aren't already prompting for such methods and then automating them is the existence of guardrails. When truly open-source LLMs reach GPT-4.5 capability with good interfaces to the internet and other software tools (such as phones), we may see a lot of trouble. Fewer people would have the drive and intellect needed (at least early on) to carry out such an attack, but those few could cause very outsized trouble.
TL;DR: The "Fun" starts waaaaaaay before we get to AGI.
So is the hypothetical Puce just otherwise Blue tribers who tolerate or welcome some amount of forbidden talk, media, ideas?
What would you call an educated leftist who has no objection at all to the alt-right or anti-vaxxers speaking freely on Twitter? What about one who is actively bothered when those people get deplatformed or legally interfered with, even when the speakers are truly repugnant, such as neo-Nazis? I have read a few corners of leftist media that express these ideas. Is this Puce, Grey, or something else?
In MBTI terms, you may have an Se blindspot. Se, or "Extraverted Sensing," is just what is right in front of you, what you see. People with high Se tend to be pretty good with status symbols, both reading them and communicating in them (and they also often fall prey to "what you see is all there is" illusions/delusions, as well as "x resembles y enough that x = y, and I'm done with any need for further information").
An Se blindspot can make people basically fail to grok social status cues at all, and "your strong point is your weak point" applies here.
I think OP is painting with a broad brush. However, he probably has a point that social attitudes end up shaping the experience itself. Similar to the above poster talking about age gaps or miscarriages.
A problem with your objection, as well as with any rebuttal to it, is: how would we separate social contagion from the data? If OP is right, the data wouldn't show it; if he's wrong, the data wouldn't show that either. Embedded social attitudes are a matter of the fish not knowing the water in which it swims.
If indeed that water is so thick that OP (as well as several others who have responded) feels it is taboo even to admit their own experience was not traumatizing, then such a deep social fact is also likely to permeate all the data.
Now, in defense of the taboo (like all taboos): sexual molestation is so bad that we don't want to allow any talk that might make it happen more often. The taboo is like a field around a Schelling fence, trying to inoculate everyone against walking even within 200m of that fence. For whatever reason, the taboo also has some utility that should not be dismissed until it is carefully understood.
In other words, it is taboo precisely because talking about it risks pushing us deep into nuances that are themselves risky. In fact, even if OP's position in its broadest, strongest form is fully correct, that wouldn't undo the damage people felt from being molested, and talking about it could hurt more. So the entire topic is likely an infohazard, regardless of the truth value of OP's comment.