Email me at assadiguive@gmail.com if you want to discuss anything I posted here, or just to chat.
I'm not sure that's even true of leading questions. You can ask a leading question for the benefit of other readers who will see the question, understand the objection the question is implicitly raising, and then reflect on whether it's reasonable.
Vietnam was different because it was an intervention on behalf of South Vietnam, which was an American client state, even if the Gulf of Tonkin incident was totally fake. There was no "South Iraq" that wanted American soldiers.
Also, I bet most people who temporarily lose their grip on reality from contact with LLMs return to a completely normal state pretty quickly. I think in most such cases the LLM helps induce temporary hypomania rather than a permanent psychotic condition.
This feels a bit like two completely different posts stitched together: one about how LLMs can trigger or exacerbate certain types of mental illness, and another about why you shouldn't use LLMs for editing, or maybe should only use them sparingly. The primary sources about LLM-related mental illness are interesting, but I don't think they provide much support at all for the second claim.
It took me a minute to read this as an exclamatory O, rather than as "[There are] zero things I would write, were I better at writing."
Can you be more concrete about what "catching the ears of senators" means? That phrase could refer to a lot of very different things, at highly disparate levels of impressiveness.
It is not a paraphrase; the denotation of these sentences is not precisely the same. However, it is also not entirely surprising that these two phrases would evoke similar behavior from the model.
Interesting post. Just so you know, there are a few stray XML tags that aren't rendering properly.
I think the extent to which it's possible to publish without giving away commercially sensitive information depends a lot on exactly what kind of "safety work" it is. For example, if you figured out a way to stop models from reward hacking on unit tests, it's probably to your advantage to not share that with competitors.