Please do not use AI to write for you
I've recently seen several articles here that were clearly generated or heavily assisted by AI. They are all dreadful. They are verbose, they are full of "on the one hand" and "on the other", they never make any assertion without also making room for the opposite, and they end with "conclusions" that say nothing. Please do not do this.

————

I wrote the above, unassisted, and it states my own views. To illustrate the point, I then had GPT-4o rewrite it. Here was the prompt:

I have recently seen a substantial increase in the proportion of postings to a certain online forum that were clearly generated by AI, although not marked as such. The writing is verbose and meandering, full of "on the one hand" and "on the other", never making an assertion without also suggesting its opposite, sounding oh-so-measured and even-handed that it ends up saying nothing at all. Please write an article for me, written in the first person, arguing that AI tools in the current state of the art should never be used for this purpose.

Below the line is the chatbot's extruded meat product. This is the whole of its first output, unedited, and I did not press it to continue.

The first paragraph starts out ok, being merely verbose and repetitive but setting out the basic claim. And then the second paragraph backpedals it past where it began. The third paragraph floats off into the cloud layer, and so on for a total of nine paragraphs and 875 words that would be a waste of anyone's time to read, unless they want to see if their judgement agrees with mine. It did not notice that I was using it in exactly the way I had asked it to argue against, and that the article displays all the faults that it purports to diagnose.

The splash image is a photograph from Wikipedia, CC BY-SA 3.0. I'll spare you the image that GPT-4o came up with, but y'know, I do find the AI illustrations that everyone heads their blog posts with these days pretty annoying. (ETA: Well, there was supposed to be an image, I filled o
Another item for the section on “When Should You Actually Act?”:
Is it actually a good idea? By definition, the thing the naive reasoner has just thought up has no history. (Has there ever been a culture in which coupling was customarily negotiated by the two people using Ask culture rather than Guess culture? Dating apps are all I can think of.) Reasoning is a movement on the map, not in the territory. If the map is wrong, the conclusion may be wrong, no matter how airtight the reasoning.