I think a month-long experiment of this nature, followed by a comprehensive qualitative analysis, tells us far more about persuasion than a dozen narrow, over-simplified studies that attempt to be generalizable (like the one Mollick references, for example). Perhaps that has to do with my epistemic framework, but I generally reject a positivist approach to these kinds of complex problems.
You argue that my price was really low, but I don't think it was. Persuasion is a complex phenomenon, and I'm not really sure what you mean when you say I "sold out my integrity". I think that's just a mean thing to say, and I'm not sure why you would strike such an aggressive tone. The point of this experiment was for people to persuade me, and they succeeded. I was persuaded! What do you think is an acceptable threshold for being persuaded by a crowd? $50k? $500k? Someone coming into my bedroom and credibly threatening to hurt me? At what threshold would I have kept my integrity? C'mon now.
You must not have read to the end of this article.
In response to your criticism of the strict validity of my experiment: in one sense I completely agree. It was mostly performed for fun, not for practical purposes, and I don't think it should be interpreted as some rigorous metric:
Obviously this suggestion was given in jest, is highly imperfect, and I’m sure if you think about it for a second, you can find dozens of holes to poke… ah who cares.
That being said, I do think it yields some qualitative insights that more formalized, social-science-type experiments would be woefully inadequate at generating.
Something like "superhuman persuasion" is loosely defined, resists explicit classification by its own nature, and means different things for different people. On top of that, any strict benchmark for measuring it would be rapidly Goodharted out of existence. So some contrived study like "how well does this AI persuade a judge in a debate, when facing a human," or "which AI can persuade the other first," or something of this nature, is likely to be completely meaningless at determining the superhuman persuasion capabilities of a model.
As to whether AIs inducing trances/psychosis in people is representative of superhuman persuasion, I'm not sure I agree. As Scott Alexander has noted, these kinds of things happen relatively rarely, and forums like LessWrong likely exhibit extremely strong selection effects for the kind of people who become psychotic due to AI. Moreover, I don't think that other psychosis-producing technologies, such as the written word, radio, or colonoscopies, are necessarily "persuading" in a meaningful sense. Even if AI is a much stronger psychosis-generator than earlier technologies that trigger psychosis in people prone to it, I still think that's a different class of problem than superhuman persuasion.
As an aside, some things, like social media, clearly can induce psychosis through the transmission of information that is persuasive, but I think that's also meaningfully different from being persuasive in and of itself, although I didn't get into that whole can of worms in the article.
I agree that the bar I set was not as high as it could have been, and in fact, Joshua on Manifold ran an identical experiment but with the preface that he would be much harder to persuade.
But there will never be some precise, well-defined threshold at which persuasion becomes "superhuman". I'm a strong believer in the wisdom of crowds, and similarly, I think a crowd of people is far more persuasive than an individual. I know I can't prove this, but at the beginning of the market, I'd probably have given myself 80% odds of resolving NO. That is to say, I wanted to put up a strong front and not be persuaded, but I also didn't want to be completely unpersuadable, because then it would have been a pointless experiment. Like, I theoretically could have just turned off Manifold notifications, not replied to anyone, and then resolved the market NO at the end of the month.
For 1), I think the issue is that the people who wanted me to resolve NO were also attempting to persuade me, and they did a pretty good job of it for a while. If the YES persuaders had never really put in an effort, neither would the NO persuaders. And if one side bribes me while the other side also has an interest in the outcome, the other side might just bribe me too.
For 2), the issue is that if you give $10k to Opus to bribe me, is it Opus doing the persuading or is it your hard cash? To whom do we attribute that persuasion?
But I think that's what makes this a challenging concept. Bribery is surely very persuasive, but it incurs a much larger cost than pure text generation, for example. Perhaps the relevant question is "how persuasive is an AI system on a $ per persuasive unit basis?" The challenging parts, of course, are assigning proper dollar values to inputs like time and then operationalizing the "persuasive unit". That latter one ummm... seems quite daunting and imprecise by nature.
Perhaps a meaningful takeaway is that I was persuaded to resolve YES on a market that I thought the crowd had only a 20% chance of persuading me to resolve... and I was persuaded at the expense of $4k to charity (which, by the way, I'm not sure I'm saintly enough to value the same as $4k given directly to me as a bribe), a month's worth of hundreds of interesting comments, some nagging feelings of guilt as a result of those comments and interactions, and some narrative bits that made my brain feel nice.
If an AI can persuade me to do the same for $30 in API tokens and a cute piece of string mailed to my door, perhaps that's some medium evidence that it's superhuman in persuasion.
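To put that "$ per persuasive unit" framing in crude, purely illustrative terms (my own toy accounting, treating "getting this one market resolved" as a single persuasive unit and ignoring the time, guilt, and narrative costs entirely):

$$
\text{cost per persuasive unit} = \frac{\text{dollars spent}}{\text{persuasions achieved}}, \qquad \text{crowd} \approx \frac{\$4{,}000}{1}, \qquad \text{hypothetical AI} \approx \frac{\$30}{1},
$$

i.e. roughly a $4{,}000 / 30 \approx 130\times$ difference in cost-efficiency, with all the obvious caveats about what actually counts as a "persuasive unit" and what the non-monetary inputs are worth.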