I agree that the bar I set was not as high as it could have been, and in fact, Joshua on Manifold ran an identical experiment but with the preface that he would be much harder to persuade.
But there will never be some precise, well-defined threshold for when persuasion becomes "superhuman". I'm a strong believer in the wisdom of crowds, and similarly, I think a crowd of people is far more persuasive than an individual. I know I can't prove this, but at the beginning of the market, I'd have probably given 80% odds to myself resolving NO. That is to say, I had the desire to put up a strong front...
I think a month-long experiment of this nature, followed by a comprehensive qualitative analysis, tells us far more about persuasion than a dozen narrow, over-simplified studies that attempt to be generalizable (such as the one Mollick references, for example). Perhaps that has to do with my epistemic framework, but I generally reject a positivist approach to these kinds of complex problems.
I'm not really sure what your argument is, here? This surely generalizes at least as well as, if not better than, any argument around persuasion could.
Yes, it does. I've set a bar that humans can absolutely meet (because they did). Do you think that an independently operating AI system (of today's capabilities,
In response to your criticism of the strict validity of my experiment: in one sense I completely agree. It was mostly performed for fun, not for practical purposes, and I don't think it should be interpreted as some rigorous metric:
Obviously this suggestion was given in jest, is highly imperfect, and I’m sure if you think about it for a second, you can find dozens of holes to poke… ah who cares.
That being said, I do think it yields some qualitative insights that more formalized, social science-type experiments would be woefully inadequate in generating.
Something like "superhuman persuasion" is loosely defined, resists explicit classification by its own nature, and means different things for different...
One might imagine that if an AI could one day be extremely persuasive—far more persuasive than humans—then this could pose serious risks in a number of directions. Humans already fall victim to all sorts of scams, catfishes, bribes, and blackmails. An AI that can do all of these things extremely competently, in a personally targeted manner, and with infinite patience would create...
Manifold becomes a battleground for a long-standing debate
What is a futarchy and how do we break it?
It’s not often that an entirely new system of government is proposed, which is why Robin Hanson’s turn-of-the-century proposal for a form of government built around the ability of markets to aggregate information has had some staying power in forecasting circles.
Futarchy, as it’s called, proposes:
Continuing to use democratic processes for determining “what it is” that we want. Think values, moral outcomes, and metrics for determining the welfare or growth of a community.
Using betting markets to allow anyone to bet on conditional probabilities for which candidate/policy/decision would lead to the best
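The decision rule behind those conditional markets can be sketched in a few lines. The following is a toy illustration only, with hypothetical policy names and prices, not a description of Hanson's actual market design: each candidate policy has a conditional market pricing the probability that the agreed welfare metric improves if that policy is adopted, and the policy whose market implies the best outcome wins (bets conditional on rejected policies are typically voided).

```python
def choose_policy(conditional_prices: dict[str, float]) -> str:
    """Pick the policy whose conditional market implies the best outcome.

    conditional_prices maps each policy to the market price of
    "welfare metric improves, given this policy is adopted",
    interpreted as a probability in [0, 1].
    """
    for policy, price in conditional_prices.items():
        if not 0.0 <= price <= 1.0:
            raise ValueError(f"price for {policy!r} must be in [0, 1]")
    # Adopt the policy with the highest conditional probability of success.
    return max(conditional_prices, key=conditional_prices.get)


# Hypothetical prices for "welfare metric improves within 5 years":
prices = {"policy_A": 0.62, "policy_B": 0.48}
print(choose_policy(prices))  # -> policy_A
```

In a real futarchy the comparison would run over expected values of the metric rather than a single binary outcome, but the same argmax-over-conditional-markets logic applies.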