RobinHanson
Comments
Futarchy's fundamental flaw
RobinHanson · 22d · 30

You say that markets give evidential conditionals while decisions want causal conditionals. For this comment I'm not taking a position on which conditional we want for decisions; I'm just saying that the trades and the decision they advise should use the same conditional, whichever that turns out to be.
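For concreteness, one standard way to write the two conditionals, using Pearl-style do-notation for the causal one:

$$P(X \mid A) \quad \text{(evidential)} \qquad \text{vs.} \qquad P(X \mid \mathrm{do}(A)) \quad \text{(causal)}$$

The two can come apart whenever something correlates with both the choice of A and the outcome X, e.g. decision-makers tending to pick A only in situations where X already looks likely.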

Futarchy's fundamental flaw
RobinHanson · 23d · 20

Market estimates will converge to the most profitable P(X if A), the one that wins bets vs other versions. And that is the version you want to use when you make decisions.
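A rough sketch of the arithmetic, assuming the usual called-off-bet setup where a conditional trade is refunded if A is not chosen, and ignoring any effect of the trade itself on the decision: buying a share of "X if A" at price q pays 1 − q if A and X both happen, −q if A happens without X, and 0 otherwise, so

$$\mathbb{E}[\text{profit}] = P(A \wedge X)(1 - q) - P(A \wedge \neg X)\, q = P(A)\,\big(P(X \mid A) - q\big),$$

which only goes to zero at q = P(X | A); the version of the conditional that best matches how X actually turns out when A is chosen is the one that keeps winning these bets.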

Futarchy's fundamental flaw
RobinHanson · 23d · 242

I respond here: https://www.overcomingbias.com/p/decision-conditional-prices-reflect

Futarchy using a sealed-bid auction to avoid liquidity problems
RobinHanson · 24d · 50

"if a investor doesn't review a proposal, we assume that they are submitting an unconditional sell bid."  Of ALL of their shares, at any price? Seems a way to force a sale at a low price. 

Also, call markets don't aggregate info as well as continuous double auctions, and you aren't offering any incentives to find and add info.
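For readers who haven't seen the two mechanisms side by side: a sealed-bid call market collects all orders first and clears them at one uniform price, so nobody learns anything from earlier trades, while a continuous double auction matches orders as they arrive and reveals information trade by trade. A minimal sketch of uniform-price call-auction clearing, with made-up orders and a hypothetical `call_auction_clearing_price` helper, not anything from the proposal itself:

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price per share
    qty: int      # number of shares

def call_auction_clearing_price(orders):
    """Clear a sealed-bid call auction: pick the single uniform price that
    maximizes matched volume (ties broken toward the lowest price).

    All orders are batched and cleared at once, so no trader can condition
    on earlier trades, unlike in a continuous double auction.
    """
    best_price, best_volume = None, -1
    for p in sorted({o.price for o in orders}):
        demand = sum(o.qty for o in orders if o.side == "buy" and o.price >= p)
        supply = sum(o.qty for o in orders if o.side == "sell" and o.price <= p)
        if min(demand, supply) > best_volume:
            best_price, best_volume = p, min(demand, supply)
    return best_price, best_volume

if __name__ == "__main__":
    book = [
        Order("buy", 0.62, 10), Order("buy", 0.55, 20),
        Order("sell", 0.50, 15), Order("sell", 0.60, 15),
    ]
    price, volume = call_auction_clearing_price(book)
    print(f"clearing price {price}, matched volume {volume}")  # 0.5, 15
```

In a continuous double auction the same orders would instead trade the moment a bid crossed an ask, and each fill would be visible to later traders, which is where much of the info aggregation comes from.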

How prediction markets can create harmful outcomes: a case study
RobinHanson · 3mo · 60

Metaculus wouldn't work if it didn't offer incentives for participants. The fact that they aren't monetary doesn't mean they won't induce the same sort of problems you worry about above.

Surely we should compare, for particular topics, the magnitude of actual sabotage to the magnitude of the info value gained. And there are many ways to design markets to reduce the rate of sabotage. 

Contra Hanson on AI Risk
RobinHanson · 2y · 77

Seems to me I spent a big % of my post arguing against the rapid growth claim. 

Contra Hanson on AI Risk
RobinHanson · 2y · 25

Come on, most every business tracks revenue in great detail. If customers were getting unhappy with the firm's services and rapidly switching en masse, the firm would quickly become very aware and would look into the problem in great detail.

Contra Hanson on AI Risk
RobinHanson · 2y · 106

You complain that my estimating rates from historical trends is arbitrary, but you offer no other basis for estimating such rates; you only appeal to uncertainty. And there are several other assumptions required for this doomsday scenario. If logical possibility is all you have to argue for piling on several a priori unlikely assumptions, it gets hard to take that seriously.

Contra Hanson on AI Risk
RobinHanson · 2y · 2216

You keep invoking the scenario of a single dominant AI that is extremely intelligent. But that only happens AFTER a single AI fooms to become much better than all other AIs. You can't invoke its superintelligence to explain why its owners fail to notice and control its early growth.

Replicating and extending the grabby aliens model
RobinHanson · 3y · 70

I comment on this paper here: https://www.overcomingbias.com/2022/07/cooks-critique-of-our-earliness-argument.html

Wikitag Contributions

No wikitag contributions to display.
Posts

37 · What Evidence Is AlphaGo Zero Re AGI Complexity? · 7y · 44
36 · What Program Are You? · 16y · 43
31 · Least Signaling Activities? · 16y · 103
28 · Rationality Toughness Tests · 16y · 17
68 · Most Rationalists Are Elsewhere · 16y · 34
164 · Rational Me or We? · 16y · 156
36 · The Costs of Rationality · 16y · 81
44 · Test Your Rationality · 16y · 87