Robert_AIZI


I think Yair is saying that people putting in money randomly is what allows "beating the market" to be profitable. Isn't the return on beating the market proportional to the size of the market? In which case, if more people put money into the prediction markets suboptimally, this would be a moneymaking opportunity for professional forecasters, and you could get more/better information from the prediction markets.

This might not be the problem you're trying to solve, but I think if predictions markets are going to break into normal society they need to solve "why should a normie who is somewhat risk-averse, doesn't enjoy wagering for its own sake, and doesn't care about the information externalities, engage with prediction markets". That question for stock markets is solved via the stock market being overall positive-sum, because loaning money to a business is fundamentally capable of generating returns.

Now let me read your answer from that perspective:

users would not bet USD but instead something which appreciates over time or generates income (e.g. ETH, Gold, S&P 500 ETF, Treasury Notes, or liquid and safe USD-backed positions in some DeFi protocol)

Why not just hold Treasury Notes or my other favorite asset? What does the prediction market add?

use funds held in the market to invest in something profit-generating and distribute part of the income to users

Why wouldn't I just put my funds directly into something profit-generating?

positions are used to receive loans, so you can free your liquidity from long (timewise) markets and use it to e.g. leverage

I appreciate that less than 100% of my funds will be tied up in the prediction market, but why tie up any? 

The practical problem is that the zero-sum monetary nature of prediction markets disincentives participation (especially in year+ long markets) because on average it's more profitable to invest in something else (e.g. S&P 500). It can be solved by allowing to bet other assets, so people would bet their S&P 500 shares and on average get the same expected value, so it will be not disincentivising anymore.

But once I have an S&P 500 share, why would I want to put it in a prediction market (again, assuming I'm a normie who is somewhat risk-averse, etc.)?

Surely, they would be more interested if they had free loans (of course they are not going to be actually free, but they can be much cheaper than ordinary uncollateralized loans). 

So if I put $1000 into a prediction market, I can get a $1000 loan (or a larger loan using my $1000 EV wager as collateral)? But why wouldn't I just get a loan using my $1000 cash as collateral?

Overall, I feel you've listed several mechanisms that mitigate potential downsides of prediction markets, but they still pull in a negative direction, and there's no solid upside for a regular person who doesn't want to wager money for wagering's sake, doesn't think they can beat the market, and is somewhat risk-averse (which I think describes a huge portion of the public).

Also, there are many cases where positive externalities can be beneficial for some particular entity. For example, an investment company may want to know about the risk of a war in a particular country to decide if they want to invest in the country or not. In such cases, the company can provide rewards for market participants and make it a positive-sum game for them even from the monetary perspective.

This I see as workable, but runs into a scale issue and the tragedy of the commons. Let's make up a number and say the market needs a 1% return on average to make it worthwhile after transaction fees, time investment, risk, etc. Then $X of incentive could motivate $100X of prediction market. But I think the issue of free-riders makes it very hard to scale X so that $100X ≈ [the stock market].
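To make the scale issue concrete, here's the back-of-envelope arithmetic from above as a sketch (the 1% required return is the made-up number from the comment, and the ~$50T figure for the US stock market is a rough order-of-magnitude assumption, not a sourced statistic):

```python
# Hypothetical subsidy-scaling arithmetic: a $X subsidy can sustain
# roughly $100X of prediction-market volume if traders need a 1% return.
required_return = 0.01        # made-up: return needed to justify participation
subsidy = 1_000_000           # $X: the pooled subsidy from beneficiaries
market_size = subsidy / required_return
print(market_size)            # ~$100M of market sustained by a $1M subsidy

# To sustain a market comparable to the US stock market (~$50T, rough guess),
# the pooled, free-rider-prone subsidy would need to be enormous:
target_market = 50e12
annual_subsidy = target_market * required_return
print(annual_subsidy)         # hundreds of billions per year
```

The point of the sketch is just that the subsidy scales linearly with the desired market size, so free-riding on a fixed pool caps the market well below stock-market scale.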

Overall, in order to make prediction markets sustainably large, I feel like you'd need some way to internalize the positive information externalities generated by them. I think most prediction markets are not succeeding at that right now (judging from them not exploding in popularity), but maybe there would be better monetization options if they weren't basically regulated out of existence.

Cool to hear from someone at Manifold about this! I agree the information and enjoyment value can make prediction markets worthwhile (and even pro-social), but if they have zero net monetary value, that surely limits their reach. I appreciate prediction markets from a perspective of "you know all that energy going into determining stock prices? That could be put towards something more useful!", but I worry they won't live up to that without a constant flow of money.

Subsidization doesn’t lead to increased activity in practice unless it makes the market among the top best trading opportunities.

That's really interesting! Is there a theory for why this happens? Maybe traders aren't fully rational in which markets they pursue, or small subsidies move markets to "not worth it" from "very not worth it"?

While it's true that they don't generate income and are zero-sum games in a strictly monetary sense, they do generate positive externalities.
...
The practical problem is that the zero-sum monetary nature of prediction markets disincentives participation

I think we're in agreement here. My concern is "prediction markets could be generating positive externalities for society, but if they aren't positive-sum for the typical user, they will be underinvested in (relative to what is societally optimal), and there may be insufficient market mechanisms to fix this". See my other comment here.

Thanks for the excellent answer!

On first blush, I'd respond with something like "but there's no way that's enough!" I think I see prediction markets as (potentially) providing a lot of useful information publicly, but needing a flow of money to compensate people for risk-aversion, the cost of research, and to overcome market friction. Of your answers:

  • Negative-sum betting probably doesn't scale well, especially to more technical and less dramatic questions.
  • Subsidies make sense, but could they run into a tragedy-of-the-commons scenario? For instance, if a group of businesses want to forecast something, they could pool their money to subsidize a prediction market. But there would be incentive to defect by not contributing to the pool, and getting the same exact information since the prediction market is public - or even to commission a classical market research study that you keep proprietary.
  • Hedging seems fine.

If that reasoning is correct, prediction markets are doomed to stay small. Is that a common concern (and on which markets can I wager on that? :P)

In a good prediction market design users would not bet USD but instead something which appreciates over time or generates income (e.g. ETH, Gold, S&P 500 ETF, Treasury Notes, or liquid and safe USD-backed positions in some DeFi protocol).

Isn't this just changing the denominator without changing the zero- or negative-sum nature? If everyone shows up to your prediction market with 1 ETH instead of $1k, the total amount of ETH in the market won't increase, just as the total amount of USD would not have increased. Maybe "buy ETH and gamble it" has a better expected return than holding USD, but why would it have a better expected return than "buy ETH"? Again, this is in contrast to a stock market, where "give a loan to invest in a long-term-profitable-but-short-term-underfunded business" is positive-sum in USD terms (as long as the business succeeds), and can remain positive sum when averaged over the whole stock market.
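The conservation point here can be sketched in a few lines (the stakes, resolution, and names are all illustrative, not any real market's mechanics):

```python
# Sketch: a prediction market's payouts redistribute the pot, so the total
# is conserved regardless of which asset denominates the pot.
stakes_eth = {"alice": 1.0, "bob": 1.0, "carol": 1.0}  # everyone bets 1 ETH
pot = sum(stakes_eth.values())

# Suppose the market resolves YES and Alice held all the YES shares:
payouts = {"alice": pot, "bob": 0.0, "carol": 0.0}

assert sum(payouts.values()) == pot  # total ETH unchanged: still zero-sum
# ETH may appreciate against USD, but "buy ETH and bet it" can't beat
# "buy ETH and hold" in expectation, for exactly this reason.
```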

Also, Manifold solves it in a different way -- positions are used to receive loans, so you can free your liquidity from long (timewise) markets and use it to e.g. leverage.

I must confess I don't understand what you mean here. If 1000 people show up with $1000 each, and wager against each other on some predictions that resolve in 12 months, are you saying they can use those positions as capital to get loans and make more bets that resolve sooner? I can see how this would let the total value of the bets in the market sum to more than $1M, but once all the markets resolve, the total wealth would still be $1M, right? I guess if someone ends up with negative value and has to pay cash to pay off their loan, that brings more dollars into the market, but it doesn't increase the total wealth of the prediction market users.
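My understanding of the loan mechanism, as a toy sketch (the 50% loan ratio is a hypothetical parameter, not Manifold's actual policy):

```python
# Sketch: loans against open positions let total *stakes* exceed the cash
# that entered the market, but resolution still conserves total wealth.
n_users, deposit = 1000, 1000
cash_in = n_users * deposit               # $1,000,000 enters the market

loan_ratio = 0.5                          # hypothetical: borrow 50% of position value
gross_stakes = cash_in * (1 + loan_ratio) # $1.5M of bets open at once

# Every bet's winnings come from another bettor's losses, so after all
# markets resolve and loans are repaid, total user wealth is unchanged:
total_wealth_after = cash_in
print(gross_stakes, total_wealth_after)
```

So leverage raises the volume of open bets, not the total wealth the users walk away with, which is the distinction I was trying to draw.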

GPT is decoder only. The part labeled as "Not in GPT" is decoder part. 

I think both of these statements are true. Despite this, I think the architecture shown in "Not in GPT" is correct, because (as I understand it) "encoder" and "decoder" are interchangeable unless both are present. That's what I was trying to get at here:

4. GPT is called a “decoder only” architecture. Would “encoder only” be equally correct? From my reading of the original transformer paper, encoder and decoder blocks are the same except that decoder blocks attend to the final encoder block. Since GPT never attends to any previous block, if anything I feel like the correct term is “encoder only”.

See this comment for more discussion of the terminology.
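A toy single-head attention sketch of the terminology point (NumPy, made-up dimensions): in the original transformer, decoder blocks differ from encoder blocks by (a) the causal mask and (b) cross-attention to the encoder output; GPT keeps (a) and drops (b), so the remaining block is structurally an encoder block plus a mask.

```python
import numpy as np

# Toy causal self-attention: seq_len=3, d_model=4, one head, random weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                       # (seq, d_model)
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(4)

# (a) the causal mask: each token attends only to itself and the past.
causal_mask = np.triu(np.ones((3, 3), dtype=bool), k=1)
scores[causal_mask] = -np.inf

weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ v  # (b) absent: no cross-attention to any encoder output

assert np.allclose(np.tril(weights), weights)     # strictly causal
```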

Thanks, this is a useful corrective to the post! To shortcut safety to "would I trust my grandmother to use this without bad outcomes", I would trust a current-gen LLM to be helpful and friendly with her, but I would absolutely fear her "learning" factually untrue things from it. While I think it can be useful to have separate concepts for hallucinations and "intentional lies" (as another commenter argues), I think "behavioral safety" should preclude both, in which case our LLMs are not behaviorally safe.

I think I may have overlooked hallucinations because I've internalized that LLMs are factually unreliable, so I don't use LLMs where accuracy is critical, so I don't see many hallucinations (which is not much of an endorsement of LLMs).

Asking for some clarifications:
1. For both problems, should the solution work for an adversarially chosen set of m entries?
2. For both problems, can we read more entries of the matrix if it helps our solution? In particular, can we assume WLOG that we know the diagonal entries, in case that helps in some way?

I agree my headline is an overclaim, but I wanted a title that captures the direction and magnitude of my update from fixing the data. On the bugged data, I thought the result was a real nail in the coffin for simulator theory - look, it can't even simulate an incorrect-answerer when that's clearly what's happening! But on the corrected data, the model is clearly "catching on to the pattern" of incorrectness, which is consistent with simulator theory (and several non-simulator-theory explanations). Now that I'm actually getting an effect, I'll be running experiments to disentangle the possibilities!
