The financial terms aren't good enough to entice me. Besides that…
Pretty much all of your weird explanations are too vague. In particular, "[s]ome other explanation that's of this level of 'very weird'" voids the whole thing. It'd be fine for a blog post, but not as a prediction's resolution criterion.
"I reserve the right to appeal to the LW community. [I will not abuse this right]" is too vague, too. The LW community is not a monolithic entity. I think you need to specify exactly how you plan to appeal to the LW community.
Nope. That's a separate bet. I'd happily bet against that (given good enough terms to overcome friction), but that's still originating from Earth.
Yes, I am implying that it's the only recourse allowed. Doing otherwise exposes me to asymmetric litigation risk, due to the extreme asymmetry of the bet amounts. I believe reputation is a sufficient motivator, given how much effort I've spent accumulating reputation on this website.
I respect your offer, but I'd need much better terms from you than from lc, someone I've interacted with before and with whom I have established a level of trust.
Some thoughts:
Thank you for the offer. I think your offer is reasonable. The problem is that $10 is too low a price for "something I have to remember for a year". In theory, this could be fixed by increasing the wager amount, but $100k is above my risk limit for a bet (even something as simple as "the sun will rise tomorrow").
I think we've both established a market spread…which is kind of the point of this exercise. You get skin-in-the-game points for maxing out the market's available liquidity at a 0.1% price point.
There are a few other details I thought of since my last ...
I will accept under the following conditions:
My Offer:
Here's the short version: Suppose you think a prediction is mispriced but it's distant in the future. Instead of buying credits that pay out on resolution, you buy futures instead. You don't have to tie up capital, since payment is due on resolution instead of upfront. Your asset equals your liability. There is no beta.
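A toy numeric sketch of the mechanics described above (hypothetical prices, not the spec of any actual contract):

```python
def futures_pnl(strike: float, settle: float, contracts: int) -> float:
    """Cash that changes hands at resolution; nothing is paid upfront."""
    return (settle - strike) * contracts

# You think a contract trading at 0.30 is really worth 0.55, but it
# resolves years from now. Buy 100 futures at 0.30: no capital tied up.
yes_case = futures_pnl(0.30, 1.00, 100)  # approx +70 if it resolves YES
no_case = futures_pnl(0.30, 0.00, 100)   # approx -30 if it resolves NO
```

Either way, the full amount is settled only at resolution, which is the "you don't have to tie up capital" point.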
Financial derivatives solve the short-termism problem in traditional securities markets. If you use them in prediction markets, then they will (theoretically) do the exact same thing, by (theoretically) operating the exact same way. In practi...
"Predictive market derivatives" is on my list of things I should write about. What, precisely, do you mean by short-termism? Do you mean how placing long-term bets in prediction markets ties up capital that could otherwise be put to use? (I think I understand you, but I want to be certain first.)
Financial derivatives work the same way in prediction markets as they do on existing securities markets. I already wrote a little bit about derivatives in existing securities markets, but am not sure that post fully answers your question.
My personal confidence in "no aliens" is so high it rounds to 100%. Placing a bet is basically just a loan with a weird "if aliens are real I don't get paid back" clause tacked on. The real question then is "at what rate am I willing to lend $X0,000 for 10 years to a stranger?" If you can guarantee that I'll beat the stock market by 5% (conditional on no aliens) then I'm good to go, but the "guarantee" is very important. It needs to take into account things like bankruptcy on your end. I don't think our spread is wide enough to make that feasible. It'd take a lot of paperwork.
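To make the loan framing concrete (all numbers hypothetical: a $50k stake standing in for "$X0,000", and an assumed 7% annual market return):

```python
principal = 50_000                               # hypothetical stand-in for "$X0,000"
market = 0.07                                    # assumed annual stock-market return
premium = 0.05                                   # the guaranteed 5% edge over the market
owed = principal * (1 + market + premium) ** 10  # what the bet must pay back in 10 years
index = principal * (1 + market) ** 10           # counterfactual: just buy stocks
# owed is roughly 155,292 vs. index roughly 98,358: the "guarantee" is doing
# a lot of work, which is why 10 years of counterparty bankruptcy risk dominates.
```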
This is why we need real, formal, legal prediction markets, with derivatives. They would solve all of these problems.
Most bets I see are on the order of $10-$1000 which, according to the Kelly Criterion, implies negligible confidence. I'm willing to bet substantially more than that.
If we had a real prediction market with proper derivatives, low fees, high liquidity, reputable oracles, etcetera, then I'd just use the standard exchange, but we don't. Consequently, market friction vastly outweighs actual probabilities in importance.
...Perhaps you mean that the other person should come up with the odds, and then you'll determine your bet amount using the Kelly criterion, assu
That is an honorable offer (I appreciate it, really), but it has negative expected value for me due to counterparty risk, friction, variance, etcetera. (See bayesed's comment.) I'd need substantially better odds for the expected profit to exceed the friction.
I'm willing to bet five figures, in theory, but there's a ton of factors that need to be accounted for like capital tie-up, counterparty risk, the value of my time, etc. So if your odds aren't lower than 90%, then it's probably not even worthwhile to bet. Too much friction.
I am willing to publicly bet you at 99% odds that, within the next 10 years, there will be no conclusive proof that we have been visited by craft of intelligent, nonhuman origin. I am willing to bet according to the Kelly Criterion, which means I am willing to bet a significant fraction of my total net worth.
[Edit: This gets really complicated really fast. I mean that I'm willing to publicly bet at 99% implied odds on my side, after various costs and risks are factored in. The various costs and risks far outweigh my <1% chance of losing the bet for mundane reasons. A counterparty to this bet would need confidence in "no UFOs" far below 99% (i.e., a UFO probability well above 1%) for the bet to make sense.]
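For concreteness, a minimal sketch of the Kelly calculation (my illustrative numbers, not terms of the bet): with win probability p and net odds b, the Kelly fraction is f* = p - (1 - p)/b. Laying 99:1 means staking 99 to win 1, i.e. b = 1/99, so at exactly 99% confidence Kelly stakes nothing; betting a significant fraction of net worth implies confidence well above 99%:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to stake.

    p: probability the bet wins; b: net odds (profit per unit staked).
    """
    return p - (1 - p) / b

# Laying 99:1 against aliens: stake 99 to win 1, so b = 1/99.
fair = kelly_fraction(0.99, 1 / 99)    # approx 0.0: at fair odds, bet nothing
edge = kelly_fraction(0.999, 1 / 99)   # approx 0.9: 99.9% confidence says
                                       # stake ~90% of bankroll
```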
"Perhaps you might point to some examples of how it’s best applied?" ⇒ "I'd be curious to read some examples of how it’s best applied."
By changing from a question to a statement, the request for information is transferred from a single person [me] to anyone reading the comment thread. This results in a diffusion of responsibility, which reduces the implicit imposition placed on the original parent.
Another advantage of using statements instead of questions is that they tend to direct me toward positive claims, instead of just making demands for rigor. This avoids some of the more annoyingly asymmetric aspects of Socratic dialogue.
I also tend to write concisely. A trick I often use is writing statements instead of questions. I feel statements are less imposing, since they lack the same level of implicit demand that they be responded to.
One solution is to limit the number of banned users to a small fraction of overall commenters. I've written 297 posts so far and have banned only 3 users from commenting on them. (I did not ban Duncan or Said.)
My highest-quality criticism comes from users who I have never even considered banning. Their comments are consistently well-reasoned and factually correct.
I think the most virtuous solution to your hypothetical is to say "I don't know anything about existential risk, but I'd bet at 75% confidence that a mathematician will prove that 2+2≠5" (or something along those lines).
Your comment is contingent on several binary possibilities about my intentions. I appreciate your attempt to address all leaves of the decision tree. Here I will help limit the work you have to do by pinning things down.
To clarify,
My post serves one purpose: to register a public prediction. I am betting reputation. But it makes no sense to bet reputation on something everyone agrees on. It only makes sense to bet on things people disagree on. I'm hoping people will make counter-predictions because that can help verify, in the future, that the claims I made...
Thanks. These seem like good definitions. They actually set the bar high for your prediction, which is respectable. I appreciate you taking this seriously.
If you'll permit just a little bit more pedantic nitpicking, do you mind if I request a precise definition of nanotech? I assume you mean self-replicating nanobots (grey goo) because, technically, we already have nanotech. However, putting the bar at grey goo (potential, of course—the system doesn't have to actually make it for real) might be setting it above what you intended.
What are your definitions for "Strong AGI", "the alignment problem succeed[s]" and "humanity survives"?
It seems to me like that benefit is very marginal compared to the simple expected value loss if the world survives.
Edit: Caplan wins alpha but Yudkowsky wins mere liquidity.
You're right. I have edited the post to remove the bit about tying up capital. That part was wrong.
$100 is only useful if you spend it or invest it. If Yudkowsky spends it, that means he burned through all his other investments and bankrupted himself. And it makes no sense to invest it, because he only keeps the money if the world ends and investments become worthless.
I think the right place for this discussion is on my other post The Caplan-Yudkowsky End-of-the-World Bet Scheme Doesn't Actually Work.
Not directly, but that's not quite what I'm getting at. What I really mean is that Eliezer is honor-bound to maintain enough collateral to repay Caplan in case the world survives. [Edit: lc pointed out a flaw in that logic.]
One way Eliezer could benefit from this bet (which I left out in my post) is by spending literally all of his money and then suddenly earning back the $200 right before he has to repay Caplan. But that would require him to literally bankrupt himself, which outweighs any benefit he could accrue from this bet. (Except in the world I alre...
I guess the basic answer to my question is that you're quite motivated by biological plausibility. There are many reasons why this might be, so I shouldn't guess at the specific motives.
You're right. I want to know how my own brain works.
But if you're more interested in a broader mathematical understanding of how intelligence, in general, works, then that could explain some of our motivational disconnect.
Here's a quote from Ziz's post My Journey to the Dark Side.
Reject morality. Never do the right thing because it’s the right thing. Never even think that concept or ask that question unless it’s to model what others will think. And then, always in quotes. Always in quotes and treated as radioactive. Make the source of sentiment inside you that made you learn to care about what was the right thing express itself some other way.
Here's a quote from Ziz's post Neutral and Evil.
...If you’re reading this and this is you, I recommend aiming for lawful evil. K
Ziz's blog is openly, obviously evil. And not in a fun, trolly way—or even inadvertent mundane evil. Boringly explicit evil, with literal endorsement of the Star Wars Sith religion.
Thank you for these. They used to be my best source of COVID information. Technically they still are, but I have stopped reading them since the information is no longer important enough. I look forward to reading the other stuff you write.
I appreciate your epistemic honesty regarding the historical record.
As for the theory of wireheading, I think it's drifting away from the original topic of my post here. I created a new post Self-Reference Breaks the Orthogonality Thesis which I think provides a cleaner version of what I'm trying to say, without the biological spandrels. If you want to continue this discussion, I think it'd be better to do so there.
Thank you for the explanations. They were crystal-clear.
[W]hat's so interesting about the PP analysis of the Pong experiment, then, if you agree that the random-noise-RL thing doesn't work very well compared to alternatives? IE, why derive excitement about a particular deep theory about intelligence (PP) based on a really dumb learning algorithm (noise-based RL)?
What alternatives? Do you mean like flooding the network with neurotransmitters? Or do you mean like the stuff we use in ML? There's lots of better-performing algorithms that you can implement ...
If you want to get into that level of technical granularity then there are major things that need to change before applying the PP methodology in the paper to real biological neurons. Two of the big ones are brainwave oscillations and existing in the flow of time.
Mostly what I find interesting is the theory that the bulk of animal brain processing goes into creating a real-time internal simulation of the world, that this is mathematically plausible via forward-propagating signals, and that error and entropy are fused together.
When I say "free energy minimization" I mean the idea that error and surprise are fused together (possibly with an entropy minimizer thrown in).
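In symbols (the standard variational decomposition; my notation, not from the original comment):

```latex
F \;=\; \underbrace{-\log p(o)}_{\text{surprise}}
\;+\; \underbrace{D_{\mathrm{KL}}\big(q(s)\,\|\,p(s \mid o)\big)}_{\text{approximation error}}
```

Minimizing F pushes down both terms at once, which is the sense in which error and surprise are fused.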
Good point. I've recently been talking with someone whose native language isn't English and we've been using "pretty" as the imprecise translation of a non-gender-specific adjective. I have removed the word "sexy" entirely.
The world model is actually an integral, but it can be approximated by searching for several good hypotheses instead of integrating over all hypotheses.
Can you tell me what you mean by this statement? When you say "integral" I think "mathematical integral (inverse of derivative)" but I don't think that's what you intend to communicate.
There's a lot of stuff that scares me about that post.
Resolution Criteria
- Suppose your counterparty bets at 200:1 odds. Suppose the odds of a LW poll getting trolling results are >0.5%. Then your counterparty loses all of their alpha on that alone (because an incorrect result costs them 200× more than it costs you).
- "I reserve the right to appeal to the LW community to adjudicate resolution if I believe I am being stiffed." is too vague. If you don't specify exactly how you plan for the LW community to adjudicate resolution, then that's just asking fo
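The 200:1 arithmetic spelled out (0.5% misresolution chance is my illustrative assumption):

```python
# Even a counterparty who is CERTAIN to be right on the object level
# loses in expectation once a 0.5% misresolution chance is priced in.
p_wrong = 0.005          # assumed chance the LW poll resolves incorrectly
win, lose = 1, 200       # at 200:1, a wrong resolution costs 200x the payoff
ev = (1 - p_wrong) * win - p_wrong * lose
# ev is approx -0.005: the resolution risk alone consumes the entire edge
```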