Another (outer) alignment failure story

The ending of the story feels implausible to me, because it doesn't explain why the story doesn't side-track onto some other seemingly more likely failure mode first. (Now that I've re-read the last part of your post, it seems like you've had similar thoughts already, but I'll write mine down anyway. Also it occurs to me that perhaps I'm not the target audience of the story.) For example:

  1. In this story, what is preventing humans from going collectively insane due to nations, political factions, or even individuals blasting AI-powered persuasion/propaganda at each other? (Maybe this is what you meant by "people yelling at each other"?)

  2. Why don't AI safety researchers try to leverage AI to improve AI alignment, for example implementing DEBATE and using that to further improve alignment, or just an ad hoc informal version where you ask various AI advisors to come up with improved alignment schemes and to critique/defend each other's ideas? (My expectation is that we end up with one or multiple sequences of "improved" alignment schemes that eventually lock in wrong solutions to some philosophical or metaphilosophical problems, or have some other problem that is much subtler than the kind of outer alignment failure described here.)

My research methodology

Why did you write "This post [Inaccessible Information] doesn't reflect me becoming more pessimistic about iterated amplification or alignment overall." just one month before publishing "Learning the prior"? (Is it because you were classifying "learning the prior" / imitative generalization under "iterated amplification" and now you consider it a different algorithm?)

For example, at the beginning of modern cryptography you could describe the methodology as “Tell a story about how someone learns something about your secret” and that only gradually crystallized into definitions like semantic security (and still people sometimes retreat to this informal process in order to define and clarify new security notions).

Why doesn't the analogy with cryptography make you a lot more pessimistic about AI alignment, as it did for me?

The best case is that we end up with a precise algorithm for which we still can’t tell any failure story. In that case we should implement it (in some sense this is just the final step of making it precise) and see how it works in practice.

Would you do anything else to make sure it's safe, before letting it become potentially superintelligent? For example would you want to see "alignment proofs" similar to "security proofs" in cryptography? What if such things do not seem feasible or you can't reach very high confidence that the definitions/assumptions/proofs are correct?

(USA) N95 masks are available on Amazon

You seem pretty knowledgeable in this area. Any thoughts on the mask that is linked to in my post, the Kimberly-Clark N95 Pouch Respirator? (I noticed that it's being sold by Amazon at 1/3 the price of the least expensive N95 mask on your site.)

Chinese History

Can you try to motivate the study of Chinese history a bit more? (For example, I told my grandparents' stories in part because they seem to offer useful lessons for today's world.) To me, the fact that 6 out of the 10 most deadly wars were Chinese civil wars does not by itself constitute strong evidence that systematically studying Chinese history is a highly valuable use of one's time. It could just mean that China had a large population, had a long history, and/or had a form of government prone to civil wars. The main question I have is whether its history offers any useful lessons or models that someone isn't likely to have already learned from studying other human history.

(USA) N95 masks are available on Amazon

You could try medical tape and see if you can seal the mask with it, without shaving your beard.

Tips/tricks/notes on optimizing investments

When investing in an individual stock, check its borrow rate for short selling. If it's higher than, say, 0.5%, that means short sellers are willing to pay a significant amount to borrow the stock in order to short it, so you might want to think twice about buying the stock in case they know something you don't. If you still want to invest in it, consider using a broker that has a fully paid lending program to capture part of the borrow fees from short sellers, or writing in-the-money puts on the stock instead of buying the common shares. (I believe the latter tends to net you more of the borrow fees, in the form of extra extrinsic value on the puts.)
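As a rough illustration of how a borrow fee shows up as extra extrinsic value on an in-the-money put (all prices and rates below are hypothetical, and interest and dividends are ignored):

```python
# Hypothetical prices for a hard-to-borrow stock; not from the comment above.
stock_price = 20.00
strike = 30.00          # deep in-the-money put
put_price = 10.80       # market price of the put

intrinsic = max(strike - stock_price, 0.0)  # 10.00
extrinsic = put_price - intrinsic           # ~0.80 of "extra" premium

# A short seller would pay roughly borrow_rate * stock_price * T to hold
# the short over the option's life; put sellers capture much of that fee
# as extrinsic value, via put-call parity.
borrow_rate = 0.15      # 15% annualized borrow fee (assumed)
T = 0.25                # three months to expiry
approx_borrow_fee = borrow_rate * stock_price * T  # ~0.75
```

Here the put's extrinsic value is comparable to the borrow fee a short seller would pay over the option's life, which is the sense in which the put writer "captures" the fee.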

Anti-EMH Evidence (and a plea for help)

In addition to jmh's explanation, see covered call. Also, normally when you do a "buy-write" transaction (see the linked article), you're taking on the risk that the stock falls by more than the premium received for the call option; but in this case, if that were to happen, I could recover any losses by holding the stock until redemption. And to clarify: because the call options I sold expired in November without being exercised, I'm still able to capture any subsequent gains.
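A minimal sketch of the buy-write payoff with a SPAC redemption floor (the entry price, strike, and premium below are made-up numbers for illustration):

```python
def buy_write_pnl(entry, strike, premium, expiry_price, floor=10.0):
    """P&L per share of buying stock at `entry` and selling a call at `strike`.

    If the stock finishes above the strike, the shares are called away at the
    strike. If it finishes below, we keep the shares; for a pre-merger SPAC,
    redemption effectively puts a floor (about $10) under their value.
    """
    if expiry_price >= strike:
        stock_value = strike                    # shares called away
    else:
        stock_value = max(expiry_price, floor)  # hold to redemption if needed
    return stock_value - entry + premium

# Hypothetical: buy at 10.50, sell an 11-strike call for 0.60.
upside = buy_write_pnl(10.50, 11.0, 0.60, 12.0)  # ~ +1.10: capped upside
downside = buy_write_pnl(10.50, 11.0, 0.60, 9.0)  # ~ +0.10: floor limits loss
```

The second case shows why the usual buy-write risk mostly disappears here: even if the stock drops well below the entry price, holding to redemption recovers the floor.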

Anti-EMH Evidence (and a plea for help)
  • I'm now selling at-the-money call options against my remaining SPAC shares, instead of liquidating them, in part to capture more upside and in part to avoid realizing more capital gains this year.
  • Once the merger happens (or rather 2 days before the meeting to approve the merger, because that's the redemption deadline), there is no longer a $10 floor.
  • Writing naked call options on SPACs is dangerous because too many people do that when they try to arbitrage between SPAC options and warrants, causing the call options to have negative extrinsic value, which causes people to exercise them to get the common shares, which causes your call options to be assigned, which causes you to end up with a short position in the SPAC which you'll be forced to cover because your broker won't have shares available to borrow. (Speaking from personal experience. :)
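The chain of events in the last bullet starts when a call trades for less than its intrinsic value, i.e. has negative extrinsic value (the prices below are hypothetical):

```python
def call_extrinsic(call_price, stock_price, strike):
    """Extrinsic (time) value of a call; negative means immediate exercise
    is more attractive than selling the option."""
    intrinsic = max(stock_price - strike, 0.0)
    return call_price - intrinsic

# Hypothetical: stock at 12.00, a 10-strike call offered at 1.90.
ext = call_extrinsic(1.90, 12.00, 10.0)  # about -0.10
# ext < 0: buying the call and exercising acquires shares below market,
# so holders exercise, short call writers get assigned, and a naked call
# writer ends up short hard-to-borrow shares with no way to stay short.
```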

Anti-EMH Evidence (and a plea for help)

Gilch made a good point that most investing is like "picking up pennies in front of a steamroller" (which I hadn't thought of in that way before). Another example is buying corporate or government bonds at low interest rates, where you're almost literally picking up pennies per year, while at any time default or inflation could quickly eat away a huge chunk of your principal.
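The bond version of the steamroller can be made concrete with a back-of-the-envelope expected return (all the numbers below are assumed purely for illustration):

```python
coupon_yield = 0.01   # 1%/year: the "pennies"
p_bad_year = 0.02     # assumed annual chance of default or an inflation spike
loss_if_bad = 0.40    # assumed 40% hit to principal in that event

expected_return = coupon_yield - p_bad_year * loss_if_bad
# 0.01 - 0.008 = 0.002: a small tail risk consumes years of coupons,
# which is what "pennies in front of a steamroller" cashes out to.
```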

But things like supposedly equivalent assets that used to be closely priced now diverging seem highly suspicious.

Yeah, I don't know how to explain it, but it's been working out for the past several weeks (modulo some experiments I tried in order to improve on the basic trade, which didn't work). I asked a professional (via a friend) about this, and they said the biggest risk is that the price delta could stay elevated (above your entry point) for a long time, so you keep paying the stock borrowing cost for that whole period until you decide to give up and close the position. But even in that case, the potential losses are of the same order of magnitude as the potential gains.
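A sketch of the risk the professional described, with made-up numbers: the trade's carry cost is the borrow fee, so the loss from a long wait is the same order of magnitude as the gain from convergence:

```python
spread_at_entry = 2.00   # price gap between the "equivalent" assets (assumed)
borrow_rate = 0.08       # annualized cost to borrow the shorted leg (assumed)
share_price = 25.00

def trade_pnl(spread_at_exit, years_held):
    """P&L of shorting the expensive leg against the cheap one."""
    carry = borrow_rate * share_price * years_held
    return (spread_at_entry - spread_at_exit) - carry

quick_win = trade_pnl(0.0, 0.25)   # spread converges in 3 months: ~ +1.50
slow_loss = trade_pnl(2.00, 1.0)   # spread stays elevated a year: ~ -2.00
```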

Anti-EMH Evidence (and a plea for help)

At this point, it is very clear that Trump will not become president. But you can still make 20%+ returns shorting ‘TRUMPFEB’ on FTX.

There is a surprisingly large number of people who believe the election was clearly "stolen" and the Supreme Court will eventually decide for Trump. There's a good piece in the NYT about this today. Should they think that the markets are inefficient because they can make 80% returns longing ‘TRUMPFEB’ on FTX? Presumably not, but that means by symmetry your argument is at least incomplete.
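The symmetry point can be made explicit: a binary contract's expected profit depends only on whose probability you plug in, so "high available returns" alone can't show the market is wrong. (The contract price below is hypothetical.)

```python
def expected_profit(price, p_yes, side):
    """Expected profit per $1 binary contract, using YOUR probability p_yes.

    'long' buys YES at `price`; 'short' is equivalent to buying NO at
    `1 - price`.
    """
    if side == "long":
        return p_yes - price
    return (1 - p_yes) - (1 - price)   # simplifies to price - p_yes

# Hypothetical contract price of 0.15:
skeptic = expected_profit(0.15, 0.0, "short")   # +0.15 if he surely loses
believer = expected_profit(0.15, 1.0, "long")   # +0.85 if he surely wins
# The two sides' expected profits always sum to zero: each side sees a
# juicy return conditional on its own beliefs, so the claimed return by
# itself proves nothing about which side is right.
```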

I can think of various other ways to easily get 10%+ returns in months in the crypto markets. For example several crypto futures are extremely underpriced relative to the underlying coin.

This sounds more like my cup of tea. :) Can you provide more details either publicly or privately?
