All of Wei_Dai's Comments + Replies

Another (outer) alignment failure story

The ending of the story feels implausible to me, because it doesn't explain why the story doesn't sidetrack onto some other, seemingly more likely, failure mode first. (Now that I've re-read the last part of your post, it seems like you've had similar thoughts already, but I'll write mine down anyway. Also it occurs to me that perhaps I'm not the target audience of the story.) For example:

  1. In this story, what is preventing humans from going collectively insane due to nations, political factions, or even individuals blasting AI-powered persua

... (read more)

In this story, what is preventing humans from going collectively insane due to nations, political factions, or even individuals blasting AI-powered persuasion/propaganda at each other? (Maybe this is what you meant by "people yelling at each other"?)

It seems like the AI described in this story is still aligned enough to defend against AI-powered persuasion (i.e. by the time that AI is sophisticated enough to cause that kind of trouble, most people are not ever coming into contact with adversarial content)

Why don't AI safety researchers try to leverage AI t

... (read more)
My research methodology

Why did you write "This post [Inaccessible Information] doesn't reflect me becoming more pessimistic about iterated amplification or alignment overall." just one month before publishing "Learning the prior"? (Is it because you were classifying "learning the prior" / imitative generalization under "iterated amplification" and now you consider it a different algorithm?)

For example, at the beginning of modern cryptography you could describe the methodology as “Tell a story about how someone learns something about your secret” and that only gradually crystal

... (read more)
8paulfchristiano8dIn my other response to your comment I wrote: I guess SSH itself would be an interesting test of this, e.g. comparing the theoretical model of this paper [https://eprint.iacr.org/2010/095.pdf] to a modern implementation. What is your view about that comparison? e.g. how do you think about the following possibilities:

  1. There is no material weakness in the security proof.
  2. A material weakness is already known.
  3. An interested layperson could find a material weakness with moderate effort.
  4. An expert could find a material weakness with significant effort.

My guess would be that probably we're in world 2, and if not that it's probably because no one cares that much (e.g. because it's obvious that there will be some material weakness and the standards of the field are such that it's not publishable unless it actually comes with an attack) and we are in world 3. (On a quick skim, and from the author's language when describing the model, my guess is that material weaknesses of the model are more or less obvious and that the authors are aware of potential attacks not covered by their model.)
5paulfchristiano8dI think that post is basically talking about the same kinds of hard cases as in Towards Formalizing Universality [https://ai-alignment.com/towards-formalizing-universality-409ab893a456] 1.5 years earlier (in section IV), so it's intended to be more about clarification/exposition than changing views. See the thread with Rohin above for some rough history. I'm not sure. It's possible I would become more pessimistic if I walked through concrete cases of people's analyses being wrong in subtle and surprising ways. My experience with practical systems is that it is usually easy for theorists to describe hypothetical breaks for the security model, and the issue is mostly one of prioritization (since people normally don't care too much about security). For example, my strong expectation would be that people had described hypothetical attacks on any of the systems discussed in the article you linked [http://www.ibiblio.org/weidai/temp/Provable_Security.pdf] prior to their implementation, at least if they had ever been subject to formal scrutiny. The failures are just quite far away from the levels of paranoia that I've seen people on the theory side exhibit when they are trying to think of attacks. I would also expect that e.g. if you were to describe almost any existing practical system with purported provable security, it would be straightforward for a layperson with theoretical background (e.g. me) to describe possible attacks that are not precluded by the security proof, and that it wouldn't even take that long. It sounds like a fun game. Another possible divergence is that I'm less convinced by the analogy, since alignment seems more about avoiding the introduction of adversarial consequentialists and it's not clear if that game behaves in the same way. I'm not sure if that's more or less important than the prior point. I would want to do a lot of work before deploying an algorithm in any context where a failure would be catastrophic (though "before letting it be
(USA) N95 masks are available on Amazon

You seem pretty knowledgeable in this area. Any thoughts on the mask that is linked to in my post, the Kimberly-Clark N95 Pouch Respirator? (I noticed that it's being sold by Amazon at 1/3 the price of the least expensive N95 mask on your site.)

Chinese History

Can you try to motivate the study of Chinese history a bit more? (For example, I told my grandparents' stories in part because they seem to offer useful lessons for today's world.) To me, the fact that 6 out of the 10 most deadly wars were Chinese civil wars does not, by itself, seem to constitute strong evidence that systematically studying Chinese history is a highly valuable use of one's time. It could just mean that China had a large population and/or had a long history and/or its form of government was prone to civil wars. The main question I have is whether its history offers any useful lessons or models that someone isn't likely to have already learned from studying other human history.

That 6 out of 10 of the most deadly conflicts were Chinese civil wars is strong evidence that China had a long history and a gigantic population relative to the rest of the world. (I think it's evidence that China was prone to fewer, larger wars.) To me, history is the study of people. If most people are in one place, then that is where most of the history is too.

I think the crux of our intuitive gap lies in the identification of useful lessons and models. If Chinese history is a useful source of models then I should be able to think of several off of the top ... (read more)

(USA) N95 masks are available on Amazon

You could try medical tape and see if you can seal the mask with it, without shaving your beard.

Tips/tricks/notes on optimizing investments

When investing in an individual stock, check its borrow rate for short selling. If it's higher than, say, 0.5%, that means short sellers are willing to pay a significant amount to borrow the stock in order to short it, so you might want to think twice about buying the stock in case they know something you don't. If you still want to invest in it, consider using a broker that has a fully paid lending program to capture part of the borrow fees from short sellers, or writing in-the-money puts on the stock instead of buying the common shares. (I believe the latter... (read more)
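Below is a rough sketch of the arithmetic behind the borrow-rate point above. The position size, 3.5% borrow rate, and 50/50 rebate split are made-up illustrative numbers (actual lending-program splits vary by broker), and none of this is investment advice.

```python
# Hypothetical numbers only: how much short sellers pay to borrow a position,
# and how much of that fee a fully paid lending program might rebate to the long holder.
position_value = 10_000      # dollars of the stock held long
borrow_rate = 0.035          # 3.5% annualized borrow rate quoted by the broker (illustrative)
rebate_share = 0.5           # assumed fraction of the fee the lending program passes back

annual_fee_paid_by_shorts = position_value * borrow_rate
rebate_to_long_holder = annual_fee_paid_by_shorts * rebate_share

print(f"Short sellers pay roughly ${annual_fee_paid_by_shorts:,.0f}/year to borrow this position.")
print(f"A fully paid lending program might rebate about ${rebate_to_long_holder:,.0f}/year of that.")
```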

1Isma3moWhat is it that guarantees that whoever borrows the stock does so in order to short it? Couldn't they just be borrowing it to go further long?
Anti-EMH Evidence (and a plea for help)

In addition to jmh's explanation, see covered call. Also, normally when you do a "buy-write" transaction (see above article), you're taking the risk that the stock falls by more than the premium of the call option, but in this case, if that were to happen, I can recover any losses by holding the stock until redemption. And to clarify, because I sold call options that expired in November without being exercised, I'm still able to capture any subsequent gains.
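For readers unfamiliar with the mechanics, here is a minimal payoff sketch of the buy-write described above, assuming a SPAC bought near its ~$10 trust value; the purchase price, strike, and premium are hypothetical, not the actual trade.

```python
# Hypothetical buy-write on a SPAC trading near its ~$10 redemption value.
purchase_price = 10.00   # price paid per share (illustrative)
strike = 10.50           # strike of the call sold against the shares (illustrative)
premium = 0.50           # premium received per share for the call (illustrative)

def buy_write_pnl(price_at_expiry: float) -> float:
    """Per-share P&L at option expiry, ignoring the redemption option."""
    stock_pnl = min(price_at_expiry, strike) - purchase_price   # upside capped at the strike
    return stock_pnl + premium

for p in (9.00, 10.00, 10.50, 12.00):
    print(f"price at expiry {p:5.2f} -> P&L {buy_write_pnl(p):+.2f} per share")

# If the shares end up below the purchase price, the holder can wait and redeem near $10
# (before the redemption deadline), roughly flooring the stock-side loss; and because the
# calls expired unexercised, any subsequent gains in the shares are still captured.
```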

1jmh4moJust curious but where are you trading/investing? USA or elsewhere? I'm wondering about the type of options -- are they USA or European execution rights? And, yes, I should have been clear on the potential downside of limiting gain to "during the life of the option"
Anti-EMH Evidence (and a plea for help)
  • I'm now selling at-the-money call options against my remaining SPAC shares, instead of liquidating them, in part to capture more upside and in part to avoid realizing more capital gains this year.
  • Once the merger happens (or rather 2 days before the meeting to approve the merger, because that's the redemption deadline), there is no longer a $10 floor.
  • Writing naked call options on SPACs is dangerous because too many people do that when they try to arbitrage between SPAC options and warrants, causing the call options to have negative extrinsic value, which
... (read more)
2gilch4moI closed my HCAC put at a small profit today.
2gilch4moLooks like HCAC wants to acquire Canoo. Roth Capital analysts are targeting $30.
2gilch4moLooks like TRNE became Desktop Metal Inc. (DM). I had sold a 12.5 May put on TRNE and closed it today by buying back the DM put at a profit.
2gilch4moDuly noted. That seems like a good time to buy back the calls. And then buy some more to exercise yourself. Not sure how long this lasts. Maybe it's enough to watch it twice a day, or maybe you have to program an order in advance. How much warning do we get to redeem the shares? Maybe that's when you buy back the put. Although, if everybody thinks that at once and drives up the IV even more, maybe that's time to sell another one instead.
Anti-EMH Evidence (and a plea for help)

Gilch made a good point that most investing is like "picking up pennies in front of a steamroller" (which I hadn't thought of in that way before). Another example is buying corporate or government bonds at low interest rates, where you're almost literally picking up pennies per year, while at any time default or inflation could quickly eat away a huge chunk of your principal.

But things like supposedly equivalent assets that used to be closely priced now diverging seems highly suspicious.

Yeah, I don't know how to explain it, but it's been working out fo... (read more)

Anti-EMH Evidence (and a plea for help)

At this point, it is very clear that Trump will not become president. But you can still make 20%+ returns shorting ‘TRUMPFEB’ on FTX.

There is a surprisingly large number of people who believe the election was clearly "stolen" and the Supreme Court will eventually decide for Trump. There's a good piece in the NYT about this today. Should they think that the markets are inefficient because they can make 80% returns longing ‘TRUMPFEB’ on FTX? Presumably not, but that means by symmetry your argument is at least incomplete.

I can think of various other ways

... (read more)
1deluks9174moI sent you a pm
Anti-EMH Evidence (and a plea for help)

Thanks for the link. I was hoping that it would be relevant to my current situation, but having re-read it, it clearly isn't, as it suggests:

It’s much less risky to just sell the stocks as soon as you think there’s a bubble, which foregoes any additional gains but means you avoid the crash entirely (by taking it on voluntarily, sort of).

But this negates the whole point of my strategy, which is to buy these stocks at a "risk-free" price hoping for a bubble to blow up later so I can sell into it.

2Vaniver4moYeah, I don't think that advice applies to your situation. [In my way of thinking about things, you were trying to time the market less than you had skill for, and are now attempting to adjust towards calibration.] IMO, the first problem here is psychological: you need to decide whether you're doing something more like regret-minimization or more like profit-maximization. In the regret-minimization world, I think the dominant strategy is deciding on (and then adjusting as new info comes in) price schedules. Think "If it hits $20, I'll sell to this fraction; if it hits $21, I sell to this other fraction, etc." or "Each day from here to the merger date I'll sell f(t)% or $N of the remaining stock I hold" or whatever. The thing that's going on here is that, regardless of whatever actually happens, you can point to your strategy and say "I did the thing that I believed would do well across all possible worlds, and so rather than focusing too much on whether or not this me did well or poorly, I focus on whether I believe in my strategy for expected gains." I called it the 'profit-maximization' world, but in my mind it's actually closer to the 'hug the query [https://www.lesswrong.com/posts/2jp98zdLo898qExrr/hug-the-query]' world; rather than trying to come up with a strategy that defends against future-you criticizing present-you, you try to figure out which world you're actually in, and put both your computation / emotions / resources into that. Sleepless nights worrying about prices is one of the tools used here; when you make a move and then regret it in the future, that regret is you learning what to do the next time. There's obviously gradations here; the point is not to maximize hard in one direction, but to have a unified and coherent approach to how your emotions will interact with your investing, and how your investing will interact with your emotions. Once you have that, I think things flow more clearly. [If I were trying to correct short-term inefficienci
Cryonics without freezers: resurrection possibilities in a Big World

Now I’m curious. Does studying history make you update in a similar way?

History is not one of my main interests, but I would guess yes, which is why I said "Actually, I probably shouldn’t have been so optimistic even before the recent events..."

I feel that these times are not especially insane compared to the rest of history, though the scale of the problems might be bigger.

Agreed. I think I was under the impression that western civilization managed to fix a lot of the especially bad epistemic pathologies in a somewhat stable way, and was unpleasantly surprised when that turned out not to be the case.

Persuasion Tools: AI takeover without AGI or agency?

You mention "defenses will improve" a few times. Can you go into more detail about this? What kind of defenses do you have in mind? I keep thinking that in the long run, the only defenses are either to solve meta-philosophy so our AIs can distinguish between correct arguments and merely persuasive ones and filter out the latter for us (and for themselves), or go into an info bubble with trusted AIs and humans and block off any communications from the outside. But maybe I'm not being imaginative enough.

4Daniel Kokotajlo5moI think I mostly agree with you about the long run, but I think we have more short-term hurdles that we need to overcome before we even make it to that point, probably. I will say that I'm optimistic that we haven't yet thought of all the ways advances in tech will help collective epistemology rather than hinder it. I notice you didn't mention debate; I am not confident debate will work but it seems like maybe it will. In the short run, well, there's also debate I guess. And the internet having conversations being recorded by default and easily findable by everyone was probably something that worked in favor of collective epistemology. Plus there is wikipedia, etc. I think the internet in general has lots of things in it that help collective epistemology... it just also has things that hurt, and recently I think the balance is shifting in a negative direction. But I'm optimistic that maybe the balance will shift back. Maybe.
Open & Welcome Thread – October 2020

By "planting flags" on various potentially important and/or influential ideas (e.g., cryptocurrency, UDT, human safety problems), I seem to have done well for myself in terms of maximizing the chances of gaining a place in the history of ideas. Unfortunately, I've recently come to dread more than welcome the attention of future historians. Be careful what you wish for, I guess.

Open & Welcome Thread – October 2020

Free speech norms can only last if "fight hate speech with more speech" is actually an effective way to fight hate speech (and other kinds of harmful speech). Rather than being some kind of human universal constant, that's actually only true in special circumstances when certain social and technological conditions come together in a perfect storm. That confluence of conditions has now gone away, due in part to technological change, which is why the most recent free speech era in Western civilization is rapidly drawing to an end. Unfortunately, its social s... (read more)

3Aaro Salosensaari6moI noticed this comment on main page and would push back on the sentiment: I don't think there ever has been such conditions that "more speech" was universally agreed to be better way than restrictions to fight hate speech (or more generically, speech deemed harmful), or there is in general something inevitable about not having free speech in certain times and places because it is simply not workable in certain conditions. (Maybe it isn't, but that is kinda useless to speculate beforehand and it is obvious when one does not certainly have such conditions.) Free speech, in particular talking about and arguing for free speech is more of commitment to certain set of values (against violence to dissemination of ideas etc), often made in presence of opposition to those values, and less of something that has been empirically deemed to be best policy at some past time but the conditions of those times are for some reason lost. Freedom of speech is not an on-off thing; the debate about free speech seems to be quite a constant in the West since the idea's conception, while the hot topics will change. (When Life of Brian came out, Pythons found themselves debating its merits with clergymen on TV: the debate can be found on YouTube and feels antiquated to watch.) Moreover, there is something that bugs me in the claim that with certain technological and social conditions, free speech becomes unworkable. The part about social conditions is difficult, as one could say that social conditions in places with free press were the necessary social conditions for free press, and places without had not the necessary conditions, but that feels bit too circutous. If we allow some more leeway and pick an example of place with some degree of freedom in speech, one can quite often point to places in the same historical period with broadly similar conditions where nevertheless the free speech norms were not there. Sometimes it is the same place just a bit later where the free speech had brok
A tale from Communist China

This ended up being my highest-karma post, which I wasn't expecting, especially as it hasn't been promoted out of "personal blog" and therefore isn't as visible as many of my other posts. (To be fair "The Nature of Offense" would probably have a higher karma if it was posted today, as each vote only had one point back then.) Curious what people liked about it, or upvoted it for.

9ryan_b6moI liked that it provided a personal perspective into an important window of history. Whether by instinct or design, you also neatly organized it into information-decision-information-decision, which is exactly the kind of analysis we want to be able to do. Separately from my appreciation: it fits the zeitgeist, since the US is in political crisis and modern China is and has been an important factor in world events for years. Lastly, though I can't put my finger precisely on why, it feels relevant to the events of Petrov Day this year. Sort of the inverse, if that makes any sense: not world-ending, but personal world-ending; plenty of time to make the decision but a stupendous amount of information to consider; the consequences stretched out over years and decades rather than a few hours.
4maia6moI found it mildly useful to hear about someone's experiences in this kind of situation, and it's an interesting story. It's also a very easily digestible post.
Open & Welcome Thread – October 2020

There's a time-sensitive trading opportunity (probably lasting a few days), i.e., to short HTZ because it's experiencing an irrational spike in prices. See https://seekingalpha.com/article/4379637-over-1-billion-hertz-shares-traded-on-friday-because-of-bankruptcy-court-filings for details. Please only do this if you know what you're doing though, for example you understand that HTZ could spike up even more and the consequences of that if it were to happen and how to hedge against it. Also I'm not an investment advisor and this is not investment advice.

A tale from Communist China

Lessons I draw from this history:

  1. To predict a political movement, you have to understand its social dynamics and not just trust what people say about their intentions, even if they're totally sincere.
  2. Short term trends can be misleading so don't update too much on them, especially in a positive direction.
  3. Lots of people who thought they were on the right side of history actually weren't.
  4. Becoming true believers in some ideology probably isn't good for you or the society you're hoping to help. It's crucial to maintain empirical and moral uncertainties.
  5. Risk tails are fatter than people think.

I draw a few more lessons from this (and from conversations with other survivors and escapees from horrific regimes):

6. Change is both gradual and terrifyingly fast - there are often months or years of buildup and warning, before weeks of crisis.

7. Terrifyingly fast is not instantaneous.  It costs a lot, but one can get out if one actually believes the evidence in time.

A tale from Communist China

Another detail: My grandmother planned to join the Communist Revolution together with two of her classmates, who made it farther than she did. One made it all the way to Communist-controlled territory (Yan'an) and later became a high official in the new government. She ended up going to prison in one of the subsequent political movements. Another one almost made it before being stopped by Nationalist authorities, who forced her to write a confession and repentance before releasing her back to her family. That ended up being dug up during the Cultural Revolution and got her branded as a traitor to Communism.

Covid 10/15: Playtime is Over

Upvoted for the important consideration, but your own brain is also a source of errors that's hard to decorrelate from, so is it really worse (or worse enough to justify the additional costs of the alternative) to just trust Zvi instead of your own judgement/integration of diverse sources?

ETA: Oh, I do read the comments here so that helps to catch Zvi's errors, if any.

2Zvi6moNote that my own blog also has an active comments section that has some good people in it (and some not so good of course), if you want to watch for errors without checking other sources directly. I do think that if there's something that impacts your decisions a lot, you should do your own investigations too!
Open & Welcome Thread – October 2020

My grandparents on both sides of my family seriously considered leaving China (to the point of making concrete preparations), but didn't because things didn't seem that bad, until it was finally too late.

2Ben Pace6moThat's pretty scary. I expect I have much more flexibility than your family did – I have no dependents, I have no property / few belongings to tie me down, and I expect flight travel is much more readily available to me in the present-day. I also expect to notice it faster than the supermajority of people (not disanalogous to how I was prepped for Covid like a month before everyone else).
Open & Welcome Thread – October 2020

Writing a detailed post is too costly and risky for me right now. One of my grandparents was confined in a makeshift prison for ten years during the Cultural Revolution and died shortly after, for something that would normally be considered totally innocent that he did years earlier. None of them saw that coming, so I'm going to play it on the safe side and try to avoid saying things that could be used to "cancel" me or worse. But there are plenty of articles on the Internet you can find by doing some searches. If none of them convinces you how serious the problem is, PM me and I'll send you some links.

6Ben Pace6moI do expect to be able to vacate a given country in a timely manner if it seems to be falling into a Cultural Revolution.
1Rudi C6moThe rss link: https://us4.campaign-archive.com/feed?u=412bdf6ca38cdf29c3374de56&id=06c013e05f [https://us4.campaign-archive.com/feed?u=412bdf6ca38cdf29c3374de56&id=06c013e05f]
Everything I Know About Elite America I Learned From ‘Fresh Prince’ and ‘West Wing’

There are a number of ways to interpret my question, and I kind of mean all of them:

  1. If my stated and/or revealed preferences are that I don't value joining the elite class very much, is that wrong in either an instrumental or terminal sense?
  2. For people who do seem to value it a lot, either for themselves or their kids (e.g., parents obsessed with getting their kids into an elite university), is that wrong in either an instrumental or terminal sense?

By "either an instrumental or terminal sense" I mean is "joining the elite" (or should it be) an terminal v... (read more)

5Dagon6moMany many humans don't really distinguish between terminal and instrumental values when making such decisions, and can't really tell you WHY they desire such things for them or their children. I'd say that joining (or staying in, or moving up in) the elite class is a common desire that maps pretty easily to hunter/gatherer tribal status, and is very understandable as something one might desire as a default position. For those who have explicit terminal goals, joining the elite can well be instrumental for many of them - it does ease a lot of activities, influence, and resource direction. But there are likely other instrumental paths to the same sort of influence and resource control, and agents will have to trade off where they think they can best focus.
Open & Welcome Thread – October 2020

Except it's like, the Blight has already taken over all of the Transcend and almost all of the Beyond, even a part of the ship itself and some of its crew members, and many in the crew are still saying "I'm not very worried." Or "If worst comes to worst, we can always jump ship!"

8Alexei6moIf you think we should be more worried, I’d appreciate a more detailed post. This is all new to me.
Open & Welcome Thread – October 2020

Watching cancel culture go after rationalists/EA, I feel like one of the commentators on the Known Net watching the Blight chase after Out of Band II. Also, Transcend = academia, Beyond = corporations/journalism/rest of intellectual world, Slow Zone = ...

(For those who are out of the loop on this, see https://www.facebook.com/bshlgrs/posts/10220701880351636 for the latest development.)

Except it's like, the Blight has already taken over all of the Transcend and almost all of the Beyond, even a part of the ship itself and some of its crew members, and many in the crew are still saying "I'm not very worried." Or "If worst comes to worst, we can always jump ship!"

4ryan_b6moThis isn't something I monitor, but I really appreciate the sense of scope and depth your reference provided.
What Does "Signalling" Mean?

eg, birds warning each other that there is a snake in the grass

Wait, this is not the example in the Wikipedia page, which is actually "When an alert bird deliberately gives a warning call to a stalking predator and the predator gives up the hunt, the sound is a signal."

I found this page which gives a good definition of signaling:

Signalling theory (ST) tackles a fundamental problem of communication: how can an agent, the receiver, establish whether another agent, the signaller, is telling or otherwise conveying the truth about a state of affairs or eve

... (read more)
2DanielFilan7moIndeed, to me 'signalling' is doing some action which is differentially costly depending on whether some fact is or isn't true - so mere assertion doesn't count, even if it conveys information.
Open & Welcome Thread - September 2020

Did it make you or your classmates doubt your own morality a bit? If not, maybe it needs to be taught along with the outside view and/or the teacher needs to explicitly talk about how the lesson from history is that we shouldn't be so certain about our morality...

Open & Welcome Thread - September 2020

I wonder if anyone has ever written a manifesto for moral uncertainty, maybe something along the lines of:

We hold these truths to be self-evident, that we are very confused about morality. That these confusions should be properly reflected as high degrees of uncertainty in our moral epistemic states. That our moral uncertainties should inform our individual and collective actions, plans, and policies. ... That we are also very confused about normativity and meta-ethics and don't really know what we mean by "should", including in this document...

Yeah, I rea... (read more)

5lsusr7moNon-dualist philosophies such as Zen place high value on confusion (they call it "don't know mind") and have a sophisticated framework for communicating this idea. Zen is one of the alternative intellectual traditions I alluded to in my controversial post [https://www.lesswrong.com/posts/YqAiK5pnor4LWKhJ4/the-illusion-of-ethical-progress] about ethical progress. The Dao De Jing 道德经, written 2.5 thousand years ago, includes strong warnings against ontological certainty (and, by extension, moral certainty). If we naïvely apply the Lindy Effect then Chinese civilization is likely to continue for thousands more years while Western science annihilates itself after mere centuries. This may not be a coincidence. Here is the manifesto you are looking for: Unfortunately, the duality of emptiness and form [http://www.lsusr.com/blog/emptiness-and-form.html] is difficult to translate into English.
Open & Welcome Thread - September 2020

I don't recall learning in school that most of "the bad guys" from history (e.g., Communists, Nazis) thought of themselves as "the good guys" fighting for important moral reasons. It seems like teaching that fact, and instilling moral uncertainty in general into children, would prevent a lot of serious man-made problems (including problems we're seeing play out today). So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

5Vaniver7moThis is sort of a rehash of sibling comments, but I think there are two factors to consider here. The first is the rules. It is very important that people drive on the correct side of the road, and not have uncertainty about which side of the road is correct, and not very important whether they have a distinction between "correct for <country> in <year>" and "correct everywhere and for all time." The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like "freedom of religion" are the things civilization figured out to both have narrow shared goals (like "keep the peace") and not expansive shared goals (like "as many get to Catholic Heaven as possible"). It is unclear to me whether we're better off with moral uncertainty as generator for "narrow shared goals", whether narrow shared goals is what we should be going for.
6lsusr7moStates evolve to perpetuate themselves. Civilization has figured it out (in the blind idiot god [https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god] sense of "figured it out") that moral uncertainty is teachable and decreases trust in the state ideology. You have it backward. The states in existence today promote moral certainty in children for exactly the same reason the Communist and Nazi states did.
3ChristianKl7moWe want to teach children to accept the norms of our society and the narrative we tell about it. A lot of what we teach is essentially pro-system propaganda. Teaching moral uncertainty doesn't help with that, and it also doesn't help with getting students to score better on standardized tests, which was the main goal of educational reforms of the last decades.
4Kaj_Sotala7moOften expressing any understanding towards the motives of a "bad guy" is taken as signaling acceptance for their actions. There was e.g. controversy around the movie Downfall [https://en.wikipedia.org/wiki/Downfall_(2004_film)#Controversy] for this:
6ryan_b7moI expect it is this. General moral uncertainty has all kinds of problems in expectation, like: * It ruins morality as a coordination mechanism among the group. * It weakens moral conviction in the individual, which is super bad from the perspective of people who believe there are direct consequences for a lack of conviction (like Hell). * It creates space for different and possibly weird moralities to arise; I don't know of any moral systems that think it is a good thing to be a member of a different moral system, so I expect all the current moral systems to agree on this one. I feel like the first bullet point is the real driving force behind the problems it would prevent, anyhow. Moral uncertainty doesn't cause people to do good things; it keeps them from doing good things (that are different from other groups' definitions of good things).
4cousin_it7moWouldn't more moral uncertainty make people less certain that Communism or Nazism were wrong?
3RyanCarey7moI guess it's because high-conviction ideologies outperform low-conviction ones, including nationalistic and political ideologies, and religions. Dennett's Gold Army/Silver Army [https://marginalrevolution.com/marginalrevolution/2013/01/the-army-of-economists.html] analogy explains how conviction can build loyalty and strength, but a similar thing is probably true for movement-builders. Also, conviction might make adherents feel better, and therefore simply be more attractive.
5ESRogs7moI would guess that teaching that fact is not enough to instill moral uncertainty. And that instilling moral uncertainty would be very hard.
3TurnTrout7moIf I had to guess, I'd guess the answer is some combination of "most people haven't realized this" and "of those who have realized it, they don't want to be seen as sympathetic to the bad guys".
4gbear6057moThat's definitely how it was taught in my high school, so it's not unknown.


"The Holy Grail" of portfolio management

I have changed my mind about shorting stocks and especially call options. The problem is that sometimes a stock I shorted rises sharply on significant or insignificant news (which I didn't notice myself until the price already shot up a lot), and I get very worried that maybe it's the next Tesla and will keep rising and wipe out all or a significant fraction of my net worth, and so I panic buy the stock/options to close out the short position. Then a few days later people realize that the news wasn't that significant and the stock falls again. Other than r... (read more)

What should we do once infected with COVID-19?

I haven't been following developments around hydroxychloroquine very closely. My impression from incidental sources is that it's probably worth taking along with zinc, at least early in the course of a COVID-19 infection. I'll probably do a lot more research if and when I actually need to make a decision.

3Rob Bensinger8moA couple minutes after I wrote this question I found out Scott Alexander said July 29 [https://www.reddit.com/r/slatestarcodex/comments/hzoxh1/learned_epistemic_helplessness_covid19_and_hcq/fzlkb6e/?context=3] :
Tips/tricks/notes on optimizing investments

With a little patience and a limit order, you can usually get the midpoint between bid and ask, or close to it.

How do you do this when the market is moving constantly and so you'd have to constantly update your limit price to keep it at the midpoint? I've been doing this manually and unless the market is just not moving for some reason, I often end up chasing the market with my limit price, and then quickly get a fill (probably not that close to the midpoint although it's hard to tell) when the market turns around and moves into my limit order.

1gilch8moWell, it's obviously not going to be the midpoint when it fills because you can only buy at someone's ask or sell at someone's bid. But with a limit order, you can be the best bid or ask. I don't usually chase it. If you're buying a call and the market drops, you get a fill. If it rallies, maybe you wait 15 minutes and adjust, or try again tomorrow. For LEAPS it wouldn't be unreasonable to try for a few days. The market is usually calmer in the middle of the trading day, maybe because the big players are eating lunch, although it can get chaotic again near the close. Except for a very liquid underlying near the money, you're almost always trading options with a market maker. The market maker will set the bid and ask based on his models. If he gets a fill, and can't get enough of the opposite side, he'll buy or sell shares of the underlying to neutralize his delta. He's not making directional bets, just making money on the spreads. If you offer a trade near the midpoint, but even slightly in the market-maker's favor, he'll usually trade with you, when he gets around to it. This could easily take fifteen minutes. He also doesn't like you narrowing the spread on him, because that means someone might trade with you directly and he doesn't get his cut, so if he can handle your volume, he'll just take your order off the book.
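A toy numerical illustration of the midpoint idea in this exchange; the quote, tick size, and one-tick concession are assumptions for illustration, not a rule about how any particular market maker prices.

```python
# Hypothetical option quote: work a limit order near the midpoint instead of crossing the spread.
bid, ask = 1.80, 2.20                   # hypothetical quote
tick = 0.05                             # minimum price increment (illustrative)
midpoint = (bid + ask) / 2              # 2.00

# To buy, quote one tick above the midpoint (slightly in the market maker's favor)
# rather than paying the ask, then wait; reprice only occasionally if unfilled.
my_limit = midpoint + tick              # 2.05
saved_vs_paying_ask = ask - my_limit    # 0.15 per share, if the order fills

print(f"midpoint {midpoint:.2f}, my limit {my_limit:.2f}, saved vs. the ask {saved_vs_paying_ask:.2f}")
```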
Tips/tricks/notes on optimizing investments

Good points.

And in a margin account, a broker can typically sell any of your positions (because they’re collateral) to protect its interests, even part of a spread, which can again expose you to delta risk if they don’t close your whole box at once.

I guess technically it's actually "expose you to gamma risk" because the broker would only close one of your positions if doing so reduced margin requirements / increased buying power, and assuming you're overall long the broad market, that can only happen if doing so decreases overall delta risk. Another wa... (read more)

3gilch8moYeah, that sounds right. But gammas can turn into delta as the market moves. If you do box with American options and get assigned early, the shares (or short shares) will hedge you for a while because they'll have a similar contribution to your overall portfolio delta as the option they replaced, but it's not going to have the same behavior as an option when the price moves. So you'd want to close and reposition before that happens, which, of course, requires capital and commissions. You would think. Sometimes you get liquidated by an algorithm though. I've heard that Interactive Broker's liquidation algorithms are especially aggressive, which is part of how they can offer such competitive margin loan rates. (They also have a "liquidate last" feature that lets you protect some positions from the algorithm for longer. Definitely use that for the boxes.) Yes. I have no first-hand experience with this. I have heard things on forums from people, but I can't call that a reliable source.
Tips/tricks/notes on optimizing investments

Another way to get leverage in a retirement account is with leveraged ETFs.

Yeah, and another way I realized after I wrote my comment is that you can also buy stock index futures contracts in IRA accounts; I forget the exact number, but I think you can get around 5x max leverage that way. Compared to leveraged ETFs this should incur lower expenses and allow you to choose your own rebalancing schedule for a better tradeoff between risk and trading costs. (Of course at the cost of having to do your own rebalancing.)
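Back-of-the-envelope arithmetic for the futures-leverage point above; the index level, contract multiplier, and the cash a broker requires per contract in an IRA are all hypothetical placeholders (IRA margin requirements are typically well above the exchange minimum, which is what brings effective leverage down to the ~5x ballpark).

```python
# Hypothetical index futures contract held in an IRA.
index_level = 4_000                  # illustrative index level
multiplier = 50                      # illustrative dollars of exposure per index point
notional = index_level * multiplier  # $200,000 of exposure per contract

ira_margin_per_contract = 40_000     # illustrative cash the broker requires per contract in an IRA
leverage = notional / ira_margin_per_contract
print(f"~{leverage:.0f}x exposure on the cash posted")   # ~5x with these assumptions
```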

Also after writing my comment, I realized th... (read more)

Tips/tricks/notes on optimizing investments
  1. Look for sectors that crash more than they should in a market downturn, due to correlated forced deleveraging, and load up on them when that happens. The energy midstream/MLP sector is a good recent example, because a lot of those stocks were held in closed end funds in part for tax reasons, those funds all tend to use leverage, and because they have a maximum leverage ratio that they're not allowed to exceed, they were forced to deleverage during the March crash, which caused more price drops and more deleveraging, and so on.
Tips/tricks/notes on optimizing investments

What are some reputable activist short-sellers?

I'm reluctant to give out specific names because I'm still doing "due diligence" on them myself. But generally, try to find activist short-sellers who have a good track record in the past, and read/listen to some of their interviews/reports/articles to see how much sense they make.

Where do you go to identify Robinhood bubbles?

I was using Robintrack.net but it seems that Robinhood has stopped providing the underlying data. So now I've set up a stock screener to look for big recent gains, and then check w... (read more)

Tips/tricks/notes on optimizing investments

Note on 5: Before you try this, make sure you understand what you're getting into and the risks involved. (There are rarely completely riskless arbitrage opportunities, and this isn't one of them.)

  1. Stock borrowing cost might be the biggest open secret that few investors know about. Before buying or shorting any individual stock, check its borrowing cost and "utilization ratio" (how much of the stock available to borrow has already been borrowed for short selling) using Interactive Broker's Trader Workstation. If borrowing cost is high and utilization ratio isn'
... (read more)
What posts on finance would you find helpful or interesting?

Technical analysis, momentum, trend following, and the like, from an EMH-informed perspective.

I've been dismissive of anything that looks at past price information. But markets are clearly sometimes inefficient when short selling is constrained by the availability and cost of borrowing stock (which lets prices stay too high and can set up short squeezes), and that inefficiency can "infect" the market during other times as well (because potential short sellers are afraid of being short squeezed), so there's no (obvious) theoretical reason to dismiss technical analysis and the like anymore.

4Alexei7moHmm, so I'm realizing I can't write much about this without revealing / implying certain key information about how we view things at my hedge fund. :/ But I found this document, which is pretty sane and goes into some details on this and other basic topics: https://bigquant.com/community/uploads/default/original/3X/7/d/7d3a0a5a4f0f2fbfd87b2c6e70037dd3c8f48e2c.pdf [https://bigquant.com/community/uploads/default/original/3X/7/d/7d3a0a5a4f0f2fbfd87b2c6e70037dd3c8f48e2c.pdf]

Oh nice, I can definitely write about this. This is basically what I do all day.

"The Holy Grail" of portfolio management

Recently I started thinking that it's a good idea to add short positions (on individual stocks or call options) to one's portfolio. Then you can win if either the short thesis turns out to be correct (e.g., the company really is faking its profits), or the market tanks as a whole and the short positions act as a hedge. I wrote about some ways to find short ideas in a recent comment.

Question for the audience: do you know of a good way to measure the worst case correlation?

Not sure if this is the best way, but I've just been looking at the drawdown perce... (read more)

6Wei_Dai7moI have changed my mind about shorting stocks and especially call options. The problem is that sometimes a stock I shorted rises sharply on significant or insignificant news (which I didn't notice myself until the price already shot up a lot), and I get very worried that maybe it's the next Tesla and will keep rising and wipe out all or a significant fraction of my net worth, and so I panic buy the stock/options to close out the short position. Then a few days later people realize that the news wasn't that significant and the stock falls again. Other than really exceptional circumstances like the recent Kodak situation, perhaps it's best to leave shorting to professionals who can follow the news constantly and have a large enough equity cushion that they can ride out any short-term spikes in the stock price. I think my short portfolio is still showing an overall profit, but it's just not worth the psychological stress involved and the constant attention that has to be paid.
Tips/tricks/notes on optimizing investments

Possible places to look for alpha:

  1. Articles on https://seekingalpha.com/. Many authors there give free ideas/tips as advertisement for their paid subscription services. The comments sections of articles often have useful discussions.
  2. Follow the quarterly reports of small actively managed funds (or the portfolio/holdings reports on Morningstar, which show fund portfolio changes) to get stock ideas.
  3. Follow reputable activist short-sellers on Twitter. (They find companies that commit fraud, like Luckin Coffee or Wirecard, and report on them after shorting the
... (read more)
5Wei_Dai8mo1. Look for sectors that crash more than they should in a market downturn, due to correlated forced deleveraging, and load up on them when that happens. The energy midstream/MLP sector is a good recent example, because a lot of those stocks were held in closed end funds in part for tax reasons, those funds all tend to use leverage, and because they have a maximum leverage ratio that they're not allowed to exceed, they were forced to deleverage during the March crash, which caused more price drops and more deleveraging, and so on.
2Wei_Dai8moNote on 5: Before you try this, make sure you understand what you're getting into and the risks involved. (There are rarely completely riskless arbitrage opportunities, and this isn't one of them.) 1. Stock borrowing cost might be the biggest open secret that few investors know about. Before buying or shorting any individual stock, check its borrowing cost and "utilization ratio" (how much of the stock available to borrow has already been borrowed for short selling) using Interactive Broker's Trader Workstation. If borrowing cost is high and utilization ratio isn't very low (not sure why that happens sometimes) that means some people are willing to pay a high cost per day to hold a short position in the stock, which means it very likely will tank in the near future. But if utilization ratio is very high, near 100%, that means no new short selling can take place so the stock can easily zoom up more due to lack of short selling pressure and potential for short squeeze, before finally tanking. If you do decide you want to bet against the short sellers and buy the stock anyway, at least hold the position at a broker that offers a Fully Paid Lending Program, so you can capture part of the borrowing cost that short sellers pay.
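Here is a toy encoding of the screening heuristic in that note; the thresholds are illustrative guesses rather than calibrated values, and the borrow rate and utilization inputs would come from a data source such as Trader Workstation.

```python
# Crude, illustrative read of short-seller positioning from borrow data (hypothetical thresholds).
def short_interest_signal(borrow_rate: float, utilization: float) -> str:
    if borrow_rate < 0.005:         # cheap to borrow: shorts aren't paying up
        return "no strong signal from the borrow market"
    if utilization > 0.95:          # almost nothing left to borrow
        return "squeeze risk: little new shorting possible, stock can spike before tanking"
    if utilization > 0.20:          # expensive AND actively borrowed
        return "bearish: shorts are paying a high fee to stay in this name"
    return "expensive but lightly borrowed: unclear, investigate further"

print(short_interest_signal(borrow_rate=0.08, utilization=0.99))
print(short_interest_signal(borrow_rate=0.08, utilization=0.60))
```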
2Eigil Rischel8mo* What are some reputable activist short-sellers? * Where do you go to identify Robinhood bubbles? (Maybe other than "lurk r/wallstreetbets and inverse whatever they're hyping"). I guess this question is really a general question about where you go for information about the market, in a general sense. Is it just reading a lot of "market news" type sites?
The Wrong Side of Risk

Recently I had the epiphany that an investor's real budget constraint isn't how much money they have (with portfolio margin you can get 6x or even 12x leverage) but how much risk-taking capacity they have. So another way of making what I think is your main point is that the market pays you to take (certain kinds of) risks, so don't waste your risk-taking capacity by taking too little risk. But one should be smart and try to figure out where the market is paying the most per unit of risk.

Standard finance theory says the market should pay you the most for ta... (read more)
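A minimal sketch of the "payment per unit of risk" comparison, using excess return divided by volatility (a Sharpe-style ratio); the asset names and numbers are invented for illustration.

```python
# Compare hypothetical assets by expected excess return per unit of volatility.
assets = {
    "broad stock index": {"excess_return": 0.05, "volatility": 0.16},
    "levered bond fund": {"excess_return": 0.03, "volatility": 0.08},
    "single hot stock":  {"excess_return": 0.08, "volatility": 0.45},
}

for name, a in assets.items():
    reward_per_unit_risk = a["excess_return"] / a["volatility"]
    print(f"{name:18s} {reward_per_unit_risk:.2f} excess return per unit of volatility")

# With a fixed risk budget (rather than a fixed dollar budget), you would tilt toward
# whatever pays the most per unit of risk, then scale overall leverage to hit the total
# amount of risk you can tolerate.
```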

4gilch8moEmpirically, option implied volatility tends to exceed realized volatility for most stocks most of the time. This is plainly visible if you plot both implied and historical volatility on the same chart, and even more obvious if you use moving averages for each to smooth the noise. This is the well-known "option seller's edge", an effect that has been quite persistent historically. And not just for puts, due to put-call parity [https://en.wikipedia.org/wiki/Put%E2%80%93call_parity], this applies to calls as well. Empirically, a covered short strangle portfolio not only beat the index, it had performance comparable to a hedge fund. [http://www.cboe.com/micro/buywrite/cambridge-2011-highlightsfromsellingvolatility.pdf] Empirically, this strategy [https://financial-hacker.com/algorithmic-options-trading/] of selling a naked 30-day at-the-money SPY option (randomly a call or put) shows a positive expectancy, while the reverse strategy of buying the option shows the opposite. I'm not actually recommending you do this (because it's easy to accidentally Bet the Farm by selling naked options), but it illustrates the edge. As for why this should happen, yeah, the market is risk-averse, so there's a risk premium for taking that risk off their hands. How big that premium is depends not just on the amount of risk, but the demand for insurance and the competition between insurers. If there were too many competing insurers to supply the puts (or if they were too big), then the margins would be too thin (but not negative!) for retail traders to profit from. But that's not what we see happening. I wouldn't say differing from "the average investor" is what matters, but from the investors who control the most money.
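A self-contained illustration of the implied-vs-realized comparison gilch describes; the price series and the quoted implied volatility are synthetic, and in practice both would come from market data.

```python
import math

# Synthetic daily closes and a made-up implied volatility quote, for illustration only.
closes = [100.0, 101.2, 100.5, 102.0, 101.1, 103.4, 102.8, 104.0, 103.1, 105.2]
quoted_implied_vol = 0.22   # 22% annualized IV for a hypothetical at-the-money option

log_returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
mean = sum(log_returns) / len(log_returns)
variance = sum((r - mean) ** 2 for r in log_returns) / (len(log_returns) - 1)
realized_vol = math.sqrt(variance) * math.sqrt(252)   # annualize the daily standard deviation

print(f"realized vol ~{realized_vol:.0%}, implied vol {quoted_implied_vol:.0%}")
# When implied persistently exceeds realized like this, option sellers collect the gap on
# average -- the "option seller's edge" described above (with large tail risk attached).
```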
Alignment By Default

So similarly, a human could try to understand Alice's values in two ways. The first, equivalent to what you describe here for AI, is to just apply whatever learning algorithm their brain uses when observing Alice, and form an intuitive notion of "Alice's values". And the second is to apply explicit philosophical reasoning to this problem. So sure, you can possibly go a long way towards understanding Alice's values by just doing the former, but is that enough to avoid disaster? (See Two Neglected Problems in Human-AI Safety for the kind of disaster I have i... (read more)

2johnswentworth8moI mostly agree with you here. I don't think the chances of alignment by default are high. There are marginal gains to be had, but to get a high probability of alignment in the long term we will probably need actual understanding of the relevant philosophical problems.
Alignment By Default

To help me check my understanding of what you're saying, we train an AI on a bunch of videos/media about Alice's life, in the hope that it learns an internal concept of "Alice's values". Then we use SL/RL to train the AI, e.g., give it a positive reward whenever it does something that the supervisor thinks benefits Alice's values. The hope here is that the AI learns to optimize the world according to its internal concept of "Alice's values" that it learned in the previous step. And we hope that its concept of "Alice's values" includes the idea that Alice w... (read more)

2John_Maxwell8moMy take is that corrigibility is sufficient to get you an AI that understands what it means to "keep improving their understanding of Alice's values and to serve those values". I don't think the AI needs to play the "genius philosopher" role, just the "loyal and trustworthy servant" role. A superintelligent AI which plays that role should be able to facilitate a "long reflection" where flesh and blood humans solve philosophical problems. (I also separately think unsupervised learning systems could in principle make philosophical breakthroughs. Maybe one already has [https://twitter.com/AmandaAskell/status/1284307770024448001].)
8johnswentworth8moThere's a lot of moving pieces here, so the answer is long. Apologies in advance. I basically agree with everything up until the parts on philosophy. The point of divergence is roughly here: I do think that resolving certain confusions around values involves solving some philosophical problems. But just because the problems are philosophical does not mean that they need to be solved by philosophical reasoning. The kinds of philosophical problems I have in mind are things like: * What is the type signature of human values? * What kind of data structure naturally represents human values? * How do human values interface with the rest of the world? In other words, they're exactly the sort of questions for which "utility function" and "Cartesian boundary" are answers, but probably not the right answers. How could an AI make progress on these sorts of questions, other than by philosophical reasoning? Let's switch gears a moment and talk about some analogous problems: * What is the type signature of the concept of "tree"? * What kind of data structure naturally represents "tree"? * How do "trees" (as high-level abstract objects) interface with the rest of the world? Though they're not exactly the same questions, these are philosophical questions of a qualitatively similar sort to the questions about human values. Empirically, AIs already do a remarkable job reasoning about trees, and finding answers to questions like those above, despite presumably not having much notion of "philosophical reasoning". They learn some data structure for representing the concept of tree, and they learn how the high-level abstract "tree" objects interact with the rest of the (lower-level) world. And it seems like such AIs' notion of "tree" tends to improve as we throw more data and compute at them, at least over the ranges explored to date. In other words: empirically, we seem to be able to solve philosophical problems to a surprising degree by throwing data and compute at
Tips/tricks/notes on optimizing investments

I don't have a detailed analysis to back it up, but my guess is that CEFs are probably superior because call options don't pay dividends so you're not getting as much tax benefit as holding CEFs. It's also somewhat tricky to obtain good pricing on options (the bid-ask spread tends to be much higher than on regular securities so you get a terrible deal if you just do market orders).

3gilch8moI pretty much never use market orders for options. With a little patience and a limit order, you can usually get the midpoint between bid and ask, or close to it. This would be especially important for LEAPS.
Tips/tricks/notes on optimizing investments

For people in the US, the best asset class to put in a tax-free or tax-deferred account seems to be closed-end funds (CEF) that invest in REITs. REITs because they pay high dividends, which would usually be taxed as non-qualified dividends, and CEF (instead of ETF or open-end mutual funds) because these funds can use leverage (up to 50%), and it's otherwise hard or impossible to obtain leverage in a tax-free/deferred account (because they usually don't allow margin). (The leverage helps maximize the value of tax-freeness or deferral, but if you don't like ... (read more)
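Rough arithmetic for why high, mostly non-qualified distributions benefit the most from a tax-advantaged account; the 8% yield and 32% marginal rate below are hypothetical placeholders.

```python
# Annual tax drag avoided by holding a high-yield REIT CEF in a tax-free/deferred account.
reit_cef_yield = 0.08          # illustrative distribution yield, mostly non-qualified
ordinary_tax_rate = 0.32       # illustrative marginal rate on non-qualified dividends

after_tax_in_taxable = reit_cef_yield * (1 - ordinary_tax_rate)
after_tax_in_ira = reit_cef_yield      # no annual tax drag while sheltered

annual_drag_avoided = after_tax_in_ira - after_tax_in_taxable
print(f"annual tax drag avoided: {annual_drag_avoided:.2%} of the position per year")   # ~2.56%
```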

1gilch8moAnother way to get leverage in a retirement account is with leveraged ETFs. I am using some of those in my IRA currently. You can get up to 3x for some index ETFs. I'm still interested in these CEFs for diversification though, how do you find these?
2ESRogs8moAnother way to get leverage in an IRA is to buy long-dated call options (as recommended in Lifecycle Investing [https://www.lesswrong.com/posts/4wL5rcS97rw58G98B/review-of-lifecycle-investing] ). Would you expect CEFs to be superior?
Property as Coordination Minimization

Many different landlords can make many different decisions, whereas one Housing Bureau will either make one decision for everyone, or make unequal decisions in a corrupt way.

In our economy we have all three of:

  1. individual landlords making decisions about property that they directly own
  2. groups of people pooling capital to buy property, then hiring professional managers to make decisions on behalf of the group (c.f. REIT)
  3. property (e.g., public housing projects, parks) that is owned by various government departments/agencies, and managed by bureaucrats
... (read more)
2Vaniver8moMy impression is that this is mostly because of external competitive pressures; my impression is that when the Housing Bureau is the primary source of housing, it is mysteriously the case that better connected people get better housing. When you can buy your own house or enter the affordable housing lottery, most of the rich choose to buy their own house. (It might still be the case that the politically well-connected poor end up with disproportionately many affordable housing slots compared to the unconnected poor, but that's less of a corrupting force because the stakes are smaller overall.) Like, a system where people are free to coordinate at whatever level makes local sense seems like it's obviously superior, and there are ways in which having corporations allows you to hit better points in the 'aggregated individual benefit minus coordination cost' space. The basic question here for me is something like "rule of law" vs. "rule of men"; for example, Washington DC has the Height Act that prohibits buildings above a certain height (actually related to the street width, but in general it's about 11 stories tall). This gives DC its particular character, and ensures the major government buildings remain impressive compared to their surroundings. When embarking on a construction project in DC, there's no question about how high the government will let you build; it simply won't be above the height cap. Similarly, a rule that banned backyard cottages in general, or third floors in general, might make sense, as would a law that caused property taxes to be proportional to demand on public services (like traffic and sewer and trash) or to be periodically reassessed (so that improvements in the property lead to increased taxes) instead of simply reassessed at sale. Similarly it could make sense to tax ongoing construction proportional to the length of the construction. That way the externalities would be priced in, either with a clear policy restriction or a tax based
Predictions for GPT-N

Anyone want to predict when we'll reach the same level of translation and other language capability as GPT-3 via iterated amplification or another "aligned" approach? (How far behind is alignment work compared to capability work?)

3capybaralet8moI think GPT-3 should be viewed as roughly as aligned as IDA would be if we pursued it using our current understanding. GPT-3 is trained via self-supervised learning (which is, on the face of it, myopic), so the only obvious x-safety concerns are something like mesa-optimization. In my mind, the main argument for IDA being safe is still myopia. I think GPT-3 seems safer than (recursive) reward modelling, CIRL, or any other alignment proposals based on deliberately building agent-y AI systems. -------------------- In the above, I'm ignoring the ways in which any of these systems increase x-risk via their (e.g. destabilizing) social impact and/or contribution towards accelerating timelines.
Six economics misconceptions of mine which I've resolved over the last few years

Another big update for me is that according to modern EMH, big stock market movements mostly reflect changes in risk premium, rather than changes in predicted future cash flows. (The recent COVID-19 crash however was perhaps driven even more by liquidity needs.)

Important thing to note here: the Fama and French crowd tend to call a lot of things "risk premiums" which may or may not reflect any actual taste for risk; they're just outputs of a factor model. That doesn't mean that they're meaningless, but calling it a "risk premium" is often rather misleading. I wouldn't be the least bit surprised if one of their time-dependent "risk premiums" were actually just a factor corresponding to liquidity needs.
