Open & Welcome Thread - February 2020

by ryan_b · 1 min read · 4th Feb 2020 · 115 comments


Open Threads
Personal Blog

If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.


Anyone else look at the coronavirus outbreak and think this is how a future existential accident will play out, with policy always one step behind what's necessary to control it, because governments don't want to take the risk of "overreacting" and triggering large or very large political and economic costs "for no good reason"? So they wait until absolutely clear evidence emerges, by which time it will be too late.

Why will governments be more afraid of overreacting than underreacting? (Parallel: We don't seem to see governments doing anything in this outbreak that could be interpreted as overreacting.) Well, every alarm up to that point will have been a false alarm. (Parallel: SARS, MERS, Ebola, every flu pandemic in recent history.)

Matthew Barnett (5 points, 9mo): I share this reaction [https://www.lesswrong.com/posts/MnrQMLuEg5wZ7f4bn/matthew-barnett-s-shortform#4bJtYxFtjHyrJxwEs]. I think that a lot of people are under-reacting due to misperception of overreaction, signaling wisdom, and vague outside-view stuff. I can tell because so far everyone who has told me to "stop panicking" won't give me any solid evidence for why my fears are overblown. It now seems plausible that unless prominent epidemiologists are just making stuff up and the death rate is also much smaller than its most commonly estimated value, between 60-160 million people will die from it within about a year. Yet when I tell people this they just brush it off! [ETA: Please see comments below. This estimate does not imply I think there is a greater than 50% chance of this happening.]
Rafael Harth (4 points, 9mo): I see this problem all the time with regard to things that can be classified as "childish". Besides pandemics, the most striking examples in my mind are risk of nuclear war and risk of AI [https://www.lesswrong.com/posts/2CgFWnjPqj5Ljb89n/excitement-vs-childishness], but I expect there are lots of others. I don't exactly think of it as signaling wisdom, but as signaling being a serious-person-who-understands-that-unserious-problems-are-low-status (the difference being that it doesn't necessitate thinking of yourself as particularly "smart" or "wise").
shminux (2 points, 9mo): That seems high. If you assume that it's as contagious as the regular flu, and given that every year about 5-15% of people get infected (https://en.wikipedia.org/wiki/Influenza#Epidemic_and_pandemic_spread), that makes roughly 700 million infected; and given an expected mortality rate in the low single digits (currently 7% of all closed cases and dropping, estimated 1% in general), we arrive at the 10 million deaths estimate without any containment measures in place. Given the containment measures, the number of infections and deaths is likely to be a fraction of that, likely under a million dead.
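The back-of-envelope arithmetic above can be sketched in a few lines. All numbers are the comment's own illustrative assumptions, not forecasts:

```python
# Sketch of the no-containment estimate described above (illustrative only).
world_population = 7e9

# Assume a flu-like attack rate of ~10% (midpoint of the cited 5-15% range).
attack_rate = 0.10
infected = world_population * attack_rate  # roughly 700 million

# Assume a ~1% case fatality rate (the "estimated 1% in general" figure).
cfr = 0.01
deaths_no_containment = infected * cfr

print(f"infected: {infected:,.0f}")
print(f"deaths (no containment): {deaths_no_containment:,.0f}")  # ~7 million
```

With these inputs the result lands at about 7 million, the same order of magnitude as the "10 million deaths" figure in the comment.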
Matthew Barnett (7 points, 9mo): From this thread [https://forum.effectivealtruism.org/posts/HbqQyTDjvw2wGGKYw/covid-19-brief-for-friends-and-family]:

* The coronavirus spreads a little faster than the flu.
* You have some natural immunity to flu even though each season the strain is different. You probably have no immunity against this coronavirus.
* We have a reliable vaccine against seasonal flu. We will not have a vaccine or effective treatment for coronavirus for some time.
* Seasonal flu is very well characterized and understood. This virus is still under intensive study, and all the numbers I give have uncertainty, which means that it may be worse than our best guess. Long-term effects of catching the virus are unknown.

Also, my estimates from a few days ago were out of date; I did more research in the intervening time and found that the case fatality rate was probably lower than I was previously led to believe (I did research back in January and then stopped for a while since it was draining). My current estimate that you can quote me on is that there is a 10% chance of the virus killing more than 50 million people. [ETA: Update, I did more research. 5% is now probably my current estimate.] I used language that did not reflect my probability estimates here, as I used the word "plausible" but not in a sense that implied probable.
John_Maxwell (2 points, 9mo): Can't remember where, but I remember reading that for people in their 20s and 30s, the death rate is only 0.1%.
Rafael Harth (1 point, 9mo): https://predictionbook.com/predictions/198261
Lukas_Gloor (1 point, 9mo): One would think the incentives for an international body like the WHO would be different, but the way they handled it sadly suggests otherwise. (That said, I don't actually know whether a stronger early reaction by the WHO would have changed anything, because it seems like most of the necessary stuff happens at national levels anyway.)

See these news stories about the WHO being blamed for being too aggressive about swine flu, which probably caused it to learn a wrong lesson:

brianwang712 (3 points, 8mo): You might also be interested in the 1976 mass vaccination program in the US for swine flu, which was a case of perceived overreaction (given that the anticipated pandemic never materialized) and also hurt the reputation of public health generally: https://www.discovermagazine.com/health/the-public-health-legacy-of-the-1976-swine-flu-outbreak

Or in "The Cutter Incident" of 1955, where a rush to get a polio vaccine out in advance of the next polio season resulted in some batches containing live polio virus, with several children who received the vaccine actually getting polio: https://en.wikipedia.org/wiki/Cutter_Laboratories#The_Cutter_incident

There's definitely a history of incidents in public health of perceived overreaction followed by public backlash, which could potentially be playing into public health officials' heads nowadays. I don't know if becoming more conservative and less quick to take action is necessarily a wrong lesson, though – even if you think, just on the numbers, that taking preventative measures in each of these incidents was correct ex ante given the stakes involved, reputational risks are real and have to be taken into account. As much as "take action to prepare for low-probability, high-consequence scenarios when the expected cost < expected benefit" applies to personal preparation, it doesn't translate easily to governmental action, at least not when "expected cost" doesn't factor in "everyone will yell at you and trust you less in the future if the low-probability scenario doesn't pan out, because people don't do probabilities well." This does put us in a bit of a bind, since ideally you'd want public health authorities to be able to take well-calibrated actions against <10%-likely scenarios. But they are, unfortunately, constrained by public perception.
[comment deleted] (3 points, 9mo)

Statements of (purported) empirical fact are often strong Bayesian evidence of the speaker's morality and politics (i.e., what values one holds, what political coalition one supports), and this is a root cause of most group-level bad epistemics. For example, someone who thinks eugenics is immoral is less likely (than someone who doesn't think that) to find it instrumentally useful to make a statement like (paraphrasing) "eugenics may be immoral, but it is likely to work in the sense that selective breeding works for animals." So when someone says that, it is evidence that they do not think eugenics is immoral, and therefore that they do not belong to a political coalition that holds "eugenics is immoral" as part of its ideology.

I think for many people this has been trained into intuition/emotion (you automatically think that someone is bad/evil or hate them if they express a statement of fact that your political enemies are much more likely to make than your political allies) or even reflexive policy (automatically attack anyone who makes such statements).

This seems pretty obvious to me, but some people do not seem to be aware of it (e.g., Richard Dawkins seemed surprised by people's reaction …)

Wei_Dai (9 points, 9mo): An idea for explaining this: some group happened to adopt such norms due to a historical accident, and there happened to be enough low-hanging epistemic fruit that could be picked up by a group operating under such norms that the group became successful enough for the norms to spread by conquest and emulation. This also suggests that one reason for the decay of these norms is that we are running out of low-hanging fruit.
Zack_M_Davis (4 points, 9mo): I discuss this in "Heads I Win, Tails?—Never Heard of Her" [https://www.lesswrong.com/posts/DoPo4PDjgSySquHX8/heads-i-win-tails-never-heard-of-her-or-selective-reporting] ("Reality itself isn't on anyone's side, but any particular fact, argument, sign, or portent might just so happen to be more easily construed as "supporting" the Blues or the Greens [...]"). I suspect Dawkins was motivatedly playing dumb, or "living in the should-universe" [https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists]. Indignation (e.g., at people motivatedly refusing to follow a simple logical argument because of their political incentives) often manifests itself as an expression of incomprehension, but is distinguishable from literal incomprehension (e.g., by asking Dawkins to bet beforehand on what he thinks is going to happen after he tweets that).
Wei_Dai (4 points, 9mo): Robin Hanson [https://twitter.com/robinhanson/status/1229820954768838656] also doesn't seem to be aware of what I wrote in the parent comment. But why are some people low-decouplers? I think it's because "statements of (purported) empirical fact are often strong Bayesian evidence of the speaker's morality and politics", so we can't simply "choose the opposite" without understanding this and its implications. To put it another way, a lot of the time when someone says "By X, I don't mean Y" they actually secretly do believe Y, so if another person "CHOOSES to hear Y", that's not a completely unreasonable heuristic, and we can't just "not accept their claim that they can decide what words mean & how language works" without acknowledging this.
Wei_Dai (5 points, 9mo): Copy-pasting a followup to this with Robin Hanson via DM (with permission).

Robin: You can of course suspect people of many things using many weak clues. But you should hold higher standards of evidence when making public accusations that you say orgs should use to fire people, cancel speeches, etc.

Me: My instinct is to support/agree with this, but (1) it's not an obvious interpretation of what you tweeted, and (2) I think we need to understand why the standards of evidence for making public accusations and for actual firing/canceling have fallen so low (which my own comment didn't address either) and what the leverage points are for changing that; otherwise we might just be tilting at windmills when we exhort people to raise those standards (or worse, making suicide charges, if we get lumped in with "public enemies").

I think I figured out a stunningly simple way to modulate interstellar radio signals so that they contain 3-D spatial information on the point of origin of an arbitrarily short non-repeating signal. I applied this method to well-known one-off galactic radio transients and got sane results. I would love to write this up for arXiv.

Anybody with a background in astronomy who can help make sure I write this up for arXiv in a way that uses the proper terminology and software?

Stefan42 (5 points, 9mo): That is a very broad description – are you talking about locating Fast Radio Bursts? I would be very surprised if that was easily possible. Background: Astronomy/Cosmology PhD student.
CellBioGuy (2 points, 9mo): I'm afraid it actually only works for narrow-band radio signals of potentially technological origin in the galactic disk. I will send more via PM.

Global equity markets may have underestimated the economic effects of a potential COVID-19 pandemic because the only historical parallel to it is the 1918 flu pandemic (which was likely worse than COVID-19 due to a higher fatality rate), and stock markets didn't drop that much then. But maybe traders haven't taken into account (and I just realized this) that there was war-time censorship in effect which strongly downplayed the pandemic and kept workers going to factories, a big disanalogy between the two cases, so markets could drop a lot more this time around. The upshot is that maybe it's not too late to short the markets.

ChristianKl (5 points, 9mo): How can a private individual with a few thousand dollars to invest effectively trade on the idea that equity markets underestimate this? Might this be a way to make money for rationalists?

I bought some S&P 500 put options (SPXW Apr 17 2020 3000 Put (PM), to be specific) a couple of weeks ago. They were 40% underwater at some point because the market kept going up (which confused me a lot), but are up 125% as of today. (Note that it's very easy to lose 100% of your investment when trading options. In my case, all it would take is holding the options until April 17 with the S&P 500 never dropping to 3000.) I had to open a brokerage account (most have no minimum, I think), then apply to trade options, then wait a day to be approved. You can also sell stocks short. You can also bet against foreign markets and specific stocks this way.

The above is for information purposes only. It is not intended to be investment advice. Seek a duly licensed professional for investment advice.

The options I bought are up 700% since I bought them, implying that as of 2/10/2020 the market thought there was less than a 1/8 chance things would be as bad as they are today. At least for me this puts a final nail in the coffin of the EMH.
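The "less than 1/8" inference can be sketched under a deliberately simplified model: treat the option's purchase price as its risk-neutral expected value, ignore discounting and intermediate outcomes, and assume the option is worth roughly its current price in the "things get this bad" state and roughly nothing otherwise.

```python
# Simplified implied-probability reading of "up 700%" (i.e., trading at 8x cost).
# Assumptions: risk-neutral pricing, no discounting, a two-outcome world where
# the option pays ~today's price in the bad state and ~0 otherwise.
purchase_price = 1.0                  # normalized premium paid on 2/10/2020
current_price = purchase_price * 8    # "up 700%" means the price is now 8x

# purchase_price ≈ p * current_price  =>  p ≈ purchase_price / current_price
implied_probability = purchase_price / current_price
print(implied_probability)  # 0.125, i.e., the market priced in < 1/8 odds
```

This is a sketch of the reasoning, not a rigorous market-implied probability; real option prices embed a distribution over many outcomes, not two.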

[Chart for SPXW Apr 17 2020 3000 Put]

Added on Mar 24: Just in case this thread goes viral at some point, to prevent a potential backlash against me or LW (due to being perceived as caring more about making money than saving lives), let me note that on Feb 8 I thought of and collected a number of ideas for preventing or mitigating the pandemic that I foresaw and subsequently sent them to several people working in pandemic preparedness, and followed up with several other ideas as I came across them.

Thank you for sharing this info. My faith is now shaken.

From someone replying to you on Twitter:

Someone made a profitable trade ergo markets aren’t efficient?

This is why I said "at least for me". You'd be right to discount the evidence and he would be right to discount the evidence even more, because of more room for selection bias.

ETA: Hmm, intuitively this makes sense but I'm not sure how it squares up with Aumann Agreement. Maybe someone can try to work out the actual math?

Pattern (4 points, 9mo): Would pre-registration fix the issue?
Mason Bially (1 point, 5mo): I always thought the EMH was obviously invalid due to its connection with the P=NP issue (which is to say, the EMH holds iff P=NP).

The position is now up 1000%. I find myself unsure what to do at this point (aside from taking some profit out). Should I close out the position, and if so, put the money into what?

Also, I find myself vexed with thoughts like "if only I had made this other trade, I could have made even more profit" or "if only I had put even more money into the bet ..." How do professional or amateur traders deal with this?

An update on this trade in case anyone is interested. The position is now up 1500%. I also have another position which is up 2300% (it's a deeper out-of-the-money put, which I realized would be an even better idea after seeing a Facebook post by Danielle Fong). For proper calibration I should mention that a significant part of these returns is due to chance rather than skill:

  1. VIX (a measure of stock market volatility priced into options) was unreasonably low when I bought the puts (apparently because traders got used to central banks rescuing the stock market on every downturn), meaning the put options were underpriced in part due to that, but I didn't know this.
  2. Russia decided not to cooperate with Saudi Arabia in lowering oil production, in order to hurt the US shale oil industry. This is not something I could have reasonably predicted.
  3. I also didn't predict that the CDC would bungle their testing kits, and the FDA would delay independent testing by others so much, thus making containment nearly impossible in the US.

Another reason for attributing part of the gains (from betting on the coronavirus market crash) to luck, from Rob Henderson's newsletter which BTW I highly recommend:

The geneticist Razib Khan has said that the reason the U.S. took so long to respond to the virus is that Americans do not consider China to be a real place. For people in the U.S., “Wuhan is a different planet, mentally.” From my view, it didn’t seem “real” to Americans (or Brits) until Italy happened.

Not only have I lived in China, my father was born in Wuhan and I've visited there multiple times.

ryan_b (2 points, 5mo): It feels like your background should be attributed differently than things like the Saudi-Russian spat, or the artificially deflated VIX. In Zvi's terminology this is an Unknown Known [https://www.lesswrong.com/posts/7w2tCRocphK9yW8rG/unknown-knowns]; it isn't as though you weren't updating based on it. It was merely an unarticulated component of the prior.
Matthew Barnett (3 points, 8mo): After today's crash, what are you at now?

Up 2600% and 5200%. ETA: Now back down to 2300% and 4200%.

yangshuo1015 (1 point, 8mo): Have you sold those put options by now? Looks like the Fed and Treasury's $6 trillion stimulus package boosted the market a lot. I had a similar put position which dropped significantly during the past two days of market rally. Do you think it is still good to hold the put options?

I did sell some of the puts, but not enough of them and not near enough to the bottom to not leave regrets. I definitely underestimated how fast and strong the monetary and fiscal responses were, and paid too much attention to epidemiological discussions relative to developments on those policy fronts. (The general lesson here seems to be that governments can learn to react fast on something they have direct experience with, e.g., Asian countries with SARS, the US with the 2008 financial crisis.) I sold 1/3 of remaining puts this morning at a big loss (relative to paper profits at the market bottom) and am holding the rest since it seems like the market has priced in the policy response but is being too optimistic about the epidemiology. The main reason I sold this morning is that the Fed might just "print" as much money as needed to keep the market at its current level, no matter how bad the real economy gets.

homsit (1 point, 8mo): Why are deeper out-of-the-money puts better here? Have been scratching my head at this one for a while, but haven't been able to figure it out.
Wei_Dai (5 points, 8mo): One explanation is that the deeper out-of-the-money put (which remains out-of-the-money) benefits from both a fall in the underlying security and an increase in VIX. The shallower out-of-the-money put (which became in-the-money as a result of the market drop) benefits from the former, but not so much from the latter. Maybe another way to explain it is that the deeper out-of-the-money put was more mispriced to begin with.
amanaplan (3 points, 8mo): For a given dollar notional investment, you are buying more vega with deeper OTM puts (or just more contracts). Basically the same reason that winning a 20-to-1 bet pays more than winning a 1-to-1 (even odds) bet.
jmh (3 points, 9mo): Not 100% on this, but I suspect the in-the-money puts start to be dominated by their intrinsic value, so you have to pay for that in-the-money portion of the option price. The out-of-the-money put is pure volatility.
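The effect the commenters describe can be illustrated with a toy Black–Scholes calculation. The scenario below is hypothetical (index falls 3300 → 2900 while implied volatility spikes 18% → 45%, loosely echoing early 2020); real SPX options carry a volatility skew that this flat-vol sketch ignores, so take the multiples as qualitative, not quantitative.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S, K, T, r, sigma):
    """Black-Scholes price of a European put."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# Hypothetical before/after market states (flat vol, zero rates, made-up dates).
before = dict(S=3300, T=0.20, r=0.0, sigma=0.18)  # calm market
after  = dict(S=2900, T=0.10, r=0.0, sigma=0.45)  # crash + vol spike later

for K in (3000, 2700):  # shallower vs. deeper out-of-the-money strike
    p0, p1 = bs_put(K=K, **before), bs_put(K=K, **after)
    print(f"K={K}: {p0:10.4f} -> {p1:10.2f}  ({p1 / p0:8.1f}x)")
```

Running this shows the deeper 2700-strike put multiplying far more than the 3000-strike put, because its starting price was almost pure extrinsic value and it benefits disproportionately from the volatility jump — exactly the vega argument above.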

Epistemic status: I am not a financial advisor. Please double-check anything I say before taking me seriously. But I do have a little experience trading options. I am also not telling you what to do, just suggesting some (heh) options to consider.

Your "system 1" does not know how to trade (unless you are very experienced, and maybe not even then). Traders who know what they are doing make contingency plans in advance to avoid dangerous irrational/emotional trading. They have a trading system with rules to get them in and out. Whatever you do, don't decide it on a whim. But doing nothing is also a choice.

Options are derivatives, which makes their pricing more complex than the underlying stock's. Options have intrinsic value, which is what they're worth if exercised immediately; the rest is extrinsic value, their perceived potential to gain more intrinsic value before they expire. Options with no intrinsic value are called out of the money. Extrinsic value is affected by the time remaining and by implied volatility (IV), the market-estimated future variance of the underlying. When the market has a big selloff like this, IV increases, which inflates the extrinsic value of options.

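The intrinsic/extrinsic split described above can be written out directly. The quotes below are hypothetical numbers chosen only to illustrate the decomposition:

```python
# Sketch of the intrinsic/extrinsic value split for a put option.
def put_intrinsic(spot, strike):
    """Value if exercised immediately: max(strike - spot, 0)."""
    return max(strike - spot, 0.0)

def put_extrinsic(spot, strike, market_price):
    """Whatever the market pays beyond intrinsic value."""
    return market_price - put_intrinsic(spot, strike)

# Hypothetical quote: a 3000-strike put with the index at 2900, trading at 180.
print(put_intrinsic(2900, 3000))         # 100.0 (in the money by 100 points)
print(put_extrinsic(2900, 3000, 180.0))  # 80.0 (time value + IV premium)
print(put_intrinsic(3100, 3000))         # 0.0 (out of the money: all extrinsic)
```

A volatility spike inflates only the extrinsic component, which is why out-of-the-money options (100% extrinsic) are the most sensitive to IV changes.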
Wei_Dai (5 points, 9mo): Thanks, this is a really helpful intro to options. One thing you didn't address, which makes me hesitant to do any more options trading, is the bid-ask spread, which can easily be 10% or more for some of the options I'm looking at. I don't know how to tell when the bid-ask spread makes strategies such as "sell 1 put and buy 2 puts at a lower strike" not worth doing (because potential profit is eaten up by transaction costs). Picking the strike price and expiration date is also a mystery to me. I did it by intuition and it seems to have worked out well enough, but was probably far from optimal. I don't see how a trading system can incorporate new, complex, and often ambiguous evidence in real time. I definitely take your point about emotional trading being dangerous, though. What does this mean? Is there a book you can recommend for a more complete education about options? Maybe I can quickly flip through it to help me figure out what to do.
gilch (3 points, 9mo): I meant close one of the profitable puts you already own, and then use the money to buy two more. (Incidentally, the spread you are describing is called a backspread, which is also worth considering when you expect a big move, as the short option can offset some of the problematic Greeks of the long ones.) Maybe you can vary the ratios; it depends on what's available. I don't know how many puts you have, but how aggressive you should be depends on your forecast, account size, and risk tolerance. I don't know your transaction costs either, but commissions have gotten pretty reasonable these days. This can vary widely among brokers. TD Ameritrade, for example, charges only $0.65 per contract and lets you close contracts under $0.05 for free. Tastyworks charges $1.00 per contract, but always lets you close for free. They also cap their commissions at $10 per leg (which can add up if you trade at high enough volume). Firstrade charges $0. That is not a typo. (There are still some regulatory fees that add up to less than a cent.) If your commissions are much higher than these, maybe you need a new broker.
gilch (2 points, 9mo): Systematic trading is not the same thing as algorithmic trading. They're related, but algorithmic trading is taken to the extreme where a computer can do all the work. Normal systematic trading can have a human element: you can provide the "forecast" component (instead of technical signals or something), and the rules tell you what to do based on your current forecast. You need to have an exit already planned when you get in. Not just how to deal with a win, but also how to handle a loss, or you may be tempted to take profits too early, or be in denial and ride a loss too far. The adage is "cut your losses short and let your profits run". Emotional trading tends to do the opposite and lose money. (BTW, the rule is the opposite for short option spreads.) Carver's Systematic Trading is a good introduction to the concept. This one I have read.
gilch (2 points, 9mo): Options are much less liquid than the underlying, since the market is divided among so many strikes and dates. If the spread is less than 10% of the ask price, that's actually considered pretty good for an option. You can also look at open interest (the number of open contracts) and volume (the number traded today) for each contract to help judge liquidity (this information should also be available from the broker). Typically, strike prices closer to the underlying price are more liquid. Also, the monthly (third-Friday) contracts tend to be more liquid than the Weeklys. (Weeklys weren't available before, so monthly contracts are something of a Schelling point. They also open sooner.) Do not trade options with market orders. Use limit orders and make an offer at about the midpoint between bid and ask. The market maker will usually need some time to get around to your order. You'll usually get a fill within 15 minutes. If not, you may have to adjust your price a little before someone is willing to take the other side of the deal. A little patience can save a lot of money.
gilch (1 point, 9mo): Gamma is the rate of change of Delta. It's how curved your P&L graph is. Gamma opposes Theta. If you want more Deltas (like owning shares) and you expect a big move, Gammas are a way to get them cheaply, because they turn into Deltas. (Of course, Deltas are negative for long puts.) Options are complex, but maybe not that complex. Option pricing models do use differential equations, but everybody uses computers for that. Trading options is not beyond the reach of anyone who passed a calculus class, but I'm still not sure if you can pick it up that quickly. I did not learn all of this from a textbook, but I know there are books that cover it. Hull's Options, Futures and Other Derivatives is the introductory textbook I hear recommended, but I have not read it myself (you might want to skip the futures chapters). There may be shorter introductions. I think Tastyworks was supposed to have a good intro somewhere.
gilch (1 point, 9mo): Use the Greeks! Watch them and adjust them to your needs. They trade off against each other, but a spread can have the sum or difference of them. Keep in mind that extrinsic value is perceived potential, and the Greeks make a lot more sense. The strikes nearest the underlying price have the most extrinsic value and liquidity. Those deeper in the money have more Delta. Each Delta is like owning a share (puts have negative Deltas). Those further out of the money have more Gamma for the price. These relationships are nonlinear, because the underlying price variance is assumed to have a normal distribution (which is close enough to true most of the time). Theta is not constant; it gets stronger the closer you get to expiration. Think of future variance as a bell curve spreading out from the current price like <. There's much less time to vary near the tip of the curve. For this reason, when holding a long option position, you probably want 60-90 days so you're not exposed to too much Theta. But that also means more Vega, due to the higher extrinsic value.
Matthew Barnett (5 points, 9mo): On one hand, people in the mainstream still seem too optimistic to me. Like, apparently Moody's Analytics puts the odds of a pandemic at only 40% [https://www.cnbc.com/2020/02/28/coronavirus-odds-of-a-pandemic-doubled-to-40percent-moodys-analytics-says.html]. This seems super optimistic to me. I don't see why people are still forecasting a majority probability that it will be contained. On the other hand, I've been convinced to be more optimistic than the 15-20% prediction of disaster I had the other day. I did a more detailed dive into the evidence for a case fatality rate in the 2-3% range and I now think that it's very likely lower. Still, at 0.5%-1% it would be much more severe than an average flu season, and the market might take it seriously simply due to the shock factor. There is also the potential for an effective anti-viral [https://www.metaculus.com/questions/3745/will-remdesivir-be-fda-approved-to-treat-covid-19-by-the-end-of-2020/] being developed by the end of the year, which makes me a bit more hopeful. I am not well calibrated about whether the ~12% market drop is appropriate given the evidence above.
ioannes_shade (4 points, 9mo): I bought some $APT (US-based mask manufacturer) in mid-January. Sold off most of it this week as it 8x'd. I put most of the earnings into other coronavirus-relevant names: $AIM, $INO, $COCP, $MRNA, $TRIB. Also considering $GILD but haven't bought any yet ($MRNA and $GILD aren't really corona pure-plays because they're large-ish biotech companies with multiple product lines). I'll revisit these allocations when the market opens on Monday. I don't have a good sense of how smart this is... there's a lot of hype in this sector and I haven't carefully diligenced any of these picks; they're just names that seem to be doing something real and haven't had a crazy run-up yet. I also pulled back a lot of my portfolio into cash.
Three-Monkey Mind (2 points, 9mo): Habituation, meditation, and/or alcohol.
steven0461 (6 points, 9mo): If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something. (Edit: I think the evidence, while impressive, is a little weaker than it seems at first glance, because my impression from Metaculus [http://metaculus.com] is that the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about the economic implications of a widespread virus.)
Matthew Barnett (5 points, 9mo): Do you care to share those reasons? I've also been following Metaculus, and my impression has been a slow progression of updates as the outbreak has gotten bigger, rather than a big update. However, the stock market looks like it did a big update.
steven0461 (2 points, 9mo): I don't know what the reasons are off the top of my head. I'm not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1-in-8 bet.
Gurkenglas (1 point, 9mo): The easy way is for Wei_Dai to take your money, invest it as he would his own, and take 10% of the increase.
Matthew Barnett (6 points, 9mo): One way of framing the EMH is to say that in normal circumstances, it's hard to beat the market. But we are in a highly abnormal circumstance – same with Bitcoin. One could imagine that even if the EMH is false in its strong form, you have to wait years before seeing each new opportunity. This makes the market nearly unexploitable.

the absolutely important part that people seem to miss with a basic 101 understanding of EMH is "hard" in no way means "impossible"

People do hard things all the time! It takes work and time and IQ and learning from experience but they do it.

3Wei_Dai9moI'm not sure I understand your point. Investing in an index fund lets you double your money every 5 to 10 years. If every 10 years there's an opportunity to quickly 5x your money or more (on top of the normal market growth), how does it make sense to call that "nearly unexploitable"?
2Matthew Barnett9moHmm, true, but if you took that argument to its logical extreme the existence of a single grand opportunity implies the market is exploitable. I mean technically, yeah, but when I talk about EMH I mostly mean that $20 bills don't show up every week.
1Phil9moThat's a tautology: Anytime I can beat the market is a highly abnormal time. You can only beat the market in a highly abnormal time.
2Matthew Barnett9moEh, I'm not so sure. If I noticed that every Wednesday the S&P went up 1%, and then fell 1% the next day, that would allow me to regularly beat it, no? Unless we are defining "abnormal" in a way that makes reference to the market.
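The hypothetical Wednesday/Thursday pattern above would indeed be a large exploitable edge. A toy calculation (hypothetical numbers, not a real market pattern; the short leg is simplified to gaining ~1%) shows how large:

```python
# Toy calculation: suppose the S&P reliably rose 1% every Wednesday
# and fell 1% every Thursday. Being long only on Wednesdays and short
# only on Thursdays (treating the short leg as gaining ~1%, a
# simplification) captures both moves.
weekly_gain = 1.01 * 1.01           # +1% on the long day, +1% on the short day
annual_gain = weekly_gain ** 52     # compounded over 52 weeks

# Holding through both days nets roughly nothing.
buy_and_hold = (1.01 * 0.99) ** 52

print(f"pattern trader: {annual_gain:.2f}x per year")   # ~2.81x
print(f"buy-and-hold:   {buy_and_hold:.2f}x per year")  # ~0.99x
```

A repeatable ~2% weekly edge compounds to nearly tripling each year, which is exactly the kind of free money the EMH says shouldn't persist.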
3John_Maxwell9moIt seems like your opinion has changed a lot since our conversation 7 months ago, when you wrote [https://www.lesswrong.com/posts/HPxTwyrgzCRWWoCPh/open-and-welcome-thread-august-2019#EjbuxbYwySGsgGnHm] :
1MTGandP7moNow that April 17 has passed, how much did you end up making on this bet?

I rolled a lot of the puts into later expirations, which have become almost worthless. I did cash out or convert into long positions some of them, and made about 10x my initial bet as a result. (In other words I lost about 80% of my paper profits.) It seems like I have a tendency to get out of my bets too late (same thing happened with Bitcoin), which I'll have to keep in mind in the future. BTW I wrote about some of my other investments/bets recently at https://ea.greaterwrong.com/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use/comment/RBXqgYshRhCJsCvWG, in case you're interested.

1ioannes_shade9moDo you know of a way to buy puts with <$1000? I don't understand options well and would like to make some small options trades to drive my learning, but from what I see on my broker (Fidelity), all puts for something like $SPX cost several thousand at minimum.
8gilch9moI feel that I should also point out that long options are a risky play. They do eventually expire, and may expire worthless. You have to get the timing right as well as the direction, and deflating volatility could mean they lose most of their value sooner than you expect. You could lose your entire investment. If you want to experiment, either do a "paper" trade (simulate and track it, but don't actually do it), or make sure it's money you can afford to lose on a very small percentage of your account. 5% of the account is considered big for a single trade, even for experienced option traders who know what they are doing, and I basically never go that high on a long position. I'd recommend you keep it to 1% or less.
8gilch9moYou can try puts on SPY instead. It's an ETF that tracks the same index: the S&P 500, but the share price is 1/10th, so the options are proportionally cheaper as well. There's also the XSP mini options, but I think SPY still has better liquidity. Also, if you have the right kind of account, you can try spreads, buying one option and selling another to help pay for it. You could also consider a call option on an inverse index ETF, like SH, which is designed to rise when SPX falls. Its share price is even lower than SPY, currently about 1/100th of SPX or under $30/share. Most options on this will cost hundreds or less per contract, not thousands.
2ioannes_shade9moThank you – super helpful.
7jmh9moSelling short is not what someone with a few thousand should be doing. Puts, on the other hand, allow you to set your losses when you enter the position.
2[comment deleted]8mo
6John_Maxwell9moYou can buy stock in companies like Zoom and Slack that enable remote work. I did this about a month ago and their stocks have gone up about 30% since then.
1gilch9moYou could buy an inverse ETF, like SH, for a short-term bearish forecast. An advantage of inverse ETFs over options is that they do not require you to apply for a margin account or option trading privileges. [Epistemic status: I am not a financial advisor! Double check anything I say. For educational purposes only. This is information to consider, not a recommendation to buy anything in particular. I have no idea where the market bottom is. Maybe we're already there.] SH closely tracks the daily -1x performance of the S&P 500, but may not be aligned that well over long periods. There are a number of other inverse ETFs you might consider, including -2x (SDS) and -3x (SPXU) leveraged ones (which have even worse alignment over long periods, especially during high-volatility periods, such as right now), as well as ETFs tracking the inverse of other indexes. For longer periods consider "safe haven" investments like TLT.
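The long-period misalignment gilch mentions comes from daily rebalancing, sometimes called "volatility drag". A minimal two-day sketch with made-up returns shows the effect:

```python
# Two-day toy example with made-up returns: the index drops 10%, then
# rises ~11.11% back to its starting level. A daily -1x inverse ETF
# rebalances each day, so it does NOT end where it started.
index_daily_returns = [-0.10, 1 / 9]   # 1/9 ~= +11.11% recovers the -10%

index_level = 1.0
inverse_level = 1.0
for r in index_daily_returns:
    index_level *= (1 + r)      # the index round-trips to 1.0
    inverse_level *= (1 - r)    # the ETF targets -1x of each DAY's move

print(f"index:   {index_level:.4f}")    # 1.0000
print(f"inverse: {inverse_level:.4f}")  # ~0.9778 (volatility drag)
```

The index round-trips to its starting level, but the inverse ETF loses about 2.2%; leveraged (-2x, -3x) funds lose proportionally more, which is why these products fit short-term bets rather than long holds.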

Offering 100-300h of technical work on an AI Safety project

I am a deep learning engineer (2y exp). I currently develop vision models to be used on satellite images, and I also do some software engineering around that (LinkedIn profile: https://www.linkedin.com/in/maxime-riche-73696182/). In my spare time, I am organizing an EA local group in Toulouse (France), learning RL, doing a research project on RL for computer vision (only expecting indirect utility from this), and developing an EAA (EffectiveAnimalAdvocacy) tool. I have been in the French EA community for 4 years. In 2020, I chose to work part time to dedicate 2 to 3 days of work per week to EA-aligned projects. Thus, for the next 8 months, I have ~10h/week that I want to dedicate to assisting an AI safety project. I am not looking for funds, nor to publish a paper or blog post myself. To me, the ideal project would be:

  • a relevant technical AI safety project (research or not). I am looking for advice on the "relevant" part.
  • where I would be able to help the project to achieve better quality results than otherwise without my contribution. (e.g. through writing better code, doing more experiments, testing other de
... (read more)

A descriptive model of moral change, virtue signaling, and cause allocation that I thought of partly in response to Paul Christiano's Moral public goods. (It was previously posted deep in a subthread and I'm reposting it here to get more attention and feedback before possibly writing it up as a top-level post.)

  1. People are socially rewarded for exhibiting above average virtue/morality (for certain kinds of virtue/morality that are highly regarded in their local community) and punished for exhibiting below average virtue/morality.
  2. As a result, we evolved two internal mechanisms: preference alteration (my own phrase) and preference falsification. Preference alteration is where someone's preferences actually change according to the social reward gradient, and preference falsification is acting in public according to the reward gradient but not doing so in private. The amounts of preference alteration and preference falsification can vary between individuals. (We have preference alteration because preference falsification is cognitively costly, and we have preference falsification because preference alteration is costly in terms of physical resources.)
  3. Preference alteration changes o
... (read more)
9paulfchristiano10motl;dr: seems like you need some story for what values a group highly regards / rewards. If those are just the values that serve the group, this doesn't sound very distinct from "groups try to enforce norms which benefit the group, e.g. public goods provision" + "those norms are partially successful, though people additionally misrepresent the extent to which they e.g. contribute to public goods." Calling this the "public goods model" still seems backwards. "Larger countries have higher ODA" is a prediction of "the point of ODA is to satisfy the donor's consequentialist altruistic preferences." The "public goods model" is an attempt to model the kind of moral norms / rhetoric / pressures / etc. that seem non-consequentialist. It suggests that such norms function in part to coordinate the provision of public goods, rather than as a direct expression of individual altruistic preferences. (Individual altruistic preferences will sometimes be why something is a public good.) I agree that there are likely to be failures of this system (viewed teleologically as a mechanism for public goods provision or conflict resolution) and that "moral norms are reliably oriented towards provide public goods" is less good than "moral norms are vaguely oriented towards providing public goods." Overall the situation seems similar to a teleological view of humans. I agree with this, but it seems orthogonal to the "public goods model," this is just about how people or groups aggregate across different values. I think it's pretty obvious in the case of imperfectly-coordinated groups (who can't make commitments to have their resource shares change as beliefs about relative efficacy change), and I think it also seems right in the case of imperfectly-internally-coordinated people. 
Relevant links: if we can't lie to others, we will lie to ourselves [https://sideways-view.com/2016/11/26/if-you-cant-lie-to-others-you-must-lie-to-yourself/] , the monkey and the machine [https://sideways-view.co
3Wei_Dai10moI didn't have a clear model in mind when I wrote that, and just wrote down "overcompensate" by intuition. Thinking more about it, I think a model that makes sense here is to assume that your private actions can be audited by others at some cost (think of Red Guards going into people's homes to look for books, diaries, assets, etc., to root out "counter-revolutionaries"), so if you have something to hide you'd want to avoid getting audited by avoiding suspicion, and one way to do that is to put extra effort into public displays of virtue. People whose private actions are virtuous would not have this extra incentive. I guess I've been using "virtue signaling" because it's an established term that seems to be referring to the same kind of behavior that I'm talking about. But I acknowledge that the way I'm modeling it doesn't really match the concept of "signaling" from economics, and I'm open to suggestions for a better term. (I'll also just think about how to reword my text to avoid this confusion.)
3Wei_Dai10moIt's entirely possible that I misunderstood or missed some of the points of your Moral public goods [https://www.lesswrong.com/posts/pqKwra9rRYYMvySHc/moral-public-goods] post and then reinvented the same ideas you were trying to convey. By "public goods model" I meant something like "where we see low levels of redistribution and not much coordination over redistribution, that is best explained by people preferring a world with higher level of redistribution but failing to coordinate, instead of by people just not caring about others." I was getting this by generalizing from your opening example: Your sections 1 and 2 also seemed to be talking about this. So this is what my "alternative model" was in reaction to. The "alternative model" says that where we see low levels of redistribution (to some target class), it's because people don't care much about the target class of redistribution and assign the relevant internal moral faction a small budget, and this is mostly because caring about the target class is not socially rewarded. Your section 3 may be saying something similar to what I'm saying, but I have to admit I don't really understand it (perhaps I should have tried to get clarification earlier but I thought I understood what the rest of the post was saying and could just respond to that). Do you think you were trying to make any points that have not been reinvented/incorporated into my model? If so please explain what they were, or perhaps do a more detailed breakdown of your preferred model, in a way that would be easier to compare with my "alternative model"? 
I think it depends on a lot of things so it's hard to give a full story, but if we consider for example the question of "why is concern about 'social justice' across identity groups currently so much more highly regarded/rewarded than concerns about 'social justice' across social classes" the answer seems to be that a certain moral memeplex happened to be popular in some part of academia and then
2ChristianKl9moI don't think that it's just social justice across identity groups being at the right place at the right time. As a meme it has the advantage that it allows people who are already powerful enough to effect social structures to argue why they should have more power. That's a lot harder for social justice across social classes.
7Vaniver10moThis seems incomplete; if I hold money in different currencies, it seems right for me to adopt 'market rates' for conversion between them, which seems like preference alteration. But the root cause isn't that it'd be cognitively costly for me to keep a private ledger of how I want to exchange between pounds and yen and dollars and a separate public ledger, it's that I was only ever using pounds and yen and dollars as an investment vehicle. It seems quite possible that similar things are true for preferences / time use / whatever; someone who follows TV shows so that they have something to talk about with their coworkers is going to just follow whatever shows their coworkers are interested in, because they're just using it as an investment vehicle instead of something to be pursued in its own right. It also seems like the factions changing directions is quite important here; you might not change the total budget spent on global altruism at all while taking totally different actions (i.e. donating to different charities).
6Wei_Dai10moSorry for the delayed reply, but I was confused by your comment and have been trying to figure out how to respond. Still not sure I understand but I'm going to take a shot. Watching a TV show in order to talk about it with coworkers is an instance of instrumental preferences (which I didn't talk about specifically in my model but was implicitly assuming as a background concept). When I wrote "preference alteration" I was referring to terminal preferences/values. So if you switch what show you watch in order to match your coworkers' interests (and would stop as soon as that instrumental value went away), that's not covered by either "preference alteration" or "preference falsification", but just standard instrumental preferences. However if you're also claiming to like the show when you don't, in order to fit in, then that would be covered under "preference falsification". Does this indicate a correct understanding of your comment, and does it address your point? If so, it doesn't seem like the model is missing anything ("incomplete"), except I could perhaps add an explicit explanation of instrumental preferences and clarify that "preference alteration" is talking about terminal preferences. Do you agree? Sure, this is totally compatible with my model and I didn't intend to suggest otherwise.
8Vaniver10moI think the core thing going on with my comment is that I think for humans most mentally accessible preferences are instrumental, and the right analogy for them is something like 'value functions' instead of 'reward' (as in RL). Under this view, preference alteration is part of normal operation, and so should probably be cast as a special case of the general thing, instead of existing only in this context. When someone who initially dislikes the smell of coffee grows to like it, I don't think it's directly because it's cognitively costly to keep two books, and instead it's because they have some anticipation-generating machinery that goes from anticipating bad things about coffee to anticipating good things about coffee. [It is indirectly about cognitive costs, in that if it were free you might store all your judgments ever, but from a functional perspective downweighting obsolete beliefs isn't that different from forgetting them.] And so it seems like there are three cases worth considering: given a norm that people should root for the sports team where they grew up, I can either 1) privately prefer Other team while publicly rooting for Local team, 2) publicly prefer Local team in order to not have to lie to myself, or 3) publicly prefer Local team for some other reason. (Maybe I trust the thing that generated the norm is wiser than I am, or whatever.) Maybe another way to think about this is how the agent relates to the social reward gradient; if it's just a fact of the environment, then it makes sense to learn about it the way you would learn about coffee, whereas if it's another agent influencing you as you influence it, then it makes sense to keep separate books, and only not do so when the expected costs outweigh the expected rewards.
2TurnTrout10moI agree. As far as I can tell, people seem to be predicting their on-policy Q function when considering different choices. See also attainable utility theory [https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/C74F7QTEAYSTGAytJ] and the gears of impact [https://www.lesswrong.com/s/7CdoznhJaLEKHwvJW/p/coQCEe962sjbcCqB9].
2elityre9mo[The following is a musing that might or might not be adding anything.] One thing that comes to mind here is framing myself as a mesa-optimizer in a (social) training process. Insofar as the training process worked, and I was successfully aligned, my values are the values of the social gradient. Or I might be an unaligned optimizer intending to execute a treacherous turn (though in this context, the "treacherous turn" is not a discrete moment when I change my actions, but rather a continual back-and-forth between serving selfish interests and serving the social morality, depending on the circumstances). I'm not sure that that is always what it feels like. I can feel pride at my moral execution.

After the hospitals fill up, the COVID-19 death rate is going to get a lot higher. How much higher? What's the fatality rate from untreated COVID-19?

This article may be an answer: it lumps together ICU, mechanical ventilation, and death into a "primary composite end point". That seems like an OK proxy for "death without treatment", right?

If so, Table 1 suggests a fatality rate of 6% overall: 0% for ages 0-14, 2% for ages 15-49, 7% for ages 50-64, and 21% for ages 65+. There's more in the table about pre-existing conditions and so on.

(ETA one more: 2.5% for people of all ages with no preexisting condition.)

Thoughts? Any other data?

(ETA: This should be viewed as a lower bound on fatality rate, see comments.)

7cousin_it9moLook at Table 3, most people in the study received some kind of treatment, in particular 40% received oxygen. You can't figure out the untreated fatality rate from this.
3steve21529moMissed that! Thanks! I agree. It's a lower bound. Interesting list of treatments. I'm a bit confused why a majority needed antibiotics, for example. I guess the virus opens the door for bacterial infections...?
2romeostevensit9mopneumonia comorbid in a huge number of cases.
4steve21529moI thought pneumonia was a condition / symptom / cluster of symptoms, not a disease. You can have pneumonia caused by COVID-19, or pneumonia caused by a bacterial infection, or pneumonia caused by some other viral infection, etc. It's confusing because there's a so-called "pneumonia vaccine". It's really a "vaccine against a particular bacterial infection that often causes pneumonia". You can correct me if I'm wrong :)
7romeostevensit9moHaving a respiratory infection makes you much more vulnerable to bacterial pneumonia secondary infection which is what is being seen in a lot of the deadly cases.
3steve21529moThis not-particularly-reliable source [https://twitter.com/katyw2004/status/1236848300143280128] says "So far, there have been very few concurrent or subsequent bacterial infections, unlike Influenza where secondary bacterial infections are common and a large source of additional morbidity and mortality". So ... I guess the doctors were giving antibiotics as a preventive measure that turned out to be unnecessary? Maybe??
3steve21529moThanks for explaining!!
1leggi9moFor cases receiving antibiotics I would want to distinguish between prophylactic and therapeutic prescribing. Are they being given "just in case" or are they being used to treat a bacterial infection (confirmed by testing)? The general health/disease history and current medications of the patients most affected should also be considered when looking at the stats.
4Wei_Dai9moIt seems to be a good paper to consider, which I hadn't seen before. The number of people reaching "primary composite end point" would also probably increase without treatment though, so it can only serve as a lower bound. The same table gives 15.7% as "severe cases", so 6-16% seems a reasonable range, which is not too different from 5-20% I estimated earlier [https://www.lesswrong.com/posts/RukXjEvMfqDKRJaup/what-will-be-the-big-picture-implications-of-the-coronavirus?commentId=2mcb949cKj2TxpiRK] .
1steve21529moGood point. Thanks!

Steven Pinker is running a general-education course on rationality at Harvard University. There are some interesting people booked as guest lecturers. Details on Pinker's website, including links that will get you to video of all the lectures (there have been three so far).

I've watched only the first, which suggests unsurprisingly that a lot of the material will be familiar to LW regulars.

The syllabus also includes (either as required or optional reading) https://www.lesswrong.com/posts/ujTE9FLWveYz9WTxZ/what-cost-for-irrationality , https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-stheorem ,
https://www.lesswrong.com/posts/QxZs5Za4qXBegXCgu/introduction-to-game-theorysequence-guide , https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/ , https://80000hours.org/key-ideas/ , and https://arbital.com/p/bayes_rule/?l=1zq ; its "other resources" sections also include the following mentions:

LessWrong.com is a forum for the “Rationality community,” an informal network of bloggers who seek to call attention to biases and fallacies and apply reason more rigorously (sometimes to what may seem like extreme lengths).

Slate Star Codex https://slatestarcodex.com/ is an anagram of “Scott Alexander,” the author of the tutorial recommended above and a prominent member of the “rationality community.” This deep and witty blog covers diverse topics in social science, medicine, events, and everyday life.

80,000 Hours, https://80000hours.org/, an allusion to the number of hours in your career, is a
... (read more)
2gjm10moNice to see that Steven Pinker has the same N-blindness as Scott himself :-).
1Raj Thimmiah5moWhat do you mean by n-blindness?
2gjm5moSLATE STAR CODEX is almost an anagram of SCOTT ALEXANDER. I think I remember Scott saying it was meant to actually be an anagram and he goofed. Pinker says it's an anagram. (But I misremembered: it has an extra S as well as missing an N, so it's pole-blindness rather than just N-blindness. Also, perhaps I'm also misremembering about the origins of the name; maybe Scott didn't actually goof at all, but just decided to make do with an imperfect anagram.)

Is there something you think we can all do on LessWrong to help with the coronavirus?

We have a justified practical advice thread and some solid posts about quarantine preparations, not acting out of social fears, and a draft model of risks from using delivery services.

We also have a few other questions:

Finally, here's the advice that my house and some friends put together.

I'm interested if people have ideas for better ways we could organise info on LessWrong or something.

An observation on natural language being illogical: I've noticed that at least some native Chinese speakers use 不一定 (literally "not certain") to mean "I disagree", including when I say "I think there's 50% chance that X." At first I was really annoyed with the person doing that ("I never said I was certain!") but then I noticed another person doing it so now I think it's just a standard figure of speech at this point, and I'm just generally annoyed at ... cultural evolution, I guess.

Google's AI folks have made a new chatbot using a transformer-based architecture (but a network substantially bigger than full-size GPT2). Blog post; paper on arXiv. They claim it does much better than the state of the art (though I think everyone would agree that the state of the art is rather unimpressive) according to a human-evaluated metric they made up called "sensibleness and specificity average", which means pretty much what you think it does, and apparently correlates with perplexity in the right sort of way.
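For reference, the SSA metric is just a simple average of two human-judged rates. A minimal sketch, using invented labels (my reading of how the metric is computed; see the paper for the actual rating protocol):

```python
# Sketch of the "sensibleness and specificity average" (SSA) metric:
# raters label each chatbot response as sensible (does it make sense
# in context?) and specific (is it not a generic reply?); SSA is the
# average of the two rates. The labels below are invented for
# illustration.
labels = [
    # (sensible, specific)
    (True,  True),    # on-topic, substantive reply
    (True,  False),   # "I don't know" -- sensible but generic
    (False, False),   # nonsensical replies also count as non-specific
    (True,  True),
]

sensibleness = sum(s for s, _ in labels) / len(labels)   # 0.75
specificity = sum(sp for _, sp in labels) / len(labels)  # 0.50
ssa = (sensibleness + specificity) / 2

print(f"SSA = {ssa:.3f}")   # SSA = 0.625
```

The "specific" criterion is what penalizes safe, generic responses like "I don't know", which otherwise score well on sensibleness alone.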

I'd be curious how people relate to this Open Thread compared to their personal ShortForm posts. I'm trying to get more into LessWrong posting and don't really understand the differences between these.

This has probably already been discussed, and if so please link me to that discussion if it's easy.

6Wei_Dai9moI haven't seen any previous discussion about this. I think the main relevant difference for me is that it's much easier to find someone's ShortForm posts than their Open Thread posts (which are hidden among all of their other comments). I don't really see the point of ShortForm posts compared to regular posts, so I basically post in Open Thread if I don't want something to be part of my "permanent record" (e.g., posting a rough idea or a draft for feedback), and make a regular post otherwise.

Further alarming evidence of humanity's inability to coordinate (especially in an emergency), and also relevant to recent discussions around terminology: ‘A bit chaotic.’ Christening of new coronavirus and its disease name create confusion

1Lukas_Gloor9moThis news article [https://coconuts.co/bali/news/indonesian-official-says-japanese-coronavirus-patient-who-visited-bali-infected-with-sars-cov-2-not-covid-19/] has a bizarre story about this:

I noticed that even though I may not be as optimized in the matter of investments as others (hi, Wei Dai!), the basic rationality principles still help a lot. This morning, when I went to invest my usual chunk of my paycheck, I reflected on my actions and realized that the following principles were helping me (and had helped me in the past) buy stuff that was likely undervalued:

  • pre-commitment (to a certain fund allocation)
  • think about it for more than 5 min (putting in the up-front leg work and reading to determine my investing approach)
  • use the try hard
... (read more)

ETA: It's out of stock again just a couple of hours later, but you can sign up to be notified when it's back in stock.

Possible source of medicine for COVID-19. Someone in the FB group Viral Exploration suggested inhousepharmacy.vu as an online pharmacy to buy medicine without prescription. I don't know them personally but they seem trustworthy enough. (ETA: Another seemingly trustworthy person has also vouched for it.) Hydroxychloroquine has been out of stock until a few minutes ago. I bought some myself in case the medical system get overwhelmed. Relevan

... (read more)
1jmh9moIt sounded like that was for a treatment, which suggests you need to know you have COVID-19 before starting to take the med. Is the thinking that you might not be able to get treatment after the diagnosis?
3Wei_Dai9moI imagine that hospitals will be overwhelmed, if 40-70% of the population eventually get COVID-19, so I'm buying mostly in case I or my family get symptoms and it's impossible to obtain medical attention at that point. I'm also considering (or will consider later) taking hydroxychloroquine for prophylaxis (during local peak of the infection in my area), since it's used for that purpose for malaria [https://www.cdc.gov/malaria/resources/pdf/fsp/drugs/hydroxychloroquine.pdf].

Hi! I have been reading lesswrong for some years but have never posted, and I'm looking for advice about the best path towards moving permanently to the US to work as a software engineer.

I'm 24, single, currently living in Brazil and making 13k a year as a full-stack developer in a tiny company. This probably sounds miserable to a US citizen but it's actually a decent salary here. However, I feel completely disconnected from the people around me; the rationalist community is almost nonexistent in Brazil, especially in a small town like the on... (read more)

1gilch9moI work as a software developer for an American company, but my perspective is mostly limited to my own experience. I have also been involved in some hiring decisions and interviews. You can sometimes get hired without a degree, if you can prove you have the skills. LinkedIn is helpful for finding work if you can connect with recruiters. It may be easier to find a job when you already have one, as that proves you can currently do work. Open-source work was helpful for me. The quality matters more than the quantity. It can show that you know how to use version control, and, depending on the project, that you can coordinate work with a team.
2Ricardo Meneghin9moThanks for giving your perspective! Good to know some hire without requiring a degree. Guess I'll start building a portfolio that can demonstrate I have the necessary skills, and keep applying.

As of right now, I think that if business-as-usual continues in AI/ML, most unskilled labor in the transportation/warehousing of goods will be automatable by 2040.

Scott Anderson, Amazon’s director of Robotics, puts it at over 10 years. https://www.theverge.com/2019/5/1/18526092/amazon-warehouse-robotics-automation-ai-10-years-away.

I don’t think it requires any fundamental new insights to happen by 2040, only engineering effort and currently available techniques.

I believe the economic incentives will align with this automation once it becomes achievable.

Tran

... (read more)
3MakoYass9moCan I infer via nominative determinism that Scott Anderson is a friend of the rationalist community?
4landfish9moHe is indeed.
0aleph_four9moLet’s add another Scott to our coffers.
1MakoYass9moThe other other Scott A