Jacob Falkovich

Writes Putanumonit.com and helps run the New York LW meetup. @yashkaf on Twitter.


Are PS5 scalpers actually bad?

PS5 scalpers redistribute consoles away from those willing to burn time to those willing to spend money. Normally this would be a positive — time burned is just lost, whereas the money is just transferred from Sony to the scalpers who wrote the quickest bot. However, you can argue that gaming consoles in particular are more valuable to people with a lot of spare time to burn than to people with day jobs and money!

Disclosure: I'm pretty libertarian and have a full-time job, but because there weren't any good exclusives in the early months I decided to ignore the scalpers. I followed https://twitter.com/PS5StockAlerts and got my console at base price in April, just in time for Returnal. Returnal is excellent and worth getting the PS5 for even if it costs you a couple of hours or an extra $100.

MIRI location optimization (and related topics) discussion

Empire State of Mind

I want to second Daniel and Zvi's recommendation of New York culture as an advantage for Peekskill. An hour away from NYC is not so different from being in NYC — I'm in a pretty central part of Brooklyn and regularly commute an hour to visit friends uptown or further east in BK and Queens. An hour in traffic sucks, an hour on the train is pleasant. And being in NYC is great. 

A lot of the Rationalist-adjacent friends I made online in 2020 have either moved to NYC in the last couple of months or are thinking about it, as rents have dropped up to 20% in some neighborhoods and everyone is eager to rekindle their social life. New York is also a vastly better dating market for male nerds given a slightly female-majority sex ratio and thousands of the smartest and coolest women on the planet as compared to the male-skewed and smaller Bay Area.  

Peekskill is also 2 hours from Philly and 3 from Boston, which is not too much for a weekend trip. That could make it the Schelling point for East Coast megameetups/conferences/workshops since it's as easy to get to as NYC and a lot cheaper to rent a giant AirBnB in.

Won't Someone Think of the Children

I love living in Brooklyn, but the one thing that could make us move in the next year or two is a community of my tribe willing to help each other with childcare, from casual babysitting to homeschooling pods. I'm keenly following the news of where Rationalist groups are settling, especially those who plan to have kids (like us) or already have them. A critical mass of Rationalist parents in Peekskill may be enticing enough for us to move there, since we could have the combined benefits of living space, proximity to NYC, and the community support we would love.

Monastery and Throne

I don't think that nudgers are consequentialists who also try to accurately account for public psychology. I think 99% of the time they are doing something for non-consequentialist reasons and using public psychology as a rationalization. Ezra Klein pretty explicitly cares about advancing various political factions above mere policy outcomes; IIRC, on a recent 80,000 Hours podcast, Rob was trying to talk about outcomes and Klein ignored him to say that it's bad politics.

Politics is way too meta

I understand; I think we have an honest disagreement here. I'm not saying that the media is cringe in an attempt to make it so, as a meta move. I honestly think that the current prestige media establishment is beyond reform, a pure appendage of power. Its impact can grow weaker or stronger, but it will not acquire honesty as a goal (and in fact, it seems to be giving up even on credibility).

In any case, this disagreement is beyond the scope of your essay. What I learn from it is to be more careful of calling things cringe or whatever in my own speech, and to see this sort of thing as an attack on the social reality plane rather than an honest report of objective reality.

Politics is way too meta

Other people have commented here that journalism is in the business of entertainment, or in the business of generating clicks etc. I think that's wrong. Journalism is in the business of establishing the narrative of social reality. Deciding what's a gaffe and who's winning, who's "controversial" and who's "respected", is not a distraction from what they do. It's the main thing.

So it's weird to frame this as "politics is way too meta". Too meta for whom? Politicians care about being elected, so everything they say is by default simulacrum level 3 and up. Journalists care about controlling the narrative, so everything they say is by default simulacrum level 3 and up. They didn't aim at level 1 and miss; they only brush against level 1 on rare occasion, by accident.

Here are some quotes from our favorite NY Times article, Silicon Valley's Safe Space:

"the right to discuss contentious issues"

"The ideas they exchanged were often controversial"

"even when those words were untrue or could lead to violence"

"sometimes spew hateful speech"

"step outside acceptable topics"

"turned off by the more rigid and contrarian beliefs"

"his influential, and controversial, writings"

"push people toward toxic beliefs"

These aren't accidental. Each of the loaded words in these quotes just means "I think this is bad, and you better follow me". They're the entire point of the article — to make it so that it's social reality to think that Scott is bad.

So I think there are two takeaways here. One is for people like us, EAs discussing charity impact or Rationalists discussing life-optimization hacks. The takeaway for us is to spend less time writing about the meta and more about the object level. And then there's a takeaway about them, journalists and politicians and everyone else who lives entirely in social reality. And the takeaway is to understand that almost nothing they say is about objective reality, and that's unlikely to change.

Above the Narrative

I agree that advertising revenue is not an immediate driving force, something like "justifying the use of power by those in power" is much closer to it and advertising revenue flows downstream from that (because those who are attracted to power read the Times).

I loved the rest of Viliam's comment though, it's very well written and the idea of the eigen-opinion and being constrained by the size of your audience is very interesting.

Jacob's Twit, errr, Shortform

Here's my best model of the current GameStop situation, after nerding out about it for two hours with smart friends. If you're enjoying the story as a class warfare morality play you can skip this, since I'll mostly be talking finance. I may look really dumb or really insightful in the next few days, but this is a puzzle I wanted to figure out. I'm making this public so posterity can judge my epistemic rationality skillz — I don't have a real financial stake either way.

Summary: The longs are playing the short game, the shorts are playing the long game.

At $300, GameStop is worth about $21B. A month ago it was worth $1B, so there's $20B at stake between the long-holders and short sellers.
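As a sanity check on those figures (the numbers here are the comment's, not live market data), the implied share count follows directly from the market cap:

```python
# Back-of-the-envelope market cap math using the figures quoted above.
price_now = 300          # $ per share
market_cap_now = 21e9    # ~$21B at ~$300/share
implied_shares = market_cap_now / price_now
print(f"Implied shares outstanding: {implied_shares / 1e6:.0f}M")  # 70M

# Same share count a month earlier, when the company was worth ~$1B:
price_then = 1e9 / implied_shares
print(f"Implied price a month ago: ${price_then:.2f}")
```

Note that ~70M implied shares is almost exactly the 71M shares of short interest mentioned below, i.e. nearly the whole float was sold short.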

Who's long right now? Some combination of WSBers on a mission, FOMOists looking for a quick buck, and institutional money (i.e., other hedge funds). The WSBers don't know fear, only rage and loss aversion. A YOLOer who bought at $200 will never sell at $190, only at $1 or the moon. FOMOists will panic but they're probably a majority and today's move shook them off. The hedgies care more about risk, they may hedge with put options or trust that they'll dump the stock faster than the retail traders if the line breaks.

The interesting question is who's short. Shorts can probably expect to need a margin equal to ~twice the current share price, so anyone who shorted too early or for 50% of their bankroll (like Melvin and Citron) got squeezed out already. But if you shorted at $200 and for 2% of your bankroll you can hold for a long time. The current borrowing fee is 31% APR, or just 0.1% a day. I think most of the shorts are in the latter category, here's why:

Short interest has stayed at 71M shares even as this week saw more than 500M shares change hands. I think this means that new shorts are happy to take the places of older shorts who cash out, they're only constrained by the fact that ~71M are all that's available to borrow. Naked shorts aren't really a thing, forget about that. So everyone short $GME now is short because they want to be, if they wanted to get out they could. In a normal short squeeze the available float is constrained, but this hasn't really happened with $GME.
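The carrying-cost claim above is easy to verify. A minimal sketch, assuming the quoted 31% APR borrow fee, a 365-day count, ~2x margin, and a hypothetical $1M bankroll sized at 2%:

```python
# Sketch: what it costs a patient short to hold, per the figures in the comment.
borrow_apr = 0.31
daily_fee = borrow_apr / 365
print(f"Daily borrow fee: {daily_fee:.4%}")  # ~0.0849%, i.e. roughly 0.1%/day

entry_price = 200                      # shorted at $200
margin_per_share = 2 * entry_price     # margin requirement ~2x share price
bankroll = 1_000_000                   # hypothetical bankroll
shares_short = bankroll * 0.02 / margin_per_share  # 2% of bankroll -> 50 shares
cost_per_day = shares_short * entry_price * daily_fee
print(f"Cost to hold per day: ${cost_per_day:.2f}")
```

At under $10 a day on a $1M bankroll, such a short can indeed wait out the longs almost indefinitely.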

WSBers can hold the line but can't push higher without new money that would either take some of these 71M shares out of borrowing circulation or push the price up so fast that the shorts get margin-called or panic. For the longs to win, they probably need something dramatic to happen soon.

One dramatic thing that could happen is that people who sold the huge amount of call options expiring Friday aren't already hedged and will need to buy shares to deliver. It's unclear whether that's realistic; most option sellers are market makers who don't stay exposed for long. I don't think there were options sold above the current price of $320, so there's no gamma left to squeeze.

I think $GME getting taken off retail brokerages really hurt the WSBers. It didn't cause panic, but it slowed the momentum they so dearly needed and scared away FOMOists. By the way, I don't think the brokers did it to screw the little guy; retail traders are their clients, after all. It just became too expensive for brokerages to make the trade, because they need to post clearing collateral for two days. They were dumb not to anticipate this, but I don't think they were bribed by Citadel or anything.

For the shorts to win they just need to wait it out and not get over-greedy. Eventually the longs will either get bored or turn on each other — with no squeeze this becomes just a pyramid scheme. If the shorts aren't knocked out tomorrow morning by a huge flood of FOMO retail buys, I think they'll win over the next weeks.

Is Rationalist Self-Improvement Real?

This is a self-review, looking back at the post after 13 months.

I have made a few edits to the post, including three major changes:
1. Sharpening my definition of what counts as "Rationalist self-improvement" to reduce confusion. This post is about improved epistemics leading to improved life outcomes, which I don't want to conflate with some CFAR techniques that are basically therapy packaged for skeptical nerds.
2. Addressing Scott's "counterargument from market efficiency" that we shouldn't expect to invent easy self-improvement techniques that haven't been tried.
3. Talking about selection bias, which was the major part missing from the original discussion. My 2020 post The Treacherous Path to Rationality is somewhat of a response to this one, concluding that we should expect Rationality to work mostly for those who self-select into it and that we'll see limited returns to trying to teach it more broadly.

The past 13 months also provided more evidence that epistemic Rationality is ever more instrumentally useful. In 2020 I saw a few Rationalist friends found successful startups and several friends cross the $100k mark in cryptocurrency earnings. And of course, LessWrong led the way on early and accurate analysis of most COVID-related things. One result of this has been increased visibility and legitimacy; another is that Rationalists have had far fewer COVID cases than any other community I know.

In general, this post is aimed at someone who discovered Rationality recently but is lacking the push to dive deep and start applying it to their actual life decisions. I think the main point still stands: if you're Rationalist enough to think seriously about it, you should do it.

Review: LessWrong Best of 2018 – Epistemology

There's a whole lot to respond to here, and it may take the length of Surfing Uncertainty to do so. I'll point instead to one key dimension.

You're discussing PP as a possible model for AI, whereas I posit PP as a model for animal brains. The main difference is that animal brains are evolved and occur inside bodies.

Evolution is the answer to the dark room problem. You come with prebuilt hardware that is adapted to a certain niche, which is equivalent to modeling it. Your legs are a model of the shape of the ground and the size of your evolutionary territory. Your color vision is a model of berries in a bush, and so are the fingers that pick them. Your evolved body is a hyperprior you can't update away. In a sense, you're predicting all the things that are adaptive: being full of good food, in the company of allies and mates, being vigorous and healthy, learning new things. Lying hungry in a dark room creates a persistent error in your highest-order predictive models (the evolved ones) that you can't change.

Your evolved prior supposes that you have a body, and that the way you persist over time is by using that body. You are not a disembodied agent learning things for fun or getting scored on some limited test of prediction or matching. Everything your brain does is oriented towards acting on the world effectively. 

You can see that perception and action rely on the same mechanism in many ways, starting with the simple fact that when you look at something you don't receive a static picture, but rather constantly saccade and shift your eyes, contract and expand your pupil and cornea, move your head around, and also automatically compensate for all of this motion. None of this is relevant to an AI that processes images fed to it "out of the void", and whose main objective function is something other than maintaining homeostasis of a living, moving body.

Zooming out, Friston's core idea is a direct consequence of thermodynamics: for any system (like an organism) to persist in a state of low entropy (e.g. 98°F) in an environment that is higher entropy but contains some exploitable order (e.g. calories aren't uniformly spread in the universe but concentrated in bananas), it must exploit this order. Exploiting it is equivalent to minimizing surprise, since if you're surprised there is some pattern of the world that you failed to make use of (free energy).
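In the standard variational notation (my gloss, not the comment's), "minimizing surprise" means minimizing an upper bound on it: free energy F decomposes into surprise plus a non-negative KL term, for observations o, hidden states s, and an approximate posterior q(s):

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{-\ln p(o)}_{\text{surprise}}
  \;+\; \underbrace{D_{\mathrm{KL}}\!\left(q(s)\,\middle\|\,p(s \mid o)\right)}_{\ge\, 0}
  \;\ge\; -\ln p(o)
```

Since the KL term can't go below zero, driving F down forces surprise down — the "exploit the environment's order to persist" claim in probabilistic form.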

Now apply this basic principle to your genes persisting over an evolutionary time scale and to your body persisting over a time scale of decades, and the stage is set for PP applied to animals.

For more, here's a conversation between Clark, Friston, and an information theorist about the Dark Room problem.
