If it’s worth saying, but not worth its own post, you can put it here.

Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.

The Open Thread sequence is here.


On Functional Decision Theory (Wolfgang Schwarz)

I recently refereed Eliezer Yudkowsky and Nate Soares's "Functional Decision Theory" for a philosophy journal. My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception because the authors are well-known figures from outside academia, and I want to explain why their account has a hard time gaining traction in academic philosophy. I also want to explain why I think their account is wrong, which is a separate point.

I'm not going to go comment on his blog, because his confusion about the theory (supposedly) isn't related to his rejection of the paper, and because I think talking to a judge about the theory out of band would bias their judgement of the clarity of the writing in future (it would come to seem more clear and readable to them than it is, just as it does to me), and is probably bad civics. But I just have to let this out, because someone is wrong on the internet, dammit.

FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed. How is that even relevant? You are being blackmailed.

So he's using a counterexample that's predicated on a logical inconsistency and could not happen. If a decision theory fails in situations that couldn't really happen, that's actually not a problem.
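To make the inconsistency concrete, here's a minimal toy model (the agent types, the perfect-predictor assumption, and the framing are mine, not anything from the paper or the review):

```python
# Toy model of the blackmail scenario with a perfect predictor.
# The agent types and framing are hypothetical, for illustration only.

AGENT_TYPES = ["payer", "refuser"]  # the disposition the blackmailer predicts

def blackmailer_sends_demand(agent_type: str) -> bool:
    """A perfect predictor only blackmails agents it predicts will pay."""
    return agent_type == "payer"

for agent_type in AGENT_TYPES:
    print(agent_type, "blackmailed:", blackmailer_sends_demand(agent_type))

# payer blackmailed: True
# refuser blackmailed: False
#
# The branch "refuser AND blackmailed" never occurs, so judging the
# refuser's policy by its payoff in that branch is judging it on a world
# the problem setup rules out.
```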

Again

If you are in Newcomb's Problem with Transparent Boxes and see a million in the right-hand box, you again fare better if you follow CDT. Likewise if you see nothing in the right-hand box.

is the same deal: if you would take both boxes, that's logically inconsistent with the money having been there to take; that scenario can't happen (or happens only rarely, if he's using that version of Newcomb's Problem), and it's no mark against a decision procedure if it doesn't win in those conditions. It will never have to face those conditions.
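Here's the transparent-boxes case as a quick enumeration (my own toy framing; I'm assuming a perfect predictor and the standard stylized payoffs):

```python
# Transparent Newcomb with a perfect predictor, as an enumeration.
# A policy maps what the agent sees ("full"/"empty") to an action.

policies = {
    "one-boxer": {"full": "one-box", "empty": "one-box"},
    "two-boxer": {"full": "two-box", "empty": "two-box"},
}

for name, policy in policies.items():
    # The predictor fills the box iff it predicts one-boxing on seeing it full.
    box_filled = policy["full"] == "one-box"
    seen = "full" if box_filled else "empty"
    action = policy[seen]
    payoff = (1_000_000 if box_filled and action == "one-box" else 0)
    payoff += 1_000 if action == "two-box" else 0
    print(f"{name}: sees {seen} box, {action}es, gets ${payoff}")

# one-boxer: sees full box, one-boxes, gets $1000000
# two-boxer: sees empty box, two-boxes, gets $1000
#
# The scenario "sees the full box, then two-boxes" is never reached for
# either policy: the predictor only fills the box for agents who won't
# do that.
```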

What if someone is set to punish agents who use FDT, giving them choices between bad and worse options, while CDTers are given great options? In such an environment, the engineer would be wise not to build an FDT agent.

What if someone is set to punish agents who use CDT, giving them choices between bad and worse options, while FDTers are given great options? In such an environment, the engineer would be wise not to build a CDT agent.

What if a skeleton pops out in the night and demands that you recite the Magna Carta or else it will munch your nose off? Will you learn to recite the Magna Carta in light of this damning thought experiment?

It is impossible to build an agent that wins in scenarios that are specifically contrived to foil that kind of agent. It will always be possible to propose specifically contrived situations for any proposed decision procedure.

Aaargh, this has all been addressed by the Arbital articles! :<

dxu:
FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed. How is that even relevant? You are being blackmailed.

I'm quoting this because, even though it's wrong, it's actually an incredibly powerful naive intuition. I think many people who have internalized TDT/UDT/FDT-style reasoning have forgotten just how intuitive the quoted block is. The unstated underlying assumption here (which is unstated because Schwarz most likely doesn't even realize it is an assumption) is extremely persuasive, extremely obvious, and extremely wrong:

If you find yourself in a particular situation, the circumstances that led you to that situation are irrelevant, because they don't change the undeniable fact that you are already here.

This is the intuition driving causal decision theory, and it is so powerful that mainstream academic philosophers are nearly incapable of recognizing it as an assumption (and a false assumption at that). Schwarz himself demonstrates just how hard it is to question this assumption: even when the opposing argument was laid right in front of him, he managed to misunderstand the point so hard that he actually used the very mistaken assumption the paper was criticizing as ammunition against the paper. (Note: this is not intended to be dismissive toward Schwarz. Rather, it's simply meant as an illustrative example, emphasizing exactly how hard it is for anyone, Schwarz included, to question an assumption that's baked into their model of the world.) And even if you already understand why FDT is correct, it still shouldn't be hard to see why the assumption in question is so compelling:

How could what happened in the past be relevant for making a decision in the present? The only thing your present decision can affect is the future, so how could there be any need to consider the past when making your decision? Surely the only relevant factors are the various possible futures each of your choices leads to? Or, to put things in Pearlian terms: it's known that the influence of distant causal nodes is screened off by closer nodes through which they're connected, and all future nodes are only connected to past nodes through the present--there's no such thing as a future node that's directly connected to a past node while not being connected to the present, after all. So doesn't that mean the effects of the past are screened off when making a decision? Only what's happening in the present matters, surely?

Phrased that way, it's not immediately obvious what's wrong with this assumption (which is the point, of course, since otherwise people wouldn't find it so difficult to discard). What's actually wrong is something that's a bit hard to explain, and evidently the explanation Eliezer and Nate used in their paper didn't manage to convey it. My favorite way of putting it, however, is this:

In certain decision problems, your counterfactual behavior matters as much as--if not more than--your actual behavior. That is to say, there exists a class of decision problems where the outcome depends on something that never actually happens. Here's a very simple toy example of such a problem:

Omega, the alien superintelligence, predicts the outcome of a chess game between you and Kasparov. If he predicted that you'd win, he gives you $500 in reality; otherwise you get nothing.

Strictly speaking, this actually isn't a decision problem, since the real you is never faced with a choice to make, but it illustrates the concept clearly enough: the question of whether or not you receive the $500 is entirely dependent on a chess game that never actually happened. Does that mean the chess game wasn't real? Well, maybe; it depends on your view of Platonic computations. But one thing it definitely does not mean is that Omega's decision was arbitrary. Regardless of whether you feel Omega based his decision on a "real" chess game, you would in fact either win or not win against Kasparov, and whether you get the $500 really does depend on the outcome of that hypothetical game. (To make this point clearer, imagine that Omega actually predicted that you'd win. Surprised by his own prediction, Omega is now faced with the prospect of giving you $500 that he never expected he'd actually have to give up. Can he back out of the deal by claiming that since the chess game never actually happened, the outcome was up in the air all along and therefore he doesn't have to give you anything? If your answer to that question is no, then you understand what I'm trying to get at.)
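If it helps, here's roughly what Omega's procedure looks like as code--a minimal sketch, where the "game" is a deterministic stand-in keyed on a made-up rating rather than real chess, and the payout numbers are from the toy example above:

```python
# Toy sketch of Omega's procedure. The "game" is a deterministic
# stand-in, not real chess; the point is only that the payout is fixed
# by a game that never happens in reality.

def simulated_game_outcome(your_rating: int, kasparov_rating: int = 2850) -> str:
    """Deterministic stand-in for the hypothetical chess game."""
    return "win" if your_rating > kasparov_rating else "loss"

def omega_payout(your_rating: int) -> int:
    # Omega never stages the game; he pays out based on the simulation.
    return 500 if simulated_game_outcome(your_rating) == "win" else 0

print(omega_payout(1500))  # 0: the hypothetical you loses, so no $500
print(omega_payout(2900))  # 500: the counterfactual win is what pays out
```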

So outcomes can depend on things that never actually happen in the real world. Cool, so what does that have to do with the past influencing the future? Well, the answer is that it doesn't--at least, not directly. But that's where the twist comes in:

Earlier, when I gave my toy example of a hypothetical chess game between you and Kasparov, I made sure to phrase the question so that the situation was presented from the perspective of your actual self, not your hypothetical self. (This makes sense; after all, if Omega's prediction method was based on something other than a direct simulation, your hypothetical self might not even exist.) But there was another way of describing the situation:

You're going about your day normally when suddenly, with no warning whatsoever, you're teleported into a white void of nothingness. In front of you is a chessboard; on the other side of the chessboard sits Kasparov, who challenges you to a game of chess.

Here, we have the same situation, but presented from the viewpoint of the hypothetical you on whom Omega's prediction is based. Crucially, the hypothetical you doesn't know that they're hypothetical, or that the real you even exists. So from their perspective, something random just happened for no reason at all. (Yes, yes, if Omega used some method other than a simulation to make his prediction, the hypothetical you wouldn't have existed and wouldn't have had a perspective--but hey, that doesn't stop me from writing from their perspective, right? After all, real people write from the perspectives of unreal people all the time; that's just called writing fiction. And besides, we've already established that real or unreal, the outcome of the game really does determine whether you get the $500, so the thoughts and feelings of the hypothetical you are nonetheless important in that they partially determine the outcome of the game.)

And now we come to the final, crucial point that makes sense of the blackmail scenario and all the other thought experiments in the paper, the point that Schwarz and most mainstream philosophers haven't taken into account:

Every single one of those thought experiments could have been written from the perspective, not of the real you, but a hypothetical, counterfactual version of yourself.

When "you're" being blackmailed, Schwarz makes the extremely natural assumption that "you" are you. But there's no reason to suppose this is the case. The scenario never stipulates why you're being blackmailed, only that you're being blackmailed. So the person being blackmailed could be either the real you or a hypothetical. And the thing that determines whether it's the real you or a mere hypothetical is...

...your decision whether or not to pay up, of course.

If you cave in to the blackmail and pay up, then you're almost certainly the real deal. On the other hand, if you refuse to give in, it's very likely that you're simply a counterfactual version of yourself living in an extremely low-probability (if not outright inconsistent) world. So your decision doesn't just determine the future; it also determines (with high probability) which you "you" are. And so then the problem simplifies into this: which you do you want to be?

If you're the real you, then life kinda sucks. You just got blackmailed and you paid up, so now you're down a bunch of money. If, on the other hand, you're the hypothetical version of yourself, then congratulations: "you" were never real in the first place, and by counterfactually refusing to pay, you just drastically lowered the probability of your actual self ever having to face this situation and (in the process) becoming you. And when things are put that way, well, the correct decision becomes rather obvious.
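To put rough numbers on "the correct decision becomes rather obvious", here's a quick expected-value sketch, where the predictor has accuracy p and the dollar amounts are illustrative assumptions of mine, not from any of the papers:

```python
# Expected cost of each policy as a function of predictor accuracy p.
# Made-up figures: paying costs $100, having the threat carried out
# costs $1000, and not being blackmailed costs $0.

def expected_cost(policy: str, p: float) -> float:
    if policy == "pay":
        # You get blackmailed (and pay) whenever the predictor reads you right.
        return p * 100
    # You get blackmailed (and eat the threat) only on a misprediction.
    return (1 - p) * 1000

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p}: pay -> {expected_cost('pay', p):.0f}, "
          f"refuse -> {expected_cost('refuse', p):.0f}")

# For p > ~0.91 the refuser already does better in expectation, and at
# p = 1 the refuser is never blackmailed at all--which is the sense in
# which "which you do you want to be?" has an obvious answer.
```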

But this kind of counterfactual reasoning is extremely counterintuitive. Our brains aren't designed for this kind of thinking (well, not explicitly, anyway). You have to think about hypothetical versions of yourself that have never existed and (if all goes well) will never exist, and therefore only exist in the space of logical possibility. What does that even mean, anyway? Well, answering confused questions like that is pretty much MIRI's goal these days, so I dunno, maybe we can ask them.

So your decision doesn't just determine the future; it also determines (with high probability) which you "you" are.

Worse: it doesn't change who you are; you are the person being blackmailed. This you know. What you don't know is whether you exist (or ever existed). Whether you ever existed is determined by your decision.

(The distinction from the quoted sentence may matter if you put less value on the worlds of people slightly different from yourself, so you may prefer to ensure your own existence, even in a blackmailed situation, over the existence of the alternative you who is not blackmailed but who is different and so less valuable. This of course involves unstable values, but it motivates the "degree of existence" phrasing of the effect of decisions over the "change in the content of the world" phrasing, since the latter doesn't let us weigh whole alternative worlds differently.)

"FDT means choosing a decision algorithm so that, if a blackmailer inspects your decision algorithm, they will know you can't be blackmailed. If you've chosen such a decision algorithm, a blackmailer won't blackmail you."

Is this an accurate summary?

I only just saw this comment. I think that there's a lot of value in imagining the possibility of both a real you and a hypothetical you, but I expect Schwarz would object to this, as the problem statement says "you". By default, this is assumed to refer to a version of you that is real within the scope of the problem, not a version that is hypothetical within this scope. Indeed, you seem to recognise that your reasoning isn't completely solid when you ask what these hypothetical selves even mean.

Fortunately, this is where some of my arguments from Deconfusing Logical Counterfactuals can come in. If we are being blackmailed, and we are only blackmailed if we would pay out, then it is logically impossible for us to not pay out, so we don't have a decision theory problem. Such a problem requires multiple possible actions, and the only way to obtain this is by erasing some of the information. So if we erase the information about whether or not you are being blackmailed, we end up with two possibilities: a) you paying in the situation and getting blackmailed, or b) you not being the kind of person who would pay and not getting blackmailed. And in that case we'd pick option b).

We can then answer your question about how your decision affects the past: it doesn't. The past is fixed and cannot be changed. So why does it look like your decision changes the past? Actually, your decision is fixed as well. You can't actually change what it will be. However, both your decision and the past may be different in counterfactuals; after all, they aren't factual. If we have a perfect predictor and we posit a different decision, then we must necessarily posit a different prediction in the past to maintain consistency. Even though we might talk colloquially about changing a decision, what's actually happening is that we are positing a different counterfactual.
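As a minimal sketch of that "erasing information" move (the four-world enumeration here is my own illustration):

```python
# Enumerate candidate worlds, then keep only the ones consistent with a
# perfect predictor.

worlds = [
    {"policy": "pay",    "blackmailed": True},
    {"policy": "pay",    "blackmailed": False},
    {"policy": "refuse", "blackmailed": True},
    {"policy": "refuse", "blackmailed": False},
]

def consistent(world: dict) -> bool:
    # A perfect predictor blackmails you iff your policy is to pay.
    return world["blackmailed"] == (world["policy"] == "pay")

print([w for w in worlds if consistent(w)])
# Only two worlds survive: (pay, blackmailed) and (refuse, not blackmailed).
# "Changing your decision" is really a move between these two
# counterfactuals, with the past prediction co-varying to keep each
# world internally consistent.
```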

Belatedly, is this a fair summary of your critique?

When someone thinks about another person (e.g. to predict whether they'll submit to blackmail), the act of thinking about the other person creates a sort of 'mental simulation' that has detailed conscious experiences in its own right. So you never really know whether you're a flesh-and-blood person or a 'mental simulation' based on a flesh-and-blood person.

Now, suppose you seem to find yourself in a situation where you've been blackmailed. In this context, it's reasonable to wonder whether you're actually a flesh-and-blood person who's been blackmailed -- or merely a 'mental simulation' that exists in the mind of a potential blackmailer. If you're a mental simulation, and you care about the flesh-and-blood person you're based on, then you have reason to resist blackmail. The reason is that the decision you take as a simulation will determine the blackmailer's prediction about how the flesh-and-blood person will behave. If you resist blackmail, then the blackmailer will predict the flesh-and-blood person will refuse blackmail and therefore decide not to blackmail them.

If this is roughly in the right ballpark, then I would have a couple responses:

  1. I disagree that the act of thinking about a person will tend to create a mental simulation that has detailed conscious experiences in its own right. This seems like a surprising position that goes against the grain of conventional neuroscience and views on the philosophy of consciousness. As a simple illustrative case, suppose that Omega makes a prediction about Person A purely on the basis of their body language. Surely thinking "This guy looks really nervous, he's probably worried he'll be seen as the sort of guy who'll submit to blackmail -- because he is" doesn't require bringing a whole new consciousness into existence.

  2. Suppose that when a blackmailer predicts someone's behavior, they do actually create a conscious mental simulation. Suppose you don't know whether you're this kind of simulation or the associated flesh-and-blood person, but you care about what happens to the flesh-and-blood person in either case. Then, depending on certain parameter values, CDT does actually say you should resist blackmail. This is because there is some chance that you will cause the flesh-and-blood person to avoid being blackmailed. So CDT gives the response you want in this case. (A toy version of this calculation is sketched below.)
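Here's a toy version of that calculation (the credence q of being the simulation and the dollar figures are illustrative assumptions of mine, not from the paper):

```python
# Toy CDT calculation with anthropic uncertainty. q is your credence
# that you are the simulation inside the blackmailer's head.

def cdt_expected_cost(action: str, q: float) -> float:
    """Expected cost to the flesh-and-blood person you care about."""
    if action == "pay":
        # Real (prob 1-q): you pay $100. Simulation (prob q): your paying
        # causes the real person to be blackmailed, and they pay $100 too.
        return (1 - q) * 100 + q * 100
    # Real (prob 1-q): the threat executes, costing $1000.
    # Simulation (prob q): your refusal causes no blackmail at all.
    return (1 - q) * 1000 + q * 0

for q in (0.0, 0.5, 0.95):
    print(f"q={q}: pay -> {cdt_expected_cost('pay', q):.0f}, "
          f"refuse -> {cdt_expected_cost('refuse', q):.0f}")

# For q > 0.9 refusing has the lower expected cost, so on these numbers
# CDT itself recommends resisting - which is the point of response 2.
```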

Overall, I don't think this line of argument really damages CDT. It seems to be based on a claim about consciousness that I think is probably wrong. But even if the claim is right, all this implies is that CDT recommends a different action than one would otherwise have thought.

(If my summary is roughly in the right ballpark, then I also think it's totally reasonable for academic decision theorists who read the FDT paper to fail to realize that a non-mainstream neuroscience/philosophy-of-consciousness view is being assumed and provides the main justification for FDT. The paper really doesn't directly say anything about this. It seems wrong to me, then, to suggest that Schwarz only disagrees because he lacks the ability to see his own assumptions.)

[[EDIT: Oops, rereading your comment, seems like the summary is probably not fair. I didn't process this bit:

Yes, yes, if Omega used some method other than a simulation to make his prediction, the hypothetical you wouldn't have existed and wouldn't have had a perspective--but hey, that doesn't stop me from writing from their perspective, right? After all, real people write from the perspectives of unreal people all the time; that's just called writing fiction.

But now, reading the rest of the comment in light of this point, I don't think this reduces my qualms. The suggestion seems to be that, when you seem to find yourself in the box room, you should in some cases be uncertain about whether or not you exist at all. And in these cases you should one-box, because, if it turns out that you don't exist, then your decision to one-box will (in some sense) cause a corresponding person who does exist to get more money. You also don't personally get less money by one-boxing, because you don't get any money either way, because you don't exist.

Naively, this line of thought seems sketchy. You can have uncertainty about the substrate your mind is being run on or about the features of the external world -- e.g. you can be unsure whether or not you're a simulation -- but there doesn't seem to be room for uncertainty about whether or not you exist. "Cogito, ergo sum" and all that.

There is presumably some set of metaphysical/epistemological positions under which this line of reasoning makes sense, but, again, the paper really doesn't make any of these positions explicit or argue for them directly. I mainly think it's premature to explain the paper's failure to persuade philosophers in terms of their rigidity or inability to question assumptions.]]

gjm:

You say

So he's using a counterexample that's predicated on a logical inconsistency and could not happen. If a decision theory fails in situations that couldn't really happen, that's actually not a problem.

but that counterexample isn't predicated on a logical inconsistency. An FDT agent can still get blackmailed, by someone who doesn't know it's an FDT agent or who doesn't care that the outcome will be bad or who is lousy at predicting the behaviour of FDT agents.

No, this is wrong.

What prevents the FDT agent from getting blackmailed is not that he is known to be an FDT agent. Rather, it’s simply that he’s known to be the sort of person who will blow the whistle on blackmailers. Potential blackmailers need not know or care why the FDT agent behaves like this (indeed, it does not matter at all, for the purpose of each individual case, why the agent behaves like this—whether it’s because the dictates of FDT so direct him, or because he’s naturally stubborn, or whatever; it suffices that blackmailers expect the agent to blow the whistle).

So if we stipulate an FDT agent, we also, thereby, stipulate that he is known to be a whistle-blower. (We’re assuming, in every version of this scenario, that, regardless of the agent’s motivations, his propensity toward such behavior is known.) And that does indeed make the “he’s blackmailed anyway” scenario logically inconsistent.

gjm:

Even if you are known to be the sort of person who will blow the whistle on blackmailers it is still possible that someone will try to blackmail you. How could that possibly involve a logical inconsistency? (People do stupid things all the time.)

I'll say it again in different words: I did not understand the paper (and consequently, the blog) to be talking about actual blackmail in a big messy physical world. I understood them to be talking about a specific, formalized blackmail scenario, in which the blackmailer's decision to blackmail is entirely contingent on the victim's counterfactual behaviour, in which case resolving to never pay and still being blackmailed isn't possible - in full context, it's logically inconsistent.

Different formalisations are possible, but I'd guess the strict one is what was used. In the softer ones you still generally won't pay.

gjm:

The paper discusses two specific blackmail scenarios. One ("XOR Blackmail") is a weirdly contrived situation and I don't think any of what wo says is talking about it. The other ("Mechanical Blackmail") is a sort of stylized version of real blackmail scenarios, and does assume that the blackmailer is a perfect predictor. The paper's discussion of Mechanical Blackmail then considers the case where the blackmailer is an imperfect (but still very reliable) predictor, and says that there too an FDT agent should refuse to pay.

wo's discussion of blackmail doesn't directly address either of the specific scenarios discussed in the FDT paper. The first blackmail scenario wo discusses (before saying much about FDT) is a generic case of blackmail (the participants being labelled "Donald" and "Stormy", perhaps suggesting that we are not supposed to imagine either of them as any sort of perfectly rational perfect predictor...). Then, when specifically discussing FDT, wo considers a slightly different scenario, which again is clearly not meant to involve perfect prediction, simulation, etc., because he says things like " FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed" and "FDT agents who are known as FDT agents have a lower chance of getting blackmailed" (emphasis mine, in both cases).

So. The paper considers a formalized scenario in which the blackmailer's decision is made on the basis of a perfectly accurate prediction of the victim, but then is at pains to say that it all works just the same if the prediction is imperfect. The blog considers only imperfect-prediction scenarios. Real-world blackmail, of course, never involves anything close to the sort of perfect prediction that would make it take a logical inconsistency for an FDT agent to get blackmailed.

So taking the paper and the blogpost to be talking only about provably-perfect-prediction scenarios seems to me to require (1) reading the paper oddly selectively, (2) interpreting the blogpost very differently from me, and (3) not caring about situations that could ever occur in the actual world, even though wo is clearly concerned with real-world applicability and the paper makes at least some gestures towards such applicability.

For the avoidance of doubt, I don't think there's necessarily anything wrong with being interested primarily in such scenarios: the best path to understanding how a theory works in practice might well go via highly simplified scenarios. But it seems to me simply untrue that the paper, still more the blogpost, considers (or should have considered) only such scenarios when discussing blackmail.

This thread might be relevant to your question.

gjm:

It might. It's pretty long. I don't suppose you'd like to be more specific?

(If the answer is no, you wouldn't, because all you mean is that it seems like the sort of discussion that might have useful things in it, then fair enough of course.)

I read that under a subtext where we were talking about the same blackmail scenario, but, okay, others are possible.

In cases where the blackmail truly seems not to be contingent on the agent's policy (and in many current real-world cases), the FDT agent will pay.

The only cases when an FDT agent actually will get blackmailed and refuse to pay are cases where being committed to not paying shifts the probabilities enough to make that profitable on average.

It is possible to construct obstinate kinds of agents that aren't sensitive to FDT's acausal dealmaking faculties. Evolution might produce them often. They will not be seen as friends. As an LDT-like human, my feeling towards those sorts of blackmailers is that we should destroy all of them as quickly as we can, because their existence is a blight on ours. In light of that, I'm not sure they have a winning strategy. When you start to imagine the directed ocean of coordinated violence that an LDT-aligned faction (so, literally any real-world state with laws against blackmail) points in your direction as soon as they can tell what you are, you may start to wonder if pretending you can't understand their source code is really a good idea.

I imagine a time when the distinction between CDT and LDT is widely understood. By that time, the very idea of blackmail will have come to seem very strange. We will wonder how there was ever an era when a person could just say "If you don't do X, then I will do the fairly self-destructive action Y, which I gain nothing from doing" and have everyone just believe them unconditionally - just believe this unqualified statement about their own mechanism. Wasn't it stupid, to lie like that? And even stupider for their victims to pretend that they believed the lie? We will not be able to understand it any more.

Imagine that you see an agnostic community head walking through the park at night. You know it's a long shot, but you amble towards him, point your gun at him and say "give me your wallet." He looks back at you and says, "I don't understand the deal. You'll shoot me? How does that help you? Because you want my wallet? I don't understand the logic there, why are those two things related? That doesn't get you my wallet."

Only it does, because when you shoot someone you can loot their corpse, so it occurs to me that muggers are a bad example of blackmail. I imagine they've got to have a certain amount of comfort with actually killing people, to do that. It's not really threatening to do something self-destructive, in their view, they still benefit a little from killing you. They still get to empty your pockets. To an extent, mugging is often just a display of a power imbalance and consequent negotiation of a mutually beneficial alternative to violence.

The gang can profit from robbing your store at gunpoint, but you and them both will profit more if you just pay them protection money. LDT only refuses to pay protection money if it realises that having all of the other entangled LDT store owners paying protection money as well would make the gang profitable enough to grow, and that having a grown gang around would have been, on the whole, worse than the amortised risk of being robbed.

Relevant excerpt for why exactly it was rejected:

The standards for deserving publication in academic philosophy are relatively simple and self-explanatory. A paper should make a significant point, it should be clearly written, it should correctly position itself in the existing literature, and it should support its main claims by coherent arguments. The paper I read sadly fell short on all these points, except the first. (It does make a significant point.) [...]
I still think the paper could probably have been published after a few rounds of major revisions. But I also understand that the editors decided to reject it. Highly ranked philosophy journals have acceptance rates of under 5%. So almost everything gets rejected. This one got rejected not because Yudkowsky and Soares are outsiders or because the paper fails to conform to obscure standards of academic philosophy, but mainly because the presentation is not nearly as clear and accurate as it could be.

So apparently the short version of "why their account has a hard time gaining traction in academic philosophy" is (according to this author) just "the paper's presentation and argumentation aren't good enough for the top philosophy journals".

I feel like rejection-with-explanation is still an improvement over the norm.

Maybe pulling back and attacking, directly and in general terms, the wrong intuitions Schwarz is using would be worthwhile.

Strategic High Skill Immigration seems to be a very high quality and relevant post that has been overlooked by most people here, perhaps due to its length and rather obscure title. (Before I strongly upvoted it, it had just 12 points and 4 voters, presumably including the author.) If a moderator sees this, please consider curating it so more people will read it. And for everyone else, I suggest reading at least part 1. It talks about why global coordination on AI safety and other x-risks is hard, and suggests a creative solution for making it a little easier. Part 2 is much longer and goes into a lot of arguments, counterarguments, and counter-counterarguments, and can perhaps be skipped unless you have an interest in this area.

This was an interesting post. However, given Google's rocky history with DARPA, I'm not convinced a high concentration of AI researchers in the US would give the US government a lead in AI.

The author suggests that just slowing down research into risky technologies in other countries would be worthwhile:

  • The lack of acceleration of science following the high skill immigration shock to the US is not necessarily bad news: it may also imply that future shocks won’t accelerate risky technologies, that research funding is a more fundamental constraint, or that other sectors of the economy are better at absorbing high skill immigrants.
  • Further emigration likely decelerated progress for potentially risky technologies in the former USSR, which is a net reduction of risk: there is less incentive for the US government to engage in an arms race if there is no one to race.

I'd be keen to see other examples of the What, How and Why books for an area of expertise that you know about. I found the two examples in the post - procrastination and calculus - to be really useful for framing how I even think about those areas. (If we get some more good examples, maybe we could make another post like the 'best textbooks' post but for this, to collect them.)

I've been lurking for a while but haven't posted very much. I'm a writer who also enjoys doing weird experiments in my spare time. Hi there :)

What weird experiments?

Well, I'm setting up a SETI style project looking for extra-temporal info... in other words looking for time travelers. I did an initial set of experiments which were poorly planned out and riddled with paradox, but I've redesigned the experiments and will be starting them soon.

Welcome!

Hi all, I've posted a few comments, but never introduced myself: I'm an academic working in philosophy of science and social epistemology, mainly on methodological issues underlying scientific inquiry, scientific rationality, etc. I'm coming from the EA forum, but on Ben's invitation I dropped by here a few days ago and I am genuinely curious about the prospects of this forum, its overall functions and its possible interactions with the academic research. So I'm happy to read and chip in where I can contribute :)

Welcome! :)

I really liked reading your comments on the recent post about Kuhn, and would be interested in hearing more of your thoughts on related topics, And if you ever have any questions or problems with the site, feel free to ping us here or on Intercom!

Thanks a lot! :)

I too have been lurking for a little while. I have listened to the majority of Rationality from A to Z by Eliezer and really appreciate the clarity that Bayescraft and similar ideas offer. Hello :)

Welcome :) I wish you well in practising the art of Bayescraft.

If Moore's law completely stops (in the sense that there will be no new, more effective chips), this will lower the price of computation, for a few reasons:

1) The biggest part of a processor's price covers R&D, but if Moore's law stops, there will be no more R&D, only manufacturing costs.

2) The biggest part of the manufacturing cost covers the cost and amortisation of large chip fabs. If no new chip fabs are built, the price will fall to the marginal cost of production. For example, the 8080 processor cost 350 USD when it was introduced in the early 1970s and only 3.5 USD at the end of the 1970s, when it was morally obsolete.

3) No more moral amortisation of computers. Amortisation will take not 3 years but 20 years, which will lower the price of computation - or alternatively will allow users to linearly increase their computing power by buying more and more computers over a long period of time. (A toy calculation after this list illustrates this.)

4) Expiring patents will allow cheaper manufacturing by other vendors.
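A back-of-the-envelope version of point 3, with a made-up machine price:

```python
# If a machine stops going obsolete after 3 years and instead stays
# useful for 20, the capital cost per year of computation drops ~6-7x.

price = 1000.0  # USD, hypothetical machine

for lifetime_years in (3, 20):
    print(f"{lifetime_years}-year amortisation: "
          f"${price / lifetime_years:.2f} per year of computation")

# 3-year amortisation: $333.33 per year of computation
# 20-year amortisation: $50.00 per year of computation
# (Before counting electricity and maintenance, which would then dominate.)
```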

Are you claiming that price per computation would drop in absolute terms, or compared with the world in which Moore's law continued? The first one seems unobjectionable: the default state of everything is for prices to fall, since there'll be innovation in other parts of the supply chain. The second one seems false. Basic counter-argument: if it were true, why don't people produce chips from a decade ago which are cheaper per amount of computation than the ones being produced today?

1. You wouldn't have to do R&D, you could just copy old chip designs.

2. You wouldn't have to keep upgrading your chip fabs, you could use old ones.

3. People could just keep collecting your old chips without getting rid of them.

4. Patents on old chip designs have already expired.

Do you think it is more likely that R&D will simply cease, rather than diminishing returns from R&D over time causing companies to put more money into it to stay competitive? I wonder if the situation might not cause the prices to actually go up, like with medication.

I don't think that R&D will cease. My argument was in the style of "if A then B", but I don't think that A is true. I am arguing here against those who associate the end of Moore's law with the end of growth of computational power.

I see. Just running with the premise as it stood.

In place of "Moore's law stops" let us say 'the doubling time increases from X to Y.'

I believe that the intuitive economic model you have in mind does not work.

1) In the moment when you sell a thing, its development costs are sunk. That is, you have to explain the price via market conditions at the moment when things are offered on the market. You can argue that development is a fixed cost, therefore fewer firms will enter the market, therefore the price is higher. But this is independent of the development costs for future processors.

2) Basically, see 1) ...?

3) If I don't expect that my computer becomes obsolete in two years, I am willing to pay more. Thus, the demand curve moves upwards. So the price of the processor may be higher. (But this also depends on supply, i.e., points 1) and 2))

4) Ok, but this is independent of whether Moore's law does or does not hold. That is, if you have some processor type X1, at some point its patent expires and people can offer it cheaply (because they don't have to cover the costs discussed in point 1).

What's meant by "Moral" here?

A thing could still function, but there are better and cheaper things available - the correct name is probably "functional obsolescence":

https://en.wikipedia.org/wiki/Obsolescence#Functional_obsolescence

New to the forum; found it through effective altruism, but began reading and participating for a different reason: there was a topic I wanted to challenge my beliefs about, and research further than what had already been done (relatively little).

Up to now, I've read the first 2 books of "Rationality: From A to Z", and randomly read around the site, loving it :)

(If you have any article suggestions for me it would be awesome)

Welcome!

A lot of people really enjoy Scott Alexander's writing. We have a compilation of the best of his writing here.

A lot of people also really enjoy Harry Potter and the Methods of Rationality, which you can find here.

Another good way of discovering good posts is to go to the all-posts page and sort by "top" which shows you all historical posts sorted by karma. You can find that page with that sorting activated here.

Thanks, I've already read Methods of Rationality. I'll read the Codex; it really does seem to have much of what I was interested in :)

Did anyone follow the development of the stock exchange IEX? I see they landed their first publicly traded company in October. I also see that Wall Street is building its own stock exchange, explicitly rejecting the IEX complaints.

I thought this was a very interesting story when it broke, because it is about how to do things better than the stock market does. If anyone with more financial background could comment, I'd be very interested to hear it.

For those completely unfamiliar, this exchange was built specifically to mitigate High Frequency Trading. There's some more details of the kinds of things that were happening in this article about the year after Flash Boys was published. Naturally, the people who employ high frequency algorithms insist it is all hokum.

Everything I've ever heard attributed to Michael Lewis on this topic was false. Good ideas shouldn't require lies to sell them. And I can tell it's false because I carefully read the quoted passages. In particular, you conclude from his claim that "the richest people on Wall Street" are angry that they employ HFT algorithms. But that's not at all true. HFT is tiny. If the biggest players on Wall Street are angry, it's because they'd rather trade with HFT than with Brad Katsuyama.*

Is IEX listing a stock a significant milestone? I doubt it. IEX has been trading other stocks for 4 years, first as a dark pool and then for 2 years as an exchange. (But aren't "dark pools" evil incarnate?) IEX claims to be 2.7% of the total volume. I never looked at that number before today. I've previously claimed that IEX was a failure, and I was surprised to see that the number was so high. I welcome experimentation and I'm happy that they've found a niche. But if you thought it was a much better product that would quickly win in the marketplace, maybe you should reconsider this, 2-4 years on.** But the future is long. Maybe IEX listing individual stocks will matter, though you should be suspicious if no one can explain why. And it's hard to rule out the possibility that's it's a much better product that will take a long time to win.

* It was probably a bad idea to use Brad K as metonymy, because he plays two roles. I meant the kind of trader he was at the beginning of the book, who was outcompeted by HFT. I don't mean IEX, the market he now runs. In as much as IEX exists as a place to trade with people like him, it seems like most people wouldn't want to trade there, either, but it has a lot of room to evolve.

** To put the 2.7% in context, there are 12 exchanges, so I think IEX is the smallest, but I don't know. There are dozens of dark pools, so the 1.5+% market share IEX had when it transitioned to exchange was pretty big for a dark pool.

If the biggest players on Wall Street are angry, it's because they'd rather trade with HFT than with Brad Katsuyama.

I'm confused by this. That is almost exactly the claim that Lewis and Katsuyama are making: the large exchanges are preferring HFT. The reason this is a problem is because it unilaterally disadvantages everyone who lacks similar trading speed, like individual investors or retirement funds. The exchanges seem to benefit by the increased volume and direct payments from the HFT people for routing privileges.

Do you have a better source to recommend for how they operate in the market? I'm happy to dump this guy if I can get more reliable information.

I haven't read it, but Vanguard wrote about how it loves HFT. A lot of what I know is from Matt Levine, but he is such a fragmentary blogger that he probably doesn't say much in one place and it's hard to find any particular thing he's written.

Your abstractions like "benefit" seem confused to me. Where is money flowing? How?

By the biggest players, I mean investment firms. I thought that the biggest investors are bigger than the exchanges, but maybe they're only equally big. For example, BATS and Fidelity both have a market cap of $60 billion (NASDAQ $14B, NYSE $6B).* HFT firm Virtu has a market cap of $5B. That is, the net present value of the profits Fidelity extracts from its retirement accounts is $60B, while the NPV of the profits BATS extracts from all retirement accounts in the US is about the same. BATS allows (and subsidizes) HFT because it thinks that Fidelity wants it. That's what BATS says and that's what Fidelity says. Maybe they're lying and actually BATS has market power to extract money from Fidelity, but I'll get to that later. [And why would Fidelity go along with such a lie?] [* looking up these numbers again, I find completely different numbers. But they're roughly right if we swap NYSE and BATS.]

Transaction costs are way down. This is easily and objectively measured in terms of bid-ask spreads and trading fees. This is money that is not going to the middlemen, neither exchanges nor market makers, but is saved by the investment firms, which is why they love HFT. There is a more subtle argument that market liquidity is an illusion that will go away "when it matters" and produce flash crashes. I think that this is also false, both painting too rosy a view of the past and exaggerating the damage caused by flash crashes, but it is much harder to argue about rare events.
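For readers unfamiliar with the jargon, here's the simple arithmetic behind "transaction costs measured in bid-ask spreads" (the prices and share counts are made up for illustration):

```python
# How a bid-ask spread becomes a cost to an investor. Buy at the ask,
# sell later at the bid: you pay the full spread once per round trip,
# so narrower spreads are a direct, measurable saving.

def round_trip_cost(shares: int, bid: float, ask: float) -> float:
    return shares * (ask - bid)

# 1000 shares at a 10-cent spread vs. a 1-cent spread:
print(round(round_trip_cost(1000, bid=99.95, ask=100.05), 2))    # ~100.0
print(round(round_trip_cost(1000, bid=99.995, ask=100.005), 2))  # ~10.0
```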

HFT and running exchanges are not terribly lucrative businesses, not by the standards of Wall Street. HFT makes orders of magnitude less money than market makers made (in aggregate) even 20 years ago. Individual HFT firms make a lot of money, but there used to be a huge number of market makers who specialized in very small numbers of stocks. When HFT first appeared and drove out these people, they reduced the aggregate money going to market makers and the small number of HFT firms have continued to compete away their own profit. This is a pretty simple metric that is exactly opposed to many common stories. It's not that simple because many of the market makers were vertically integrated into firms that did other things, so it's not that easy to aggregate the market makers. In particular, I claim that's what Brad K's old job was (understanding market structure and making money off of the difference between markets, allowing the rest of the firm to think in terms of stocks, not markets). If you buy that, it makes the old market makers look even more bloated and thus the new HFT look even more efficient, but I don't claim that it is obvious.

But are you saying that Lewis is saying that the exchanges are sucking all the profits out of the HFT? I don't think that exchanges are very lucrative (see numbers in beginning). Does he give any numbers? I think that there are only about 4 companies running exchanges in the US (including IEX), which doesn't sound very competitive. But that's because they keep buying out new exchanges, so it can't be that hard to enter the market. And they keep the exchanges around, so they do see value in diversity. IEX being the 12th exchange and the 4th company does make the market more diverse and competitive, but they probably only fill a small niche. Whereas the dozens of dark pools are already very competitive.

From Lewis:

Katsuyama asked him a simple question: Did BATS sell a faster picture of the stock market to high-frequency traders while using a slower picture to price the trades of investors? That is, did it allow high-frequency traders, who knew current market prices, to trade unfairly against investors at old prices? The BATS president said it didn’t, which surprised me. On the other hand, he didn’t look happy to have been asked. Two days later it was clear why: it wasn’t true. The New York attorney general had called the BATS exchange to let them know it was a problem when its president went on TV and got it wrong about this very important aspect of its business. BATS issued a correction and, four months later, parted ways with its president.

Emphasis mine. I interpret Lewis' claim to be that BATS was helping HFTs extract money from non-HFT investors. The benefit to which I referred is the fee BATS was paid for the faster market picture and the increased trading volume generated as a consequence.

More broadly and aside from allegations of specific wrongdoing, the claim is that HFT is just shaving the margins of everyone who makes trades more slowly. This argument makes sense to me; I can see no added value in giving preferential information to one market participant over others. It isn't as though HFT is providing a service by helping disseminate information faster - most of the action taken by regulators on the subject was because of exchanges not informing investors about whatever they were doing. My naive guess is their only real impact on the market is to slightly amplify the noise.

That being said, I can easily imagine HFT competing to a profit margin of zero and thereby solving itself, and I can also imagine that there would be other uses for the technology once other types of algorithms were introduced. I can also imagine that the regulatory burden would be greater than the damage they do so it wouldn't be worth it to ban them.

Which is why IEX was the focus of my interest. They are competing on the basis of countering this specific practice, and they seem to be doing alright.

I've asserted a lot of things. Should you believe me? You don't know that I'm disinterested (though you should damn well know that 300 pages of advertising copy is very, very interested). But I can provide a different perspective; you can't imagine that anything good is going on, but I can try to widen your imagination.

More broadly and aside from allegations of specific wrongdoing, the claim is that HFT is just shaving the margins of everyone who makes trades more slowly.

The main difference in perspective I want to promote is how you carve up the market. You're carving it into HFT and Everyone. I say that you should carve it into market makers and investors. HFT makes money. It makes money by shaving the margins off of someone. But why think that it's shaving "everyone"? It's competing with market makers and shaving their margins. The market makers are mad about that. That's enough to explain everything that is observed. Maybe something bad is going on beyond that, but no one says what, no one except liars.

I could say more, but I think it would distract from that one point. (Indeed, I think my prior comments made that mistake.)

Seems like calling it ‘Open and Welcome Thread’ rather than just ‘Open Thread’ may have resulted in fewer posts last month. It’s also just an awkward name and (like almost all post titles) doesn’t display in full on my phone. I’m in favor of changing the title back, but keeping the part in the post body about introducing yourself.

Nod. Not sure about the causal mechanism here but seems fine. Done.

You could also randomize the thread titles over the next N months in order to collect more data. Beware the law of small numbers.

What does the law of small numbers have to do with few people posting in "The Welcome and Open Thread" in the beginning of January of 2019? (A claim which would be nice to compare against the usual amount of posting in The Open Thread in January, and what factors are in effect.)

Elo:

Not sure if this is worthy of a top post but I'm wondering if there are any opinions on how this post aged in the last 1.5 years?

https://www.ribbonfarm.com/2017/08/17/the-premium-mediocre-life-of-maya-millennial/

I don't feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.