Hi. I'm Gareth McCaughan. I've been a consistent reader and occasional commenter since the Overcoming Bias days. My LW username is "gjm" (not "Gjm" despite the wiki software's preference for that capitalization). Elsewhere I generally go by one of "g", "gjm", or "gjm11". The URL listed here is for my website and blog, neither of which has been substantially updated in about the last four years. I live near Cambridge (UK) and work for a small technology company in Cambridge. My business cards say "mathematician" but in practice my work is a mixture of simulation, data analysis, algorithm design, software development, problem-solving, and whatever random engineering no one else is doing. I am married and have a daughter born in mid-2006. The best way to contact me is by email: firstname dot lastname at pobox dot com. I am happy to be emailed out of the blue by interesting people. If you are an LW regular you are probably an interesting person in the relevant sense even if you think you aren't.
If you're wondering why some of my old posts and comments are at surprisingly negative scores, it's because for some time I was the favourite target of old-LW's resident neoreactionary troll, sockpuppeteer and mass-downvoter.
Voting for Hitler does nothing for you personally unless you actually want a Nazi government. If you have investments in, say, NVIDIA then presumably you expect them to make you more money than whatever else you're proposing to invest in instead. I am suggesting that the first-order effects of having more money are likely to outweigh the second-order (or whatever higher order they actually are) effects, where you sell what in the big picture is a rather small number of NVIDIA shares; this has a probably imperceptible effect on NVIDIA's stock price; that makes them an imperceptibly more favourable company to lend money to; they borrow more money on better terms and make imperceptibly better GPUs; and all the AI research happens a tiny bit faster.
To be clear, I warmly endorse getting out of NVIDIA if owning shares of a company that might contribute to AI development makes you feel terrible. That's a first-order effect too. But it still looks to me as if the effects are likely negligible, and I don't think I buy any superrationality-style arguments along the lines of "whatever you do, other similar people are also likely to do, so you should multiply your estimate of the effect severalfold".
Do you mean that you expect OpenAI deliberately wrote training examples for GPT based on Gary Marcus's questions, or only that, because Marcus's examples are on the internet, any sort of "scrape the whole web" process will have pulled them in?
The former would surely lead to GPT-4 doing better on those examples. I'm not sure the latter would. Scott's and Marcus's blog posts, for instance, contain GPT-3's continuations for those examples; they don't contain better continuations. Maybe a blog post saying "ha ha, given prompt X GPT continued it to make X Y; how stupid" is enough for the training process to make GPT give better answers when prompted with X, but it's not obvious that it would be. (On the face of it, it would e.g. mean that GPT is learning more intelligently from its training data than would be implied by the sort of stochastic-parrot model some have advocated. My reading of what Marcus wrote is that he takes basically that view: "What it does is something like a massive act of cutting and pasting, stitching variations on text that it has seen", "GPT-3 continues with the phrase “You are now dead” because that phrase (or something like it) often follows phrases like “… so you can’t smell anything. You are very thirsty. So you drink it.”", "It learns correlations between words, and nothing more".)
I don't know anything about how OpenAI actually select their training data, and in particular don't know whether they deliberately seed it with things that they hope will fix specific flaws identified by their critics. So the first scenario is very possible, and I agree that testing different-but-similar examples would give more trustworthy evidence about whether GPT-4 is really smarter in the relevant ways than GPT-3. But if I had to guess, I would guess that they don't deliberately seed their training data with their critics' examples, and that GPT-4 will do about equally well on other examples of difficulty similar to the ones Marcus posted.
(I don't have access to GPT-4 myself, so can't test this myself.)
I'm not claiming "low probability implies expectation is negligible", and I apologize if what I wrote gave the impression that I was. The thing that seems intuitively clear to me is pretty much "expectation is negligible".
What is the actual effect on Microsoft of a small decrease in their share price? Before trying to answer that in a principled way, let's put an upper bound on it. The effect on MS of a transaction of size X cannot be bigger than X, because if e.g. every purchase of $1000 of MS stock made them more than $1000 richer then it would be to their advantage to buy their own stock; as they bought more, the size of this effect would go down, and they would keep buying until the benefit from buying $1000 of MS stock became <= $1000.
(MS does, on net, buy back its own stock every year, or at least has for the last several years, but my argument isn't "this can't happen because if it did then MS would buy their own stock" but "if this happened MS would buy enough of their own stock to make the effect go away".)
My intuition says that this sort of effect should be substantially less than the actual dollar value of the transaction, but if there's some way to prove this it isn't known to me. This intuition is the reason why I expect little effect on MS from you buying or selling MS shares. But let's set that intuition aside and see if we can make some concrete estimates (they will definitely require a lot of guesswork and handwaving). How does a higher share price actually help them? Roughly three ways: (1) when they issue new shares they get more money for them; (2) when they pay employees partly in stock, each share they hand over is worth more; (3) a higher share price makes them look like a safer company to lend to, so they can borrow on better terms.
To estimate the impact of the first two of these, we could e.g. look at how the number of outstanding Microsoft shares changes each year; if we (crudely?) suppose that this doesn't depend strongly on small changes in share price, then a decrease of x in the Microsoft share price lasting a year costs Microsoft (number of MS shares issued that year) times x, because that's how much less money, or perceived money-equivalent benefit, they got from issuing those shares.
It turns out, as I mentioned above, that for the last several years the number of outstanding Microsoft shares has decreased every year, so that crude model suggests that a decrease in their share price actually helps them, because as they (on net) buy back shares they are paying less money to do it. Oops.
The third effect seems like it's definitely of the right sign, since whatever the net change in MS's debt over time it isn't literally taking out any anti-debt. Do we have any plausible way to estimate its size? We could look at e.g. Microsoft bond coupon figures and try to correlate them with the MSFT share price after correcting somehow for generally-prevailing interest rates, but I don't have the expertise to do this in a meaningful way and also don't have access to any information to speak of about Microsoft bond sales. Let's try an incredibly crude model and see what we get: suppose that right now MS can borrow money at 2% interest, and that if their stock price dropped 3x then they would be seen as a much bigger risk and the figure would go up to 4%, and that what happens in between is linear. The current stock price is about $300, so this is saying that a $200 fall in stock price would mean a 2% rise in annual interest rate, so each cent of stock price change means a 1/10000% change in annual interest rate.
How much borrowing does MS do? Hard to tell. (At least for me; perhaps others with more info or more business expertise can do better.) Over the last several years their total debt has consistently decreased, but not by very much; it sits at about $60B. With total debt roughly steady, that total debt should be the number of "debt dollar-years incurred per year". So one year of 1c-higher stock prices would mean a reduction of $60B x 1/10000 x 1% in debt interest payments; that is, of $60K.
There are also the other two effects, but crudely those appear to point in the other direction given MS's decision to buy back more stock than it issues. I'll assume they're zero. Maybe there are other mechanisms too (e.g., some sort of intangible thing where MS is a more attractive employer if its stock price is high) but I expect these to be smaller than the three concrete ones listed above.
What should we assume about the lasting effect of your buying or selling MS stock? If we assume the price is a pure random walk then a 1c decrease in price lasts for ever, but that seems incredibly implausible (at least in the case where, as here, your reasons for selling don't have anything to do with your opinion about the future value of the company). On the basis of pure handwaving I'm going to suppose that the effect of your selling your stock is to depress the share price by one cent for one month. (Both figures feel like big overestimates to me.) That would suggest that selling $300K of MS stock costs the company, via this particular mechanism, about 1/12 of $60K, or about $5K.
That doesn't mean $5K less for AI work, of course. I think MS spends about $50B per year. MS's total investment in OpenAI is $10B but that isn't happening every year, and presumably they do some internal AI work too. Let's suppose they spend $5B on AI per year, which feels like a substantial overestimate to me. Then giving them an extra $5K means an extra $500 spent on AI.
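In case it helps to see the whole chain of guesses in one place, here's a minimal sketch in Python of the same back-of-envelope calculation. Every number in it is one of the hand-wavy assumptions above (the ~$60B of debt, the 1/10000-of-a-percent interest-rate sensitivity per cent of share price, the one-cent-for-one-month price effect, the ~$5B-of-$50B AI fraction), not real data.

```python
# Back-of-envelope estimate of how much selling ~$300K of MSFT stock
# reduces Microsoft's effective AI spending, using the hand-wavy
# assumptions from the text above (all numbers are guesses, not data).

total_debt = 60e9  # rough total Microsoft debt, in dollars

# Assumption: a $200 fall in the ~$300 share price raises their
# borrowing rate by 2 percentage points, linearly in between,
# i.e. 1/10000 of a percentage point per cent of share price.
rate_change_per_cent = 0.02 / (200 * 100)

# Assumption: selling a ~$300K stake (~1000 shares at ~$300)
# depresses the price by 1 cent for 1 month.
price_drop_cents = 1
duration_years = 1 / 12

extra_interest = total_debt * rate_change_per_cent * price_drop_cents * duration_years
# ~= $5,000 of extra interest cost to Microsoft

# Guess: ~$5B of ~$50B annual spending goes to AI.
ai_fraction = 5e9 / 50e9
ai_spending_effect = extra_interest * ai_fraction
# ~= $500 less effectively available for AI

print(f"Extra interest cost: ${extra_interest:,.0f}")
print(f"Implied reduction in AI spending: ${ai_spending_effect:,.0f}")
```

Running it reproduces the figures above: about $5K of extra interest cost, and about $500 of that attributable to AI spending.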
So, after all this very rigorous and precise analysis, my crude estimate is that by selling a $300K stake in Microsoft you might effectively reduce their AI spending by $500. I'm pretty comfortable calling that negligible. But, of course, the error bars are rather larger than the number itself :-).
(Counter-argument: "Duh, the value of a company as measured by the market, which knows best, is just its total market capitalization. So if something you do changes the share price by x and there are y outstanding shares, then you changed the value of the company by exactly xy. This value is much larger than your estimate, so your estimate is bullshit." But I claim that this argument is bullshit: if, as I believe, transactions that are independent of any sort of estimate of the actual future value of the company have only transient effects, then it is just not true that you have changed any actually meaningful measure of the value of the company by exactly xy.)
I think that unless you are investing a very large amount of money it's reasonable to round the effect of your investment choices to zero. You presumably didn't buy shares directly from Google, Microsoft, Nvidia and Meta, so the only way your choice to invest in them or not can affect those companies' ability to work on AI is via changes in the share price. When you sold those shares, did the price change detectably between the first share you sold and the last? If not, the transaction probably wasn't large enough to alter the price by as much as a cent per share. A sub-1c-per-share difference in the amount of money MS or whoever can raise by selling shares doesn't seem likely to have much impact on their ability to do whatever they might want to do.
One that gives someone hearing it, who wasn't previously aware of it, good reason to increase their credence in the thing it's an argument for. (And, since really we should be talking about better and worse arguments rather than good and bad ones, a better argument is one that justifies a bigger increase.)
For instance, consider the arguments about how COVID-19 started infecting humans. "It was probably a leak from the Wuhan Institute of Virology, because you can't trust the Chinese" is a very bad argument. It makes no contact with anything actually related to COVID-19. "It was probably a natural zoonosis, because blaming things on the Chinese is racist" is an equally bad argument for the same reason. "It was probably a leak from the WIV, because such-and-such features of the COVID-19 genome are easier to explain as consequences of things that commonly happen in virology research labs than as natural occurrences" is a much better argument than either of those, though my non-expert impression is that experts don't generally find it very convincing. "It was probably a natural zoonosis, because if you look at the pattern of early spread it looks much more like it's centred on the wet market than like it's centred on the WIV" is also a much better argument than either of those; I'm not sure what the experts make of it. In the absence of more cooperation from the Chinese authorities (and perhaps even with it) I would not expect any argument to be very convincing, because finding this sort of thing out is really difficult.
I think it's completely wrong that "if you disagree on the point at issue, you must believe that there are no good arguments for the point".
There absolutely can be good arguments for something that's actually false. What there can't be is conclusive arguments for something that's actually false.
(Also, if I had been more precise I would have said "... prefer better arguments to worse ones"; even in a situation where there are no arguments for something that rise to the level of good, there may still be better and worse ones, and I may be disappointed that I'm being presented with a particularly bad one.)
I think "you'll never persuade people like that" means several different things on different occasions, and usually it doesn't mean what Zack says it always means.
(In what follows, A is making some argument and B is saying "you'll never persuade people like that".)
It can, in principle, mean (or, more precisely, indicate; I don't think it's exactly the meaning even when this is what is implicitly going on) "I am finding this convincing and don't want to, so I need to find a diversion". I think two other bad-faith usages are actually more common: "I am on some level aware that evidence and arguments favour your position over mine, and am seeking a distraction" (this differs from Zack's in that the thing that triggers the response is not that A's arguments specifically are persuasive to B) and "I fear that your arguments will be effective, and hope to guilt you into using weaker and less effective ones".
It can mean "I at-least-somewhat agree with you on the actual point at issue, and I think your arguments are bad and/or offputting and will push people away from agreeing with both of us, and I don't like that".
It can mean "I disagree with you on the actual point at issue, but prefer good arguments to bad ones, and I am disappointed that you're putting forward this argument that's no good".
It can mean "I disagree with you on the actual point at issue, and it's hard to tell whether your actual argument is any good because you're being needlessly obnoxious about it and that's distracting".
Zack suggests that "you'll never persuade people like that" is an obvious bad-faith argument because A isn't trying to persuade "people", they're trying to persuade B, and it's weird for B to complain about "people" rather than saying that/why B in particular isn't persuaded. But I don't buy it. 1. "You'll never persuade people like that" does in fact imply "you aren't persuading me like that". (Maybe sometimes dishonestly, but that is part of what is being claimed when someone says that.) 2. If A is being honest, they aren't only trying to persuade B. (Most of the time, if someone says something designed to be persuasive to you rather than generally valid, that's manipulation rather than honest argument.) So it's of some relevance if B reckons A's argument is not only unhelpful to B but unhelpful generally.
Whether it matters what other broadly similar groups do depends on what you're concerned with and why.
If you're, say, a staff member at an EA organization, then presumably you are trying to do the best you could plausibly do, and in that case the only significance of those other groups would be that if you have some idea how hard they are trying to do the best they can, it may give you some idea of what you can realistically hope to achieve. ("Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren't really trying hard; we've got to do much better than that." "Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can't do better.")
So for people in that situation, I think your point of view is just right. But:
If you're someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, ...) for similar reasons. If it turns out that rationalists or EAs are pretty much like those, then you should be about as scared of rationalists as you are of them, regardless of whether rationalists should or could have done better.
If you're a Less Wrong reader wondering whether these are Awful People that you've been associating with and you should be questioning your judgement in thinking otherwise, then again you probably have some idea of how Awful some other similar groups are. If it turns out that rationalists are pretty much like academics or software engineers, then you should feel about as bad for failing to shun them as you would for failing to shun academics or software engineers.
If you're a random person reading a Bloomberg News article, and wondering whether you should start thinking of "rationalist" and "effective altruist" as warning signs in the same way as you might think of some other terms that I won't specify for fear of irrelevant controversy, then once again you should be calibrating your outrage against how you feel about other groups.
For the avoidance of doubt, I should say that I don't know how the rate of sexual misconduct among rationalists / EAs / Silicon Valley rationalists in particular / ... compares with the rate in other groups, nor do I have a very good idea of how high it is in other similar groups. It could be that the rate among rationalists is exceptionally high (as the Bloomberg News article is clearly trying to make us think). It could be that it's comparable to the rate among, say, Silicon Valley software engineers and that that rate is horrifyingly high (as plenty of other news articles would have us think). It could be that actually rationalists aren't much different from any other group with a lot of geeky men in it, and that groups with a lot of geeky men in them are much less bad than journalists would have us believe. That last one is the way my prejudices lean ... but they would, wouldn't they?, so I wouldn't put much weight on them.
[EDITED to add:] Oh, another specific situation one could be in that's relevant here: If you are contemplating Reasons Why Rationalists Are So Bad (cf. the final paragraph quoted in the OP here, which offers an explanation for that), it is highly relevant whether rationalists are in fact unusually bad. If rationalists or EAs are just like whatever population they're mostly drawn from, then it doesn't make sense to look for explanations of their badness in rationalist/EA-specific causes like alleged tunnel vision about AI.
[EDITED again to add:] To whatever extent the EA community and/or the rationalist community claims to be better than others, of course it is fair to hold them to a higher standard, and take any failure to meet it as evidence against that claim. (Suppose it turns out that the rate of child sex abuse among Roman Catholic clergy is exactly the same as that in some reasonably chosen comparison group. Then you probably shouldn't see Roman Catholic clergy as super-bad, but you should take that as evidence against any claim that the Roman Catholic Church is the earthly manifestation of a divine being who is the source of all goodness and moral value, or that its clergy are particularly good people to look to for moral advice.) How far either EAs or rationalists can reasonably be held to be making such a claim seems like a complicated question.
It is. But if someone is saying "this group of people is notably bad" then it's worth asking whether they're actually worse than other broadly similar groups of people or not.
I think the article, at least to judge from the parts of it posted here, is arguing that rationalists and/or EAs are unusually bad. See e.g. the final paragraph about paperclip-maximizers.
Remark: It is hard to know what to make of a comment that has a decently positive approval score, a substantially negative agreement score, and no comments expressing disagreement. Clearly ... hmm, actually, clearly one person strong-agreement-downvoted it and chose not to say what they didn't like. So all I know is that of the several lines of argument in the lengthy comment above, something seemed very bad to someone.
It's not for me to tell anyone else what they ought to do, but personally I think that if I were strong-agreement-downvoting something and it wasn't perfectly obvious what might be wrong with it, then I would also want to leave a comment saying what I thought was wrong. (Maybe not if I thought the writer was an aggressive bozo who shouldn't be responded to, or something.)