I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.
I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.
But I don’t think that this prediction is true: I…
I used to think that slower takeoff implied shorter timelines, because slow takeoff means that pre-AGI AI is more economically valuable, which means that the economy advances faster, which means that we get AGI sooner. But there's a countervailing consideration, which is that in slow takeoff worlds, you can make arguments like ‘it’s unlikely that we’re close to AGI, because AI can’t do X yet’, where X might be ‘make a trillion dollars a year’ or ‘be as competent as a bee’. I now overall think…
I don't really know how to think about anthropics, sadly.
But I think it's pretty likely that a nuclear war wouldn't have killed everyone. So I still lose Bayes points compared to the world where nukes were fired but not everyone died.
Nuclear war doesn't have to kill everyone to make our world non-viable for anthropic reasons. It just has to render our world unlikely to be simulated.
It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart. That's what we want GPT-3 to do, right?
I don't feel at all tempted to do that anthropomorphization, and I think it's weird that EY is acting as if this is a reasonable thing to do. Like, obviously GPT-3 is doing sequence prediction--that's what it was trained to do. Even if it turns out that GPT-3 correctly answers questions about balanced parens in some contexts, I feel pretty weird about calling that "deliberately pretending to be stupider than it is".
I don't feel at all tempted to do that anthropomorphization, and I think it's weird that EY is acting as if this is a reasonable thing to do.
"It's tempting to anthropomorphize GPT-3 as trying its hardest to make John smart" seems obviously incorrect if it's explicitly phrased that way, but e.g. the "Giving GPT-3 a Turing Test" post seems to implicitly assume something like it:
This gives us a hint for how to stump the AI more consistently. We need to ask questions that no normal human would ever talk about. …
Q: How m…
If the linked SSC article is about the aestivation hypothesis, see the rebuttal here.
Remember that I’m not interested in evidence here, this post is just about what the theoretical analysis says :)
In an economy where the relative wealth of rich and poor people is constant, poor people and rich people both have consumption equal to their income.
I agree that there's some subtlety here, but I don't think that all that happened here is that my model got more complex.
I think I'm trying to say something more like "I thought that I understood the first-order considerations, but actually I didn't." Or "I thought that I understood the solution to this particular problem, but actually that problem had a different solution than I thought it did". Eg in the situations of 1, 2, and 3, I had a picture in my head of some idealized market, and I had false beliefs about wh…
I agree that the case where there are several equilibrium points that are almost as good for the employer is the case where the minimum wage looks best.
Re point 1, note that the minimum wage decreases total consumption, because it reduces efficiency.
I've now made a Guesstimate here. I suspect that it is very bad and dumb; please make your own that is better than mine. I'm probably not going to fix problems with mine. Some people like Daniel Filan are confused by what my model means; I am like 50-50 on whether my model is really dumb or just confusing to read.
I also don't understand this part. "4x as many mild cases as severe cases" is compatible with what I assumed (10%-20% of all cases end up severe or critical), but where does 3% come from?
Yeah my text was wrong here; I meant that I think you get 4…
Oh yeah I'm totally wrong there. I don't have time to correct this now. Some helpful onlooker should make a Guesstimate for all this.
Epistemic status: I don't really know what I'm talking about. I am not at all an expert here (though I have been talking to some of my more expert friends about this).
EDIT: I now have a Guesstimate model here, but its results don't really make sense. I encourage others to make their own.
Here's my model: To get such a large death toll, there would need to be lots of people who need oxygen all at once and who can't get it. So we need to multiply the proportion of people who might be infected all at once by the fatality rate for such people. I'm going to…
In places with aggressive testing, like Diamond Princess and South Korea, you see much lower fatality rates, which suggests that lots of cases are mild.
With South Korea, I think most cases have not had enough time to progress to fatality yet. With Diamond Princess, there are 7 deaths out of 707 detected cases so far, with more than half of the cases still active. I'm not sure how you concluded from this "that lots of cases are mild". Please explain more? That page does say only 35 serious or critical cases, but I suspect this is probably because the pas…
Just for the record, I think that this estimate is pretty high and I'd be pretty surprised if it were true; I've talked to a few biosecurity friends about this and they thought it was too high. I'm worried that this answer has been highly upvoted but there are lots of people who think it's wrong. I'd be excited for more commenters giving their bottom line predictions about this, so that it's easier to see the spread.
Wei_Dai, are you open to betting about this? It seems really important for us to have well-calibrated beliefs about this.
Yeah, I kind of wrote that in a hurry to highlight the implications of one particular update that I made (namely that if hospitals are overwhelmed the CFR will become much higher), and didn't mean to sound very confident or have it be taken as the LW consensus. (Maybe some people also upvoted it for the update rather than for the bottom line prediction?)
I do still stand by it in the sense that I think there's >50% chance that global death rate will be >2.5%. Instead of betting about it though, maybe you could try to convince me otherwise? E.g., what's the weakest part of my argument/model, or what's your prediction and how did you arrive at it?
(I'm unsure whether I should write this comment referring to the author of this post in second or third person; I think I'm going to go with third person, though it feels a bit awkward. Arthur reviewed this comment before I posted it.)
Here are a couple of clarifications about things in this post, which might be relevant for people who are using it to learn about the MIRI recruiting process. Note that I'm the MIRI recruiter Arthur describes working with.
General comments:
I think Arthur is a really smart, good programmer. Arthur doesn't have as much background…
Hi,
Thank you for your long and detailed answer. I'm amazed that you were able to write it so quickly after the post was published, especially since you sent me your answer by email even though I had just published my post on LW without showing it to anyone first.
Arthur reports various people in this post as saying things that I think he somewhat misinterpreted, and I disagree with several of the things he describes them as saying.
I added a link to this comment at the top of the post. I am not surprised to learn that I misunderstood some things which were said…
For the record, parts of that ratanon post seem extremely inaccurate to me; for example, the claim that MIRI people are deferring to Dario Amodei on timelines is not even remotely reasonable. So I wouldn't take it that seriously.
Agreed I wouldn’t take the ratanon post too seriously. For another example, I know from living with Dario that his motives do not resemble those ascribed to him in that post.
In OpenAI's Roboschool blog post:
This policy itself is still a multilayer perceptron, which has no internal state, so we believe that in some cases the agent uses its arms to store information.
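As a minimal sketch of what "no internal state" means here (my own illustration, not OpenAI's code; the names are made up): a feedforward policy recomputes its action from the current observation alone, so any memory has to be stored in the environment, e.g. in the arm positions.

```python
import numpy as np

# Hypothetical stateless MLP policy: nothing persists between calls,
# so the same observation always yields the same action. Any "memory"
# must be written into the world the agent observes (e.g. arm poses).
class MLPPolicy:
    def __init__(self, obs_dim, act_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = 0.1 * rng.standard_normal((obs_dim, hidden))
        self.w2 = 0.1 * rng.standard_normal((hidden, act_dim))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)
        return np.tanh(h @ self.w2)
```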
formatting problem, now fixed
Given a policy π we can directly search for an input on which it behaves a certain way.
(I'm sure this point is obvious to Paul, but it wasn't to me)
We can search for inputs on which a policy behaves badly, which is really helpful for verifying the worst case of a certain policy. But we can't search for a policy which has a good worst case, because that would require using the black box inside the function passed to the black box, which we can't do. I think you can also say this as "the black box is an NP oracle, not a Σ₂ oracle".
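A toy way to see the asymmetry (my own sketch; `search_inputs`, `behaves_badly`, and `search_policies` are made-up names, and the real black box is of course not a Python function):

```python
from itertools import product

# Toy, self-contained illustration. Policies and inputs are tiny bit-vectors;
# a policy "behaves badly" on an input iff they match exactly.
INPUTS = list(product([0, 1], repeat=3))
POLICIES = list(product([0, 1], repeat=3))

def behaves_badly(policy, x):
    return policy == x

def search_inputs(predicate):
    """Stand-in for the black box: return an input satisfying `predicate`, if any."""
    for x in INPUTS:
        if predicate(x):
            return x
    return None

# NP-style query (what we *can* do): check one fixed policy's worst case.
policy = POLICIES[0]
print(search_inputs(lambda x: behaves_badly(policy, x)))  # finds the bad input (0, 0, 0)

# Sigma_2-style query (what we *can't* do with the real black box): the predicate
# we'd hand to a search over policies would itself need to call the black box,
# because "this policy has a good worst case" quantifies over all inputs:
# search_policies(lambda p: search_inputs(lambda x: behaves_badly(p, x)) is None)
```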
This still means that w…
I think that the terms introduced by this post are great and I use them all the time
Ah yes this seems totally correct
[I'm not sure how good this is, it was interesting to me to think about, idk if it's useful, I wrote it quickly.]
Over the last year, I internalized Bayes' Theorem much more than I previously had; this led me to notice that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I'm going to call Bayes' Theorem "Bayes" from now on.)
Before I can tell you about that, I need to make sure you're thinking about Bayes in terms of ratios…
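Concretely, the ratio (odds) form I mean is just the standard one, posterior odds = prior odds × likelihood ratio:

$$\frac{P(H\mid E)}{P(\neg H\mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \frac{P(E\mid H)}{P(E\mid \neg H)}$$

So an observation that's, say, 5 times likelier under H than under ¬H multiplies your odds on H by 5, whatever your prior was.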
Email me at buck@intelligence.org with some more info about you and I might be able to give you some ideas (and we can maybe talk about things you could do for AI alignment more generally).
Minor point: I think asteroid strikes are probably very highly correlated between Everett branches (though maybe the timing of spotting an asteroid on a collision course is variable).
A couple weeks ago I spent an hour talking over video chat with Daniel Cantu, a UCLA neuroscience postdoc who I hired on Wyzant.com to spend an hour answering a variety of questions about neuroscience I had. (Thanks Daniel for reviewing this blog post for me!)
The most interesting thing I learned is that I had quite substantially misunderstood the connection between convolutional neural nets and the human visual system. People claim that these are somewhat bio-inspired, and that if you look at early layers of the visual cortex you'll find that it operates k…
I recommend looking on Wyzant.
I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour.
It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever but who are very happy to spend an hour answering your weird questions.
For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn't get about biology. (This is an exercise wo…
Hired an econ tutor based on this.
I've hired tutors around 10 times while I was studying at UC-Berkeley for various classes I was taking. My usual experience was that I was easily 5-10 times faster in learning things with them than I was either via lectures or via self-study, and often 3-4 one-hour meetings were enough to convey the whole content of an undergraduate class (combined with another 10-15 hours of exercises).
I'm confused about what point you're making with the bike thief example. I'm reading through that post and its comments to see if I can understand your post better with that as background context, but you might want to clarify that part of the post (with a reader who doesn't have that context in mind).
Can you clarify what is unclear about it?
I believe they would like to hire several engineers in the next few years.
We would like to hire many more than several engineers--we want to hire as many engineers as possible; this would be dozens if we could, but hiring is hard, so we'll more likely end up hiring more like ten over the next year.
I think that MIRI engineering is a really high impact opportunity, and I think it's definitely worth the time for EA computer science people to apply or email me (buck@intelligence.org).
My main concern with this is the same as the problem listed on Wei Dai's answer: whether a star near us is likely to block out this light. The sun is about 10^9 m across. A star that's 10 thousand light years away (this is 10% of the diameter of the Milky Way) occupies about (1e9 m / (10000 lightyears * 2 * pi))**2 = 10^-24 of the night sky. A galaxy that's 20 billion light years away occupies something like (100000 lightyears / 20 billion lightyears)**2 ~= 2.5e-11. So galaxies occupy far more of the sky than individual stars do. So it would be weird if individual stars blocked out a whole galaxy.
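For convenience, here's that back-of-the-envelope arithmetic as a runnable snippet (my restatement; the formulas are the same rough ones as above, so treat the outputs as order-of-magnitude only):

```python
from math import pi

LY = 9.46e15  # metres per light year

# Rough angular-size fractions from the comment above.
sun_diameter_m = 1e9
star_distance_m = 1e4 * LY       # 10,000 light years
galaxy_diameter_ly = 1e5
galaxy_distance_ly = 2e10        # 20 billion light years

star_fraction = (sun_diameter_m / (star_distance_m * 2 * pi)) ** 2
galaxy_fraction = (galaxy_diameter_ly / galaxy_distance_ly) ** 2

print(f"star:   ~{star_fraction:.1e} of the sky")    # ~2.8e-24
print(f"galaxy: ~{galaxy_fraction:.1e} of the sky")  # ~2.5e-11
```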
Another idea: If you're extremely techno-optimistic, then I think it would be better to emit light at weird wavelengths than to just emit a lot of light. E.g. emitting light at two wavelengths with ratio pi or something. This seems much more unmistakably intelligence-caused than an extremely bright light.
My first idea is to make two really big black holes and then make them merge. We observed gravitational waves from two black holes with masses of around 25 solar masses each, located 1.8 billion light years away. Presumably this force decreases as an inverse square times exponential decay; ignoring the exponential decay, this suggests to me that we need 100 times as much mass to be as prominent from 18 billion light years. A galaxy mass is around 10^12 solar masses. So if we spent 2500 solar masses on this each year, it would be at least as prominent a…
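Spelling out that arithmetic (using the comment's own rough inverse-square assumption; this is just a sketch, not a claim about the true gravitational-wave falloff):

```python
# Order-of-magnitude arithmetic for the black-hole-merger idea, under the
# inverse-square assumption above (ignoring the exponential-decay term).
reference_mass = 25           # solar masses per black hole in the observed merger
reference_distance = 1.8e9    # light years
target_distance = 18e9        # light years

scale = (target_distance / reference_distance) ** 2   # 100x under inverse-square
mass_needed = reference_mass * scale                   # ~2500 solar masses
galaxy_mass = 1e12                                     # solar masses
print(scale, mass_needed, galaxy_mass / mass_needed)   # 100.0 2500.0 400000000.0
```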
I think Anna and Rob answered the main questions here, but for the record I am still in the business of talking to people who want to work on alignment stuff. (And as Anna speculated, I am indeed still the person who processes MIRI job applications.)