Writers at MIRI will primarily focus on explaining why it's a terrible idea to build something smarter than humans that does not want what we want. They will also answer the follow-up questions we get over and over about that.

We want a great deal of overlap with Pacific time hours, yes. A nine-hour time zone difference would probably be pretty rough unless you're able to shift your own schedule by quite a bit.

Of course. But if it's you, I can't guess which application was yours from your LW username. Feel free to DM me details.

No explicit deadline; I currently expect that we'll keep the position open until it is filled. That said, I would really like to make a hire and will be fairly aggressively pursuing good applications.

I don't think there is a material difference between applying today or later this week, but I suspect/hope there could be a difference between applying this week and next week.

"Wearing your [feelings] on your sleeve" is an English idiom meaning openly showing your emotions.

It is quite distinct from the idea of belief-as-attire from Eliezer's Sequences post "Belief as Attire," in which he suggested that some people "wear" their (improper) beliefs to signal what team they are on.

Nate and Eliezer openly show their despair about humanity's odds in the face of AI x-risk, not as a way of signaling what team they're on, but because despair reflects their true beliefs.

2. Why do you see communications as being as decoupled from research as you currently do (whether because you think it inherently is, or because you think it should be)?

The things we need to communicate about right now are nowhere near the research frontier.

One common question we get from reporters, for example, is "why can't we just unplug a dangerous AI?" The answer to this is not particularly deep and does not require a researcher or even a research background to engage on.

We've developed a list of the couple dozen most common questions we're asked by the press and the general public, and they're mostly on par with that one.

There is a separate issue of doing better at communicating about our research; MIRI has historically not done very well there. Part of it is that we were/are keeping our work secret on purpose, and part of it is that communicating is hard. To whatever extent it's just about 'communicating is hard,' I would like to do better at the technical comms, but it is not my current highest priority.

Re: the wording about airstrikes in TIME: yeah, we did not anticipate how that was going to be received, and it's likely we would have wordsmithed it a bit more to make the meaning clearer had we realized. I'm comfortable calling that a mistake. (I was not yet employed at MIRI at the time, but I was involved in editing the draft of the op-ed, so it's at least as much on me as anybody else who was involved.)

Re: policy division: we are limited by our 501(c)(3) status as to how much of our budget we can spend on policy work, and here 'budget' includes the time of salaried employees. Malo and Eliezer both spend some fraction of their time on policy, but I view it as unlikely that we'll spin up a whole 'division' around that. Instead, yes, we partner with and provide technical advice to CAIP and other allied organizations. I don't view failure-to-start-a-policy-division as a mistake, and in fact I think we're using our resources fairly well here.

Re: critiquing existing policy proposals: there is undoubtedly more we could do here, though I lean more in the direction of 'let's say what we think would be almost good enough' rather than simply critiquing what's wrong with other proposals.

I think that's pretty close, though when I hear the word "activist" I tend to think of people marching in protests and waving signs, and that is not the only way to contribute to the effort to slow AI development. I think more broadly about communications and policy efforts, of which activism is a subset.

It's also probably a mistake to put capabilities researchers and alignment researchers in two entirely separate buckets. Their motivations may distinguish them, but my understanding is that the actual work they do unfortunately overlaps quite a bit.

That's pretty surprising to me; for a while I assumed that a world where 10% of the population knew about superintelligence as the final engineering problem would be a nightmare scenario, e.g. because it would cause acceleration.

"Don't talk too much about how powerful AI could get because it will just make other people get excited and go faster" was a prevailing view at MIRI for a long time, I'm told. (That attitude pre-dates me.) At this point many folks at MIRI believe that the calculus has changed, that AI development has captured so much energy and attention that it is too late for keeping silent to be helpful, and now it's better to speak openly about the risks.