
alexflint's Comments

The simple picture on AI safety

So are you saying that my distillation didn't unpack the problem sufficiently to be helpful (in which case I agree but that wasn't my goal), or are you saying that I missed something important / included something unimportant?

The simple picture on AI safety

I parse you as pointing to the clarification of a vague problem like "flight" or "safety" or "heat" into an incrementally more precise concept or problem statement. I agree this type of clarification is ultra important and represents real progress in solving a problem, and I agree that my post absolutely did not do this. But I was actually shooting for something quite different.

I was shooting for a problem statement that (1) causes people to work on the problem, and (2) causes them to work on the right part of the problem. I claim it is possible to formulate such a problem statement without doing any clarification in the sense you pointed at, and additionally that it is useful to do so, because (1) distilled problem statements can cause additional progress to be made on a problem, and (2) clarification is super hard, so we definitely shouldn't block additional work from happening until clarification happens, since additional work could be a key ingredient in getting to key clarifications.

To many newcomers to the AI safety space, the problem feels vast and amorphous, and it seems to take a long time before newcomers have confidence that they know what exactly other people in the space are actually trying to accomplish. During this phase, I've noticed that people are mostly not willing to work directly on the problem, because of the suspicion that they have completely misunderstood where the core of the problem actually is. This is why distillation is valuable even absent clarification.

The simple picture on AI safety

I think the distillation needs to (1) be correct and (2) resonate with people. It's really hard to find a distillation that meets these two criteria. Finding such distillations is a good part of what a tech sector product manager spends their time doing.

I'm not at all sure that my distillation of AI safety meets those two criteria.

Decision theory and zero-sum game theory, NP and PSPACE

Minor nit: I always thought the term "decision theory" referred to the meta-level task of formulating an algorithm which, given fully specified beliefs and values, tells you how to compare possible actions. Conversely, when I see someone making a concrete decision using an EU calculation or some such, I don't think of them as "doing decision theory". So perhaps "decision making" or "positive sum game theory" rather than "decision theory"? It probably doesn't matter much.

Critique my Model: The EV of AGI to Selfish Individuals

I'd be interested in reading the literature you mention that suggests positive outcomes are more likely than negative outcomes, conditioned on AGI being developed. My sense is that if AGI is developed, the transition goes badly for humans, but an individual still lives for a long time, then it's quite likely that the individual has a bad life: if you select uniformly from environments that keep humans alive but are otherwise unoptimized for wellbeing, I'd expect most to be quite unhappy.

It also seems like you place around 66% probability (2.5 : 1.3) on our chances of successfully navigating the intelligence explosion. This seems quite high and may be worth pulling out into a separate variable just to make it more explicit.
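For concreteness, odds of the form a : b imply a probability of a / (a + b); a minimal sketch of that conversion, using the 2.5 : 1.3 figures from the comment (the helper name is my own):

```python
def odds_to_probability(a: float, b: float) -> float:
    """Probability implied by odds a : b in favor of an outcome."""
    return a / (a + b)

# Odds of 2.5 : 1.3 from the model under discussion.
p = odds_to_probability(2.5, 1.3)
print(f"{p:.3f}")  # roughly 0.658, i.e. about 66%
```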

Opportunities for individual donors in AI safety

Ah, the note about Max Tegmark and the singularity summit was supposed to point to that, but I had the wrong number. Fixed now.

Global insect declines: Why aren't we all dead yet?

From the link:

The study is the result of a collaboration with the Entomological Society of Krefeld – essentially made up of members of the public

This decreases my confidence in the robustness of the results of this particular study. Unfortunately I'm fairly unversed in this area, so I don't know whether this is a widely replicated result or an outlier.

The most important step

Beautiful. Please write more!
