YonatanK

Comments
Ethical Design Patterns
YonatanK · 14d · 21

Many practice and endorse ethical heuristics against the censure of speech on any topic, especially any salient and politically relevant topic, lest such censure mess with our love of truth, or our ability to locate good policy options via the free and full exchange of ideas, or our freedom/autonomy/self-respect broadly.

I don't think this is actually true.

Even among rationalists I believe there are red lines: ideas that cannot be raised without censure and disgust. I won't attempt to draw them. The fact that among rationalists these lines lie elsewhere than where many other people would draw them, including on the topic of racial difference, is taken not as evidence of a commitment to open-mindedness that overrides other ethical commitments, but simply as a lack of commitment to those specific principles, with the commitment to open-mindedness as thin cover. Tetlock's ideas about sacred values, which cannot easily be traded off, may be useful here. It's not that those willing to discuss racial differences don't have sacred values; it's just that non-racism isn't one of them.

Regarding the clash between the prudence heuristic of "don't do something that has a 10% chance of killing all people" and other heuristics such as "don't impede progress," we have to consider the credibility problem in the assertion of risk by experts, when many of the same experts continue to work on A(G)I (and are making fortunes doing so). The statements about the risk say one thing but the actions say another, so we can't conclude that anyone is actually trading off, in a real sense, against the prudence heuristic. This relates to my previous comment: "don't kill all humans" seems like a sacred value, and so statements suggesting one is making the trade-off are not credible. From this "revealed belief" perspective, a statement "I believe there is a 10% chance that AI will kill all people" by an AI expert still working toward AGI is a false statement, and the only way to increase belief in the prediction is for AI experts to stop working on AI (at which point stopping the suicidal hold-outs becomes much easier). Conversely, amplifying the predictions of risk by leaders in the AI industry is a great way to confound the advocacy of the conscientious objectors.

Four ways learning Econ makes people dumber re: future AI
YonatanK · 17d · 30
  1. It's true that Acemoglu generally avoids dealing with extreme AI capabilities, and maybe he should be more explicit about what is in and out of scope when he talks about AI. But the criticism I would lay at his feet is that much of what he does amounts to explaining, to people who've been "dumbed" by learning economics, how economics gets it all wrong, without acknowledging that the intuitive, never-took-econ perspective doesn't need his corrections. Sort of a "man on the inside," where it's not clear the effort is worth it. A better example for your argument is Korinek and Suh's "Scenarios for the Transition to AGI," which (as the title says) considers AGI and arrives at scenarios where, for instance, wages collapse, but completely ignores how this would violate the models' unstated assumptions, such as wages needing to remain above subsistence level to avoid complete social breakdown.
  2. Nassim Nicholas Taleb, who loves taking down economists, is always worth a read here, since his critique generalizes beyond just AI. The generalization is: human behavior is (correctly) tuned to avoiding being wiped out by power asymmetries and the unexpected, not to maximizing expected returns under friendly conditions where a nation-state is there to save you from devastating losses.
$500 bounty for engagement on asymmetric AI risk
YonatanK · 1mo · 10

Thanks, Seth. What troubles me at the meta-level is the assumption of exclusive privilege implied by rationalist/utilitarian arguments, that of getting to make choices between extreme outcomes. It's not just "I, as a rationalist, have considered the trade-offs between X and Y and, if forced to, will choose X." It's "I, a rationalist, believe that rationalism is superior to heuristic-based and otherwise inconsistent reasoning, and therefore assume the responsibility of making choices on behalf of those inferior reasoners." There's not much further to go to get to "I will conceal the 'mild s-risk' of the deaths of billions from them to get them to ally with me to avoid the x-risks that I am concerned about (but to which they, in their imperfect reasoning, are relatively indifferent)."

Anthropic's leading researchers acted as moderate accelerationists
YonatanK · 2mo · 180

Thanks for this.

A minor comment and a major one:

  1. The nits: the section on the Israeli military's use of AI against Hamas could use some tightening to avoid getting bogged down in the particularities of the Palestine situation. The line "some of the surveillance tactics Israeli settlers tested in Palestine" (my emphasis) suggests the interpretation that all Israelis are "settlers," which is not the conventional use of that term. Conventionally, "settlers" refers only to those Israelis living over the Green Line, and particularly to those doing so with the ideological intent of expanding Israel's de facto borders. Similarly but separately, the discussion of Microsoft's response seemed to me to take as fact what I believe are still only allegations.

  2. The major comment: I feel you could go further in connecting the dots between the "enshittification" of Anthropic and the issues you raise about the potential of AI to help enshittify democratic regimes. The idea that there are "exogenously" good and bad guys, with the former being trustworthy to develop A(G)I and the latter being the ones "we" want to stop from winning the race, is really central to AI discourse. You've pointed out the pattern in which participating in the race turns the "good" guys into bad guys (or at least untrustworthy ones).

Underdog bias rules everything around me
YonatanK · 2mo · 40

I think this is the right response to the piece, but it calls for a more explicit challenge to the conclusion that underdog bias is maladaptive (@Garrett Baker offers both pre-modern tribal life and modern international relations as spheres in which this behavior is sensible).

One ought to be wary of an "anti-bias bias" that leads one to accept evolutionary explanations for biases but then make up reasons why they're maladaptive, so as to fit the (speculative) narrative that the world can be perfected by increasing the prevalence of objectively true beliefs.

Three Months In, Evaluating Three Rationalist Cases for Trump
YonatanK · 2mo · 10

I have just written a full post inspired by this comment.

$500 bounty for engagement on asymmetric AI risk
YonatanK · 3mo · 10

But being equally against both requires a positive program for preventing Option 1 other than the default of halting the technological development that can lead to it (and thereby taking Option 2, or at least a delay in immortality because human-only research is slower)! Conversely, without committing to finding such a program, pursuing the avoidance of Option 2 is an implicit acceptance of Option 1. Are you committing to this search? And if it fails, which option will you choose?

$500 bounty for engagement on asymmetric AI risk
YonatanK · 3mo · 10

Well, it doesn't sound like I misunderstood you so far, but just so I'm clear: are you not also saying that people ought to favor being annihilated by a small number of people who control an AGI aligned to them (and which grants them immortality), over dying naturally with no immortality-granting AGI ever being developed? Perhaps even that this is an obviously correct position?

What We Learned from Briefing 70+ Lawmakers on the Threat from AI
YonatanK · 4mo · 10

Can you speak to the difficulties of addressing risks from development in the national defense sector, which tends to be secret and therefore exposes us to the streetlight problem?

Posts

-5 · It's not about the sex: a moral restoration response to Trumpism · 2mo · 2
23 · $500 bounty for engagement on asymmetric AI risk · 4mo · 14
2 · YonatanK's Shortform · 7mo · 1
7 · Populectomy.ai · 7mo · 2
7 · To the average human, controlled AI is just as lethal as 'misaligned' AI · 2y · 20
3 · Winners-take-how-much? · 2y · 2