YonatanK

Comments

Anthropic's leading researchers acted as moderate accelerationists
YonatanK · 6h · 121

Thanks for this.

A minor comment and a major one:

  1. The nits: the section on the Israeli military's use of AI against Hamas could use some tightening to avoid getting bogged down in the particularities of the Palestine situation. The line "some of the surveillance tactics Israeli settlers tested in Palestine" (my emphasis) suggests to me the interpretation that all Israelis are "settlers," which is not the conventional use of that term. Conventionally, "settlers" refers only to those Israelis living over the Green Line, and particularly to those doing so with the ideological intent of expanding Israel's de facto borders. Similarly but separately, the discussion of Microsoft's response seemed to me to treat as fact what I believe are still only allegations.

  2. The major comment: I feel you could go further in connecting the dots between the "enshittification" of Anthropic and the issues you raise about the potential of AI to help enshittify democratic regimes. The idea that there are "exogenously" good and bad guys, with the former being trustworthy to develop A(G)I and the latter being the ones "we" want to stop from winning the race, is really central to AI discourse. You've pointed out the pattern in which participating in the race turns the "good" guys into bad guys (or at least untrustworthy ones).

Underdog bias rules everything around me
YonatanK · 10d · 40

I think this is the right response to the piece, but it calls for a more explicit challenge to the conclusion that underdog bias is maladaptive (@Garrett Baker offers both pre-modern tribal life and modern international relations as spheres in which this behavior is sensible).

One ought to be careful of the "anti-bias bias," which leads one to accept evolutionary explanations for biases but then make up reasons why they're maladaptive, so as to fit the (speculative) narrative that the world can be perfected by increasing the prevalence of objectively true beliefs.

It's not about the sex: a moral restoration response to Trumpism
YonatanK · 1mo · -50
Three Months In, Evaluating Three Rationalist Cases for Trump
YonatanK · 1mo · 10

I have just written a full post inspired by this comment.

$500 bounty for engagement on asymmetric AI risk
YonatanK · 1mo · 10

But being equally against both requires a positive program for preventing Option 1 other than the default of halting the technological development that can lead to it (which amounts to taking Option 2, or at least to a delay in immortality because human research is slower)! Conversely, without committing to finding such a program, pursuing the avoidance of Option 2 is an implicit acceptance of Option 1. Are you committing to that search? And if it fails, which option will you choose?

$500 bounty for engagement on asymmetric AI risk
YonatanK · 1mo · 10

Well, it doesn't sound like I misunderstood you so far, but just so I'm clear, are you not also saying that people ought to favor being annihilated by a small number of people controlling an AGI aligned to them (one that also grants them immortality), over dying naturally with no immortality-granting AGI ever being developed? Perhaps even that this is an obviously correct position?

What We Learned from Briefing 70+ Lawmakers on the Threat from AI
YonatanK · 3mo · 10

Can you speak to the difficulties of addressing risks from development in the national defense sector, which tends to be secret and therefore exposes us to the streetlight problem?

What We Learned from Briefing 70+ Lawmakers on the Threat from AI
YonatanK · 3mo · 10

For this to be true, parliamentarians would have to be like ducklings who imprint on whoever gets to them first, or else perversely run away from protecting The Greater Good merely because it is The Greater Good. That level of cynicism goes far beyond what falls out of the standard Interest Group model (e.g. Mancur Olson's). By that model, given that ControlAI represents a committed interest group, there is no reason to believe they can't win.

$500 bounty for engagement on asymmetric AI risk
YonatanK · 3mo · 40

First let me say that with respect to the world of alignment research, or the AI world in general, I am nothing. I don't have a job in those areas, I am physically remote from where the action is. My contribution consists of posts and comments here.

This assertion deserves a lot of attention IMO, worthy of a post on its own, something along the lines of Why Rationalists Aren't Winners (meant not to mock, but to put it in terms of what rationalism is supposed to do). The gist is that morality is useful for mass coordination to solve collective action problems. When you participate in deliberation about what is good for the group, help arrive at shared answers to the question "how ought we to behave?" and then commit to following those answers, that is power and effectiveness. Overcoming biases that help with coordination so you can, what, win at poker, is not winning. Nassim Nicholas Taleb covers this quite well.

Thanks for working through your thinking. And thanks for bringing my/our attention to "The Peace War"; I was not aware of it until now. My only caveat is that one must discount the verisimilitude of science fiction, because it demands conflict to be interesting to read: it creates oppressive conditions for the protagonists to overcome, where rational antagonists would eschew those oppressive conditions precisely so that they never need to protect themselves from plucky protagonists.

The same kind of reasoning applies to bringing about AI overlords when you don't have to. @Mars_Will_Be_Ours covers this well in their comment.

The egoist/nihilist categories aren't mutually exclusive. "For the environment" is neither nihilistic nor non-egoistic when the environment is the provider of everything you need to live a good, free, peaceful, albeit finite, life.

$500 bounty for engagement on asymmetric AI risk
YonatanK · 3mo · 61

In that case, AI risk becomes similar to aging risk – it will kill me and my friends and relatives. The only difference is the value of future generations.

The casualness with which you throw out this comment seems to validate my assertion that "AI risk" and "risk of a misaligned AI destroying humanity" have become nearly conflated because of what, from the outside, looks like an incidental idiosyncrasy (longtermism) of the people initially attracted to the study of AI alignment.

Part of the asymmetry that I'm trying to get acknowledgement of is subjective (or, if you prefer, due to differing utility functions). For most people, "aging risk" is not even a thing, but "me, my friends, and my relatives all being killed" very much is. This is not a philosophical argument; it's a fact about fundamental values. And fundamental differences in values, especially between large majorities and empowered minorities, are a very big deal.

Posts

-5 · It's not about the sex: a moral restoration response to Trumpism · 1mo · 2
21 · $500 bounty for engagement on asymmetric AI risk · 3mo · 12
2 · YonatanK's Shortform · 5mo · 1
7 · Populectomy.ai · 5mo · 2
6 · To the average human, controlled AI is just as lethal as 'misaligned' AI · 1y · 20
3 · Winners-take-how-much? · 2y · 2