LESSWRONG
awg's Shortform

by awg
27th Feb 2023
1 min read

This is a special post for quick takes by awg. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
4 comments
awg · 2y
"«Boundaries» and AI safety compilation" and "Embedded Agents" got me thinking:

Cancerous cells are misaligned subsystems with respect to the human body: their misalignment produces behavior that violates the usual functional boundaries of the body's other subsystems.

awg · 2y
One thing I have observed in myself as I've followed AI more closely, especially as the pace has seemed to escalate in the past few weeks/months, is that my level of care for climate change has dropped significantly. (Maybe irrationally, to some degree.) I find myself being bored by appeals to climate change risk at this point, especially longer-term risks. They feel paltry in comparison to the risks posed by AGI. Like, assuming timelines <30-50 years, either AGI goes well and then climate change is a solved problem, or AGI doesn't go well and then climate change is no longer a concern.

Vladimir_Nesov · 2y (edited)
The model of the world that has superintelligence in its future straightforwardly predicts that the scope of counterfactually higher climate change is much smaller than standard estimates, which ignore this consideration. After making that update, the emotional impression of caring less correctly tracks the underlying concern.

awg · 3y
EY gets mentioned in a recent Atlantic newsletter by writer Derek Thompson.
