LESSWRONG

MichaelDickens

Comments

Summary of John Halstead's Book-Length Report on Existential Risks From Climate Change
MichaelDickens · 1d

Do you think you can learn something useful about existential risk from reading the IPCC report?

FWIW I only briefly looked at the latest report but from what I saw, it seemed hard to learn anything about existential risk from it, except for some obvious things like "humans will not go extinct in the median outcome". I didn't see any direct references to human extinction in the report, nor any references to runaway warming.

Summary of John Halstead's Book-Length Report on Existential Risks From Climate Change
MichaelDickens · 1d

"Climate change is not an x-risk" is the kind of thing you can easily (and correctly) prove to yourself in a matter of hours

How do you do that? I've spent several hours researching the topic and I'm still not convinced, but I think there's a lot I'm still missing, too.

My current thinking is:

  1. Existential risk from climate change is not greater than 1%, because if it were, climate models would show a noticeable probability of extinction-level outcomes.
  2. But I can't confidently say that existential risk is less than 0.1%, because the assumptions of climate models may break down in tail outcomes, and our understanding of climate science isn't robust enough to strongly rule out those tails (the rough decomposition sketched below illustrates this).
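
A minimal sketch of point 2, with made-up placeholder numbers (none of these probabilities come from any source): even if climate models assign essentially zero probability to extinction-level outcomes, my all-things-considered estimate is floored by my credence that the models' assumptions fail in the tails.

```python
# Toy decomposition of P(extinction-level outcome) by whether the climate
# models' assumptions actually hold in the tails. All numbers are placeholders.

p_models_hold_in_tails = 0.95   # credence that model assumptions extend to tail outcomes
p_ext_if_models_hold = 1e-5     # the models themselves show ~no extinction-level outcomes
p_ext_if_models_fail = 0.02     # much weaker evidence once the assumptions break down

p_extinction = (
    p_models_hold_in_tails * p_ext_if_models_hold
    + (1 - p_models_hold_in_tails) * p_ext_if_models_fail
)

print(f"P(extinction-level outcome) ~ {p_extinction:.4f}")  # ~0.0010: under 1%, but not clearly under 0.1%
```

The second term dominates, which is why the models' own tail behavior matters less to me than how much weight I put on their assumptions holding out there.
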
Don't Eat Honey
MichaelDickens · 1d

Bees are more social than salmon. I haven't put serious thought into it, but I can see an argument that sociality is an important factor in determining intensity-of-consciousness. (Perhaps because sociality requires complex neuron interactions that give rise to certain conscious experiences?)

Mainstream Grantmaking Expertise (Post 7 of 7 on AI Governance)
MichaelDickens · 1d

I've spoken to grantmakers about this in the past and I got the impression that they see it as a largely unavoidable problem:

  • You can't hire good people without taking a lot of time to assess them, which takes time away from other important activities.
  • Expanding the team requires hiring more managers, who are even harder to assess than grantmakers.
Habryka's Shortform Feed
MichaelDickens · 3d

I thought the claim about price wars was false, although I haven't been paying that much attention to companies' pricing. GPT was $20/month in 2023 and it's still $20/month. IIRC Gemini/Claude were available in 2023 but only had free tiers, so I don't know how to judge them.

tailcalled's Shortform
MichaelDickens · 3d

What evidence led you to believe this? In my experience, ~all non-forecasting-focused social groups are bad at making aggregate predictions.

Habryka's Shortform Feed
MichaelDickens · 9d

Also why does Rational wiki hate LW so much? What is the source of all that animosity?

I am not too familiar with RationalWiki but my impression is the editors come from a certain mindset where you always disbelieve anything that sounds weird, and LWers talk about a lot of weird stuff, which to them falls in the same bucket as religion / woo / pseudoscience. And I would think they especially dislike people calling themselves "rationalists" when in actuality they're just doing woo / pseudoscience.

An AI Race With China Can Be Better Than Not Racing
MichaelDickens · 10d

Hm, that's a good point. I don't know how to express that cleanly, but there are other intermediate options in which the US moves slower yet still fast enough that there's a >50% chance of it getting TAI first, or in which it pulls the brakes and raises alarms so that the PRC also slows down.

You could model it as a binary P(US wins | US races) and P(US wins | US does not race). A continuum would be more accurate but I think a binary is basically fine.
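
To make the binary framing concrete, here's a toy sketch in Python. All probabilities and payoffs below are placeholders I'm making up for illustration, not numbers from your post or my model.

```python
# Toy expected-value comparison of "race" vs. "don't race" using the binary
# framing above. Every number here is an illustrative placeholder.

p_us_wins_if_race = 0.8       # P(US wins | US races)
p_us_wins_if_no_race = 0.5    # P(US wins | US does not race)

# Payoffs in arbitrary units. The toy assumption: racing makes a US win more
# likely but makes every outcome worse, because less care goes into safety.
value = {
    ("us", "race"): 0.6,
    ("prc", "race"): 0.0,
    ("us", "no_race"): 1.0,
    ("prc", "no_race"): 0.2,
}

def expected_value(p_us_wins: float, policy: str) -> float:
    """Expected value of a policy given the probability the US gets TAI first."""
    return p_us_wins * value[("us", policy)] + (1 - p_us_wins) * value[("prc", policy)]

print(f"EV(race)    = {expected_value(p_us_wins_if_race, 'race'):.2f}")        # 0.48
print(f"EV(no race) = {expected_value(p_us_wins_if_no_race, 'no_race'):.2f}")  # 0.60
```

Replacing the binaries with a distribution over "how much the US slows down" would turn the two conditionals into an integral, but the comparison works the same way.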

I saw your model on squigglehub, but didn't dig into it too deeply. I encourage you to post it on here with or without an explanation :-)

Posting the model is on my to-do list but I am not very satisfied with it right now so I want to fix it up some more. I want to make a bigger model that looks at all the main effects of slowing down, not just race dynamics, although perhaps that's too ambitious.

Moral Alignment: An Idea I'm Embarrassed I Didn't Think of Myself
MichaelDickens · 13d

I think we are on the same page; I was trying to agree with what you said and add commentary on why I'm concerned about "CEV with humans as the primary source of values", although I was only responding to your first paragraph, not your second. I think your second paragraph also raises fair concerns about what a "CEV for all sentient beings" looks like.

Moral Alignment: An Idea I'm Embarrassed I Didn't Think of Myself
MichaelDickens · 13d

I expect that the CEV of human values would indeed accord moral status to animals. But including humans-but-not-animals in the CEV still seems about as silly to me as including Americans-but-not-foreigners and then hoping that the CEV ends up caring about foreigners anyway.

Posts

MichaelDickens's Shortform · 2 karma · 4y · 129 comments
How concerned are you about a fast takeoff due to a leap in hardware usage? [Question] · 9 karma · 18d · 7 comments
Why would AI companies use human-level AI to do alignment research? · 24 karma · 2mo · 8 comments
What AI safety plans are there? · 16 karma · 2mo · 3 comments
Retroactive If-Then Commitments · 7 karma · 5mo · 0 comments
A "slow takeoff" might still look fast · 5 karma · 2y · 3 comments
How much should I update on the fact that my dentist is named Dennis? [Question] · 2 karma · 3y · 3 comments
Why does gradient descent always work on neural networks? [Question] · 15 karma · 3y · 11 comments
How can we increase the frequency of rare insights? · 19 karma · 4y · 10 comments
Should I prefer to get a tax refund, or not to? [Question] · 1 karma · 5y · 6 comments