_will_

Comments
Six Plausible Meta-Ethical Alternatives
_will_8mo80

Great post! I find myself coming back to it—especially possibility 5—as I sit here in 2025 thinking/worrying about AI philosophical competence and the long reflection.

On 6,[1] I’m curious if you’ve seen this paper by Joar Skalse? It begins:

I present an argument and a general schema which can be used to construct a problem case for any decision theory, in a way that could be taken to show that one cannot formulate a decision theory that is never outperformed by any other decision theory.

  1. ^

    Pasting here for easy reference (emphasis my own):

    6. There aren’t any normative facts at all, including facts about what is rational. For example, it turns out there is no one decision theory that does better than every other decision theory in every situation, and there is no obvious or widely-agreed-upon way to determine which one “wins” overall.

johnswentworth's Shortform
_will_8mo122

See also ‘The Main Sources of AI Risk?’ by Wei Dai and Daniel Kokotajlo, which puts forward 35 routes to catastrophe (most of which are disjunctive). (Note that many of the routes involve something other than intent alignment going wrong.)

The Field of AI Alignment: A Postmortem, and What To Do About It
_will_8mo87

Any chance you have a link to this tweet? (I just tried control+f'ing through @Richard's tweets over the past 5 months, but couldn't find it.)

RobertM's Shortform
_will_9mo*50

On your second point, I think that MacAskill and Ord were more saying “It would be worth it to spend thousands of years figuring out moral philosophy / figuring out what to do with the cosmos, if that’s how long it takes to be ~sure we’ve reached the ‘correct’ answer before locking things in, on account of the astronomical waste argument” than “I literally predict it will take today-humans thousands of years to figure out moral philosophy, even if we make a serious and coordinated effort to do so.” Somewhat relatedly, quoting from the ‘Long Reflection Reading List’ I wrote earlier this year (fn. 4):

Original discussion of the long reflection indicated that it could be a lengthy process of 10,000 years or more. More recent discussion I’m aware of (which is nonpublic, hence no corresponding reading) (i) takes seriously the possibility that the long reflection could last just weeks rather than years or millennia, and (ii) notes that wall-clock time is probably not the most useful way to think about the length of reflection, given that the reflection process, if it happens at all, will likely involve many superfast AIs doing the bulk of the cognitive labor.

On your first point, I continue to be curious about your perspective. I basically agree with the following (written by Zach Stein-Perlman), but, based on what you said in your parentheses, it sounds like you view it as a bad plan?

The outline of the best [post-AGI] plan I’ve heard is build human-obsoleting AIs which are sufficiently aligned/trustworthy that we can safely defer[1] to them (before building wildly superintelligent AI). Assume it will take 5-10 years after AGI to build such systems and give them sufficient time. To buy time (or: avoid being rushed by other AI projects[2]), inform the US government and convince it to enforce nobody builds wildly superintelligent AI for a while (and likely limit AGI weights to allied projects with excellent security and control).

(I could be off, but it sounds like either you expect solving AI philosophical competence to come pretty much hand in hand with solving intent alignment (because you see them as similar technical problems?), or you expect not solving AI philosophical competence (while having solved intent alignment) to lead to catastrophe (thus putting us outside the worlds in which x-risks are reliably ‘solved’ for), perhaps in the way Wei Dai has talked about?)

  1. ^

    We don't need these human-obsoleting AIs to be able to implement CEV. We want to be able to defer to them on tricky, wisdom-loaded questions like “What should we do about the overall AI situation?” They can ask us questions as needed.

  2. ^

    To avoid being rushed by your own AI project, you also have to ensure that your AI can't be stolen and can't escape, so you have to implement excellent security and control.

MIRI 2024 Communications Strategy
_will_11mo32

Thanks, that’s helpful!

(Fwiw, I don’t find the ‘caring a tiny bit’ story very reassuring, for the same reasons as Wei Dai, although I do find the acausal trade story for why humans might be left with Earth somewhat heartening. (I’m assuming that by ‘game-theoretic reasons’ you mean acausal trade.))

MIRI 2024 Communications Strategy
_will_11mo*10

I don't think [AGI/ASI] literally killing everyone is the most likely outcome

Huh, I was surprised to read this. I’ve imbibed a non-trivial fraction of your posts and comments here on LessWrong, and, before reading the above, my shoulder Daniel definitely saw extinction as the most likely existential catastrophe.

If you have the time, I’d be very interested to hear what you do think is the most likely outcome. (It’s very possible that you have written about this before and I missed it—my bad, if so.)

johnswentworth's Shortform
_will_11mo*42

Hmm, the ‘making friends’ part seems the most important (since there are ways to share new information you’ve learned, or solve problems, beyond conversation), but it also seems a bit circular. Like, if the reason for making friends is to hang out and have good conversations(?), but one has little interest in having conversations, then doesn’t one have little reason to make friends in the first place, and therefore little reason to ‘git gud’ at the conversation game?

LTFF and EAIF are unusually funding-constrained right now
_will_2y*3-5

So basically I don't think it's possible to do robustly positive actions in longtermism with high (>70%? >60%?) probability of being net positive for the long-term future

This seems like an important point, and it's one I've not heard before. (At least, not outside of cluelessness or specific concerns around AI safety speeding up capabilities; I'm pretty sure that most EAs I know have ~100% confidence that what they're doing is net positive for the long-term future.)

I'm super interested in how you might have arrived at this belief: would you be able to elaborate a little? For instance, is there a theoretical argument going on here, like a weak form of cluelessness? Or is it more empirical, for example, did you get here through evaluating a bunch of grants and noticing that even the best seem to carry 30-ish percent downside risk? Something else?

How to have Polygenically Screened Children
_will_2y67

"GeneSmith"... the pun just landed with me. nice.

Open Thread With Experimental Feature: Reactions
_will_2y31

Very nitpicky (sorry): it'd be nice if the capitalization of the epistemic status reactions were consistent. Currently, some are in title case, for example "Too Harsh" and "Hits the Mark", while others are in sentence case, like "Key insight" and "Missed the point". The autistic part of me finds this upsetting.

Posts

Evidential Cooperation in Large Worlds: Potential Objections & FAQ (46 karma, 2y, 5 comments)
Everett branches, inter-light cone trade and other alien matters: Appendix to “An ECL explainer” (17 karma, 2y, 0 comments)
Cooperating with aliens and AGIs: An ECL explainer (57 karma, 2y, 8 comments)
AI Risk & Policy Forecasts from Metaculus & FLI's AI Pathways Workshop (11 karma, 2y, 4 comments)
Wikitag Contributions

Simulacrum Levels · 2 years ago · (+1/-2)
Courage · 2 years ago · (+6/-8)
Fun Theory · 3 years ago
Forecasting & Prediction · 3 years ago · (-1)
Forecasting & Prediction · 3 years ago · (+1/-5)
Courage · 3 years ago · (+74/-76)
Center for Human-Compatible AI (CHAI) · 3 years ago · (+20/-20)
Logic & Mathematics · 3 years ago · (+12/-11)
World Modeling · 3 years ago · (+1/-3)
Updateless Decision Theory · 3 years ago · (+38/-43)