Wei Dai

If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.

My main "claims to fame":

  • Created the first general purpose open source cryptography programming library (Crypto++, 1995), motivated by AI risk and what's now called "defensive acceleration".
  • Published one of the first descriptions of a cryptocurrency based on a distributed public ledger (b-money, 1998), predating Bitcoin.
  • Proposed UDT, combining the ideas of updatelessness, policy selection, and evaluating consequences using logical conditionals.
  • First to argue for pausing AI development based on the technical difficulty of ensuring AI x-safety (SL4 2004, LW 2011).
  • Identified current and future philosophical difficulties as core AI x-safety bottlenecks, potentially insurmountable by human researchers, and advocated for research into metaphilosophy and AI philosophical competence as possible solutions.

My Home Page

Comments

Shortform
Wei Dai · 7h

I'm curious what you say about "which are the specific problems (if any) where you specifically think 'we really need to have solved philosophy / improved-a-lot-at-metaphilosophy' to have a decent shot at solving this?'"

Assuming by "solving this" you mean solving AI x-safety or navigating the AI transition well, I just posted a draft about this. Or if you've already read that and are asking for an even more concrete example: a scenario I often think about is an otherwise aligned ASI, some time into the AI transition, when things are moving very fast (from a human perspective) and many highly consequential decisions need to be made (e.g., what alliances to join, how to bargain with others, how to self-modify or take advantage of the latest AI advances, how to think about AI welfare and other near-term ethical issues, what to do about commitment races and threats, how to protect the user against manipulation or value drift, whether to satisfy some user request that might be harmful according to their real values), decisions that often involve philosophical problems. And the AI can't just ask its user (or alignment target), or even predict "what would the user say if they thought about this for a long time", because the user themselves may not be philosophically very competent, and/or making such predictions with high accuracy (over a long enough time frame) is still outside the AI's range of capabilities.

So the specific problem is how to make sure this AI doesn't make wrong decisions that cause a lot of waste or harm, and that quickly or over time cause most of the potential value of the universe to be lost. That in turn seems to involve figuring out how the AI should think about philosophical problems, or how to make the AI philosophically competent even if its alignment target isn't.

Does this help / is this the kind of answer you're asking for?

Wei Dai's Shortform
Wei Dai · 15h

Some of Eliezer's founder effects on the AI alignment/x-safety field that seem detrimental and persist to this day:

  1. Plan A is to race to build a Friendly AI before someone builds an unFriendly AI.
  2. Metaethics is a solved problem. Ethics/morality/values and decision theory are still open problems. We can punt on values for now but do need to solve decision theory. In other words, decision theory is the most important open philosophical problem in AI x-safety.
  3. Academic philosophers aren't very good at their jobs (as shown by their widespread disagreements, confusions, and bad ideas), but the problems aren't actually that hard, and we (alignment researchers) can be competent enough philosophers and solve all of the necessary philosophical problems in the course of trying to build Friendly (or aligned/safe) AI.

I've repeatedly argued against 1 from the beginning, and also somewhat against 2 and 3, but perhaps not hard enough, because I personally benefited from them, i.e., I had pre-existing interest/ideas in decision theory that became validated as centrally important for AI x-safety, and I generally found a community that was interested in philosophy and took my own ideas seriously.

Eliezer himself is now trying hard to change 1, and I think we should also try harder to correct 2 and 3. On the latter, I think academic philosophy suffers from various issues, but also that the problems are genuinely hard, and alignment researchers seem to have inherited Eliezer's gung-ho attitude towards solving these problems, without adequate reflection. Humanity having few competent professional philosophers should be seen as (yet another) sign that our civilization isn't ready to undergo the AI transition, not a license to wing it based on one's own philosophical beliefs or knowledge!

In this recent EAF comment, I analogize AI companies trying to build aligned AGI with no professional philosophers on staff (the only exception I know of is Amanda Askell) to a company trying to build a fusion reactor with no physicists on staff, only engineers. I wonder if that analogy resonates with anyone.

Shortform
Wei Dai · 1d

To try to explain how I see the difference between philosophy and metaphilosophy:

My definition of philosophy is similar to @MichaelDickens', but I would use "have serviceable, explicitly understood methods" instead of "formally studied" or "formalized" to define what isn't philosophy, as the latter could be interpreted as too high a bar, e.g., in the sense of formal systems.

So in my view, philosophy is directly working on various confusing problems (such as "what is the right decision theory") using whatever poorly understood methods we have or can implicitly apply, while metaphilosophy is trying to help solve these problems on a meta level, by better understanding the nature of philosophy, for example:

  1. Try to find out whether there is some unifying quality that ties all of these "philosophical" problems together (besides "lack of serviceable, explicitly understood methods").
  2. Try to formalize some part of philosophy, or find explicitly understood methods for solving certain philosophical problems.
  3. Try to formalize all of philosophy wholesale, or explicitly understand what it is that humans are doing (or should be doing, or what AIs should be doing) when it comes to solving problems in general. This may not be possible, i.e., maybe there is no general method that lets us solve every problem given enough time and resources, but it sure seems like humans have some kind of general-purpose (but poorly understood) method that lets us make progress slowly over time on a wide variety of problems, including ones that are initially very confusing, or where it's hard to understand/explain what we're even asking, etc. We can at least aim to understand what it is that humans are or have been doing, even if it's not a fully general method.

Does this make sense?

Shortform
Wei Dai · 1d

One way to see that philosophy is exceptional is that we have serviceable explicit understandings of math and natural science, even formalizations in the form of axiomatic set theory and Solomonoff Induction, but nothing comparable in the case of philosophy. (Those formalizations are far from ideal or complete, but they still represent a much higher level of understanding than we have for philosophy.)

If you say that philosophy is a (non-natural) science, then I challenge you: come up with something like Solomonoff Induction, but for philosophy.
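
For concreteness, the rough level of formalization I have in mind (the Solomonoff prior, stated informally and glossing over details like the choice of universal prefix machine):

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}$$

where $U$ is a universal prefix Turing machine and $|p|$ is the length of program $p$; induction then predicts continuations of $x$ in proportion to $M$. Nothing remotely like this exists for philosophy.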

life lessons from trading
Wei Dai · 2d
  1. Trading is a zero sum game inside a larger positive sum game. Though every trade has a winner and offsetting losers,

This isn't true. Sometimes you're trading against someone with non-valuation motives, i.e., someone buying or selling for a reason besides thinking that the current market price is too low or too high, for example, someone being liquidated due to a margin violation, or the founder of a company wanting to sell in order to diversify. In that case, it makes more sense to think of yourself as providing a service for the other side of the trade, instead of there being a winner and a loser.

markets as a whole direct resources across space and time and help civilizations grow.

Unpriced externalities imply that markets sometimes harm civilizations. I think investment in AGI/ASI is a prime example of this, with x-risk being the unpriced externality.
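
As a toy illustration of the arithmetic (with made-up numbers): if a trade gives each counterparty +10 of private surplus but imposes an unpriced cost of 100 on third parties, the total welfare change is 10 + 10 - 100 = -80, negative even though both traders participate willingly.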

leogao's Shortform
Wei Dai · 4d

Figuring out the underlying substance behind "philosophy" is a central project of metaphilosophy, which is far from solved. My usual starting point is "trying to solve confusing problems for which we don't have established methodologies" (methodologies meaning explicitly understood methods). I think this bakes in the fewest assumptions about what philosophy is or could be, while still capturing the usual meaning of "philosophy", and it explains why certain fields started off as part of philosophy (e.g., science starting off as natural philosophy) and became "not philosophy" once we figured out methodologies for solving them.

I think "figure out what are the right concepts to be use, and, use those concepts correctly, across all of relevant-Applied-conceptspace" is the expanded version of what I meant, which maybe feels more likely to be what you mean.

This bakes in "concepts" being the most important thing, but is that right? Must AIs necessarily think about philosophy using "concepts"? Is that really the best way to formulate how idealized philosophical reasoning should work?

Is "concepts" even what distinguishes philosophy from non-philosophical problems, or is "concepts" just part of how humans reason about everything, which we latch onto when trying to define or taboo philosophy, because we have nothing else better to latch onto? My current perspective is that what uniquely distinguishes philosophy is their confusing nature and the fact that we have no well-understood methods for solving them (but would of course be happy to hear any other perspectives on this).

Regarding good philosophical taste (or judgment), that is another central mystery of metaphilosophy, which I've been thinking a lot about but don't have any good handles on. It seems like a thing that exists (and is crucial), but it is very hard to see how/why it could exist or what kind of thing it could be.

So anyway, I'm not sure how much help any of this is, when trying to talk to the type of person you mentioned. The above are mostly some cached thoughts I have on this, originally for other purposes.

BTW, good philosophical taste being rare definitely seems like a very important part of the strategic picture, which potentially makes the overall problem insurmountable. My main hopes are 1) someone makes an unexpected metaphilosophical breakthrough (kind of like Satoshi coming out of nowhere to totally solve distributed currency) and there's enough good philosophical taste among the AI safety community (including at the major labs) to recognize it and incorporate it into AI design, or 2) there's an AI pause during which human intelligence enhancement comes online and selecting for IQ increases the prevalence of good philosophical taste as a side effect (as it seems too much to hope that good philosophical taste would be directly selected for), and/or there's substantial metaphilosophical progress during the pause.

leogao's Shortform
Wei Dai · 4d

Unless you can abstract out the "alignment reasoning and judgement" part of a human's entire brain process (with philosophical reasoning and judgement as part of that) into some kind of explicit understanding of how it works, how do you actually build that into an AI without solving uploading (which we're obviously not on track to solve in 2-4 years either)?

put a bunch of smart thoughtful humans in a sim and run it for a long time

Alignment researchers have had this thought for a long time (see e.g. Paul Christiano's A formalization of indirect normativity), but I think the practical alignment research programs this line of thought led to, such as IDA and Debate, are all still bottlenecked by a lack of metaphilosophical understanding: without the kind of understanding that lets you build an "alignment/philosophical reasoning checker" (analogous to a proof checker for mathematical reasoning), they're stuck trying to do ML on alignment/philosophical reasoning from human data, which I think is unlikely to work out well.
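
To make the proof-checker side of the analogy concrete, here is a minimal illustrative sketch (my own toy example, not an existing system): a checker for propositional proofs that accepts a purported proof only if every step is a premise or follows from earlier lines by modus ponens. The unsolved part is that we have nothing analogous that can mechanically check a step of alignment or philosophical reasoning.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Implies:
    """A formula of the form 'antecedent -> consequent'."""
    antecedent: object
    consequent: object


def check_proof(premises, steps):
    """Return True iff every step is a premise or follows from earlier
    lines by modus ponens (from A and A -> B, conclude B)."""
    derived = list(premises)
    for step in steps:
        if step in derived:
            derived.append(step)
            continue
        justified = any(
            isinstance(line, Implies)
            and line.consequent == step
            and line.antecedent in derived
            for line in derived
        )
        if not justified:
            return False  # an unjustified step; reject the whole proof
        derived.append(step)
    return True


if __name__ == "__main__":
    premises = ["it_rains", Implies("it_rains", "ground_wet")]
    print(check_proof(premises, ["ground_wet"]))     # True: follows by modus ponens
    print(check_proof(premises, ["ground_is_dry"]))  # False: not justified
```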

leogao's Shortform
Wei Dai · 4d

first, I think it implies that we should try to figure out how to reduce the asymmetry in verifiability between capabilities and alignment

If solving alignment implies solving difficult philosophical problems (and I think it does), then a major bottleneck for verifying alignment will be verifying philosophy, which in turn implies that we should be trying to solve metaphilosophy (i.e., understand the nature of philosophy and philosophical reasoning/judgment). But that is unlikely to be possible within 2-4 years, even with the largest plausible effort, considering the history of analogous fields like metaethics and philosophy of math.

What to do in light of this? Try to verify the rest of alignment, just wing it on the philosophical parts, and hope for the best?

in particular, because ultimately the only way we can make progress on alignment is by relying on whatever process for deciding that research is good that human alignment researchers use in practice (even provably correct stuff has the step where we decide what theorem to prove and give an argument for why that theorem means our approach is sound), there’s an upper bound on the best possible alignment solution that humans could ever have achieved, which is plausibly a lot lower than perfectly solving alignment with certainty.

I kind of want to argue against this, but also am not sure how this fits in with the rest of your argument. Whether or not there's an upper bound that's plausibly a lot lower than perfectly solving alignment with certainty, it doesn't seem to affect your final conclusions?

leogao's Shortform
Wei Dai · 5d

Have you seen A Master-Slave Model of Human Preferences? To summarize, I think every human is trying to optimize for status, consciously or subconsciously, including those who otherwise fit your description of the idealized platonic researcher. For example, I'm someone who has (apparently) "chosen ultimate (intellectual) freedom over all else", having done all of my research outside of academia or any formal organizations, but on reflection I think I was striving for status (prestige) as much as anyone; it was just that my subconscious picked a different strategy than most (which eventually proved quite successful).

at the end of the day, what’s even the point of all this?

I think it's probably a result of most humans not being very strategic, or their subconscious strategizers not being very competent. Or, zooming out, it's also a consequence of academia being suboptimal as an institution for leveraging humans' status and other motivations to produce valuable research. That in turn is a consequence of our blind spot for recognizing status as an important motivation/influence behind every human behavior, which itself exists because not explicitly recognizing status motivation is usually better for one's status.

Wei Dai's Shortform
Wei Dai · 5d

I'm still using it for this purpose, but I don't have a good sense of how much worse it is compared to pre-0325. However, I'm definitely very wary of the sycophancy and overall bad judgment. I'm only using these models to point out potential issues I may have overlooked, and not, e.g., to judge whether a draft is ready to post or whether some potential issue is a real one that needs to be fixed. All the models I've tried seem to err a lot in both directions.

Posts

  • Wei Dai's Shortform (10 points, 2y, 262 comments)
  • Managing risks while trying to do good (65 points, 2y, 28 comments)
  • AI doing philosophy = AI generating hands? (47 points, 2y, 23 comments)
  • UDT shows that decision theory is more puzzling than ever (226 points, 2y, 56 comments)
  • Meta Questions about Metaphilosophy (163 points, 2y, 80 comments)
  • Why doesn't China (or didn't anyone) encourage/mandate elastomeric respirators to control COVID? (34 points, 3y, 15 comments)
  • How to bet against civilizational adequacy? (55 points, 3y, 20 comments)
  • AI ethics vs AI alignment (6 points, 3y, 1 comment)
  • A broad basin of attraction around human values? (120 points, 4y, 18 comments)
  • Morality is Scary (236 points, 4y, 116 comments)
Wikitag Contributions

  • Carl Shulman (2 years ago)
  • Carl Shulman (2 years ago, -35)
  • Human-AI Safety (2 years ago)
  • Roko's Basilisk (7 years ago, +3/-3)
  • Carl Shulman (8 years ago, +2/-2)
  • Updateless Decision Theory (12 years ago, +62)
  • The Hanson-Yudkowsky AI-Foom Debate (13 years ago, +23/-12)
  • Updateless Decision Theory (13 years ago, +172)
  • Signaling (13 years ago, +35)
  • Updateless Decision Theory (14 years ago, +22)