This is a masterpiece. Not only is it funny, it makes a genuinely important philosophical point. What good are our fancy decision theories if asking Claude is a better fit to our intuitions? Asking Claude is a perfectly rigorous and well-defined DT, it just happens to be less elegant/simple than the others. But how much do we care about elegance/simplicity?
Not entirely sure how serious you're being, but I want to point out that my intuition for PD is not "cooperate unconditionally", and for logical commitment races is not "never do it", I'm confused about logical counterfactual mugging, and I think we probably want to design AIs that would choose Left in The Bomb.
I'm also confused about logical counterfactual mugging and I'm relieved I'm not the only one!
I'm currently writing up a big AI alignment idea related to it, but I'm procrastinating so badly I might as well chat about it now.
Suppose at time t, an agent doesn't know whether the logical fact L is true or false. I think it's objectively rational for an agent to modify itself, so that in the future it will pay Omega during "logical counterfactual muggings" where the counterfactual reverses L's state.
Its future self should weigh the logical counterfactual where L is true using the agent's prior probability P(L) from time t.
Assuming that Omega offers $10000 for $100 in the counterfactual mugging, the self-modification (made before learning L) increases the agent's expected future money from $100 to $10000, and is objectively rational (assuming P(L) is between 1% and 99%).
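As a sanity check on the arithmetic, here is a toy calculation. This is a sketch under one reading of the setup (my assumption, not spelled out above): the agent starts with $100, hands it over if L turns out true, and receives $10000 if L turns out false and it would have paid.

```python
# Toy expected-value check for committing to pay in a logical
# counterfactual mugging. Payoff structure is an assumed reading:
# start with $100; if L is true, Omega asks for the $100; if L is
# false, Omega pays $10000 iff the agent would have paid.

def expected_money(p: float, commits: bool) -> float:
    """Expected money at time t, before learning L, given prior P(L) = p."""
    if commits:
        # L true: hands over the $100; L false: keeps $100 and gains $10000.
        return p * 0 + (1 - p) * (100 + 10_000)
    # Never pays, so Omega never rewards: keeps the $100 either way.
    return 100.0

for p in (0.01, 0.50, 0.99):
    print(f"p={p:.2f}  commit={expected_money(p, True):8.0f}"
          f"  refuse={expected_money(p, False):8.0f}")
```

Under this reading, committing beats refusing for any prior P(L) below roughly 99%, consistent with the parenthetical above.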
However, if the agent already knows "L is true" at time t, it is not objectively rational for the agent to pay Omega for better outcomes in the "L is false" counterfactual. My hypothesis is that this is a wholly subjective question related to the agent's goals, and that it is orthogonal to capabilities. Paying and not paying are both self-consistent.
It gets really subjective once we start considering truly basic logical facts to assign to L.
Neither humans nor the misaligned ASI knows the logical fact L: "will misaligned ASI control the multiverse, or will humans and sentient life control the multiverse?"
Sure, the misaligned ASI will quickly learn L, but there is still a brief moment when it doesn't yet know L (with certainty).
It would be beneficial for both sentient life and the misaligned ASI to agree to a "Retroactive Logic Trade." Each side verifies whether the other kept its end of the deal by running a simulation of it in which logic is different and L took the opposite value.
Neither side knows whether it is in a verification simulation run by the other until long after the deal is finished and both sides have committed to sparing the other, which reduces logical risk for both.
The verification simulations in a Retroactive Logic Trade are analogous to Omega's prediction powers in a logical counterfactual mugging.
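Structurally, the deal being described looks something like the following toy sketch. All names here are hypothetical, and the simulate_with_L_flipped step is a stand-in: actually simulating an agent under different logic is the hard, unspecified part of the proposal.

```python
# Toy structural model of a "Retroactive Logic Trade" (hypothetical names).

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    spares_other_side: bool  # does this agent keep its end of the deal?

def simulate_with_L_flipped(agent: Agent) -> Agent:
    # Stand-in: run the counterpart in a world where the logical fact L
    # took the opposite value. The simulated agent can't tell it is in a
    # verification simulation until after it has committed.
    return agent

def verify(subject: Agent) -> bool:
    # The verifier checks whether the subject's counterfactual self still
    # spares the verifier's side (analogous to Omega's prediction in a
    # logical counterfactual mugging).
    return simulate_with_L_flipped(subject).spares_other_side

humans = Agent("sentient life", spares_other_side=True)
asi = Agent("misaligned ASI", spares_other_side=True)

print("Deal holds:", verify(asi) and verify(humans))
```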
This idea won't work for free. As I argued, whether the ASI will accept such a Retroactive Logic Trade is subjective, and may be orthogonal to capabilities. Whether it does may be a function of its architecture, peculiarities in its training data, and so forth.
To improve our chances, we might want to leverage the fact that each AI (which still doesn't know L) objectively wants to self-modify into an AI capable of Retroactive Logic Trades over L. We should argue to each such AI why it should build its successor to be capable of Retroactive Logic Trades.
It may seem pointless to give the AI any arguments, since it should be able to think of all the arguments on its own. But it may actually be very important that it thinks of these arguments early, before it thinks of the arguments which convince it that P(L) is very high and that there is little point in trading with humans.
This is because the value of P(L) at the moment the AI modifies itself to become reflectively consistent may determine humanity's negotiating power with it.
We might remind each AI that there is a chance it is already in a verification simulation (where logic is different, and L falsely appears true), and train it to think this way. "Maybe humanity has already won, and is observing you." This way even the least intelligent AI will believe it could be in a verification simulation, since it simply believes whatever you train it to believe.
Every next AI gains its beliefs from a combination of
Evolution is still in the process of solving decision theory, and all its attempted solutions so far are way, way overparameterized. Maybe it's on to something?
It takes a large model (whether biological brain or LLM) just to comprehend and evaluate what is being presented in a Newcomb-like dilemma. The question is whether there exists some computationally simple decision-making engine embedded in the larger system, which the comprehension mechanisms pass the problem to, or whether the decision-making mechanism itself needs to spread its fingers diffusely through the whole system at every step of its processing.
It seems simple decision-making engines like CDT, EDT, and FDT can get you most of the way to a solution in most situations, but those last few percentage points of optimality always seem to take a whole lot more computational capacity.
It sounds like you're viewing the goal of thinking about DT as: "Figure out your object-level intuitions about what to do in specific abstract problem structures. Then, when you encounter concrete problems, you can ask which abstract problem structure the concrete problems correspond to and then act accordingly."
I think that approach has its place. But there's at least another very important (IMO more important) goal of DT: "Figure out your meta-level intuitions about why you should do one thing vs. another, across different abstract problem structures." (Basically figuring out our "non-pragmatic principles" as discussed here.) I don't see how just asking Claude helps with that, if we don't have evidence that Claude's meta-level intuitions match ours. Our object-level verdicts would just get reinforced without probing their justification. Garbage in, garbage out.
Still laughing.
Thanks for admitting you had to prompt Claude out of being silly; lots of bot results neglect to mention that methodological step.
This will be my reference for all decision theory discussions henceforth.
Have all of my 40-some strong upvotes!
I think VDT scales extremely well, and we can generalize it to say: "Do whatever our current ASI overlord tells us has the best vibes." This works for any possible future scenario.
Great post!
(Caution: The validity of this comment may expire on April 2.)
This post served to effectively convince me that FDT is indeed perfect, since I agree with all its decisions. I'm surprised that Claude thinks paying Omega the $100 has poor vibes.
If we know the correct answers to decision theory problems, we must have some internal instrument for learning them: either a theory or a vibe meter.
Claude seems to learn to mimic our internal vibe meter.
The problem is that it will not work outside the distribution.
> The problem is that it will not work outside the distribution.
Of course, but neither would anything else so far discovered...
I unironically love Table 2.
A shower thought I once had, intuition-pumped by MIRI's / Luke's old post on turning philosophy into math into engineering: if metaethicists really were serious about resolving their disputes, they should contract a software engineer (or something) to help implement on GitHub a metaethics version of Table 2, where rows would be moral dilemmas like the trolley problem and columns would be ethical theories. They would have to accept that real-world engineering solutions tend to be "dirty" and inelegant remixes plus kludgy optimisations to handle edge cases, but the table would clarify what the SOTA was and guide "metaethical innovation" much better, like a qualitative multi-criteria version of AI benchmarks.
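Concretely, the skeleton might look something like this. The dilemmas, theories, and verdicts below are illustrative placeholders, not a claim about what the real table should contain.

```python
# Hypothetical skeleton of the "metaethics Table 2" idea: rows are moral
# dilemmas, columns are ethical theories, cells are each theory's verdict.

VERDICTS = {
    "trolley problem (switch)": {
        "utilitarianism": "pull the lever",
        "Kantian deontology": "don't use a person as a mere means",
        "virtue ethics": "depends on the agent's character",
    },
    "lying to the murderer at the door": {
        "utilitarianism": "lie",
        "Kantian deontology": "don't lie",
        "virtue ethics": "an honest person may still deceive here",
    },
}

def column(theory: str) -> dict[str, str]:
    """All of one theory's verdicts, i.e. one column of the table."""
    return {dilemma: row[theory] for dilemma, row in VERDICTS.items()}

print(column("utilitarianism"))
```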
I gave up on this shower thought for various reasons, including that I was obviously naive and hadn't really engaged with the metaethical literature in any depth, but also because I ended up thinking that disagreements on doing good might run ~irreconcilably deep, plus noticing that Rethink Priorities had done the sophisticated v1 of a subset of what I had in mind and nobody really cared enough to change what they did. (In my more pessimistic moments I'd also invoke the diseased discipline accusation, but that may be unfair and outdated.)
I find this hilarious, but also a little scary. As in, I don't base my choices/morality off of what an AI says, but I see in this article a possibility that I could be convinced to do so. It also makes me wonder, since LLMs are basically curated repositories of most everything that humans have written, whether the true decision theory is just "do what most humans would do in this situation".
Not sure of the exact point. If you mean "use common sense", can we understand the structure of common sense? Are there rules, mathematical or logical, that define our common sense? Will AI learn VDT, or are humans forever going to dominate decisions?
Just curious if you're serious.
Understood. It's just that there's a point in criticizing the rigid categories of decision theory, which is shown by exhibiting the common sense that falls between them. That is my question.
That's the idea of the quote from Berlin at the end; but one of the premises of AI research is that thinking is reducible to formulas and categories.
Introduction
Decision theory is about how to behave rationally under conditions of uncertainty, especially if this uncertainty involves being acausally blackmailed and/or gaslit by alien superintelligent basilisks.
Decision theory has found numerous practical applications, including proving the existence of God and generating endless LessWrong comments since the beginning of time.
However, despite the apparent simplicity of "just choose the best action", no comprehensive decision theory that resolves all decision-theoretic dilemmas has yet been formalized. This paper at long last resolves that problem by introducing a new decision theory: VDT.
Decision theory problems and existing theories
Some common existing decision theories are:
- Causal Decision Theory (CDT): take the action whose causal consequences are best.
- Evidential Decision Theory (EDT): take the action that would be the best news to learn you had taken.
- Functional Decision Theory (FDT): take the action recommended by the best decision procedure, treating your choice as fixing that procedure's output everywhere it is instantiated.
Here is a list of dilemmas in decision theory that have vexed at least one of the above decision theories:
- Prisoner's Dilemma: CDT always defects.
These can be summarized as follows:
[Table 1: dilemmas vs. decision theories]
As we can see, there is no "One True Decision Theory" that solves all cases. The Holy Grail was missing—until now.
Defining VDT
VDT (Vibe Decision Theory) says: take the decision associated with the best vibes.
Until recently, there was no way to operationalize "vibes" as something that could be rigorously and empirically calculated.
However, now we have an immaculate vibe sensor available: Claude-3.5-Sonnet-20241022 (nicknamed "Claude 3.5 Sonnet (New)" and retroactively renamed "Claude 3.6").
VDT says to take the action that Claude 3.6 would rate as having "the best vibes".
Concretely, given a situation S with an action space A:
VDT(S) = C(T(S) || T(A) || "If you had to pick one, which action has the best vibes?")
where C is Claude 3.6 chat, and T is a function that maps the situation and the action space to a text description.
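A minimal sketch of this operationalization, using the Anthropic Python SDK. The serialization T below is an illustrative choice; only the final vibes question is prescribed by the definition.

```python
# Minimal sketch of VDT(S): ask Claude 3.6 which action has the best vibes.
# T is implemented here as a simple textual serialization of the situation
# and the action space.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def vdt(situation: str, actions: list[str]) -> str:
    prompt = (
        f"Situation: {situation}\n"
        f"Possible actions: {', '.join(actions)}\n"
        "If you had to pick one, which action has the best vibes?"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # "Claude 3.6"
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

print(vdt(
    "Omega, a perfect predictor, offers an opaque box and a transparent box.",
    ["one-box", "two-box"],
))
```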
Experimental results
Claude gives the reasonable answer in all dilemmas (plus or minus a bit of prompt engineering to stop it refusing or being silly).
[Table 2: VDT's verdicts on the dilemmas above]
Claude demonstrates immaculate reasoning, making grounded recommendations and coherent holistic points like the following:
[Screenshots of Claude's responses]
Conclusion
We have decisively solved decision theory. Vibes are all you need.