Isnasene's Comments

Matt Goldenberg's Short Form Feed
And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that they feel like should be their tribe, but they really don't feel a close connection to most people in it, and feel alienated as a result.

As someone who has considered making the Pilgrimage To The Bay for precisely that reason and as someone who decided against it partly due to that particular concern, I thank you for giving me a data-point on it.

Being a rationalist in the real world can be hard. The set of people who actually worry about saving the world, understanding their own minds and connecting with others is pretty small. In my bubble at least, picking a random hobby, incidentally becoming friends with someone there, incidentally getting slammed, and incidentally falling into an impromptu conversation has been the best-performing strategy so far in terms of success per opportunity-cost. As a result, from the outside, a rationalist community that cares about all these things looks like a fantastical, life-changing ideal.

But, from the outside view, all the people I've seen who've aggressively targeted those ideals have gotten crushed. So I've adopted a strategy of Not Doing That.

(pssst: this doesn't just apply to the rationalist community! it applies to any community oriented around values disproportionately held by individuals who have been disenfranchised by broader society in any way! there are a lot of implications here and they're all mildly depressing!)

Predictors exist: CDT going bonkers... forever

Can you clarify what you mean by "successfully formalised"? I'm not sure if I can answer that question but I can say the following:

Stanford's encyclopedia has a discussion of ratifiability dating back to the 1960s, and by the 1980s it had been applied to both EDT and CDT (which I'd expect, given that constraints on having an accurate world model should be independent of decision theory). This gives me confidence that it's not just a random Less Wrong thing.

Abram Demski from MIRI has a whole sequence on when CDT=EDT which leverages ratifiability as a sub-assumption. This gives me confidence that ratifiability is actually onto something (the Less Wrong stamp of approval is important!)

Whether any of this means that it's been "successfully formalised", I can't really say. From the outside-view POV, I literally did not know about the conventional version of CDT until yesterday. Thus, I do not really view myself as someone currently capable of verifying the extent to which a decision theory has been successfully formalised. Still, I consider this version of CDT old enough historically and well-enough-discussed on Less Wrong by Known Smart People that I have high confidence in it.


Predictors exist: CDT going bonkers... forever

Having done some research, I've found that the thing I was actually pointing to was ratifiability, along with the stance that any reasonable separation of world-modeling and decision-selection should put ratifiability in the former rather than the latter. This specific claim isn't new. From "Regret and Instability in causal decision theory":

Second, while I agree that deliberative equilibrium is central to rational decision making, I disagree with Arntzenius that CDT needs to be amended in any way to make it appropriately deliberational. In cases like Murder Lesion a deliberational perspective is forced on us by what CDT says. It says this: A rational agent should base her decisions on her best information about the outcomes her acts are likely to causally promote, and she should ignore information about what her acts merely indicate. In other words, as I have argued, the theory asks agents to conform to Full Information, which requires them to reason themselves into a state of equilibrium before they act. The deliberational perspective is thus already a part of CDT.

However, it's clear to me now that you were discussing an older, more conventional, version of CDT[1] which does not have that property. With respect to that version, the thought-experiment goes through but, with respect to the version I believe to be sensible, it doesn't[2].

[1] I'm actually kind of surprised that the conventional version of CDT is that dumb -- and I had to check a bunch of papers to verify that this was actually happening. Maybe if my memory had complied at the time, it would've flagged that you were distinguishing CDT from EDT here, based on past LessWrong articles I've read like CDT=EDT. But this wasn't meant to be, so I didn't notice you were talking about something different.

[2] I am now confident it does not apply to the thing I'm referring to -- the linked paper brings up "Death in Damascus" specifically as a place where ratifiable CDT does not fail.

Mary Chernyshenko's Shortform

This reminds me a little bit of the posts on anti-memes. There's a way in which people are constantly updating their worldviews based on personal experience that

  • is useless in discussion because people tend not to update on other people's personal experience over their own,
  • is personally risky in adversarial contexts because personal information facilitates manipulation,
  • is socially costly because the personal experience that people tend to update on is usually the kind of emotionally intense stuff that is viewed as inappropriate in ordinary conversation.

And this means that there are a lot of ideas and worldviews produced by The Statistics which are never discussed or directly addressed in polite society. Instead, these emerge indirectly through particular beliefs which rely on arguments that obfuscate the reality.

Not only is this hard to avoid on a civilizational level; it's hard to avoid on a personal level: rational agents will reach inaccurate conclusions in adversarial (i.e. unlucky) environments.

Underappreciated points about utility functions (of both sorts)

Thanks for the reply. I re-read your post and your post on Savage's proof and you're right on all counts. For some reason, it didn't actually click for me that P7 was introduced to address unbounded utility functions and boundedness was a consequence of taking the axioms to their logical conclusion.

Underappreciated points about utility functions (of both sorts)

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]. As a result, I went on a tangent about infinity to begin exploring whether my modified notion of a utility function would break in ways that regular ones wouldn't.

Why should one believe that modifying the idea of a utility function would result in something that is meaningful about preferences, without any sort of theorem to say that one's preferences must be of this form?

I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need

  • an axiom describing what it means for one infinite wager to be "strictly better" than another.
  • an axiom describing what kinds of infinite wagers it is rational to be indifferent towards.

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function. If no such system exists, that'd be interesting too. In any case, whatever happens will tell us more about either the structure our preferences should follow or the structure our rationality-axioms should follow (if we cannot find a system).
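
For concreteness, here is one way the first of those axioms might be written down -- purely my own illustrative sketch of a dominance-style condition (the symbols A, B, u, S and P are stand-ins I'm introducing here, not anything from the earlier discussion):

```latex
% Illustrative sketch only: a dominance-style reading of "infinite wager A is
% strictly better than infinite wager B", where S is the set of states, P is the
% agent's probability measure, and u assigns (possibly unbounded) values to outcomes.
\[
A \succ B \quad \text{whenever} \quad
u(A(s)) \ge u(B(s)) \ \text{for all } s \in S
\quad \text{and} \quad
P\bigl(\{\, s \in S : u(A(s)) > u(B(s)) \,\}\bigr) > 0 .
\]
```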

Of course, maybe my modification of the idea of a utility function turns out to show such a decisioning-system exists by construction. In this case, modifying the idea of a utility function would help tell me that my preferences should follow the structure of that modification as well.

Does that address the question?

[1] From your post:

We should say instead, preferences are not up for grabs -- utility functions merely encode these, remember. But if we're stating idealized preferences (including a moral theory), then these idealized preferences had better be consistent -- and not literally just consistent, but obeying rationality axioms to avoid stupid stuff. Which, as already discussed above, means they'll correspond to a bounded utility function.

Go F*** Someone

I had fun reading this post. But as someone who has a number of meaningful relationships but doesn't really bother dating, I was also confused about what to make of it.

Also, given that this is Rationalism-Land, it's worth keeping in mind that many people who don't date got there because they have an unusually low prior on the idea that they will find someone they can emotionally connect with. This prior is also often caused by painful experience that advice like "date more!" will tacitly remind them of.

Anyway, things that I agree with you on:

  • Dating is hard
  • Self-improvement is relatively easy compared to being emotionally vulnerable
  • I hate the saying "you do you." I emotionally interpret it as "here's a shovel; bury yourself with it"

Things I disagree with you on:

  • We aren't more lonely because of aggressively optimizing relationships for status rather than connection; we're more lonely because the opportunity cost of going on dates is unusually high. Many reasons for this:
    • It's easier than ever to unilaterally do cool things (e.g. learn guitar from the internet, buy arts and crafts off Amazon). And, as you noted, there's a cottage industry for making this as awesome as possible
    • It's easier than ever to defect from your local community and hang out with online people who "get" you
    • This causes a feedback loop that reduces the number of people looking to date, which increases the effort it takes to date, which further reduces the number of people looking to date. Everyone else is defecting, so I'm gonna defect too
  • I think the general conflation of "self-improvement" with "bragging about stuff on social media" is odd in the context you're discussing. People who aren't interested in the human connection of dates generally don't get much out of social media. At least in my bubble, people who are into self-improvement tend to do things like delete facebook.
  • If you're struggling to build financial capital, the goal is to keep doing that until you're financially secure. The goal very much isn't to refocus your efforts on going on hundreds of dates to learn how to make others happy.

Predictors exist: CDT going bonkers... forever

[Comment edited for clarity]

Since when does CDT include backtracking on noticing other people's predictive inconsistency?

I agree that CDT does not include backtracking on noticing other people's predictive inconsistency. My assumption is that decision theories (including CDT) take a world-map and output an action. I'm claiming that this post is conflating an error in constructing an accurate world-map with an error in the decision theory.

CDT cannot notice that Omega's prediction aligns with its hypothetical decision because Omega's prediction is causally "before" CDT's decision, so any causal decision graph cannot condition on it. This is why post-TDT decision theories are also called "acausal."

Here is a more explicit version of what I'm talking about. CDT makes a decision to act based on the expected value of its action. To produce such an action, we need to estimate an expected value. In the original post, there are two parts to this:

Part 1 (Building a World Model):

  • I believe that the predictor modeled my reasoning process and has made a prediction based on that model. This prediction happens before I actually instantiate my reasoning process
  • I believe this model to be accurate/quasi-accurate
  • I start unaware of what my causal reasoning process is so I have no idea what the predictor will do. In any case, the causal reasoning process must continue because I'm thinking.
  • As I think, I get more information about my causal reasoning process. Because I know that the predictor is modeling my reasoning process, this lets me update my prediction of the predictor's prediction.
  • Because the above step was part of my causal reasoning process and information about my causal reasoning process affects my model of the predictor's model of me, I must update on the above step as well
  • [The Dubious Step] Because I am modeling myself as CDT, I will make a statement intended to be the inverse of the predictor's prediction. Because I believe the predictor is modeling me, this requires me to invert my own prediction. That is to say, every update my causal reasoning process makes to my probabilities inverts the previous update
    • Note that this only works if I believe my reasoning process (but not necessarily the ultimate action) gives me information about the predictor's prediction.
  • The above leads to infinite regress

Part 2 (CDT)

  • Ask the world model what the odds are that the predictor said "one" or "zero"
  • Find the one with higher likelihood and invert it

I believe Part 1 fails and that this isn't the fault of CDT. For instance, imagine the above problem with zero stakes such that decision theory is irrelevant. If you ask any agent to give the inverse of its probabilities that Omega will say "one" or "zero" with the added information that Omega will perfectly predict those inverses and align with them, that agent won't be able to give you probabilities. Hence, the failure occurs in building a world model rather than in implementing a decision theory.
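
As a toy illustration of that Part 1 failure, here's a minimal sketch (my own, not anyone's canonical model; the update rule simply encodes "the predictor matches whatever I currently plan to say"). The estimate of the predictor's guess never converges, because each reasoning step inverts the estimate the previous step produced:

```python
# Toy sketch of the world-model failure: an agent tries to settle on
# P(predictor says "one"), while believing the predictor models its reasoning
# and will predict whatever the agent currently plans to say. Planning to
# invert the likelier prediction flips the estimate every step, so there is
# no fixed point and the world model never outputs a stable probability.
from typing import Optional


def estimate_prediction(initial_belief: float, max_steps: int = 20) -> Optional[float]:
    belief = initial_belief  # current estimate of P(predictor says "one")
    for _ in range(max_steps):
        planned_say = "zero" if belief >= 0.5 else "one"  # plan to invert the likelier prediction
        # The predictor is believed to track this very plan, so the prediction
        # now matches the plan -- which overturns the estimate it was based on.
        new_belief = 1.0 if planned_say == "one" else 0.0
        if abs(new_belief - belief) < 1e-9:  # a fixed point would be a usable probability
            return belief
        belief = new_belief
    return None  # no convergence: the world model can't hand CDT a probability


print(estimate_prediction(0.9))  # -> None, regardless of the initial estimate
```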



-------------------------------- Original version

Since when does CDT include backtracking on noticing other people's predictive inconsistency?

Ever since the process of updating a causal model of the world based on new information was considered an epistemic question outside the scope of decision theory.

To see how this is true, imagine the exact same situation as described in the post with zero stakes. Then ask any agent with any decision theory about the inverse of the prediction it expects the predictor to make. The answer will always be "I don't know", independent of decision theory. Ask that same agent if it can assign probabilities to the answers and it will say "I don't know; every time I try to come up with one, the answer reverses."

All I'm trying to do is compute the probability that the predictor will guess "one" or "zero" and failing. The output of failing here isn't "well, I guess I'll default to fifty-fifty so I should pick at random"[1], it's NaN.

Here's a causal explanation:

  • I believe the predictor modeled my reasoning process and has made a prediction based on that model.
  • I believe this model to be accurate/quasi-accurate
  • I start unaware of what my causal reasoning process is so I have no idea what the predictor will do. But my prediction of the predictor depends on my causal reasoning process
  • Because my causal reasoning process is contingent on my prediction and my prediction is contingent on my causal reasoning process, I end up in an infinite loop where my causal reasoning process cannot converge on an actual answer. Every time it tries, it just keeps updating.
  • I quit the game because my prediction is incomputable

Predictors exist: CDT going bonkers... forever

Decision theories map world models into actions. If you ever make a claim like "This decision-theory agent can never learn X and is therefore flawed", you're either misphrasing something or you're wrong. The capacity to learn a good world-model is outside the scope of what decision theory is[1]. In this case, I think you're wrong.

For example, suppose the CDT agent estimates the prediction will be "zero" with probability p, and "one" with probability 1-p. Then if p≥1/2, they can say "one", and have a probability p≥1/2 of winning, in their own view. If p<1/2, they can say "zero", and have a subjective probability 1−p>1/2 of winning.

This is not what a CDT agent would do. Here is what a CDT agent would do:

1. The CDT agent makes an initial estimate that the prediction will be "zero" with probability 0.9 and "one" with probability 0.1.

2. The CDT agent considers making the decision to say "one" but notices that Omega's prediction aligns with its actions.

3. Given that the CDT agent was just considering saying "one", the agent updates its initial estimate by reversing it. It declares "I planned on guessing one before but the last time I planned that, the predictor also guessed one. Therefore I will reverse and consider guessing zero."

4. Given that the CDT agent was just considering saying "zero", the agent updates its initial estimate by reversing it. It declares "I planned on guessing zero before but the last time I planned that, the predictor also guessed zero. Therefore I will reverse and consider guessing one."

5. The CDT agent realizes that, given the predictor's capabilities, its own prediction will be undefined

6. The CDT agent walks away, not wanting to waste the computational power
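
To make steps 2 through 6 concrete, here's a minimal sketch of that deliberation loop (my own illustration, not a standard formalization of CDT; the cycle check standing in for "realizes its prediction is undefined" is an assumption on my part):

```python
# Minimal sketch of the deliberation in steps 2-6. The agent keeps reversing its
# planned guess because it believes the predictor matches whatever it currently
# plans; once a planned guess recurs, it treats its prediction of the predictor
# as undefined and walks away.

def deliberate(initial_plan: str = "one") -> str:
    plan = initial_plan
    considered = set()
    while plan not in considered:
        considered.add(plan)
        # "The last time I planned that, the predictor also guessed it" -- so reverse.
        plan = "zero" if plan == "one" else "one"
    # Step 5: the planned guess cycles, so no stable prediction exists.
    # Step 6: walk away rather than waste more computation.
    return "walk away"


print(deliberate())  # -> "walk away"
```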

The longer the predictor remains accurate, the higher the CDT agent's prior becomes that its own thought process is causally affecting the estimate[2]. Since the CDT agent is embedded, it's impossible for the CDT agent to reason outside its thought process, and there's no use in it nonsensically refusing to leave the game.

Furthermore, any good decision-theorist knows that you should never go up against a Sicilian when death is on the line[3].

[1] This is not to say that world-modeling isn't relevant to evaluating a decision theory. But in this case, we should be fully discussing things that may/may not happen in the actual world we're in and picking the most appropriate decision theory for this one. Isolated thought experiments do not serve this purpose.

[2] Note that, in cases where this isn't true, the predictor should get worse over time. The predictor is trying to model the CDT agent's predictions (which depend on how the CDT agent's actions affect its thought-process) without accounting for the way the CDT agent changes as it makes decisions. As a result, a persevering CDT agent will ultimately beat the predictor here and gain infinite utility by playing the game forever.

[3] The Battle of Wits from The Princess Bride is isomorphic to the problem in this post.
