ozziegooen

I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.

ozziegooen's Comments

ozziegooen's Shortform

One nice thing about cases where the interpretations matter is that the interpretations are often easier to measure than intent (at least for public figures). Authors can hide or lie about their intent, or simply never reveal it. Interpretations can be measured with surveys.

ozziegooen's Shortform

It seems like there are a few distinct kinds of questions here.

  1. You are trying to estimate the EV of a document.
    Here you want to understand the expected and actual interpretations of the document. The intention only matters insofar as it affects those interpretations.

  2. You are trying to understand the document.
    Example: You're reading a book on probability to understand probability.
    Here the main thing to understand is probably the author's intent. Understanding the interpretations and misinterpretations of others is mainly useful so that you can understand the intent better.

  3. You are trying to decide if you (or someone else) should read the work of an author.
    Here you would ideally understand the correctness of the interpretations of the document, rather than of the intention. Why? Because you will also be interpreting it, and you are likely somewhere in the range of people who have interpreted it. For example, if you are told, "This book is apparently pretty interesting, but every single person who has attempted to read it, besides one, couldn't get anywhere with it after spending many months trying," or worse, "This author is actually quite clever, but the vast majority of people who read their work misunderstand it in profound ways," you should probably not make the attempt, unless you are highly confident that you are much better than the mentioned readers.

ozziegooen's Shortform

Communication should be judged for expected value, not intention (by consequentialists)

TLDR: When trying to understand the value of information, understanding the public interpretations of that information could matter more than understanding the author's intent. When trying to understand the information for other purposes (like, reading a math paper to understand math), this does not apply.

If I were to scream "FIRE!" in a crowded theater, it could cause a lot of damage, even if my intention were completely unrelated. Perhaps I was responding to a devious friend who asked, "Would you like more popcorn? If yes, shout 'FIRE!'"

Not all speech is protected by the First Amendment, in part because speech can be used to cause expected harm.

One common defense of incorrect predictions is to claim that the popular interpretations weren't the author's intentions. "When I said that the US would fall if X were elected, I didn't mean it would literally end. I meant more that..." These kinds of statements were discussed at length in Expert Political Judgment.

But this defense rests on the idea that communicators should be judged on intention rather than on expected outcomes. In those cases, it was often clear that many people interpreted these "experts" as making fairly specific claims that were later rejected by their authors. I'm sure that much of this could have been predicted. The "experts" certainly didn't seem to be going out of their way to make their after-the-outcome interpretations clear before the outcome.

I think it's clear that the intention-interpretation distinction is considered highly important by a lot of people, so much so that some argue that interpretations, even predictable ones, matter less in decision-making around speech acts than intentions do. E.g., "The important thing is to say what you truly feel; don't worry about how it will be understood."

But for a consequentialist, this distinction isn't particularly relevant. Speech acts are judged on expected value (and thus expected interpretations), because all acts are judged on expected value. Similarly, I think many consequentialists would claim that there's nothing metaphysically unique about communication as opposed to other actions one could take in the world.

Some potential implications:

  1. Much of communicating online should probably be about developing empathy for the reader base, and a sense for what readers will misinterpret, especially if such misinterpretation is common (which it seems to be).
  2. Analyses of the interpretations of communication could be more important than analyses of the intentions of communication; i.e., understanding authors and artistic works in large part by understanding their effects on their viewers.
  3. It could be very reasonable to attempt to map non-probabilistic forecasts into probabilistic statements based on how readers interpret them. Then these forecasts can be scored using scoring rules, just as regular probabilistic statements are. This would go something like, "I'm sure that Bernie Sanders will be elected" -> "The readers of that statement seem to think the author is applying probability 90-95% to the statement 'Bernie Sanders will win'" -> a Brier/log score (see the sketch below).
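
A minimal sketch of that last scoring step, under my own illustrative assumptions (a single reader-interpreted probability of 0.925, the midpoint of the 90-95% range, and a made-up outcome; none of these numbers come from the original statement):

```python
import math

def brier_score(p: float, outcome: int) -> float:
    """Squared error between the forecast probability and the 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

def log_score(p: float, outcome: int) -> float:
    """Log of the probability assigned to the realized outcome (closer to 0 is better)."""
    return math.log(p if outcome == 1 else 1 - p)

# "I'm sure that Bernie Sanders will be elected", read by surveyed readers as a
# roughly 90-95% claim -> take the midpoint as the implied forecast probability.
implied_p = 0.925
outcome = 0  # suppose the forecasted event did not happen

print(brier_score(implied_p, outcome))  # ~0.856
print(log_score(implied_p, outcome))    # ~-2.59
```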

Note: Please do not interpret this statement as attempting to say anything about censorship. Censorship is a whole different topic with distinct costs and benefits.

Go F*** Someone

Thanks for the response!

For what it's worth, I predict that this would have gotten more upvotes here, at least, with different language, though I realize it was not made primarily for LW.

my personal opinion is that LW shouldn't cater to people who form opinions on things before reading them and we should discourage them from hanging out here.

I think this is a complicated issue. I could appreciate where it's coming from and could definitely imagine things going too far in either direction. I imagine that both of us would agree it's a complicated issue, and that there's probably some line somewhere, though we may of course disagree on where specifically it is.

A literal-ish interpretation of your phrase is difficult for me to accept. I feel like I start with priors on things all the time. For example, knowing whether an article comes from The NYTimes or The Daily Stormer is, by itself, what seems like useful data. There's a ton of stuff online I choose not to read because the source, or a quick read of the headline, suggests I can't trust it.

Go F*** Someone

A bit more thinking:

I would guess that one reason why you had a strong reaction, and/or why several people upvoted you so quickly, was because you/they were worried that my post would be understood by some as "censorship=good" or "LessWrong needs way more policing".

If so, I think that's a great point! It's similar to my original point!

Things get misunderstood all the time.

I tried my best to make my post understandable. I tried my best to condition it so that people wouldn't misinterpret or overinterpret it. But then my post being misunderstood (from what I can tell, unless I'm seriously misunderstanding Ben here) literally happened within 30 minutes.

My attempt provably failed. I'll try harder next time.

Go F*** Someone

Did you interpret me to say, "One should be sure that zero readers will feel offended"? I think that would clearly be incorrect. My point was that there are cases where one can predict that a bunch of readers will be offended, and where the cost of changing things so that they aren't is relatively small.

For instance, one could make lots of points that use alarmist language to poison the well, where the language is technically correct, but very predictably misunderstood.

I think there is obviously some line. I imagine you would as well. It's not clear to me where that line is. I was trying to flag that I think some of the language in this post may have crossed it.

Apologies if my phrasing was misunderstood. I'll try changing that to be more precise.

Go F*** Someone

I think I'm fairly uncomfortable with some of the language in this post being on LessWrong as such. It seems from the other comments that some people find some of the information useful, which is a positive signal. However, there are 36 votes on this with a net of +12, which is a pretty mixed signal, and my impression is that few of the downvoters left comments explaining their reasoning.

I think with any intense language the issue isn't only "Is this effective language to convey the point without upsetting an ideal reader?" but also something like, "Given that there is a wide variety of readers, are we sufficiently sure that this won't needlessly offend or upset many of them, especially in ways that could easily be improved upon?"

I could imagine casual readers quickly looking at this and assuming it's related to the PUA community or similar groups that have some sketchy connotations.

This presents two challenges. First, anyone who makes this inference may also assume that other writers on LessWrong share the beliefs they think this kind of writing signals. Second, it may attract other writing that is bad in ways we definitely don't want.

I would suggest that, in the future, posts like this either don't use such dramatic language here, or at the very least are shared as link posts.

I'd be curious if others have takes on this issue; it's definitely possible my intuitions are off here.

ACDT: a hack-y acausal decision theory

Nice post! I found the diagrams particularly readable; it makes a lot of sense to me to include them for this kind of problem.

I'm not very well-read on this sort of work, so feel free to ignore any of the following.

The key question I have is about the correctness of this section:

In a sense, ACDT can be seen as anterior to CDT. How do we know that causality exists, and the rules it runs on? From our experience in the world. If we lived in a world where the Newcomb problem or the predictors exist problem were commonplace, then we'd have a different view of causality.

It might seem gratuitous and wrong to draw extra links coming out of your decision node - but it was also gratuitous and wrong to cut all the links that go into your decision node. Drawing these extra arrows undoes some of the damage, in a way that a CDT agent can understand (they don't understand things that cause their actions, but they do understand consequences of their actions).

I don't quite see why causality is this flexible and arbitrary. I haven't read Causality, but I think I get the gist.

It's definitely convenient here to be uncertain about causality. But it would be similarly convenient to have uncertainty about the correct decision theory. A similar formulation could involve a meta-decision-algorithm that tries different decision algorithms until one produces favorable outcomes (a toy sketch is below). Personally, I think it would be easier to convince me that acausal decision theory is correct than that a different causal structure is correct.
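
To make the meta-decision-algorithm idea concrete, here is a toy sketch of my own (not from the post): try candidate decision procedures against a simulated problem and keep whichever has produced the best outcomes. The policy names, payouts, and perfect-predictor assumption are all illustrative.

```python
def one_box(_round):
    return "one-box"

def two_box(_round):
    return "two-box"

def newcomb_payout(action):
    # Toy payouts with a perfect predictor: one-boxing yields the $1,000,000,
    # two-boxing yields only the transparent $1,000.
    return 1_000_000 if action == "one-box" else 1_000

def meta_decide(policies, trials=10):
    """Return the candidate policy with the highest average simulated payout."""
    averages = {
        policy.__name__: sum(newcomb_payout(policy(t)) for t in range(trials)) / trials
        for policy in policies
    }
    best = max(averages, key=averages.get)
    return best, averages

best, averages = meta_decide([one_box, two_box])
print(best, averages)  # one_box {'one_box': 1000000.0, 'two_box': 1000.0}
```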

Semi-related, one aspect of Newcomb's problem that has really confused me is the potential for Omega to set up scenarios that favor incorrect beliefs. It would be arbitrary to imagine that Omega would offer the $1,000 only if it could tell that one believes that "19 + 2 = 20". One could solve that by imagining that the participant should have uncertainty about what "19 + 2" is, trying out multiple options, and seeing which would produce the most favorable outcome.

Separately,

If it's encountered the Newcomb problem before, and tried to one-box and two-box a few times, then it knows that the second graph gives more accurate predictions

To be clear, I'd assume the agent would be smart enough to simulate this before actually trying it? The outcome seems decently apparent to me.

ozziegooen's Shortform

One question around the "Long Reflection", or around "What will AGI do?", is something like, "How bottlenecked will we be by scientific advances that we'll then need to spend significant resources on?"

I think some assumptions that this model typically holds are:

  1. There will be decision-relevant unknowns.
  2. Many decision-relevant unknowns will be EV-positive to work on.
  3. Of the decision-relevant unknowns that are EV-positive to work on, these will take between 1% and 99% of our time.

(3) seems quite uncertain to me in the steady state. The assumption makes an intuitive estimate spanning roughly two orders of magnitude, while the actual uncertainty seems much wider than that (a rough numerical sketch follows the list below). If that's the case, it would mean:

  1. Almost all possible experiments are either trivial (<0.01% of resources, in total), or not cost-effective.
  2. If some things are cost-effective and still expensive (taking over 1% of the AGI's lifespan), it's likely that they will take 100%+ of the time. Even if they would take 10^10% of the time, in expectation they could still be EV-positive to pursue. I wouldn't be surprised if there were one single optimal thing like this in the steady state. So this strategy would look something like, "Do all the easy things, then spend a huge amount of resources on one gigantic, but EV-high, challenge."
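
A rough numerical sketch of that point, under my own illustrative assumption (not from the original reasoning) that project costs, as a fraction of available resources, are log-uniformly distributed over 20 orders of magnitude:

```python
import math

lo_exp, hi_exp = -12, 8        # log10 of the smallest and largest cost fractions (illustrative)
total_span = hi_exp - lo_exp   # 20 orders of magnitude

# The "takes 1% to 99% of our time" band from assumption (3):
band_span = math.log10(0.99) - math.log10(0.01)  # roughly 2 orders of magnitude

print(band_span / total_span)                    # ~0.10: share of projects in the 1%-99% band
print((math.log10(0.01) - lo_exp) / total_span)  # ~0.50: share that is trivially cheap
print((hi_exp - math.log10(0.99)) / total_span)  # ~0.40: share that is enormous
```

Under this spread, only a small minority of cost-effective projects would land in the band assumption (3) describes; most would be either trivial or far larger than our available time.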

(This was inspired by a talk that Anders Sandberg gave)
