ozziegooen

I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.

ozziegooen's Comments

[Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting

You need to log in if you want to make predictions. You should be able to see others' predictions without logging in. (At least on Firefox and Chrome)

Note that the notebook interface is fairly new and still has some quirks that are getting worked out.

Existing work on creating terminology & names?

It looks interesting, but it seems pretty short and not really on naming, so my search shall continue. I may order a copy though. Thanks!

Existing work on creating terminology & names?

Thanks! I've looked at (2) a bit and some other work on Information Architecture.

I've found it interesting but kind of old-school; it seems to have been a big deal when web tree navigation was a big thing, and to have died down after. It also seems pretty applied, in that there isn't much connection to academic theory on how one could think about these classifications.

ozziegooen's Shortform

More Narrow Models of Credences

Epistemic Rigor
I'm sure this has been discussed elsewhere, including on LessWrong. I haven't spent much time investigating other thoughts on these specific lines. Links appreciated!

The current model of a classically rational agent assumes logical omniscience and precomputed credences over all possible statements.

This is really, really bizarre upon inspection.

First, "logical omniscience" is very difficult, as has been discussed (The Logical Induction paper goes into this).

Second, "all possible statements" includes statements from every complexity class we know of (from my understanding of complexity theory). "Credences over all possible statements" would easily require an infinite number of credences; even arbitrarily large amounts of computation would not be able to hold all of them.

Precomputation for things like this is typically a poor strategy, for this reason. The often-better strategy is to compute things on-demand.

A nicer definition could be something like:

A credence is the result of an [arbitrarily large] amount of computation being performed using a reasonable inference engine.

It should be quite clear that calculating credences from existing explicit knowledge is a very computationally intensive activity. The naive Bayesian way would be to start with one piece of knowledge, and then perform a Bayesian update on each subsequent piece. The pieces of knowledge can be prioritized according to heuristics, but even then, this would be a challenging process.
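As a rough sketch of that naive sequential-update approach (the evidence values, likelihoods, and prioritization heuristic here are all made up for illustration):

```python
# A toy sketch of the naive sequential-update approach described above.
# All numbers and the relevance heuristic are made up for illustration.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update of P(H) on a single piece of evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Each piece of knowledge: (relevance heuristic, P(E|H), P(E|not H)).
evidence = [
    (0.9, 0.8, 0.3),
    (0.5, 0.6, 0.5),
    (0.2, 0.7, 0.6),
]

credence = 0.5  # prior
# Heuristic: process the most relevant-seeming evidence first; in a real
# system one would stop whenever the computation budget runs out.
for _, p_e_h, p_e_not_h in sorted(evidence, key=lambda e: -e[0]):
    credence = update(credence, p_e_h, p_e_not_h)

print(f"Credence after updates: {credence:.3f}")
```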

I think I'd like to see specifications of credences that vary with computation or effort. Humans don't currently have efficient methods for using effort to improve our credences, in the way a computer or agent would be expected to.
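A minimal sketch of what an effort-varying credence could look like (the "world model" below is just a stand-in random simulation, not a proposal for how such an inference engine would actually work):

```python
import random

def credence(statement_holds, n_samples):
    """A toy 'anytime' credence: a Monte Carlo estimate whose precision
    improves, in expectation, as more computation (samples) is spent."""
    hits = sum(statement_holds(random.random()) for _ in range(n_samples))
    return hits / n_samples

# Stand-in world model: the statement "the sampled quantity exceeds 0.7".
statement = lambda x: x > 0.7

for effort in (10, 100, 10_000):
    print(effort, credence(statement, effort))
```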

Solomonoff's theory of Induction or Logical Induction could be relevant for the discussion of how to do this calculation.

ozziegooen's Shortform

Intervention dominance arguments for consequentialists

Global Health

There's a fair bit of resistance to long-term interventions from people focused on global poverty, but there are a few distinct things going on here. One is that there could be a disagreement on the use of discount rates for moral reasoning; a second is that the long-term interventions are much stranger.

No matter which is the case, however, I think that the idea of "donate as much as you can per year to global health interventions" seems unlikely to be ideal upon careful consideration.

For the last few years, GiveWell's cost-to-save-a-life estimates have seemed fairly steady. The S&P 500 has not been steady; it has gone up significantly.

Even if you were committed to purely giving to global health, you'd generally have been better off delaying. It seems quite possible that for every life you would have saved in 2010, you could have saved two or more by investing the money with a fairly typical strategy and donating it in 2020. (Arguably, leverage could have made this multiple much higher.) From what I understand, the one life saved in 2010 would likely not have resulted in one extra life-equivalent saved in 2020; the return per year was likely less than that of the stock market.
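The rough arithmetic behind this, with made-up numbers (the return rate and cost-per-life below are placeholders for illustration, not actual S&P 500 or GiveWell figures):

```python
# Back-of-the-envelope sketch of the "delay and invest" argument.
# The figures below are made-up placeholders, not actual GiveWell or
# S&P 500 numbers.

donation_2010 = 10_000       # dollars available to donate in 2010
cost_per_life = 3_500        # assumed (constant) cost to save a life
annual_return = 0.10         # assumed average annual return, 2010-2020

lives_if_donated_2010 = donation_2010 / cost_per_life
lives_if_invested_until_2020 = (
    donation_2010 * (1 + annual_return) ** 10 / cost_per_life
)

print(lives_if_donated_2010)          # ~2.9 lives
print(lives_if_invested_until_2020)   # ~7.4 lives, roughly 2.6x as many
```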

One could of course say something like, "My discount rate is over 3-5% per year, so that outweighs this benefit." But if that were true, it seems likely that the opposite strategy would have worked: one could have borrowed a lot of money in 2010, donated it, and then spent the next 10 years paying it back.

Thus, it would be a convenient coincidence if one's enlightened preferences suggested neither investing for long periods nor borrowing.

EA Saving

One obvious counter to immediate donations would be to suggest that the EA community financially invests money, perhaps with leverage.

While it is difficult to tell whether other interventions are better, it can be simpler to ask whether they are dominant; in this case, that means they predictably increase EA-controlled assets at a rate higher than financial investments would.

A good metaphor could be the finances of cities. Hypothetically, cities could invest much of their earnings near-indefinitely, or at least for very long periods, but in practice this typically isn't key to their strategies. Often they can do quite well by investing in themselves. For instance, core infrastructure can be expensive but predictably lead to significant growth in city revenue. Often these strategies are so effective that cities issue bonds in order to pay for more of this kind of work.

In our case, there could be interventions that are obviously dominant over financial investment in a similar way. An obvious one would be education: if it were clear that giving or lending someone money for their education would lead to predictable donations later, that could dominate more generic investment strategies. Many other kinds of community growth or value promotion could also fit into this kind of analysis. Relatedly, if there were enough of these strategies available, it could make sense to take out loans in order to pursue them further.

What about a non-EA growth opportunity? Say, "vastly improving scientific progress in one specific area." This could be dominant (over investment, for EA purposes) if it would predictably help EA purposes more than the investment returns would. This seems possible. For instance, perhaps a $10mil donation to life extension research[1] could predictably increase $100mil of EA donations by 1% per year, starting in a few years.
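To make the dominance comparison concrete, here is one reading of that hypothetical, with an assumed baseline return (this ignores timing, compounding, and the fact that invested principal is retained):

```python
# Sketch of the dominance comparison for the hypothetical above: does a
# $10mil donation that raises $100mil/yr of EA donations by 1% beat simply
# investing the $10mil? The baseline return rate is an assumption.

donation = 10_000_000
ea_donations_per_year = 100_000_000
boost = 0.01                  # 1% increase in those donations per year
baseline_return = 0.07        # assumed return on simply investing the money

extra_donations_per_year = ea_donations_per_year * boost   # $1.0M per year
investment_income_per_year = donation * baseline_return    # $0.7M per year

# Naive per-year flow comparison: the intervention generates more
# EA-controlled resources per year than investing the same money would.
print(extra_donations_per_year > investment_income_per_year)  # True
```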

One trick with these strategies is that many would fall into the bucket of "things a generic wealthy group could do to increase its wealth," which is mediocre because we should expect that type of thing to be well-funded already. We may also want interventions that differentially change wealth amounts.

Somewhat sadly, this seems to suggest that some of the resulting interventions may not be "positive sum" for all relevant stakeholders. Many of the interventions that are positive-sum with respect to other powerful interests may already be funded, so the remaining ones could be relatively neutral or zero-sum for other groups.

[1] I'm just using life extension because the argument would be simple, not because I believe it could hold. I think it would be quite tricky to find great options here, as is evidenced by the fact that other very rich or powerful actors would have similar motivations.

ozziegooen's Shortform

Update: After I wrote this shortform, I did more investigation into Pragmatics and realized most of this was better expressed there.

2018 Review: Voting Results!

Interesting. From the data, it looks like there's a decent linear correlation up to around 150 Karma or so, and then after that the correlation looks more nebulous.
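A sketch of how one could check this (the file name and column names are hypothetical; I'm just illustrating the shape of the analysis):

```python
# Hypothetical sketch: correlate review-vote totals with each post's original
# karma. "review_results.csv" and its columns are made up for illustration.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("review_results.csv")  # assumed columns: karma, review_votes

# Overall rank correlation between the review ordering and original karma.
rho, _ = spearmanr(df["karma"], df["review_votes"])

# The "roughly linear up to ~150 karma, more nebulous after" impression.
low = df[df["karma"] <= 150]
high = df[df["karma"] > 150]
r_low, _ = pearsonr(low["karma"], low["review_votes"])
r_high, _ = pearsonr(high["karma"], high["review_votes"])

print(rho, r_low, r_high)
```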

2018 Review: Voting Results!

I'm quite curious how this ordering correlated with the original LessWrong Karma of each post, if that analysis hasn't been done yet. Perhaps I'd be more curious to better understand what a great ordering would be. I feel like there are multiple factors taken into account when voting, and it's also quite possible that the userbase represents multiple clusters that would have distinct preferences.

ozziegooen's Shortform

One nice thing about cases where the interpretations matter is that interpretations are often easier to measure than intent (at least for public figures). Authors can hide or lie about their intent, or simply never reveal it. Interpretations can be measured using surveys.

ozziegooen's Shortform

It seems like there are a few distinct kinds of questions here.

  1. You are trying to estimate the EV of a document.
    Here you want to understand the expected and actual interpretations of the document. The intention only matters insofar as it affects the interpretations.

  2. You are trying to understand the document.
    Example: You're reading a book on probability to understand probability.
    Here the main thing to understand is probably the author's intent. Understanding the interpretations and misinterpretations of others is mainly useful so that you can understand the intent better.

  3. You are trying to decide if you (or someone else) should read the work of an author.
    Here you would ideally understand the correctness of the interpretations of the document, rather than of the intention. Why? Because you will also be interpreting it, and you are likely somewhere in the range of people who have interpreted it. For example, suppose you are told, "This book is apparently pretty interesting, but every single person who has attempted to read it, besides one, couldn't get anywhere with it after spending many months trying," or worse, "This author is actually quite clever, but the vast majority of people who read their work misunderstand it in profound ways." Then you should probably not make an attempt, unless you are highly confident that you are much better than the mentioned readers.
