Pongo

Comments

Public transmit metta

Any favored resources on metta?

Misalignment and misuse: whose values are manifest?

If we solve the problem normally thought of as "misalignment", it seems like this scenario would now go well. If we solve the problem normally thought of as "misuse", it seems like this scenario would now go well. This argues for continuing to use these categories, as fixing the problems they name is still sufficient to solve problems that do not cleanly fit in one bucket or another.

How can I bet on short timelines?

Sure!

I see people on Twitter, for example, doing things like having GPT-3 provide autocomplete or suggestions while they're writing, or doing the grunt work of producing web apps. Plausibly, figuring out how to get the most value out of future AI developments for improving productivity is important.

There's an issue that it's not very obvious exactly how to prepare for various AI tools in the future. One piece of work could be thinking more about how to flexibly prepare for AI tools with unknown capabilities, or predicting what the capabilities will be.

Other things that come to mind are:

  • Practice getting up to speed in new tool setups. If you are very bound to a setup that you like, you might have a hard time leveraging these advances as they come along. Alternatively, try and be sure you can extend your current workflow
  • Increase the attention you pay to new (AI) tools. Get used to trying them out, both for the reasons above and because it may be important to act fast in picking up very helpful new tools

To be clear, I'm not sure how much value there is in this direction. It is pretty plausible to me that AI tooling will be essential for competitive future productivity, but maybe there's not much of an opportunity to bet on that.

Three Open Problems in Aging

Now, it's still possible that accumulation of slow-turnover senescent cells could cause the increased production rate of fast-turnover senescent cells.

Reminds me of this paper, in which they replaced the blood of old rats with a neutral solution (not the blood of young rats), and found large rejuvenative effects. IIRC, they attributed it to knocking the old rats out of some sort of "senescent equilibrium".

How can I bet on short timelines?

If timelines are short, where does the remaining value live? Some fairly Babble-ish ideas:

  • Alignment-by-default
    • Both outer and inner alignment by default
      • With full alignment by default, there's nothing to do, I think! One could be an accelerationist, but the reduction in suffering and lives lost now doesn't seem large enough for the cost in probability of aligned AI
      • Possibly value could be lost if values aren't sufficiently cosmopolitan? One could try and promote cosmopolitan values
    • Inner alignment by default
      • Focus on tools for getting good estimates of human values, or an intent-aligned AI
        • Ought's work is a good example
        • Possibly trying to experiment with governance / elicitation structures, like quadratic voting
        • Also thinking about how to get good governance structures actually used
  • Acausal trade
    • In particular, expand the ideas in this post. (I understand Paul to be claiming he argues for tractability somewhere in that post, but I couldn't find it)
    • Work through the details of UDT games, and how we could effect proper acausal trade. Figure out how to get the relevant decision makers on board
  • Strong, fairly late institutional responses
    • Work on making, for example, states strong enough to (in coordination with each other) restrict or stop AI development

Other things that seem useful:

  • Learn the current hot topics in ML. If timelines are short, it's probably the case that AGI will use extensions of the current frontier
  • Invest in leveraging AI tools for direct work / getting those things that money cannot buy. This may be a little early, but if the takeoff is at all soft, maybe there are still >10 years left of 2020-level intellectual work before 2030 if you're using the right tools

Where do (did?) stable, cooperative institutions come from?

Even more so, I would love to see your unjustifiable stab-in-the-dark intuitions as to where the center of all this is.

Curious why this in particular? (Not trying to take issue with wanting this info; I agree that there's a lot of useful data here. It would be a thing I'd also want to ask for, but wouldn't have prioritised it.)

Where do (did?) stable, cooperative institutions come from?

Seems like you’re missing an end to the paragraph that starts “Related argument”

Kelly Bet or Update?

I liked your example of being uncertain of your probabilities. I note that if you are trying to make an even money bet with a friend (as this is a simple Schelling point), you should never Kelly bet if you have a discount rate of 2/3 or less on your naïve probabilities.

The maximum bet for a given discount occurs when the naïve probability is 1, and it crosses below 0 at a discount of 2/3.
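
For reference, here is the standard even-money Kelly fraction this kind of calculation starts from (a minimal statement that just fixes notation under the usual log-wealth setup; the particular discounting scheme used above isn't reproduced here). For win probability q, staking a fraction f of bankroll at even money:

\[
f^* \;=\; \arg\max_{0 \le f < 1} \big[\, q \ln(1+f) + (1-q)\ln(1-f) \,\big] \;=\; \max(2q - 1,\ 0),
\]

so under this baseline formula the optimal stake is zero whenever $q \le 1/2$.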

The Darwin Game - Rounds 0 to 10

In the pie chart in the Teams section, you can see "CooperateBot [Larks]" and "CooperateBot [Insub]"

Should we use qualifiers in speech?

Yeah, that's what my parenthetical was supposed to address

(particularly because there is large interpersonal variation in the strength of hedging a given qualifier is supposed to convey)

Perhaps you are able to get more reliable information out of such statements than I am.
