DragonGod

Comments

Are we in an AI overhang?

It has already gotten some spread. Michael Nielsen shared it on Twitter (126 likes and 29 RTs at the time of writing).

Are we in an AI overhang?

Thanks for this; I'll be sharing it on /r/slatestarcodex and Hacker News (and rationalist Discords too, if it comes up).

Are we in an AI overhang?

Maybe for the most efficient possible algorithm, but even that is not clear, and it's not clear we'll discover such algorithms anytime soon.

Using only current algorithms and architectures, a scaling jump of a few orders of magnitude seems doable.

A Critique of Functional Decision Theory

Typo:

But this seems arbitrary — why should the fact that S’s causal influence on whether there’s money in the opaque box or not go via another agent much such a big difference?

The bolded "much" should be "make", I think.

Is Rationalist Self-Improvement Real?

What’s more, the outcomes don’t scale smoothly with your level of skill. When rare, high leverage opportunities come around, being slightly more rational can make a huge difference. Bitcoin was one such opportunity; meeting my wife was another such one for me. I don’t know what the next one will be: an emerging technology startup? a political upheaval? cryonics? I know that the world is getting weirder faster, and the payouts to Rationality are going to increase commensurately.

I think COVID-19 has been another one. Many rats seem to have taken it seriously back in Jan/Feb.

Wei Dai made some money shorting the stock market.

Interpretations of "probability"

I don't think using likelihoods when publishing in journals is tractable.

  1. Where did your priors come from? What if other scientists have different priors? Justifying the chosen prior seems difficult.
  2. Where did your likelihood ratios come from? What if other scientists disagree?

P-values may have been a failed attempt at objectivity, but they're a better attempt than moving towards subjective probabilities (even though the latter are more correct).
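
As a toy illustration of point 2 (the experiment, numbers, and alternative hypotheses below are entirely made up; this is just a sketch of how the same data yield different likelihood ratios depending on which alternative a scientist picks, while the p-value is fixed once the null and test statistic are chosen):

```python
# Toy coin-flip example: 61 heads in 100 flips, null hypothesis p = 0.5.
# The p-value depends only on the null; the likelihood ratio also depends on
# which alternative hypothesis each scientist chooses to compare against.
from scipy.stats import binom

n, k = 100, 61

# One-sided p-value under the null: P(X >= 61 | p = 0.5)
p_value = binom.sf(k - 1, n, 0.5)

# Likelihood ratios against two different point alternatives.
lr_vs_06 = binom.pmf(k, n, 0.6) / binom.pmf(k, n, 0.5)
lr_vs_07 = binom.pmf(k, n, 0.7) / binom.pmf(k, n, 0.5)

print(f"p-value (one-sided)         : {p_value:.4f}")
print(f"likelihood ratio vs p = 0.6 : {lr_vs_06:.2f}")
print(f"likelihood ratio vs p = 0.7 : {lr_vs_07:.2f}")
```

Two scientists who agree on the data and the null can still report very different likelihood ratios (and, with different priors, different posteriors), which is the tractability worry above.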

Preface

This was very refreshing to read. I'm glad EY has realised that mocking silly ideas doesn't actually help: it makes adherents of the idea double down and become much less likely to listen to you, and it may also alienate some neutrals. This is particularly true for ideas which have gained currency, like the Abrahamic religions. I wasn't able to recommend The Sequences to Christian friends previously because of its antireligiosity; here's hoping this version will be better.

Embedded Agents

I don't understand why being an embedded agent makes Bayesian reasoning impossible. My intuition is that a hypothesis doesn't have to be perfectly correlated with reality to be useful. Furthermore, if you conceive of hypotheses as conjunctions of elementary hypotheses, then I see no reason why you cannot perform Bayesian reasoning of the form "hypothesis X is one of the constituents of the true hypothesis", even if the agent can't perfectly describe the true hypothesis.
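
A toy sketch of what I mean (the elementary hypotheses, priors, and likelihoods here are entirely made up; the point is only the marginalisation step):

```python
# Even if the agent cannot represent the full true hypothesis, it can update on
# the coarser claim "elementary hypothesis A is one of the constituents of the
# true hypothesis" by marginalising over candidate conjunctions it can represent.

# Candidate composite hypotheses: each is a conjunction (set) of elementary
# hypotheses, with a prior and a likelihood P(observed data | composite).
composites = {
    frozenset({"A", "B"}): {"prior": 0.4, "likelihood": 0.10},
    frozenset({"A", "C"}): {"prior": 0.3, "likelihood": 0.50},
    frozenset({"B", "C"}): {"prior": 0.3, "likelihood": 0.20},
}

# Posterior over composites via Bayes' rule.
evidence = sum(h["prior"] * h["likelihood"] for h in composites.values())
posteriors = {k: h["prior"] * h["likelihood"] / evidence
              for k, h in composites.items()}

# Posterior that "A" is a constituent of the true hypothesis:
# sum the posterior mass of every composite containing "A".
p_A = sum(p for k, p in posteriors.items() if "A" in k)
print(f"P(A is a constituent | data) = {p_A:.3f}")
```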

Also, "the agent is larger/smaller than the environment" is not very clear, so I think it would help if you would clarify what those terms mean.

The Tails Coming Apart As Metaphor For Life

Not AGI per se, but aligned and beneficial AGI. Like I said, I'm a moral nihilist/relativist and believe no objective morality exists. I do think we'll need a coherent moral system to fulfill our cosmic endowment via AGI.

The Tails Coming Apart As Metaphor For Life

My response to this was initially a tongue-in-cheek "I'm a moral nihilist, and there's no sense in which one moral system is intrinsically better than another, as morality is not a feature of the territory". However, I won't leave it at that, as solving morality is essential to the problem of creating aligned AI. There may be no objectively correct moral system, no intrinsically better moral system, nor any good moral system at all, but we still need a coherent moral framework from which to generate our AI's utility function if we want it to be aligned. So morality is important, and we do need to develop an acceptable solution to it.
