interstice

More Christiano, Cotra, and Yudkowsky on AI progress

The 'grand story' Eliezer is referring to here isn't anything like these, though. That story is more like "there is a gradual increase in capability in all species, on a slow timescale; eventually one of them crosses the threshold of being able to produce culture, which evolves on a faster timescale". Sort of the opposite of these species-parochialist tales.

Even if you're right, you're wrong

The first bullet point seems valid (for propositions with no empirical content).

Against the idea that physical limits are set in stone

Do you have reason to believe we will never collect surprising observations?

Sure, it's likely we'll get some surprising new observations before we nail down the theory of everything. The question is just how surprising, and whether they will let us upend physical limits. Agreed that there are a lot of interesting new things we could observe, but for most of your examples, I don't think we have good reason to expect to learn new things about fundamental physics from them.

Let me rephrase my objection. I think my main issue with your post can be found in this phrase near the beginning: you speak of the "rate at which we have constantly upended our own physical theories". I don't think that progress in fundamental physics is like technological progress or other things which happen at a steady rate per unit effort. It's more like exploiting a non-renewable resource: our ignorance of physical phenomena. So 400 years ago we basically started with a huge 'reservoir' of ignorance, which has gradually been drained as our theories improved, until now there are only a few small pools left that we can see. The steady progress we saw until recently came from slowly draining this reservoir; now that it's mostly gone, we no longer have reason to expect such steady progress to continue. It's possible that we'll find new reservoirs someday, but equally possible that we won't, so it's reasonable to assume that many of our current theories' physical limits will continue to apply indefinitely into the future.

Against the idea that physical limits are set in stone

There's a big difference between our current state of knowledge of physics and previous eras': we now completely understand the physics of everyday existence. In the past, there were many blatantly obvious unknowns -- e.g. Newtonian mechanics doesn't let you understand how chemistry arises from physics. Nowadays there's a lot less room for reality to surprise us with new observations -- indeed, theoretical physics has largely stalled out in recent years due to the infeasibility of obtaining observations our theories don't already predict. More generally, this seems to be what we should expect to happen in a lawful universe: after an initial period of discovery, we eventually uncover all the laws and are done. What you propose -- an endless string of new discoveries, each upending the last -- is incompatible with the universe having a finite description. It's not logically impossible that we live in such a universe, but scientific progress so far seems to support finite lawfulness.

What specifically is the computation -> qualia theory?

It seems to me like there should be an infinite number of ways to interpret atoms' vibrations as carrying information, or even as being transformed in a way that approximates the operation of an algorithm. I don't know whether any SIC campers actually worry about this, whether they have specific requirements on how to tell if atoms are running algorithms, how likely they think this is to happen in practice, etc.

People do indeed worry about this, leading to things like 'Solomonoff-weighted utilitarianism' that assign higher moral relevance to minds with short description lengths.
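As a rough sketch of how that weighting might be formalized (my notation, not a canonical statement of the view): if K(m) is the length of the shortest program that picks out mind m, moral weight scales like the Solomonoff prior,

w(m) \propto 2^{-K(m)}

so the contrived 'interpretations' of vibrating atoms, which take very long programs to specify, receive correspondingly negligible weight.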

[Book Review] "The Vital Question" by Nick Lane

Before seeing any evidence, we should indeed expect that life has high density in the universe; we just have enough data to rule that out. More generally, I think UDASSA is probably the best framework for approaching problems like this, and it would hold that, in situations where our existence is contingent on an anthropically-selected unlikely event, we should still expect that event to be as likely as possible while remaining consistent with the evidence. So 10^-40-likelihood origination events are more probable than 10^-400-likelihood events.
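To make the last claim concrete with a toy anthropic update (a simplification, not UDASSA's full machinery; numbers as above): suppose the evidence is equally compatible with the origination event having per-site probability p_1 = 10^{-40} or p_2 = 10^{-400}. Conditioning on our existence multiplies each hypothesis's prior by (roughly) the chance the event occurred somewhere:

\frac{P(p_1 \mid \text{we exist})}{P(p_2 \mid \text{we exist})} \approx \frac{P(p_1)}{P(p_2)} \cdot \frac{p_1}{p_2} = \frac{P(p_1)}{P(p_2)} \cdot 10^{360}

So unless the prior favors the rarer event by an enormous factor, the 10^-40 hypothesis dominates.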

This Can't Go On

There's some discussion of this in a followup post.

Player vs. Character: A Two-Level Model of Ethics

What seems off to me is the idea that the 'player' is some sort of super-powerful, incomprehensible Lovecraftian optimizer. I think it's more apt to think of it as more like a monkey, but a monkey which happens to share your body and have write access to the deepest patterns of your thought and feeling (see Steven Byrnes' posts for the best existing articulation of this view). It's just a monkey; its desires aren't totally alien, and I think it's quite possible for one's conscious mind to develop a reasonably good idea of what it wants. That the OP prefers to push the 'alien/Lovecraftian' framing is interesting, and perhaps indicates that they find what their monkey (and/or other people's monkeys) wants repulsive in some way.

I read “White Fragility” so you don’t have to (but maybe you should)

In rationalist circles, you might find out that you're being instrumentally or epistemically irrational in the course of a debate -- the norms of such a debate encourage you to rebut your opponent's points if you think they are being unfair. In contrast, the central thesis of this book is that white people disputing their racism is a mechanism for protecting white supremacy, and needs to be unlearned along with other cornerstones of collective epistemology such as the notion of objective knowledge. So under the epistemic conditions promoted by this book, I expect "found out about being racist" to roughly translate to "was told you were racist".

The Codex Skeptic FAQ

I think those advancements could be evidence for both, depending on the details of how the nootropics work, etc. But it still seems worth distinguishing the two things conceptually. My objection in both cases is that only a small part of the evidence for the first comes from the causal impact of the second: i.e., if Codex gave crazy huge productivity improvements, I would consider that evidence for full code automation (FCA) coming soon, but that's mostly because it suggests that Codex can likely be improved to the point of FCA, not because it will make OpenAI's programmers more productive.
