hold_my_fish
Comments

The Industrial Explosion
hold_my_fish · 3mo

This analysis assumes that there hasn't already been mass deployment of generalist robots before an intelligence explosion, right? But such deployment might happen.

As a real-world example, consider the state of autonomous driving. If human-level AI were available today, Tesla's fleet would be fully autonomous--they are limited by AI, not volume of cars. Even for purely-autonomy-focused Waymo, their scale-up seems more limited by AI than by car production.

Drones are another example to consider. There are a ton of drones out there, of various types and for various purposes. If human-level AI existed, it could immediately be put to use controlling them.

So in both those cases, the hardware deployment is well ahead of the AI you'd ideally like to have to control it. The same might turn out to be true of the sort of generalist robot that could, if operated by human-level AI, build and operate a factory.

Winning the power to lose
hold_my_fish · 3mo

That just falls back on the common doomer assumption that "evil is optimal" (as Sutton put it). Sure, if evil is optimal and you have an entity that behaves optimally, it'll act in evil ways.

But there are good reasons to think that evil is not optimal in current conditions. At least as long as a Dyson sphere has not yet been constructed, there are massive gains available from positive-sum cooperation directed towards technological progress. In these conditions, negative-sum conflict is a stupid waste.

This view, that evil is not optimal, ties back into the continuation framing. After all, you can make a philosophical argument either way. But in the continuation framing, we can ask ourselves whether evil is empirically optimal for humans, which will suggest whether evil is optimal for non-biological descendants (since they continue humanity). And in fact we see evil losing a lot, and not coincidentally--WW2 went the way it did in part because the losing side was evil.

Winning the power to lose
hold_my_fish · 3mo

Which ones?

If an entity does stupid things, it's disfavored against competitors that don't do those stupid things, all else being equal. So it needs to adapt by ceasing the stupid behavior, or else it loses.

machine gods of unimaginable power could be among us in short order, with no evolutionary fairies quick enough to punish their destructive stupidity

Any assumption of the form "super-intelligent AI will take actions that are super-stupid" is dubious.

Winning the power to lose
hold_my_fish · 3mo

I'm afraid that I'm not following the point of the first line of argument. Yes, people sometimes do pointless destructive things for stupid reasons. Such behavior is penalized in the long term by selective pressures. More-intelligent descendants would be less likely to engage in such behavior, precisely because they are smarter.

Sure, but obviously this isn't an all-or-nothing proposition, with either biological or artificial descendants, and it's clear to me that most people aren't indifferent about where on that spectrum those descendants will end up. Do you disagree with that, or think that only "accels" are indifferent (and in some metaphysical sense "correct")?

I doubt that most people think about long-term descendants at all, honestly.

Winning the power to lose
hold_my_fish · 3mo

I think I agree with everything you wrote. Yes, I'd expect there to be multiple niches available in the future, but I'd expect our descendants to ultimately fill all of them, creating an ecosystem of intelligent life. There is a lot of time available for our descendants to diversify, so it'd be surprising if they didn't.

How much that diversification process resembles Darwinian evolution, I don't know. Natural selection still applies, since it's fundamentally the fact that the life we observe today disproportionately descends from past life that was effective at self-reproduction, and that's essentially tautological. But Darwinian evolution is undirected, whereas our descendants can intelligently direct their own evolution, and that could conceivably matter. I don't see why it would prevent diversification, though.

Edit:

Here are some thoughts in reply to your request for examples. Though it's impossible to know what the niches of the long-term future will be, one idea is that there could be an analogue to "plant" and "animal". A plant-type civilization would occupy a single stellar system, obtaining resources from it via Dyson sphere, mining, etc. An animal-type civilization could move from star to star, taking resources from the locals (which could be unpleasant for the locals, but not necessarily, as with bees pollinating flowers).

I'd expect both those civilizations to descend from ours, much like how crabs and trees both descend from LUCA.

Winning the power to lose
hold_my_fish · 3mo

Regarding wars, I don't think that wars in modern times have much to do with controlling the values of descendants. I'd guess that the main reason people fight defensive wars is to protect their loved ones and communities. And there really isn't any good reason to fight offensive wars (given current conditions--wasn't always true), so they are started by leaders who are deluded in some way.

Regarding Robin Hanson, I agree that his views are complicated (which is why I'd be hesitant to classify him as "accel"). The main point of his that I'm referring to is his observation that biological descendants would also have differing values from ours.

Winning the power to lose
hold_my_fish · 3mo

The short answer is yes to both, because of convergent evolution. I think of convergent evolution as the observation that two sufficiently flexible adaptive systems, when exposed to the same problems, will find similar solutions. Since our descendants, whether biological or something else, will be competing in the same environment, we should expect their behavior to be similar.

So, if assuming convergent evolution:

  • If valuing paperclip maximization is unlikely for biological descendants, then it's unlikely for non-biological descendants too. (That addresses your first question.)
  • In any case, we don't control the values of our descendants, so the continuation framing isn't conditioned on their values. (That addresses your second question.)

To be clear, that doesn't mean I see the long-term future as unchangeable. Two examples:

  • It still could be the case that we don't have any long-term descendants at all, for example due to catastrophic asteroid impact.
  • A decline scenario is also possible, in which our descendants are not flexible enough to respond to the incentive for interstellar colonization, after which civilization declines and eventually ceases to exist.
Winning the power to lose
hold_my_fish · 3mo

And similarly but worse if AI ends humanity—the ‘winning’ side won’t be any better off than the ‘losing side’.

I don't think most accels would agree with the framing here, of AI ending humanity. It is more common to think of AI as a continuation of humanity. This seems worth digging into, since it may be the key distinction between the accel and doomer worldviews.

Here are some examples of the accel way of thinking:

  • Hans Moravec uses the phrase "mind children".
  • The disagreement between Elon Musk and Larry Page that (in part) led to the creation of OpenAI involved Page considering digital life valid descendants and Musk disagreeing.
  • Robin Hanson (who I wouldn't call an accel exactly, but his descriptive worldview is accel in flavor), in his discussion with Scott Aaronson, often compared AI descendants to biological descendants.
  • Beff Jezos, though I cannot find the quote, at some point made a tweet to the effect of not having a preference between biological and non-biological descendants.

The two views (of AI either ending humanity or continuing humanity) then flavor all downstream thinking. If talking about AI replacing humanity, for example, an accel will tend to think of pleasant transition scenarios (analogous to generational transitions from parents to children) whereas a doomer will tend to think of unpleasant transition scenarios (analogous to violent revolutions or invasions).

As an accel-minded person myself, the continuation framing is so natural that I struggle to think how I would argue for it. Perhaps the best I can do is point again to Robin Hanson's discussion with Scott Aaronson, which at least makes the disagreement relatively more explicit.

How to Make Superbabies
hold_my_fish · 5mo

One thing we're worried about is cases where the haplotypes have the small additive effects rather than individual SNPs, and you get an unpredictable (potentially deleterious) effect if you edit to a rare haplotype even if all SNPs involved are common.

This is a point of uncertainty that bothered me when I was doing a similar analysis a while ago. GWAS data is possibly good enough to estimate causal effects of haplotypes, but that's not enough information to do single-base edits. To have reasonable confidence of getting the predicted effect, it'd be necessary to make all the edits required to transform the original haplotype into a different haplotype.

And unlike with distant variants where additive effects dominate, it'd make sense if non-additive effects are strong locally, since the variants are near each other. Whether this is actually true in reality is way beyond my knowledge, though.
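To make the worry concrete, here's a minimal toy sketch (purely hypothetical alleles and effect sizes, not real GWAS data): a per-SNP additive model will happily score any edited sequence, but if a single edit produces a haplotype that was never observed in the population, the haplotype-level estimates simply don't say what its effect is.

```python
# Toy illustration (hypothetical numbers): haplotype-level effect estimates
# don't pin down the effect of editing a single SNP within the haplotype.

# Two SNPs that only ever co-occur as these two common haplotypes.
haplotype_effects = {
    ("A", "B"): 0.3,  # common haplotype 1, estimated effect
    ("a", "b"): 0.0,  # common haplotype 2, estimated effect
}

# A purely additive per-SNP model fit to the same data: each SNP appears to
# carry half the difference, but that split is not identifiable from
# haplotype-level observations alone.
snp_effects = {"A": 0.15, "B": 0.15, "a": 0.0, "b": 0.0}

def predict_additive(haplotype):
    """Per-SNP additive prediction (assumes no local interactions)."""
    return sum(snp_effects[allele] for allele in haplotype)

def predict_haplotype(haplotype):
    """Haplotype-level prediction; undefined for unobserved (rare) haplotypes."""
    return haplotype_effects.get(haplotype)  # None if never observed

edited = ("a", "B")  # single-base edit producing a rare haplotype

print(predict_additive(edited))   # 0.15 -- a confident-looking number
print(predict_haplotype(edited))  # None -- the data are silent here
```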

Microsoft and OpenAI, stop telling chatbots to roleplay as AI
hold_my_fish · 2y

Something new and relevant: Claude 3's system prompt doesn't use the word "AI" or similar, only "assistant". I view this as a good move.

As an aside, my views have evolved somewhat on how chatbots should best identify themselves. It still doesn't make sense for ChatGPT to call itself "an AI language model", for the same reason that it doesn't make sense for a human to call themselves "a biological brain". It's somehow a category error. But using a fictional identification is not ideal for productivity contexts, either.

Posts

On the Loebner Silver Prize (a Turing test) · 2y
Microsoft and OpenAI, stop telling chatbots to roleplay as AI · 3y