That implies the ability to mix and match human chromosomes commercially is really far off

I agree that the issues of avoiding damage and having the correct epigenetics seem like huge open questions, and successfully switching a fruit fly chromosome isn't sufficient to settle them

Would this sequence be sufficient?

1. Switch a chromosome in a fruit fly
Success = normal fruit fly development

2a. Switch a chromosome in a rat
Success = normal rat development

2b. (in parallel, doesn't depend on 2a) Combine several chromosomes in a fruit fly to optimize aggressively for a particular trait
Success = fruit fly develops with a lot of the desired trait, but without serious negative consequences

3. Repeat 2b on a rat

4. Repeat 2a and 2b on a primate

Can you think of a faster way? This sequence seems like it would take a very long time to reach something commercially viable.

Maybe the test case is to delete one chromosome and insert another in a fruit fly. Fruit flies have only 4 pairs of chromosomes and are already used for genetic modification with CRISPR.

Goal = complete the insertion and still develop a normal fruit fly. I bet this is a fairly inexpensive experiment, within reach of many people on LessWrong

Chromosome selection seems like the most consequential idea here if it's possible

Is it possible now, even in animals? Can you isolate chromosomes without damaging them and assemble them into a viable nucleus?

Edit: also, strong-upvoted because I want to see more of this on LW. It's not directly about AI, but it massively affects the gameboard.

My model of "steering" the military is a little different from that. It's over a thousand partially autonomous headquarters, each of which has its own interests. The right hand usually doesn't know what the left is doing.

Of the thousand-plus headquarters, there are probably 10 that have the necessary legitimacy and can get the necessary resources. Winning over any one of those 10 is sufficient to get the results I described above.

In other words, you don't have to steer the whole ship. Just a small part of it. I bet that can be done in 6 months

I don't agree, because a world of misaligned AI is known to be really bad, whereas a world of AI successfully aligned by some opposing faction probably has a lot in common with your own values.

Extreme case: ISIS successfully builds the first aligned AI and locks in its values. This is bad, but it's way better than misaligned AI. ISIS wants to turn the world into an idealized 7th-century Middle East, which is a pretty nice place compared to much of human history. There's still a lot in common with your own values.

I bet that's true

But it doesn't seem sufficient to settle the issue. A world where aligning/slowing AI is a major US priority, which China sometimes supports in exchange for policy concessions, sounds like a massive improvement over today's world.

The theory of impact here is that there are a lot of policy actions that could slow down AI, but they're bottlenecked on legitimacy. The US military could provide that legitimacy.

They might also help alignment, if the right person is in charge and has a lot of resources. But even if 100% of their alignment research is noise that doesn't advance the field, military involvement could be a huge net positive.

So the real questions are:

  1. Is the theory of impact plausible?
  2. Are there big risks that mean this does more harm than good?

Because maximizing the geometric rate of return, irrespective of the risk of ruin, doesn't reflect most people's true preferences.

In the scenario above with the red and blue lines, full Kelly has a 9.3% chance of losing at least half your money, but 0.4 Kelly has only a 0.58% chance of an outcome at least that bad.
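For anyone who wants to sanity-check numbers like these, here's a minimal Monte Carlo sketch in Python. The bet parameters (a 60% win probability at even odds, 100 rounds) are illustrative assumptions, not the post's actual red-and-blue-line scenario, so the printed percentages won't match 9.3% and 0.58% exactly.

    # Monte Carlo sketch: chance of ending with at most half the starting
    # bankroll under full Kelly vs 0.4 Kelly. The parameters below (60%
    # win probability at even odds, 100 rounds) are illustrative
    # assumptions, not the post's actual scenario.
    import random

    def prob_half_loss(bet_fraction, p_win=0.6, rounds=100, trials=50_000):
        """Estimate P(final wealth <= half of starting wealth)."""
        bad = 0
        for _ in range(trials):
            wealth = 1.0
            for _ in range(rounds):
                stake = wealth * bet_fraction
                wealth += stake if random.random() < p_win else -stake
            if wealth <= 0.5:
                bad += 1
        return bad / trials

    full_kelly = 2 * 0.6 - 1  # f* = p - q = 0.2 for an even-odds bet
    print(f"full Kelly: {prob_half_loss(full_kelly):.2%}")
    print(f"0.4 Kelly:  {prob_half_loss(0.4 * full_kelly):.2%}")

The qualitative pattern should hold across parameter choices: fractional Kelly gives up some growth rate in exchange for a much thinner left tail.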

I agree. I think this basically resolves the issue. Once you've added a bunch of caveats:

  • The bet is mind-bogglingly favorable: more like the million-to-one case and less like the 51% doubling
  • The bet reflects the preferences of most of the world. It's not a unilateral action
  • You're very confident that the results will actually happen (we have good reason to believe that the new Earths will definitely be created)

Then it's actually fine to take the bet. At that point, our natural aversion is based on our inability to comprehend the vast scale of a million Earths. I still want to say no, but I'd probably be a yes at reflective equilibrium

Therefore... there's not much of a dilemma anymore

It doesn't matter who said an idea. I'd rather just consider each idea on its own merits

I don't think that solves it. A bounded utility function would stop you from doing infinite doublings, but it still doesn't prevent some finite number of doublings in the million-Earths case

That is, if the first round multiplies Earth a millionfold, then you just have to agree that a million Earths is at least twice as good as one Earth
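Spelling out the expected-utility arithmetic (assuming the standard form of the bet: a 51% chance of the millionfold outcome against a 49% chance of losing everything, with the utility of losing everything normalized to U(0) = 0):

    \[
    0.51 \, U(10^6\ \text{Earths}) + 0.49 \, U(0) \;>\; U(1\ \text{Earth})
    \quad\Longleftrightarrow\quad
    U(10^6\ \text{Earths}) \;>\; \frac{U(1\ \text{Earth})}{0.51} \;\approx\; 1.96 \, U(1\ \text{Earth})
    \]

So a bound on U doesn't block the single bet; it goes through as long as a million Earths is at least roughly twice as good as one.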
