
Neuralink
Personal Blog

3

Elon Musk launches Neuralink, a venture to merge the human brain with AI

by chaosmage
28th Mar 2017
This is a linkpost for /r/discussion/lw/oss/elon_musk_launches_neuralink_a_venture_to_merge/
9 comments, sorted by top scoring
dogiv · 8y · 3

Does anybody think this will actually help with existential risk? I suspect the goal of "keeping up" or preventing irrelevance after the onset of AGI is pretty much a lost cause. But maybe if it makes people smarter it will help us solve the control problem in time.

The_Jaded_One · 8y · 0

It has been fairly standard LW wisdom for a long time that any kind of human augmentation is unhelpful for friendliness.

I think that we should be much less confident about this, and I welcome alternative efforts such as the neural lace.

username2 · 8y · 0

Yes, I think the entire concept of the AI x-risk scary idea (e.g. Clippy) is predicated on machines being orders of magnitude smarter in some ways than their human builders. If instead there is a smooth transition to increasingly more powerful human-augmented intelligence, then the transformative power of AI becomes evolutionary, not revolutionary. Existing power structures continue to remain in effect as we move into a posthuman future.

Of course there will be issues of access to augmentation technologies, bioethics panels, government regulation, etc. But these won't be existential risks.

[anonymous] · 8y · 0

I can see it as part of a research program.

Imagine that we understand the brain: we can replicate it in silicon, and we can functionally decompose it into problem-solving and motivational sections. With a neural interface we could connect a problem-solving part to our own motivational section, giving ourselves an external lobe (this could perhaps be done in a hacky, indirect way without a direct connection).

If this happens, there are two benefits for existential risk:

1) People will spend less money and time trying to create new agents.

2) When new agents do come about, we will be closer to parity with them in problem-solving capability.

Lumifer · 8y · 0

We can replicate it in silicon and we can functionally decompose it into problem-solving and motivational sections.

At which point the powers-that-be specify what goes in the motivation section, and it's game over, man.

[anonymous] · 8y · 0

It depends on how hard it is to specify content in the motivation section. You can read all of the FAI work as suggesting it is pretty hard. I think the path of least resistance is augmenting known motivation systems.

But yes, that is a possible crazy failure scenario. I think a small subset of humanity getting hold of the tech and monopolizing it to enhance only themselves is another, more likely failure scenario. It all depends on how the technology develops, which is probably still somewhat influenceable at the moment.

Lumifer · 8y · 0

It depends on how hard it is to specify content in the motivation section. You can read all of the FAI work as suggesting it is pretty hard.

It is hard to specify motivation for a god-like entity. It is pretty easy to specify motivation for slaves: "You will love Big Brother, you will experience debilitating anxiety and disgust at any thought of resistance, you will consider the most important thing in life to be fulfilling your quota of growing turnips, and approval from your supervisor will be the most pleasurable thing you ever feel."

Commander Zander · 8y · 0

I also think this project will be on a fairly slow timeline. Maybe the AGI connections are functionally just marketing, and the real benefit of this org will be for more mundane medical issues.

moridinamael · 8y · 3

Link doesn't work.
