This is a very interesting speech covering many of the popular topics, and it's only 27 minutes long.

Be warned, though: his views are usually provocative, and especially here it seems like certain ideas are expressed to challenge one group, or several simultaneously, so I recommend it with caveats.

For example, his opening line is:

"I'm always reminded of a question a colleague of mine like to ask 'what is the antonym of diversity?', 'what word is the single antonym of diversity?'…university."

  • AI Risk is mentioned first at 19:40.
  • Bostrom's "The Vulnerable World Hypothesis" paper is grossly misquoted.
  • No object-level arguments against AI Risk are presented, nor is there any reference to object-level arguments made by anyone.

I'm still upvoting the post, because I find it useful to know how AI Risk (and we ourselves) are perceived.

It does seem like it attracted an unusually large proportion of downvotes, which is why it's sitting at just 8.

This is likely due to the animosity some folks feel towards Thiel.

There's also the Q&A session afterwards, which isn't nearly as interesting or provocative, but it does reflect what the average Oxford debating union member might be thinking. Or at least that's my understanding.

After hearing the first 20 minutes, it seems to me that Thiel imagines some kind of conspiracy of universities to suppress progress. Which reminds me of a quote: "Don’t let your schooling interfere with your education."

If your problem with universities is ideology, there is little you can do about that. But if your problem is that they do not teach enough science, or teach it wrong, the solution is straightforward -- provide better science lessons online. Kids these days spend most of their time online; build something like Khan Academy on steroids, maybe with some real-life incentives (the five best students each year win actual money), and kids will compete at becoming better scientists.
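To make the contest idea concrete, here is a minimal sketch in Python of how the yearly "top five win money" selection could work. Everything in it (the Student type, the point totals, where the points come from) is a hypothetical illustration, not a description of any existing platform:

```python
from dataclasses import dataclass

@dataclass
class Student:
    """A contestant on the hypothetical Khan-Academy-on-steroids platform."""
    name: str
    points: int  # hypothetical score from lessons, problem sets, experiments

def pick_winners(students: list[Student], prizes: int = 5) -> list[Student]:
    """Return the top-scoring students, i.e. this year's prize winners."""
    return sorted(students, key=lambda s: s.points, reverse=True)[:prizes]

# Tiny worked example: with a cohort of three and five prizes, everyone wins.
cohort = [Student("Ada", 420), Student("Ben", 310), Student("Cem", 555)]
for rank, winner in enumerate(pick_winners(cohort), start=1):
    print(f"{rank}. {winner.name} ({winner.points} points)")
```

The interesting design question is, of course, what earns points -- if points come only from textbook problems, the contest trains exactly the school-world skills criticized further down in this thread.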

Similarly, why can't we celebrate successful scientists? Dunno, why don't you organize a celebration for those who you think deserve it?

It's striking how none of the solutions involve more technology. The solution to climate change is not fusion reactors. The solution to nuclear weapons is not better anti-ballistic missile systems. The solution to AI... The solution to biotech is not accelerating the research even faster.

Disappointed that you skipped the technological solution to AI. Yudkowsky might have learned a thing or two.

(Two advertisements every 5 minutes -- the ultimate YouTube experience. Just kidding, it will get worse soon.)

He does raise the interesting point that strong taboos are usually hiding something. 

Robert B. Laughlin, the controversial professor at Stanford, was his example. It seems to have been personal, as one of his friends failed to receive a PhD under Laughlin due to the feuding after a very strong academic taboo was broken.

The implication is that Laughlin's students at the time were denied opportunities as revenge, since they were easier targets to take down than a Stanford professor who had just won the Nobel Prize in Physics.

If true, it's an understandable motivation to then hold a grudge, or to ponder conspiracies behind other apparently inexplicable phenomena.

I guess in that sense it does boil down to an ideological fight. 

Can someone investigate their colleagues? Is it permissible to air suspicions openly? Is it acceptable to claim other professors at the university are hucksters and fraudsters with just circumstantial evidence?

But if your problem is that they do not teach enough science, or teach it wrong, the solution is straightforward -- provide better science lessons online.

This relates to what Thiel said about fields that call themselves XY science. It's a tell that this isn't really the right thing. Solving textbook problems isn't science. Science is 0 -> 1. Science is about doing experiments and learning something useful from them that was previously unknown.

The problem is not that children aren't taught enough science in school, it's that they usually aren't taught any. Most children leave school without having done a single experiment in which they learned something useful that was previously unknown.

maybe with some real-life incentives (the five best students each year win actual money), and kids will compete at becoming better scientists.

Science inherently has real-world incentives. If you learn something useful through your experiments, that has real-world value. If there isn't any real-world value, that's a sign that it's not really science but just some game of pretending to do science.

Similarly, why can't we celebrate successful scientists? Dunno, why don't you organize a celebration for those who you think deserve it?

If Peter Thiel organized a celebration for a scientist, you would get a bunch of journalists thinking about how to write negative articles about that event. That's not going to be a purely positive event for the celebrated scientist.

maybe with some real-life incentives (the five best students each year win actual money), and kids will compete at becoming better scientists.

Science inherently has real-world incentives. If you learn something useful through your experiments, that has real-world value. If there isn't any real-world value, that's a sign that it's not really science but just some game of pretending to do science.

I'm not too sure about this. Aren't there scientists who claim to have done it for the beauty, the enjoyment of such beauty, etc.?

Much like pure mathematicians who talk about the beauty of their equations motivating them.

Enjoyment of beauty is a real-world value. We pay artists to produce beautiful things because they have value to us. 

But even there, when you listen to the string theorists talk about how their work reveals beauty that only they understand, there's the fraud question.

In any case, students almost never create scientific work that's beautiful enough that someone would engage with it without being bribed to do so. 

Science has to do with understanding and knowledge - practical applications are applied science, engineering, medicine, etc. It's up to those fields to come up with ideas about how to find real-world use for the science.

I didn't speak about "practical applications". If another scientist can build on the work you produce, you are also creating real-world value. In the absence of fraud, anything people are willing to pay for has real-world value.

Another scientist being able to build on the work some other scientist produces is different from what most people would call "real-world value," but I agree that it's important (even though I disagree that other people's ability to build on, buy, or do anything else with the work determines whether something is science or whether the science is worthwhile -- the science-status of a paper is determined purely by its content, not by what other people are or aren't capable of doing with it).

Even though I agree that a platonic ideal of a scientist would be able to build on any paper containing true science (perhaps given enough time for technological advancement, if that is necessary).

There are multiple things you can mean by the word science. In the context of Thiel's talk, science is the thing on which you can build progress. Science in that sentence depends on creating work that's valuable to other people. As long as the knowledge you gain is esoteric and stays in your own head, it's not science. Science is actually about exoteric knowledge that other people adopt.

I did link the Larry McEnerney talk for a reason. It gives more details about the notion of value that I'm pointing toward.

I don't think that's what anyone means by science, so I'm naturally suspicious towards someone using it in such a manner.

To refer to Duncan's latest post, do you seriously claim that I don't mean that by science (I'm certainly part of anyone)?

Or, for that matter, Larry McEnerney, who defines knowledge in the linked talk as something that's actually valuable to other people?

In our times of great stagnation, there are many people who don't think that science is about producing value. That position is part of the problem.

To refer to Duncan's latest post, do you seriously claim that I don't mean that by science (I'm certainly part of anyone)?

No. I mean the customary meaning of that phrase, which, I think, would be maybe something like "anyone except a few people."

It's certainly possible for you (or someone else) to redefine science, but then the criticism is that what-is-customarily-meant-by-science doesn't fulfill the criteria of what-the-speaker-redefined-the-word-science-to-mean, which might be true, but I don't see why it is important.

A better criticism would be that science that's not useful shouldn't be produced (rather than that it's not true science), but then the obvious problem is that the usefulness or uselessness of science can't always be judged in advance, and that it might take decades (or even centuries) for scientific knowledge to become useful, and humans trying to optimize for usefulness (rather than for science-quality-and-correctness) would curtail those scientific papers that have no obvious use today.

That would lead to being stuck in a sort of local maximum.

No. I mean the customary meaning of that phrase

Okay, so it's saying untrue things for rhetorical impact.

that it might take decades (or even centuries) for scientific knowledge to become useful, and humans trying to optimize for usefulness (rather than for science-quality-and-correctness) would curtail those scientific papers that have no obvious use today.

When you send a paper to a journal, that journal does ask itself: "Is this paper useful to the people who read this journal and does it help advance the field, or is it pointless for them to read it?" Given that this is how our scientific system works, following Larry McEnerney's advice about writing to actually create value for the readers of the journal does help produce better papers.

Thomas Kuhn distinguished scientific fields from fields that aren't scientific by the fact that scientific fields progress. If the changes in a field are due to fashion and not progress, it's not a science in Kuhn's sense; if the changes are progress, it is. For progress to happen, you need to solve problems that help the field progress.

In Viliam's proposal, having students train "science" with something Khan-Academy-like means having them train skills that are not about producing anything on which other people, or even they themselves, can build. I used the term real-world to contrast it with the world of school.

Okay, so it's saying untrue things for rhetorical impact.

It's saying something literally untrue (as the English language often works), but not for rhetorical impact -- simply because that's what the phrase means.

When you send a paper to a journal, that journal does ask itself: "Is this paper useful to the people who read this journal and does it help advance the field, or is it pointless for them to read it?"

If that was the case, the suggestion to change the process of producing science would be pointless, because science would already work that way.

In Viliam's proposal, having students train "science" with something Khan-Academy-like means having them train skills that are not about producing anything on which other people, or even they themselves, can build. I used the term real-world to contrast it with the world of school.

The understanding from textbooks (or Khan Academy) is very much needed to create something other people can build on. The reason there is no obvious pathway from the former to the latter is that science is extremely complex, with many layers of abstraction.

When you send a paper to a journal, that journal does ask itself: "Is this paper useful to the people who read this journal and does it help advance the field, or is it pointless for them to read it?"

If that was the case, the suggestion to change the process of producing science would be pointless, because science would already work that way.

Can you elaborate on this?

If only real-world-useful science was published in journals, it would be pointless to suggest that only real-world-useful science should be produced.

I am with him on this. The level of AI alarmism being put forward, especially in this community, is uncalled for. I was just reading Yudkowsky and Scott's chat exchange, and all the doom arguments I captured were of the form "what if?". How about we just return to the way we do engineering: keep building and innovating and dealing with negative side effects along the way?

To borrow Thiel's analogies, the same could also be said by proponents of further developments in nuclear weapons or 'gain-of-function' research of viruses... which raises the interesting question of whether he intended his speech to be partially self-negating one level further in.

AI risk is still at another level of concern. If you ask me to list what can go wrong with gain-of-function research, I can probably cite a lot of things. Now if you ask me what dangers LLMs can cause to humanity, I will have a much more innocuous list.

Current* large language models are not general intelligences. This community is mostly concerned with existential risk from future AIs, not the extremely minor risks from misuse of current AIs.

That's exactly my point. We don't even know what these future technologies will look like. Gain-of-function research has potential major negative effects right now, so I think it's reasonable to be cautious. AI is not currently at this point. It may potentially be in the future, but by then we will be better equipped to deal with it and assess the risk-benefit profile we are willing to put up with.

but by then we will be better equipped to deal with it

This is precisely the point with which others disagree; especially the implicit assertion that we will be sufficiently equipped to handle the problem rather than just "better".

That's still a theoretical problem; something we should consider but not overly update on, in my opinion. Besides, can you think of any technology that people foresaw would be developed, where specialists managed to successfully plan a framework before implementation? That wasn't the case even with nuclear bombs.

Besides, can you think of any technology that people foresaw would be developed, where specialists managed to successfully plan a framework before implementation?

That's part of the reason why Eliezer Yudkowsky thinks we're doomed and Robin Hanson thinks we shouldn't try to do much now. The difference between the two is take-off speed: for EY, we either solve alignment before the arrival of superintelligence (which is unlikely) or we are doomed; RH thinks we will have time to make alignment work during the arrival of superintelligence.

Well, Eliezer is the one making extraordinary claims, so I think I am justified in applying a high dose of skepticism before evidence of AI severely acting against humanity's best interest pops up.

Are you able to steelman the argument in favor of AI being an existential risk to humanity?

Well... Eliezer does think we're doomed, so that doesn't necessarily contradict his worldview.

Hm, I think this is way too confident of a take here. It is possible LLMs simply can't scale, but you need to avoid making such a highly controversial claim as a flat assertion in response to someone.

Added a word then.