“Our species is the only creative species, and it has only one creative instrument, the individual mind and spirit of man. Nothing was ever created by two men. There are no good collaborations, whether in music, in art, in poetry, in mathematics, in philosophy. Once the miracle of creation has taken place, the group can build and extend it, but the group never invents anything. The preciousness lies in the lonely mind of a man.” 

- John Steinbeck

“The Great Man theory of history may not be truly believable and Great Men not real but invented, but it may be true we need to believe the Great Man theory of history and would have to invent them if they were not real.” 

- Gwern


The Myth of the Lone Genius is a bullshit cliche and we would do well to stop parroting it to young people like it is some deep insight into the nature of innovation. It typically goes something like this - the view that breakthroughs come from Eureka moments made by geniuses toiling away in solitude is inaccurate; in reality, most revolutionary ideas, inventions, innovations, etc. come from lots of hard work, luck, and collaboration with others. 

Here is a good description of the myth from The Ape that Understood the Universe: How the Mind and Culture Evolve by psychologist Steve Stewart-Williams.

“We routinely attribute our species’ cultural achievements to lone-wolf geniuses – super-bright freaks of nature who invented science and technology for the rest of us. ... It's a myth because most ideas and most technologies come about not through Eureka moments of solitary geniuses but through the hard slog of large armies of individuals, each making—at best—a tiny step or two forward.”

The problem here is that the myth of the lone genius is itself a myth. History (ancient and recent) is full of geniuses who came up with a revolutionary idea largely on their own - that’s why the archetype even exists in the first place (Aristotle, Newton, Darwin, Einstein, to name the most obvious examples). The author of the above quote would seem to grant that at least some ideas and technologies come from Eureka moments of solitary geniuses. Others would seem to go further - the author of an article entitled “The Myth of the Genius Solitary Scientist is Dangerous” holds up Einstein, Maxwell, and Newton as examples of this archetype, but then purports to expose the falsehood of these examples by saying:

“Newton looked down on his contemporaries (while suspecting them of stealing his work) but regularly communicated with Leibniz, who was also working on the development of calculus. Maxwell studied at several prestigious institutions and interacted with many intelligent people. Even Einstein made the majority of his groundbreaking discoveries while surrounded by people whom he famously used as sounding boards.”

Uhhh ok, so they talked to other people while working on their ideas? Sure, we shouldn’t have this naive view that these so-called solitary geniuses work 1000% on their own without any input whatsoever from other people, but that doesn’t mean that they didn’t do most of the heavy lifting. Similarly, another proponent of the myth of the lone genius focuses on the power of partnership (Joshua Shenk, Powers of Two: How Relationships Drive Creativity). From the introduction of an interview with Shenk on Vox:  

“After struggling for years trying to develop his special theory of relativity, Einstein got his old classmate Michele Besso a job at the Swiss patent office — and after "a lot of discussions with him," Einstein said, "I could suddenly comprehend the matter." Even Dickinson, a famous recluse, wrote hundreds of poems specifically for people she voraciously corresponded with by letter.

The idea isn't that all of these situations represent equal partnerships — but that the lone genius is a total myth, and all great achievements involve some measure of collaboration.”

This seems contradictory - so there is still a dominant person in the partnership doing most (or all) of the difficult work, but at the same time the lone genius is a TOTAL myth. I have a feeling that Einstein’s contribution was a little more irreplaceable than that of this Besso fellow. Is there not room for a more moderate position here? I guess that doesn’t really sell books. 

It’s not hard to see why the myth of the lone genius is so popular - it is a very politically correct type of idea, very much going along with the general aversion to recognizing intelligence and genes as meaningful sources of variation in social/intellectual outcomes. It is also kind of a natural extension of the “you can achieve anything you set your mind to!” cliche. The fact that most of the geniuses in question are white men probably plays a not insignificant role in people’s quickness to discredit their contributions. At the end of the day, it’s really tough to admit that there are geniuses in the world and you aren’t one of them. 

Defenders of the myth would probably argue that the vast majority of people are not solitary geniuses and the vast majority of innovations do not come from people like this, so we should just preach the message that hard work and collaboration are what matters for innovation. In this view, the myth of the lone genius is a kind of noble lie - the lessons we impart by emphasizing the fallacy of the lone genius are more beneficial than the lessons imparted from uncritical acceptance of the lone genius story. I’m not sure this is true, and in fact I would argue that the uncritical acceptance of the myth of the lone genius is just as bad as uncritical acceptance of the lone genius story.

What lessons are we really trying to impart with the myth of the lone genius? 

(1) You are not just going to have a brilliant idea come to you out of thin air. 

(2) Creativity is enhanced by collaboration and sharing ideas with others. Most good ideas come from recombining pre-existing ideas. 

(3) Be humble and don’t expect that it will be easy to find good ideas. No, you will not “solve” quantum mechanics after taking your first high school physics class. 

Ok great, I’m on board with all of these lessons, it’s kind of impossible not to be. The problem is that by harping so much on the fallacy of the lone genius we are also sending some implicit messages that are actively harmful to aspiring scientists/engineers/entrepreneurs. 

(4) There are no such things as geniuses, and even if there were you are not one of them. 

(5) You won’t come up with a great idea by spending lots of time thinking deeply about something on your own. The people who think they can do this are crackpots. 

(6) Thinking isn’t real work and ideas are cheap, anything that doesn’t produce something tangible is a waste of time. Go do some experiments, have a meeting, write a paper, etc. 

(1)-(3) are certainly valuable lessons, but I think most relatively intelligent people eventually learn them on their own to some degree. My concern is that lessons (4)-(6) can become self-fulfilling prophecies - upon learning about how innovation really works from the myth of the lone genius, the next would-be revolutionary thinker will give up on that crazy idea she occasionally worked on in her free time and decide to devote more time to things like networking or writing academic papers that no one reads. We should want exceptional people to believe they can do exceptional things on their own if they work hard enough at it. If everyone internalizes the myth of the lone genius to such a degree that they no longer even try to become lone geniuses then the myth will become a reality.

My argument here is similar to the one that Peter Thiel makes about the general lack of belief in secrets in the modern world. 

“You can’t find secrets without looking for them. Andrew Wiles demonstrated this when he proved Fermat’s Last Theorem after 358 years of fruitless inquiry by other mathematicians— the kind of sustained failure that might have suggested an inherently impossible task. Pierre de Fermat had conjectured in 1637 that no integers a, b, and c could satisfy the equation a^n + b^n = c^n for any integer n greater than 2. He claimed to have a proof, but he died without writing it down, so his conjecture long remained a major unsolved problem in mathematics. Wiles started working on it in 1986, but he kept it a secret until 1993, when he knew he was nearing a solution. After nine years of hard work, Wiles proved the conjecture in 1995. He needed brilliance to succeed, but he also needed a faith in secrets. If you think something hard is impossible, you’ll never even start trying to achieve it. Belief in secrets is an effective truth.

The actual truth is that there are many more secrets left to find, but they will yield only to relentless searchers. There is more to do in science, medicine, engineering, and in technology of all kinds. We are within reach not just of marginal goals set at the competitive edge of today’s conventional disciplines, but of ambitions so great that even the boldest minds of the Scientific Revolution hesitated to announce them directly. We could cure cancer, dementia, and all the diseases of age and metabolic decay. We can find new ways to generate energy that free the world from conflict over fossil fuels. We can invent faster ways to travel from place to place over the surface of the planet; we can even learn how to escape it entirely and settle new frontiers. But we will never learn any of these secrets unless we demand to know them and force ourselves to look.”
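(For reference, the conjecture Wiles proved is easy to state in modern notation; this is just the standard statement of Fermat's Last Theorem, nothing specific to Thiel's text:)

$$a^n + b^n = c^n \quad \text{has no solutions in positive integers } a, b, c \text{ for any integer } n > 2.$$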


Maybe I’m overthinking all of this - does the myth of the lone genius really affect anyone’s thinking in any substantial way? Maybe it only has the tiniest effect in the grand scheme of things. Even still, I would argue that it matters - uncritical acceptance of the lone genius myth is one more cultural force among many that is making it more and more difficult for individuals to do innovative work (and last time I checked, humanity is made up of individuals). In a fast-paced world full of intense economic/scientific/intellectual competition and decreasing opportunities for solitude, it is harder than ever before to justify spending significant time on intangible work that may or may not pay off. You can’t put on your resume - “I spend a lot of time thinking about ideas and scribbling notes that I don’t share with anyone.”

I guess what I want to counteract is the same thing that Stephen Malina, Alexey Guzey, and Leopold Aschenbrenner argue against in “Ideas not mattering is a Psyop”. I don’t know how we could ever forget that ideas matter - of course they matter - but somewhere along the way I think we got a little confused. How this happened, I don’t know - you can probably broadly gesture at computers, the internet, big data, etc. and talk about how these have led to a greater societal emphasis on predictability, quantifiability, and efficiency. Ideas (and the creative process that produces them) are inherently none of these things; as Malina et al. remind us - “Ideas are often built on top of each other, meaning that credit assignment is genuinely hard” and “Ideas have long feedback loops so it’s hard to validate who is good at having ideas that turn out to be good”. I would also mention increased levels of competition (as a result of globalism, increased population sizes, and the multitude of technologies that enable these things) as a major culprit. For any position at a college/graduate school/job you are likely competing with many people who have done all kinds of impressive-sounding things (although it is probably 90% bullshit), so you better stop thinking about crazy ideas (remember, there are no such things as lone geniuses) and start doing things, even if the things you are doing are boring and trivial. As long as they look good on the resume...


The life and times of Kary Mullis provide an illustration of this tension between individual genius and collaboration in the production of radical innovation. Kary Mullis is famous for two things - inventing the polymerase chain reaction (for which he won the Nobel Prize) and having some very controversial views. 

“A New York Times article listed Mullis as one of several scientists who, after success in their area of research, go on to make unfounded, sometimes bizarre statements in other areas. In his 1998 humorous autobiography proclaiming his maverick viewpoint, Mullis expressed disagreement with the scientific evidence supporting climate change and ozone depletion, the evidence that HIV causes AIDS, and asserted his belief in astrology. Mullis claimed climate change and HIV/AIDS theories were promulgated as a form of racketeering by environmentalists, government agencies, and scientists attempting to preserve their careers and earn money, rather than scientific evidence.”

This is another reason why people are so leery of the lone genius - it often comes with a healthy dose of crazy. Yes, obviously this can go poorly - his ideas on AIDS did NOT age well - but, as we all know because there is an idiom for it, sometimes you have to break a few eggs to make an omelette. 

Mullis told Parade magazine: “I think really good science doesn’t come from hard work. The striking advances come from people on the fringes, being playful.”

Proponents of the lone genius myth might be wondering at this point - did Mullis really invent PCR all on his own in a brilliant flash of insight? We shouldn’t be surprised that the answer is yes, in fact he did, but also that it’s a little more complicated than that. 

“Mullis was described by some as a "diligent and avid researcher" who finds routine laboratory work boring and instead thinks about his research while driving and surfing. He came up with the idea of the polymerase chain reaction while driving along a highway.”

“A concept similar to that of PCR had been described before Mullis' work. Nobel laureate H. Gobind Khorana and Kjell Kleppe, a Norwegian scientist, authored a paper 17 years earlier describing a process they termed "repair replication" in the Journal of Molecular Biology.[34] Using repair replication, Kleppe duplicated and then quadrupled a small synthetic molecule with the help of two primers and DNA polymerase. The method developed by Mullis used repeated thermal cycling, which allowed the rapid and exponential amplification of large quantities of any desired DNA sequence from an extremely complex template.”
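(An aside on the “exponential amplification” in the quote above: each thermal cycle can at most double the number of copies of the target sequence, so the copy count grows geometrically with the number of cycles. Here is a minimal back-of-the-envelope sketch in Python; the per-cycle efficiency parameter is an illustrative assumption, not a measured value.)

```python
# Back-of-the-envelope PCR math: each thermal cycle at most doubles
# the number of copies of the target DNA sequence.

def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Approximate copy count after `cycles` rounds of thermal cycling.

    `efficiency` is the fraction of existing copies successfully
    duplicated in each cycle (1.0 = perfect doubling); it is an
    illustrative assumption, not a measured value.
    """
    return initial_copies * (1 + efficiency) ** cycles

# With perfect efficiency, 30 cycles turn one molecule into ~10^9 copies.
print(pcr_copies(1, 30))        # 1073741824.0  (= 2**30)
print(pcr_copies(1, 30, 0.9))   # ~2.3e8 with 90% per-cycle efficiency
```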

“His co-workers at Cetus, who were embittered by his abrupt departure from the company,[10] contested that Mullis was solely responsible for the idea of using Taq polymerase in PCR. However, other scientists have written that the "full potential [of PCR] was not realized" until Mullis' work in 1983,[35] and that Mullis' colleagues failed to see the potential of the technique when he presented it to them.”

"Committees and science journalists like the idea of associating a unique idea with a unique person, the lone genius. PCR is thought by some to be an example of teamwork, but by others as the genius of one who was smart enough to put things together which were present to all, but overlooked. For Mullis, the light bulb went off, but for others it did not. This is consistent with the idea, that the prepared (educated) mind who is careful to observe and not overlook, is what separates the genius scientist from his many also smart scientists. The proof is in the fact that the person who has the light bulb go off never forgets the "Ah!" experience, while the others never had this photochemical reaction go off in their brains."


So what’s the take-home message? Let’s not treat the myth of the lone genius like it’s gospel. Sometimes really smart people think long and hard about something and come up with an idea that changes the world. Yes, this happens very rarely and most innovation comes from the “hard slog of large armies of individuals, each making—at best—a tiny step or two forward”, but if we aren’t careful then these Eureka moments will become fewer and farther between and everything will be a hard slog. Let’s do better by providing a more nuanced picture of innovation in which solitary exploration by “geniuses” and collaboration both play critical roles.

(originally posted at Secretum Secretorum)

Comments (26)

I agree that people do often make major discoveries alone. I also agree that "committees" truly could not have made many of those discoveries. But the other thing I think is true is that they still only do it when the supporting ideas become available to them. Not just the observations, but the right ways of thinking and asking questions about those observations. Newton talked about "the shoulders of giants" and all that.

Once the conditions exist, you'll get your genius reasonably soon. There are enough geniuses out there to make things happen when the time is right.

If Einstein hadn't come up with, say, relativity, somebody else probably would have within 10 or 20 years. Maybe even a few people, who indeed might have been doing things more like "working alone and occasionally communicating", than "collaborating closely". On the other hand, I very much doubt that Einstein himself would have arrived at even Special Relativity if he'd been working 50 or 100 years earlier.

Thiel seems to be arguing against that by suggesting that the proof of Fermat's Last Theorem just lay there as a "secret" for 358 years, until Wiles Heroically Challenged The Orthodoxy that insisted it Could Not Be Done. I think that misstates the matter very badly, and that the whole Thiel passage is really unconvincing.

At least as I understand the history, Wiles was indeed living in a mathematical community that was pretty discouraged about proving Fermat's Last Theorem... but nonetheless he was using a huge apparatus of number theory that had been built up over those 358-or-whatever years. Wiles didn't prove the theorem using tools that would have been available 350 years before (and nobody believes that Fermat himself ever had a correct proof). The bit Wiles filled in was the proof of the Taniyama-Shimura-Weil conjecture. To even state that conjecture, let alone prove it, you have to use a bunch of concepts to which Fermat's era had no access.

So Wiles' proof wasn't simply unnoticed for 350 years until he mystically "discovered a secret". Thiel's presentation reads as sloppy, clueless, or even dishonest, on that matter. It also seems kind of clueless on the true value of what Wiles did. Although I'm sure Wiles was very much motivated by wanting to nail Fermat's Last Theorem, the framework he developed to do that also advanced mathematics in general, and that's more important in the grand scheme of things.

As for Wiles keeping a secret, a 6-year secret is a very different matter from a 358-year secret. The field may have been demoralized enough, or maybe the solution was just truly inobvious enough, to give Wiles 6 years or more... but it wouldn't have taken another 350 years if Wiles hadn't done it. I suspect it wouldn't have taken 50 or even 20.

Also, when Wiles went public in 1993, what he had was wrong (and the theorem had a long history of false proofs at that point). It took Wiles another year to fix the problems other people found in his proof.

As for Mullis, PCR is a laboratory technique, not a sweeping framework. I don't think it puts Mullis in Wiles' league, let alone Einstein's or Newton's. And Mullis really does seem to have just mostly lucked into noticing it. I'm thinking it would more likely have been under 5 years than over 10 before somebody else came up with PCR. And I'm not entirely sure that a committee couldn't have come up with PCR given a driving application, so I think Mullis is actually a poor example.

What you say is even more true than you think. We would have had "relativity" in 1906, if you are satisfied with an experimentally indistinguishable theory which kept the ether as a conventional choice (a degree of difference similar to the one between interpretations of quantum mechanics). Poincaré had already submitted a paper in 1905 before seeing Einstein's, building on Lorentz's previous work. Now, Einstein's theory is preferable for several reasons, but ultimately the difference is small. 

If you look you find similar stories for Newton, Mendeleev, obviously Darwin, and others. There are some counterexamples, but ultimately we should take Newton seriously: the height of the shoulders you stand on is more important than your own for determining how far you can see. 

I had a look at my copy of Simon Singh's "Fermat's Last Theorem" (amazing book by the way) and two things are pretty clear:

  1. Wiles' proof makes extensive use of papers published in the years 1986-1993 while he was working on his proof, so he was certainly not isolated during this time.

  2. He was unable to find the error in his proof, and had help from Taylor to correct the error.

So if he had not been so obsessed with "being the one to prove Fermat's last theorem", the proof would have been finished a couple years sooner.

So yeah, the lonely genius is a myth and a dangerous one. Long live the collaborative genius who works with his fellow geniuses!


I don't even understand what Thiel is trying to say, which is pretty typical.

Well said and I largely agree with your assessment of Mullis. One thing to consider: there is some added value in getting discoveries sooner (e.g. something with medical implications, like PCR). I also wonder about the contingency/path-dependence of science/tech on large scales - if it had been discovered at another time by another person would science (and history) have followed the same path? 

On a broader level, I wonder how science/tech contingency interacts with the contingency of culture and history, as these set what people value and care about in the first place, in turn affecting what people study/build. I think about how the history of science and biology would be different over the last 150 years if we only had Wallace and not Darwin. Wallace was not nearly as respected as Darwin, didn't have nearly as much evidence behind his theory, and had a more theological framing on Natural Selection. I wonder what the ripple effects would be today if we only had Wallace and not Darwin.

On this I agree with you. But the Darwin issue is a bit of a special case - the topic was politically/religiously charged, so it was important that a very respected figure was spearheading the idea. Wallace himself understood it, I think - he sent his research to Darwin instead of publishing it directly. But this is mostly independent of Darwin's scientific genius (only mostly, because he gained that status with his previous work on less controversial topics).

On the whole, I agree with jbash and Gerald below - "geniuses" in the sense of very smart scientists surely exist, and all else equal they speed up scientific advancement. But they are not that far above ordinary smart-ish people. Lack of geniuses is rarely the main bottleneck, so a hypothetical science with fewer geniuses but more productive average-smarts researchers would probably advance faster, if less glamorously. 

You could make a parallel between geniuses in science and heroes in war: heroic soldiers are good to have, but in the end wars are won by the side with more resources and better strategies. This does not stop warring nations from making a big deal of heroic exploits, but that's done mostly to improve morale. 

Two things can be true at the same time:

(1) geniuses exist - individuals with statistically rare brains and knowledge bases able to make advances

(2) for major historical inventions there could have been many geniuses making the realization around the same time and the one we credit in the history books was just a little faster/better at self promotion.

I think 1 and 2 are both true. That lasting innovations come from technology bases becoming robust and broadly available, enabling the next advance. That while not everyone is a genius, 1 in 100 at least could be, and the world therefore has millions who could fill the role.

That if you sent assassins back in time to kill key innovators you would delay things only a little, from weeks to a few years depending.

Commented above but relevant here: one thing to consider - there is some added value in getting discoveries sooner (e.g. something with medical implications, like PCR). I also wonder about the contingency/path-dependence of science/tech on large scales - if it had been discovered at another time by another person would science (and history) have followed the same path? 

On a broader level, I wonder how science/tech contingency interacts with the contingency of culture and history, as these set what people value and care about in the first place, in turn affecting what people study/build. I think about how the history of science and biology would be different over the last 150 years if we only had Wallace and not Darwin. Wallace was not nearly as respected as Darwin, didn't have nearly as much evidence behind his theory, and had a more theological framing on Natural Selection. I wonder what the ripple effects would be today if we only had Wallace and not Darwin.


Well, the hypothesis above says if you assassinate Darwin then someone else will make a similar discovery. The Voyage of the Beagle gave Darwin the information needed to reach this hypothesis. Other people alive then knew the same facts and were about as intelligent, therefore someone else would have advanced the theory.

The underlying technology and infrastructure that allowed for a 5-year scientific voyage made this conclusion possible.

Maybe it would have taken until the introduction of cameras but the important thing is that evolution is a force you can see in the data. It's as "real" as an electron is.

Borrowing from the post, how about we categorise between a lone genius: someone who comes up with an idea using a combination of different means available to them such as collaboration, intense study, iterated experiments etc; and a solitary genius: one who’s completely withdrawn from the world and has chosen to rely solely on their cognitive prowess to come up with new ideas, independent of any interactions with the world or the processes and entities in it that are usually known to assist in the generation of new knowledge.

(Differentiating based on the other meanings of the word, that is, lone can mean solitary, but it can also mean single, whereas I feel solitary necessarily denotes isolated)

Maybe this distinction reduces the need for mythifying knowledge creation, as the latter is next to impossible and the former is pretty much how knowledge creation works, no?

I have a feeling that the reason we are still having a conversation about ideas like the lone genius and the great founder is that different people are talking about different things when they talk about the lone genius. Some implicitly assume the interactive elements, whether with fellow humans or processes; others see it as an attempt at undermining the credibility of an interconnected structure (because they think the idea of the lone genius signifies an individual in seclusion without any interactive components). We first need to get the common denominator right so that we know we are not talking past each other.

I wouldn't fully dismiss the solitary genius as a near-impossibility. Some really weird people actually meet your strict definition, and I think they should count at least as a little bit of evidence in favor of Lone Genius.

Consider Grigori Perelman, who solved the Poincaré conjecture some years ago. 

  • Rejected a Fields Medal
  • Rejected a million dollar prize from the Clay Institute.
  • Rejected a bunch of job offers from top US universities
  • Journalists are unable to meet him
  • So solitary that we don't even know where he lives (supposedly Saint Petersburg)

This pretty much classifies as solitary genius to me.

Remarkable achievements indeed, but I would also note that there is always more to it than meets the eye. He lives a solitary life now, but he was an active mathematician up until 2004-05 at Steklov. We don't know how his stays at Courant, Stony Brook, or UCB shaped his ideas. Additionally, his work builds upon Hamilton's Ricci flow, which he had been working on, more or less, since 1992, which happens to coincide with his time at Courant and Stony Brook. We don't know the people with whom he was in regular correspondence. Also, the analogous results were proved by Smale and Freedman for dimensions higher than 3; what did he borrow from their work? We don't know. Beyond that, we know nothing about him other than that he is a recluse who has withdrawn himself from the world, and that he espouses an ethical and moral philosophy that stands contra to what global institutional mathematics espouses in terms of fame and recognition.

Please note that my point is not to refute the notion of a genius, but to show that the language we use undermines the shared understanding of the process of knowledge creation. I never said that you can't work in complete seclusion, but that this framing misses the "middle" (think excluded middle) that we don't usually convey. This can often lead to people talking past each other. Even in the case of Perelman: strip him of his math education, his exposure to modern mathematical results, his correspondence with people involved in modern mathematics - could he have built everything by himself in order to prove the Poincaré conjecture? Now you might intuitively understand that this is not what we usually mean by a lone/solitary genius, but think about the numerous times you've spoken to people about a lone genius and people have found it difficult to understand. It's because they feel you are undermining the fundamental nature of knowledge creation, that is, that it is an iterative process involving knowledge acquisition, conjecturing, formulation, correspondence, criticism, error correction, etc., which needs other people and processes. And when we speak in simplistic terms like lone genius and great founder, it is inevitable that we will end up getting entangled in the meta war and lose sight of the object of communication.

My understanding is that the object of communication in our case is what leads to some people contributing significantly while others contribute only incrementally. Better questions would be: how do they build on others' work? What were the incremental components of their work that went unnoticed? How did the correspondence they had with other people in the field affect their trajectory? Note that all of these allow for a solitary life and work, but not a ground-up creation of new knowledge without any interaction whatsoever with other people or processes.

Perelman had ethical objections to scientific prizes, but during his productive period he was not particularly solitary and was involved in normal academic life - although he was on the lower end of academic interaction.

I dunno, I think you had the right of it when you mentioned that the myth of the myth is politically convenient.  Like, you see this everywhere.  "You didn't build that", etc.

If you grant that anyone, anywhere, did anything, then you are, in the Laws of Jante style, insulting other people by implying that they, for not doing that thing, are lesser.  That's a vote/support loser.  So instead you get 'Hidden Figures' style conspiratorial thinking, where any achievement ascribed to a person is really the work of other exploited people, ideally nameless ones onto whom the people you are trying to grift can project themselves.

Depending on the politics of the person in question, sometimes you get a backhanded admission that maybe they had something to do with their achievements, but it will always be presented as being dwarfed by the contributions of the nameless anonymous audience stand-ins.

Well said, the Hidden Figures example is a really good one.

This reminds me of a similar issue I thought about when I first learned about the efficient market hypothesis (EMH) with respect to the stock market. The similarity is that if the EMH were true, it would dissuade people from seeking investment strategies that produce consistently out-sized returns relative to their risk, which in turn reduces the efficiency of the market. So you don't want too much belief in the EMH - in the same way you don't want too much belief in the Myth of the Lone Genius (MotLG).

It is an interesting problem to tackle: either we believe in the EMH and the MotLG too much and dissuade smart people from making amazing discoveries, or we don't believe them enough and encourage smart people to waste time on big impossible advances when they could have been making small marginal steps of progress.

I agree with OP that 'is this thing true or false' is not a nuanced enough conversation to have about such a complex topic, because it seems obvious to me that lone geniuses can exist in a world also filled with smart people making small advances. Arguments based on 'how alone' the lone genius was seem arbitrary, because there is no way to agree on a definition of 'how alone' a genius is. I also agree with OP's suggested reasons why one might be motivated to argue in a 'true or false' way.

Some follow up points I think are worth talking about, but that I have no current answers to, are:

  • Does belief in MotLG actually dissuade the people we worry about it dissuading? Are potential lone geniuses likely to be affected by MotLG, or are they so naturally stubborn and resistant that such ideas have no effect on this type of person anyway? Are there many potential lone geniuses at the margin who are dissuaded by MotLG?
  • If belief in MotLG does genuinely dissuade brilliant minds from pursuing novel ideas in a material way, what is the magnitude of this impact?
  • What are the trade-offs (if any) between encouraging brilliant minds to find novel ideas versus encouraging larger cohorts to work together?
  • Assuming we are optimising our society for novel ideas/progress (and not being politically correct, etc):
    • If the magnitude of the impact is high and/or trade-offs are acceptable, how do we approach presenting MotLG and other such ideas in public discourse?
    • Do we stop teaching MotLG altogether? Or do we teach a more nuanced version of MotLG with relevant counterpoints? What do we emphasize and de-emphasize, etc?
    • Do we try to identify potential lone geniuses and treat them differently to the general population? If there are many potential lone geniuses at the margin what other ways can we attempt to get the right message (or mix of messages) to the right people?
    • What is the optimal mix in belief in MotLG across a given population?
    • How does this optimal mix in belief in MotLG change when the population changes? i.e. is the optimal mix of belief in MotLG different for different nations, where the average characteristics of a nation mean that belief/disbelief in MotLG holds that nation back?

If we dissuade them from being lone geniuses, what do they do instead?

Become accountants?

Find co-founders for their startups, thereby increasing their startup's chance of success?

Well said, I genuinely don't know about any of your follow up questions but I think they are important to consider. 

If it weren't for solitary geniuses, mathematics as a field would not exist. Nearly everything interesting was discovered by them, and nearly every mathematician I can name would count as one - Euclid, al-Khwarizmi, Galois, Ramanujan, Cantor, Gödel...

Ramanujan and Galois are textbook examples of mathematicians who would have had a much bigger impact if they had not been so isolated. (And Ramanujan's most productive period was when he was working with Hardy and Littlewood.)

Also, Erdős.

“Committees and science journalists like the idea of associating a unique idea with a unique person, the lone genius.”

Peter Higgs tends to get sole credit although he was one of six authors on the original paper.

I like the post, but just to pick on one thing

(4) There are no such things as geniuses, and even if there were you are not one of them. 

There are two parts to this, the first "There are no such things as geniuses" is not proclaimed by anyone serious, the second, "you are not one of them" is basically correct if you rephrase it as "if you need to ask whether you are one of them, you are not."

The old wizard sighed. His half-glasses eyes looked only at her, as though they were the only two people present. "Miss Granger, it might be possible to discourage witches from becoming Charms Mistresses, or Quidditch players, or even Aurors. But not heroes. If someone is meant to be a hero then a hero they will be. They will walk through fire and swim through ice. Dementors will not stop them, nor the deaths of friends, and not discouragement either."

"Well," Hermione said, and paused, struggling with the words. "Well, I mean... what if that's not actually true? I mean, to me it seems that if you want more witches to be heroes, you ought to teach them heroing."

I don't know if that last paragraph is the author's view, or whether there is any evidence/consensus for it. I go by what I see, and what I see is a person possessed, driven to overcome obstacles over and over again. Musk is an extreme example, but in general all the classic tech moguls are "natural heroes" in that sense. The burning need inside to do "world optimization" cannot be quenched.
