The Wright Brothers’ first flight was a joke by modern flying standards. Even though proving that heavier-than-air flight was viable was monumental, many people at the time could not see how a machine that could only fly a short distance, a few feet off the ground, with the pilot in an uncomfortable position and no protection against crashing into the ground, could ever amount to anything.

Despite all these obvious flaws, entrepreneurs and engineers saw the potential. They rapidly iterated on the design, and within two decades airplanes were a decisive advantage in war, were changing the delivery of goods, and were creeping into commercial travel at the very luxury end of the spectrum. A hundred years later, we have modern marvels like the Airbus A380 and Boeing 747, and international airports that are practically a world wonder in their operations and interoperability.

ChatGPT is analogous to the Wright Flyer. There were capable LLMs and tons of work in AI before it, but ChatGPT put it all together in a way that let the general public imagine how AI could be a part of their lives. Others saw a clear villain to fear. But what is AI's analog of the P-51 Mustang? Or the B-17? Or global air traffic control systems? Or composite materials and jet engines?

AI is going to have as big an impact as airplanes did, or bigger, and a huge amount of infrastructure will need to be built in the process. What should humanity be building?

dsj:

Highly tangential point, but the “modern marvel” Boeing 747 first flew in 1969, before the first Moon landing and only 66 years after the Wright Flyer’s maiden flight.

Have you read much about the concerns people have about AI safety on LessWrong? If you agree with those concerns, that would likely change your answer here!

I have, and I'm continuing to read them. I used to buy into the singularity view and the fears Bostrom wrote about, but as someone who works in engineering and also works with ML, I no longer believe these concerns are warranted, for a few reasons... might write about why later.

Fair enough! Would be keen to hear your thoughts here.

Connor from Conjecture made a similar point about GPT-3:

About a year ago this post had -1 upvotes xD

Ha, just stumbled across "GPT-2 As Step Toward General Intelligence" by Scott Alexander, published one day after "Implications of GPT-2".

You weren't wrong there. One big thing about ChatGPT is that non-tech people on Instagram and TikTok were using it and doing weird/funny stuff with it.