How does one distinguish "idea is a flop and should be killed" from "idea is good, under-executed and more effort would make it work"?
Yeah, that's tricky. My hope is that once you're decently good at execution, you can recognize good execution, such that you can distinguish between the two.
What seems harder yet is "idea is a flop and should be adjusted" vs "idea is a flop and I should switch to a different idea" (a distinction which, by the way, lives on a spectrum).
The Lean Startup describes this as the "Pivot or Persevere" question.
Where persevere is roughly: iterate more on the current idea.
And Pivot is roughly: iterate/change to a different idea.
There are movements that say people should quit sooner (see e.g. https://www.youtube.com/watch?v=3Xdyioqs5Ds ), and stories of startup founders who stubbornly stuck to their vision, rejection after rejection (whilst iterating).
Oh, also, getting advice from others is probably helpful on the execution front. If you're under-executing, people can probably recognize that, I expect. Whereas good advice on the product-market-fit side, i.e. "should I iterate on this idea or pivot to another one?", is very hard to give or get.
Most ideas fail.
Successful startup founders don't get their first idea right. So they try things out, iterate relentlessly, and pivot when needed[1] – until they find something that works.
In AI Safety, most ideas won't be impactful either.
However, most people don’t know how to systematically move quickly and address the problems that need solving.
The result?
Projects that move slowly, fail to gain traction, and get stuck.
This post shows you how to avoid getting stuck. We’ll introduce you to the iteration system that helps startup founders succeed.
Specifically, entrepreneurial skills help you with two things: executing well, and building something people actually want.
This is post 2 of 5 in the sequence "How to launch an impactful AI Safety project". Post 1 introduces the broader topic, but this post stands well on its own.
Execution: How to build something
Be relentlessly resourceful.
That’s how Paul Graham – co-founder of Y Combinator and legendary entrepreneurship coach – describes being a good startup founder.[2]
Beyond that, building anything requires a broad range of generalist execution skills: project and people management, finance, communication, engineering, and research.
These skills are essential. You might know they matter, but it can be hard to actually acquire them. One place to start is reading the book “How to Launch an Impactful Nonprofit”.[3]
We strongly suggest you try to improve your general execution skills, or at least get feedback from people who know you well to figure out what you could work on.
Nevertheless, competent execution isn’t sufficient for success.
Most new ideas will fail, even if competently executed.[4]
Rather than just building something, you have to find the right thing to build.
Some thing, idea, or artefact that, if you execute it well, people will actually use, talk about, or build upon. And then, all that buzz has to actually affect AI Safety outcomes.
That’s why Impact = Adoption × Effectiveness.
We'll discuss adoption in this post, then turn to effectiveness in post 3.
Adoption: Build something people want
If people don’t want what you created, they won’t spend time with it.
To build something people actually adopt, you need to understand what they want.[5] You can do this by talking to users, identifying assumptions, testing the riskiest assumptions, and iterating toward a solution.
“Users” can take many forms across AI Safety – for example: researchers building on your work; legislators implementing your policies; people joining your fellowships; or people adopting your tool.
Getting adoption is entrepreneurship 101, and there are entire books on the topic – e.g. The Lean Startup, Inspired, The Mom Test, and Continuous Discovery Habits. Podcasts like Lenny's Podcast and entrepreneurship bootcamps teach this too.
We’ll cover just a few key takeaways in this post.
Fall in love with the problem, not the solution
Don’t get attached to your idea.
Don’t even get attached to a specific sub-problem within AI Safety yet.[6] First fall in love with the entire problem of existential risk. Then identify a specific sub-problem where you have the highest expected impact.
Only then think about solutions.[7]
Note that the maximum impact of your project is capped by the importance of the problem it tries to solve. If a problem causes 3% of existential risk, a solution can reduce risk by at most 3%. It's often better to focus on another problem that's responsible for, e.g., 40% of existential risk.[8]
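To make the cap concrete, here's a toy calculation (the function name and all numbers are illustrative, not a real model): even a perfect fix for a small problem is bounded by that problem's share of total risk, while a partial fix for a big problem can matter more.

```python
def risk_reduction(problem_share: float, solution_quality: float) -> float:
    """Toy model: fraction of total existential risk a project removes.

    problem_share: fraction of total x-risk caused by the problem (0..1)
    solution_quality: fraction of that problem the project solves (0..1)
    """
    return problem_share * solution_quality

# A perfect solution to a 3% problem is capped at 3%:
small = risk_reduction(problem_share=0.03, solution_quality=1.0)  # 0.03

# A half-solution to a 40% problem removes 20%:
big = risk_reduction(problem_share=0.40, solution_quality=0.5)    # 0.2

print(f"small problem, perfect fix: {small:.0%}")
print(f"big problem, partial fix: {big:.0%}")
```

The multiplication is the whole point: no amount of solution quality can push impact above the problem's share of the risk.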
Ship fast, kill fast
Successful entrepreneurs iterate. A lot.
They make it their goal to kill bad ideas fast, so they can get to a good idea sooner. They do that by maximizing their learning rate, and de-risking the assumptions that stand between them and success.
So go build something scrappy and get it out there. Don’t spend a lot of time on it. Optimize for learning, not for being proud of what you built. Then improve what works and scrap what doesn’t.
It’s tricky to give advice on this topic because different people need to hear different things.
The entrepreneurial skills from this post will help you create something that people use. But adoption alone isn't enough for impact.
Adoption ≠ Impact
I like bananas. If you sell them, I'll buy them. But this does approximately nothing for existential risk.
Remember: Impact = Adoption × Effectiveness
Adoption without effectiveness looks like that banana stand: people happily buy what you're selling, but existential risk doesn't budge.
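Because impact is a product of the two factors, either one at zero zeroes out the whole thing. A minimal sketch (the function and numbers are made up for illustration, not a real measurement):

```python
def impact(adoption: float, effectiveness: float) -> float:
    """Impact = Adoption x Effectiveness (the toy formula from this post)."""
    return adoption * effectiveness

# Bananas: huge adoption, ~zero effect on existential risk -> zero impact.
print(impact(adoption=1_000_000, effectiveness=0.0))  # 0.0

# A niche tool that a few key researchers rely on can still matter:
print(impact(adoption=50, effectiveness=0.8))  # 40.0
```

This is why optimizing adoption alone is a trap: you can grow the first factor forever and still multiply by zero.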
To be effective, you’ll need to know what problems need solving and what might solve them. That’s where strategy and impact assessment come in, which we'll turn to in post 3.
Unless you're among the top ~100 AI Safety experts, that’s where you’ll want to go next.
Post 3 hasn't been published yet, but you can leave your email address below to get notified when it launches:
(more info about Lens Academy: Superintelligence Risk Education that Scales)
Em dashes – don’t you love them? This post was written and rewritten several times by humans. We did also take suggestions from AI. Nevertheless, most of the dashes in this article are actually from humans; they’re one of the things we’ve learned from AI, as they’re genuinely nice linguistically. Fun fact: we actually use en dashes with spaces (foo – bar) instead of em dashes (foo—bar), because the main author (Luc) likes them better visually and because they smell a little less like AI while still giving the linguistic benefits. FYI, there’s the hyphen (-), the en dash (–), and the em dash (—).
Introduced here: https://www.paulgraham.com/relres.html. Explained in more depth here: https://www.jasonshen.com/how-to-be-relentlessly-resourceful/ (though this is just one interpretation).
You can get the book at https://www.amazon.com/Launch-High-Impact-Nonprofit-Joey-Savoie-ebook/dp/B09NP91L31/. Sadly, it's paywalled.
In his book "The Right It", Alberto Savoia refers to this as "the law of market failure".
In startup jargon, this is often called 'value' or 'desirability'. There are other factors to consider, like 'usability', business 'feasibility', and having a working 'growth engine'. However, ‘value’ ("Do users actually want your product?”) is often the most important.
Examples of sub-problems: ASI x-risk, AIxBio, Power Concentration, Gradual Disempowerment. Don’t get attached to any of these.
This process of understanding the problem first before thinking about the solution is described by e.g. the "Double Diamond" process. Of course, progress in reality is usually not so linear. But it's a useful heuristic nonetheless.
Focusing on an important problem is a useful heuristic, but of course tractability and neglectedness matter too. In the end, it's about the project's expected percentage reduction of existential risk, which is based on some combination of problem, solution, and counterfactuals.
Your theory of change helps your project be impactful, instead of just making you rich. We’ll discuss it in post 3.