Fair enough. If nothing else, it's best to state where you think the min value, max value, and midpoint are, and ideally put error bars around those.
Or, at least, you can state outright you think the midpoint and max are far enough away as to be irrelevant distractions to some particular practical purpose. Or that you expect other factors to intervene and change the trend long before the later parts of the sigmoid shape become relevant.
To add: It is in principle very easy for people to make equivalent prediction errors in either direction about when a particular exponential will level off, and to be wrong by many orders of magnitude. In practice, I usually encounter a vocal minority who happily ignores the fact that the sigmoid even exists, and a larger group who thinks that leveling off must be imminent and the trend can't possibly continue much longer. The cynic in me thinks the former group tends to start getting believed just in time to be proven wrong, while the latter group misses out on a lot of opportunities but then helps ensure the leveling off has less catastrophic consequences when it happens.
I'm curious: was there a particular (set of) sigmoid(s) you had in mind when writing this post? And particular opinions about them you haven't seen reflected in discussions?
Most of my own uses of sigmoids in modeling/forecasting have been about adoption and penetration of new technologies. Often the width of the sigmoid (say, the time to get from 1% to 99% of the way from min to max) is relatively easy to approximate, driven by forces like "how incumbent institutions are governed" (yes, this is critical even for most extremely disruptive innovations). The midpoint and maximum are much harder to anticipate.
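For concreteness, here's a minimal sketch of the parameterization I mean (a standard logistic; the names and example numbers are made up for illustration, not from any real forecast):

```python
import numpy as np

def sigmoid(t, y_min, y_max, t_mid, width):
    """Logistic curve with an explicit floor, ceiling, midpoint, and
    'width' = time to go from 1% to 99% of the way from y_min to y_max."""
    # For a logistic, the 1%-to-99% transition spans 2*ln(99) ~ 9.19 time constants.
    tau = width / (2 * np.log(99))
    return y_min + (y_max - y_min) / (1 + np.exp(-(t - t_mid) / tau))

# Illustrative only: adoption rising toward an 80% ceiling, midpoint in 2030,
# taking ~15 years to cover the 1%-to-99% stretch.
years = np.arange(2020, 2046)
adoption = sigmoid(years, 0.0, 0.80, 2030, 15)
```

The point being: the width shows up as a single, relatively forecastable parameter, while the midpoint and maximum are separate knobs you still have to argue for.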
This is absolutely true. However, actually using it effectively requires having a sufficiently good, principled reason for thinking the limit is in some particular place, or will be approached on some particular timeline. Many (most?) of the real-world attempts I've looked at to forecast when some particular exponential will start looking like an s-curve turn out to be really far off, sometimes in ways that 'should' have been obvious, even if the forecasting exercise itself is instructive.
I feel like this underestimates the difference between what you're citing (valuation growth over 2 years post-YC) and the Garry Tan quote, which was about weekly growth during the 3-month YC program. I also wish the original Garry Tan claim were more specific about the metric being used for that weekly growth statistic. In principle, these aren't necessarily mutually exclusive claims. In practice, I'd expect there's some fudging going on.
I can imagine something like the following: Companies grow faster with less investment, reaching more revenue sooner because of GenAI. But this also means the company has fewer defensible assets, and less of a lead over competitors, so the valuation is lower after 2 years. AKA potentially the cost of software innovation is going down, speed is going up, and there's less of an advantage to being first because it's getting easier to be a fast follower. In a world where it's possible-in-principle to think about one-person unicorns, why should software companies ever have high valuations at all, once enough people know what they're trying to build?
I'm curious what effects this would have, if true. If we end up in a place where an individual can build a $10-20M company, practically on their own, in months, with only a seed round, but can never get to $100M or $1B, how does this affect startup funding models? The pace of overall innovation? I could see this going really well (serial founders have time in their careers to build 50 companies, the cost to access new software products drops) or really badly (VCs lose interest in software startups, innovation slows to a crawl), or any number of other ways.
Or, also not mutually exclusive with the above: Maybe GenAI-2023 is sufficiently different from GenAI-2025 that we shouldn't really be comparing them, and the 2023 batches were growing less or slower due to dealing with the aftereffects of covid or something.
Completely agreed - suggesting that this is a solution was a failure to think the next thought.
Nevertheless, if we had any idea how to actually, successfully do what Hinton suggested (supposing we really wanted to), I'd feel a lot better about our prospects than I do right now.
No, but filter strength is likely to be exponentially distributed. The universe is big, intelligent life seems to be rare from where we stand, and there are many OOMs of filter to explain. But the claim/premise is that, statistically, it's likely that one factor (or a small number of factors) explains more OOMs of filter than any other given factor, and we don't know which one(s). So you can have a lot of things that each kill off 90% of paths to potential spacefaring civilizations, but not as many that kill off all but a billionth of them, or we probably wouldn't be here to ask the question.
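As a toy illustration of that premise (my own sketch with made-up numbers, not anything from the original argument): if you draw the strengths of many candidate filters from an exponential distribution, the single largest one typically accounts for several times its "fair share" of the total OOMs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 20 candidate filters, each removing some number of orders of
# magnitude (OOMs) of would-be spacefaring civilizations, with strengths
# drawn independently from an exponential distribution.
n_filters, n_trials = 20, 10_000
strengths = rng.exponential(scale=1.0, size=(n_trials, n_filters))

# Share of the total filtering (in OOMs) attributable to the single largest factor.
share_of_largest = strengths.max(axis=1) / strengths.sum(axis=1)
print(f"Median share of total OOMs from the biggest single filter: "
      f"{np.median(share_of_largest):.0%}")
# Typically several times the 5% you'd get if all 20 filters were equal.
```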
Some academics are beginning to explore the idea of “model welfare”,
Linked paper aside, "Some academics" is an interesting way to spell "A set of labs including Anthropic, one of the world's leading commercial AI developers."
I do wonder how it affected the economics and process of training new toolmakers, though.
That was, admittedly, a snarky overgeneralization on my part, sorry.
It may well be on purpose. However, I tend to think in many cases it's more likely a semi-conscious or unconscious-by-long-practice habit of writing what will get people to read and discuss, and not what will get them to understand and learn.
If you're not a software company, and what you want to do requires steel in the ground, then any workable plan to become huge will realistically require 3-4 years each in the lab, pilot, demo, and FOAK phases, largely in series. It will also often benefit from the founders stepping down as CEO quite early, in favor of someone with much more direct industry experience. And if you're honest about all of that, many VCs will run away.
As you explain later, the first part of this would be nonsense if the second part weren't so important. AKA, if founders had the discipline to not increase spend rate beyond necessity, and instead used the money to increase runway while still following an optimal path to growth, rather than inefficiently chasing faster growth by spending more and just assuming more funding will be available when needed, this would not be such a problem.
Also, it's not necessarily about having a down round. It's sometimes about needing one at all. I've met people whose shareholders forced their companies to wind down instead of allowing a down round, even when the down round would likely have led to a successful exit later, because e.g. the shareholder was trying to raise their own next fund and a down round on their record would have made it harder.