Epistemic status: Confident in the general concept, but not in the specific details.
The world is changing fast. As soon as 2026, we may experience an exponential increase in personal productivity, intelligence, and overall capacity to influence the world, powered by new AI tools. To remain relevant in this new era, individuals must embrace these tools.
In this emerging economy, certain factors will determine one's ability to excel:
To succeed in this environment, it is vital to identify these slow-to-acquire resources and begin acquiring them now to gain a competitive advantage. It is better to possess unused resources than to run into insurmountable bottlenecks later. Here are some essential resources for this new era:
To obtain these resources, prioritize and focus on the ones most valuable right now to create compounding returns, and those that will take the longest to acquire. Examples of wise investments at the moment include:
In conclusion, to thrive in the 100X economy and remain relevant amidst rapidly advancing technology, it is crucial to adopt new AI tools and begin acquiring slow-to-acquire resources now. Identifying and prioritizing these resources, such as mental habits and personal datasets, can give individuals a competitive edge and position them for success in the weird times ahead.
Do you think there are important points missing in preparing for this wild future?
2026 mainly reflects the fact that we have short timelines. This market could be relevant to our prediction of short-term economic change:
A further question to explore is how to filter the ever-expanding list of new tools and workflows. I hope LessWrong can stay a place where high-quality productivity information is filtered and curated.
Another possibility is that assistants will be good at modelling their user from little interaction, so a large initial dataset would be less useful.
Types of data that could be valuable include a notes database, unstructured voice and screen recordings, and measurements à la Quantified Self.
If Roodman's model of economic growth holds, prepare for serious gains.
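For intuition, the qualitative behavior behind that claim can be sketched with the deterministic skeleton of a superexponential growth model, dY/dt = a·Y^b with b > 1, which diverges in finite time. This is only an illustration of the dynamic Roodman's (stochastic, fitted) model exhibits; the parameter values below are made up, not his estimates.

```python
# Deterministic skeleton of hyperbolic (superexponential) growth:
#   dY/dt = a * Y**b,  b > 1
# Parameters are illustrative only, NOT Roodman's fitted values.

def output(t, y0=1.0, a=0.05, b=1.5):
    """Closed-form solution of dY/dt = a*Y**b, valid before the singularity."""
    base = y0 ** (1 - b) - a * (b - 1) * t
    if base <= 0:
        raise ValueError("t is at or past the finite-time singularity")
    return base ** (1 / (1 - b))

def singularity_time(y0=1.0, a=0.05, b=1.5):
    """Time at which output diverges; growth outpaces any exponential."""
    return y0 ** (1 - b) / (a * (b - 1))
```

With these toy parameters the singularity lands at t = 40; the point is not the date but that growth accelerates the larger the economy gets, which is what "serious gains" looks like formally.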
My own guess here is that access to capital will become more important than it is today by an order of magnitude.
In the forager era, capital barely mattered because almost all value was created via labor. With no way to reliably accumulate capital, there was little opportunity to exploit it.
In the farmer era, capital became much more important, mainly in the form of useful land, but labor remained of paramount importance for generating value. If anything, capital made labor more valuable and thus demanded more of it.
In the industrial era, capital became more important again, and now we saw the relationship to labor change. Individuals could have their labor multiplied by the application of capital, and in some cases labor could be replaced by capital, with the remaining labor commanding how that capital was used.
With AI we'll see another shift. AI is very capital intensive and magnifies labor productivity even more than heavy industry does. It seems plausible that in the not-too-distant future most or all labor will be automated away, and all that will matter for economic production is owning capital or having ideas for how to effectively deploy it, since the rest will be fully automated. During the transition labor will matter, but capital will matter more.
So now, as always, it's a good idea to have a lot of money, and having enough money to invest in capital improvements that can generate returns will matter even more in the near future with labor further marginalized.
My actionable advice would be to find ways to accumulate as much money as you can, and to be less willing to trade money for other things in the short term, since you'll soon have the opportunity to deploy it for outsized gains.
It's useful to note that of the three things you list, only one is rivalrous. Mental habits and personal datasets are not, perhaps, equally available to everyone, but the limit is not about division but just ... personal limits. There are no zero-sum elements where someone gets better mental habits only by someone else getting less.
Classic capital IS rivalrous and, on large scales, zero-sum. Money is only valuable if it moves from one entity to another, and only because nobody can have enough of it.
To the extent that having money is (even more than today) the best way to get more money, it seems likely that the best advice to prepare for that world is to collect "classic capital" in forms that will retain or increase in value as things change. You can always HIRE people (or AIs) with good habits and personal datasets.
Note: like "first, catch a rabbit", or "first, create the universe" as the first step in making rabbit stew, this advice may be correct, but that doesn't make it useful.
I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.
I thought a bit about datasets before, and to me it seems like what most needs collecting is detailed personal preference data: input-output examples of how you generally prefer information to be filtered, processed, communicated to you, and refined with your input; what your success criteria are for tasks; and the places in your day flow / thought flow where the assistant needs to actively intervene and correct you, especially where you feel you could benefit most from cognitive extensions, based on your bottlenecks. It could initially be too hard to infer all this from screen logs alone.
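To make the input-output idea concrete, here is one possible shape for such a preference dataset. Every field name, task category, and example here is hypothetical, purely to illustrate the kind of record worth collecting.

```python
# A minimal sketch of a personal preference dataset as input-output examples.
# All field names, task categories, and contents are hypothetical.

import json

preference_examples = [
    {
        "task": "filter_news",
        "input": "20 AI headlines from today",
        "preferred_output": "3 items max, only capability or policy news, one-line summaries",
        "success_criteria": "nothing important from my field is missing",
    },
    {
        "task": "intervene",
        "input": "user has been context-switching every 5 minutes for an hour",
        "preferred_output": "a single gentle prompt to pick one task, not a lecture",
        "success_criteria": "I return to focused work without feeling policed",
    },
]

def examples_for(task, examples):
    """Collect the recorded preferences for one task type,
    e.g. to build a few-shot prompt for a future assistant."""
    return [e for e in examples if e["task"] == task]

# Persisting as plain JSON keeps the dataset portable across future assistants.
serialized = json.dumps(preference_examples, indent=2)
```

The design choice worth noting is that each record pairs a concrete situation with the preferred behavior and an explicit success criterion, which is exactly the kind of signal that is hard to reconstruct from raw screen logs.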
Thanks for this post. It's refreshing to hear about how this technology will impact our lives in the near future without any references to it killing us all.
This doesn't seem wrong, but it's extremely thin on "how" and reads like a blog post generated by SEO (which I guess these days means generated by an LLM trained to value what SEO values?).
Like, I know that at some point, one of the GPTs will be useful enough to justify a lawyer spending billable time with it, but this post did not tell me anything about how to get from my current state to the state of being able to analyze whether it's useful enough, or whether I'm just unskilled, or some other confounder.
The question I was exploring was not how to find the tools that make their users more productive, as I expect good curation to appear along with the tools, but whether there are resources that would be necessary to use those tools yet difficult to acquire in the short window after the tools are released.
The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good-quality posts. Thank you for the feedback!
What about agentic AGI? You only discuss tools here.
At the individual level, I expect agentic AI to allow even more powerful tools, like ACT acting as a semi autonomous digital assistant, or AutoGPT acting as a lower level executor, taking in your goals and doing most of the work.
Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect that we'll be leaving the "world as normal but faster" phase where tools are useful, and what happens next depends on our alignment plan, I guess.
OK, I think we are in agreement then. I think we'll be leaving the "world as normal but faster" phase sooner than you might expect -- for example, by the time my own productivity even gets a 3x boost.
We're in agreement. I'm not sure what my expectation is for the length of this phase or for the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time when productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.