I am indeed - I'll PM you and we can make it happen
The problem with MPI is that "Anyone can trivially spend a small amount of money and no effort to make a larger amount of money" is the kind of thing that quickly gets saturated. If the only reason you don't have an MPI is that all the other MPIs out there have already eaten up the free profits, it's not a great measure of capabilities.
I like the other three, but I also wonder how close to TUI we already are. It wouldn't shock me that much if we were already most of the way to TUI, and the only reason this hasn't led to robotics being solved is that the AI itself has limitations - i.e. it can build an interface to control the robot, but the AI itself (not the interface) ends up being too slow, too high-latency, and too unable to plan things properly to actually perform at the level it needs to. (And I expect that slowness to continue, such that creating/distilling small models is better for robotics use.)
Interesting. You have convinced me that I need a better definition for this approximate level of capabilities. I do expect AI to advance faster than legacy organisations will adapt, so a world where "10% of jobs can be done by AI" is possible, but the AI capabilities required for it are higher than "can replace 10% of the jobs that existed in 2022".
So, my understanding of ASI is that it's supposed to mean "A system that is vastly more capable than the best humans at essentially all important cognitive tasks." Currently, AIs are indeed more capable, possibly even vastly more capable, than humans at a bunch of tasks, but they are not more capable at all important cognitive tasks. If they were, they could easily do my job, which they currently cannot.
Two terms I use in my own head that largely correlate with my understanding of what people meant by the old AGI/ASI:
"Drop-in remote worker" - A system with the capabilities to automate a large chunk of remote workers (I've used 50% before, but even 10% would be enough to change a lot) by doing those workers' jobs with similar oversight and context to a human contractor. In this definition, the model likely gets a lot of help to set up, but can then work autonomously. E.g. if Claude Opus 4.5 could do this, but couldn't have built Claude Code for itself, that's fine.
This AI is sufficient to cause severe economic disruption and likely to advance AI R&D considerably.
"Minimum viable extinction" - A system with the capabilities to destroy all of humanity, if it desires to. (The system is not itself required to survive this.) This is the point at which sufficiently bad alignment failures stop giving us a second try. Unfortunately, this one is quite hard to measure, especially if the AI itself doesn't want to be measured.
The Australian financial year starts and ends in the middle of the year, so it makes no difference to me if we do it in 2026. Let's make it happen :)
I live in Australia, so I don't get a tax advantage for this. I am likely to still donate 1k or so even without tax advantages, but before doing so I wanted to check whether anyone wants to do a donation swap: I donate to any of these Australian tax-advantaged charities (largely in global health) in exchange for you donating the same amount to charities that are tax-advantaged in the US.
I am willing to donate up to 3k USD to MIRI and 1.5k USD to Lightcone if I can do so tax-advantaged. If nobody takes me up on this, I'll still probably donate 2k USD to MIRI and 1k USD to Lightcone. I will accept offers to match only one of these two donations.
I'm also open to any alternative ways, currently unknown to me, of gaining a tax advantage from Australia to achieve the same outcome.
Fair point - if you add that you can't assess it at less than you paid for it, this problem goes away.
Wouldn't the equilibrium here trend towards a bunch of wasted labor? I could deliberately lowball the value of the land, and then, if someone offers a larger amount, simply say no and start paying tax on the larger amount - potentially paying less tax in the meantime while losing nothing if I'm called out for it. There's no downside to me personally, and if this became common, it'd be harder to legitimately buy anything. It seems like you'd need to pay some sort of fee to the entity credibly offering the larger amount to make it worth it.
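To make the asymmetry concrete, here is a minimal numeric sketch of that lowball strategy, assuming a Harberger-style self-assessed tax; the 2% rate and all dollar figures are invented for illustration rather than taken from the proposal being discussed:

```python
# Sketch of the lowball strategy under an assumed self-assessed
# (Harberger-style) tax. The 2% rate and all dollar amounts are made-up.

TAX_RATE = 0.02  # assumed annual tax rate on the self-assessed value

def yearly_tax(assessed_value: float) -> float:
    """Annual tax owed on a self-assessed valuation."""
    return assessed_value * TAX_RATE

true_value = 500_000          # what the land is actually worth to me (assumed)
lowball_assessment = 300_000  # the value I declare to minimise tax (assumed)

honest_tax = yearly_tax(true_value)           # 10,000 per year
lowball_tax = yearly_tax(lowball_assessment)  # 6,000 per year
savings_per_year = honest_tax - lowball_tax   # 4,000 per year while nobody objects

# If someone credibly offers 450k, I decline, raise my assessment to match,
# and resume paying tax at the higher level. I keep the land, keep every year
# of past savings, and am no worse off than if I had assessed honestly.
offer = 450_000
tax_after_called_out = yearly_tax(offer)      # 9,000 per year

print(f"Savings per year while lowballing: {savings_per_year:,.0f}")
print(f"Tax per year after matching the offer: {tax_after_called_out:,.0f}")
```

Under these assumed numbers, the lowballer banks 4k a year until challenged and simply matches the offer once challenged, which is the asymmetry that a fee paid to the credible bidder would need to correct.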
This is the kind of content I keep coming back to this site for.
I also like that it's practical and practicable in day-to-day life while also being relevant to bigger, important questions.
And we have made it happen! Thanks to both Aloekine and Lightcone :)