I think the claim here could be summarized as: charities may have a vested interest in the perpetuation of the problem they're trying to solve (a conflict of interest). However, it's worth observing that this isn't always the case.
For example, I know volunteers who help the homeless. Everyone in the org is a volunteer except the cook. If homelessness disappeared tomorrow, they could just take a rest day or go to the park. This is a first prevention mechanism: minimizing vested interests.
Sometimes, though, you may need employees for your organization to be effective. In this case, they do need to pay a cook: it's hard, specialized work, and the workers are usually not high-income and need the money. Maybe it would be ideal to find a volunteer cook, but that isn't always possible. Even then, there is still ethics. If the organization and its people are genuinely ethical, they should not respond to the incentive by increasing homelessness (and in any case, I think it's fairly difficult to increase homelessness on purpose, and even more difficult to do so in a way that makes a personal difference). This is a second mechanism: ethical reflection.
But conflicts of interest are extremely important to keep an eye on, in everyday discussions as well as in political and social cases (see: scout mindset).
As for why the problems haven't been solved, I think it could be that it's just not that simple. It's like asking a farmer, "If fertilizers worked well, why do you need to keep fertilizing the soil after all these years?" Some problems may demand constant, permanent attention. Don't volunteer trying to solve homelessness; instead, volunteer trying to make the lives of homeless people better. Hopefully that one day lowers or eliminates homelessness as well, but we shouldn't condition helping on that.
Interesting topic. Do you feel any anecdotal differences in your wellbeing with abundant filtering? Significantly better breathing? Is your city significantly polluted?
I mostly agree, and I want to echo 'tailcalled' that there's another layer of intelligence that builds upon humans: civilization, or human culture (although surely there's some merit to our "architecture", so to speak!). We've found that you can teach machines essentially any task (because of Turing completeness). That doesn't mean a single machine, by itself, warrants being called a 'universal learner': such universality would come from the algorithms running on said machine. I think there's a degree of universality inherent to animals and hence to humans as well. We can learn to predict and plan very well from scratch (many animals learn with little or no parenting required), are curious to learn more, can memorize and recall things from the past, etc.
However, I think the perspective of our integration with society is important. We probably would not reach remotely similar levels of intelligence (in the sense of the ability to solve problems, act in the world, and communicate) without instruction -- much like the instruction Turing machines receive when programmed. And this instruction has been refined over many generations through other improvement processes (like the 'quasi-genetic' selection of which cultures have the best teaching methods and the best outcomes, and of course teachers thinking about how to teach best, what to teach, etc.).
I think there's the insight that our brain is universal simply because we can probably follow or memorize any algorithm (i.e. explicit set of instructions) that fits in our memory. But our culture also equips us with more powerful forms of universality, where we detect the most important problems, solve them, and evolve as a civilization.
I think the most important form of universality is that of meaning and ethics: discovering what is meaningful, what activities we should pursue, what is and isn't ethical, and what makes a good life. I think we're still not very firmly on this ground of universality, let alone the machines we create.
Lower clock rates mean lower energy usage per operation (because lower frequencies allow lower supply voltages, and dynamic power scales with the square of the voltage). Even the transportation of physical goods faces the same dilemma. However, we know that in real life we have to balance environmental degradation (i.e. everything decaying) and expansion potential (it may be better to spend more energy per op now as an investment) against the process velocity needed to achieve our goals.
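To make the scaling concrete, here is a minimal toy model (my own illustrative sketch, not measured hardware data): dynamic power is modeled as P = C·V²·f, and the supply voltage is assumed to scale linearly with clock frequency, so energy per operation E = P/f = C·V² grows quadratically with clock rate. All parameter names and values here are hypothetical.

```python
def energy_per_op(freq_ghz, capacitance=1.0, volts_per_ghz=0.3):
    """Energy per operation (arbitrary units) under a linear V-f scaling assumption.

    Model: V = volts_per_ghz * f, P = C * V^2 * f, so E = P / f = C * V^2.
    """
    voltage = volts_per_ghz * freq_ghz
    return capacitance * voltage ** 2

# Under this model, halving the clock quarters the energy per operation:
e_fast = energy_per_op(2.0)  # 1.0 * (0.3 * 2.0)^2 = 0.36
e_slow = energy_per_op(1.0)  # 1.0 * (0.3 * 1.0)^2 = 0.09
print(e_fast / e_slow)       # 4.0
```

The point of the sketch is the dilemma mentioned above: running slower is cheaper per operation, but the same work then takes longer, which is the trade-off against process velocity.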
You can also consider the 2nd law of thermodynamics (implying finite lifetimes for everything): even the Sun itself will one day burn out... although of course this is more of a science-fiction discussion.