Grain of Truth (Reflective Oracles). Understanding an opponent perfectly requires greater intelligence or something in common.
And understanding yourself. Of course, you have plenty in common with yourself. But, you don't have everything in common with yourself, if you're growing.
Dovetailing. Every meta-cognition enthusiast reinvents Levin/Hutter search, usually with added epicycles.
To frame it in a very different way, learning math and generally gaining lots of abstractions and getting good wieldy names for them is super important for thinking. Doing so increases your "algorithmic range", within your very constrained cognition.
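Since Levin/Hutter search is named above, here is a minimal self-contained sketch of the dovetailing schedule at its core, with the i-th program standing in for "a program of prior weight 2^-i". The names (`dovetail`, `make_counter`) and the toy programs are mine, and real Levin search also charges for verification cost; this only shows the time-sharing.

```python
# Minimal dovetailing sketch (the scheduling core of Levin-style search).
# "Programs" are Python generators; one next() call counts as one step.

def dovetail(program_makers, is_solution, max_phase=20):
    """In phase k, give the i-th program 2**(k - i) steps, so a program with
    index i that halts with a solution in t steps is found after roughly
    2**i * t total work -- within a constant factor of its own runtime."""
    running = []                                   # (index, generator) pairs, started lazily
    for phase in range(max_phase):
        if phase < len(program_makers):            # start one new program per phase
            running.append((phase, program_makers[phase]()))
        for i, gen in running:
            for _ in range(2 ** (phase - i)):      # this phase's step budget for program i
                try:
                    out = next(gen)
                except StopIteration:              # program i already halted
                    break
                if out is not None and is_solution(out):
                    return i, out
    return None

# Toy usage: the i-th "program" idles for i+1 steps, then emits i*i.
def make_counter(i):
    def gen():
        for _ in range(i + 1):
            yield None
        yield i * i
    return gen

programs = [make_counter(i) for i in range(10)]
print(dovetail(programs, lambda out: out == 49))   # -> (7, 49)
```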
Chaitin's Number of Wisdom. Knowledge looks like noise from outside.
To a large extent, but not quite exactly (which you probably weren't trying to say), because of "thinking longer should make you less surprised". From outside, a big chunk of alien knowledge looks like noise (for now), true. But there's a "thick interface" where just seeing stuff from the alien knowledgebase will "make things click into place" (i.e. will make you think a bit more / make you have new hypotheses (and hypothesis bits)). You can tell that the alien knowledgebase is talking about Things even if you aren't very familiar with those Things.
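For reference, the standard definition (my addition, not part of the exchange above): for a prefix-free universal machine $U$,

$$\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}.$$

The first $n$ bits of $\Omega_U$ suffice to decide halting for every program of length at most $n$, yet $\Omega_U$ is algorithmically random, so no computable test tells those bits apart from coin flips. That is the exact-limit version of "knowledge looks like noise from outside"; the "thick interface" point above is about the in-between regime, short of that limit.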
Lower Semicomputability of M. Thinking longer should make you less surprised.
I'd go even further and say that in "most" situations in real life, if you feel like you want to think about X more, then the top priority (do it first, and keep doing it throughout) is to think of more hypotheses.
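To spell out the formal version of "thinking longer should make you less surprised" (my gloss, in standard Solomonoff notation, not quoted from above): $M$ is lower semicomputable, i.e. it is the increasing limit of computable approximations,

$$M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-|p|}, \qquad M_t(x) \;=\; \sum_{\substack{|p| \le t,\ U(p)\ \text{outputs an extension of}\ x \\ \text{within}\ t\ \text{steps}}} 2^{-|p|}, \qquad M_t(x) \uparrow M(x).$$

More computation can only discover additional programs that predict $x$, never discard one, so $M_t(x)$ is nondecreasing in $t$ and the surprisal $-\log_2 M_t(x)$ only falls. The "think of more hypotheses first" heuristic reads as the resource-bounded echo of this: each newly found hypothesis can only raise the mixture's estimate of the data.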
A basic issue with a lot of deliberate philanthropy is the tension between:
The kneejerk solution I'd propose is "proof of novel work". If you want funding to do X, you should show that you've done something to address X that others haven't done. That could be a detailed insightful write-up (which indicates serious thinking / fact-finding); that could be some work you did on the side, which isn't necessarily conceptually novel but is useful work on X that others were not doing; etc.
I assume that this is an obvious / not new idea, so I'm curious where it doesn't work. Also curious what else has been tried. (E.g. many organizations do "don't apply, we only give to {our friends, people we find through our own searches, people who are already getting funding, ...}".)
In this example, you're trying to make various planning decisions; those planning decisions call on predictions; and the predictions are about (other) planning decisions; and these form a loopy network. This is plausibly an intrinsic / essential problem for intelligences, because it involves the intelligence making predictions about its own actions--and those actions are currently under consideration--and those actions kinda depend on those same predictions. The difficulty of predicting "what will I do" grows in tandem with the intelligence, so any sort of problem that makes a call to the whole intelligence might unavoidably make it hard to separate predictions from decisions.
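A toy rendering of that loop (my framing, not a quote): the decision rule consults a prediction of the very choice being made, and for some preference structures naive iteration never finds a self-consistent answer.

```python
# Toy self-reference loop: the choice depends on a prediction of that same choice.

def decide(predicted_choice):
    # Contrived anti-coordination with yourself: if you predict "work",
    # you'd rather "rest", and vice versa -- so no prediction is self-fulfilling.
    return "rest" if predicted_choice == "work" else "work"

def find_self_consistent_choice(decide, initial="work", max_iters=10):
    prediction = initial
    for _ in range(max_iters):
        choice = decide(prediction)
        if choice == prediction:       # prediction matches the decision it induces
            return choice
        prediction = choice            # revise the prediction and try again
    return None                        # never settles: predictions and decisions
                                       # can't be cleanly separated here

print(find_self_consistent_choice(decide))   # -> None
```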
A further wrinkle / another example is that a question like "what should I think about (in particular, what to gather information about / update about)", during the design process, wants these predictions. For example, I run into problems like:
Another kind of example is common knowledge. What people actually do seems to be some sort of "conjecture / leap of faith", where at some point they kinda just assume / act-as-though there is common knowledge. Even in theory, how is this supposed to work, for agents of comparable complexity* to each other? Notably, Lobian handshake stuff doesn't AFAICT especially look like it has predictions / decisions separated out.
*(Not sure what complexity should mean in this context.)
We almost certainly want to eventually do uploading, if nothing else because that's probably how you avoid involuntary pre-heat-death death. It might be the best way to do supra-genomic HIA, but I would rather leave that up to the next generation, because it seems both morally fraught and technically difficult. It's far from clear to me that we ever want to make ASI; why ever do that rather than just have more human/humane personal growth and descendants? (I agree with the urgency of all the mundane horrible stuff that's always happening; but my guess is we can get out of that stuff with HIA before it's safe to make ASI. Alignment is harder than curing world hunger and stopping all war, probably (glib genie jokes aside).)
Mind uploading is probably quite hard. See here. It's probably much easier to get AGI from partial understanding of how to do uploads, than to get actual uploads. Even if you have unlimited political capital, such that you can successfully prevent making partial-upload-AGIs, it's probably just very technically difficult. Intelligence amplification is much more doable because we can copy a bunch of nature's work by looking at all the existing genetic variants and their associated phenotypes.
I assume this got stuck / sidelined; do you know why?
Since it's slower, the tech development cycle is faster in comparison. Tech development --> less expensive tech --> more access --> less concentration of power --> more moral outcomes.