I absolutely "disagree that AI systems in the near-future will be capable of distinguishing valuable from non-valuable outcomes about as reliably as humans". In particular, I think progress here in the near future will resemble self-driving-car progress over the near past. That is to say, it's far easier to make something that's mostly right most of the time than to make something that is reliably not wrong, a standard that I think humans, under ideal conditions, can in fact meet.
Basically, I think that the current paradigm (in general: unsupervised deep learning on large datasets using reasonably-parallelizable architectures, possibly followed by architectural adjustments and/or supervised tuning) is unsuited...
What about Moore's Law?
I understand that it's a totally different situation. There is no opponent, and plenty of positive feedback. However, I still think that "up so far, so up forever" is just as much of a fallacy with chips as with markets.
"Economically develop" is only meaningful against some baseline. Israel has had policies that clearly massively harm Gaza's development, and other policies that somewhat help it. There are also other factors Israel doesn't control, which probably are a net positive; economies in general tend to develop over time.
So if the baseline is some past year, or if it's the counterfactual of a blockade with no mitigating policies, there has been development. But if the baseline is the counterfactual with no blockade at all, I'd guess not so much.
In other words: Israel's "strategy" has included at least some things that, in themselves, help Gaza's development, but Israel has still hurt that development on net.
(I'm not an expert or an insider here.)