I'm sympathetic to some of your arguments, but even if we accept that the current paradigm will lead us to an AI pretty similar to a human mind, and even in the best case, I'm already not very optimistic that a scaled-up, roughly random, almost-human mind is a great outcome. I simply disagree where you say this:
>For example, humans are not perfectly robust. I claim that for any human, no matter how moral, there exist adversarial sensory inputs that would cause them to act badly. Such inputs might involve extreme pain, starvation, exhaustion, etc. I don't think the mere existence of such inputs means that all humans are unaligned.
Humans aren't that aligned at the extremes, and the extremes matter when we're talking about the smartest entity making every important decision about everything.
Also, your general arguments that the current paradigms aren't that bad are reasonable, but again, I think our situation is much closer to all-or-nothing: if we get pretty far with RLHF or whatever, scale the model up until it's extremely smart and thus eventually making every decision of consequence, then unless the alignment was near-perfect, the chance that the remaining problematic parts screw us over seems uncomfortably high to me.
I can't even get a good answer to "What's the GiveWell of AI Safety?" so I can quickly donate to a reputable, widely agreed-upon option without much thinking; at best I get old lists of a ton of random small orgs, and I give up. I'm not very optimistic that ordinary, less-convinced people who want to help are having an easier time.
It seems quite different. The main argument in that article is that climate change wouldn't make the lives of readers' children much worse or shorter, and that's not the case for AI.
While NVDA is naively the most obvious play (the vast majority of GPU-based AI systems use their chips), I fail to see why you'd expect it to outperform the market, at least in the medium term. Even if you don't believe in the EMH, I assume you acknowledge things can be more or less priced in? Well, NVDA is such an obvious choice that all the main arguments for it do seem priced in already, which has helped push it to a P/E ratio of 55.
I also don't see OpenAI making a huge dent in MSFT's numbers anytime soon. Almost all of MSFT's price is going to be determined by the rest of their business. Quick googling suggests revenue of $3m for OpenAI and $168b total for MSFT in 2021. Even if OpenAI were already 100 times larger, I still wouldn't see how a bet on MSFT because of it alone is justified. It seems like this pick was made just because OpenAI is popular, not out of any real analysis beyond that. Can you explain what I'm missing?
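To make the proportion explicit, here's a quick back-of-the-envelope check using the rough googled 2021 figures above (both numbers are approximate):

```python
# Rough 2021 figures cited above: OpenAI revenue ~$3m, MSFT revenue ~$168b.
openai_revenue = 3e6
msft_revenue = 168e9

share = openai_revenue / msft_revenue
share_if_100x = (openai_revenue * 100) / msft_revenue

print(f"OpenAI as a share of MSFT revenue: {share:.5%}")      # ~0.00179%
print(f"Even if OpenAI were 100x larger: {share_if_100x:.2%}")  # ~0.18%
```

Even in the 100x scenario, OpenAI would be well under 1% of Microsoft's revenue.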
I do like your first three choices of TSM, Google, and Samsung (is that last one really much of an AI play, though?).
No, it's the blockchain Terra (with Luna being its main token).
There is little reason to think that's a big issue. A lot of data is semi-tagged, and some of the ML-generated data can be removed either that way or by being detected by newer models. And in general, as long as the 'good' kind of data also keeps increasing, model quality will keep increasing too, even with some extra noise.
What's the GiveWell/AMF of AI Safety? I'd like to donate occasionally. In the past I've only done so for MIRI a few times. Quick googling fails to return anything useful in the top results, which is odd given how much seems to be written about the subject on LW/EA and other forums every week.
In Bulgaria (where Cyrillic was invented), writing in Latin script is common (especially before Cyrillic support was good) but frowned upon, as it's considered uneducated and ugly. The way we do it is to just replace each letter with the equivalent Latin letter one-to-one and do whatever with the few that don't fit (e.g. just use y for ъ, though some might use a; ч is just ch, etc.). So молоко is just moloko, водка is vodka, стол is stol, etc. This is also exactly how it works on my keyboard with the phonetic layout.
Everyone else who uses Cyrillic online seems to get it when you write like that, in my experience, though nowadays it's rarer.
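The informal one-to-one scheme described above can be sketched in a few lines. Note that the exact mapping is my own illustrative choice (especially for the letters that "don't fit", like ъ → y); it's a folk convention, not an official standard:

```python
# Informal Bulgarian Cyrillic -> Latin transliteration, one letter at a time.
# Multi-letter outputs (zh, ch, sh, ...) handle sounds Latin lacks a single
# letter for; the choices here are one common convention among several.
BG_TO_LATIN = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ж": "zh", "з": "z", "и": "i", "й": "y", "к": "k", "л": "l",
    "м": "m", "н": "n", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "у": "u", "ф": "f", "х": "h", "ц": "ts", "ч": "ch",
    "ш": "sh", "щ": "sht", "ъ": "y", "ь": "y", "ю": "yu", "я": "ya",
}

def latinize(text: str) -> str:
    # Characters not in the table (spaces, punctuation) pass through unchanged.
    return "".join(BG_TO_LATIN.get(ch, ch) for ch in text.lower())

print(latinize("молоко"))  # moloko
print(latinize("водка"))   # vodka
print(latinize("стол"))    # stol
```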