Isn't the academic grad school basically this same model, at scale? I do not see any improvements here that are scalable.
Indeed, a lot of the most ridiculous human behavior is non-experts mimicking experts randomly and picking the wrong attributes.
Khamenei is not actually challenged by new people in the system; his position is more or less permanent. But to reach that position in the first place, yes, he must have done some things "right."
Telegram was a much better choice for this purpose. Their APIs are completely open (supporting alternative third-party clients and bots has been one of their priorities for years), and there are fantastic wrapper libraries available. Their clients are also native, not the Electron crap. There was already (at least) one Telegram [TUI](https://github.com/zevlg/telega.el), too.
PS: Cute. :-)
It would be great if LessWrong online events could be recorded and put in a podcast. Live is great if you plan to participate, but for just listening, it sucks.
One of the good examples I have seen is the Techmeme podcast; they host a lot of Clubhouse/Twitter/etc. live chats and post the content to their podcast. Some tools have recording as a built-in feature, e.g., Telegram's voice chats.
Can you provide concrete examples of the specialized pieces?
I think people are already tolerant of the level of hypocrisy that can be useful. For example, a new convert to Islam will have more slack for doing un-Islamic things.
Anyhow, this is not an isolated matter. Any kind of punishment has the potential to create adverse effects: banning ransom payments can cause secret ransom payments, banning drugs empowers gangs, banning one carcinogenic chemical can make companies switch to an even worse carcinogenic chemical, and so on. There is no general solution to these, but I'm inherently skeptical of claims that favor the status quo of "rabbits" in a rabbit-stag game.
This is the most intuitive answer to me, as well. It's also extremely difficult, and it's unclear how it would be useful for doing alignment generally.
Perhaps one idea is to train AI to write legible code, then use human code review on it. This seems about as safe as our current mode of software development, provided the AI is not actively obfuscating (a big assumption).
There are weaker computational machines than Turing machines, like regexes. But you don't really care about that; you just want to ban automatic reasoning. I think it's impossible to succeed with that constraint; playing Go is hard, and people can't just read code that plays Go well and "learn from it."
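The point about machines weaker than Turing machines can be made concrete. A minimal sketch (the pattern and helper below are my own illustration, not from the comment): regexes recognize only regular languages, so a language like aⁿbⁿ, which requires counting, needs strictly more computational power than a regex provides.

```python
import re

# A regex happily matches "some a's followed by some b's" — a regular language.
assert re.fullmatch(r"a+b+", "aaabb")

# But "equal numbers of a's and b's" (a^n b^n) is provably NOT regular:
# recognizing it requires unbounded memory (e.g. a counter or a stack),
# which puts it above regexes in the computational hierarchy.
def is_anbn(s: str) -> bool:
    """Check membership in a^n b^n using counting, not a regex."""
    n = len(s)
    half = n // 2
    return n % 2 == 0 and s == "a" * half + "b" * half

assert is_anbn("aaabbb")
assert not is_anbn("aaabb")
```

This hierarchy (regular → context-free → Turing-complete) is why restricting a system to weaker machines genuinely limits what it can compute, though, as the comment notes, that is not the same as banning automatic reasoning.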
Just use Julia ;)