I have never done cryptography, but I imagine that working in it means operating in a context of extremely resourceful adversarial agents, so you have to give up the kind of casual, barely noticed neglect of extremely weird, artificial-sounding edge cases and seemingly unlikely scenarios, because that is where the danger lives: your adversaries may force those weird edge cases to happen, and they are the part of the system's behavior you haven't sufficiently thought through.
Maybe one possible analogy with AI alignment, at least...
Council of Europe ... (and Russia is in, believe it or not).
It's not. It was Yeltsin who got Russia in during the nineties, and then Russia was excluded in 2022.
What is the connection between the concepts of intelligence and optimization?
I see that optimization implies intelligence (optimizing a sufficiently hard task sufficiently well requires sufficient intelligence). But it feels like the case for existential risk from superintelligence depends on the idea that intelligence is optimization, or implies optimization, or something like that. (If I remember correctly, people sometimes suggest creating "non-agentic AI", or "AI with no goals/utility", and EY says that they are trying to invent non-wet water o...
Is there currently any place for possibly stupid or naive questions about alignment? I don't want to bother people with questions that have probably already been addressed, but I don't always know where to look for existing approaches to a question I have.
...The OpenBSD project to build a secure operating system has also, in passing, built an extremely robust operating system, because from their perspective any bug that potentially crashes the system is considered a critical security hole. An ordinary paranoid sees an input that crashes the system and thinks, “A crash isn't as bad as somebody stealing my data. Until you demonstrate to me that this bug can be used by the adversary to steal data, it's not extremely critical.” Somebody with security mindset thinks, “Nothing inside this subsystem is supposed to be
Sorry, I know this is tangential, but I'm curious — is this based on it being less psychosis-inducing in this investigation, or are there more data points / is it known to be otherwise more aligned as well?