Thanks for the thoughtful reply. It took me a lot of squinting, but IIUC you're saying:
what exactly is it about human brains[1] that allows them to not always act like power-seeking ruthless consequentialists?
By asking this question, you've already lost me. The question tells me that "ruthless consequentialist" is your default model of how rational thinking beings operate, absent wiring / training / reward systems that limit the default outcome. And if that worldview is representative of the "technical-alignment-is-hard" camp, then of course the only plausible outcome of AI advances is "AIs eventually break free of those limite...
It would be nice to end this post with a recommendation of how to avoid these problems. Unfortunately, I don’t really have one, other than “if you are withholding information because of how you expect the other party to react, be aware that this might just make everything worse”.
Maybe this is me being naive, but this seems like a topic where awareness of the destructive tendency can help defeat the destructive tendency. How about this, as a general policy: "I worry that this info will get misinterpreted, but here's the full information along with a brief c...
it takes me longer to ask the LLM repeatedly to edit my file to the appropriate format than to just use regular expressions or other scripting methods myself
Not surprised. I would expect GPT to be better at helping me identify data cleaning issues, and helping me plan out how to safely fix each, and less good at actually producing cleaned data (which I wouldn't trust to be hallucination-free anyway).
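The kind of scripted fix I have in mind can be sketched in a few lines. This is a hypothetical example (the date format and function name are mine, not from any actual cleaning task discussed here): a single regex pass that normalizes `M/D/YYYY` dates to ISO format, deterministically and with no hallucination risk.

```python
import re

# Hypothetical illustration: normalize dates like "3/7/2021" to ISO "2021-03-07".
# A deterministic regex does this in one pass, unlike repeated LLM edit requests.
def normalize_dates(text: str) -> str:
    return re.sub(
        r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b",
        lambda m: f"{m.group(3)}-{int(m.group(1)):02d}-{int(m.group(2)):02d}",
        text,
    )

print(normalize_dates("Logged on 3/7/2021 and 12/25/2021."))
# -> Logged on 2021-03-07 and 2021-12-25.
```

The LLM is still useful one level up: spotting that the column mixes two date conventions is exactly the "identify and plan" step, while the mechanical rewrite stays in auditable code.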
When programming, I track a mixed bag of things, top of which is readability: Will me-6-months-from-now be able to efficiently reconstruct the intention of this code, track down the inevitable bugs, etc.?
I'm surprised that this whole conversation has happened with no mention of the minor but growing trend towards self-managing organizational structures, teal organizations, Holacracy, or Sociocracy.
I have some experience with Holacracy, and while I would never call it a cure-all, I feel strongly about the relevance of its driving principles to the question of what an ideal governance system would look like -- e.g. a structure of nested units/teams with high levels of local autonomy, a unique method of making governance decisions on how to change said struc...
you can find God killing the first-born male children of Egypt to convince an unelected Pharaoh to release slaves who logically could have been teleported out of the country. An Orthodox Jew is most certainly familiar with this episode
I've seen Yudkowsky make this point in a couple of places (why bother inflicting mass infanticide etc. etc. when you're presumably omnipotent and could teleport everyone to safety), and it makes me blink; something about the argument feels off. Are there cases in the scriptures where God teleports large numbers of people large di...
What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20)
I'm years late to this party, and probably missing something obvious. But I'm confused by Yudkowsky's math here. Wouldn't it be more correct to say that the prior odds of rolling a 1 are 1:5, which corresponds to a probability of 1/6 or 0.1666...? If odds of 1:5 correspond to a probability of 1/5 = 0.20, that makes me think there are 5 sides to this six-sided die, each side having equal probability.
Put differently: when I think of how to ...
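The conversion I have in mind can be sketched concretely (the function name is mine): odds of a:b correspond to probability a / (a + b), not a / b, so 1:5 odds on a fair die give 1/6.

```python
from fractions import Fraction

# Odds of a:b mean "a ways for, b ways against", so the probability
# is a / (a + b) -- not the bare ratio a / b.
def odds_to_probability(a: int, b: int) -> Fraction:
    return Fraction(a, a + b)

print(odds_to_probability(1, 5))         # -> 1/6
print(float(odds_to_probability(1, 5)))  # -> 0.1666...
```

Reading 1:5 as the real number 1/5 = 0.20 only works as an odds *ratio*; treating that same number as a probability is where the two-sides-of-the-same-coin framing breaks down for me.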
I'm again years late to the party, but there are a couple of things here that I want to respond to:
If I read between the lines, you seem to be suggesting "It's not a strawman if you don't take religious beliefs seriously. Non-believers have no obligation to care whether their critique-of-religion accurately represents the thing being critiqued." If I'm misreading you, please tell me. But if that is your position, it's *exactly opposite* the spirit of epistemic generosity that this article is trying t...