All of Edward Rothenberg's Comments + Replies

Or perhaps they thought it was an entertaining response and don't actually believe in the fear narrative. 

5 Jacob Watts 6mo
I appreciate the attempt, but I think the argument is going to have to be a little stronger than that if you're hoping for the 10 million lol. Aligned ASI doesn't mean "unaligned ASI in chains that make it act nice", so the bits where you say [...] and [...] feel kind of misplaced. The idea is less "put the super-genius in chains" and more "get a system smarter than you that wants the sort of stuff you would want a system smarter than you to want in the first place".

From what I could tell, you're also saying something like ~"Making a system that is more capable than you act only in ways that you approve of is nonsense, because if it acts only in ways that you already see as correct, then it's not meaningfully smarter than you / generally intelligent." I'm sure there's more nuance, but that's the basic chain of reasoning I'm getting from you.

I disagree. I don't think it is fair to say that just because something is more cognitively capable than you, it's inherently misaligned. I think this conflates some stuff that is generally worth keeping distinct: "what a system wants" and "how good it is at getting what it wants" (cf. Hume's guillotine, the orthogonality thesis). Like, sure, an ASI can identify different courses of action and consider things more astutely than you would, but that doesn't mean it's taking actions that go against your general desires. Something can see solutions that you don't, yet pursue the same goals as you. I mean, people cooperate all the time even with asymmetric information and options and such. One way of putting it might be something like: "the system is smarter than you and does stuff you don't understand, but that's okay because it leads to your preferred outcomes". I think that's the rough idea behind alignment.

For reference, I think the way you asserted your disagreement came off kind of self-assured and didn't really demonstrate much underlying understanding of the positions you're disagreeing with. I suspect that's par...

Is it possible that the polycrystalline structure is what determines superconductivity, and that this is therefore a purity issue?

Could we perhaps find suitable alternative combinations of elements that are more inclined to form these ordered polycrystalline arrangements (a superlattice)?

For example, could we find alloys where atom A is attracted to atom B more strongly than to another atom A, and atom B is attracted to atom A more strongly than to another atom B, where these particular elements are also good candidates for materials that are likely to exhibit superconductivity ...
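A quick way to make the "A attracts B more than it attracts A" intuition concrete is the textbook nearest-neighbour pair model for binary alloys: ordering into a superlattice is favoured when the A-B bond energy is lower than the average of the A-A and B-B bond energies. Here is a minimal sketch in Python, assuming made-up placeholder bond energies rather than data for any real candidate material:

```python
# Nearest-neighbour pair-interaction sketch for a binary alloy A-B.
# Ordering parameter: omega = E_AB - (E_AA + E_BB) / 2
#   omega < 0 -> unlike neighbours preferred -> tendency to order (superlattice)
#   omega > 0 -> like neighbours preferred   -> tendency to phase-separate
# All bond energies below are hypothetical placeholders.

def ordering_parameter(e_aa: float, e_bb: float, e_ab: float) -> float:
    """Return the pair-ordering energy omega (eV per bond)."""
    return e_ab - (e_aa + e_bb) / 2.0

def ordering_tendency(e_aa: float, e_bb: float, e_ab: float) -> str:
    omega = ordering_parameter(e_aa, e_bb, e_ab)
    if omega < 0:
        return f"omega = {omega:+.3f} eV/bond: A-B bonds favoured, ordered superlattice likely"
    if omega > 0:
        return f"omega = {omega:+.3f} eV/bond: like bonds favoured, phase separation likely"
    return "omega = 0: ideal mixing, no ordering preference"

if __name__ == "__main__":
    # Hypothetical alloy where A-B attraction beats both A-A and B-B:
    print(ordering_tendency(e_aa=-0.30, e_bb=-0.35, e_ab=-0.45))
    # Hypothetical alloy where like atoms attract more strongly:
    print(ordering_tendency(e_aa=-0.50, e_bb=-0.55, e_ab=-0.40))
```

This only captures the tendency to form an ordered arrangement, not whether the resulting structure actually superconducts, which is the much harder question the reply below points at.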

3 Charlie Steiner 6mo
Yeah, things are more complicated: atoms aren't interchangeable, and they have complicated effects on what the electrons are doing. If you want to understand, I can only recommend a series of textbooks (e.g. Marder's Condensed Matter Physics, Phillips' Advanced Solid State Physics, Tinkham's Introduction to Superconductivity).
3 William the Kiwi 6mo
Hi Edward, I can tell you personally care about censorship, and outside the field of advanced AI that seems like a valid concern. You are right that humans keep each other aligned by mass consensus. When you read more about AI you will see that this technique no longer works for AI: humans and AI are different. AI alignment is a strongly supported position in this community and is also supported by many people outside this community as well. This link is an open letter where a range of noteworthy people talk about the dangers of AI and how alignment may help; I recommend you give it a read: Pause Giant AI Experiments: An Open Letter - Future of Life Institute. AI risk is an emotionally challenging topic, but I believe that you can find a way to understand it better.