Topics: intellectual discussion, ML tool, AI x-risks
Idea: Have a therapist present during intellectual debates to notice triggers and help defuse them. Triggers activate a politics mindset, where the goal shifts to status, self-preservation, appearances, looking smart, making the other person look stupid, etc., which makes it hard to think clearly.
Two people I follow will soon have a debate on AI x-risks, which made me think of this. I can't really propose that intervention though, because it would likely be perceived and responded to as if it were a political move itself.
Another idea I had recently, also inspired by one of those people, was to develop a neural network that helps us notice when we're activated in that way, so we can become aware of it and defuse it. AI is too important to let our egos get in the way (though that's easier said than done).
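A minimal sketch of what a first version could look like, assuming we repurpose an off-the-shelf emotion classifier rather than training anything new (the model name, label set, and threshold below are illustrative assumptions, not a vetted design):

```python
# Toy sketch of the "activation detector" idea: treat high anger/disgust
# scores from an emotion classifier as a crude proxy for a status-defensive
# politics mindset. Model choice and threshold are assumptions.
from transformers import pipeline

# This model outputs scores for anger, disgust, fear, joy, neutral,
# sadness, and surprise; any emotion classifier could stand in here.
classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,
)

def looks_activated(utterance: str, threshold: float = 0.5) -> bool:
    # Batch the single input so the output shape is unambiguous.
    scores = {d["label"]: d["score"] for d in classifier([utterance])[0]}
    # Crude proxy: sum of anger and disgust above a fixed threshold.
    return scores.get("anger", 0.0) + scores.get("disgust", 0.0) > threshold

print(looks_activated("That's a ridiculous argument and you know it."))
```

A real tool would presumably need per-speaker calibration and multimodal signals (voice, facial expression), not just text.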
x-post Facebook
Topics: cause prioritization; metaphor
note I took on 2022-08-01; I don't remember what I had in mind, but I feel like it can apply to various things
From a utilitarian point of view though, I think this is almost like arguing whether it's better to die wearing a red shirt or a blue one; while there might be an answer, I think it's missing the point, and we should focus on reducing risks of astronomical disasters.
Topics: AI, forecasting, privacy
I wonder how much of a signature we leave in our writings. Like, how hard would it be for an AI to be rather confident I wrote this text? (Say, if it were trained on LessWrong writings, or all public writings, or maybe even private writings.) What if I ask someone else to write up an idea for me--how much would that help obfuscate the source?
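For intuition, here's a toy stylometry baseline on placeholder data (the corpus and author labels are assumptions for illustration); character n-gram TF-IDF plus logistic regression is a standard, if weak, setup for authorship attribution:

```python
# Toy authorship-attribution baseline (placeholder data, not an experiment).
# Character n-grams capture punctuation and spelling habits, which is part
# of the "signature" in question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: texts with known authors (e.g., public LessWrong posts).
texts = [
    "I wonder how much of a signature we leave in our writings.",
    "In this post, I will argue for three claims about forecasting.",
    "note to self: not posted on Facebook (yet)",
]
authors = ["me", "someone_else", "me"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)

# Query a new text; the model returns per-author probabilities.
print(model.predict_proba(["some new anonymous text"]))
```

If a ghostwriter writes the final text, character-level habits should mostly point to them rather than to me, which suggests delegation does help obfuscate--though the idea's content itself might still leak authorship.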
Topic: AI strategy (policies, malicious use of AI, AGI misalignment)
Epistemic status: simplistic; simplified line of reasoning; thinking out loud; a proposed frame
A significant "warning shot" from a sovereign misaligned AI doesn't seem likely to me, because a human-level (and plausibly even subhuman-level) intelligence 1) can learn deception, yet 2) can't (generally) do a lot of damage (i.e., damage perceptible to humanity). So the last "warning shot" before an AI learns deception won't be very big (if it's even notable at all), and from then on a misaligned agent would hide its power and/or intentions until it's confident it can overpower humanity (hiding makes accumulating power easy)--at which point it would cause an omnicide. One exception: if an AI suspects other AIs are hiding in the world, it might accept a higher risk and try to overpower humanity before it's confident it can succeed, out of concern that another AI will do so first. I'm not very hopeful this would give us a good warning shot though, because multiple such AIs trying to overpower humanity would likely be too damaging for us to regroup in time.
However, it seems much more plausible to me that (non-agentic) AI tools will be used maliciously, which could lead governments to heavily regulate AI. Regulations that prevent malicious uses (e.g., nationalizing AI) could also help against negligent uses. Assuming a negligent use (i.e., one resulting in a misaligned AGI) is much more likely to cause an existential catastrophe than a malicious use, and that regulations against malicious uses are more memetically fit, the ideal regulations to advocate for might be those that are good at preventing both malicious uses and the negligent creation of a misaligned AGI.
note to self: not posted on Facebook (yet)
I'm 1h20 north of Georgia. I don't think I'll make it this time, but I'd love to connect with people in Georgia, so feel free to reach out ☺
Ah, yeah, that's true; I did know that actually. What some of the people I know want, though, is to be thawed once a certain condition is met rather than simply not being reanimated, and if I remember correctly, when I asked Alcor, they said they couldn't do that. Conditions included AI progress and family not being preserved (or something along those lines).
Right, that one is part of "Easier emergency relocation" (I just edited the summary to add it, but it was already in the post), but maybe that legal status also has advantages beyond just transport.
idk what CLARITY is, but yeah, I'd love to see room-temperature preservation protocols developed for human brain preservation. It could also significantly reduce cost, given that a significant fraction of the cost goes towards paying for indefinite liquid nitrogen refills.
Nectome is working on aldehyde-stabilized cryopreservation for humans, which I think might provide some of those benefits(?). OregonCryo is also doing, or trying to do, something like that.
I know another researcher working on this who could probably use funding in the near future. If any of you know someone who might be interested in funding this, please lmk so I can put you in touch. I think this is one of the top opportunities for improving cryonics robustness and adoption (and maybe quality).