Jimrandomh's Shortform

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all o...

Wei_Dai (5mo): Can you give some specific examples of me having security mindset, and why they count as having security mindset? I'm actually not entirely sure what it is or that I have it, and would be hard pressed to come up with such examples myself. (I'm pretty sure I have what Eliezer calls "ordinary paranoia" at least, but am confused/skeptical about "deep security".)
NaiveTortoise (5mo): Sure, but let me clarify that I'm probably not drawing as hard a boundary between "ordinary paranoia" and "deep security" as I should be. I think Bruce Schneier's and Eliezer's buckets for "security mindset" have blended together in the months since I read both posts. Also, re-reading the logistic success curve post [https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/] reminded me that Eliezer calls into question whether someone who lacks security mindset can identify people who have it. So it's worth noting that my ability to identify people with security mindset is itself suspect by this criterion (there's no public evidence that I have security mindset, and I wouldn't claim that I have a consistent ability to do "deep security"-style analysis).

With that out of the way, here are some of the examples I was thinking of. First, at a high level, I've noticed that you seem to consistently question the assumptions other posters are making and clarify terminology when appropriate. This seems like a prerequisite for security mindset, since it's a necessary first step towards constructing secure systems. Second, and more substantively, I've seen you consistently raise concerns about human safety problems [https://www.lesswrong.com/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas#1__AI_design_as_opportunity_and_obligation_to_address_human_safety_problems] (also here [https://www.lesswrong.com/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety#xh9FweNcNDLqfTRG2]). I see this as an example of security mindset because it requires questioning the assumptions implicit in a lot of proposals.
The analogy to Eliezer's post here would be that ordinary paranoia is trying to come up with more ways to prevent the AI from corrupting the human (or something similar) whereas I think a deep security solution would look more like avoiding the assumption that humans are safe altogether and instead seeking clear guarantees that our AIs will be s

This comment feels relevant here (not sure if it counts as ordinary paranoia or security mindset).

by jimrandomh · 1 min read · 4th Jul 2019 · 32 comments


This post is a container for my short-form writing. See this post for meta-level discussion about shortform as an upcoming site feature.