My government name is Mack Gallagher. Crocker's Rules. I am an "underfunded" "alignment" "researcher". DM me if you'd like to fund my posts, or my project.
I post some of my less-varnished opinions on my Substack.
I just meant reducing, in their AIs, the property which you postulate is the primary advantage of AI over human labor.
Whatever happened to holding software companies to the standard of not rolling vulnerable user data into their widely distributed business logic?
Say AI companies could effectively make copying hard enough to provide security benefits to scrape-ees [ if I'm reading you right, that's approximately who you're trying to protect ]. Say also that this "easy-to-copy" property of AIs is "the fundamental" thing expected to increase the demand for AI labor relative to human labor. . . . Hard-alignment-problem-complete problem specification, no?
Oh gosh, how irksome if Magic neurotypes its players like that.
Sirlin writes only of denial of one's weakness, not of a "need to lose".
. . . Wow, if that Rizzo piece is representative of how channer bicamerals were handling their internal conflicts before Ziz, I understand Ziz a little better.
Isn't losing just what you need to do to increase your ability to win? Other than the elements of what Rizzo writes about that are obviously just the activation of simian instincts to end a conflict by submitting, that is [ which is a lot of it ].
In the rate-limiting resource, housing, the poor have indeed gotten poorer. Treating USD as a wealth primitive [ not to mention treating "demand" as a game-theoretic primitive ] is an economist-brained error.
Coins are easier to model quasi-deterministically than humans are; that's the point Jonnan was making. [ I don't think Jonnan realizes how many people miss this fact. ]
Well, we're assuming Omega wants more money rather than less, aren't we?
If it's sufficiently omniscient to predict us, a much more complicated type of thing than a coin, what reason would it ever have to flip a physically fair coin which it already knows will come up heads?
I don't think the vast majority of people in this comments section realize coins aren't inherently random.
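To make that concrete: under the standard rigid-coin physics model [ the Keller model; the parameter values below are purely illustrative ], the face a flipped coin lands on is a deterministic function of its launch conditions. A minimal sketch:

```python
import math

def coin_outcome(up_velocity, spin_rate, g=9.81):
    # Keller-style rigid-coin model: coin launched heads-up with vertical
    # speed up_velocity (m/s), spinning at spin_rate (rad/s) about a
    # horizontal axis through its center.
    airtime = 2 * up_velocity / g                     # time to return to launch height
    half_turns = int(spin_rate * airtime / math.pi)   # completed half-revolutions
    return "heads" if half_turns % 2 == 0 else "tails"

print(coin_outcome(2.0, 120.0))  # same launch conditions -> same face, every run
```

The "randomness" of a real flip lives entirely in our ignorance of up_velocity and spin_rate, not in the coin.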
"the human-created source code must be defining a learning algorithm of some sort. And then that learning algorithm will figure out for itself that tires are usually black etc. Might this learning algorithm be simple and legible? Yes! But that was true for GPT-3 too"
Simple first-order learning algorithms have types of patterns they recognize, and meta-learning algorithms also have types of patterns they like.
In order to make a friendly or aligned AI, we will have to have some insight into what types of patterns we are going to have it recognize, and separately what types of things it is going to like or find salient.
There was a simple calculation protocol which generated GPT-3. The part that was not simple was translating that into predicting its preferences or perceptual landscape, and hence what it would do after it was turned on. And if you can't predict how a parameter will respond to input, you can't architect it one-shot.
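To put a toy example behind the "types of patterns" claim [ the learners and data below are mine, not anything from the quoted thread ]: a least-squares linear model and a nearest-neighbor rule are both simple and legible, yet they disagree about whether XOR is even a pattern.

```python
import numpy as np

# XOR: the canonical pattern a purely linear learner cannot recognize.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Learner 1: least-squares linear model (bias column appended).
A = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(A @ w)  # ~[0.5 0.5 0.5 0.5] -- the best linear fit is pure indifference

# Learner 2: 1-nearest-neighbor, whose inductive bias handles XOR trivially.
def nn_predict(x):
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

print([nn_predict(x) for x in X])  # [0.0, 1.0, 1.0, 0.0]
```

Both algorithms are a few lines; the difference is entirely in which patterns they can "see".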
I was talking with some people yesterday whom I accused of competing to espouse middling p(doom)s. One of them was talking about Aaronson's Faust parameter [ i.e. the p(doom), assuming "everything goes perfect" if ¬doom, at which you press the button and release superintelligent AI right now ]. And they had what I think was a good question: In what year do we foresee longevity escape velocity, assuming the AInotkilleveryoneist agenda succeeds and superintelligence is forestalled for decades?
The appropriate countervailing challenge question is: What is one plausible story for how a by-chance friendly ASI invents immortality within two years or whatever of its creation, while staying harmless to humanity? What is the tech tree, how does it traverse this tree, and what are the guardrails keeping it from going off on some exciting [ effectively, to a human ] pathology-gain-of-function tangent along the way?
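As a crude rendering of the Faust parameter mentioned above [ every utility figure here is a made-up placeholder; Aaronson gives no such numbers ]:

```python
# Press the button iff (1 - p) * V_utopia + p * V_doom >= V_status_quo.
V_utopia = 100.0      # "everything goes perfect" (placeholder)
V_status_quo = 10.0   # superintelligence forestalled indefinitely (placeholder)
V_doom = 0.0          # extinction (placeholder)

# Solving for the break-even p(doom) gives the Faust parameter:
faust_parameter = (V_utopia - V_status_quo) / (V_utopia - V_doom)
print(faust_parameter)  # 0.9 under these placeholders
```

On this toy model, the LEV question is in effect an argument about V_status_quo: the worse the no-superintelligence baseline looks, the higher the Faust parameter it can justify.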