Lorec

My government name is Mack Gallagher. Crocker's Rules. I am an "underfunded" "alignment" "researcher". DM me if you'd like to fund my posts, or my project.

I post some of my less-varnished opinions on my Substack.

Comments

Lorec's Shortform
Lorec · 14d* · 10

I was talking with some people yesterday whom I accused of competing to espouse middling p(doom)s. One of them was talking about Aaronson's Faust parameter [ i.e. the p(doom), assuming "everything goes perfectly" if ¬doom, at which you press the button and release superintelligent AI right now ]. And they had what I think was a good question: In what year do we foresee longevity escape velocity, assuming the AInotkilleveryoneist agenda succeeds and superintelligence is forestalled for decades?

The appropriate countervailing challenge question is: What is one plausible story for how a by-chance friendly ASI invents immortality within two years or whatever of its creation, while staying harmless to humanity? What is the tech tree, how does it traverse this tree, and what are the guardrails keeping it from going off on some exciting [ what is, effectively to a human ] pathology-gain-of-function tangent along the way?

Non-copyability as a security feature
Lorec · 4h · 10

I just meant reducing, in their AIs, the property which you postulate is the primary advantage of AI over human labor.

Non-copyability as a security feature
Lorec · 7d · 10

Whatever happened to holding software companies to the standard of not rolling vulnerable user data into their widely distributed business logic?

Say AI companies could effectively make copying hard enough to provide security benefits to scrape-ees [ if I'm reading you right, that's approximately who you're trying to protect ]. Say also that this "easy-to-copy" property of AIs is "the fundamental" thing expected to increase the demand for AI labor relative to human labor. . . . Hard-alignment-problem-complete problem specification, no?

Notes on the need to lose
Lorec · 13d · 10

Oh gosh, how irksome if Magic neurotypes its players like that.

Sirlin writes only of denial of one's weakness, not of a "need to lose".

Notes on the need to lose
Lorec · 13d* · 10

. . . Wow, if that Rizzo piece is representative of how channer bicamerals were handling their internal conflicts before Ziz, I understand Ziz a little better.

Isn't losing just what you need to do to increase your ability to win? Other than the elements of what Rizzo writes about that are obviously just the activation of simian instincts to end a conflict by submitting, that is [ which is a lot of it ].

Four ways learning Econ makes people dumber re: future AI
Lorec · 2mo · 10

In the rate-limiting resource, housing, the poor have indeed gotten poorer. Treating USD as a wealth primitive [ not to mention treating "demand" as a game-theoretic primitive ] is an economist-brained error.

Counterfactual Mugging
Lorec · 2mo · 10

The point Jonnan was making is that coins are easier to model quasi-deterministically than humans. [ I don't think Jonnan realizes how many people miss this fact. ]

Counterfactual Mugging
Lorec · 2mo · 10

Well, we're assuming Omega wants more money rather than less, aren't we?

If it's sufficiently omniscient to predict us, a much more complicated type of thing than a coin, what reason would it have to ever flip a physically fair coin which would come up heads?

Counterfactual Mugging
Lorec · 2mo · 10

I don't think the vast majority of people in this comments section realize coins aren't inherently random.

Inscrutability was always inevitable, right?
Answer by Lorec · Aug 07, 2025 · 20

"the human-created source code must be defining a learning algorithm of some sort. And then that learning algorithm will figure out for itself that tires are usually black etc. Might this learning algorithm be simple and legible? Yes! But that was true for GPT-3 too"

Simple first-order learning algorithms have types of patterns they recognize, and meta-learning algorithms also have types of patterns they like.

In order to make a friendly or aligned AI, we will have to have some insight into what types of patterns we are going to have it recognize, and separately what types of things it is going to like or find salient.

There was a simple calculation protocol which generated GPT-3. The part that was not simple was translating that into predicting its preferences or perceptual landscape, and hence what it would do after it was turned on. And if you can't predict how a parameter will respond to input, you can't architect it one-shot.

Posts

3 · Final-Exam-Tier Medical Problem With Handwavy Reasons We Can't Just Call A Licensed M.D. [Q] · 11m · 0
3 · Galaxy-Brain Hobo Antibiotics? [Q] · 4mo · 9
41 · The Boat Theft Theory of Consciousness · 4mo · 36
4 · A Revision to Market Monetarism: Individual Hoarding as Rational, Competition for Dollars as Zero-Sum? · 4mo · 0
11 · Diabetes is Caused by Oxidative Stress · 5mo · 11
9 · What should I read to understand ancestral human society? [Q] · 5mo · 4
1 · You are too dumb to understand insurance · 9mo · 12
16 · Don't fall for ontology pyramid schemes · 9mo · 8
2 · Algorithmic Asubjective Anthropics, Cartesian Subjective Anthropics · 10mo · 0
14 · Re Hanson's Grabby Aliens: Humanity is not a natural anthropic sample space · 10mo · 64
2 · Lorec's Shortform · 1y · 28
Wikitag Contributions

Medianworld · 5 months ago · (+559)