LESSWRONG

aleph_four

Comments

aleph_four's Shortform · 6y
AGI systems & humans will both need to solve the alignment problem
aleph_four · 3y · 20

If there is no solution to the alignment problem within reach of human level intelligence, then the AGI can’t foom into an ASI without risking value drift…

A human augmented by strong narrow AIs could, in theory, detect deception by an AGI. Stronger interpretability tools…

What we want is a controlled intelligence explosion, where each increase in the AGI's strength leads to an increase in our ability to align it: alignment as an iterative problem…

A kind of intelligence arms race; perhaps humans can find a way to compete indefinitely?

Ngo and Yudkowsky on alignment difficulty
aleph_four · 4y · -10

I love being accused of being GPT-x on Discord by people who don't understand scaling laws and think I own a planet of A100s

There are some hard and mean limits to explainability, and there's a real risk that a person who correctly sees how to align AGI, or who correctly perceives that an AGI design is catastrophically unsafe, will not be able to explain it. It requires superintelligence to cogently expose stupid designs that will kill us all. What are we going to do if there's this kind of coordination failure?

Incorrect hypotheses point to correct observations
aleph_four · 4y · 20

> People have poor introspective access to the reasons why they like or dislike something; when they are asked for an explanation, they often literally fabricate their reasons.

omg, they literally work that way. I can't, let me off

Open & Welcome Thread - February 2020
aleph_four · 6y · 10

Let’s add another Scott to our coffers.

aleph_four's Shortform
aleph_four · 6y · 10

Lately I’ve been requiring a higher bar than the Turing Test. I propose: “Anything that can program, and can converse convincingly in natural language about what it is programming, must be thinking.”

Meta-Preference Utilitarianism
aleph_four · 6y · 10

Uh... I guess I cannot get around the regress involved in claiming my moral values are superior to competing systems in an objective sense? I hesitate to lump together the kind of missteps involved in a mistaken conception of reality (a misapprehension of non-moral facts) with whatever goes on internally when two people arrive at different values.

I think it’s possible to agree on all mind-independent facts without entailing perfect accord on all value propositions, and that moral reflection is fully possible without objective moral truth. Perhaps I do not get to point at a repulsive actor and say they are wrong in the strict sense of believing falsehoods, but I can deliver a verdict on their conduct all the same.

Meta-Preference Utilitarianism
aleph_four · 6y · 10

Well, I struggle to articulate what exactly we disagree on, because I find no real issue with this comment. Maybe I would say “high philosophical ability/sophistication causes both intergalactic civilization and moral convergence”? I hesitate to call the result of that moral convergence “moral fact,” though I can conceive of that convergence.

Paper Trauma
aleph_four · 6y · 30

Uhh, it goes to sleep after a bit, but it brings you back to what you were last doing.

The OCR doesn’t destroy the original.

> convert lines into appropriate geometric forms

Nope on this

> convert text blocks to calendar entries, tickets, mails, ...

Nope

Meta-Preference Utilitarianism
aleph_four · 6y · 20

I’m immensely skeptical that open individualism will ever be more than a minority position (among humans, at least). But at any rate, convergence on an ethic doesn’t demonstrate the objective correctness of that ethic from outside that ethic.

Meta-Preference Utilitarianism
aleph_four · 6y · 10

> Most intelligent beings in the multiverse share similar preferences.

I mean, this could very well be true, but at best it points to some truths about convergent psychological evolution.

> This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations

Sure, there are facts about what preferences would best enable the emergence of an intergalactic civilization. I struggle to see these as moral facts.

Also, there’s definitely a manifest-destiny-evoking, unquestioned moralizing of space exploration going on right now, almost as if morality’s importance is only as an instrument to our becoming hegemonic masters of the universe. The angle you approached this question from is value-laden in an idiosyncratic way (not in a particularly foreign way here on LessWrong, but value-laden nonetheless).

One can recognize that one would be “better off” with a different preference set without the alternate set being better in some objective sense.

> change them to better fit the relevant moral facts.

I’m saying the self-reflective process that leads to increased parsimony among moral intuitions does not require moral facts to be objectively real, or even a belief in moral realism. I guess this puts me somewhere between relativism and subjectivism, according to your linked post?
