All of Yuxi_Liu's Comments + Replies

Can you get AGI from a Transformer?

Brief note: the "analysis by synthesis" idea is known as "vision as inverse graphics" in computer vision research.

Cybernetic dreams: Beer's pond brain

For reservoir computing, there are concrete results. It is not just magic.
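
One such concrete result: a fixed random recurrent network plus a trained linear readout can learn simple temporal tasks. Below is a minimal echo state network sketch; the task (one-step-ahead sine prediction), reservoir size, and all parameter values are illustrative choices of mine, not taken from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_steps = 100, 500

# Fixed random input and recurrent weights -- these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 ("echo state" condition)

u = np.sin(0.1 * np.arange(n_steps + 1))  # toy signal: predict sin one step ahead
x = np.zeros(n_res)
states = []
for t in range(n_steps):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)  # fixed random dynamics
    states.append(x.copy())

X = np.array(states[100:])   # drop initial washout period
y = u[101:]                  # one-step-ahead targets

# Only this linear readout is trained (ridge regression).
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
mse = float(np.mean((X @ w_out - y) ** 2))
print(mse)  # small on this toy task
```

The point of the exercise: all the "learning" happens in one least-squares solve over the readout weights; the reservoir itself is untouched random dynamics.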

No nonsense version of the "racial algorithm bias"

No. Any decider will be unfair in some way, regardless of whether it knows anything about history. The decider could be a coin flipper and it would still be biased. One could say the unfairness is baked into the reality of the base-rate difference.

The only way to fix this is not to fix the decider, but to somehow make the base-rate difference disappear, or to compromise on the definition of fairness so that it is less stringent and actually satisfiable.

And in common language and common discussion of algorithmic bias, "bias" is decidedly NOT merely a statistical defini

...
No nonsense version of the "racial algorithm bias"

I'm following common speech where "biased" means "statistically immoral, because it violates some fairness requirement".

I showed that with a base-rate difference, it's impossible to satisfy three fairness requirements at once. The decider (machine or not) can completely ignore history. It could be a coin-flipper. As long as the decider is imperfect, it will still violate at least one of the fairness requirements.

And if the base rates are not due to historical circumstances, this impossibility still stands.
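
The coin-flipper point can be checked with exact arithmetic. A minimal sketch (the group names and base rates below are made up for illustration): flagging people uniformly at random equalizes false positive and false negative rates across groups, but the positive predictive value then simply tracks each group's base rate, so calibration/predictive parity fails whenever base rates differ.

```python
# Coin-flip decider: flag each person with probability q, independent of the
# true label. Group labels and base rates are illustrative, not real data.
def metrics(base_rate, q):
    """Exact (ppv, fpr, fnr) for a decider that flags uniformly at random."""
    ppv = base_rate  # P(truth = 1 | flagged): the flag carries no information
    fpr = q          # P(flagged | truth = 0)
    fnr = 1 - q      # P(not flagged | truth = 1)
    return ppv, fpr, fnr

for group, r in [("A", 0.3), ("B", 0.6)]:
    print(group, metrics(r, q=0.5))
# Error rates (fpr, fnr) match across groups, but ppv differs with the
# base rate, so one fairness requirement is violated regardless of history.
```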

sirjackholland (3y, 1 point): I'm not sure what "statistically immoral" means nor have I ever heard the term, which makes me doubt it's common speech (googling it does not bring up any uses of the phrase).

I think we're using the term "historical circumstances" differently; I simply mean what's happened in the past. Isn't the base rate purely a function of the records of white/black convictions? If so, then the fact that the rates are not the same is the reason that we run into this fairness problem. I agree that this problem can apply in other settings, but in the case where the base rate is a function of history, is it not accurate to say that the cause of the conundrum is historical circumstances? An alternative history with equal, or essentially equal, rates of convictions would not suffer from this problem, right?

I think what people mean when they say things like "machines are biased because they learn from history and history is biased" is precisely this scenario: historically, conviction rates are not equal between racial groups and so any algorithm that learns to predict convictions based on historical data will inevitably suffer from the same inequality (or suffer from some other issue by trying to fix this one, as your analysis has shown).
Let's Read: Superhuman AI for multiplayer poker

I cannot see anything that is particularly innovative in the paper, though I'm not an expert on this.

Maybe ask people working on poker AI, like Sandholm, directly. Perhaps the point is that many details of the particular program (and the paper is full of such details) must be assembled in order for it to work cheaply enough to be trained.

No nonsense version of the "racial algorithm bias"

Yes, (Kleinberg et al., 2016)... Do not read it. Really, don't. The derivation is extremely clumsy (and my professor said so too).

The proof has been considerably simplified in subsequent work. Searching for papers that cite that one should turn up a published paper with the simplified proof...

MAllgoewer (3y, 1 point): Actually, Kleinberg et al. 2016 isn't all that bad. They have a small paragraph at the beginning of section 2 which they call an "informal overview" of the proof. But it's actually almost a decent proof in and of itself. You may accept it as such, or you may write it down a bit more formally, and you end up with a short, sweet proof.

The reason they can't use a graphical approach like the one in this blog entry is that the above diagram with the squares only applies to the special case of scores that output either 0 or 1, but nothing in between. That is an important special case, but a special case nevertheless. Kleinberg et al. deal with the more common and slightly more general case of scores which can take any real value from 0 to 1. Also the COMPAS score, which is the topic of the ProPublica report cited above, can take other values than just 0 and 1.

By the way, the introductory section of the Kleinberg et al. paper is also definitely worth reading. It gives an overview of the relevance of the problem for other areas of application. So only their attempt at a formal proof is kind of a waste of time to read.
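
For reference, the short proof can be written in a few lines (a sketch following the structure of that informal overview; my notation, not theirs). Let group $t$ have base rate $r_t$, and let the score $s$ be calibrated within each group, so $\mathbb{E}[s \mid t] = r_t$. Balance for the positive and negative classes says $\mathbb{E}[s \mid y=1, t] = a$ and $\mathbb{E}[s \mid y=0, t] = b$ for both groups. Then by the law of total expectation within group $t$:

```latex
r_t \;=\; \mathbb{E}[s \mid t]
\;=\; a\,r_t + b\,(1 - r_t)
\quad\Longrightarrow\quad
r_t \;=\; \frac{b}{1 - a + b}.
```

The right-hand side does not depend on $t$, so either both groups share the same base rate, or the equation degenerates ($a = 1$, $b = 0$), i.e. the predictor is perfect. That is exactly the impossibility result.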
Steelmanning Divination

Relevant quotes:

Original text is from Discourse on Heaven of Xunzi:


The Britannica says:

Another celebrated essay is “A Discussion of Heaven,” in which he attacks superstitious and supernatural beliefs. One of the work’s main themes is that unusual natural phenomena (eclipses, etc.) are no less natural for their irregularity—hence are not evil omens—and therefore men should not be concerned at their occurrence. Xunzi’s denial of supernaturalism led him into a sophisticated i

...
Let's Read: an essay on AI Theology

After reading the story, I don't believe that it is a bad idea to leave on the earring, and I just think the author made an inconsistency in the story.

Contest: $1,000 for good questions to ask to an Oracle AI

I fixed the submission as required.

Also, I changed submission 3 significantly.

Contest: $1,000 for good questions to ask to an Oracle AI


Setup: Other than making sure the oracles won't accidentally consume the world in their attempt to think up the answer, no other precautions are necessary.

Episode length: as long as you want to wait, though a month should be more than enough.

  1. For a low-bandwidth oracle.

Ask the low-bandwidth oracle to predict whether an earthquake (or some other natural disaster, like a volcanic eruption or asteroid impact, whose occurrence the oracle's answer cannot affect) of a certain magnitude, in a certain area, in a certain timeframe, will happen. Possible answers are Yes and No.

  • If
...
Stuart_Armstrong (3y, 3 points): Please add "submission" at the top of the post. 1. is insufficiently detailed - can you explain what is going on, how the Oracles are rewarded, what happens when the message is read/not read, and so on. Same for 5. 2. seems potentially very interesting.
Let's Read: an essay on AI Theology

Thanks. I had hoped it to be informative and entertaining. Think of it as "let's play" but for nerds.


Let's Read: an essay on AI Theology

This wades deep into the problem of what makes something feel conscious. I believe (and Scott Aaronson has also written about this) that to have such a detailed understanding of a consciousness, the understanding must itself contain a consciousness-generating process. That is, to fully understand a mind, it's necessary to recreate the mind.

If the Earring merely makes the most satisfactory decisions according to some easy-to-compute universal standard (like acting morally according to some computationally efficient system), then the takeover makes sense to me, but otherwise it seems like a refusal to admit multiple realizations of a mind.

Pattern (3y, 1 point): Part of the story is that !> it tells you you are better off taking it off. Given that it's always as good/better than you at making decisions, leaving it on is a bad idea. <! I think it admits the possibility that such a thing may be to your detriment. (Perhaps it only contains one model (a human mind?), and uses that knowledge to destroy, rather than upload, human minds.) EDIT: How does one add spoilers here?
Let's Read: an essay on AI Theology

The Whispering Earring is interesting. It appears that the earring provides a kind of slow mind-uploading, but more noninvasive than most other approaches. The author of the story seems to consider it to be bad for some reason, perhaps because of triggering of "Liberty / Oppression" and "Purity / Sanctity" (of the inside-skull self) moral alarms.

Unfortunately I dislike reading novels. Would you kindly summarize the relevant parts?

dxu (3y, 2 points): This is only true if whatever (hyper)computation the earring is using to make recommendations contains a model of the wearer. Such a model could be interpreted as a true upload, in which case it would be true that the wearer's mind is not actually destroyed. However, if the earring's predictions are made by some other means (which I don't think is impossible even in real life--predictions are often made without consulting a detailed, one-to-one model of the thing being predicted), then there is no upload, and the user has simply been taken over like a mindless puppet.
Accelerate without humanity: Summary of Nick Land's philosophy

Stars follow the laws of thermodynamics. This observation is more predictive than you make it out to be, once it is quantified.

The theory of thermodynamics of life is more than just a statement that life is constrained by thermodynamics in the boring sense. I'm especially interested in this statement:

In short, ecosystems develop in ways which systematically increase their ability to degrade the incoming solar energy.

If this is true, then it can be used to predict what kinds of future life would be like. It would not be any kind of life, but life that c

...
Accelerate without humanity: Summary of Nick Land's philosophy

I thought it was clear even to them that "wasting" energy meant converting usable energy into useless forms.

It is not just sophistry. If it turns out to be the fundamental feature of life (like how the laws of thermodynamics are for heat machines), then it would be predictive of the future activities of life. In particular, the aestivation hypothesis would be seriously threatened.

This is analogous to the prediction that populations always go Malthusian except in non-equilibrium situations. It's not a value/moral judgment, but an attempt to find general laws of life that can be used to predict the future.

moridinamael (3y, 2 points): Perhaps tautology is a better word than sophistry. Of course turning usable energy into unusable forms is a fundamental feature of life; it's a fundamental feature of everything to which the laws of thermodynamics apply. It'd be equally meaningless to say that using up useful energy is a fundamental property of stars, and that the purpose of stars is to waste energy. It's just something that stars do, because of the way the universe is set up. It's a descriptive observation. It's only predictive insofar as you would predict that life will probably only continue to exist where there are energy gradients.
Accelerate without humanity: Summary of Nick Land's philosophy

I will review more posthumanism, things like Dark Ecology, Object-Oriented Ontology, and such.

[According to dark ecology,] we must obliterate the false opposition between nature and people... the idea of nature itself is a damaging construct, and that humans (with their microbial biomass) are always already nonhuman.

Object-Oriented Ontology... rejects the privileging of human existence over the existence of nonhuman objects

Somewhat independently of transhumanism, posthumanism developed in a more philosophical and less scientific style in the liberal

...
Aiyen (3y, 4 points): This seems wrong. The second law of thermodynamics isn't magic; it's simply the fact that when you have categories with many possible states that fit in them, and categories with only a few states that count, jumping randomly from state to state will tend to put you in the larger categories. Hence melting: arrange atoms randomly and it's more likely that you'll end up in a jumble than in one of the few arrangements that permit solidity. Hence heat equalizing: the kinetic energy of thermal motion can spread out in many ways, but remain concentrated in only a few; thus it tends to spread out. You can call that the universe hating order if you like, but it's a well-understood process that operates purely through small targets being harder to hit, not through a force actively pushing us towards chaos, making particles zig when they otherwise would have zagged so as to create more disorder.

This being the case, claiming that life exists for the purpose of wasting energy seems absurd. Evolution appears to explain the existence of life, and it is not an entropic process. Positing anything else being behind it requires evidence: something about life that evolution doesn't explain and entropy-driven life would. Also, remember, entropy doesn't think ahead. It is purely the difficulty of hitting small targets; a bullet isn't going to 'decide' to swerve into a bull's eye as part of a plan to miss more later! It would be very strange if this could somehow mold us into fearing both death and immortality as part of a plan to gather as much energy as we could, then waste it through our deaths. This seems like academics seeking to be edgy much more than a coherent explanation of biology.

As for transhumanism being overly interested in good or evil, what would you suggest we do instead? It's rather self-defeating to suggest that losing interest in goodness would be a good idea.
areiamus (2y, 1 point): Yes. It was meant to imply a comparison set against which your post should be considered - e.g., if I read about 1-10 articles like yours every day, then your post was among the best of about 100-1000 (possibly an exaggeration for effect).
[Answer] Why wasn't science invented in China?

China’s neo-Confucian worldview which viewed the world through correlations and binary pairs may not have lent it itself to the causal thinking necessary for science.

I am very doubtful of this. Humans are hardwired to think in cause-and-effect terms, and Confucianism does not explicitly deny causality.

There was no Chinese equivalent to the Scholastic method of disputation, no canons of logic a la Aristotle

In very early China (about 500 BC), there was a period of great intellectual diversity before Confucianism dominated. There was a School of Names

...
Free will as an appearance to others

I like your horn tooting. I'll read it... later.

Free will as an appearance to others

You should not only shut your door, but also stop thinking about yourself and explaining your own behavior. People in "flow" seem to be in such a free-will-less state.

A more radical version of this idea is promoted by Susan Blackmore, who argues that consciousness (not just free will) exists only when a human (or some other thinking creature) thinks about itself:

Whenever you ask yourself, “Am I conscious now?” you always are.

But perhaps there is only something there when you ask. Maybe each time you probe, a retrospective story is concocted about

...
Free will as an appearance to others

It's quite clear that humans have free will relative to human observers. I also conjecture that

Perhaps all complicated systems that can think are always too complicated to predict themselves, as such, they would all consider themselves to have free will.

Free will as an appearance to others

I know about the three stances. What's Dennett's account of consciousness? I know that he doesn't believe in qualia and tries to explain everything in the style of modern physics.

Charlie Steiner (3y, 3 points): If I may toot my own horn: [] I'll admit I'm not totally sure what Said Achmiz means by his comparison, though :)
Huntington's Disease; power and duty of parents over offsprings

Dying at age -0.75 counts as nothing, a little, or a lot of a person, depending on what counts as a person and how much a person at various stages matters. If it counts as a lot of a person, then it leads to an anti-abortion stance, and some pro-abortion arguments might apply in this situation.

And an alternative to abortion is adoption. A person highly unlikely to have HD could even be produced on demand by surrogacy or IVF, instead of being taken from the pool of people already up for adoption, so that it is a "net gain".

If the women wo

...
ChristianKl (3y, 2 points): There's a reason very few people label themselves as pro-abortion in the US. It's not even a position that's important enough that Gallup asks about it. People who favor the right to abortion in the US generally label as pro-choice.

The general argument based on which abortion is legal in the US is about a woman's right to her own body being a very important sacred value that trumps the right to life of the fetus. It's not that the fetus has no interest in living. It's rather that his preferences aren't that valuable given that he hasn't yet progressed enough in the direction of personhood to overrule the strong sacred value of bodily autonomy. The US legal system does consider it reasonable to punish people who kill an unborn fetus of a woman without the consent of the woman and doesn't think the fetus counts for nothing.

Given the cheapness of genetic testing, if you take your own argument seriously the woman in question wouldn't be the only person with an obligation to stop the pregnancy from resulting in birth.
Huntington's Disease; power and duty of parents over offsprings

Wait, 30-50 years of good life followed by 15 years of less-good, then death. That's "negligent"?

It's not comparing HD-life and no-life, but comparing HD-life and non-HD-life. I think it's obvious that HD-life is greatly worse than non-HD-life (otherwise HD wouldn't be considered a serious disease).

You might still disagree, and that gets us into the nonidentity problem.

ChristianKl (3y, 2 points): No, the woman in the story wouldn't create babies if she knows she has HD, so the comparison is HD-life and no-life. When it comes to selective genetic testing the alternatives are dying at age -0.75 or dying at age 45 to 60 while suffering from symptoms in the 15 years before that. That doesn't even figure in future medical progress.
Physical linguistics

In a language consistent with deterministic eliminative materialism, value judgments don't do anything, because there are no alternative scenarios to judge about.

I am not sure about nondeterministic eliminative materialism. Still, if consciousness and free will can be eliminated, even with true randomness in this world, value judgments still seem to not do anything.

TAG (3y, 1 point): Suppose I build a deterministic agent which has a value function in the most literal sense, i.e. it has to call the function to get the values of various alternative actions in order to make a decision about which to perform. Would you still say it has no use for value judgements?

An agent, an entity that acts, cannot say "what will be, will be", because it makes decisions, and because the decisions it makes are a component of the future. If it does not know the decision it will make before it makes it, it will be in a state of subjective uncertainty about the future. Subjective uncertainty and objective determinism are quite compatible.

I think it is possible that you are being misled by fictional evidence. In Arrival, the Heptapods' knowledge of the future is a straightforward extension of a fixed future, but everything we know indicates considerable barriers between determinism and foreknowledge.
Physical linguistics

I'm glad you liked it. I was expecting some harsh words like "that's nothing new" or "that's nihilistic and thus immoral".

Physical linguistics

I am not fully committed to eliminative materialism, just trying to push it as far as possible, as I see it as the best chance at clarifying what consciousness does.

As for the last paragraph, if your analysis is correct, then it just means that a classical hedonic utilitarian + eliminative materialist would be a rare occurrence in this world, since such agents are unlikely to behave in ways that keep them existing.

If the project of eliminative materialism is fully finished, it would completely remove value judgments from human language. In the past, hu

...
TAG (3y, 1 point): Because values are intrinsically non-physical? Because agents don't have preferences? Because agents don't want to talk about preferences?
Reference request: human as "backup"

No, it was something way older, from maybe 2000-2009.

Evolution "failure mode": chickens

Okay, I posted the whole thing here now.

Capybasilisk (3y, 1 point): Thank you. Greatly appreciated.