All of lsusr's Comments + Replies

Omicron Post #4

This is my periodic "thank you" for all the work that goes into these things.

Shulman and Yudkowsky on AI progress

Whenever birds are an outlier I ask myself "is it because birds fly?" Bird cells (especially bird mitochondria) are intensely optimized for power output because flying demands a high power-to-weight ratio. I think bird cells' individually high power output produces brains that can perform more (or better) calculations per unit volume/mass.

Second-order selection against the immortal

Doesn't matter. Taking the immortality pill grants a strict competitive advantage over people who don't take it.

[+2] M. Y. Zuo (3d): It seems that this post is describing the regime beyond the threshold where group advantages outweigh individual advantages.
[+4] Gunnar_Zarncke (3d): The OP is not arguing on the individual level but on the population level. It is not uncommon that populations evolve to extinction.
Second-order selection against the immortal

The winning strategy is to take the immortality pill and reproduce. Voluntarily stopping having children to prevent over-crowding only works if everybody does it.

[+3] Gunnar_Zarncke (4d): He addresses this in the section "If the Immortals do continue to have babies, their second-order fitness is still pretty bad".
The Best Virtual Worlds for "Hanging Out"

I think this post is interesting as a historical document. I would like to look back at this post in 2050 with the benefits of hindsight.

Why Artists Study Anatomy

I like that this post addresses a topic that is underrepresented on Less Wrong and does so in a concise technical manner approachable to non-specialists. It makes accurate claims. The author understands how drawing (and drawing pedagogy) works.

100 Tips for a Better Life

I like this post because following its advice has improved my quality of life.

The 2020 Review [Updated Review Dashboard]

Thank you for the link to my 2020 upvotes. I didn't know that was a thing. It brings the preliminary voting up from "super inconvenient" to "convenient".

It's weird looking at my list of strong upvotes, given that a lot of them are posts I have no memory of. "I guess I really liked this post since I strong-upvoted it, also I guess it was forgettable since if you'd told me I'd never seen it, I might have believed you."

(Possibly this says more about my memory than about the posts.)

It is a thing as of today. :)

Visible Thoughts Project and Bounty Announcement

It seems to me that their priority is to find a pipeline that scales. Scaling competitions are frequently long-tailed, which makes them winner-take-all. A winner-take-all system has the bonus benefit of centralized control. They only have to talk to a small number of people. Working through a single distributor is easier than wrangling a hundred different authors directly.

Visible Thoughts Project and Bounty Announcement

Does your offer include annotating your thoughts too or does it only include writing the prompts?

[+7] Brangus (4d): After trying it, I've decided that I am going to charge more like five dollars per step, but yes, thoughts included.
Coordinating the Unequal Treaties

That's a good question. I think the answer is "no" because each Western power had lots of rivals.

The Cold War was a different story. In the Cold War, there were (in theory) only two opposing sides. The USA would fund basically anyone who opposed the USSR (and vice versa).

First Strike and Second Strike

You're not wrong. Context does indeed matter. Few systems fall perfectly into first-strike vs second-strike.

[Book Review] "Sorcerer's Apprentice" by Tahir Shah

I wanted to give readers the experience of what it was like for me to read the book.

[+2] Pattern (12d): By the way, this was a cool book review. (The table of contents on the left didn't really follow the structure, but as per usual I just read the whole thing without looking at that until afterward.) Does the review start with how you found the book in order to give the reader a taste of what reading that book is like, or just because how you find a book affects things like whether you finish it, and how you understand it?
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

I agree that GPT-3 sounds like a person on autopilot.

Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

The 1940s would like to remind you that one does not need nanobots to refine uranium.

I'm confused. Nobody has ever used nanobots to refine uranium.

I'm pretty sure if I had $1 trillion and a functional design for a nuclear ICBM I could work out how to take over the world without any further help from the AI.

Really? How would you do it? The Supreme Leader of North Korea has basically those resources and has utterly failed to conquer South Korea, much less the whole world. Israel and Iran are in similar situations and they're mere regional powers.

Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Designing nuclear weapons isn't any use. The limiting factor in manufacturing nuclear weapons is uranium and industrial capacity, not technical know-how. That (I presume) is why Eliezer cares about nanobots. Self-replicating nanobots can plausibly create a greater power differential at a lower physical capital investment.

Do I think that the simplest AI capable of taking over the world (for practical purposes) can't be boxed if it doesn't want to be boxed? I'm not sure. I think that is a slightly different question from whether an AI fooms straight from 1 to 2. I th... (read more)

[+1] Logan Zoellner (21d): The 1940s would like to remind you that one does not need nanobots to refine uranium. I'm pretty sure if I had $1 trillion and a functional design for a nuclear ICBM I could work out how to take over the world without any further help from the AI. If you agree that (1) it is possible to build a boxed AI that allows you to take over the world and (2) taking over the world is a pivotal act, then maybe we should just do that instead of building a much more dangerous AI that designs nanobots and unboxes itself? (Assuming, of course, you accept Yudkowsky's "pivotal-act" framework.)
Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Thank you for the quality feedback. As you know, I have a high opinion of your work.

I have replaced "outer alignment" with "bad actor risk". Thank you for the correction.

Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

The way I look at things, an AGI fooms straight from 1 to 2. At that point it has subdued all competing intelligences and can take its time getting to 3. I don't think 2 can plausibly be boxed.

[+1] Logan Zoellner (21d): You don't think the simplest AI capable of taking over the world can be boxed? What if I build an AI and the only two things it is trained to do are (1) pick stocks and (2) design nuclear weapons. Is your belief that (a) this AI would not allow me to take over the world, or (b) this AI could not be boxed?
Education on My Homeworld

I played American football for two years. It was a lot of fun.

An online friend I made through foreign-language learning provided a source of KN95 masks at the height of the COVID-19 shortage. He lives under an authoritarian government. Long-term relationships are one way to avoid scams over there.

[+2] Jiro (20d): Okay, then change it to "you like American football less than the people who that statement was addressing like it".
[+6] Jiro (21d): "I did this and it was great" is pretty much a subset of typical minding. Your own experiences are always going to include a combination of things that actually work in general, things that occasionally work if you get lucky, and things that work for people like you but don't generalize.
Open & Welcome Thread November 2021


[T]he many-worlds interpretation of quantum mechanics. Such a view would completely destroy the idea that this world is the special creation of an Omni-Max God who has carefully been steering Earth history as part of His Grand Design.

One planet. A hundred billion souls. Four thousand years. Such small ambitions for an ultimate being of infinite power like Vishnu, Shiva or Yahweh. It seems more appropriately scoped for a minor deity.

[+3] Jon Garcia (21d): Well, at the time I had assumed that Earth history was a special case, a small stage temporarily under quarantine from the rest of the universe where the problem of evil could play itself out. I hoped that God had created the rest of the universe to contain innumerable inhabited worlds, all of which would learn the lesson of just how good the Creator's system of justice is after contrasting against a world that He had allowed to take matters into its own hands. However, now that I'm out of that mindset, I realize that even a small Type-I ASI could easily do a much better job instilling such a lesson into all sentient minds than Yahweh has purportedly done (i.e., without all the blood sacrifices and genocides).
Why do you believe AI alignment is possible?

Definition implies equality. Equality is commutative. If "human values" equals "whatever vague cluster of things human brains are pointing at" then "whatever vague cluster of things human brains are pointing at" equals "human values".

[+2] Samuel Shadrach (22d): Agreed, but that doesn't help. If you tell me that A aligns with B and B is defined as the thing that A aligns to, these statements are consistent but give zero information. And more specifically, zero information about whether some C in Set S can also align with B.
What the future will look like
  • I hope the $10 in cryptocurrency I get for saving energy is proof of work. I am ideologically opposed to proof of stake.
  • I appreciate the charity for machine rights. Machines are people too.
  • I want to hack someone else's neuro-pellets and Rickroll them.
Why do you believe AI alignment is possible?
Answer by lsusr (Nov 15, 2021) [+10]

Human brains are a priori aligned with human values. Human brains are proof positive that a general intelligence can be aligned with human values. Wetware is an awful computational substrate. Silicon ought to work better.

[+6] Raven (21d): Humans aren't aligned once you break the abstraction of "humans" down. There's nobody I would trust to be a singleton with absolute power over me (though if I had to take my chances, I'd rather have a human than a random AI).

Arguments by definition don't work. If by "human values" you mean "whatever humans end up maximizing", then sure, but we are unstable and can be manipulated, which isn't what we want in an AI. And if you mean "what humans deeply want or need", then human actions don't seem very aligned with that, so we're back at square one.

[+2] Samuel Shadrach (22d): I see, but isn't this reversed? "Human values" are defined by whatever vague cluster of things human brains are pointing at.
Education on My Homeworld

I read your Hacker News post. What don't you like about the curriculum? If the answer is "it's too easy" or "I hate Java" then you should take seriously the idea of dropping out (or if you're a freshman then consider changing your major to something harder like math or physics). If the classes aren't hard enough then the biggest thing you (personally) will lose if you drop out of college is an easy entry ticket into the big tech firms like Amazon, Facebook, etcetera. Try to arrange for a company to hire you early, before you graduate. If you succeed then y... (read more)

Education on My Homeworld

There is no legal obligation to prevent other people from hurting themselves. If someone uses your stuff without permission then it's basically impossible for them to sue you for negligence. Consequently, much workshop-like trespassing is done with a wink and a nod rather than explicit permission.

[+1] M. Y. Zuo (21d): Interesting, how have the forces promoting greater regulations, liability, etc., been kept quiescent on your homeworld?
Improving on the Karma System

In my personal experience, a single post's karma already operates as a logarithmic measure of quality. It takes more than twice as much effort to write a 100 karma post compared to a 50 karma post.

Improving on the Karma System

Nitpick. Accumulating karma is useful in one respect: high-karma users get more automatic karma on our posts, which draws more attention to them.

I agree with the "do nothing" proposal, by the way. The current system, while imperfect, is simple and effective.

Education on My Homeworld

In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy. I'm skeptical that prosociability and the ability to manage your own boredom are taught at school in a way that would not be learned otherwise. Managing your own boredom requires freedom, which is the opposite of compulsion. Sociability requires permission to speak, which is forbidden by default in classroom-style schooling. Algebra and calculus seem the most ... (read more)

[+4] Zolmeister (22d): My reading is that he claims compulsory education had little effect in Britain and the US, where literacy was already widespread. There's an interesting footnote where he references a paper on economic returns of compulsory education, which cites many sources (p. 14) finding little to no economic return from schooling reform (though limited to Europe).

In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy.

Compulsory education increases literacy; see the Likbez in the USSR.

Managing your own boredom requires freedom, which is the opposite of compulsion.

One can make the opposite assertion, that it's fastest learned through discipline, and point to Chinese or South Korean schools.

I don’t doubt that it’s useful to have the whole population learn reading and

... (read more)
[Book Review] "The Bell Curve" by Charles Murray

There's nothing to worry about, but thanks. I didn't even lose my phone.


It shouldn't be that way at all. The normal way to save progress while you're editing a file is to type :w followed by the Enter key. If you do this, Vim should write (or overwrite) the file on disk, resulting in a maximum of one file. (I'm ignoring the hidden temporary file.)

  • Escape is too far from the home row compared to Ctrl+[, so it's better to use Ctrl+[. I wrote about the i key in the "Insert Mode" section.
  • I'm not sure I understand the question. I take it you mean you save various versions of the same file? For version control, I use Git.
  • If you're using Vim via the terminal, you can often paste via Ctrl+Shift+v.
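On the extra files: those are almost certainly Vim's backup (file~) and swap (.file.swp) files, not copies of your work. A minimal sketch of ~/.vimrc settings to turn them off, assuming you use Git (rather than Vim's own recovery files) to keep old versions:

```vim
" Don't keep a file~ backup after :w succeeds
set nobackup
" Don't create a temporary backup while writing
set nowritebackup
" Don't create .file.swp files for crash recovery
set noswapfile
```

If you'd rather keep crash recovery without the clutter, an alternative is to collect swap files in one place with `set directory=~/.vim/swap//` (create that directory first) instead of disabling them.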
[+1] Crackatook (1mo): Oh, I like these keys. Thank you. Each time I save progress, vim creates another file. At the end, I have multiple files in addition to the original one. But it seems like it is not supposed to work that way?
We Live in a Post-Scarcity Society

Attention. Bitcoin. Military superiority. Being the prettiest person in the room. Anything where value is defined as winning a competition against other people.

Are there any essays on what scares us? A study of fear, so-to-speak.

Not that I know of—at least on this website. That being the case, here are my thoughts.

Fear is an evolutionary adaptation to avoid danger. Some things, like snakes, spiders, heights, darkness, the unknown, social exclusion, people who are a little off, and large charging animals, are scary because evolution has had plenty of time to evolve mechanisms to recognize them. You can also learn fears. For example, guns are scary even though there is no evolutionarily programmed fear of guns; we learn to fear guns. You can unlearn fears too, via (de)conditioning.

The ... (read more)

[+1] sunokthinks (1mo): Awesome! I think I'll write up a draft. Thanks!
Tell the Truth

[T]his is one point where you should explain more.

I will explain more. Total heritability of intelligence (in the US) might be as low as 0.40 (but probably isn't). Heritability of intelligence due to being in one particular genetic bucket must be strictly lower than total heritability of intelligence. "Significant" can be below 50%.
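In variance-component notation (a sketch, using the standard definition of heritability as the genetic share of phenotypic variance), the inequality follows because one genetic bucket captures only part of the total genetic variance:

```latex
h^2_{\text{bucket}}
  = \frac{\sigma^2_{\text{bucket}}}{\sigma^2_{\text{phenotype}}}
  \;\le\; \frac{\sigma^2_{\text{genetic}}}{\sigma^2_{\text{phenotype}}}
  = h^2_{\text{total}}
```

So even at the low-end estimate of roughly 0.40 for total heritability, the bucket-specific share must come in below that, strictly so whenever the bucket explains less than all of the genetic variance.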

An example from that article is that wearing earrings used to be highly heritable because you just had to look at whether they were female or male. As more people have started wearing earrings, the earring wearing trait has

... (read more)
[+1] mysticRobot (1mo): Thanks for the thoughtful reply, and the interesting read! I'm not claiming that IQ has zero genetic component, but I am saying that it's not straightforward to conclude there are significant ethnic differences in IQ that are determined by genes. To be specific, I'm arguing that IQ between ethnic groups in the US is likely much less than 50% determined by genes. Finding genes correlated with IQ doesn't imply genes play a direct causal role, and there are very strong explanations that don't involve genes such as socioeconomic status for example. I'd wager around 0 to 10% of the variation within normal IQ ranges is determined by genes for some cases, although that's speculation based on evidence. I can't find any rigorous scientific study of genes changing IQ (within normal ranges, as you can have genes that make the brain dysfunctional). Do you claim that heritability of intelligence due to being in one particular genetic bucket is closer to 50%? Or how much lower would you put it?
Are there any essays on what scares us? A study of fear, so-to-speak.

I interpret this question as seeking a list of scary things like snakes, spiders and heights. Is that what you're looking for?

[+1] sunokthinks (1mo): Hey there! Thanks for your reply! I was actually wondering what fundamentally makes things scary, not things that are already scary. I take it that there is none?
Contact Us

The LW moderation team has always responded quickly and helpfully to my inquiries. I expect they will behave similarly to any other reasonable person who contacts them in good faith.

[Book Review] "The Bell Curve" by Charles Murray

In retrospect, I wish I had titled this [Book Review] "The Bell Curve" by Richard Herrnstein instead. That would have been funny.

I have read two other books by Charles Murray and zero other books by Richard Herrnstein. In my head, I think of all of them as "Charles Murray books", which is unfair to Richard Herrnstein.

[+5] Ben Pace (1mo): +1 it would have been funny, especially if you'd opened by lampshading it.
[Book Review] "The Bell Curve" by Charles Murray

You have my sympathy. I hope you are personally OK. Also, I hope, for the sake of that whole neighborhood, that the criminal is swiftly captured and justly punished. I fear there is little I can do to help you or your neighborhood from my own distant location, but if you think of something, please let me know.

I'm totally unharmed. I didn't even lose my phone. There is absolutely nothing you can do but appreciate the offer and the well wishes.

[+2] JenniferRM (1mo): I'm glad you are unharmed and that my well wishes were welcome :-)
The Opt-Out Clause

I know why you're here, Neo. I know what you've been doing... why you hardly sleep, why you live alone, and why night after night, you sit by your computer. You're looking for him. I know because I was once looking for the same thing. And when he found me, he told me I wasn't really looking for him. I was looking for an answer. It's the question, Neo. It's the question that drives us. It's the question that brought you here. You know the question, just as I did.

The Matrix

[+2] Dojan (1mo): How many roads must a man walk down?
[+7] Eliezer Yudkowsky (1mo): How much wood would a woodchuck chuck if a woodchuck could chuck wood?
The Opt-Out Clause

There's not just one. We default into several overlapping simulations. Each simulation requires a different method of getting out. One of them is to just stare at a blank wall for long enough.

The Opt-Out Clause

This isn't a thought experiment. It's real, except the opt-out procedure is more complicated than a simple passphrase.

The problem is that this other procedure has side effects in worlds that are not simulations.

[+1] Raymond D (1mo): What's the procedure?
Tell the Truth

How many people answered the poll?

Vaccine Requirements, Age, and Fairness

My local ballroom came close to closing permanently due to lack of revenue. Forcing dance spaces to keep closed for several additional months would drive many of them out of business permanently.

What is the most evil AI that we could build, today?

Consider that a detailed answer to this question might constitute an information hazard.

I don't think this is dangerous to talk about. If anything, talking publicly about my preferred attack vectors helps the world better triage them and (if necessary) deploy countermeasures. It's not like anybody is really going to throw away $1 billion for the sake of evil.

[+3] Zac Hatfield Dodds (1mo): I agree; open discussion and red-teaming are valuable and I'm not concerned by your proposed (anti-?) financial attack vector. To quote Bostrom:
What is the most evil AI that we could build, today?

"[W]hat is the most infectious lethal virus which could be engineered and released today"?

Off the top of my head, my first impulse is to upgrade an influenza virus via gain-of-function research. Influenza spreads easily and used to kill lots of people. Plus, you can infect ferrets with it. (Ferrets have similar respiratory systems to human beings.) I don't think it's dangerous to talk about weaponized influenza because these facts are already public knowledge among biologists.

What is the most evil AI that we could build, today?

Yes and yes. However, pyramid schemes are created to maximize personal wealth, not to destroy collective value. Those are not quite the same thing. I think a supervillain could cause more harm to the world by setting out with the explicit aim of crashing the market. It's the difference between an accidental reactor meltdown and a nuclear weapon. If LTCM achieved 95% leverage acting with noble aims, imagine what would be possible for someone with ignoble motivations.

What is the most evil AI that we could build, today?

How exactly would you do this?…Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?

Pyramid scheme. I'd take up as much risk, debt and leverage as I can. Then I'd suddenly default on all of it. There are few defenses against this because rich agents in the financial system have always acted out of self-interest. Nobody has ever intentionally thrown away $10 billion and their reputation just to harm strangers indiscriminately. The attack would be unexpected and unprecedented.

[+4] ThomasJ (1mo): Didn't this basically happen with LTCM? They had losses of $4B on $5B in assets and a borrow of $120B. The US government had to force coordination of the major banks to avoid blowing up the financial markets, but meltdown was avoided. Edit: Don't pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted.