Recent Discussion

Perhaps these can be thought of as homework questions -- when I imagine us successfully making AI go well, I imagine us building expertise such that we can answer these questions quickly and easily. Before I read the answers I'm going to think for 10min or so about each one and post my own guesses.

Useful links / background reading: The glorious EfficientZero: How it Works. Related comment. EfficientZero GitHub. LW discussion.

Some of these questions are about EfficientZero, the net trained recently; others are about EfficientZero the architecture, imagined to be suitably scaled up to AGI levels. "If we made a much bigger and longer-trained version of this (with suitable training environment) such that it was superhuman AGI..."

  1. EfficientZero vs. reward hacking and inner alignment failure:
    1. Barring inner alignment failure,
...
Steven Byrnes: Maybe I'm confused, but I'm not ready to take that for granted. I think it's a bit subtle. Let's talk about chess. Suppose I start out knowing nothing about chess, and have a computer program that I can play that enforces the rules of chess, declares who wins, etc.

  • I play the computer program for 15 minutes, until I'm quite confident that I know all the rules of chess.
  • …Then I spend 8000 years sitting on a chair, mentally playing chess against myself.

If I understand correctly, "EfficientZero is really good at Atari after playing for only 15 minutes" is true in the same sense that "I am good at chess after playing for only 15 minutes" is true in the above scenario. The 8000 years don't count as samples because they were in my head. Someone can correct me if I'm wrong…

(Even if that's right, it doesn't mean that EfficientZero isn't an important technological advance—after all, there are domains like robotics where simulation-in-your-head is much cheaper than querying the environment. But it would maybe deflate some of the broader conclusions that people are drawing from EfficientZero, like how AI compares to human brains…)

It's a little bit less dramatic than that: the model-based simulation playing is interleaved with the ground-truth environment. It's more like you spend a year playing games in your head, then you play one 30-second bullet chess match with Magnus Carlsen (made-up ratio), then go back to playing in your head for another year. Or maybe we should say, "you clone yourself a thousand times, and play yourself at correspondence-chess timescales for 1 game per pair in a training montage, and then go back for a rematch".

(The scenario where you play for 15 minutes at the begi... (read more)
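To make the picture in these two comments concrete, here is a self-contained toy in the classic Dyna-Q style. This is emphatically not EfficientZero (which learns a latent model and plans with MCTS); it just shows the mechanism being discussed: a few expensive ground-truth samples, each interleaved with many cheap "in-your-head" updates against a learned model. All names and numbers below are illustrative.

```python
import random
from collections import defaultdict

N_STATES, GOAL = 8, 7            # walk right along a short chain to reach the goal
ACTIONS = (-1, +1)
alpha, gamma, eps = 0.5, 0.95, 0.1

def env_step(s, a):              # the ground-truth environment (the "real game")
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, float(s2 == GOAL) # reward only at the goal

Q = defaultdict(float)           # state-action values
model = {}                       # learned world model: (s, a) -> (s', r)

def act(s):                      # epsilon-greedy with random tie-breaking
    if random.random() < eps:
        return random.choice(ACTIONS)
    best = max(Q[s, a] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[s, a] == best])

def backup(s, a, r, s2):         # one Q-learning update
    Q[s, a] += alpha * (r + gamma * max(Q[s2, b] for b in ACTIONS) - Q[s, a])

real_steps = 0
for episode in range(20):        # only a few real episodes...
    s = 0
    while s != GOAL:
        a = act(s)
        s2, r = env_step(s, a)   # one scarce ground-truth sample
        real_steps += 1
        backup(s, a, r, s2)
        model[s, a] = (s2, r)    # remember what the world did here
        s = s2
        for _ in range(50):      # ...each followed by 50 imagined replays
            ms, ma = random.choice(list(model))
            ms2, mr = model[ms, ma]
            backup(ms, ma, mr, ms2)

print(real_steps, "real environment steps used")
print({s: round(max(Q[s, a] for a in ACTIONS), 2) for s in range(N_STATES)})
```

The imagined updates cost only compute, not samples, which is the sense in which "good at Atari after 15 minutes of play" can undersell how much experience the agent actually consumed.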

Richard_Kennaway: EDT seems to mean something different every time someone writes an article to prop it up against the argument, made from its infancy, that it recommends trying to change the world by managing the news you receive of it.

When the decider neither acts on the world nor is acted on by it (save for having somehow acquired knowledge about the world), there is only maximisation of expected utility, and no distinction between causal, evidential, or any other decision theory. When the decider is embedded in the world, this is called "naturalized decision theory", but no one has a mature example of one.

In the special case where the decider can accurately read the world and act on it, but the world has neither read nor write access to the decision-making process, and no agency in what the decider can know, CDT is correct. In the special case where the decider does not decide at all, but is a passive process that can do nothing but observe its own actions, EDT is correct.

CDT two-boxes on Newcomb and smokes in the Smoking Lesion problem. EDT (as originally formulated) one-boxes and abstains. On LessWrong, I believe the general consensus, or at least Eliezer's belief, is to one-box and smoke, ruling out both theories. But there is at least one author (see Eells (1981, 1982), discussed here [https://plato.stanford.edu/entries/decision-causal/]) arguing that EDT (or his formulation of it) two-boxes on Newcomb.

Recalling a Zen koan, a CDT agent is not subject to causation and an EDT agent is subject to causation, but a naturalized decision theory must be one with causation [https://www.lesswrong.com/posts/a5Afzce6Ny8oo9p7L/hyakujo-s-fox].
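For readers who want the one-box/two-box split above in numbers, here is a minimal worked example, assuming an illustrative 99%-accurate predictor and the standard $1,000,000 / $1,000 payoffs (the specific figures are mine, not the comment's, and don't change which way each theory goes).

```python
# EDT vs CDT on Newcomb's problem, with illustrative numbers.
ACCURACY = 0.99                  # assumed predictor accuracy
MILLION, THOUSAND = 1_000_000, 1_000

# EDT conditions on the action: your choice is evidence about the prediction.
edt_one_box = ACCURACY * MILLION                   # box B is probably full
edt_two_box = (1 - ACCURACY) * MILLION + THOUSAND  # box B is probably empty
print(edt_one_box, edt_two_box)  # 990000.0 vs 11000.0 -> EDT one-boxes

# CDT holds the (already fixed) contents constant: for ANY probability q
# that the million is there, two-boxing gains exactly $1000, so CDT two-boxes.
for q in (0.0, 0.5, 1.0):
    one_box = q * MILLION
    two_box = q * MILLION + THOUSAND
    assert two_box - one_box == THOUSAND
```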
jacob_cannell: Just real quick: 5. So at a high level we know exactly how many FLOPs it takes to simulate Atari - it's about 10^6 FLOP/s, vs perhaps 10^12 FLOP/s for typical games today (with the full potential of modern GPUs at 10^14 FLOP/s, similar to reality). So you (and by you I mean DM) can actually directly compare - using knowledge of Atari, circuits, or the sim code - the computational cost of the learned Atari predictive model inside the agent vs the simulation cost of the (now defunct) actual Atari circuit. There isn't much uncertainty in that calculation - both are known things (not like comparing to reality). The parameter count isn't really important - this isn't a GPT-3 style language model designed to absorb the web. Its parameter count is about as relevant as the parameter count of a super high end Atari simulator that can simulate billions of Atari frames per second - not much, because Atari is small. Also - that is exactly what this thing is.
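The orders of magnitude in this comment, spelled out (the figures are the comment's own rough estimates, not measurements):

```python
atari_sim   = 1e6   # FLOP/s to simulate an Atari 2600 (comment's estimate)
modern_game = 1e12  # FLOP/s for a typical modern game
modern_gpu  = 1e14  # FLOP/s for a high-end GPU at full utilization
print(f"Atari is ~{modern_game / atari_sim:.0e}x cheaper than a modern game")
print(f"one GPU could in principle run ~{modern_gpu / atari_sim:.0e} Atari sims at once")
```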

This post is an attempt to refute an article offering a critique of Functional Decision Theory (FDT). If you're new to FDT, I recommend reading this introductory paper by Eliezer Yudkowsky & Nate Soares (Y&S). The critique I attempt to refute can be found here: A Critique of Functional Decision Theory by wdmacaskill. I strongly recommend reading it before reading this response post.

The article starts with descriptions of Causal Decision Theory (CDT), Evidential Decision Theory (EDT) and FDT itself. I’ll get right to the critique of FDT in this post, which is the only part I’m discussing here.

“FDT sometimes makes bizarre recommendations”

The article claims “FDT sometimes makes bizarre recommendations”, and more specifically, that FDT violates guaranteed payoffs. The following example problem, called Bomb, is given to illustrate this...

Heighn: I'm gonna try this one more time from a different angle: what's your answer on Parfit's Hitchhiker? To pay or not to pay?
Said Achmiz: Pay.
Heighn: So even though you are already in the city, you choose to pay and lose utility in that specific scenario? That seems inconsistent with right-boxing on Bomb. For the record, my answer is also to pay, but then again I also left-box on Bomb.

Parfit’s Hitchhiker is not an analogous situation, since it doesn’t take place in a context like “you’re the last person in the universe and will never interact with another agent ever”, nor does paying cause me to burn to death (in which case I wouldn’t pay; note that this would defeat the point of being rescued in the first place!).

But more importantly, in the Parfit’s Hitchhiker situation, you have in fact been provided with value (namely, your life!). Then you’re asked to pay a (vastly smaller!) price for that value.

In the Bomb scenario, on the other h... (read more)

Introduction

In some parts of the world, people go into the forest to hunt mushrooms. The reasons why they do this are not important, but the way that they do it will serve as a guide for noticing.

The problem is that mushrooms are kind of brown and the forest floor is also kind of brown. Finding brown things against a background of brown things is not an easy task.

Finding the first mushroom is done by brute force search. However, after the first mushroom is found, the finder crouches down and looks at that mushroom for a while. This act of observing the mushroom from all angles is called "getting your eyes on."

The purpose of this move is twofold. First, it lets you figure out what shape/color the mushrooms

...

I think there have been a few posts about noticing by now, but as Mark says, I think The Noticing Skill is extremely valuable to get early on in the rationality skill tree. I think this is a good explanation for why it is important and how to go about learning it.

TODO: dig up some of the other "how to learn noticing" intro posts and see how some others compare to this one as a standalone introduction. I think this might potentially be the best one. At the very least I really like the mushroom metaphor at the beginning. (If I were assembling the Ideal Noticing Intro post from scratch I might include the mushroom example even if I changed the instructions on how to learn the rationality-relevant-skills)

Previously:

My parents taught me the norm of keeping my promises.

My vague societal culture taught me a norm of automatically treating certain types of information as private.

My vague rationalist culture taught me norms that include:

Eliezer's post about meta-honesty was one of the most influential posts I've read in the past few years, and among the posts that inspired the coordination frontier. I was impressed that Eliezer looked at ethical edgecases, and wasn't content to make a reasonable judgment call and declare himself...

I wonder if the qualifier (if you are X) is even needed. Whether the dilemma is created by someone manipulating things or just by conflicting values (e.g., confidentiality / one's word versus a discovered wrong that disclosure could correct), who wants to be on the horns?

Why not simply take the stance that I will always reserve judgment on what confidences I will protect and when? You telling me something means you are deferring to my judgment, not binding me to your position.

Viliam: To keep a secret properly, you have to act as if you didn't know it. At the same time, if you see things related to the secret, you draw conclusions; but then you also have to act as if you hadn't drawn those conclusions. If the secret is entangled with many things in your life, you need to keep two separate realities.

Secrets that are not entangled with anything else are relatively easy to keep. You need to remember to never mention X, and that's it. I guess it is easy to make a mistake and assume that the secret will be of this type, and that it will be easy to keep... and it turns out the other way round, and suddenly you need to keep track of two separate realities, and it is difficult. Even worse if you e.g. know that something bad is going to happen, but you shouldn't try to prevent it, because in the "as if" reality you do not have the information. Now you pay an additional cost that you didn't expect before.

Keeping a secret may require you to lie. Someone asks you "do you have any evidence of X?", and you only keep the secret if you lie and say "no". Again, it is possible that you didn't realize this before; you expected that you would be required to remain silent on something, not to actively tell a lie.

Another problem is that it is difficult to keep two different realities in mind. Are you sure you can correctly simulate "what would be my conclusion from observing facts A, B, C, if I didn't know about X?" Like, maybe seeing just A, B, C would be enough for you to figure out X independently. Or maybe not. Or maybe you would merely suspect X, like maybe with 30% probability. And maybe it would prompt you to look for further evidence about X. So, to properly simulate the "you, who haven't been told X", should you now pretend to do an investigation that is equally likely to make you update towards X or away from X? Are you sure you can actually do it? Is this even a meaningful thing to do? Whom are you trying to impress, exactly? So, another bad conseq
tomcatfish: I'll note that while the latter is sane, it leads to potential issues with Parallel Construction [https://en.wikipedia.org/wiki/Parallel_construction], which I would expect a bad actor to almost certainly engage in.
Dagon: Significant understatement. Everyone engages in some amount of manipulative behavior, and exactly where to draw the line is personal and situation-dependent. And manipulators (intentional or not) tend to be good at actively finding the line and pressuring you to categorize their behaviors in the way they want.

I'm worried that many AI alignment researchers and other LWers have a view of how human morality works, that really only applies to a small fraction of all humans (notably moral philosophers and themselves). In this view, people know or at least suspect that they are confused about morality, and are eager or willing to apply reason and deliberation to find out what their real values are, or to correct their moral beliefs. Here's an example of someone who fits this view:

I’ve written, in the past, about a “ghost” version of myself — that is, one that can float free from my body; which can travel anywhere in all space and time, with unlimited time, energy, and patience; and which can also make changes to different variables, and

...
Vanessa Kosoy: Yes, it's not a very satisfactory solution. Some alternative/complementary solutions:

  • Somehow use non-transformative AI to do my mind uploading, and then have the TAI learn by inspecting the uploads. Would be great for single-user alignment as well.
  • Somehow use non-transformative AI to create perfect lie detectors, and use this to enforce honesty in the mechanism. (But, is it possible to detect self-deception?)
  • Have the TAI learn from past data which wasn't affected by the incentives created by the TAI. (But, is there enough information there?)
  • Shape the TAI's prior about human values in order to rule out at least the most blatant lies.
  • Some clever mechanism design I haven't thought of. The problem with this is that most mechanism designs rely on money, and money doesn't seem applicable here, whereas when you don't have money there are many impossibility theorems.
Vanessa Kosoy: I admit that at this stage it's unclear, because physicalism brings in the monotonicity principle, which creates bigger problems than what we discuss here. But maybe some variant can work. Roughly speaking, in this case the 10% preserve their 10% of the power forever. I think it's fine because I want the buy-in of this 10%, and the cost seems acceptable to me. I'm also not sure there is any viable alternative that doesn't have even bigger problems.
Daniel Kokotajlo: Have you heard about CEV and Fun Theory? In an earlier, more optimistic time, this was indeed a major focus. What changed is we became more pessimistic and decided to focus more on first things first -- if you can't control the AI at all, it doesn't matter what metaethics research you've done. Also, the longtermist EA community still thinks a lot about metaethics relative to literally every other community I know of, on par with and perhaps slightly more than my philosophy grad student friends. (That's my take at any rate, I haven't been around that long.)

CEV was written in 2004, Fun Theory 13 years ago. I couldn't find any recent MIRI paper about metaethics (granted, I haven't gone through all of them). The metaethics question is just as important as the control question for any utilitarian. (What good will it be to control an AI only for it to be aligned with some really bad values? An AI controlled by a sadistic sociopath is infinitely worse than a paper-clip maximizer.) Yet all the research is focused on control, and it's very hard not to be cynical about it. If some people believe they are ... (read more)

Bedrooms, at least in the US, are nearly always constructed with built-in closets, but I don't see why. What's the appeal?

Personally, I don't like them. I want the flexibility to arrange furniture however currently best suits my needs, and a built-in closet permanently reserves a portion of the floor area. Stand-alone wardrobes also offer flexibility when occupants vary in how much stuff they have that is a good fit for a closet.

When we were adding dormers to our house four years ago we needed to decide whether to include closets in each of the rooms. Here's what the three new/expanded bedrooms looked like with and without closets:

Anna's bedroom

What will be Nora's bedroom once she's out of our room. Currently my office and where I sleep for the second half of the night.

Lily's bedroom

We decided...

AnthonyC: Closets do have a few advantages over furniture, but it's up to you whether this is a worthwhile use of space.

  • Unlike a wardrobe or armoire, a closet lets you use all the space from floor to ceiling for storage
  • "Unsightly" storage solutions, like inexpensive stacked bins for out-of-season clothes or extra bedding if you don't have a convenient linen closet, can be hidden in a bedroom closet; similarly, storage solutions that don't match the rest of the room's decor
  • Sometimes it is helpful to be able to put things "away" in a hurry for a little while, and having a closet to shove things into is an easy stopgap solution for this
  • Larger doors than typical furniture, if you have to store taller items
  • Floors are stronger than furniture bottoms, if you need to store anything heavy
  • Reconfigurability; it's much cheaper to find new ways to organize things in a closet than to buy new furniture

I am currently a few weeks from selling my house and moving into an RV full time, about an 8-fold reduction in square footage for me, my wife, and our dog and cat. I've been thinking a lot this past year about what space and things I actually use, and why, and how. This is what I've got re: closets.

Closets protect clothes from dust etc.


Today is the first day of the LessWrong 2020 Review. At the end of each year, we take a look at the best posts from the previous year, and reflect on which of them have stood the test of time. 

As we navigate the 21st century, a key issue is that we’re pretty confused about which intellectual work is valuable. We have very sparse reward signals when it comes to “did we actually figure out things that matter?”

The Review has a few goals. It improves our incentives, feedback, and rewards for contributing to LessWrong. It creates common knowledge about the LW community's collective epistemic state about the most important posts of 2020. And it turns all of that into a highly curated sequence that people can read. You...

I just made a significant update to the Review Dashboard page.

Most Important Structural Change:

Posts with at least one review that you haven't voted on get sorted to the top. This means that if you think a post is particularly worth people's attention to vote on, you can get it in front of more eyeballs easily by writing at least a short review of it.

Other Changes:

  • The default view for the page includes recent Reviews, so it's easier to catch up on new reviews
  • The Post List on the dashboard now shows posts with unread comments, so you can more easily clic
... (read more)

Civilization is kept afloat by a massive, decentralized body of often unseen knowledge.

This dark mass is made up of innumerable pieces of know-how accumulated by people mostly stumbling around, observing each other and the way things work. It's what's lodged in the head of the East German handyman who knows whom to bribe (and how) to get West German spare parts. It's the idiosyncratic thought patterns and norms picked up by the students of Gerty and Carl Cori, who won the Nobel Prize in 1947, six of whom went on to win the prize in turn. It's your two-year-old learning the local language.

Since this body of knowledge is hard to quantify, and often even hard to spot, we tend to not think about it as deeply as...

Viliam: The subreddit for dissatisfied grown homeschoolers: you meant HomeschoolRecovery [https://old.reddit.com/r/HomeschoolRecovery], right? As I am reading it now, I will make some notes here. (Different paragraphs are from different comments; this is not one long text.)

For balance, many homeschoolers have a completely different experience. On the other hand, many kids in the school system have experiences like this:

As a summary, I'd say the main problem is that (at least as homeschooling is described in the USA) there is very little accountability; the parents can do literally anything and there is no consequence. Many homeschooling parents intentionally cut off their children from the "decentralized knowledge system" that you described in the article.
Henrik Karlsson: Yes, that's the one! That's the downside of the increased variance caused by decentralization. And the upside is someone like JS Mill sitting next to his father translating Greek at four. There need to be subtle controls to sort the one from the other – and maybe that's a bit of a pipe dream, since these controls would need to be done by human beings; in the same way, the steelman version of education is a pipe dream because it needs to be implemented by human beings.

The accountability is tricky: too little and you end up with the quotes above; too much and you end up forcing everyone to follow the same plan, whether at home or in learning centers or schools, leaving no room for innovation and individual needs. Parts of the US have tended toward the first error; Europe has tended toward the second. I have less insight into other parts of the globe.

And the upside is someone like JS Mill sitting next to his father translating Greek at four.

Technically, this is perfectly legal even in countries without homeschooling. The actual suffering only starts at six. :D

The accountability is tricky: too little and you end up with the quotes above; too much and you end up forcing everyone to follow the same plan

My first idea was to give kids exams at the end of each year, and allow homeschooling to those whose overall results are not worse than the average results of kids who attend school. Because, intuitively, the... (read more)

Viliam: 100% this. It seems like a possible solution could be to decouple some of those functions. For example, there is in my opinion no good reason why the institution that provides education should be the same as the institution that provides certificates. Even if both institutions are government-organized, if you separate them, you fix the problem of grade inflation (teachers give students better grades to avoid conflicts with parents).

But the political advantage of "everything under the same hood" is that you do not need to discuss these things explicitly; you can just pretend that they are inseparable parts of "education". If you instead made a separate institution for teaching, a separate institution for certification... and a separate institution for socialization (assuming that such a thing is even possible), there would probably be a lot of opposition against the "socialization institution", because the mainstream families would see it (correctly) as a waste of time, and the abusive minorities would see it (correctly) as a tool used against their values. And the government would no longer have the "but education! you really need it to get a job" excuse.

I think it is good when people can reason outside their profession. Like, consider this COVID-19 situation: how much better it would be if people understood how viruses and vaccines work... and how much worse it would be if most people (anyone who is not a doctor or a biologist by profession) believed that even the very concepts of "virus" and "vaccine" are hoaxes.

It's like there are two reasons why knowledge is good: the knowledge that is good for you, and the knowledge that is good for your neighbors. If you get sick, it is not just your problem; it has an impact on others. (Even outside of pandemics, you generally want people to wash their hands, not go to work sick, etc.) In democracy, you are supposed to vote on all kinds of topics; it is good if your model of the world in general is not completely stupid, otherw

Village Life in Late Tsarist Russia is an ethnographic account of Russian peasants around 1900. The author, Olga Semyonova Tian-Shanskaia (“Semyonova” for short), spent four years researching in the villages—one of the first to study a people through prolonged direct observation and contact with them.

Olga Semyonova Tian-Shanskaia

I was interested in the subject as part of learning, concretely, about the quality of life for people in various places and times and at various stages of development. Although material progress was advancing rapidly at the end of the 19th century, much of that progress had not yet reached the province of Riazan where Semyonova did most of her studies. What was life like there?

In brief, I went in expecting poverty, which I found. I did not expect to also...

I know this is tangential, but it's the kind of thing that makes me wonder if I'm missing something fundamental, or whether I should see it as a reason to doubt other aspects of this book, but...

During a famine, peasant meals consist of stale bread moistened in water and mixed with goosefoot. 

During a famine, why would there be stale bread lying around, let alone more of it than during good times?