Recent Discussion

[Link] Time Binders
24 · 2d · 1 min read

My continued exploration of Korzybski and the history of rationality.

I like the term "Memetic Ancestors" that you used (coined?)


"even if Korzybski gets lets himself get sucked into"

1 · Yoav Ravid · 12m: I agree. I love LessWrong (and its surroundings), but I think it hasn't yet lived up to its promise. To me it seems the community/movement suffers somewhat from focusing on the wrong stuff and from premature optimization. It also seems the Sequences suffer from the same halo effect as the author's project (origin, which I'm not familiar with). They were written more than 10 years ago, ending on a note that there's still much to be discovered and improved about rationality - and even with their release as a book, Eliezer noted in the preface his mistakes with it. Since there seems to be agreement on the usefulness of a body of information everybody is expected to read (e.g. "read the sequences"), I'd expect there would at least be work or thought on some sort of second version. Just to be clear, since intentions sometimes don't come through in text, I'm saying this out of love for the project, not spite. I came across this site a bit more than a year ago and have read a ton of content here; I both love it and am somewhat disappointed - in short, I feel there's still a level above ours.

What I know about it from high school and general articles on the net doesn't satisfy. Maybe because I have critical holes in my knowledge.

From what I think I know: we have AC running in the lines. AC means that if we zoom down, we'll see an electron zipping along in one direction, and after 1/50 sec (or 1/100?) that very electron will zip back in the opposite direction, ideally returning to the specific point we're looking at, because the phases are supposed to be equal.

So how does resistance come into the picture at atomic scale? Conductors heat up after a while, so maybe th... (Read more)
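For what it's worth, a back-of-envelope sketch (with assumed, illustrative numbers not taken from the question: 10 A through a ~1 mm² copper wire) shows how little the electrons actually move under 50 Hz AC. The drift motion is tiny - electrons oscillate over only a few microns - and resistance at the atomic scale comes from those drifting electrons scattering off lattice vibrations and defects, which is what heats the conductor:

```python
import math

# Assumed, illustrative values (not from the question):
I = 10.0      # peak current in amperes, household-scale
A = 1e-6      # wire cross-section in m^2 (~1 mm^2)
n = 8.5e28    # conduction-electron density of copper, per m^3
q = 1.6e-19   # elementary charge in coulombs
f = 50.0      # mains frequency in Hz

v_drift = I / (n * q * A)                # peak drift speed of the electrons
amplitude = v_drift / (2 * math.pi * f)  # how far they swing back and forth

print(f"peak drift speed  ~ {v_drift:.1e} m/s")   # well under a millimeter per second
print(f"oscillation range ~ {amplitude:.1e} m")   # a few microns
```

So the picture of one electron zipping far down the wire and back isn't quite right: each electron barely moves, and the energy is carried by the field, not by electrons travelling from the plant to your lamp.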

2 · Long try · 4h: My appreciation - that's really helpful, especially point 2. I was a bit hesitant when I saw the amount of links in cousin_it's link, but point 3 encourages me to work through it, even slowly. Point 4 is kinda hard from my POV. I admit I'm too lazy to dig up all the sources to display in a post. But then, if a question is formatted like that, wouldn't it be way too long? I thought titles should be concise & provocative.

Remember, you have a title and a body to work with when asking a question. Pithy titles are good for getting attention, and there's room for a bit more elaboration once people click through. The key is to keep it both open-ended and specific so the conversation has somewhere solid to start from. Otherwise you'll get a lot more off-topic discussion.

I'm glad you found my notes helpful!

Lotteries: A Waste of Hope
43 · 13y · 1 min read

The classic criticism of the lottery is that the people who play are the ones who can least afford to lose; that the lottery is a sink of money, draining wealth from those who most need it. Some lottery advocates, and even some commenters on Overcoming Bias, have tried to defend lottery-ticket buying as a rational purchase of fantasy—paying a dollar for a day’s worth of pleasant anticipation, imagining yourself as a millionaire.

But consider exactly what this implies. It would mean that you’re occupying your valuable brain with a fantasy whose real probability is nearl... (Read more)
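To make the "purchase of fantasy" trade concrete, here is a quick expected-value sketch. The odds, jackpot, and ticket price below are assumed, roughly Powerball-like figures for illustration, not numbers from the post:

```python
# Assumed, illustrative figures; ignores taxes, lump-sum discounts,
# split jackpots, and the smaller prize tiers.
ticket_price = 2.00
jackpot = 100_000_000
p_jackpot = 1 / 292_201_338   # roughly Powerball-like jackpot odds

ev = p_jackpot * jackpot - ticket_price
print(f"expected value per ticket: ${ev:.2f}")   # about -$1.66
```

Under these assumptions the buyer loses most of the ticket price in expectation, so the defense has to rest entirely on the fantasy being worth that much - which is exactly the claim the post goes on to examine.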

Necroing because what the hell was that!

Be it poor education at a younger age, traumatic experiences, simply having the wrong education for the current playing field, being sacked at an older age, having no finances available to support further education, a lack of intelligence, or simply spiraling down the road of depression due to a lack of chances or being stuck in a debt one can never recover from in a lifetime... these are all scenarios in which the lotto player actually rationally pays for the soothing dream of a better (financial) future.

... (read more)

Posts like this have been written before, but I think it's worth making the point periodically.

Lurker ratios have likely increased over time. Comments and discussion are an important feedback mechanism for content creators. So if you see stuff you like, and you'd like to see more posts like it, it's quite helpful to comment. Many people report being intimidated about posting, especially if the platform in question has a highly specific vocabulary and norms. I wanted to offer a couple of the heuristics I use for making comments as well as invite others to boggle/comment/discuss w... (Read more)

I think it's OK for LW comments to be relatively off-the-cuff (in the sense of a discussion section for a college course). I mean, my off-the-cuff comments get upvoted, at least.

New article from Oren Etzioni
15 · 20h · 1 min read

(Cross-posted from EA Forum.)

This just appeared in this week’s MIT Technology Review: Oren Etzioni, “How to know if AI is about to destroy civilization.” Etzioni is a noted skeptic of AI risk. Here are some things I jotted down:

Etzioni’s key points / arguments:

  • Warning signs that AGI is coming soon (like canaries in a coal mine, where if they start dying we should get worried)
    • Automatic formulation of learning problems
    • Fully self-driving cars
    • AI doctors
    • Limited versions of the Turing test (like Winograd Schemas)
      • If we get to the Turing test itself then it'll be too
... (Read more)
6 · Davidmanheim · 4h: As I said offline to Aryeh, in my mind, this is another example of people agreeing on most of the object level questions. For example, Etzioni's AI timelines overlap with most of the "alarmists," but (I assume) he's predicting the mean, not the worst case or 95% confidence interval for AI arrival. And yes, he disagrees with Eliezer on timelines, but so do most others in the alarmist camp - and he's not far away from what surveys suggest is the consensus view. He disagrees about planning the path forward, mostly due to value differences. For example, he doesn't buy the argument, which most of the Effective Altruism / LessWrong community has suggested, that existential risk is a higher priority than almost anything near-term. He also clearly worries much more about over-regulation cutting off AI benefits.
4 · johnswentworth · 14h: If you can write down all the goals of a self-driving car in Python, then I expect there's quite a few companies which would very much like to hire you. It's not failing to recognize an object because it snows that's the problem; it's deciding what to do when it's snowing and there's an unrecognized object. There will always be confusing things all over the place. Even if we had perfect information about the environment, there will still be things in the world which just aren't categorized by the programmed/learned ontology - there are lots of unusual things in the world. If the car always responds to anything novel by braking, then it's going to be a slow and frustrating ride very often. The things-we-want-a-car-to-do are complicated - much like the things-we-want in general. There's a very wide tail of edge cases, and it's the edge cases that make the problem hard.
5 · ChristianKl · 4h: The Tesla and Uber accidents that resulted in deaths were both about not recognizing the object correctly.

Really? An accident where the system noticed something unusual, and then just froze up and waited for a second, is attributed to "not recognizing the object correctly" rather than "not deciding what to do about an unrecognized object correctly"? I mean, sure, recognizing the object would have been a sufficient condition to avoid the accident... but that's not the real problem here.

There will always be unrecognized objects. A self-driving car which cannot correctly handle unrecognized objects is not safe, and the Uber accident is a great example of that.
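A toy sketch of the braking point above (all numbers assumed for illustration: suppose 5% of perception frames contain something outside the car's ontology). A policy that brakes on anything unrecognized ends up braking constantly, which is why "respond sensibly to the unrecognized object" - not "recognize everything" - is the hard part:

```python
import random

def naive_policy(recognized: bool) -> str:
    # Brake on anything the perception system can't classify.
    return "brake" if not recognized else "proceed"

random.seed(0)
# One boolean per perception frame: was the scene fully recognized?
# Assume a 5% chance per frame of an out-of-ontology object.
frames = [random.random() > 0.05 for _ in range(10_000)]
actions = [naive_policy(r) for r in frames]
brake_fraction = actions.count("brake") / len(actions)
print(f"fraction of frames spent braking: {brake_fraction:.3f}")
```

At, say, ten perception frames per second, braking on ~5% of frames means slamming the brakes every couple of seconds - a slow and frustrating ride, exactly as described.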

Quarantine Preparations
57 · 1d · 1 min read

A month ago I wrote about disaster preparedness, and while the current coronavirus outbreak had already started, it wasn't something I knew about yet. Now that there's a real possibility that it will spread globally, it's worth preparing for this specific disaster.

The ideal time to start thinking about how to respond was probably several weeks ago: some supplies like masks are already hard to find or very expensive. On the other hand, paying enough attention to potential issues that you catch them early is pretty unpleasant unless you enjoy it as a hobby. This is a strong advantage of pr... (Read more)

That may be true, but it is not a product of the general public not knowing UDT. A large number of people don't think or act in a CDT way either, and a lot of people that don't care for decision theory follow the categorical imperative.

1 · MakoYass · 4h: I agree with avturchin, it's an appropriate thought to be having. UDT-like reasoning is actually fairly common in populations that have not been tainted with CDT rationality (i.e., normal people) (usually it is written off by CDT rationalists as moralising or collectivism). This line of thinking doesn't require exact equivalence; the fact that there are many other people telling many other communities to prep is enough that all of those communities should consider the aggregate effects of that reasoning process. They are all capable of saying "what if everyone else did this as well? Wouldn't it be bad? Should we really do it?"
3 · John_Maxwell · 4h: I think you could also argue that panic buying of this sort makes our supply chain more resilient in the event of an actual disaster, since warehouse owners will have an incentive to stockpile goods that people might hoard?
2 · Matthew Barnett · 11h: Where would you place global economic depression on your bimodal distribution? See my shortform post [].


As the story goes, there was once a programmer with a bug. They wanted help solving the bug, so they asked a colleague. In the process of explaining the bug to the colleague, they solved it.

The theory is that in the process of explaining the bug, the programmer was forced to unravel parts of their model for how their code worked. In the process of this unraveling, they discovered parts of their model that were false, which led to a solution.

The programmer realized that they could just use a rubber duck instead of a human.

(And they say that programmers are going to be among the la

... (Read more)
4 · NANO · 9h: Thank you a lot for making these posts. I have read each one of them and they are helping me in my daily life.

You're welcome! Glad you're finding them helpful. Any insights you want to share?

Matthew Barnett's ShortformΩ
7 · 7mo · 1 min read · Ω 2

I intend to use my shortform feed for two purposes:

1. To post thoughts that I think are worth sharing that I can then reference in the future in order to explain some belief or opinion I have.

2. To post half-finished thoughts about the math or computer science thing I'm learning at the moment. These might be slightly boring and for that I apologize.

It makes it much easier for people to dox you. There are some very bad ways that this can manifest.

I agree with this, so my original advice was aimed at people who already made the decision to make their pseudonym easily linkable to their real name (e.g., their real name is easily Googleable from their pseudonym). I'm lucky in that there are lots of ethnic Chinese people with my name so it's hard to dox me even knowing my real name, but my name isn't so common that there's more than one person with the same full name in the rationalist/EA space. (Even t

... (read more)
2 · Dagon · 7h: If you knew that then, it was actionable. If you know it now, and other traders also do, it's not.
2 · Matthew Barnett · 6h: [ETA: I'm writing this now to cover myself in case people confuse my short form post as financial advice or something.] To be clear, and for the record, I am not saying that I had exceptional foresight, or that I am confident this outbreak will cause a global depression, or that I knew for sure that selling stock was the right thing to do a month ago. All I'm doing is pointing out that if you put together basic facts, then the evidence points to a very serious potential outcome, and I think it would be irrational at this point to place very low probabilities on doomy outcomes like the global population declining this year for the first time in centuries. People seem to be having weird biases that cause them to underestimate the risk. This is worth pointing out, and I pointed it out before.
2 · Matthew Barnett · 7h: As I said, I wrote a post about the risk about a month ago...
Reviewing the Review
30 · 8h · 9 min read

We just spent almost two months reviewing the best posts of 2018. It was a lot of development work, and many LW users put in a lot of work to review and vote on things. 

We’ve begun work on the actual printed book, which’ll be distributed at various conferences and events as well as shipped to the featured authors. I expect the finished product to influence the overall effect of the Review. But meanwhile, having completed the “review” part, I think there’s enough information to start asking: 

Was it worth it? Should we do it again? How should we do it differently?

Was it worth it? Shoul

... (Read more)
Hoarding and Shortages
16 · 16h · 1 min read

One of the main responses to yesterday's post on preparing for a potential quarantine was something like:

Hoarding causes shortages. Leave masks for people that need them.
Another commenter made a similar argument with food.

I think the biggest question here is whether you think there's time and capacity for producers to react to increased demand. For example, some mask factories are not running right now because they're in affected areas, but many others are still running. More people trying to buy masks raises the market price, which makes it worth it for these factories ... (Read more)

Nobody's talking about DESTROYING the things you buy, are they? Zero-sum isn't negative-sum. There's a very real question of "can I decide how to use these better than a random less-foresightful person (who wanted to buy it later, but was unable because I bought the last)"? For me, the answer is clearly "yes". If that's by re-selling (or giving away) my surplus, great! If that's by keeping my family safe instead of someone else's, I can live with that.

As long as there is limited supply and unlimited (or just very large) demand, you're doing no harm and some potential good by buying early. This is true on any timescale.

0. Introduction: why yet another post about subagents?

I’ve recently been writing a sequence on how subagents can undermine impact penalties such as attainable utility preservation. I’m not happy with that sequence; it’s messy and without examples (apart from its first post), people didn’t understand it, and it suffers from the fact that I discovered key ideas as I went along.

So I’ve combined everything there into a single post, explained with examples and an abundance of pictures. Hopefully an over- rather than an under-abundance of pictures. Of the original sequence, I've only kept the mathe

... (Read more)
2 · Gurkenglas · 16h: It's only equal to the inaction baseline on the first step. It has the step of divergence always be the last step. Note that the stepwise pi0 baseline suggests using different baselines per auxiliary reward, namely the action that maximizes that auxiliary reward. Or equivalently, using the stepwise inaction baseline where the effect of inaction is that no time passes. I'll also remind here that it looks like instead of merely maximizing the auxiliary reward as a baseline, we ought to also apply an impact penalty to compute the baseline.
2 · Stuart_Armstrong · 14h: I'm not following you here. Could you put this into equations/examples?

Here are three sentences that might illuminate their respective paragraphs. If they don't, ask again.

The stepwise inaction baseline with inaction rollouts already uses the same policy for and rollouts, and yet it is not the inaction baseline.

Why not set ?

Why not subtract from every (in a fixpointy way)?

Open & Welcome Thread - February 2020
16 · 22d · 1 min read

If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)

And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.

The Open Thread sequence is here.

I share this reaction. I think that a lot of people are under-reacting due to misperception of overreaction, signaling wisdom and vague outside view stuff. I can tell because so far everyone who has told me to "stop panicking" won't give me any solid evidence for why my fears are underrated.

It now seems plausible that unless prominent epidemiologists are just making things up and the death rate is also much smaller than its most commonly estimated value, between 60 and 160 million people will die from it within about a year. Yet when I tell people this they just brush it off!
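For transparency, here is the kind of arithmetic that produces figures in that range. The attack rates and fatality rates below are assumed inputs chosen to bracket the 60-160 million span, not a forecast:

```python
# Assumed inputs for illustration only, not endorsed estimates.
world_pop = 7.8e9

low  = world_pop * 0.40 * 0.020   # 40% of people infected, 2.0% fatality rate
high = world_pop * 0.80 * 0.025   # 80% of people infected, 2.5% fatality rate

print(f"{low / 1e6:.0f} - {high / 1e6:.0f} million deaths")
```

The point is just that the headline number is a simple product of three quantities, so anyone can check which input they actually disagree with before brushing the conclusion off.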

[Event]March Slate Star Codex Meetup
2 · Mar 21st · Washington

Monthly discussion meetup for March 2020.

For more details see the announcement on Google Groups:!topic/dc-slatestarcodex/quCnTBB7qMw

What information about the virus' nature and spread would cause you to believe it's too risky to continue holding workshops?

Answer by evhubFeb 25, 202010

The CDC is currently warning that pandemic COVID-19 in the U.S. is likely and is currently moving its focus from prevention to mitigation. Specifically, the CDC has said that while they are “continuing to hope that we won't see [community] spread,” the current goal is “that our measures give us extra time to prepare.” Once spread within the US is confirmed, the CDC has noted that mitigation measures will likely include “social distancing, school closures, canceling mass gatherings, [...] telemedicine, teleschooling, [and] teleworking.” As CFAR workshop

... (read more)

There is one thing that really struck me after reading HPMOR: a certain pattern of events that repeats many times.

1) Harry gets into grave trouble due to his self-assurance and indiscretion

2) The author saves Harry using deus ex machina

And this makes me wonder - was Harry intentionally shown as an anti-example of rationality, or did it just happen this way?

1 · ndee · 16h: "it's entirely possible that if Harry hadn't gotten out to pass a note, someone would have gone back in time to investigate his death, and inadvertently caused a paradox by unlocking the door." Sounds like too much of a stretch to me. Doesn't this make Harry virtually immortal unless something so catastrophic happens that it destroys all the world at once? "in chapter 28 when he used transfiguration to apply force." I don't remember that part; could you point me to it?

It is a stretch, which is why it needed to be explained.

And yes, it would kind of make him immune to dying... in cases where he could be accidentally rescued. Cases like a first year student's spell locking a door, which an investigator could easily dispel when trying to investigate.

Oh, and I guess once it was established, the other time travel scenes would have had to be written differently. Or at least clarify that "while Draco's murder plot was flimsy enough that the simplest timeline was the timeline in which it failed, Quirrel's mu

... (read more)
2 · Pattern · 17h: That's what the end of the book is about.
1 · ndee · 16h: The end of the book looks like Harry's worst case of self-assurance and indiscretion to me.

Previously: Slack

In a couple earlier articles I urged people to adopt strategies that reliably maintain a margin of "30% slack." I've seen lots of people burn out badly (myself included), and preserving a margin of resources such that you don't risk burning out seems quite important to me. 

But I realized a) "30% slack" isn't very clear, and b) this is an important enough concept it should really have a top-level post.

So, to be a bit more obvious:

Maintain enough slack that you can absorb 3 surprise problems happening to you in a week, without dipping into reserves.

"Surprise problems" can t

... (Read more)
Bayesian Evolving-to-ExtinctionΩ
35 · 11d · 4 min read · Ω 16

The present discussion owes a lot to Scott Garrabrant and Evan Hubinger.

In Defining Myopia, I formalized temporal or cross-instance myopia / non-myopia, but I claimed that there should also be some kind of single-instance myopia which I hadn't properly captured. I also suggested this in Predict-O-Matic.

This post is intended to be an example of single-instance partial agency.

Evolving to Extinction

Evolution might be myopic in a number of ways, but one way is that it's myopic across individuals -- it typically produces results very different from what group selection would produce, because it's

... (Read more)
Or just bad implementations do this - predict-o-matic as described sounds like a bad idea, and like it doesn't contain hypotheses, so much as "players"*. (And the reason there'd be a "side channel" is to understand theories - the point of which is transparency, which, if accomplished, would likely prevent manipulation.)

You can think of the side-channel as a "bad implementation" issue, but do you really want to say that we have to forego diagnostic logs in order to have a good implementation of "hypotheses" ... (read more)

2 · abramdemski · 15h: Ah right! I meant to address this. I think the results are more muddy (and thus don't serve as clear illustrations so well), but, you do get the same thing even without a side-channel.
3 · abramdemski · 15h: Yeah, in probability theory you don't have to worry about how everything is implemented. But for implementations of Bayesian modeling with a rich hypothesis class, each hypothesis could be something like a blob of code which actually does a variety of things. As for "want", sorry for using that without unpacking it. What it specifically means is that hypotheses like that will have a tendency to get more probability weight in the system, so if we look at the weighty (and thus influential) hypotheses, they are more likely to implement strategies which achieve those ends.
31 Laws of Fun
50 · 11y · 8 min read

So this is Utopia, is it?  Well
I beg your pardon, I thought it was Hell.
        -- Sir Max Beerbohm, verse entitled
        In a Copy of More's (or Shaw's or Wells's or Plato's or Anybody's) Utopia

This is a shorter summary of the Fun Theory Sequence with all the background theory left out - just the compressed advice to the would-be author or futurist who wishes to imagine a world where people might actually want to live:

  1. Think of a typical day in the life of someone who's been adapting to Utop
... (Read more)

I think you may be overlooking that this is a guide for fictional utopias. I’m not sure a good story could be written about a world full of humans in a vegetative bliss state. But maybe it can! :)

This post was written for Convergence Analysis.


We introduce the concept of memetic downside risks (MDR): risks of unintended negative effects that arise from how ideas “evolve” over time (as a result of replication, mutation, and selection). We discuss how this concept relates to the existing concepts of memetics, downside risks, and information hazards.

We then outline four “directions” in which ideas may evolve: towards simplicity, salience, usefulness, and apparent usefulness. For each “direction”, we give an example to illustrate how an idea mutating in that direction could have n

... (Read more)


Sometimes you really like someone, but you can't for the life of you understand why. By all means, you should have tired of them long ago, but you keep coming back for more. Welcome, my friend, to Topology.

This book is a good one, but boy was it slow (349 pages at ~30 minutes a page, on average). I just kept coming back, and I was slowly rewarded each time I did.

Note: sil ver already reviewed Topology.


Topology is about what it means for things to be "close" in a very abstract and general sense. Rather than taking on the monstrous task of intuitively explaining topology witho

... (Read more)
2 · Gurkenglas · 16h: Huh? What open set in R contains no rational numbers but 0?

Yikes, you’re right. Oops. Wrote that part early on my way through the book. Removed the section because I don’t think it was too insightful anyways.
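For reference, the fact behind the retraction fits in a line: the rationals are dense in the reals, so any open set containing 0 must contain nonzero rationals. A sketch (using only the Archimedean property):

```latex
0 \in U \text{ open}
\;\Rightarrow\; \exists\, \varepsilon > 0:\ (-\varepsilon, \varepsilon) \subseteq U
\;\Rightarrow\; \exists\, n \in \mathbb{N}:\ \tfrac{1}{n} < \varepsilon
\;\Rightarrow\; \tfrac{1}{n} \in U \cap \bigl(\mathbb{Q} \setminus \{0\}\bigr).
```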
