The MIRI Summer Fellows Program is having a writing day, where the participants are given a whole day to write whatever LessWrong / AI Alignment Forum posts they like. This is to practice the skills needed for forum participation as a research strategy.
On an average day LessWrong gets about 5-6 posts, but last year this event generated 28 posts. It's likely something similar will happen again, starting mid-to-late afternoon PDT.
It's pretty overwhelming to try to read-and-comment on 28 posts in a day, so we're gonna make sure this isn't the only chance you'll have to interact with th... (Read more)
Taxes are typically meant to be proportional to money (or negative externalities, but that's not what I'm focusing on). But one thing money buys you is flexibility, which can be used to avoid taxes. Because of this, taxes aimed at the wealthy tend to end up hitting the well-off-or-rich-but-not-truly-wealthy harder, and tax cuts aimed at the poor end up helping the middle class. Examples (feel free to stop reading these when you get the idea, this is just the analogy section of the essay):
Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls.
Perhaps three of the ten balls will be red, and you’ll correctly guess how many red balls total were in the urn. Or perhaps you’ll happen to grab four red balls, or some other number. Then you’ll probably get the total number wrong.
This random error is the cost of incomplete knowledge, and as errors go, it’s not so bad. Your estimates won’t be incorrect on average, and the more you learn, the smaller your error will tend to be.... (Read more)
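The urn example above can be simulated directly. The sketch below (my own illustration, not from the post; function names are hypothetical) draws ten balls from a 70-white / 30-red urn many times, scales each sample up to an estimate of the total red count, and shows that the estimates are noisy but unbiased on average:

```python
import random

def draw_estimate(n_white=70, n_red=30, sample_size=10, rng=None):
    """Draw sample_size balls without replacement and estimate total reds."""
    rng = rng or random.Random()
    urn = ["red"] * n_red + ["white"] * n_white
    sample = rng.sample(urn, sample_size)
    red_in_sample = sample.count("red")
    # Scale the sample proportion up to the full urn.
    return red_in_sample * (n_white + n_red) // sample_size

rng = random.Random(0)
estimates = [draw_estimate(rng=rng) for _ in range(10_000)]
mean_estimate = sum(estimates) / len(estimates)
print(mean_estimate)  # clusters near the true count of 30
```

Any single draw can be off by ten or twenty balls, but the average over many draws sits near 30, which is the "not incorrect on average" property the excerpt describes.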
(cross posted from my personal blog)
Since middle school I've generally thought that I'm pretty good at dealing with my emotions, and a handful of close friends and family have made similar comments. Now I can see that though I was particularly good at never flipping out, I was decidedly not good at "healthy emotional processing". I'll explain later what I think "healthy emotional processing" is; right now I'm using quotes to indicate "the thing that's good to do with emotions". Here it goes...
When I was a kid I adopted a stron... (Read more)
The word “optimizer” can be used in at least two different ways.
First, a system can be an “optimizer” in the sense that it is solving a computational optimization problem. A computer running a linear program solver, a SAT-solver, or gradient descent, would be an example of a system that is an “optimizer” in this sense. That is, it runs an optimization algorithm. Let “optimizer_1” denote this concept.
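A minimal sketch of an "optimizer_1" in this sense (my own illustration, not from the post): plain gradient descent run as an optimization algorithm on a fixed objective, here f(x) = (x - 3)^2.

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Run plain gradient descent: an 'optimizer_1' in the post's sense."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)**2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges toward the minimum at x = 3
```

The point of the example is that the system is an optimizer purely by virtue of the algorithm it runs; nothing about it pushes on the world outside the computation.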
Second, a system can be an “optimizer” in the sense that it optimizes its environment. A human is an optimizer in this sense, because we robustly take actions that push our environment in a cert... (Read more)
I'm assuming a high level of credence in classic utilitarianism, that AI-Xrisk is significant (e.g. roughly >10%), and that timelines are not long (e.g. >50% ASI in <100 years).
Here's my current list (off the top of my head):
Also, does anyone want to say why they think none of these should change the picture? Or point to a good reference discussing this ... (Read more)
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.
The Open Thread sequence is here.
Moore's Law is notorious for having spawned a bunch of separate observations that all get grouped under the umbrella of "Moore's law." But as far as I can tell, the real Moore's law is a fairly narrow prediction: that the transistor count on CPU chips will double approximately every two years.
Many people have told me that in recent years Moore's law has slowed down. Some have even told me they think it's stopped entirely. For example, the AI and compute article from OpenAI uses the past tense when talking about Moore's Law, "by comparison, ... (Read more)
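To make the narrow version of the claim concrete, a quick sketch (my own illustration, not from the article) of what a fixed two-year doubling period implies for transistor counts:

```python
def projected_transistors(start_count, years, doubling_period=2.0):
    """Project transistor count under a fixed doubling period (Moore's law)."""
    return start_count * 2 ** (years / doubling_period)

# Example: a chip with 1 million transistors, after 20 years of 2-year doublings.
print(projected_transistors(1_000_000, 20))  # 10 doublings = 1024x growth
```

Whether the empirical doubling period has stretched out (or stopped) in recent years is exactly the question the post goes on to examine.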
(Or, is coordination easier in a long timeline?)
It seems like it would be good if the world could coordinate to not build AGI. That is, at some point in the future, some number of teams will have the technical ability to build and deploy an AGI, but they all agree to voluntarily delay (perhaps on penalty of sanctions) until they're confident that humanity knows how to align such a system.
Currently, this kind of coordination seems like a pretty implausible state of affairs. But I want to know if it seems like it becomes more or less plausible as time passes.
The following is my initial thi... (Read more)
I intend to use my shortform feed for two purposes:
1. To post thoughts that I think are worth sharing that I can then reference in the future in order to explain some belief or opinion I have.
2. To post half-finished thoughts about the math or computer science thing I'm learning at the moment. These might be slightly boring and for that I apologize.
This is (sort of) a response to Blatant lies are the best kind!, although I'd been working on this prior to that post getting published. This post explores similar issues through my own frame, which seems at least somewhat different from Benquo's.
I've noticed a tendency for people to use the word "lie", when they want to communicate that a statement is deceptive or misleading, and that this is important.
And I think this is (often) technically wrong. I'm not sure everyone defines lie quite the same way, but in most cases where I hear it unqualified, I usually assum... (Read more)
We are an established meetup that has been going since the 2018 "Meetups Everywhere" thread. We've often been listed on the open thread by SamChevre, but this is only the second time we've listed ourselves here. If you email me, I can add you to the email thread where we plan future meetings—or feel free to just show up!
We have a pretty healthy crowd of attendees for the size of the area, with about 4-7 folks typically turning up out of a pool of about 10 that has been slowly growing as people find us through the open thread. Most people are local, but folks come from as fa... (Read more)
Starting from this week, Richard Ngo will join me in writing summaries. His summaries are marked as such; I'm reviewing some of them now but expect to review less over time.
Introducing the Unrestricted Adversarial Examples Challenge (Tom B. Brown et al): There's a new adversarial examples contest, after the one from NIPS 2017. The goal of this contest is to figure out how to create a model that never confidently makes a mistake on a very simple task, even in the presence of a powerful adversary. This leads to many differences from the previous contest. The task is a lot sim... (Read more)
Location: Wine Bar next to the Landmark Theater in the Westside Pavilion (10850 W Pico Blvd #312, Los Angeles, CA 90064). We will move upstairs (to the 3rd floor hallway) as soon as we reach capacity.
Time: 7 pm (August 21st)
Parking: Available in the parking lot for the entire complex. The first three (3) hours are free and do not require validation (the website is unclear and poorly written, but it may be the case that if you validate your ticket and leave before three hours have passed, you will be charged $3). After that, parking is $3 for up to the fifth (5th) hour, with validation.
Contact... (Read more)
Epistemic status: just a review of a well known math theorem and a brief rant about terminology.
Yesterday I saw another example of this: ...
In light of reading through Raemon's shortform feed, I'm making my own. Here will be smaller ideas that are on my mind.
I've started keeping a list of nuggets of advice, frames, world views, and concepts that I have in my mind that seem at odds with each other. It's been useful to try and enumerate why both frames feel compelling, and why it feels like there is a conflict between the two.
I'm inviting people to use this post to document paradoxes you find in your own thinking. My biggest suggestion is to pay careful attention to why it feels like the two frames are in conflict. If it feels like it's just a case of the law of equal and opposite advice, why do you feel that both apply to you?
Do... (Read more)