To understand reality, especially on confusing topics, it's important to understand the mental processes involved in forming concepts and using words to speak about them.
For this month's open thread, we're experimenting with Inline Reacts as part of the bigger reacts experiment. In addition to reacting to a whole comment, you can apply a react to a specific snippet from the comment. When you select text in a comment, you'll see a new react button off to the side. (It's currently designed to work well only on desktop; if it goes well, we'll put more polish into getting it working on mobile.)
Right now this is enabled on a couple specific posts, and if it goes well we'll roll it out to more posts.
Meanwhile, the usual intro to Open Threads:
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the...
"AI alignment" has the application, the agenda, less charitably the activism, right in the name. It is a lot like "Missiology" (the study of how to proselytize to "the savages") which had to evolve into "Anthropology" in order to get atheists and Jews to participate. In the same way, "AI Alignment" excludes e.g. people who are inclined to believe superintelligences will know better than us what is good, and who don't want to hamstring them. You can think we're well rid of these people. But you're still excluding people and thereby reducing the amount of thinking that will be applied to the problem.
"Artificial Intention research" instead emphasizes the space of possible intentions, the space of possible minds, and stresses how intentions that are not natural (constrained by...
I think that's better called simply a coordination or cooperation problem. "Alignment" has the unfortunate implication of coming off as one party wanting to forcefully change the other. With AI it's fine, because if you're creating a mind from scratch it'd be the height of stupidity to create an enemy.
I made this clock, counting down the time left until we build AGI:

It uses the most famous Metaculus prediction on the topic, prompted by several recent drops in the expected date. Updates are automatic, so it reflects the constant fluctuations in collective opinion.
Currently, it’s sitting at 2028, i.e. the end of the next presidential term. The year of the LA Olympics. Not so far away.
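The core of a clock like this is just computing the time remaining to a target date. A minimal sketch in Python, assuming a hard-coded placeholder for the community prediction (the real clock fetches the current value automatically from Metaculus):

```python
from datetime import datetime, timezone

# Hypothetical target date standing in for the Metaculus community
# prediction; the actual clock refreshes this value automatically.
AGI_DATE = datetime(2028, 7, 1, tzinfo=timezone.utc)  # illustrative placeholder

def countdown(now: datetime, target: datetime = AGI_DATE) -> str:
    """Return a rough human-readable 'time remaining' string."""
    delta = target - now
    years, days_left = divmod(delta.days, 365)  # approximate years
    return f"{years}y {days_left}d remaining"

print(countdown(datetime(2023, 6, 1, tzinfo=timezone.utc)))
```

Since the prediction moves constantly, the display is re-derived from the fetched date on each refresh rather than decremented locally.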
There were a few motivations behind this project:
Hey! Sorry the site was down so long; I accidentally let the payment lapse. It's back up and should stay that way, if you'd still like to use it. I also added a page where you can toggle between the predictions.
But no worries if you're happy with your own. :)
EDIT: I just realized you only posted this 2 days ago! I didn't see your version before fixing the site; I had actually just logged in to update anyone who cared. :P
Some suggest there might be alien aircraft on Earth now. The argument goes something like this:
(1) A priori, there’s no reason there shouldn’t be alien aircraft. Earth is 4.54 billion years old, but the universe is 13.7 billion years old, and within a billion light years of Earth there are something like 5 × 10¹⁴ stars. Most of those stars have planets, and if an alien civilization arose anywhere and built a von Neumann probe, those probes would spread everywhere.
(2) We have tons of observations that would be more likely if there were alien aircraft around than if there weren’t. These include:
Anyone who is confident no UFOs are truly anomalous, please feel free to extend me odds for a bet here: https://www.lesswrong.com/posts/t5W87hQF5gKyTofQB/ufo-betting-put-up-or-shut-up
I have already paid out to two bettors so far, and would like some more.
“There is a single light of science. To brighten it anywhere is to brighten it everywhere.” – Isaac Asimov
You cannot stand what I’ve become
You much prefer the gentleman I was before
I was so easy to defeat, I was so easy to control
I didn’t even know there was a war
– Leonard Cohen, There is a War
“Pick a side, we’re at war.”
– Stephen Colbert, The Colbert Report
Recently, both Tyler Cowen in response to the letter establishing consensus on the presence of AI extinction risk, and Marc Andreessen on the topic of the wide range of AI dangers and upsides, have come out with posts whose arguments seem bizarrely poor.
These are both excellent, highly intelligent thinkers. Both clearly want good things to happen to humanity and the world. I am...
Running water doesn't create the conditions to permanently disempower almost everyone; AGI does. What I'm talking about isn't a situation in which initially only the rich benefit but then the tech gets cheaper and trickles down. It's a permanent trap that destroys democracy and capitalism as we know them.
Consider two claims:
These two claims should probably not both be true! If any system can be modeled as maximizing a utility function, and it is possible to build a corrigible system, then naively the corrigible system can be modeled as maximizing a utility function.
I exp...
Sometimes I have an internal desire to do something different from what I think should be done (for example, I might desire to play a game while also thinking the better choice is to read). I've been experimenting with using randomness to mediate this. I keep a D20 with me, give each side of the dispute odds proportional to the strength of its resolve, and then roll the die.
In theory, this means neither side will overpower the other, and even a small resolve still has a chance. I'm not sure how useful this is, but it's fun, and can sort of g...
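The die procedure above amounts to a weighted random choice. A minimal sketch, with the labels and weights purely illustrative (each side gets faces in proportion to the strength of its resolve):

```python
import random

def roll_between(options):
    """Simulate the D20 method: pick one option, with probability
    proportional to its weight (its share of the die's faces).

    options: list of (label, weight) pairs, e.g. [("read", 15), ("play", 5)].
    """
    labels, weights = zip(*options)
    return random.choices(labels, weights=weights, k=1)[0]

# 15:5 odds on a D20 -- "read" wins on faces 1-15, "play" on 16-20.
choice = roll_between([("read", 15), ("play", 5)])
print(choice)
```

The key property is that even the weaker desire retains a real chance of winning, so neither side is ever fully overruled.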
This post is crossposted from my blog. If you liked this post, subscribe to Lynette's blog to read more -- I only crosspost about half my content to other platforms.
If you’re going into surgery, you want the youngest operating surgeon available.
This is a slight exaggeration: you don’t want a doctor in their first year out of medical school.[1] After that, it’s less clear. One review found thirty-two studies indicating that the older a doctor was, the worse their medical outcomes; that review found only one study indicating that all outcomes got better with increasing age.[2] Other analyses suggest that middle-aged doctors might do better than younger doctors (though the effect is not statistically significant),[3] but older doctors are still clearly worse than middle-aged doctors.[4]
It’s not like doctors...
How in-depth have you looked at the studies about declining performance in doctors with age? An obvious alternative hypothesis is that doctors gain skill as they age, and therefore tend to take on higher-risk patients and procedures with worse outcomes. I am not saying that's what's going on here; I'd just like to know if this is something you've looked into.
I went and created the AI Rights and Welfare tag.