To understand reality, especially on confusing topics, it's important to understand the mental processes involved in forming concepts and using words to speak about them.
Some notes on the causal and anatomical structure of the brain-body connection, possible man-in-the-middle attacks on the human brain, and hints towards a theory of AI directability via manipulating internal causal structure.
The brain is one of the most causally isolated structures in the human body. It's got a hard outer casing (the cranium), three layers of protective soft tissue under that (meninges), rests in a shock-absorbing fluid that passively supports homeostasis (CSF), and all blood entering it is stringently filtered by the blood-brain barrier.
Why so isolated? You need a stomach to live just the same, and yet the stomach is out fighting bacteria, chemicals, and physical trauma on the front lines while the brain hides in its fortress....
For this month's open thread, we're experimenting with Inline Reacts as part of the bigger reacts experiment. In addition to being able to react to a whole comment, you can apply a react to a specific snippet from the comment. When you select text in a comment, you'll see a new react button off to the side. (Currently it's only designed to work well on desktop; if it goes well we'll put more polish into getting it working on mobile.)
Right now this is enabled on a couple specific posts, and if it goes well we'll roll it out to more posts.
Meanwhile, the usual intro to Open Threads:
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the...
"AI alignment" has the application, the agenda, less charitably the activism, right in the name. It is a lot like "Missiology" (the study of how to proselytize to "the savages") which had to evolve into "Anthropology" in order to get atheists and Jews to participate. In the same way, "AI Alignment" excludes e.g. people who are inclined to believe superintelligences will know better than us what is good, and who don't want to hamstring them. You can think we're well rid of these people. But you're still excluding people and thereby reducing the amount of thinking that will be applied to the problem.
"Artificial Intention research" instead emphasizes the space of possible intentions, the space of possible minds, and stresses how intentions that are not natural (constrained by...
I think that's better called simply a coordination or cooperation problem. "Alignment" has the unfortunate implication of coming off as one party wanting to forcefully change the other. With AI it's fine, because if you're creating a mind from scratch it'd be the height of stupidity to create an enemy.
I made this clock, counting down the time left until we build AGI:

It uses the most famous Metaculus prediction on the topic, inspired by several recent drops in the expected date. Updates are automatic, so it reflects the constant fluctuations in collective opinion.
Currently, it’s sitting at 2028, i.e. the end of the next presidential term. The year of the LA Olympics. Not so far away.
There were a few motivations behind this project:
Hey! Sorry the site was down so long; I accidentally let the payment lapse. It's back up and should stay that way, if you'd still like to use it. I also added a page where you can toggle between the predictions.
But no worries if you're happy with your own. :)
EDIT: I just realized you only posted this 2 days ago! I didn't see your version before fixing the site; I had actually just logged in to update anyone who cared. :P
Some suggest there might be alien aircraft on Earth now. The argument goes something like this:
(1) A priori, there’s no reason there shouldn’t be alien aircraft. Earth is 4.54 billion years old, but the universe is 13.7 billion years old, and within a billion light years of Earth there are something like 5 × 10¹⁴ stars. Most of those stars have planets, and if an alien civilization arose anywhere and built a von Neumann probe, those probes would spread everywhere.
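The timescales in (1) can be sanity-checked with a back-of-envelope calculation. The probe speed and galaxy size below are illustrative assumptions, not figures from the argument itself; they just show that even a slow self-replicating probe wavefront has had ample time to sweep a single galaxy many times over:

```python
# Hedged back-of-envelope sketch: how many times could a probe wavefront
# have crossed a Milky-Way-sized galaxy in the time between the universe's
# formation and Earth's? All inputs are rough, assumed values.
GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter, roughly
PROBE_SPEED_C = 0.01           # assumed wavefront speed: 1% of light speed
UNIVERSE_AGE_YR = 13.7e9
EARTH_AGE_YR = 4.54e9

# distance in light-years / speed as a fraction of c = travel time in years
crossing_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C
head_start_yr = UNIVERSE_AGE_YR - EARTH_AGE_YR
crossings = head_start_yr / crossing_time_yr

print(f"One galactic crossing: {crossing_time_yr:.0e} years")
print(f"Crossings possible before Earth even formed: {crossings:.0f}")
```

Under these assumptions a single crossing takes about ten million years, so hundreds of crossings fit into the head start alone; interstellar distances, at least within one galaxy, are not the bottleneck the a priori argument needs to worry about.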
(2) We have tons of observations that would be more likely if there were alien aircraft around than if there weren’t. These include:
Anyone who is confident that no UFOs are truly anomalous, please feel free to extend me odds for a bet here: https://www.lesswrong.com/posts/t5W87hQF5gKyTofQB/ufo-betting-put-up-or-shut-up
I have already paid out to two bettors so far, and would like some more.
“There is a single light of science. To brighten it anywhere is to brighten it everywhere.” – Isaac Asimov
You cannot stand what I’ve become
You much prefer the gentleman I was before
I was so easy to defeat, I was so easy to control
I didn’t even know there was a war
– Leonard Cohen, There is a War
“Pick a side, we’re at war.”
– Stephen Colbert, The Colbert Report
Recently, both Tyler Cowen in response to the letter establishing consensus on the presence of AI extinction risk, and Marc Andreessen on the topic of the wide range of AI dangers and upsides, have come out with posts whose arguments seem bizarrely poor.
These are both excellent, highly intelligent thinkers. Both clearly want good things to happen to humanity and the world. I am...
Running water doesn't create the conditions to permanently disempower almost everyone, AGI does. What I'm talking about isn't a situation in which initially only the rich benefit but then the tech gets cheaper and trickles down. It's a permanent trap that destroys democracy and capitalism as we know them.
Consider two claims:

(1) Any system can be modeled as maximizing a utility function.

(2) It is possible to build a corrigible system.

These two claims should probably not both be true! If any system can be modeled as maximizing a utility function, and it is possible to build a corrigible system, then naively the corrigible system can be modeled as maximizing a utility function.
I exp...
Sometimes I have an internal desire to do something different from what I think should be done (for example, I might desire to play a game while also thinking the better choice is to read). I've been experimenting with using randomness to mediate this. I keep a D20 with me, give each side of the dispute some odds proportional to the strength of its resolve, and then roll the die.
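The face-allocation step can be sketched in a few lines. This is a minimal sketch, not the author's actual procedure; the option names and weights are hypothetical, and faces are apportioned by largest remainder so they always sum to exactly 20:

```python
import random

def d20_faces(options):
    """Split the 20 faces of a D20 among options in proportion to each
    side's stated resolve (hypothetical weights), via largest-remainder
    apportionment so the face counts sum to exactly 20."""
    total = sum(w for _, w in options)
    quotas = [(name, 20 * w / total) for name, w in options]
    faces = {name: int(q) for name, q in quotas}
    leftover = 20 - sum(faces.values())
    # Hand remaining faces to the options with the largest fractional parts
    by_remainder = sorted(quotas, key=lambda t: t[1] - int(t[1]), reverse=True)
    for name, _ in by_remainder[:leftover]:
        faces[name] += 1
    return faces

def roll(options):
    """Roll the (virtual) D20 and return the winning option."""
    faces = d20_faces(options)
    r = random.randint(1, 20)
    for name, n in faces.items():
        if r <= n:
            return name
        r -= n

# Hypothetical dispute: "read" has three times the resolve of "play"
print(d20_faces([("read", 3), ("play", 1)]))  # {'read': 15, 'play': 5}
```

With a 3:1 split, "read" claims faces 1–15 and "play" faces 16–20, matching the intended odds while still giving the weaker desire a real chance on every roll.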
In theory, this means neither side will overpower the other, and even a small resolve still has a chance. I'm not sure how useful this is, but it's fun, and can sort of g...
I went and created the AI Rights and Welfare tag.