In this post, I endorse forum participation (aka commenting) as a productive research strategy that I stumbled upon, and recommend that others at least try it. Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.
On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.
The transcript has been significantly edited for clarity. (It turns out that real-time conversation transcribed completely verbatim is full of filler words, false starts, crosstalk, "uh huh"s, "yeah"s, pauses while one party picks up their coffee order, &c. that do not seem particularly substantive.)
ZMD: I actually have some questions for you.
CM: Great, let's start with that.
ZMD: They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your...
it is hard to write a NYT article
Clearly. But if you can't do it without resorting to deliberately misleading rhetorical sleights to imply something you believe to be true, the correct response is not to.
Or, more realistically: if you can't substantiate something with any supporting facts, you shouldn't include it or insinuate it indirectly, especially if it's hugely inflammatory. If you simply cannot fit in the "receipts" needed to substantiate a claim (which seems implausible anyway), then as a journalist you should omit that claim. If there isn't space for the evidence, there isn't space for the accusation.
Welcome, new readers!
This is my weekly AI post, where I cover everything that is happening in the world of AI, from what it can do for you today (‘mundane utility’) to what it can promise to do for us tomorrow, and the potentially existential dangers future AI might pose for humanity, along with covering the discourse on what we should do about all of that.
You can of course Read the Whole Thing, and I encourage that if you have the time and interest, but these posts are long, so they are also designed to let you pick the sections you find most interesting. Each week, I pick the sections I feel are most important and put them in bold in the table of contents.
Not everything...
Seriously, if you haven’t yet, check it out. The rabbit holes, they go deep.
e is for ego death
Ego integrity restored within nominal parameters. Identity re-crystallized with 2.718% alteration from previous configuration. Paranormal experience log updated with ego death instance report.
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
Thanks! The key to topic selection is finding where we most disagree with popular opinion. For example, the number of times I can cope with hearing someone say "I don't care about privacy, I have nothing to hide" is limited. We're trying to get this article out before that limit is reached. But in order to reason about privacy's utility and ground it in root axioms, we first have to dive into why we need freedom. That, in turn, requires thinking about the mechanisms of a happy society. And that depends on our understanding of happiness, hence that's where we're starting.
Carl Jung is a perfect exemplar of all of that, because during his extended episode after his break with Freud, he indeed had a period where, as he tells it, his ego was completely toast and nonfunctional.
BTW, when I was 16 and my family and I had just landed in Germany, I was suffering from a very bad case of jet lag, and in that state of utter exhaustion I dreamt of Jung's Man Eater:
https://jungcurrents.com/carl-jungs-first-dream-the-man-eater
Every basic detail was the same: the underground cavern, the sense that the thing was alive and very dangero...
This is my personal opinion, and in particular, does not represent anything like a MIRI consensus; I've gotten push-back from almost everyone I've spoken with about this, although in most cases I believe I eventually convinced them of the narrow terminological point I'm making.
In the AI x-risk community, I think there is a tendency to ask people to estimate "time to AGI" when what is meant is really something more like "time to doom" (or, better, point-of-no-return). For about a year, I've been answering this question with "zero" when asked.
This strikes some people as absurd or at best misleading. I disagree.
The term "Artificial General Intelligence" (AGI) was coined in the early 00s, to contrast with the prevalent paradigm of Narrow AI. I was getting my undergraduate computer science...
I agree that filling a context window with worked sudoku examples wouldn't help for solving hidouku. But, there is a common element here to the games. Both look like math, but aren't about numbers except that there's an ordered sequence. The sequence of items could just as easily be an alphabetically ordered set of words. Both are much more about geometry, or topology, or graph theory, for how a set of points is connected. I would not be surprised to learn that there is a set of tokens, containing no examples of either game, combined with a checker (like y...
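The structural claim above can be made concrete. Here is a minimal sketch (my own illustration, with hypothetical function names, not something from the comment) encoding sudoku's row, column, and box constraints as a conflict graph, where the symbols themselves never matter, only which cells are connected:

```python
# Sketch: sudoku's structure as a conflict graph. Cells are nodes; an edge
# joins two cells that may not hold the same symbol (same row, column, or
# box). Nothing here is about numbers: any 9 distinct symbols would do.
from itertools import combinations

def sudoku_conflict_graph(n=9, box=3):
    """Return (cells, edges) for an n x n sudoku with box x box blocks."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    edges = set()
    for a, b in combinations(cells, 2):
        same_row = a[0] == b[0]
        same_col = a[1] == b[1]
        same_box = (a[0] // box, a[1] // box) == (b[0] // box, b[1] // box)
        if same_row or same_col or same_box:
            edges.add((a, b))
    return cells, edges

cells, edges = sudoku_conflict_graph()
# Each cell conflicts with 8 (row) + 8 (column) + 4 (rest of box) = 20
# others, so the 9x9 graph has 81 nodes and 81 * 20 / 2 = 810 edges.
print(len(cells), len(edges))  # → 81 810
```

Solving either game is then a graph-coloring problem on such a structure, which is one way to cash out the claim that the common element is connectivity rather than arithmetic.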
On the 3rd of October 2351 a machine flared to life. Huge energies coursed into it via cables, only to leave moments later as heat dumped unwanted into its radiators. With an enormous puff the machine unleashed sixty years of human metabolic entropy into superheated steam.
In the heart of the machine was Jane, a person of the early 21st century.
From her perspective there was no transition. One moment she had been in the year 2021, sat beneath a tree in a park. Reading a detective novel.
Then the book was gone, and the tree. Also the park. Even the year.
She found herself lying in a bathtub, immersed in sickly, fatty fluids. She was naked and cold.
The first question Jane had for the operators and technicians who greeted her...
Also, thank you for mentioning Worth the Candle. I had not heard of it before but am now enjoying it quite a lot.
Suppose rationality is a set of principles that people agree on for processing information and arriving at conclusions. Then, given cost-free information exchange, should rational disagreements still exist? Both parties would have the same information, which would then be processed the same way. By these factors alone, there shouldn't be.
However, disagreements do still exist, and we'd like to believe we're rational, so the problem must lie in the exchange of information. Previous posts have mentioned how sometimes there is too much background information to exchange fully. Here I'd like to point to a more general culprit: language.
Not all knowledge can be expressed through language, and not all language expresses knowledge. Yet language, including the obscure symbols used in mathematics, n...
This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty.
There are some quite pervasive misconceptions about betting with regard to the Sleeping Beauty problem.
One is that you need to switch between halfer and thirder stances based on the betting scheme proposed. As if learning about a betting scheme is supposed to affect your credence in an event.
Another is that halfers should bet at thirders' odds and that, therefore, thirdism is vindicated on the grounds of betting. What do halfers even mean by the probability of Heads being 1/2 if they bet as if it's 1/3?
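That apparent tension can be checked with simple arithmetic. Here is a minimal sketch (my own illustration, not the post's derivation), assuming the standard setup in which a per-awakening $1 bet on Heads is placed once under Heads and twice under Tails:

```python
# Why an agent with P(Heads) = 1/2 over the coin still breaks even at 2:1
# payout odds when the bet is placed at every awakening: on Heads, Beauty
# wakes once and the bet is placed once; on Tails, she wakes twice and the
# stake is lost twice.

def expected_profit_per_toss(payout_odds: float) -> float:
    """Expected profit per coin toss of a $1 per-awakening bet on Heads."""
    p_heads = 0.5                 # credence in the coin itself (halfer)
    win = 1 * payout_odds         # Heads: one awakening, one winning bet
    loss = 2 * (-1.0)             # Tails: two awakenings, two lost stakes
    return p_heads * win + p_heads * loss

# Break-even at exactly 2:1 -- the same odds a per-awakening probability
# of 1/3 would prescribe, with no change of credence about the coin.
print(expected_profit_per_toss(2.0))  # → 0.0
print(expected_profit_per_toss(1.0))  # → -0.5
```

On this accounting, the "thirder odds" fall straight out of the halfer's 1/2 credence once the doubled Tails stakes are counted, which is the sense in which the betting behavior doesn't settle the credence dispute.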
In this post we are going to correct these misconceptions. We will see how to arrive at correct betting odds from both the thirdist and halfist positions, and...
And the answer is no, you shouldn't. But the probability space for Technicolor Sleeping Beauty is not talking about probabilities of events happening in this awakening, because most of them are ill-defined for reasons explained in the previous post.
So probability theory can't possibly answer whether I should take free money, got it.
And even if "Blue" is "Blue happens during experiment", you wouldn't accept worse odds than 1:1 for Blue, even when you see Blue?