In this post, I endorse forum participation (aka commenting) as a productive research strategy that I stumbled upon, and recommend that others at least try it. Note that this is different from saying that forum/blog posts are a good way for a research community to communicate; it's about individually doing better as researchers.

yanni
I like the fact that, even though they were not (relatively) young when they died, the LW banner states that Kahneman & Vinge died "FAR TOO YOUNG", pointing to the fact that death is always bad and/or that it is bad when people die while still making positive contributions to the world (Kahneman published "Noise" in 2021!).
I thought I didn’t get angry much in response to people making specific claims. I did some introspection about times in the recent past when I got angry, defensive, or withdrew from a conversation in response to claims that the other person made. I think these are the mechanisms that made me feel that way:

* They were very confident about their claim. Partly I felt annoyance because I didn’t feel like there was anything that would change their mind, partly I felt annoyance because it felt like they didn’t have enough status to make very confident claims like that. This is linked more to confidence in body language and tone than to their confidence in their own claims, though both matter.
* Credentialism: them being unwilling to explain things and taking it as a given that they were correct because I didn’t have the specific experiences or credentials that they had, without mentioning what specifically from gaining that experience would help me understand their argument.
* Not letting me speak and interrupting quickly to take down the fuzzy strawman version of what I meant rather than letting me take my time to explain my argument.
* Morality: I felt like one of my cherished values was being threatened.
* The other person was relatively smart and powerful, at least within the specific situation. If they were dumb or not powerful, I would have just found the conversation amusing instead.
* The other person assumed I was dumb or naive, perhaps because they had met other people with the same position as me and those people came across as not knowledgeable.
* The other person getting worked up, for example raising their voice or showing other signs of being irritated, offended, or angry, while acting as if I was the emotional/offended one. This one particularly stings because of gender stereotypes. I think I’m more calm and reasonable and less easily offended than most people. I’ve had a few conversations with men where it felt like they were just really bad at noticing when they were getting angry or emotional themselves and kept pointing out that I was being emotional despite me remaining pretty calm (and perhaps even a little indifferent to the actual content of the conversation before the conversation moved to them being annoyed at me for being emotional).
* The other person’s thinking was very black-and-white, in terms of a very clear good and evil, and not open to nuance. Sort of a similar mechanism to the first thing.

Some examples of claims that recently triggered me. They’re not so important themselves, so I’ll just point at the rough thing rather than list out actual claims.

* AI killing all humans would be good because thermodynamics god/laws of physics good
* Animals feel pain but this doesn’t mean we should care about them
* We are quite far from getting AGI
* Women as a whole are less rational than men are
* Palestine/Israel stuff

Doing the above exercise was helpful because it helped me generate ideas for things to try if I’m in situations like that in the future. But it feels like the most important thing is to just get better at noticing what I’m feeling in the conversation, and if I’m feeling bad and uncomfortable, to think about whether the conversation is useful to me at all and, if so, for what reason. And if not, to make a conscious decision to leave the conversation.
Reasons the conversation could be useful to me:

* I change their mind
* I figure out what is true
* I get a greater understanding of why they believe what they believe
* Enjoyment of the social interaction itself
* I want to impress the other person with my intelligence or knowledge

Things to try will differ depending on why I feel like having the conversation.
Novel Science is Inherently Illegible

Legibility, transparency, and open science are generally considered positive attributes, while opacity, elitism, and obscurantism are viewed as negative. However, increased legibility in science is not always beneficial and can often be detrimental. Scientific management, with some exceptions, likely underperforms compared to simpler heuristics such as giving money to smart people or implementing grant lotteries. Scientific legibility suffers from the classic "Seeing Like a State" problems: it constrains endeavors to the least informed stakeholder, hinders exploration, inevitably biases research to be simple and myopic, and exposes researchers to a constant political tug-of-war between different interest groups, poisoning objectivity.

I think the above would be considered relatively uncontroversial in EA circles. But I posit there is something deeper going on: novel research is inherently illegible. If it were legible, someone else would have already pursued it. As science advances, her concepts become increasingly counterintuitive and further from common sense. Most of the legible low-hanging fruit has already been picked, and novel research requires venturing higher into the tree, pursuing illegible paths with indirect and hard-to-foresee impacts.
Recently someone either suggested to me (or maybe told me they or someone else were going to do this?) that we should train AI on legal texts, to teach it human values. Ignoring the technical problem of how to do this, I'm pretty sure legal texts are not the right training data. But at the time, I could not clearly put into words why. Today's SMBC explains this for me: Saturday Morning Breakfast Cereal - Law (smbc-comics.com) Law is not a good representation or explanation of most of what we care about, because it's not trying to be. Law is mainly focused on the contentious edge cases. Training an AI on trolley problems and other ethical dilemmas is even worse, for the same reason.
habryka
A thing that I've been thinking about for a while has been how to make LessWrong into something that could give rise to more personal wikis and wiki-like content. Gwern's writing has a very different structure and quality to it than the posts on LW, with the key components being that it gets updated regularly and serves as a more stable reference for some concept, as opposed to a post, which is usually anchored in a specific point in time.

We have a pretty good wiki system for our tags, but never really allowed people to just make their personal wiki pages, mostly because there isn't really any place to find them. We could list the wiki pages you created on your profile, but that doesn't really seem like it would allocate attention to them successfully.

I was thinking about this more recently as Arbital is going through another round of slowly rotting away (its search is currently broken and this is very hard to fix due to annoying Google App Engine restrictions), and I was thinking about importing all the Arbital content into LessWrong. That might be a natural time to do a final push to enable people to write more wiki-like content on the site.


Recent Discussion

About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory".

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology (partially...

Falsifiable predictions?

Wei Dai
While reading this, I got a flash-forward of what my life (our lives) may be like in a few years, i.e., desperately trying to understand and evaluate complex philosophical constructs presented to us by superintelligent AI, which may or may not be actually competent at philosophy.
Scott Garrabrant
I think Chris Langan and the CTMU are very interesting, and I think there is an interesting and important challenge for LW readers to figure out how (and whether) to learn from Chris. Here are some things I think are true about Chris (and about me) and relevant to this challenge. (I do not feel ready to talk about the object-level CTMU here; I am mostly just talking about Chris Langan.)

1. Chris has a legitimate claim to being approximately the smartest man alive according to IQ tests.
2. Chris wrote papers/books that make up a bunch of words that are defined circularly and are difficult to follow. It is easy to mistake him for a complete crackpot.
3. Chris claims to have proven the existence of God.
4. Chris has been something-sort-of-like-canceled for a long time. (In the way that seems predictable when "World's Smartest Man Proves Existence of God.")
5. Chris has some followers that I think don't really understand him. (In the way that seems predictable when "World's Smartest Man Proves Existence of God.")
6. Chris acts socially in a very nonstandard way that seems like a natural consequence of having a much higher IQ than anyone else he has ever met. In particular, I think this manifests in part as an extreme lack of humility.
7. Chris is actually very pleasant to talk to if (like me) it does not bother you that he acts like he is much smarter than you.
8. I personally think the proof of the existence of God is kind of boring. It reads to me as kind of like "I am going to define God to be everything. Notice how this meets a bunch of the criteria people normally attribute to God. In the CTMU, the universe is mind-like. Notice how this meets a bunch more criteria people normally attribute to God."
9. While the proof of the existence of God feels kind of mundane to me, Chris is the kind of person who chooses to interpret it as a proof of the existence of God. Further, he also has other more concrete supernatural-like and conspiracy-theory-like beliefs,
romeostevensit
Thoughts: Interesting asymmetry: languages don't constrain parsers much (maybe a bit, very broadly conceived), but a parser does constrain language, or which sequences it can derive meaning from. Unless the parser can extend/modify itself? Langan seems heavily influenced by Quine, which I think is a good place to start, as that seems to be about where philosophical progress petered out. In particular, Quine's assertion that scientific theories create ontological commitments to the building blocks they are made from 'really existing', to which Langan's response seems to be 'okay, let's build a theory out of tautologies then.' This rhymes with Kant's approach, and then Langan goes further by trying to really get at what 'a priori' as a construct is really about. I'm not quite sure how this squares with Quine's indeterminacy: that any particular data is evidence not only for the hypothesis you posed (which corresponds to some of Langan's talk of binary yes-no questions as a conception of quantum mechanics) but also for a whole family of hypotheses, most of which you don't know about, that define all the other universes that the data you observed is consistent with.

On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.

The transcript has been significantly edited for clarity. (It turns out that real-time conversation transcribed completely verbatim is full of filler words, false starts, crosstalk, "uh huh"s, "yeah"s, pauses while one party picks up their coffee order, &c. that do not seem particularly substantive.)


ZMD: I actually have some questions for you.

CM: Great, let's start with that.

ZMD: They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your...

The epistemology was not bad behind the scenes; it was just not presented to the readers. That is unfortunate, but it is hard to write a NYT article (there are limits on how many receipts you can put in an article, and some of the sources may have been off the record).

I'd have more trust in the writing of a journalist who presents what they believe to be the actual facts in support of a claim, than one who publishes vague insinuations because writing articles is hard.

Cade correctly informed the readers that Scott is aligned with Murray on race and IQ.

tailcalled
It's totally possible to say taboo things, I do it quite often. But my point is more, this doesn't seem to disprove the existence of the tension/Motte-Bailey/whatever dynamic that I'm pointing at.
Wei Dai
Many comments pointed out that NYT does not in fact have a consistent policy of always revealing people's true names. There's even a news editorial about this, which I point out in case you trust the fact-checking of the NY Post more. I think that leaves 3 possible explanations of what happened:

1. NYT has a general policy of revealing people's true names, which it doesn't consistently apply but ended up applying in this case for no particular reason.
2. There's an inconsistently applied policy, and Cade Metz's (and/or his editors') dislike of Scott contributed (consciously or subconsciously) to insistence on applying the policy in this particular case.
3. There is no policy and it was a purely personal decision.

In my view, most rationalists seem to be operating under a reasonable probability distribution over these hypotheses, informed by evidence such as Metz's mention of Charles Murray, the lack of a public written policy about revealing real names, and the lack of evidence that a private written policy exists.
Stephen Bennett
What do you think Metz did that was unethical here?

This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty.

Introduction

There are some quite pervasive misconceptions about betting in regards to the Sleeping Beauty problem.

One is that you need to switch between halfer and thirder stances based on the betting scheme proposed. As if learning about a betting scheme is supposed to affect your credence in an event.

Another is that halfers should bet at thirders odds and, therefore, thirdism is vindicated on the grounds of betting. What do halfers even mean by probability of Heads being 1/2 if they bet as if it's 1/3?

In this post we are going to correct them. We will understand how to arrive at correct betting odds from both thirdist and halfist positions, and...
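As a concrete illustration (mine, not from the post), here is a minimal Monte Carlo sketch of the standard per-awakening bet: a bet on Heads at 1:2 odds is offered at every awakening, so it pays once under Heads and loses twice under Tails, which is why both halfers and thirders accept those odds for per-awakening bets.

```python
# Minimal sketch (not from the post): per-awakening bet on Heads at 1:2 odds.
# The bet is offered at every awakening: once if Heads, twice if Tails.
import random

def run(n_experiments=1_000_000, win=2, loss=1):
    total = 0
    for _ in range(n_experiments):
        if random.random() < 0.5:
            total += win          # Heads: one awakening, the Heads bet wins once
        else:
            total -= 2 * loss     # Tails: two awakenings, the Heads bet loses twice
    return total / n_experiments

print(run())  # ~0.0: 1:2 per-awakening odds on Heads are break-even per experiment
```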

Signer
*ethically

Works against Thirdism in the Fissure experiment too. I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? The whole question is how you decide to ignore that P(Heads|Blue) = 1/3, when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds", when you need to precommit to ignore it?
Ape in the coat
No, I'm not making any claims about ethics here, just math.

Yep, because it's wrong in Fissure as well. But I'll be talking about it later.

To understand whether you should precommit to any strategy and, if you should, then which one. The fact that P(Heads|Blue) = P(Heads|Red) = 1/3 but P(Heads|Blue or Red) = 1/2 means that you may precommit to either Blue or Red and it doesn't matter which, but if you don't precommit, you won't be able to guess Tails better than chance per experiment.

You do not ignore it. When you choose Red and see that the walls are blue, you do not observe the event "Blue". You observe the outcome "Blue", which corresponds to the event "Blue or Red", because the sigma-algebra of your probability space is affected by your precommitment.

You observe the outcome “Blue”, which corresponds to the event “Blue or Red”.

So you bet 1:1 on Red after observing this “Blue or Red”?

Ape in the coat
Throughout your comment you've been using the phrase "thirders odds", apparently meaning odds 1:2, not specifying whether per awakening or per experiment. This is an underspecified and confusing category which we should taboo. As I show in the first part of the post, thirder odds are the exact same thing as halfer odds: 1:2 per awakening and 1:1 per experiment.

I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made. I critique thirdism not for making different bets - as the first part of the post explains, the bets are the same - but for their utilities not actually behaving like utilities, constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that their probabilities are not behaving as probabilities - because they are not sound probabilities, as explained in the previous post.

Wait, are you claiming that thirder Sleeping Beauty is supposed to always decline the initial per-experiment bet - before the coin was tossed - at 1:1 odds? This is wrong - both halfers and thirders are neutral towards such bets, though they appeal to different reasoning for why.

Some reward structures feel more natural for halfers and some for thirders - this is true. But a good model for a problem is supposed to deal with any possible betting scheme without significant difficulties. Thirders probably can arrive at the correct answer post hoc, if explicitly primed by a question: "at what odds are you supposed to bet if you bet only when the room is red?". But what I'm pointing at is that thirdism naturally fails to develop an optimal strategy for the per-experiment bet in the technicolor problem, falsely assuming that it's isomorphic to regular Sleeping Beauty. Nothing about their probabilistic model hints that betting only when the room is red is the correct move. Their probability estimate is the same, despite new evidence about the s
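A rough simulation of the per-experiment Technicolor bet described above (mine, not from the post; it assumes the room is Red on one randomly chosen day and Blue on the other, so the exact payoff depends on the problem's actual color mechanism). It shows how "bet Tails only when the room is Red" beats unconditional betting at 1:1 odds when the bet counts once per experiment.

```python
# Rough sketch (not from the post): per-experiment 1:1 bet on Tails in the
# Technicolor problem. Assumption (mine): the room is Red on one randomly
# chosen day and Blue on the other; Tuesday's awakening happens only on Tails.
import random

def run(strategy, n=1_000_000):
    profit = 0
    for _ in range(n):
        heads = random.random() < 0.5
        monday_red = random.random() < 0.5
        colors = ['R' if monday_red else 'B']          # Monday awakening
        if not heads:
            colors.append('B' if monday_red else 'R')  # Tuesday awakening (Tails only)
        if any(strategy(c) for c in colors):           # the bet counts once per experiment
            profit += 1 if not heads else -1
    return profit / n

print(run(lambda c: True))       # always bet Tails:             ~ 0.00 per experiment
print(run(lambda c: c == 'R'))   # bet Tails only in a Red room:  ~ +0.25 per experiment
```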

(This post is intended for my personal blog. Thank you.)


One of the dominant thoughts in my head when I build datasets for my training runs: what our ancestors 'did' over their lifespan likely played a key role in the creation of language and human values.[1] 

 

[Image: "Mother" in European Languages]

 

I imagine a tribe whose members had approximately twenty to thirty-five years to accumulate knowledge—such as food preparation, hunting strategies, tool-making, social skills, and avoiding predators. To transmit this knowledge, they likely devised a system of sounds associated with animals, locations, actions, objects, etc.

 

 

Sounds related to survival would have been prioritized. These had immediate, life-and-death consequences, creating powerful associations (or neurochemical activity?) in the brain. "Danger" or "food" would have been far more potent than navigational instructions. I...

This is the eighth post in my series on Anthropics. The previous one is Lessons from Failed Attempts to Model Sleeping Beauty Problem. The next one is Beauty and the Bets.

Introduction

Suppose we take the insights from the previous post, and directly try to construct a model for the Sleeping Beauty problem based on them.

We expect a halfer model, so P(Heads) = 1/2.

On the other hand, in order not to repeat Lewis' Model's mistakes: P(Heads|Monday) = 1/2.

But both of these statements can only be true if P(Monday) = 1.

And, therefore, apparently, P(Tuesday) has to be zero, which sounds obviously wrong. Surely the Beauty can be awakened on Tuesday!
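Spelling out the implied step (my reconstruction of the stripped formulas, assuming the standard setup where a Tuesday awakening happens only on Tails, so P(Heads|Tuesday) = 0):

```latex
\begin{align*}
P(\text{Heads}) &= P(\text{Heads}\mid\text{Monday})\,P(\text{Monday})
                 + P(\text{Heads}\mid\text{Tuesday})\,P(\text{Tuesday}) \\
\tfrac{1}{2} &= \tfrac{1}{2}\,P(\text{Monday}) + 0\cdot P(\text{Tuesday})
  \quad\Rightarrow\quad P(\text{Monday}) = 1,\ \ P(\text{Tuesday}) = 0.
\end{align*}
```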

At this point, I think, you wouldn't be surprised, if I tell you that there are philosophers who are eager to bite this bullet and claim that the Beauty should, indeed, reason as...

JeffJo
And as I’ve tried to get across, if the two versions are truly isomorphic, and also have faults, one should be able to identify those faults in either one without translating them to the other. But if those faults turn out to depend on a false analysis specific to one, you won’t find them in the other.

The Two Coin version is about what happens on one day. Unlike the Always-Monday-Tails-Tuesday version, the subject can infer no information about coin C1 on another day, which is the mechanism for fault in that version. Each day, in the "world" of the subject, is a fully independent "world" with a mathematically valid sample space that applies to it alone.

“It treats sequential events as mutually exclusive”

No, it treats an observation of a state, when that observation bears no connection to any other, as independent of any other.

“… therefore unlawfully constructs sample space.”

What law was broken? Do you disagree that, on the morning of the observation, there were four equally likely states? Do you think the subject has some information about how the state was observed on another day? That an observer from the outside world has some impact on what is known on the inside? These are the kind of details that produce controversy in the Always-Monday-Tails-Tuesday version. I personally think the inferences about carrying information over between the two days are all invalid, but what I am trying to do is eliminate any basis for doing that.

Yes, each outcome on the first day can be paired with exactly one on the second. But without any information passing to the subject between these two days, she cannot do anything with such pairings. To her, each day is its own, completely independent probability experiment. One where "new information" means she is awakened to see only three of the four possible outcomes.

“Your model treats HH, HT, TH and TT as four individual mutually exclusive outcomes”

No, it treats the current state of the coins as four mutually exclusive st

The Two Coin version is about what happens on one day.

Let it be not two different days but two different half-hour intervals. Or even two milliseconds - this doesn't change the core of the issue that sequential events are not mutually exclusive.

observation of a state, when that observation bears no connection to any other, as independent of any other.

It very much bears a connection. If you are observing state TH it necessarily means that either you've already observed or will observe state TT.

What law was broken?

The definition of a sample space - it's suppos...

This is a linkpost for https://arxiv.org/abs/2403.07949

In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface:

For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints. I decided that this cluster of problems would be my primary academic focus. I settled on calling the cluster algorithmic Bayesian epistemology: all of the questions I was thinking about involved applying the "algorithmic lens" of theoretical computer science to problems of Bayesian epistemology.

Although my interest in mathematical reasoning about uncertainty...

Congratulations! I wish we could have collaborated while I was in school, but I don't think we were researching at the same time. I haven't read your actual papers, so feel free to answer "you should check out the paper" to my comments.

For chapter 4: From the high level summary here it sounds like you're offloading the task of aggregation to the forecasters themselves. It's odd to me that you're describing this as arbitrage. Also, I have frequently seen the scoring rule be used with some intermediary function to determine monetary rewards. For example, whe...
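For what it's worth, here is a hypothetical sketch (names and numbers mine, not from the thesis or the comment) of the "intermediary function" pattern mentioned above: paying an affine transform of a proper scoring rule keeps the incentive properties while setting the stakes.

```python
# Hypothetical illustration: mapping a proper scoring rule to monetary rewards
# through an affine intermediary function (which preserves properness).
def quadratic_score(p, outcome):
    """Proper (Brier-style) scoring rule for a binary event; higher is better."""
    return 1 - (outcome - p) ** 2

def payout(p, outcome, base=5.0, scale=20.0):
    """Intermediary function: affine map from score to dollars."""
    return base + scale * quadratic_score(p, outcome)

print(payout(0.8, 1))  # forecast 80%, event happened:      $24.20
print(payout(0.8, 0))  # forecast 80%, event didn't happen: $12.20
```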

Kaj_Sotala

I just started thinking about what I would write to someone who disagreed with me on the claim "Rationalists would be better off if they were more spiritual/religious", and for this I'd need to define what I mean by "spiritual". 

Here are some things that I would classify under "spirituality":

  • Rationalist Solstices (based on what I've read about them, not actually having been in one)
  • Meditation, especially the kind that shows you new things about the way your mind works
  • Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)
  • Devoting yourself to the practice of
...
sliqz
Thanks for the answer(s). Watched the video as well; always cool to hear about other people's journeys. If you want, there is a Discord server (MD) with some pretty advanced practitioners (3rd/4th path) you and/or Kaj could join (for some data points or practice or fun; feels more useful than Dharma Overground these days). Not sure whether different enlightenment levels would be more recommendable for random people. E.g. stream-entry might be relatively easy and helpful, but then there is a "risk" of spending the next years trying to get 2nd/3rd/4th. It's such a transformative experience that it's hard to predict on an individual level what the person will do afterwards.

That sounds fun, feel free to message me with an invite. :)

stream-entry might be relatively easy and helpful

Worth noting that stream entry isn't necessarily a net positive either:

However, if you’ve ever seen me answer the question “What is stream entry like,” you know that my answer is always “Stream entry is like the American invasion of Iraq.” It’s taking a dictatorship that is pretty clearly bad and overthrowing it (where the “ego,” a word necessarily left undefined, serves as dictator). While in theory this would cause, over time, a better government t

greylag
THANK YOU! In personal development circles, I hear a lot about the benefits of spirituality, with vague assurances that you don't have to be a theist to be spiritual, but with no pointers in non-woo directions, except possibly meditation. You have unblurred a large area of my mental map. (Upvoted!)
romeostevensit
I think cognitive understanding is overrated and physical changes to the CNS are underrated, as explanations for positive change from practices.

Cross-posted to EA forum

There’s been a lot of discussion among safety-concerned people about whether it was bad for Anthropic to release Claude-3. I felt like I didn’t have a great picture of all the considerations here, and I felt that people were conflating many different types of arguments for why it might be bad. So I decided to try to write down an at-least-slightly-self-contained description of my overall views and reasoning here.

Tabooing “Race Dynamics”

I’ve heard a lot of people say that this “is bad for race dynamics”. I think that this conflates a couple of different mechanisms by which releasing Claude-3 might have been bad.

So, taboo-ing “race dynamics”, a common narrative behind these words is

As companies release better & better models, this incentivizes other companies to pursue

...

Capabilities leakages don’t really “increase race dynamics”.

Do people actually claim this? "Shorter timelines" seems like a more reasonable claim to make. To jump directly to impacts on race dynamics is skipping at least one step.

Charlie Steiner
Yup, I basically agree with this. Although we shouldn't necessarily only focus on OpenAI as the other possible racer. Other companies (Microsoft, Twitter, etc) might perceive a need to go faster / use more resources to get a business advantage if the LLM marketplace seems more crowded.

previously: https://www.lesswrong.com/posts/h6kChrecznGD4ikqv/increasing-iq-is-trivial

I don't know to what degree this will wind up being a constraint. But given that many of the things that help in this domain have independent lines of evidence for benefit, it seems worth collecting.

Food

dark chocolate, beets, blueberries, fish, eggs. I've had good effects with strong hibiscus and mint tea (both vasodilators).

Exercise

Regular cardio, stretching/yoga, going for daily walks.

Learning

Meditation, math, music, enjoyable hobbies with a learning component.

Light therapy

Unknown effect size, but increasingly cheap to test over the last few years. I was able to get Too Many lumens for under $50. Sun exposure has a larger effect size here, so exercising outside is helpful.

Cold exposure

This might mostly just be exercise for the circulatory system, but cold showers might also have some unique effects.

Chewing on things

Increasing blood...

Please provide more details on sources or how you measured the results.

Mitchell_Porter
What things decrease blood flow to the brain?
romeostevensit
Insulin insensitivity and weight gain
Poor sleep
Hypertension
High cholesterol
Chipmonk
Personal anecdote: Ever since reading George's post, I've been noticing ways in which I have been (subconsciously) tensing muscles in my neck-- and possibly around my vagus nerve and inside my head. I wonder if by tensing these muscles, I'm reducing blood flow.   (I can think of reasons why someone might learn to do this on purpose actually, eg in response to some social stress.) So now I'm experimenting with relaxing those muscles whenever I notice myself tensing them. Maybe this increases blood flow, idk. It maybe feels a little like that.
