In this post, I proclaim/endorse forum participation (aka commenting) as a productive research strategy that I've managed to stumble upon, and recommend it to others (at least to try). Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.

yanni
I like the fact that, despite their not being (relatively) young when they died, the LW banner states that Kahneman & Vinge died "FAR TOO YOUNG", pointing to the fact that death is always bad and/or that it is bad when people die while they are still making positive contributions to the world (Kahneman published "Noise" in 2021!).
Novel Science is Inherently Illegible

Legibility, transparency, and open science are generally considered positive attributes, while opacity, elitism, and obscurantism are viewed as negative. However, increased legibility in science is not always beneficial and can often be detrimental. Scientific management, with some exceptions, likely underperforms compared to simpler heuristics such as giving money to smart people or implementing grant lotteries. Scientific legibility suffers from the classic "Seeing like a State" problems: it constrains endeavors to the least informed stakeholder, hinders exploration, inevitably biases research to be simple and myopic, and exposes researchers to a constant political tug-of-war between different interest groups that poisons objectivity.

I think the above would be considered relatively uncontroversial in EA circles. But I posit there is something deeper going on: novel research is inherently illegible. If it were legible, someone else would have already pursued it. As science advances, her concepts become increasingly counterintuitive and further from common sense. Most of the legible low-hanging fruit has already been picked, and novel research requires venturing higher into the tree, pursuing illegible paths with indirect and hard-to-foresee impacts.
I thought I didn't get angry much in response to people making specific claims. I did some introspection about times in the recent past when I got angry, defensive, or withdrew from a conversation in response to claims that the other person made. After some introspection, I think these are the mechanisms that made me feel that way:

* They were very confident about their claim. Partly I felt annoyance because I didn't feel like there was anything that would change their mind, partly I felt annoyance because it felt like they didn't have enough status to make very confident claims like that. This is more linked to confidence in body language and tone rather than their confidence in their own claims, though both matter.
* Credentialism: them being unwilling to explain things and taking it as a given that they were correct because I didn't have the specific experiences or credentials that they had, without mentioning what specifically from gaining that experience would help me understand their argument.
* Not letting me speak and interrupting quickly to take down the fuzzy strawman version of what I meant rather than letting me take my time to explain my argument.
* Morality: I felt like one of my cherished values was being threatened.
* The other person was relatively smart and powerful, at least within the specific situation. If they were dumb or not powerful, I would have just found the conversation amusing instead.
* The other person assumed I was dumb or naive, perhaps because they had met other people with the same position as me and those people came across as not knowledgeable.
* The other person getting worked up, for example, raising their voice or showing other signs of being irritated, offended, or angry while acting as if I was the emotional/offended one. This one particularly stings because of gender stereotypes. I think I'm more calm and reasonable and less easily offended than most people. I've had a few conversations with men where it felt like they were just really bad at noticing when they were getting angry or emotional themselves and kept pointing out that I was being emotional despite me remaining pretty calm (and perhaps even a little indifferent to the actual content of the conversation before the conversation moved to them being annoyed at me for being emotional).
* The other person's thinking is very black-and-white, thinking in terms of a very clear good and evil and not being open to nuance. Sort of a similar mechanism to the first thing.

Some examples of claims that recently triggered me. They're not so important themselves so I'll just point at the rough thing rather than list out actual claims.

* AI killing all humans would be good because thermodynamics god/laws of physics good
* Animals feel pain but this doesn't mean we should care about them
* We are quite far from getting AGI
* Women as a whole are less rational than men are
* Palestine/Israel stuff

Doing the above exercise was helpful because it helped me generate ideas for things to try if I'm in situations like that in the future. But it feels like the most important thing is to just get better at noticing what I'm feeling in the conversation and, if I'm feeling bad and uncomfortable, to think about if the conversation is useful to me at all and if so, for what reason. And if not, make a conscious decision to leave the conversation.
Reasons the conversation could be useful to me:

* I change their mind
* I figure out what is true
* I get a greater understanding of why they believe what they believe
* Enjoyment of the social interaction itself
* I want to impress the other person with my intelligence or knowledge

Things to try will differ depending on why I feel like having the conversation.
Recently someone either suggested to me (or maybe told me they or someone else were going to do this?) that we should train AI on legal texts to teach it human values. Ignoring the technical problem of how to do this, I'm pretty sure legal texts are not the right training data. But at the time, I could not clearly put into words why. Today's SMBC explains this for me: Saturday Morning Breakfast Cereal - Law (smbc-comics.com)

Law is not a good representation or explanation of most of what we care about, because it's not trying to be. Law is mainly focused on the contentious edge cases.

Training an AI on trolley problems and other ethical dilemmas is even worse, for the same reason.
habryka
A thing that I've been thinking about for a while has been how to somehow make LessWrong into something that could give rise to more personal wikis and wiki-like content. Gwern's writing has a very different structure and quality to it than the posts on LW, with the key components being that it gets updated regularly and serves as a more stable reference for some concept, as opposed to a post, which is usually anchored in a specific point in time.

We have a pretty good wiki system for our tags, but never really allowed people to just make their personal wiki pages, mostly because there isn't really any place to find them. We could list the wiki pages you created on your profile, but that doesn't really seem like it would allocate attention to them successfully.

I was thinking about this more recently as Arbital is going through another round of slowly rotting away (its search is currently broken, and this is very hard to fix due to annoying Google App Engine restrictions) and thinking about importing all the Arbital content into LessWrong. That might be a natural time to do a final push to enable people to write more wiki-like content on the site.

Popular Comments

Recent Discussion

This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty.

Introduction

There are some quite pervasive misconceptions about betting in regards to the Sleeping Beauty problem.

One is that you need to switch between halfer and thirder stances based on the betting scheme proposed. As if learning about a betting scheme is supposed to affect your credence in an event.

Another is that halfers should bet at thirder odds and, therefore, thirdism is vindicated on the grounds of betting. What do halfers even mean by the probability of Heads being 1/2 if they bet as if it's 1/3?

In this post we are going to correct these misconceptions. We will understand how to arrive at the correct betting odds from both thirdist and halfist positions, and...
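A minimal sketch (not from the post) of the standard per-awakening betting arithmetic, assuming a bet that pays +a on Heads and costs b on Tails, offered and accepted at every awakening. Under these assumptions, the halfer's per-experiment accounting and the thirder's per-awakening accounting break even at the same odds:

```python
# Per-awakening bet: pays +a if the coin is Heads, costs b if it is Tails.
# The bet is offered, and accepted, at every awakening.

def halfer_value_per_experiment(a, b):
    # Halfer accounting: P(Heads) = 1/2, but on Tails the bet is settled twice
    # (once per awakening), so the Tails loss is doubled.
    return 0.5 * a + 0.5 * (-2 * b)

def thirder_value_per_awakening(a, b):
    # Thirder accounting: P(Heads) = 1/3 at each awakening, settled once per awakening.
    return (1 / 3) * a + (2 / 3) * (-b)

# Both positions are indifferent exactly when a = 2b, i.e. at 1:2 odds on Heads.
print(halfer_value_per_experiment(2, 1))  # 0.0
print(thirder_value_per_awakening(2, 1))  # 0.0
```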

Ape in the coat
No, I'm not making any claims about ethics here, just math. Yep, because it's wrong in Fissure as well. But I'll be talking about it later. To understand whether you should precommit to any strategy and, if you should, then which one. The fact that P(Heads|Blue) = P(Heads|Red) = 1/3 but P(Heads|Blue or Red) = 1/2 means that you may precommit to either Blue or Red and it doesn't matter which, but if you don't precommit, you won't be able to guess Tails better than chance per experiment. You do not ignore it. When you choose Red and see that the walls are blue, you do not observe the event "Blue". You observe the outcome "Blue", which corresponds to the event "Blue or Red", because the sigma-algebra of your probability space is affected by your precommitment.
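A small Monte Carlo sketch (not part of the comment) of the numbers above, assuming one reading of the Technicolor setup: on Heads there is a single awakening in a randomly coloured room, on Tails two awakenings, one in a Red room and one in a Blue room in random order, and events are counted per experiment (did the Beauty observe that colour at least once during the experiment):

```python
import random

def run(n=100_000):
    heads_and_blue = blue = heads_and_any = any_colour = 0
    for _ in range(n):
        heads = random.random() < 0.5
        # Colours seen during this experiment (assumed setup, see above).
        colours = {random.choice(["Red", "Blue"])} if heads else {"Red", "Blue"}
        if "Blue" in colours:
            blue += 1
            heads_and_blue += heads
        # "Blue or Red" happens in every experiment.
        any_colour += 1
        heads_and_any += heads
    print("P(Heads | Blue observed)        ~", heads_and_blue / blue)       # ~1/3
    print("P(Heads | Blue or Red observed) ~", heads_and_any / any_colour)  # ~1/2

run()
```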
Signer
So you bet 1:1 on Red after observing this “Blue or Red”?
Ape in the coat
Yes! There is a 50% chance that the coin is Tails, and so the room is to be Red in this experiment.

No, I mean the Beauty awakes, sees Blue, gets a proposal to bet on Red with 1:1 odds, and you recommend accepting this bet?

Today a trend broke in Formula One: Max Verstappen didn't win a Grand Prix. Of the last 35 Formula One Grands Prix, Max Verstappen has won all but 5. Last season he won something like 86% of the races.

For context, I believe that I am overall pessimistic when asked to give a probability range for something "working out". Since sports tend to vary in results, would a sport like Formula One be a good source of data to make and compare predictions against?

Everything from estimating the range of a pole position time, or the gap between pole and the last qualifier, to the fastest lap in a race or the lap on which a driver will pit for fresh tyres.

What is the best way of doing it?

  • Every 2
...

Tracking your predictions and improving your calibration over time is good. So is practicing making outside-view estimates based on related numerical data. But I think diversity is good.
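A minimal sketch, not from the comment, of what tracking predictions can look like in practice: log (probability, outcome) pairs, then periodically compute a Brier score and a rough calibration table. The records below are made up for illustration.

```python
from collections import defaultdict

# Each record: (forecast probability assigned to the event, whether it happened).
predictions = [(0.9, True), (0.7, True), (0.6, False), (0.8, True), (0.3, False)]

# Brier score: mean squared error of the forecasts (lower is better).
brier = sum((p - o) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration: grouping forecasts to the nearest 10%, how often did the event occur?
buckets = defaultdict(list)
for p, o in predictions:
    buckets[round(p, 1)].append(o)
for p in sorted(buckets):
    outcomes = buckets[p]
    print(f"forecast {p:.0%}: happened {sum(outcomes)}/{len(outcomes)} times")
```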

If you start going back through historical F1 data as prediction exercises, I expect the main thing that will happen is you'll learn a lot about the history of F1. Secondarily, you'll get better at avoiding your own biases, but in a way that's concentrated on your biases relevant to F1 predictions.

If you already want to learn more about the history of F1, then go for it, it...

1.1 Introduction

Human interactions are full of little “negotiations”. My friend and I have different preferences about where to go for dinner. My boss and I have different preferences about how soon I should deliver the report. My spouse and I are both enjoying this chat, but we inevitably have slightly different (unstated) preferences about whose turn it is to speak, whether to change the subject, etc.

None of these are arguments. Everyone is having a lovely time. But they involve conflicting preferences, however mild, and these conflicts need to somehow get resolved.

These ubiquitous everyday “negotiations” have some funny properties. At the surface level, both people may put on an elaborate pretense that there is no conflict at all. (“Oh, it’s no problem, it would be my pleasure!”) Meanwhile, below...

that thing about affine transformations

If the purpose of a utility function is to provide evidence about the behavior of the group, we can preprocess the data structure into that form: Suppose Alice may update the distribution over group decisions by ε. Then she'll push in the direction of her utility function, and the constraints "add up to 100%" and "size ε" cancel out the "affine transformation" degrees of freedom. Now such directions can be added up.
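A small numerical sketch (not part of the comment) of one natural reading of this: the ε-push is the gradient of expected utility projected onto the probability simplex (components summing to zero) and rescaled to length ε, which makes it invariant under positive affine transformations a·u + b, so pushes from different people can be added. The specific numbers are assumptions for illustration.

```python
import numpy as np

def push_direction(u, eps=0.01):
    """Direction of an eps-sized push on the distribution over group decisions."""
    d = u - u.mean()               # project the gradient onto the simplex tangent space (sums to 0)
    return eps * d / np.linalg.norm(d)   # the "size eps" constraint removes the scale freedom

u = np.array([3.0, 1.0, 0.0, 2.0])     # Alice's utility over four group decisions
v = 7.0 * u - 42.0                     # positive affine transformation of u

print(np.allclose(push_direction(u), push_direction(v)))  # True: same direction
```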

On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.

The transcript has been significantly edited for clarity. (It turns out that real-time conversation transcribed completely verbatim is full of filler words, false starts, crosstalk, "uh huh"s, "yeah"s, pauses while one party picks up their coffee order, &c. that do not seem particularly substantive.)


ZMD: I actually have some questions for you.

CM: Great, let's start with that.

ZMD: They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your...

So despite it being "hard to substantiate", or to "find Scott saying" it, you think it's so certainly true that a journalist is justified in essentially lying in order to convey it to his audience?

wilkox
I'd have more trust in the writing of a journalist who presents what they believe to be the actual facts in support of a claim, than one who publishes vague insinuations because writing articles is hard. He really didn’t. Firstly, in the literal sense that Metz carefully avoided making this claim (he stated that Scott aligned himself with Murray, and that Murray holds views on race and IQ, but not that Scott aligns himself with Murray on these views). Secondly, and more importantly, even if I accept the implied claim I still don’t know what Scott supposedly believes about race and IQ. I don’t know what ‘is aligned with Murray on race and IQ’ actually means beyond connotatively ‘is racist’. If this paragraph of Metz’s article was intended to be informative (it was not), I am not informed.
tailcalled
It's totally possible to say taboo things, I do it quite often. But my point is more, this doesn't seem to disprove the existence of the tension/Motte-Bailey/whatever dynamic that I'm pointing at.
Wei Dai
Many comments pointed out that NYT does not in fact have a consistent policy of always revealing people's true names. There's even a news editorial about this, which I point out in case you trust the fact-checking of NY Post more. I think that leaves 3 possible explanations of what happened:

1. NYT has a general policy of revealing people's true names, which it doesn't consistently apply but ended up applying in this case for no particular reason.
2. There's an inconsistently applied policy, and Cade Metz's (and/or his editors') dislike of Scott contributed (consciously or subconsciously) to insistence on applying the policy in this particular case.
3. There is no policy and it was a purely personal decision.

In my view, most rationalists seem to be operating under a reasonable probability distribution over these hypotheses, informed by evidence such as Metz's mention of Charles Murray, lack of a public written policy about revealing real names, and lack of evidence that a private written policy exists.

About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory".

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology (partially...

Falsifiable predictions?

Wei Dai
While reading this, I got a flash-forward of what my life (our lives) may be like in a few years, i.e., desperately trying to understand and evaluate complex philosophical constructs presented to us by superintelligent AI, which may or may not be actually competent at philosophy.
Scott Garrabrant
I think Chris Langan and the CTMU are very interesting, and I think there is an interesting and important challenge for LW readers to figure out how (and whether) to learn from Chris. Here are some things I think are true about Chris (and about me) and relevant to this challenge. (I do not feel ready to talk about the object level CTMU here, I am mostly just talking about Chris Langan.)

1. Chris has a legitimate claim of being approximately the smartest man alive according to IQ tests.
2. Chris wrote papers/books that make up a bunch of words that are defined circularly, and are difficult to follow. It is easy to mistake him for a complete crackpot.
3. Chris claims to have proven the existence of God.
4. Chris has been something-sort-of-like-canceled for a long time. (In the way that seems predictable when "World's Smartest Man Proves Existence of God.")
5. Chris has some followers that I think don't really understand him. (In the way that seems predictable when "World's Smartest Man Proves Existence of God.")
6. Chris acts socially in a very nonstandard way that seems like a natural consequence of having much higher IQ than anyone else he has ever met. In particular, I think this manifests in part as an extreme lack of humility.
7. Chris is actually very pleasant to talk to if (like me) it does not bother you that he acts like he is much smarter than you.
8. I personally think the proof of the existence of God is kind of boring. It reads to me as kind of like "I am going to define God to be everything. Notice how this meets a bunch of the criteria people normally attribute to God. In the CTMU, the universe is mind-like. Notice how this meets a bunch more criteria people normally attribute to God."
9. While the proof of the existence of God feels kind of mundane to me, Chris is the kind of person who chooses to interpret it as a proof of the existence of God. Further, he also has other more concrete supernatural-like and conspiracy-theory-like beliefs,
romeostevensit
Thoughts: Interesting asymmetry: languages don't constrain parsers much (maybe a bit, very broadly conceived), but a parser does constrain language, or which sequences it can derive meaning from. Unless the parser can extend/modify itself? Langan seems heavily influenced by Quine, which I think is a good place to start, as that seems to be about where philosophical progress petered out. In particular, Quine's assertion about scientific theories creating ontological commitments to the building blocks they are made from 'really existing' to which Langan's response seems to be 'okay, let's build a theory out of tautologies then.' This rhymes with Kant's approach, and then Langan goes farther by trying to really get at what 'a priori' as a construct is really about. I'm not quite sure how this squares with Quine's indeterminacy. That any particular data is evidence not only for the hypothesis you posed (which corresponds to some of Langan's talk of binary yes-no questions as a conception of quantum mechanics) but also for a whole family of hypotheses, most of which you don't know about, that define all the other universes that the data you observed is consistent with.

(This post is intended for my personal blog. Thank you.)


One of the dominant thoughts in my head when I build datasets for my training runs: what our ancestors 'did' over their lifespan likely played a key role in the creation of language and human values.[1] 

 

"Mother" in European Languages

 

I imagine a tribe whose members had approximately twenty to thirty-five years to accumulate knowledge—such as food preparation, hunting strategies, tool-making, social skills, and avoiding predators. To transmit this knowledge, they likely devised a system of sounds associated with animals, locations, actions, objects, etc.

 

 

Sounds related to survival would have been prioritized. These had immediate, life-and-death consequences, creating powerful associations (or neurochemical activity?) in the brain. "Danger" or "food" would have been far more potent than navigational instructions. I...


This is the eighth post in my series on Anthropics. The previous one is Lessons from Failed Attempts to Model Sleeping Beauty Problem. The next one is Beauty and the Bets.

Introduction

Suppose we take the insights from the previous post, and directly try to construct a model for the Sleeping Beauty problem based on them.

We expect a halfer model, so

P(Heads) = 1/2

On the other hand, in order not to repeat Lewis' Model's mistakes:

P(Heads|Monday) = 1/2

But both of these statements can only be true if P(Monday) = 1.

And, therefore, apparently, P(Tuesday) has to be zero, which sounds obviously wrong. Surely the Beauty can be awakened on Tuesday!

At this point, I think, you wouldn't be surprised if I told you that there are philosophers who are eager to bite this bullet and claim that the Beauty should, indeed, reason as...

The Two Coin version is about what happens on one day.

Let it be not two different days but two different half-hour intervals. Or even two milliseconds - this doesn't change the core of the issue that sequential events are not mutually exclusive.

observation of a state, when that observation bears no connection to any other, as independent of any other.

It very much bears a connection. If you are observing state TH it necessarily means that either you've already observed or will observe state TT.

What law was broken?

The definition of a sample space - it's suppos...

This is a linkpost for https://arxiv.org/abs/2403.07949

In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface:

For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints. I decided that this cluster of problems would be my primary academic focus. I settled on calling the cluster algorithmic Bayesian epistemology: all of the questions I was thinking about involved applying the "algorithmic lens" of theoretical computer science to problems of Bayesian epistemology.

Although my interest in mathematical reasoning about uncertainty...

Congratulations! I wish we could have collaborated while I was in school, but I don't think we were researching at the same time. I haven't read your actual papers, so feel free to answer "you should check out the paper" to my comments.

For chapter 4: From the high-level summary here it sounds like you're offloading the task of aggregation to the forecasters themselves. It's odd to me that you're describing this as arbitrage. Also, I have frequently seen scoring rules used with some intermediary function to determine monetary rewards. For example, whe...
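A generic illustration (not from the thesis, with made-up numbers) of why the intermediary function between a proper scoring rule and money matters: the log score alone rewards honest reporting, but paying exp(log score), i.e. paying out the reported probability of the realised outcome, rewards overconfidence.

```python
import math

def expected_payoff(report, belief, transform):
    """Expected reward for reporting `report` on a binary event believed true with prob `belief`."""
    return (belief * transform(math.log(report))
            + (1 - belief) * transform(math.log(1 - report)))

belief = 0.7
reports = [r / 100 for r in range(1, 100)]  # candidate reports, avoiding 0 and 1

best_log = max(reports, key=lambda r: expected_payoff(r, belief, lambda s: s))
best_exp = max(reports, key=lambda r: expected_payoff(r, belief, math.exp))

print(best_log)  # ~0.70: the log score alone is proper, honesty is optimal
print(best_exp)  # 0.99: paying exp(score) pushes the optimal report to an extreme
```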

Kaj_Sotala

I just started thinking about what I would write to someone who disagreed with me on the claim "Rationalists would be better off if they were more spiritual/religious", and for this I'd need to define what I mean by "spiritual". 

Here are some things that I would classify under "spirituality":

  • Rationalist Solstices (based on what I've read about them, not actually having been in one)
  • Meditation, especially the kind that shows you new things about the way your mind works
  • Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)
  • Devoting yourself to the practice of
...
sliqz
Thanks for the answer(s). Watched the video as well; always cool to hear about other people's journeys. If you want, there is a Discord server (MD) with some pretty advanced practitioners (3rd/4th path) you and/or Kaj could join (for some data points or practice or fun; feels more useful than Dharma Overground these days). Not sure whether different enlightenment levels would be more recommendable for random people. E.g. stream entry might be relatively easy and helpful, but then there is a "risk" of spending the next years trying to get 2nd/3rd/4th. It's such a transformative experience that it's hard to predict on an individual level what the person will do afterwards.

That sounds fun, feel free to message me with an invite. :)

stream-entry might be relatively easy and helpful

Worth noting that stream entry isn't necessarily a net positive either:

However, if you’ve ever seen me answer the question “What is stream entry like,” you know that my answer is always “Stream entry is like the American invasion of Iraq.” It’s taking a dictatorship that is pretty clearly bad and overthrowing it (where the “ego,” a word necessarily left undefined, serves as dictator). While in theory this would cause, over time, a better government t

...
greylag
THANK YOU! In personal development circles, I hear a lot about the benefits of spirituality, with vague assurances that you don't have to be a theist to be spiritual, but with no pointers in non-woo directions, except possibly meditation. You have unblurred a large area of my mental map. (Upvoted!)
romeostevensit
I think cognitive understanding is overrated and physical changes to the CNS are underrated, as explanations for positive change from practices.
