In this post, I proclaim/endorse forum participation (aka commenting) as a productive research strategy that I've managed to stumble upon, and recommend it to others (at least to try). Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.

yanni
I like the fact that, even though they were not (relatively) young when they died, the LW banner states that Kahneman & Vinge died "FAR TOO YOUNG", pointing to the fact that death is always bad and/or that it is bad when people die while still making positive contributions to the world (Kahneman published "Noise" in 2021!).
I thought I didn't get angry much in response to people making specific claims. I did some introspection about times in the recent past when I got angry, defensive, or withdrew from a conversation in response to claims the other person made. I think these are the mechanisms that made me feel that way:

* They were very confident about their claim. Partly I felt annoyance because I didn't feel like there was anything that would change their mind, and partly because it felt like they didn't have enough status to make such confident claims. This is linked more to confidence in body language and tone than to their stated confidence in the claim itself, though both matter.
* Credentialism: being unwilling to explain things and taking it as given that they were correct because I didn't have the specific experiences or credentials they had, without mentioning what specifically about gaining that experience would help me understand their argument.
* Not letting me speak, and interrupting quickly to take down a fuzzy strawman version of what I meant rather than letting me take my time to explain my argument.
* Morality: I felt like one of my cherished values was being threatened.
* The other person was relatively smart and powerful, at least within the specific situation. If they were dumb or not powerful, I would have just found the conversation amusing instead.
* The other person assumed I was dumb or naive, perhaps because they had met other people with the same position as me who came across as not knowledgeable.
* The other person getting worked up, for example raising their voice or showing other signs of being irritated, offended, or angry, while acting as if I was the emotional/offended one. This one particularly stings because of gender stereotypes. I think I'm more calm and reasonable and less easily offended than most people. I've had a few conversations with men where it felt like they were just really bad at noticing when they were getting angry or emotional themselves, and kept pointing out that I was being emotional despite me remaining pretty calm (and perhaps even a little indifferent to the actual content of the conversation before it moved to them being annoyed at me for being emotional).
* The other person's thinking is very black-and-white, thinking in terms of a very clear good and evil and not being open to nuance. Sort of a similar mechanism to the first item.

Some examples of claims that recently triggered me. They're not so important in themselves, so I'll just point at the rough thing rather than list out actual claims:

* AI killing all humans would be good because thermodynamics god/laws of physics good
* Animals feel pain, but this doesn't mean we should care about them
* We are quite far from getting AGI
* Women as a whole are less rational than men are
* Palestine/Israel stuff

Doing the above exercise was helpful because it helped me generate ideas for things to try if I'm in situations like that in the future. But it feels like the most important thing is to just get better at noticing what I'm feeling in the conversation, and if I'm feeling bad and uncomfortable, to think about whether the conversation is useful to me at all and if so, for what reason. And if not, to make a conscious decision to leave the conversation.
Reasons the conversation could be useful to me:

* I change their mind
* I figure out what is true
* I get a greater understanding of why they believe what they believe
* Enjoyment of the social interaction itself
* I want to impress the other person with my intelligence or knowledge

Things to try will differ depending on why I feel like having the conversation.
Recently someone either suggested to me (or maybe told me they or someone else were going to do this?) that we should train AI on legal texts, to teach it human values. Ignoring the technical problem of how to do this, I'm pretty sure legal texts are not the right training data. But at the time, I could not clearly put into words why. Today's SMBC explains this for me: Saturday Morning Breakfast Cereal - Law (smbc-comics.com)

Law is not a good representation or explanation of most of what we care about, because it's not trying to be. Law is mainly focused on the contentious edge cases. Training an AI on trolley problems and other ethical dilemmas is even worse, for the same reason.
Novel Science is Inherently Illegible

Legibility, transparency, and open science are generally considered positive attributes, while opacity, elitism, and obscurantism are viewed as negative. However, increased legibility in science is not always beneficial and can often be detrimental. Scientific management, with some exceptions, likely underperforms compared to simpler heuristics such as giving money to smart people or implementing grant lotteries. Scientific legibility suffers from the classic "Seeing Like a State" problems: it constrains endeavors to the least informed stakeholder, hinders exploration, inevitably biases research to be simple and myopic, and exposes researchers to a constant political tug-of-war between different interest groups, poisoning objectivity.

I think the above would be considered relatively uncontroversial in EA circles. But I posit there is something deeper going on: novel research is inherently illegible. If it were legible, someone else would have already pursued it. As science advances, its concepts become increasingly counterintuitive and further from common sense. Most of the legible low-hanging fruit has already been picked, and novel research requires venturing higher into the tree, pursuing illegible paths with indirect and hard-to-foresee impacts.
habryka
A thing that I've been thinking about for a while has been to somehow make LessWrong into something that could give rise to more personal-wikis and wiki-like content. Gwern's writing has a very different structure and quality to it than the posts on LW, with the key components being that they get updated regularly and serve as more stable references for some concept, as opposed to a post which is usually anchored in a specific point in time.  We have a pretty good wiki system for our tags, but never really allowed people to just make their personal wiki pages, mostly because there isn't really any place to find them. We could list the wiki pages you created on your profile, but that doesn't really seem like it would allocate attention to them successfully. I was thinking about this more recently as Arbital is going through another round of slowly rotting away (its search currently being broken and this being very hard to fix due to annoying Google Apps Engine restrictions) and thinking about importing all the Arbital content into LessWrong. That might be a natural time to do a final push to enable people to write more wiki-like content on the site.
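To make the contrast concrete, here is a rough sketch of how a personal wiki page might be modeled differently from a post. This is purely an illustrative assumption on my part, not LessWrong's actual schema: the key difference is a stable slug plus an accreting revision history, rather than a single publication date that anchors the content to one point in time.

```python
# Illustrative sketch only -- hypothetical names, not LessWrong's real data model.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Revision:
    text: str
    edited_at: datetime


@dataclass
class PersonalWikiPage:
    author: str
    slug: str                                   # stable reference, e.g. "embedded-agency"
    revisions: list[Revision] = field(default_factory=list)

    def update(self, text: str) -> None:
        """Wiki pages accrete revisions rather than being re-posted."""
        self.revisions.append(Revision(text, datetime.now()))

    @property
    def current(self) -> str:
        return self.revisions[-1].text if self.revisions else ""
```

Under this framing, the open question from the comment is less about the data model and more about where such pages would surface on the site so that they actually receive attention.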


Recent Discussion

Kaj_Sotala

I just started thinking about what I would write to someone who disagreed with me on the claim "Rationalists would be better off if they were more spiritual/religious", and for this I'd need to define what I mean by "spiritual". 

Here are some things that I would classify under "spirituality":

  • Rationalist Solstices (based on what I've read about them, not actually having been in one)
  • Meditation, especially the kind that shows you new things about the way your mind works
  • Some forms of therapy, especially ones that help you notice blindspots or significantly reframe your experience or relationship to yourself or the world (e.g. parts work where you first shift to perceiving yourself as being made of parts, and then to seeing those parts with love)
  • Devoting yourself to the practice of
...
sliqz
Thanks for the answer(s). Watched the video as well; always cool to hear about other people's journeys. If you want, there is a Discord server (MD) with some pretty advanced practitioners (3rd/4th path) that you and/or Kaj could join (for some data points or practice or fun; it feels more useful than Dharma Overground these days). Not sure whether different enlightenment levels would be more recommendable for random people. E.g. stream-entry might be relatively easy and helpful, but then there is a "risk" of spending the next years trying to get 2nd/3rd/4th. It's such a transformative experience that it's hard to predict on an individual level what the person will do afterwards.

That sounds fun, feel free to message me with an invite. :)

stream-entry might be relatively easy and helpful

Worth noting that stream entry isn't necessarily a net positive either:

However, if you’ve ever seen me answer the question “What is stream entry like,” you know that my answer is always “Stream entry is like the American invasion of Iraq.” It’s taking a dictatorship that is pretty clearly bad and overthrowing it (where the “ego,” a word necessarily left undefined, serves as dictator). While in theory this would cause, over time, a better government t

...
greylag
THANK YOU! In personal development circles, I hear a lot about the benefits of spirituality, with vague assurances that you don't have to be a theist to be spiritual, but with no pointers in non-woo directions, except possibly meditation. You have unblurred a large area of my mental map. (Upvoted!)
romeostevensit
I think cognitive understanding is overrated and physical changes to the CNS are underrated, as explanations for positive change from practices.

Cross-posted to EA forum

There’s been a lot of discussion among safety-concerned people about whether it was bad for Anthropic to release Claude-3. I felt like I didn’t have a great picture of all the considerations here, and I felt that people were conflating many different types of arguments for why it might be bad. So I decided to try to write down an at-least-slightly-self-contained description of my overall views and reasoning here.

Tabooing “Race Dynamics”

I’ve heard a lot of people say that this “is bad for race dynamics”. I think that this conflates a couple of different mechanisms by which releasing Claude-3 might have been bad.

So, tabooing "race dynamics", a common narrative behind these words is:

As companies release better & better models, this incentivizes other companies to pursue

...

Capabilities leakages don’t really “increase race dynamics”.

Do people actually claim this? "Shorter timelines" seems like a more reasonable claim to make. To jump directly to impacts on race dynamics is to skip at least one step.

Charlie Steiner
Yup, I basically agree with this. Although we shouldn't necessarily focus only on OpenAI as the other possible racer. Other companies (Microsoft, Twitter, etc.) might perceive a need to go faster / use more resources to get a business advantage if the LLM marketplace seems more crowded.

previously: https://www.lesswrong.com/posts/h6kChrecznGD4ikqv/increasing-iq-is-trivial

I don't know to what degree this will wind up being a constraint. But given that many of the things that help in this domain have independent lines of evidence for benefit, it seems worth collecting them.

Food

Dark chocolate, beets, blueberries, fish, eggs. I've had good effects with strong hibiscus and mint tea (both vasodilators).

Exercise

Regular cardio, stretching/yoga, going for daily walks.

Learning

Meditation, math, music, enjoyable hobbies with a learning component.

Light therapy

Unknown effect size, but increasingly cheap to test over the last few years. I was able to get Too Many lumens for under $50. Sun exposure has a larger effect size here, so exercising outside is helpful.

Cold exposure

This might mostly just be exercise for the circulatory system, but cold showers might also have some unique effects.

Chewing on things

Increasing blood...

Please provide more details on sources or how you measured the results.

Mitchell_Porter
What things decrease blood flow to the brain?
romeostevensit
Insulin insensitivity and weight gain
Poor sleep
Hypertension
High cholesterol
Chipmonk
Personal anecdote: Ever since reading George's post, I've been noticing ways in which I have been (subconsciously) tensing muscles in my neck -- and possibly around my vagus nerve and inside my head. I wonder if by tensing these muscles, I'm reducing blood flow. (I can think of reasons why someone might learn to do this on purpose, actually, e.g. in response to some social stress.)

So now I'm experimenting with relaxing those muscles whenever I notice myself tensing them. Maybe this increases blood flow, idk. It maybe feels a little like that.

On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.

The transcript has been significantly edited for clarity. (It turns out that real-time conversation transcribed completely verbatim is full of filler words, false starts, crosstalk, "uh huh"s, "yeah"s, pauses while one party picks up their coffee order, &c. that do not seem particularly substantive.)


ZMD: I actually have some questions for you.

CM: Great, let's start with that.

ZMD: They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your...

What do you think Metz did that was unethical here?

wilkox
It seems like you think what Metz wrote was acceptable because it all adds up to presenting the truth in the end, even if the way it was presented was 'unconvincing' and the evidence 'embarrassing[ly]' weak. I don't buy the principle that 'bad epistemology is fine if the outcome is true knowledge', and I also don't buy that this happened in this particular case, nor that this is what Metz intended.

If Metz's goal was to inform his readers about Scott's position, he failed. He didn't give any facts other than that Scott 'aligned himself with' and quoted somebody who holds a politically unacceptable view. The majority of readers will glean from this nothing but a vague association between Scott and racism, as the author intended. More sophisticated readers will notice what Metz is doing, and assume that if there were substantial evidence that Scott held an unpalatable view, Metz would have gladly published that instead of resorting to an oblique smear by association. Nobody ends up better informed about what Scott actually believes.

I think trevor is right to invoke the quokka analogy. Rationalists are tying ourselves in knots in a long comment thread debating whether, actually, technically, strictly, Metz was misleading. Meanwhile, Metz never cared about this in the first place, and is continuing to enjoy a successful career employing tabloid rhetorical tricks.
localdeity
The ones that come to my mind are "Person or Organization X is doing illegal, unethical, or otherwise unpopular practices which they'd rather conceal from the public." Lie that you're ideologically aligned or that you'll keep things confidential, and use that to gain access. Then perhaps lie to blackmail them into giving up a little more information before finally publishing it all. There might be an ethical line drawn somewhere, but if it's not at "any lying" then I don't know where it is.
cubefox
It is not surprising for a false belief held by a lot of people to be caused by the existence of a taboo. Otherwise the belief would probably already have been corrected, or wouldn't have gained popularity in the first place. And giving examples of such beliefs is of course not really possible, precisely because it is taboo to argue that they are false.
This is a linkpost for https://arxiv.org/abs/2403.09863

Hi, I'd like to share my paper, which proposes a novel approach for building white box neural networks. It introduces the concept of a "semantic feature" and builds a simple white box PoC.

As an independent researcher I’d be grateful for your feedback!

This looks interesting, thanks!

This post could benefit from an extended summary.

In lieu of such a summary, here is the abstract:

This paper introduces semantic features as a candidate conceptual framework for building inherently interpretable neural networks. A proof of concept model for informative subproblem of MNIST consists of 4 such layers with the total of 5K learnable parameters. The model is well-motivated, inherently interpretable, requires little hyperparameter tuning and achieves human-level adversarial test accuracy - with no form of adv

...
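For readers unfamiliar with the metric mentioned in the abstract, below is a minimal sketch of how "adversarial test accuracy" is usually computed. This is not the paper's code: the classifier, the FGSM attack, and the epsilon value are generic stand-ins, included only to illustrate what the number means.

```python
# Minimal sketch (NOT the paper's code): measuring "adversarial test accuracy"
# with a one-step FGSM attack on a generic classifier. The model, epsilon, and
# data loader are assumptions; inputs are assumed scaled to [0, 1].
import torch
import torch.nn as nn


def fgsm_attack(model, x, y, epsilon):
    """Perturb each input one step in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range


def adversarial_accuracy(model, loader, epsilon=0.2):
    """Fraction of test examples still classified correctly after the attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

The paper's claim is that its model reaches human-level numbers on this kind of metric without any adversarial training; the sketch only shows how such a number would typically be measured.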

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

complicated.world
Hi LessWrong Community! I'm new here, though I've been an LW reader for a while. I'm representing the complicated.world website, where we strive to use a similar rationality approach to the one here, and we also explore philosophical problems. The difference is that, instead of being a community-driven portal like you, we are a small team working internally to reach consensus, and only then do we publish our articles. This means that we are not nearly as pluralistic, diverse, or democratic as you are, but on the other hand we try to present a single coherent view on all discussed problems, each rooted in basic axioms. I really value the LW community (our entire team does) and would like to start contributing here. I would also like to present a linkpost from our website from time to time - I hope this is ok. We are also a not-for-profit website.

Hey! 

It seems like an interesting philosophy. Feel free to crosspost. You've definitely chosen some ambitious topics to try to cover, which I am generally a fan of.

habryka
Hey metalcrow! Great to have you here! Hope you have a good time, and I'm looking forward to seeing your post!


Cheops Steller
Hello there. This seems to be a quirky corner of the internet that I should've discovered and started using years ago. Looking forward to reading these productive conversations! I am particularly interested in information, computation, complex systems, and intelligence.

Hey Cheops!

Good to have you around, you'll definitely not be alone here with these interests. And always feel free to complain about any problems you run into either in these Open Threads, or via the Intercom chat in the bottom right corner.

Conglomerates like Unilever use shadow prices to allocate resources internally between their separate businesses. And sales teams are often compensated via commission, which is kind of market-ish.
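As a toy illustration of the mechanism (my own numbers, not anything from Unilever), a shadow price is the marginal value of one more unit of a shared resource. The sketch below estimates it by solving a tiny allocation problem with scipy, then relaxing the shared budget by one unit and re-solving.

```python
# Toy example of a shadow price: the marginal value of one more unit of a
# shared budget split between two hypothetical business units. All figures
# are made up for illustration.
from scipy.optimize import linprog


def max_profit(budget):
    # Unit A earns 3 per unit of budget (capacity 40); unit B earns 2 (capacity 100).
    # linprog minimizes, so profits are negated.
    c = [-3.0, -2.0]
    A_ub = [[1.0, 1.0]]          # A + B cannot exceed the shared budget
    b_ub = [budget]
    bounds = [(0, 40), (0, 100)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun


base = max_profit(100.0)
relaxed = max_profit(101.0)
print(f"profit at budget=100: {base:.1f}")
print(f"shadow price of budget: {relaxed - base:.1f}")  # ~2.0: unit B is the marginal use
```

Here the shadow price comes out to the profit rate of the marginal business unit (2.0), which is exactly the signal an internal planner would use when deciding whether acquiring more of the shared resource is worth it.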

Linda Linsefors
Recently someone either suggested to me (or maybe told me they or someone else were going to do this?) that we should train AI on legal texts, to teach it human values. Ignoring the technical problem of how to do this, I'm pretty sure legal texts are not the right training data. But at the time, I could not clearly put into words why. Today's SMBC explains this for me: Saturday Morning Breakfast Cereal - Law (smbc-comics.com)

Law is not a good representation or explanation of most of what we care about, because it's not trying to be. Law is mainly focused on the contentious edge cases. Training an AI on trolley problems and other ethical dilemmas is even worse, for the same reason.

I spoke with some people last fall who were planning to do this; perhaps it's the same people. I think the idea (at least, as stated) was to commercialize regulatory software to fund some alignment work. At the time, they were going by Nomos AI, and it looks like they've since renamed to Norm AI.

CstineSublime
Would sensationalist tabloid news stories be better training data? Perhaps it is the inverse problem: fluffy human interest stories and outrage porn are both engineered for the lowest common denominator, the things that people overwhelmingly think are heartwarming or miscarriages of justice respectively. However, if you wanted to get an AI to internalize what the actual sources of outrage and consensus in the wider community are, I think it's a place to start.

The obvious other examples are fairy tales, fables, parables, jokes, and urban legends - most are purpose-encoded with a given society's values.

Amateur book and film reviews are potentially another source of material that displays human values, in that whether someone is satisfied with the ending or not (did the villain get punished? did the protagonist get justice?) or which characters they liked or disliked is often attached to the reader/viewer's value systems. Or as Jerry Lewis put it in The Total Filmmaker: in comedy, a snowball is never thrown at a battered fedora: "The top-hat owner is always the bank president who holds mortgage on the house...".
