In this post, I endorse forum participation (aka commenting) as a productive research strategy that I've stumbled upon, and recommend that others at least try it. Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.

I like the fact that, despite their not being (relatively) young when they died, the LW banner states that Kahneman & Vinge died "FAR TOO YOUNG", pointing to the fact that death is always bad, and/or that it is bad when people die while still making positive contributions to the world (Kahneman published "Noise" in 2021!).
Novel Science is Inherently Illegible

Legibility, transparency, and open science are generally considered positive attributes, while opacity, elitism, and obscurantism are viewed as negative. However, increased legibility in science is not always beneficial and can often be detrimental. Scientific management, with some exceptions, likely underperforms compared to simpler heuristics such as giving money to smart people or implementing grant lotteries. Scientific legibility suffers from the classic "Seeing like a State" problems: it constrains endeavors to the least informed stakeholder, hinders exploration, inevitably biases research to be simple and myopic, and exposes researchers to a constant political tug-of-war between different interest groups, poisoning objectivity.

I think the above would be considered relatively uncontroversial in EA circles. But I posit there is something deeper going on: novel research is inherently illegible. If it were legible, someone else would have already pursued it. As science advances, its concepts become increasingly counterintuitive and further from common sense. Most of the legible low-hanging fruit has already been picked, and novel research requires venturing higher into the tree, pursuing illegible paths with indirect and hard-to-foresee impacts.
I thought I didn't get angry much in response to people making specific claims. I did some introspection about times in the recent past when I got angry, defensive, or withdrew from a conversation in response to claims that the other person made. After some introspection, I think these are the mechanisms that made me feel that way:

* They were very confident about their claim. Partly I felt annoyance because I didn't feel like there was anything that would change their mind; partly I felt annoyance because it felt like they didn't have enough status to make very confident claims like that. This is more linked to confidence in body language and tone than to their confidence in their own claims, though both matter.
* Credentialism: them being unwilling to explain things and taking it as a given that they were correct because I didn't have the specific experiences or credentials that they had, without mentioning what specifically from gaining that experience would help me understand their argument.
* Not letting me speak and interrupting quickly to take down the fuzzy strawman version of what I meant, rather than letting me take my time to explain my argument.
* Morality: I felt like one of my cherished values was being threatened.
* The other person was relatively smart and powerful, at least within the specific situation. If they were dumb or not powerful, I would have just found the conversation amusing instead.
* The other person assumed I was dumb or naive, perhaps because they had met other people with the same position as me and those people came across as not knowledgeable.
* The other person getting worked up, for example raising their voice or showing other signs of being irritated, offended, or angry, while acting as if I was the emotional/offended one. This one particularly stings because of gender stereotypes. I think I'm more calm and reasonable and less easily offended than most people.
I've had a few conversations with men where it felt like they were just really bad at noticing when they were getting angry or emotional themselves, and kept pointing out that I was being emotional despite me remaining pretty calm (and perhaps even a little indifferent to the actual content of the conversation before the conversation moved to them being annoyed at me for being emotional).

* The other person's thinking is very black-and-white, thinking in terms of a very clear good and evil and not being open to nuance. Sort of a similar mechanism to the first thing.

Some examples of claims that recently triggered me. They're not so important themselves, so I'll just point at the rough thing rather than list out actual claims:

* AI killing all humans would be good because thermodynamics god/laws of physics good
* Animals feel pain but this doesn't mean we should care about them
* We are quite far from getting AGI
* Women as a whole are less rational than men are
* Palestine/Israel stuff

Doing the above exercise was helpful because it helped me generate ideas for things to try if I'm in situations like that in the future. But it feels like the most important thing is to just get better at noticing what I'm feeling in the conversation, and if I'm feeling bad and uncomfortable, to think about whether the conversation is useful to me at all, and if so, for what reason. And if not, to make a conscious decision to leave the conversation. Reasons the conversation could be useful to me:

* I change their mind
* I figure out what is true
* I get a greater understanding of why they believe what they believe
* Enjoyment of the social interaction itself
* I want to impress the other person with my intelligence or knowledge

Things to try will differ depending on why I feel like having the conversation.
habryka:
A thing that I've been thinking about for a while has been to somehow make LessWrong into something that could give rise to more personal wikis and wiki-like content. Gwern's writing has a very different structure and quality to it than the posts on LW, with the key components being that they get updated regularly and serve as more stable references for some concept, as opposed to a post, which is usually anchored in a specific point in time. We have a pretty good wiki system for our tags, but never really allowed people to just make their personal wiki pages, mostly because there isn't really any place to find them. We could list the wiki pages you created on your profile, but that doesn't really seem like it would allocate attention to them successfully. I was thinking about this more recently as Arbital is going through another round of slowly rotting away (its search currently being broken and this being very hard to fix due to annoying Google App Engine restrictions) and thinking about importing all the Arbital content into LessWrong. That might be a natural time to do a final push to enable people to write more wiki-like content on the site.
Recently someone either suggested to me (or maybe told me they or someone else were going to do this?) that we should train AI on legal texts, to teach it human values. Ignoring the technical problem of how to do this, I'm pretty sure legal texts are not the right training data. But at the time, I could not clearly put into words why. Today's SMBC explains it for me: Saturday Morning Breakfast Cereal - Law (smbc-comics.com). Law is not a good representation or explanation of most of what we care about, because it's not trying to be. Law is mainly focused on the contentious edge cases. Training an AI on trolley problems and other ethical dilemmas is even worse, for the same reason.


Recent Discussion

This is my personal opinion, and in particular, does not represent anything like a MIRI consensus; I've gotten push-back from almost everyone I've spoken with about this, although in most cases I believe I eventually convinced them of the narrow terminological point I'm making.

In the AI x-risk community, I think there is a tendency to ask people to estimate "time to AGI" when what is meant is really something more like "time to doom" (or, better, point-of-no-return). For about a year, I've been answering this question "zero" when asked.

This strikes some people as absurd or at best misleading. I disagree.

The term "Artificial General Intelligence" (AGI) was coined in the early 00s, to contrast with the prevalent paradigm of Narrow AI. I was getting my undergraduate computer science...

Yeah, the precise ability I'm trying to point to here is tricky. Almost any human (barring certain forms of senility, severe disability, etc) can do some version of what I'm talking about. But as in the restaurant example, not every human could succeed at every possible example.

I was trying to better describe the abilities that I thought GPT-4 was lacking, using very simple examples. And it started looking way too much like a benchmark suite that people could target.

Suffice to say, I don't think GPT-4 is an AGI. But I strongly suspect we're only a couple of breakthroughs away. And if anyone builds an AGI, I am not optimistic we will remain in control of our futures.

jmh:
I found this an interesting but complex read for me -- both the post and the comments. I found a number of what seemed good points to consider, but I seem to be coming away from the discussion thinking about the old parable of the blind men and the elephant.
AnthonyC:
I agree that filling a context window with worked sudoku examples wouldn't help for solving hidouku. But there is a common element here to the games. Both look like math, but aren't about numbers except that there's an ordered sequence. The sequence of items could just as easily be an alphabetically ordered set of words. Both are much more about geometry, or topology, or graph theory, for how a set of points is connected. I would not be surprised to learn that there is a set of tokens, containing no examples of either game, combined with a checker (like your link has) that points out when a mistake has been made, that enables solving a wide range of similar games.

I think one of the things humans do better than current LLMs is that, as we learn a new task, we vary what counts as a token and how we nest tokens. How do we chunk things? In sudoku, each box is a chunk, each row and column are a chunk, the board is a chunk, "sudoku" is a chunk, "checking an answer" is a chunk, "playing a game" is a chunk, and there are probably lots of others I'm ignoring.

I don't think just prompting an LLM with the full text of "How to Solve It" in its context window would get us to a solution, but at some level I do think it's possible to make explicit, in words and diagrams, what it is humans do to solve things, in a way legible to it. I think it largely resembles repeatedly telescoping in and out, to lower and higher abstractions, applying different concepts and contexts, locally sanity checking ourselves, correcting locally obvious insanity, and continuing until we hit some sort of reflective consistency. Different humans have different limits on what contexts they can successfully do this in.
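The "checker that points out when a mistake has been made" can indeed be game-agnostic, since the rules of these games are mostly uniqueness constraints over groups of cells. Here's a minimal Python sketch of that idea; the cell coordinates and group definitions are my own illustration, not anything from the linked checker:

```python
# A generic "mistake checker" for sudoku-like games: the rules are just
# uniqueness constraints over groups of cells, regardless of whether the
# symbols are digits, letters, or alphabetically ordered words.

def violations(grid, groups):
    """Return the groups whose filled cells contain a repeated symbol.

    grid   -- dict mapping cell coordinates to a symbol (or None if empty)
    groups -- iterable of cell-coordinate lists (rows, columns, boxes, ...)
    """
    bad = []
    for group in groups:
        filled = [grid[c] for c in group if grid.get(c) is not None]
        if len(filled) != len(set(filled)):
            bad.append(group)
    return bad

# A 4x4 sudoku row with a repeated symbol is flagged:
row = [(0, 0), (0, 1), (0, 2), (0, 3)]
grid = {(0, 0): "1", (0, 1): "2", (0, 2): "1", (0, 3): None}
assert violations(grid, [row]) == [row]
```

The point of the sketch is that nothing in `violations` knows about sudoku specifically; swapping in hidouku-style adjacency groups would reuse the same checker.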
Logan Zoellner:
Absolutely.  I don't think it's impossible to build such a system.  In fact, I think a transformer is probably about 90% there.   Need to add trial and error, some kind of long-term memory/fine-tuning and a handful of default heuristics.  Scale will help too, but no amount of scale alone will get us there.

On 16 March 2024, I sat down to chat with New York Times technology reporter Cade Metz! In part of our conversation, transcribed below, we discussed his February 2021 article "Silicon Valley's Safe Space", covering Scott Alexander's Slate Star Codex blog and the surrounding community.

The transcript has been significantly edited for clarity. (It turns out that real-time conversation transcribed completely verbatim is full of filler words, false starts, crosstalk, "uh huh"s, "yeah"s, pauses while one party picks up their coffee order, &c. that do not seem particularly substantive.)


ZMD: I actually have some questions for you.

CM: Great, let's start with that.

ZMD: They're critical questions, but one of the secret-lore-of-rationality things is that a lot of people think criticism is bad, because if someone criticizes you, it hurts your...

I think this is a perfectly valid argument for why NYT shouldn't publish it, it just doesn't seem very strong or robust… Like, if the NYT did go out and count the number of pebbles on your road, then yes there's an opportunity cost to this etc., which makes it a pretty unnecessary thing to do, but it's not like you'd have any good reason to whip out a big protest or anything.

The context from above is that we’re weighing costs vs benefits of publishing the name, and I was pulling out the sub-debate over what the benefits are (setting aside the disagreement ... (read more)

tailcalled:
Why the downvotes? Because it's an irrelevant/tangential ramble? Or some more specific reason?
localdeity:
Looking at Wiki's Undercover Journalism article, one that comes to mind is Nellie Bly's Ten Days in a Mad-House. Interestingly... I can't say I'm happy with failure being rewarded with a higher budget.  Still, it may have been true that their budget was insufficient to provide sanitary and humane conditions.  Anyway, the report itself seems to have been important and worthwhile.
frankybegs:
Clearly. But if you can't do it without resorting to deliberately misleading rhetorical sleights to imply something you believe to be true, the correct response is not to. Or, more realistically, if you can't substantiate a particular claim with any supporting facts, due to the limitations of the form, you shouldn't include it nor insinuate it indirectly, especially if it's hugely inflammatory. If you simply cannot fit in the "receipts" needed to substantiate a claim (which seems implausible anyway), as a journalist you should omit that claim. If there isn't space for the evidence, there isn't space for the accusation.

This is the (bi-)annual ACX/SSC Schelling Meetup, where you can meet like-minded curious folks. This time I reserved an indoor space! I'm pleased to announce that we meet on Saturday, 27th of April at 15:00 at Leih-Lokal Freiräume, Gerwigstraße 41, Karlsruhe.

This is foremost a social event, and there is no structure or schedule. Just come and enjoy the discourse about any topic you are interested in.

I'll try to provide some snacks so please RSVP for a better estimate of the expected number of mouths to feed.

The Karlsruhe Rationality group (currently in hiatus) aims to connect Rationalists from Karlsruhe (Germany) and surrounding areas. Everyone worries they're not serious enough about ACX to join, so you should banish that thought and come anyway.  "Please feel free to come even if you feel awkward about it, even if you’re not 'the typical ACX reader', even if you’re worried people won’t like you", even if you didn't come to the previous meetings, even if you don't speak German, etc., etc.

The location is confirmed :)
 

Lots of people already know about ACX/SSC, but I think that crossposting to LW is unusually valuable in this particular case, since lots of people were waiting for a big schelling-point overview of the 15-hour Rootclaim Lab Leak debate, and unlike LW, ACX's comment section is a massive vote-less swamp that lags the entire page and gives everyone equal status. 

It remains unclear whether commenting there is worth your time if you think you have something worth saying, since there's no sorting, only sifting, implying that it attracts small numbers of sifters instead of large numbers of people who expect sorting.

Here are the first 11 paragraphs:

Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.

His method

...

One thing that occurs to me is that each analysis, such as the Putin one, can be thought of as a function hypothesis.

It takes as inputs the variables:

* Russian demographics
* healthy lifestyle
* family history
* facial swelling
* hair present

and outputs the probability 86%, where the function is

P = F(demographics, lifestyle, history, swelling, hair)

Each term is then looked up in some source, which has a data quality, and the actual equation seems to be a mix of Bayes and simple probability calculations.
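One hedged sketch of what such a function F could look like, if the terms were combined naive-Bayes-style in odds form. The factor values below are invented placeholders for illustration, not Rootclaim's actual numbers or method:

```python
# Naive-Bayes-style combination in odds form: start from a prior and
# multiply in one likelihood ratio per input variable. All numbers here
# are invented placeholders, not values from the Putin analysis.

def posterior_probability(prior, likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# e.g. prior 50%, two variables each favoring the hypothesis 3:1,
# and one variable weighing against it at 1:2:
p = posterior_probability(0.5, [3.0, 3.0, 0.5])
```

This also makes the "data quality" point concrete: each likelihood ratio is only as good as the source it was looked up in, and the multiplication happily compounds any error.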

There are other variables not considered, and other... (read more)

I want to thank Jan Kulveit, Tomáš Gavenčiak, and Jonathan Shock for their extensive feedback and the ideas they contributed to this work, and Josh Burgener and Yusuf Heylen for their proofreading and comments. I would also like to acknowledge the Epistea Residency and its organisers, where much of the thinking behind this work was done.

This post aims to build towards a theory of how meditation alters the mind based on the ideas of active inference (ActInf). ActInf has been growing in its promise as a theory of how brains process information and interact with the world and has become increasingly validated with a growing body of work in the scientific literature.

Why bring the idea of ActInf and meditation together? Meditation seems to have a profound effect on...

In his method, I think the happiness of the first few Jhanas is not caused by prediction error directly, but rather indirectly through the activation of the reward circuitry. So while the method involves creating some amount of prediction error, the ultimate result is less overall prediction error, because the reward neurotransmitters bring the experiential world closer to the ideal.

After the first three Jhanas, the reward circuitry is less relevant and you start to reduce overall prediction error through other means, by allowing attention to let go of asp... (read more)

cesiumquail:
I would say the warm shower causes less prediction error than the cold shower because it’s less shocking to the body, but there’s still a very subtle amount of discomfort which is hidden under all the positive feelings. The level of discomfort I’m talking about is very slight, but you would notice it if there was nothing else occupying your attention. I don’t mean to say it causes negative emotions. It’s more like the discomfort of imagining an unsatisfying shape, or watching a video at slightly lower resolution. If you compare any activity to deep sleep or unconsciousness, you can find sensations that grab your attention by being slightly irritating. As long as it’s noticeable I think it causes slight negative valence. But this is often outweighed by other aspects of the activity that increase valence. Sitting at home doing nothing might involve the negative sensations of boredom, restlessness, and impatience, all of which disappear when we go for a walk, so any discomfort is hard to notice underneath the obvious increase in valence.

Summary: The post describes a method that allows us to use an untrustworthy optimizer to find satisficing outputs.

Acknowledgements: Thanks to Benjamin Kolb (@benjaminko), Jobst Heitzig (@Jobst Heitzig) and Thomas Kehrenberg (@Thomas Kehrenberg)  for many helpful comments.

Introduction

Imagine you have black-box access to a powerful but untrustworthy optimizing system, the Oracle. What do I mean by "powerful but untrustworthy"? I mean that, when you give an objective function f as input to the Oracle, it will output an element x that has an impressively low[1] value of f(x). But sadly, you don't have any guarantee that it will output the optimal element and e.g. not one that's also chosen for a different purpose (which might be dangerous for many reasons, e.g. instrumental convergence).
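To make the setup concrete, here is a minimal sketch of the satisficing framing from the summary. The names (`satisfice`, `threshold`) and the acceptance rule are my own illustration; the post's actual construction may differ:

```python
# Sketch: treat the Oracle as a black box, and only accept its output if
# we can verify, by evaluating the objective ourselves, that it satisfices.

def satisfice(oracle, f, threshold):
    x = oracle(f)              # untrusted optimization step
    if f(x) <= threshold:      # trusted verification step
        return x
    raise ValueError("Oracle output does not satisfice")

# Toy stand-in oracle (trustworthy, for demonstration only):
def toy_oracle(g):
    return min(range(-10, 11), key=g)

result = satisfice(toy_oracle, lambda x: (x - 3) ** 2, threshold=1)
```

Note that checking f(x) only rules out non-satisficing outputs; it does nothing about a dangerous choice *among* satisficing ones, which is exactly the worry the post raises.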

What questions can you safely ask the Oracle? Can you use it to...


There's a particular kind of widespread human behavior that is kind on the surface, but upon closer inspection reveals quite the opposite. This post is about four such patterns.

 

Computational Kindness

One of the most useful ideas I got out of Algorithms to Live By is that of computational kindness. I was quite surprised to only find a single mention of the term on lesswrong. So now there's two.

Computational kindness is the antidote to a common situation: imagine a friend from a different country is visiting and will stay with you for a while. You're exchanging some text messages beforehand in order to figure out how to spend your time together. You want to show your friend the city, and you want to be very accommodating and make sure...

I forget where I read it, but this idea seems similar. When responding to a request, being upfront about your boundaries or constraints feels intense but can be helpful for both parties. If Bob asks Alice to help him move, and Alice responds "sure thing", that leaves the interaction open to miscommunication. But if instead Alice says, "Yeah! I am available 1pm to 5pm, and my neck has been bothering me, so no heavy lifting for me!", then although that seems like a less kind response, Bob now doesn't have to guess at Alice's constraints and can comfortably move forward without feeling the need to tiptoe around how long and to what degree Alice can help.

CstineSublime:
This is an extremely relatable post, in both ways. I often find myself on the other side of these interactions too, not knowing how to label and describe my awareness of what's happening without coming across as Larry David from Curb Your Enthusiasm.
Lukas_Gloor:
I really liked this post! I will probably link to it in the future. Edit: Just came to my mind that these are things I tend to think of under the heading "considerateness" rather than kindness, but it's something I really appreciate in people either way (and the concepts are definitely linked). 

About 15 years ago, I read Malcolm Gladwell's Outliers. He profiled Chris Langan, an extremely high-IQ person, claiming that he had only mediocre accomplishments despite his high IQ. Chris Langan's theory of everything, the Cognitive Theoretic Model of the Universe, was mentioned. I considered that it might be worth checking out someday.

Well, someday has happened, and I looked into CTMU, prompted by Alex Zhu (who also paid me for reviewing the work). The main CTMU paper is "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory".

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part. It uses quite a lot of nonstandard terminology (partially...

YimbyGeorge:
Falsifiable predictions?

I don't see any. He even says his approach “leaves the current picture of reality virtually intact”. In Popper's terms this would be metaphysics, not science, which is part of why I'm skeptical of the claimed applications to quantum mechanics and so on. Note that, while there's a common interpretation of Popper saying metaphysics is meaningless, he contradicts this.

Quoting Popper:

Language analysts believe that there are no genuine philosophical problems, or that the problems of philosophy, if any, are problems of linguistic usage, or of the meaning of wo

... (read more)
Wei Dai:
While reading this, I got a flash-forward of what my life (our lives) may be like in a few years, i.e., desperately trying to understand and evaluate complex philosophical constructs presented to us by superintelligent AI, which may or may not be actually competent at philosophy.
This is a linkpost for https://arxiv.org/abs/2403.09863

Hi, I’d like to share my paper that proposes a novel approach for building white box neural networks.

The paper introduces semantic features as a general technique for controlled dimensionality reduction, somewhat reminiscent of Hinton’s capsules and the idea of “inverse rendering”. In short, semantic features aim to capture the core characteristic of any semantic entity - having many possible states but being at exactly one state at a time. This results in regularization that is strong enough to make the PoC neural network inherently interpretable and also robust to adversarial attacks - despite no form of adversarial training! The paper may be viewed as a manifesto for a novel white-box approach to deep learning.
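As I understand the "many possible states, but exactly one state at a time" characteristic, the simplest toy rendering of it is a vector of state scores squashed so that the states compete. The code below is purely my own illustration of that one idea, not the paper's actual layer or architecture:

```python
import math

# Toy "semantic feature": a vector of state scores normalized so the
# states compete, and (at low temperature) exactly one state dominates.
# This is an outside illustration of the one-state-at-a-time idea,
# not the construction from the paper.

def semantic_feature(scores, temperature=0.1):
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

state = semantic_feature([1.0, 3.0, 2.0])
# With a low temperature, nearly all mass sits on the highest-scoring state.
assert max(state) > 0.99 and abs(sum(state) - 1.0) < 1e-9
```

Whether this kind of winner-take-all pressure is what actually produces the paper's interpretability and adversarial-robustness results, I can't say from the abstract alone.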

As an independent researcher I’d be grateful for your feedback!

4mishka12h
This looks interesting, thanks! This post could benefit from an extended summary. In lieu of such a summary, in addition to the abstract I'll quote a paragraph from Section 1.2, "The core idea".

Thank you! The quote you picked is on point, I added an extended summary based on this, thanks for the suggestion!

LessOnline

A Festival of Writers Who are Wrong on the Internet

May 31 - Jun 2, Berkeley, CA