A list of some contrarian takes I have:

* People are currently predictably too worried about misuse risks.
* What people really mean by "open source" vs "closed source" labs is actually "responsible" vs "irresponsible" labs, which is not affected by regulations targeting open source model deployment.
* Neuroscience as an outer alignment[1] strategy is embarrassingly underrated.
* Better information security at labs is not clearly a good thing, and if we're worried about great power conflict, probably a bad thing.
* Much research on deception (Anthropic's recent work, trojans, jailbreaks, etc.) is not targeting "real" instrumentally convergent deception reasoning, but learned heuristics. Not bad in itself, but IMO this places heavy asterisks on the results they can get.
* ML robustness research (like FAR Labs' Go stuff) does not help with alignment, and helps moderately with capabilities.
* The field of ML is a bad field to take epistemic lessons from. Note I don't talk about the results from ML.
* ARC's MAD seems doomed to fail.
* People in alignment put too much faith in the general factor g. It exists, and is powerful, but is not all-consuming or all-predicting. People are often very smart but lack social skills, or agency, or strategic awareness, etc., and vice versa. They can also be very smart in a particular area but dumb in other areas. This is relevant for hiring & deference, but less for object-level alignment.
* People are too swayed by rhetoric in general, and in alignment, rationality, & EA too, in different ways and admittedly to a lesser extent than the general population. People should fight against this more than they seem to (which is not really at all, except for the most overt of cases). For example, I see nobody saying they don't change their minds on account of Scott Alexander because he's too powerful a rhetorician. Ditto for Eliezer, since he is also a great rhetorician. In contrast, Robin Hanson is a famously terrible rhetorician, so people should listen to him more.
* There is a technocratic tendency in strategic thinking around alignment (partially inherited from OpenPhil, I think, but smart people are also probably just more likely to think this way) which biases people towards simpler & more brittle top-down models without recognizing how brittle those models are.

1. A non-exact term ↩︎
Thomas Kwa
I started a dialogue with @Alex_Altair a few months ago about the tractability of certain agent foundations problems, especially the agent-like structure problem. I saw it as insufficiently well-defined to make progress on anytime soon. I thought the lack of similar results in easy settings, the fuzziness of the "agent"/"robustly optimizes" concept, and the difficulty of proving things about a program's internals given its behavior all pointed against working on this. But it turned out that we maybe didn't disagree on tractability much, it's just that Alex had somewhat different research taste, plus thought fundamental problems in agent foundations must be figured out to make it to a good future, and therefore working on fairly intractable problems can still be necessary. This seemed pretty out of scope and so I likely won't publish. Now that this post is out, I feel like I should at least make this known. I don't regret attempting the dialogue, I just wish we had something more interesting to disagree about.
Quote from Cal Newport's Slow Productivity book: "Progress in theoretical computer science research is often a game of mental chicken, where the person who is able to hold out longer through the mental discomfort of working through a proof element in their mind will end up with the sharper result."
We recently released an interview with independent scholar John Wentworth. It mostly centers around two themes: "abstraction" (forming concepts) and "agency" (dealing with goal-directed systems). Check it out!
Raemon
"Can we build a better Public Doublecrux?" Something I'd like to try at LessOnline is to somehow iterate on the "Public Doublecrux" format. Public Doublecrux is a more truthseeking oriented version of Public Debate. (The goal of a debate is to change your opponent's mind or the public's mind. The goal of a doublecrux is more like "work with your partner to figure out if you should change your mind, and vice vera") Reasons to want to do _public_ doublecrux include: * it helps showcase subtle mental moves that are hard to write down explicitly (i.e. tacit knowledge transfer) * there's still something good and exciting about seeing high profile smart people talk about ideas. Having some variant of that format seems good for LessOnline. And having at least 1-2 "doublecruxes" rather than "debates" or "panels" or "interviews" seems good for culture setting. Historically I think public doublecruxes have had some problems: * two people actually changing *their* minds tend to get into idiosyncratic frames that are hard for observers to understand. You're chasing *your* cruxes, rather than presenting "generally compelling arguments." This tends to get into weeds and go down rabbit holes * – having the audience there makes it a bit more awkward and performative. ... ... With that in mind, here are some ideas: * Maybe have the double cruxers in a private room, with videocameras. The talk is broadcast live to other conference-goers, but the actual chat is in a nice cozy room. * Have _two_ (or three?) dedicated facilitators. One is in the room with the doublecruxers, focused on helping them steer towards useful questions. (this has been tried before seems to go well if the facilitator prepares). The SECOND (and maybe third) facilitator hangs out with the audience outside, and is focused on tracking "what is the audience confused about?". The audience participates in a live google doc where they're organizing the conversational threads and asking questions. (the first facilitator is periodically surreptitiously checking the google doc or chat and sometimes asking the Doublecruxers questions about it) * it's possibly worth investing in developing a doublcrux process that's explicitly optimized for public consumption. This might be as simple as having the facilitator periodically asking participants to recap the open threads, what the goal of the current rabbit hole is, etc. But, like, brainstorming and doing "user tests" of it might be worthwhile. ... Anyway those are some thoughts for now. Curious if anyone's got takes.

Recent Discussion

Seth Herd
I think future more powerful/useful AIs will understand our intentions better IF they are trained to predict language. Text corpuses contain rich semantics about human intentions. I can imagine other AI systems that are trained differently, and I would be more worried about those. That's what I meant by current AI understanding our intentions possibly better than future AI.

Predicting the future is hard, so it’s no surprise that we occasionally miss important developments.

However, several times recently, in the contexts of Covid forecasting and AI progress, I noticed that I missed some crucial feature of a development I was interested in getting right, and it felt to me like I could’ve seen it coming if only I had tried a little harder. (Some others probably did better, but I could imagine that I wasn't the only one who got things wrong.)

Maybe this is hindsight bias, but if there’s something to it, I want to distill the nature of the mistake.

First, here are the examples that prompted me to take notice:

Predicting the course of the Covid pandemic:

  • I didn’t foresee the contribution from sociological factors (e.g., “people not wanting
...

Forecasting is hard.

Forecasting in a domain that includes human psychology, society-level propagation of beliefs, development of entirely new technology, and understanding how a variety of minds work in enough detail to predict not only what they'll do but how they'll change - that's really hard.

So, should we give up, and just prepare for any scenario? I don't think so. I think we should try harder.

That involves spending more individual time on it, and doing more collaborative prediction with people of different perspectives and different areas of expertis...

Stefan_Schubert
Thanks for this thoughtful article. It seems to me that the first and the second examples have something in common, namely an underestimate of the degree to which people will react to perceived dangers. I think this is fairly common in speculations about potential future disasters, and have called it sleepwalk bias. It seems like something that one should be able to correct for. I think there is an element of sleepwalk bias in the AI risk debate. See this post where I criticise a particular vignette.
faul_sname
I think one missing dynamic is "tools that an AI builds won't only be used by the AI that built them" and so looking at what an AI from 5 years in the future would do with tools from 5 years in the future if it was dropped into the world of today might not give a very accurate picture of what the world will look like in 5 years.
Kaj_Sotala
Hmm, I feel like I always had something like this as one of my default scenarios. Though it would of course have been missing some key details such as the bit about model release culture, since that requires the concept of widely applicable pre-trained models that are released the way they are today.

E.g. Sotala & Yampolskiy 2015 and Sotala 2018 both discussed there being financial incentives to deploy increasingly sophisticated narrow-AI systems until they finally crossed the point of becoming AGI. S&Y 2015:

And with regard to the difficulty of regulating them, S&Y 2015 mentioned that:

and, in the context of discussing AI boxing and oracles, argued that both AI boxing and Oracle AI are likely to be of limited (though possibly still some) value, since there's an incentive to just keep deploying all AI in the real world as soon as it's developed:

I also have a distinct memory of writing comments saying something like "why does anyone bother with 'the AI could escape the box' type arguments, when the fact that financial incentives would make the release of those AIs inevitable anyway makes the whole argument irrelevant", but I don't remember whether it was on LW, FB, or Twitter, and none of those platforms has a good way of searching my old comments.

But at least Sotala 2018 had an explicit graph showing the whole AI boxing thing as just one way by which the AI could escape, which was irrelevant if the AI was released otherwise:

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

nim
I notice that I am confused: an image of lily pads appears on https://www.lesswrong.com/s/XJBaPPEYAPeDzuAsy when I load it, but when I expand all community sequences on https://www.lesswrong.com/library (a show-all button might be nice....) and search the string "physical" or "necessity" on that page, I do not see the post appearing. This seems odd, because I'd expect that having a non-default image display when the sequence's homepage is loaded and having a good enough image to appear in the list should be the same condition, but it seems they aren't identical for that one.

There are two images provided for a sequence, the banner image and the card image. The card image is required for it to show up in the Library. 
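For readers curious how such a mismatch can happen, here is a hypothetical sketch of the rule described above; the field and function names are illustrative assumptions, not LessWrong's actual schema:

```python
# Hypothetical sketch of the display rule described above.
# Field names are illustrative, not LessWrong's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sequence:
    title: str
    banner_image_url: Optional[str] = None  # shown when the sequence's own page loads
    card_image_url: Optional[str] = None    # required for the card in the Library grid

def shows_in_library(seq: Sequence) -> bool:
    # A sequence can have a custom banner (so its homepage looks customized)
    # while still lacking the card image that the Library listing requires.
    return seq.card_image_url is not None

seq = Sequence(title="Example sequence", banner_image_url="lily-pads.png")
print(shows_in_library(seq))  # False: matches the behavior nim observed
```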

nim
I am delighted that you chimed in here; these are pleasingly composed and increase my desire to read the relevant sequences. Your post makes me feel like I meaningfully contributed to the improvement of these sequences by merely asking a potentially dumb question in public, which is the internet at its very best. Artistically, I think the top (fox face) image for lotteries cropped for its bottom 2/3 would be slightly preferable to the other, and the bottom (monochrome white/blue) for geometric makes a nicer banner in the aspect ratio that they're shown as.
RobertM
EDIT: looks like habryka got there earlier and I didn't see it. https://www.lesswrong.com/posts/zXJfH7oZ62Xojnrqs/#sLay9Tv65zeXaQzR4 Intercom is indeed hidden on mobile (since it'd be pretty intrusive at that screen size).

1. If you find that you’re reluctant to permanently give up on to-do list items, “deprioritize” them instead

I hate the idea of deciding that something on my to-do list isn’t that important, and then deleting it off my to-do list without actually doing it. Because once it’s off my to-do list, then quite possibly I’ll never think about it again. And what if it’s actually worth doing? Or what if my priorities will change such that it will be worth doing at some point in the future? Gahh!

On the other hand, if I never delete anything off my to-do list, it will grow to infinity.

The solution I’ve settled on is a priority-categorized to-do list, using a kanban-style online tool (e.g. Trello). The left couple columns (“lists”) are very active—i.e., to-do list...
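A minimal sketch of the kind of priority-categorized board described so far, with illustrative column names (in practice this lives in a kanban tool like Trello rather than in code):

```python
# Minimal sketch of a priority-categorized to-do list: items get moved to a
# low-priority column instead of being deleted, so they stay findable later.
board = {
    "this week": ["file taxes", "reply to reviewer"],
    "soon": ["fix bike brakes"],
    "someday / deprioritized": [],   # the "deprioritize instead of delete" column
}

def deprioritize(board: dict, column: str, item: str) -> None:
    """Move an item to the lowest-priority column rather than deleting it."""
    board[column].remove(item)
    board["someday / deprioritized"].append(item)

deprioritize(board, "this week", "reply to reviewer")
print(board["someday / deprioritized"])  # ['reply to reviewer']
```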

MondSemmel
I've found that there's value in having short to-do lists, because short lists fit much better into working memory and are thus easier to think about. If items are deprioritized rather than properly deleted from the system, this increases the total number of to-dos one could think about. On the other hand, maybe moving tasks to offscreen columns is sufficient to get them off one's mind? It seems to me like an easier and more comprehensive approach would be to use a text editor with proper version control and diff features, and then to name particular versions before making major changes.

"short lists fit much better into working memory"

IMO the main point of a to-do list is to not have the to-do list in working memory. The only thing that should be in working memory is the one thing you're actually supposed to be focusing on and doing, right now. Right?

Or if you're instead in the mode of deciding what to do next, or making a schedule for your day, etc., then that's different, but working memory is still kinda irrelevant because presumably you have your to-do list open on your computer, right in front of your eyes, while you do that, right?...

This is a linkpost for http://Less.Online/

A Festival of Writers Who are Wrong on the Internet[1]

LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle.

We're running a rationalist conference!

The ticket cost is $400 minus your LW karma in cents.
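For concreteness, a quick sketch of that pricing rule; whether the price floors at zero for very high karma isn't stated, so the clamp below is an assumption:

```python
# Ticket price per the stated rule: $400 minus your LW karma in cents.
def ticket_price_usd(lw_karma: int) -> float:
    price = 400 - 0.01 * lw_karma
    return max(price, 0.0)  # assumption: the price doesn't go negative (not stated)

print(ticket_price_usd(0))      # 400.0
print(ticket_price_usd(5000))   # 350.0
```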

Confirmed attendees include Scott Alexander, Zvi Mowshowitz, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.

Less.Online

Go through to Less.Online to learn about who's attending, venue, location, housing, relation to Manifest, and more.

We'll post more updates about this event over the coming weeks as it all comes together.

If LessOnline is an awesome rationalist event,
I desire to believe that LessOnline is an awesome rationalist event;

If LessOnline is not an awesome rationalist event,
I desire to believe that LessOnline is not an awesome rationalist event;

Let me not become attached to beliefs I may not want.

      —Litany of Rationalist Event Organizing

  1. ^

    But Striving to be Less So

Error

I see no general-inquiries address on less.online, so I hope it's okay if I post mine here. Longtime rationalsphere lurker, much rarer poster, considering going. I'm based in Atlanta and pricing the trip:

  1. It's not clear from the page what the 'summer camp' part of the schedule is. To me that phrase connotes 'for kids', but I'm not sure, so I'm not sure if it's worth it to me to stay extra days. [EDIT: Never mind, I see this was answered below.]
  2. What's the nearest major airport, and is Lighthaven accessible from there by train and/or uber? (i.e. will I nee
...
Ben Pace
I did find it and we sent him an email, hope he reads it and joins :)

Here's something I've been pondering.

Hypothesis: if a transformer has internal concepts, and they are represented in the residual stream, then, because we have access to 100% of the information, it should be possible for a non-linear probe to get 100% out-of-distribution accuracy. 100% is important because we care about how something like value learning will generalise OOD.

And yet we don't get 100% (in fact most reported metrics are on much easier settings than what we care about: in-distribution, or on careful setups). Which of the hypothesis's assumptions is wrong, do you think?
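A minimal sketch of the kind of probing experiment this hypothesis is about, using synthetic arrays as stand-ins for cached residual-stream activations; the data-generating function, width, and probe architecture are all illustrative assumptions, not a claim about any particular model:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
d_model = 64  # hypothetical residual-stream width

def make_data(n: int, shift: float = 0.0):
    """Synthetic stand-in for residual-stream activations plus a binary concept label."""
    X = rng.normal(size=(n, d_model)) + shift          # `shift` models distribution shift
    w = np.ones(d_model) / np.sqrt(d_model)            # direction along which the concept lives
    y = (X @ w + 0.1 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_data(2000)                     # "in-distribution" activations
X_ood, y_ood = make_data(500, shift=1.5)               # "out-of-distribution" activations

# Non-linear probe: a small MLP reading the full activation vector.
probe = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
probe.fit(X_train, y_train)

print("in-distribution accuracy:", accuracy_score(y_train, probe.predict(X_train)))
print("OOD accuracy:            ", accuracy_score(y_ood, probe.predict(X_ood)))
# On real cached activations, the observation in the quick take is that the
# OOD number falls short of 100% even though the probe sees the full stream.
```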

wassname
New observations > new thoughts when it comes to calibrating yourself. The best-calibrated people are those who get lots of interaction with the real world, not those who think a lot or have a complicated inner model. Tetlock's superforecasters were gamblers and weathermen.

I stumbled upon a Twitter thread where Eliezer describes what seems to be his cognitive algorithm that is equivalent to Tune Your Cognitive Strategies, and have decided to archive / repost it here.

Sarah Constantin: I really liked this example of an introspective process, in this case about the "life problem" of scheduling dates and later canceling them: malcolmocean.com/2021/08/int…

Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.

SC: if you have done anything REMOTELY like training yourself to do it in 30 seconds, then you are radically smarter/more able/etc than me and all the other

...

I guess you could try it and see if you reach wrong conclusions, but that only works if your mind isn't so wired up with shortcuts that you cannot (or are much less likely to) discover your mistakes.

I've been puzzling over why EY's efforts to show the dangers of AGI (most notably this) have been unconvincing enough that other experts (e.g. Paul Christiano) and, in my experience, typical rationalists have not adopted p(doom) > 90% like EY, or even > 50%. I was unconvinced because he simply didn't present a chain of reasoning that shows what he's trying to show....

Suppose Alice and Bob are two Bayesian agents in the same environment. They both basically understand how their environment works, so they generally agree on predictions about any specific directly-observable thing in the world - e.g. whenever they try to operationalize a bet, they find that their odds are roughly the same. However, their two world models might have totally different internal structure, different “latent” structures which Alice and Bob model as generating the observable world around them. As a simple toy example: maybe Alice models a bunch of numbers as having been generated by independent rolls of the same biased die, and Bob models the same numbers using some big complicated neural net. 

Now suppose Alice goes poking around inside of her world model, and somewhere in there...
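As a concrete riff on the toy example above (my own illustration, not code from the post): two structurally different models fit to the same die rolls can end up agreeing on predictions about the observable data while having different internal latents. The die bias and fitting details below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.1, 0.1, 0.1, 0.2, 0.2, 0.3])   # hidden bias of the die
rolls = rng.choice(6, size=5000, p=true_p)           # observations both agents see

# Alice's latent: a single vector of die-face probabilities (iid categorical model).
alice_p = np.bincount(rolls, minlength=6) / len(rolls)

# Bob's latent: softmax logits fit by gradient ascent on the log-likelihood,
# a different internal parameterization of the same observable distribution.
counts = np.bincount(rolls, minlength=6)
logits = np.zeros(6)
for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    grad = counts - len(rolls) * probs               # d(log-likelihood)/d(logits)
    logits += 1e-4 * grad
bob_p = np.exp(logits) / np.exp(logits).sum()

print("Alice:", alice_p.round(3))
print("Bob:  ", bob_p.round(3))                      # near-identical observable predictions
```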

LessOnline & Manifest Summer Camp

June 3rd to June 7th

Between LessOnline and Manifest, stay for a week of experimental events, chill coworking, and cozy late night conversations.

Prices rise by $100 on May 13th