Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon20

Curious how long this typically takes you?

Raemon60

Well, this is the saddest I've been since April 1st 2022.

It really sucks that SB 1047 didn't pass. I don't know if Anthropic could have gotten it passed if they had said "dudes, this is fucking important, pass it now" instead of "for some reason we should wait until things are already..."

It is nice that at least Anthropic did still get to show up to the table, and that they said anything at all. I sure wish their implied worldview didn't seem so crazy. (I really don't get how you can think it's workable to race here even if you think Phase I alignment is easy, and it also seems really wrong to think Phase I alignment is that likely to be easy.)

It feels like winning pathways right now mostly route through:

  • Some kind of miracle of Vibe Shift (ideally mediated through a miracle of Sanity). I think this needs masterwork-level communication / clarity / narrative setting.
  • Just... idk, somehow figure out how to just Solve The Hard Part Real Fast.
  • Somehow muddle through with scary demos that get a few key people to change their mind before it's too late.
Raemon20

Oh lol right.

Raemon20

You wouldn't guess it, but I have an idea...

...what.... what was your idea?

Raemon97

I don't know if I'd go as strong as the OP, but I think you're being the most pro-social if you have a sense of the scale of other things-worth-doing that aren't in the news, and consciously check how the current News Thing fits into that scale of importance.

(There can be a few different ways to think about importance, which this frame can be agnostic on, i.e. it doesn't have to be "global utilitarian in the classical sense".)

Raemon20

FYI I do currently think "learn when/how to use your subconscious to process things" is an important tool in the toolbox (I got advice about that from a mentor I went to talk to). Some of the classes of moves here are:

  • build up intuitions about when it is useful to background process things vs deliberate-process them
  • if your brain is sort of subconsciously wandering in a rut, use a small amount of agency to direct your thoughts in a new direction, but then let them wander once you get them rolling down the hill in that new direction
Raemon30

I feel less optimistic about the "forgetting something on the tip of your tongue" case, and pretty optimistic about the code debugging.

Raemon20

The feedbackloops in escalating "realness" here for me are:

  • Do I identify principles/skills/habits/etc that seem like they should successfully cut down on time spent on things I regularly do?
  • Do I successfully identify moments where it seems like I should "think something faster the first time" by applying a technique?
  • Do I do that? Does it seem to save time?

("does it seem to save time?" isn't an ironclad feedbackloop obviously. But, I think it + common sense is at least pretty good)

I've been doing some-kind-of-variant on this since 2023 with the Thinking Physics exercise "reflection portion". Everything in Skills from a year of Purposeful Rationality Practice I think at least somewhat counts as habits that I've gained that allow me to either think-things-faster or think-things-at-all.

A few months ago, I workshopped an ad-hoc "review your thinking for 10 minutes" step after various exercises into the ~hour-long exercise you see here. Since then, some new things I try at least sometimes:

  • Look at my checklist for debugging code, and do the things on it. These so far include:
    • "actually adopt a stance of 'form hypotheses and try to disprove them'"
    • "patiently follow the code all the way up the stack" (instead of bouncing off after the second step)
    • "binary search for where the problem is by commenting out ~half the code in the relevant section."

      (these may seem obvious but I'm just not that strong a developer, and exercises like this are the main mechanism by which I've gotten better at basic debugging skills)
  • Try the simple dumb thing first. (I still fail to do this an embarrassing amount of the time, but am working on it)
  • When I notice myself flailing around myopically,
    • a) these days, try getting a Thinking Assistant for the day.
    • b) back in December, when I first noticed I was struggling to focus, I decided to write the Thinking Assistants post and spin up a Thinking Assistant community. The general form of that is "consider spinning up whole-ass subcommunities to deal with problems." (I knew from previous experience that finding a single thinking assistant was a brittle solution)
  • Also when I'm myopically flailing, try forming a more complete model of my constraints (as described in this blogpost), and then solve for them.
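
(As a toy sketch of the "binary search" move above: the pipeline, step names, and the deliberately buggy step are all made up for illustration, not real code from my projects. The idea is just to disable roughly half the suspect code at a time and check whether the symptom survives.)

```python
# Made-up pipeline purely to illustrate "binary search by disabling ~half the code".
def strip_whitespace(s):
    return s.strip()

def lowercase(s):
    return s.lower()

def drop_vowels(s):
    # Pretend this step contains the bug we're hunting.
    return "".join(c for c in s if c not in "aeiou")

def add_prefix(s):
    return "item:" + s

STEPS = [strip_whitespace, lowercase, drop_vowels, add_prefix]

def run(s, steps):
    for step in steps:
        s = step(s)
    return s

raw = "  Hello World  "
print(run(raw, STEPS))           # symptom: 'item:hll wrld' looks wrong

# Disable the second half: run only the first two steps.
halfway = run(raw, STEPS[:2])
print(halfway)                   # 'hello world' -- fine, so the bug is in the second half

# Recurse into the second half, one step at a time from the known-good midpoint.
print(run(halfway, STEPS[2:3]))  # 'hll wrld' -- reproduces the symptom: drop_vowels is the culprit
```

The same move works at coarser granularity (commenting out half of a function, half of a diff, half of a config), so long as you can still run the code and check for the symptom.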

The first three things feel like they're straightforwardly working, although it's hard to tell how much they actually speed me up. (Often the thing I would previously do when failing to debug code was "ask someone for help", so it's less that there's a speedup exactly and more that I interrupt my colleagues less)

The fourth one, I feel like I'm still workshopping into a form that reliably works for me, because "make a map of the constraints" is made of a lot of subskills, which vary depending on the situation. I anticipate it turning into something pretty important over the next year but it's too early to tell.

Raemon20

I think this is sort of right, but if you think big labs have wrong worldmodels about what should be useful, it's not that helpful to produce work "they think is useful" that isn't actually useful. (i.e. if you have a project that is 90% likely to be used by a lab, but ~0% likely to reduce x-risk, this isn't obviously better than a project that is 30% likely to be used by a lab if you hustle/convince them, but would actually reduce x-risk if you succeeded.)

I do think it's correct to have some model of how your research will actually get used (which I expect to involve some hustling/persuasion if it involves new paradigms)

Raemon21

I agree that it requires upfront investment, but a few comments on this post are reminding me "oh right, the default thing is that everyone falls into The Meta Trap", wherein people invest in meta things that end up not paying off.

My solution to this is to set standards for myself that involve keeping up a "ship quickly" momentum, and generally aiming to spend ~10% of my time on meta.
