LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Well, this is the saddest I've been since April 1st 2022.
It really sucks that SB 1047 didn't pass. I don't know if Anthropic could have gotten it passed if they had said "dudes, this is fucking important, pass it now" instead of "for some reason we should wait until things are already dangerous."
It is nice that at least Anthropic did still show up to the table, and that they said anything at all. I sure wish their implied worldview didn't seem so crazy. (I really don't get how you can think it's workable to race here, even if you think Phase I alignment is easy; it also seems really wrong to think Phase I alignment is that likely to be easy.)
It feels like winning pathways right now mostly route through:
Oh lol right.
You wouldn't guess it, but I have an idea...
...what.... what was your idea?
I don't know if I'd go as far as the OP, but I think you're being the most pro-social if you have a sense of the scale of other things-worth-doing that aren't in the news, and consciously check how the current News Thing fits into that scale of importance.
(There can be a few different ways to think about importance, which this frame can be agnostic on. i.e. doesn't have to be "global utilitarian in the classical sense")
FYI I do currently think "learn when/how to use your subconscious to process things" is an important tool in the toolbox (I got advice about that from a mentor I went to talk to). Some of the classes of moves here are:
I feel less optimistic about the "forgetting something on the tip of your tongue", and pretty optimistic about the code debugging.
The feedback loops of escalating "realness" here, for me, are:
("does it seem to save time?" isn't an ironclad feedback loop, obviously. But I think it + common sense is at least pretty good)
I've been doing some-kind-of-variant on this since 2023 with the Thinking Physics exercise "reflection portion". Everything in Skills from a year of Purposeful Rationality Practice I think at least somewhat counts as habits that I've gained that allow me to either think-things-faster, or, think-things-at-all.
I workshopped an ad-hoc "review your thinking for 10 minutes" after various exercises into the ~hour-long exercise you see here, a few months ago. In that time, some new things I try at least sometimes:
The first three things feel like they're straightforwardly working, although it's hard to tell how much they actually speed me up. (Often the thing I would previously do when failing to debug code was "ask someone for help", so it's less that there's a speedup exactly and more that I interrupt my colleagues less)
The fourth one, I feel like I'm still workshopping into a form that reliably works for me, because "make a map of the constraints" is made of a lot of subskills, which vary depending on the situation. I anticipate it turning into something pretty important over the next year but it's too early to tell.
I think this is sort of right, but, if you think big labs have wrong worldmodels about what should be useful, it's not that helpful to produce work "they think is useful", but isn't actually that helpful. (i.e. if you have a project that is 90% likely to be used by a lab, but ~0% likely to reduce x-risk, this isn't obviously better than a project that is 30% likely to be used by a lab if you hustle/convince them, but would actually reduce x-risk if you succeeded.)
I do think it's correct to have some model of how your research will actually get used (which I expect to involve some hustling/persuasion if it involves new paradigms)
I agree that it requires upfront investment, but, a few comments on this post are reminding me "oh right the default thing is that everyone falls into The Meta Trap", wherein people invest in meta things that end up not paying off.
My solution to this is to set standards for myself that involve keeping up a "ship quickly" momentum, and generally aiming to spend ~10% of my time on meta.
Curious how long this typically takes you?