
Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Comments

Bodydouble / Thinking Assistant matchmaking
Raemon · 2d

My experience with this has varied over time – sometimes body doubling is just free extra focused work time, and sometimes it mostly seems to concentrate my focused work time into the beginning of the day, after which I crash. But usually that is still preferable, because concentration-of-focus lets me do tasks with more things-I-need-to-keep-track-of and more serial steps.

Bodydouble / Thinking Assistant matchmaking
Raemon · 2d

I think body-doubling is a job where being human is actually pretty important. (I agree you can make an AI that does a pretty good job of the basic task, and that may work for people who aren't too distracted and just need a bit of a reminder. And it's probably hard-but-doable to make a talks-out-loud chatbot that gives okay-ish cognitive advice. But I think for many people, there being a real live human they can disappoint is load-bearing.)

Bodydouble / Thinking Assistant matchmaking
Raemon · 3d

In this case, are you hoping more for someone who has, like, some background in bio (or who is at least capable of quickly orienting on whatever you're talking about), or would someone more rubber-duck-shaped work?

Do confident short timelines make sense?
Raemon · 3d

"I suppose that I don’t know exactly what kind of agentic tasks LLMs are currently being trained on…. But people have been talking about LLM agents for years, and I’d be shocked if the frontier labs weren’t trying? Like, if that worked out of the box, we would know by now (?). Do you disagree?"

I don't think LLMs have been particularly trained on what I'd consider the obvious things to do if you really wanted to focus on agency-qua-agency, in the sense we care about here. (I do think they've been laying down scaffolding and doing the preliminary versions of the obvious-things-you'd-do-first.)

Mo Putera's Shortform
Raemon · 3d

This maybe reminds me:

I currently feel confused about how to integrate "the kind of thinking that is good at momentum / action" with "the kind of thinking that is good at creative strategy". And it seems like there should be more of a way to unify them into a holistic way-of-being.

The four checksums above are there to make sure I'm not being myopic in some broader sense, but they apply more at the timescale of weeks than of hours or days.

You might just say "well, idk, each week or day, just figure out if it's more like a momentum week or more like a creative strategy week". I feel dissatisfied with this for some reason.

At least part of it is "I think, on average, people/me could stand to be in creative/broader-strategy mode more often, even when in a Momentum-mode period."

Another part is "there are strategy skills I want to be practicing that are hard to practice if I don't do them basically every day. They aren't as relevant in a momentum-period, but they're not zero relevant."

Hrm. I think maybe what's most dissatisfying right now is that I just haven't compressed all the finicky details of it, and it feels overwhelming to think about the entire "how to think" project, which is usually an indicator that I'm missing the right abstraction.

Mo Putera's Shortform
Raemon · 4d

(See also the search term "forward chaining vs back-chaining".)

This seems like reasonable life advice for people generally trying to accumulate resources and do something cool. I'm less sure about people who actually have specific goals they want to accomplish. I think in the domain of AI safety, forward chaining is insufficient (it seems like the kind of thing that gets you OpenAI and Anthropic).

The principles I sort of try to live by are that, every two weeks, I should have:

  • taken some actions that forward chain towards more compounding resources
  • taken some actions explicitly backchaining from long-term goals
  • shipped something concrete to users
  • done something wholesome

Which I think aims to accomplish similar goals to the OP, without losing the plot on my more specific goals.

Why is LW not about winning?
Raemon · 4d

Nod. But I think you are also wrong about the "you can hire experts" causal model, and "we tried this and it's harder than you think" is entangled with why, and it didn't seem that useful to argue the point further if you weren't making more of an explicit effort to figure out where your model was wrong.

Normally, people can try to hire experts, but it often doesn't work very well. (I can't find the relevant Paul Graham essay, but if you don't have the good taste to know what expertise looks like, you are going to end up hiring people who are good at persuading you they are experts, rather than actual experts.)

It can work in very well understood domains where it's obvious what success looks like.

It tends not to work in domains where there is no consensus on what an expert would look like (and where, since no one has solved the problem, expertise basically "doesn't exist").

(Note you didn't actually argue that hiring experts works; you just asserted it.)

I agree it'd be nice to have a clearly written history of what has been tried. An awful lot of things have been tried though, and different people coming in would probably want different histories tailored for different goals, and it's fairly hard to summarize. It could totally be done, but the people equipped to do a good job of it often have other important things to do and it's not obviously the right call. 

If you want to contribute to the overall situation, I do think you should expect to need a pretty good understanding of the object-level problem, as well as of what meta-level solutions have been tried. A lot of the reason meta-level solutions have failed is that people didn't understand the object-level problem well enough and scaled the wrong thing.

(Try searching "postmortem" and maybe skim some of the things that come up, especially the higher-karma ones?)

KvmanThinking's Shortform
Raemon · 4d

Have you read Generalizing From One Example and the Typical Mind Fallacy stuff? (That won't directly answer all your questions, but the short answer is that people just vary a lot in what their internal cognitive processes are like.)

Why is LW not about winning?
Raemon · 4d

I feel like you haven't actually updated on "we've tried this a bunch, lots of people have this idea, and are currently doing it a bunch", in addition to "and it didn't work nearly as well as you might think." Like, you might be right that we didn't try it right or something, but your initial post was built on a mistaken premise.

O(1) reasoning in latent space: 1ms inference, 77% accuracy, no attention or tokens
Raemon · 5d

FYI, I reviewed and approved this user's first post because it seemed much more specific/actually-making-claims than most of our other possibly-crank posts. I am interested in whether downvotes are more like "this is crank", "this is AI capabilities", "this seems likely enough to be crank that not having more information in the post body is annoying", or what.

23 · Raemon's Shortform · 8y · 609

Sequences

Step by Step Metacognition
Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual

Posts (sorted by new)

48 · Bodydouble / Thinking Assistant matchmaking · 3d · 10
136 · "Buckle up bucko, this ain't over till it's over." · 14d · 24
119 · "What's my goal?" · 17d · 9
30 · Hiring* an AI** Artist for LessWrong/Lightcone · 19d · 6
32 · Social status games might have "compute weight class" in the future · 2mo · 7
50 · What are important UI-shaped problems that Lightcone could tackle? · 3mo · 22
133 · Anthropic, and taking "technical philosophy" more seriously · 4mo · 29
59 · "Think it Faster" worksheet · 5mo · 8
86 · Voting Results for the 2023 Review · 5mo · 3
99 · C'mon guys, Deliberate Practice is Real · 5mo · 25

Wikitag Contributions

Guide to the LessWrong Editor · 3mo · (+317)
Sandbagging (AI) · 4mo · (+88)
AI "Agent" Scaffolds · 4mo · (+340)
AI Products/Tools · 4mo · (+121)
Language Models (LLMs) · 4mo