LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
I think body-doubling is a job where being human is actually pretty important. (I agree you can make an AI that does a pretty good job of the basic task, and that may work for people who aren't too distracted and just need a bit of a reminder. And it's probably hard-but-doable to make a talks-out-loud chatbot who gives okay-ish cognitive advice. But, I think for many people, there being a real live human they can disappoint is load-bearing.)
In this case, are you hoping more for someone who has, like, some background in bio, or who is at least capable of quickly orienting on whatever you're talking about, or would someone more rubber-duck-shaped work?
I suppose that I don't know exactly what kind of agentic tasks LLMs are currently being trained on... But people have been talking about LLM agents for years, and I'd be shocked if the frontier labs weren't trying? Like, if that worked out of the box, we would know by now (?). Do you disagree?
I don't think LLMs have been particularly trained on what I'd consider the obvious things you'd do to really focus on agency-qua-agency, in the sense we care about here. (I do think they've been laying down scaffolding and doing the preliminary versions of the obvious-things-you'd-do-first.)
This maybe reminds me:
I currently feel confused about how to integrate "the kind of thinking that is good at momentum / action" and "the kind of thinking that is good at creative strategy". And it seems like there should be more of a way to unify them into a holistic way-of-being.
The four checksums above are there to make sure I'm not being myopic in some broader sense, but they apply more at the timescale of weeks than of hours or days.
You might just say "well, idk, each week or day, just figure out if it's more like a momentum week or more like a creative strategy week". I feel dissatisfied with this for some reason.
At least part of it is "I think, on average, people/me could stand to be in creative/broader strategy mode more often, even when in a Momentum mode period."
Another part is "there are strategy skills I want to be practicing that are hard to practice if I don't do them basically every day. They aren't as relevant in a momentum-period, but they're not zero-relevant."
Hrm. I think maybe what's most dissatisfying right now is that I just haven't compressed all the finicky details of it, and it feels overwhelming to think about the entire "how to think" project, which is usually an indicator that I'm missing the right abstraction.
(see also the search term "forward chaining vs back-chaining".)
This seems like reasonable life advice for people generally trying to accumulate resources and do something cool. I'm less sure about people who actually have specific goals they want to accomplish. I think in the domain of AI safety, forward chaining is insufficient (it seems like the kind of thing that gets you OpenAI and Anthropic).
The principles I sort of try to live by are that, every two weeks, I should have done:
Which I think aims to accomplish similar goals to the OP's, without losing the plot on my more specific goals.
Nod. But, I think you are also wrong about the "you can hire experts" causal model, and "we tried this and it's harder than you think" is entangled with why, and it didn't seem that useful to argue the point further if you weren't making more of an explicit effort to figure out where your model was wrong.
Normally, people can try to hire experts, but it often doesn't work very well. (I can't find the relevant Paul Graham essay, but if you don't have the good taste to know what expertise looks like, you are going to end up hiring people who are good at persuading you they are experts, rather than actual experts.)
It can work in very well-understood domains where it's obvious what success looks like. It tends not to work in domains where there is no consensus on what an expert would even look like (and, since no one has solved the problem, expertise basically "doesn't exist").
(Note you didn't actually argue that hiring experts works, you just asserted it.)
I agree it'd be nice to have a clearly written history of what has been tried. An awful lot of things have been tried though, and different people coming in would probably want different histories tailored for different goals, and it's fairly hard to summarize. It could totally be done, but the people equipped to do a good job of it often have other important things to do and it's not obviously the right call.
If you want to contribute to the overall situation, I do think you should expect to need a pretty good understanding of the object-level problem, as well as of what meta-level solutions have been tried. A lot of the reason meta-level solutions have failed is that people didn't understand the object-level problem well enough and scaled the wrong thing.
(try searching "postmortem" and maybe skim some of the things that come up, especially higher karma ones?)
Have you read Generalizing From One Example and the Typical Mind Fallacy stuff? (That won't directly answer all your questions, but the short answer is that people just vary a lot in what their internal cognitive processes are like.)
I feel like you haven't actually updated on "we've tried this a bunch, lots of people have this idea, and are currently doing it a bunch" in addition to "and it didn't work nearly as well as you might think." Like, you might be right that we didn't try it right or something, but, your initial post was built on a mistaken premise.
FYI, I reviewed and approved this user's first post because it seemed much more specific/actually-making-claims than most of our other possibly-crank posts. I am interested in whether the downvotes are more like "this is crank" or "this is AI capabilities" or "this seems likely enough to be crank that not having more information in the post body is annoying" or what.
My experience with this has varied over time – sometimes body-doubling is just free extra focused work time, and sometimes it mostly concentrates my focused work time into the beginning of the day and then I crash. But usually even that is preferable, because concentration-of-focus lets me do tasks with more things-I-need-to-keep-track-of and more serial steps.