Helps me decide which research to focus on.
Both. Not sure; it's something like LessWrong/EA speak mixed with VC speak.
What I liked about applying for VC funding was the specific questions.
"How is this going to make money?"
"What proof do you have this is going to make money"
and it being clear that the bullshit they wanted was numbers, testimonials from paying customers, unambiguous ways the product was actually better, etc. And then the standard BS about progress and security, and avoiding weird wibbly-wobbly talk ('woke', 'safety', etc.).
Alignment funders just as obviously have language they're looking for, or language that makes them more or less willing to put effort into understanding a proposal. Actually, they have it even more than the VCs do. But they act as if they don't.
I would not call this a "Guide".
It's more a list of recommendations and some thoughts on them.
What observations would change your mind?
You can split your brain and treat LLMs differently, in a different language. Or rather, I can, and I think most people could as well.
OK, I want to make that work at scale. If multiple people have done it and there's value in it, then there's a formula of some kind.
We can write it down and make it much easier to understand unambiguously (read: less confusion about what to do or what the writer meant, and less time wasted figuring it out) than any of the current agent-foundations-type stuff.
I'm extremely skeptical that needing to hear a dozen stories dancing around some vague idea of a point, and then ten analogies (exaggerating to get emotions across), is the best we can do.
Regardless of whether it works, I think it's disrespectful: manipulative at worst, and a waste of the person's time at best.
PauseAI, ControlAI, etc., are doing this.