Lucie Philippon


Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field.

I was trying to perform intuition flooding: reading lots of accounts to build intuitions about which techniques work for entering the field.

I only managed to find three which somewhat fit my target:

Neel Nanda's post was the central example of what I was looking for, and I was surprised not to find more. Does anyone know where I can find more posts like this? does not seem to exist anymore.

I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?

If reactions were tied to posting a comment, such that reactions could not decrease the number of comments, would this make you more likely to support this feature?

Incidentally, thinking about which reaction to put on this comment, instead of just upvoting or downvoting, made me realize I did not completely understand what you meant, and motivated me to write a comment instead.

I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example, buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.

We're in agreement. I'm not sure what my expectation is for the length of this phase or the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time when productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to fully exploit the productivity gains.

The question I was exploring was not how to find the tools that do make their users more productive, as I expect good curation to appear in time with the tools, but whether there were resources which would be necessary to use those tools, but difficult to acquire in a short time when the tools are released.

The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good quality posts. Thank you for the feedback!

At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.

Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect we'll be leaving the "world as normal but faster" phase where tools are useful, and what happens next depends on our alignment plan, I guess.

I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether they are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.

Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research", according to their website.

They are publishing regularly on the Alignment Forum and LessWrong.

I also searched their website, and it does not look like Bonsai is publicly accessible. It must be some internal tool they developed?
