I don't have the intuition that reactions will replace some comments which would have been written without this feature. What makes you think this will happen?
If reactions were tied to posting a comment, so that reactions could not decrease the number of comments, would that make you more likely to support this feature?
Incidentally, thinking about which reaction to give this comment, instead of just upvoting or downvoting, made me realize I did not completely understand what you meant, and motivated me to write a comment instead.
I think in this situation, you could use the momentum to implement one hack which increases the probability of implementing all of them in the future. For example: buying a whiteboard, writing down all the life-hack ideas you got from the minicamp, and putting it in a very visible place.
We're in agreement. I'm not sure what my expectation is for the length of this phase or for the final productivity boost, but I was exploring what we would need to do now to prepare for the kind of world where there is a short period of time in which productivity skyrockets. If we end up in such a world, I would prefer people working on AI alignment to be ready to exploit the productivity gains fully.
The question I was exploring was not how to find the tools that genuinely make their users more productive, since I expect good curation to appear in time alongside the tools. Rather, it was whether there are resources that would be necessary to use those tools but difficult to acquire on short notice once the tools are released.
The post was not optimized for SEO, but it definitely has a ChatGPT style I dislike. It's one of my first posts, so I'm still exploring how to write good-quality posts. Thank you for the feedback!
At the individual level, I expect agentic AI to enable even more powerful tools, like ACT acting as a semi-autonomous digital assistant, or AutoGPT acting as a lower-level executor, taking in your goals and doing most of the work.
Once we have powerful agentic AGI, of the kind that can run continuously and disempower humanity, I expect we'll be leaving the "world as normal but faster" phase where tools are useful, and what happens next depends on our alignment plan, I guess.
I think I focused too much on the "competitive" part, but my main point was that only certain factors would maintain a difference between individuals' productivity, whether those factors are zero-sum or not. If future AI assistants require large personal datasets to perform well, only the people with preexisting datasets will perform well for a while, even though anyone could start their own dataset at that point.
Conjecture is "a team of researchers dedicated to applied, scalable AI alignment research," according to their website: https://www.conjecture.dev/
They publish regularly on the Alignment Forum and LessWrong: https://www.lesswrong.com/tag/conjecture-org
I also searched their website, and it does not look like Bonsai is publicly accessible. It must be some internal tool they developed?
This post points at an interesting fact: some people, communities, and organizations already called themselves "rationalists" before the current rationalist movement. It brings forth the idea that the rationalist movement may be anchored in a longer history than one might first assume from reading the LessWrong/Overcoming Bias/Eliezer history.
However, this post reads more like a Wikipedia article, or a historical overview. It does not read like it has a goal. Is this post arguing that the current rationalist community is descended from those earlier groups? Is it poking at the consensus history of how the rationalist community ended up choosing "rationalist" as an identifier? I don't know whether either of those things is argued in this post.
This feels like an interesting bag of facts, full of promising threads of inquiry which could develop into new historical insights and make great posts. I am looking forward to reading those follow-ups, but for now this feels incomplete and lacking purpose.
TIL that the path a new user of LW is expected to follow, according to https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview, is to become comfortable with commenting regularly in 3-6 months, and comfortable with posting regularly in 6-9 months. I discovered the existence of shortforms. I (re)discovered the expectation that your posts should be treated like a personal blog, Medium-style?
As I'm typing this I'm still unsure whether I'm destroying the website with my bad shortform, even though the placeholder explicitly said... (\*right click inspect\*)
> Write your thoughts here! What have you been thinking about?
> Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.
I'm definitely rambling! Look! I'm following the instructions!
I feel like a "guided tour of LW" is missing when joining the website? Some sort of premade path to get up to speed on "what am I supposed and allowed to do as a user of LW, besides reading posts?" It could take some inspiration from Duolingo, Brilliant, or any other app that tries to get a user past the initial step of interacting with the content.
blog.jaibot.com does not seem to exist anymore.