Enjoyed reading this article, thanks for taking the time to carefully write it up!
Something I wanted to flag: I'm not totally convinced that people are well calibrated at distinguishing AI writing from human writing, at least without helpful priors such as familiarity with the person's normal writing style. I haven't formally looked into this, but I'm curious whether you (or anyone else) have found any strong evidence that convinced you otherwise.
A few reasons to back up my skepticism:
I might be taking your claim a little out of context, since you were discussing it more in relation to using AI for idea generation, but I still feel this is worth raising. I agree that you might fool yourself into thinking you're producing good content by using AI, but I disagree that people will definitely "sniff out" that you used AI to help.
Great post! I used Anki religiously during the first few years of my undergrad but eventually fell out of the habit, mostly because making new cards became too time-consuming. (I wish I had come across advice like this back then!)
A few anecdotes from my own experience:
I think the biggest hurdle for me in getting back into Anki has been not knowing what information is actually worth the effort to memorize. Reading this made me realize that creating really small, focused cards might make that question feel a lot less “all or nothing.” I might give it another shot :)
For interpretability research, one effort currently underway is a set of tutorials that replicate results from recent papers in NNsight: https://nnsight.net/applied_tutorials/
What I find cool about this particular effort is that, because the implementations are done in NNsight, it's easier both to adapt the experiments to new models and to run them remotely.
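For anyone who hasn't tried NNsight, here's a minimal sketch of what that workflow looks like (the model name and layer index here are illustrative, not taken from the tutorials):

```python
from nnsight import LanguageModel

# Wrap any HuggingFace model; adapting an experiment to a new model
# mostly means changing this name and the module path below.
model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    # Save the residual-stream activation after block 5.
    # (GPT-2 blocks return a tuple; [0] is the hidden states.)
    hidden = model.transformer.h[5].output[0].save()

print(hidden.shape)
```

Passing `remote=True` to `model.trace(...)` runs the same intervention code on NDIF's shared hardware instead of locally (you'll need an API key for that).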
(Disclaimer - I work on the NDIF/NNsight project, though not on this initiative, so take my enthusiasm with a grain of salt)