(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
- Saharon Shelah
As a true-born Dutchman I endorse Crocker's rules.
For most of my writing see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
Beautifully argued, Dmitry. Couldn't agree more.
I would also note that I consider the second problem of interpretability basically the central problem of complex systems theory.
I consider the first problem a special case of the central problem of alignment. It's very closely related to the 'no free lunch' problem.
Thanks.
Well, 2-3 shitposters and one gwern.
Who would be so foolish to short gwern? Gwern the farsighted, gwern the prophet, gwern for whom entropy is nought, gwern augurious augustus
Thanks for the sleuthing.
The thing is, the last time I heard OpenAI rumors it was Strawberry.
The unfortunate fact of life is that, all too often, what OpenAI ships has surpassed all but the wildest speculations.
Yes, this should be an option in the form.
Does clicking on HERE work for you?
Fair enough.
Thanks for reminding me about V-information. I am not sure how much I like this particular definition yet - but this direction of inquiry seems very important imho.
Those people will probably not see this, so they won't reply.
What I can tell you is that in the last three months I went through a phase transition in my AI use and I regret not doing this ~1 year earlier.
It's not that I didn't use AI daily before for mundane tasks or writing emails, and it's not that I didn't try a couple of times to get it to solve my thesis problem (it doesn't get it). It's that I failed to reframe my thinking from "can AI do X?" to "how can I reengineer and refactor my own workflow, even the questions I am working on, so as to maximally leverage AI?"
Me.