I write software for a living and sometimes write on Substack: https://taylorgordonlunt.substack.com/
To be clear, I am not one of the ones who walk away from Omelas. I think those people are naive and suicidal.
I am one of the ones who builds a nonliving effigy in my basement, finds a way to prove it works just as well as a real suffering small child, then releases my results publicly, at first anonymously.
I agree. Welcome to Omelas.
Something about the wobbly chair story appeals to me far more than the dopamine addiction story. In the wobbly chair story, you spent 1 minute improving your life and didn't have to think about it again. In the other story, it was a constant battle that required diligence for a while. You can only do so many of those kinds of things at once.
It's still good advice. When things aren't working, thinking them through and trying things out is a good move. I just wonder if people have any advice that's more like the wobbly chair story. Quick, cheap, semi-permanent wins that don't require willpower.
I'll join! I'm sick right now so my first posts will be slapped together, but maybe that'll put me in the right mindset.
In my view, you don't get novel insights without deep thinking, except extremely rarely by chance, but you're right to make sure the topic doesn't shift without anyone noticing.
I think it might be worthwhile to distinguish cases where LLMs came up with a novel insight on their own vs. were involved, but not solely responsible.
You wouldn't credit Google for the breakthrough of a researcher who used Google when making a discovery, even if the discovery wouldn't have happened without the Google searches. The discovery also might not have happened without the eggs and toast the researcher had for breakfast.
"LLMs supply ample shallow thinking and memory while the humans supply the deep thinking" is a different and currently much more believable claim than "LLMs can do deep thinking to come up with novel insights on their own."
I can't remember the quote, but I believe this possibility is mentioned offhand in IABIED, with the authors suggesting that a superhuman but still weak AI might do what we can't and craft rather than grow another AI, so that it can ensure the better successor AI is aligned to its goals.
Let's say I weigh 250 pounds, but I show up to the boxing weigh-in with negative 100 pounds of helium balloons strapped to my back. I end up in the same weight class as 150-pound men, even though I can punch like a 250-pound man. Is that fair?
If divisions are essentially arbitrary, when is it better to go through the effort to change them, and when is it better to just say, "no, sir, you can't weigh in with balloons on"?
The title, perhaps? I guess I wouldn't blame anyone. Thank you, by the way.