Research Lead at CORAL. Director of AI Research at ALTER. PhD student in Shay Moran's group at the Technion (my PhD research and my CORAL/ALTER research are one and the same). See also Google Scholar and LinkedIn.
E-mail: {first name}@alter.org.il
Maybe we want a multi-level categorization scheme instead? Something like:
Level 0: Author completely abstains from LLM use in all contexts (not just this post)
Level 1: Author uses LLMs, but this particular post was made with no LLM use whatsoever
Level 2: LLM was used (e.g. to look up information), but no text/images in the post came out of an LLM
Level 3: LLM was used for light editing and/or image generation
Level 4: LLM was used for writing substantial parts
Level 5: Mostly LLM-generated with high-level human guidance/control/oversight
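For concreteness, here's a rough sketch of how such a level could be declared in, say, a post footer or metadata field (the enum and member names are my own illustrative choices, not part of the proposal):

```python
from enum import IntEnum

class LLMUseLevel(IntEnum):
    """Proposed disclosure levels for LLM involvement in a post."""
    ABSTAINS_ENTIRELY = 0        # author abstains from LLM use in all contexts
    NOT_USED_FOR_THIS_POST = 1   # author uses LLMs, but not for this post
    NO_LLM_TEXT_OR_IMAGES = 2    # LLM used (e.g. lookups), no LLM output in post
    LIGHT_EDITING_OR_IMAGES = 3  # light editing and/or image generation
    SUBSTANTIAL_WRITING = 4      # substantial parts written by LLM
    MOSTLY_LLM_GENERATED = 5     # mostly LLM-generated, human oversight

# e.g. a post could then declare:
disclosure = LLMUseLevel.LIGHT_EDITING_OR_IMAGES
```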
10 years ago I argued that approval-based AI might lead to the creation of a memetic supervirus. Relevant quote:
Optimizing human approval is prone to marketing worlds. It seems less dangerous than physicalist AI in the sense that it doesn't create incentives to take over the world, but it might produce some kind of a hyper-efficient memetic virus.
I don't think that what we see here is literally that, but the scenario does seem a tad less far-fetched now.
Fixed!
I found LLMs to be very useful for literature research. They can find relevant prior work that you can't find with a search engine because you don't know the right keywords. This can be a significant force multiplier.
They also seem potentially useful for quickly producing code for numerical tests of conjectures, but I've only just started experimenting with that.
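To give a flavor of the kind of throwaway script I mean (the conjectures I actually care about aren't specified here, so this uses the AM-GM inequality as a stand-in):

```python
# Quick-and-dirty numerical check of a conjecture on random inputs.
# Stand-in conjecture: AM-GM, i.e. the arithmetic mean of positive reals
# is at least their geometric mean.
import math
import random

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def conjecture_holds(xs, tol=1e-9):
    return arithmetic_mean(xs) >= geometric_mean(xs) - tol

random.seed(0)
counterexamples = []
for _ in range(100_000):
    xs = [random.uniform(1e-3, 100.0) for _ in range(random.randint(2, 10))]
    if not conjecture_holds(xs):
        counterexamples.append(xs)

print(f"counterexamples found: {len(counterexamples)}")
```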
Other use cases where I found LLMs beneficial:
That said, I do agree that early adopters seem overeager and may even be harming themselves in some way.
I did link the relevant section of my agenda post:
This work is my first rigorous foray into compositional learning theory.
A brief and simplified summary:
I'm open to chatting on Discord.
I never did quite that thing successfully. I did once drop increasingly unsubtle hints on a guy who remained stubbornly oblivious for a long time, until he finally got the message and reciprocated.
Btw, what are some ways we can incorporate heuristics into our algorithm while staying at levels 1-2?
Hold my beer ;)
Thanks for the heads up. Can you share which AI models were involved?