I'm thinking often about whether LLM systems can come up with societal/scientific breakthroughs.
My intuition is that they can, and that they don't need to be bigger or have more training data or have different architecture in order to do so.
Starting to keep a diary along these lines here: https://docs.google.com/document/d/1b99i49K5xHf5QY9ApnOgFFuvPEG8w7q_821_oEkKRGQ/edit?usp=sharing
agreed, context is maybe the bottleneck.
i wonder if genius ai—the kind that can cure cancers, reverse global warming, and build super-intelligence—may come not just from bigger models or new architectures, but from a wrapper: a repeatable loop of prompts that improves itself. the idea: give an llm a hard query (e.g. make a plan to reduce global emissions on a $10k budget), have it invent a method for answering it, follow that method, see where it fails, fix the method, and repeat. it would be a form of genuine scientific experimentation—the llm runs a procedure it doesn't know the outcome of, observes the results, and uses that evidence to refine its own thinking process.
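a minimal sketch of that loop in python, assuming a hypothetical `call_llm` function standing in for any chat-completion API (stubbed here so the sketch runs offline; the prompts are illustrative, not a tested protocol):

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"[model response to: {prompt[:40]}...]"

def self_improving_loop(query: str, iterations: int = 3) -> dict:
    """Invent a method, apply it, critique the result, revise the method."""
    # Step 1: have the model invent its own procedure for the hard query.
    method = call_llm(f"Invent a step-by-step method for answering: {query}")
    answer = ""
    for _ in range(iterations):
        # Step 2: follow the current method.
        answer = call_llm(f"Using this method:\n{method}\nAnswer: {query}")
        # Step 3: observe where the method fails.
        critique = call_llm(f"Where does this answer fall short?\n{answer}")
        # Step 4: use that evidence to refine the method itself.
        method = call_llm(
            f"Revise the method:\n{method}\nGiven this critique:\n{critique}"
        )
    return {"method": method, "answer": answer}

result = self_improving_loop(
    "make a plan to reduce global emissions on a $10k budget", iterations=2
)
```

the interesting part is that the critique targets the method, not just the answer—so the loop is experimenting on its own thinking process rather than merely polishing one output.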
you're keyed into what i think is the most important question in the world
another intuition pump for why goodness (or empathy) might compete in a "locust" world:
https://www.lesswrong.com/posts/3SDjtu6aAsHt4iZsR/davey-morse-s-shortform?commentId=wfmifTLEanNhhih4x
thanks for sending science bench in particular.