Research Lead at CORAL. Director of AI research at ALTER. PhD student in Shay Moran's group at the Technion (my PhD research and my CORAL/ALTER research are one and the same). See also Google Scholar and LinkedIn.
E-mail: {first name}@alter.org.il
I found LLMs to be very useful for literature research. They can find relevant prior work that you can't find with a search engine because you don't know the right keywords. This can be a significant force multiplier.
They also seem potentially useful for quickly producing code for numerical tests of conjectures, but I have only started experimenting with that.
Other use cases where I found LLMs beneficial:
That said, I do agree that early adopters seem like they're overeager and maybe even harming themselves in some way.
I did link the relevant section of my agenda post:
This work is my first rigorous foray into compositional learning theory.
A brief and simplified summary:
I'm open to chatting on Discord.
I never did quite that thing successfully. There was one time when I dropped increasingly unsubtle hints on a guy, who remained stubbornly oblivious for a long time until he finally got the message and reciprocated.
Btw, what are some ways we can incorporate heuristics into our algorithm while staying on level 1-2?
Hold my beer ;)
I see modeling vs. implementation as a spectrum more than a dichotomy. Something like:
More precisely, rather than a 1-dimensional spectrum, there are at least two parameters involved:
[EDIT: And a 3rd parameter is, how justified/testable the assumptions of your model are. Ideally, you want these assumptions to be grounded in science. Some will likely be philosophical assumptions which cannot be tested empirically, but at least they should fit into a coherent holistic philosophical view. At the very least, you want to make sure you're not assuming away the core parts of the problem.]
For the purposes of safety, you want to be as close to the implementation end of the spectrum as you can get. However, the model side of the spectrum is still useful as:
Sorry, I was wrong. By Löb's theorem, all versions of the predicate are provably equivalent, so they will trust each other.
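(For concreteness, here is the Löb argument I have in mind, written out under the simplifying assumption that the trust predicate is a fixed point of the form ψ ↔ □ψ, with □ the standard provability predicate; the exact definition isn't spelled out in this thread, so treat this only as a sketch.)

```latex
% Sketch, assuming the trust predicate is a fixed point of the form
% psi <-> Box(psi), where Box is the standard provability predicate.
\begin{align*}
  &\vdash \psi \leftrightarrow \Box\psi
    && \text{fixed-point property} \\
  &\vdash \Box\psi \rightarrow \psi
    && \text{right-to-left direction} \\
  &\vdash \psi
    && \text{L\"ob's theorem} \\
  &\vdash \psi \leftrightarrow \psi'
    && \text{any two such fixed points $\psi, \psi'$ are theorems,} \\
  & && \text{hence provably equivalent}
\end{align*}
```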
IIUC, fixed point equations like that typically have infinitely many solutions. So, you defined not one predicate, but an infinite family of them. Therefore, your agent will trust a copy of itself, but usually won't trust variants of itself with other choices of fixed point. In this sense, this proposal is similar to proposals based on quining (as quining has many fixed points as well).
I don't know what's so bad about the "human male" bio. I might have swiped right on that one. (Especially if the profile had additional info that makes him sound interesting.)
Fixed!