
Vanessa Kosoy

Research Lead at CORAL. Director of AI research at ALTER. PhD student in Shay Moran's group at the Technion (my PhD research and my CORAL/ALTER research are one and the same). See also Google Scholar and LinkedIn.

E-mail: {first name}@alter.org.il

Comments
Derivative
Vanessa Kosoy · 7d

Fixed!

johnswentworth's Shortform
Vanessa Kosoy · 13d

I found LLMs to be very useful for literature research. They can find relevant prior work that you can't find with a search engine because you don't know the right keywords. This can be a significant force multiplier.

They also seem potentially useful for quickly producing code for numerical tests of conjectures, but I have only started experimenting with that (a toy sketch of what I mean is below).
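For concreteness, here is a toy sketch of the kind of throwaway test I have in mind; the specific inequality (AM-GM, which is of course a theorem) is just a stand-in for whatever conjecture is being probed:

```python
# Toy sketch: numerically probing a conjectured inequality on random instances.
# AM-GM (actually a theorem) stands in for the conjecture being tested.
import math
import random

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

def search_for_counterexample(trials=10_000, n=5, tol=1e-9):
    for _ in range(trials):
        xs = [random.uniform(0.01, 10.0) for _ in range(n)]
        if arithmetic_mean(xs) < geometric_mean(xs) - tol:
            return xs  # counterexample found
    return None  # no counterexample among the sampled instances

print(search_for_counterexample())  # expected output: None
```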

Other use cases where I found LLMs beneficial:

  • Taking a photo of a menu in French (or providing a link to it) and asking it which dishes are vegan.
  • Recommending movies (I am a little wary of some kind of meme poisoning, but I don't watch movies very often, so it seems ok).

That said, I do agree that early adopters seem overeager and may even be harming themselves in some way.

New Paper: Ambiguous Online Learning
Vanessa Kosoy · 16d

I did link the relevant section of my agenda post:

This work is my first rigorous foray into compositional learning theory.

A brief and simplified summary:

  • In order to have powerful learning algorithms with safety guarantees, we first need learning algorithms with powerful generalization guarantees that we know how to rigorously formulate (otherwise how do you know the algorithm will correctly infer the intended goal/behavior from the training data?).
  • Additionally, in order to formally specify "aligned to human values", we need to formally specify "human values", and it seems likely that the specification of "X's values" should be something akin to "the utility function w.r.t. which X has [specific type of powerful performance guarantees]". These powerful performance guarantees are probably a form/extension of powerful generalization guarantees.
  • Both reasons require us to understand the kind of natural powerful generalization guarantees that efficient learning algorithms can satisfy. Moreover, such understanding would likely be applicable to deep learning as well, as it seems likely deep learning algorithms satisfy such guarantees, but we currently don't know how to formulate them.
  • I conjecture that a key missing ingredient in deriving efficient learning algorithms with powerful guarantees (more powerful than anything we already understand in computational learning theory), is understanding the role of compositionality in learning. This is because compositionality is a ubiquitous feature of our thinking about the world and, intuitively, particular forms of compositionality are strong candidates for properties that are both very general and strong enough to enable efficient learning. This line of thinking led me to some success in the context of control theory, which is a necessary ingredient of the kind of guarantees we will ultimately need.
  • I identified sequence prediction / online learning in the deterministic realizable case as a relatively easy (but already highly non-trivial) starting point for investigating compositional learning (a minimal sketch of this setting appears after this list).
  • For the reasons stated in the OP, this led me to ambiguous online learning.
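For readers unfamiliar with the deterministic realizable setting, here is a minimal sketch (my own toy illustration, not from the paper): the classic halving algorithm over a finite hypothesis class H, which makes at most log2|H| mistakes whenever some hypothesis in H predicts the sequence perfectly.

```python
# Minimal sketch of deterministic realizable online prediction (toy illustration):
# the halving algorithm over a finite hypothesis class H. Under realizability
# (some h in H labels the stream perfectly), it makes at most log2(|H|) mistakes.

def halving(hypotheses, stream):
    """hypotheses: list of functions x -> label; stream: iterable of (x, true_label)."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = [h(x) for h in version_space]
        # Predict by majority vote over hypotheses still consistent with the past.
        prediction = max(set(votes), key=votes.count)
        if prediction != y:
            mistakes += 1
        # Discard every hypothesis that got this round wrong.
        version_space = [h for h, v in zip(version_space, votes) if v == y]
    return mistakes

# Example: 16 threshold hypotheses on integers; the true threshold is 7.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(16)]
stream = [(x, int(x >= 7)) for x in [3, 12, 7, 6, 9, 0, 15, 8]]
print(halving(hypotheses, stream))  # at most log2(16) = 4 mistakes
```

The paper's ambiguous-learning setting generalizes this picture; the sketch is only meant to pin down the baseline notion of online learning being extended.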

I'm open to chatting on Discord.

johnswentworth's Shortform
Vanessa Kosoy · 1mo

I never quite managed to do that successfully. I did have one time when I dropped progressively less subtle hints on a guy, who remained stubbornly oblivious for a long time until he finally got the message and reciprocated.

Modeling versus Implementation
Vanessa Kosoy · 2mo

Btw, what are some ways we can incorporate heuristics into our algorithm while staying at levels 1-2?

  1. We don't know how to prove the required desiderata about the heuristic, but we can still reasonably conjecture them and support the conjectures with empirical tests.
  2. We can't prove or even conjecture anything useful-in-itself about the heuristic, but the way the heuristic is incorporated into the overall algorithm makes it safe. For example, maybe the heuristic produces suggestions together with formal certificates of their validity (see the sketch after this list). More generally, we can imagine an oracle-machine (where the heuristic is slotted into the oracle) about which we cannot necessarily prove something like a regret bound w.r.t. the optimal policy, but we can prove (or at least conjecture) a regret bound w.r.t. some fixed simple reference policy. That is, the safety guarantee shows that no matter what the oracle does, the overall system is not worse than "doing nothing". Maybe, modulo weak provable assumptions about the oracle, e.g. that it satisfies a particular computational complexity bound.
  3. [Epistemic status: very fresh idea, quite speculative but intriguing.] We can't find even a guarantee like the one in point 2 above for a worst-case computationally bounded oracle. However, we can prove (or at least conjecture) some kind of "average-case" guarantee. For example, maybe we have high probability of safety for a random oracle. However, assuming a uniformly random oracle is quite weak. More optimistically, maybe we can prove safety even for any oracle that is pseudorandom against some complexity class C1 (where we want C1 to be as small as possible). Even better, maybe we can prove safety for any oracle in some complexity class C2 (where we want C2 to be as large as possible) that has access to another oracle which is pseudorandom against C1. If our heuristic is not actually in this category (in particular, C2 is smaller than P and our heuristic doesn't lie in C2), this doesn't formally guarantee anything, but it does provide some evidence for the "robustness" of our high-level scheme.
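To make point 2 slightly more concrete, here is a toy sketch (purely illustrative, not a concrete proposal; all names are placeholders): the untrusted heuristic proposes actions with certificates, and the wrapper falls back to a fixed simple reference policy whenever verification fails, so the oracle can never make the system worse than that baseline.

```python
# Toy sketch of point 2: an untrusted heuristic "oracle" proposes actions together
# with certificates; the wrapper accepts a proposal only if an independent check
# passes, and otherwise falls back to a fixed simple reference policy. Whatever the
# oracle does, the system is never worse than the reference policy plus certified
# improvements. All names here are illustrative placeholders.
from typing import Callable, Optional, Tuple

State = int
Action = int
Certificate = object

def certified_policy(
    oracle: Callable[[State], Optional[Tuple[Action, Certificate]]],
    verify: Callable[[State, Action, Certificate], bool],
    reference_policy: Callable[[State], Action],
) -> Callable[[State], Action]:
    def act(state: State) -> Action:
        proposal = oracle(state)
        if proposal is not None:
            action, certificate = proposal
            if verify(state, action, certificate):
                return action  # accept only certified suggestions
        return reference_policy(state)  # otherwise behave like "doing nothing"
    return act

# Illustrative usage with trivial placeholders:
policy = certified_policy(
    oracle=lambda s: (s + 1, "certificate"),  # arbitrary untrusted suggestion
    verify=lambda s, a, c: a > s,             # the check the wrapper trusts
    reference_policy=lambda s: 0,             # safe fallback
)
print(policy(3))  # 4 (certified); would be 0 if verification failed
```

The safety argument lives entirely in the wrapper and the verifier, not in the oracle, which is the sense in which the overall scheme can be safe even when nothing useful is provable about the heuristic itself.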
Modeling versus Implementation
Vanessa Kosoy · 2mo

Hold my beer ;)

Modeling versus Implementation
Vanessa Kosoy · 2mo

I see modeling vs. implementation as a spectrum more than a dichotomy. Something like:

  1. On the "implementation" extreme you prove theorems about the exact algorithm you implement in your AI, s.t. you can even use formal verification to prove these theorems about the actual code you wrote.
  2. Marginally closer to "modeling", you prove (or at least conjecture) theorems about some algorithm which is vaguely feasible in theory. Some civilization might have used that exact algorithm to build AI, but in our world it's impractical, e.g. because it's uncompetitive with other AI designs. However, your actual code is conceptually very close to the idealized algorithm, and you have good arguments why the differences don't invalidate the safety properties of the idealized model.
  3. Further along the spectrum, your actual algorithm is about as similar to the idealized algorithm as DQN is similar to vanilla Q-learning. Which is to say, it was "inspired" by the idealized algorithm but there's a lot of heavy lifting done by heuristics. Nevertheless, there is some reason to hope the heuristic aspects don't change the safety properties.
  4. On the "modeling" extreme, your idealized model is something like AIXI: completely infeasible and bears little direct resemblance to the actual algorithm in your AI. However, there is still some reason to believe real AIs will have similar properties to the idealized model.

More precisely, rather than a 1-dimensional spectrum, there are at least two parameters involved: 

  • How close the object you make formal statements about is to the actual code of your AI, where "closeness" is measured by the strength of the arguments you have for the analogy, on a scale from "they are literally the same", through solid theoretical and/or empirical evidence, to pure hand-waving/intuition.
  • How much evidence you have for the formal statements, on a scale from "I proved it within some widely accepted mathematical foundation (e.g. PA)" to "I proved vaguely related things, tried very hard but failed to disprove the thing and/or accumulated some empirical evidence".

[EDIT: And a 3rd parameter is how justified/testable the assumptions of your model are. Ideally, you want these assumptions to be grounded in science. Some will likely be philosophical assumptions which cannot be tested empirically, but at least they should fit into a coherent holistic philosophical view. At the very least, you want to make sure you're not assuming away the core parts of the problem.]

For the purposes of safety, you want to be as close to the implementation end of the spectrum as you can get. However, the model side of the spectrum is still useful as: 

  • A backup plan which is better than nothing, more so if there is some combination of theoretical and empirical justification for the analogizing
  • A way to demonstrate threat models, as the OP suggests
  • An intermediate product that helps with checking that your theory is heading in the right direction, comparing different research agendas, and maybe even making empirical tests.
Working through a small tiling result
Vanessa Kosoy · 2mo

Sorry, I was wrong. By Löb's theorem, all versions of goodnew are provably equivalent, so they will trust each other.
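For reference, the standard statement of Löb's theorem (added here for context):

\[
\text{If } \mathrm{PA} \vdash \Box X \to X, \text{ then } \mathrm{PA} \vdash X.
\]

Presumably the relevant application is the usual fixed-point uniqueness argument: take X to be the equivalence of two candidate goodnew predicates; the fixed-point definitions give PA ⊢ □X → X, and Löb then yields PA ⊢ X, i.e. the versions are provably equivalent.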

Working through a small tiling result
Vanessa Kosoy · 2mo

IIUC, fixed point equations like that typically have infinitely many solutions. So, you defined not one goodnew predicate, but an infinite family of them. Therefore, your agent will trust a copy of itself, but usually won't trust variants of itself with other choices of fixed point. In this sense, this proposal is similar to proposals based on quining (as quining has many fixed points as well).

[This comment is no longer endorsed by its author]
Dating Roundup #4: An App for That
Vanessa Kosoy · 2mo

I don't know what's so bad about the "human male" bio. I might have swiped right on that one. (Especially if the profile had additional info that makes him sound interesting.)

Wikitag Contributions

  • Derivative (7d, +11/-1)

Posts

  • New Paper: Ambiguous Online Learning (18d)
  • New Paper: Infra-Bayesian Decision-Estimation Theory (3mo)
  • [Closed] Gauging Interest for a Learning-Theoretic Agenda Mentorship Programme (5mo)
  • Video lectures on the learning-theoretic agenda (9mo)
  • Linear infra-Bayesian Bandits (1y)
  • Which skincare products are evidence-based? (1y)
  • AI Alignment Metastrategy (2y)
  • Critical review of Christiano's disagreements with Yudkowsky (2y)
  • Learning-theoretic agenda reading list (2y)
  • [Closed] Agent Foundations track in MATS (2y)