Alexandre Variengien

Comments
Our ancestors didn't know their faces
Alexandre Variengien · 8d

Yup indeed! See the other comment thread below

Our ancestors didn't know their faces
Alexandre Variengien · 8d

I edited the post to reflect this! (pun intended)

Our ancestors didn't know their faces
Alexandre Variengien · 8d

I went to the kitchen and tried to fill a bowl with water, and I think you are right: I underestimated how easy it is to get to see a reflection in water. I believe it is unlikely for someone to spend a lifetime without seeing their face (blind people aside); maybe it could still happen in arid desert areas, or for people living in the Arctic?

You’re always stressed, your mind is always busy, you never have enough time
Alexandre Variengien · 9d

Here is a choice: you could buy an alarm clock (I personally like this one) and make your bedroom phone-free.

EU explained in 10 minutes
Alexandre Variengien · 12d

The Balkan house analogy has been almost literally applied to the architecture of the seat of the European Parliament in Strasbourg. It is an unfinished amphitheater symbolizing the ever-ongoing construction of the Union.

Bird's eye view: An interactive representation to see large collection of text "from above".
Alexandre Variengien · 11mo

Nope, I didn't know PaCMAP! Thanks for the pointer, I'll have a look.
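
For other readers discovering it at the same time as me: below is a minimal sketch of what plugging PaCMAP into this kind of bird's-eye-view pipeline could look like, assuming the text embeddings are already computed. The random data is a stand-in, and this is not the actual Bird's eye view code.

```python
# Minimal sketch: project precomputed text embeddings to 2D with PaCMAP.
# Assumes `pip install pacmap numpy`; the embeddings below are a random
# stand-in for real sentence/document embeddings.
import numpy as np
import pacmap

rng = np.random.default_rng(0)
embeddings = rng.random((1000, 384), dtype=np.float32)  # 1000 docs, 384 dims

# PaCMAP aims to preserve both local and global structure, so clusters
# *and* their relative positions stay readable in the 2D map.
reducer = pacmap.PaCMAP(n_components=2, n_neighbors=10)
coords = reducer.fit_transform(embeddings, init="pca")

print(coords.shape)  # (1000, 2): one (x, y) point per document
```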

My guess at Conjecture's vision: triggering a narrative bifurcation
Alexandre Variengien · 2y

In section 5, I explain how CoEm is an agenda with relaxed constraints. It does not try to reduce the alignment tax to make the safety solution competitive for labs to use. Instead, it assumes there's enough progress in international governance that you have full control over how your AI gets built, and that there are enforcement mechanisms to ensure no competitive but unsafe AI can be built somewhere else.

That's what the bifurcation of narrative is about: not letting labs implement only solutions that have a low alignment tax, because that could just not be enough.

My guess at Conjecture's vision: triggering a narrative bifurcation
Alexandre Variengien · 2y

My steelman of Conjecture's position here would be:

  • Current evals orgs are tightly integrated with AGI labs. AGI labs can pick which evals org to collaborate with, control model access, decide which kinds of evals will be conducted and which kinds of reports will be public, etc. It is this power position that makes current evals feed into the AGI orthodoxy.
  • We don't have good ways to conduct evals. We have wide error bars over how much juice one can extract from models, and we are nowhere close to having the tools to upper-bound capabilities from evals. I remember this being a very strong argument internally: we are very bad at extracting capabilities from pre-trained models, and unforeseen breakthroughs (like a mega-CoT giving much more improvement than a fine-tuning baseline) could create improvements of several compute-equivalent OOMs in the short term (i.e., capability jumps as large as what several orders of magnitude of extra training compute would buy), rendering all past evals useless.
  • Evals draw attention away from other kinds of limits, in particular compute limits. Conjecture is much more optimistic about (stringent) compute limits as they are harder to game.

My opinion is:

  • For evals to be fully trusted, we need more independence, such as third-party auditors designated by public actors, with a legal framework that sets the modalities for access to the models. External accountability is the condition needed for evals not to feed into the AGI orthodoxy. I'm quite optimistic that we'll get there soon, e.g. thanks to the efforts of the UK AI Safety Institute, the EU AI Act, etc.
  • Re point 2: the field of designing scaffolding is still very young. I think it's possible we'll see surprising discontinuous progress in this domain, such that current evals turn out to be far from the upper bound of the capabilities we can extract from models. If we base deployment or training decisions on such evals and later find a better elicitation technique, it's really hard to revert (e.g. for open-source releases; it's also much easier to stop a model halfway through training upon finding a scary ability than to delete it after a first period of deployment). See https://www.lesswrong.com/posts/fnc6Sgt3CGCdFmmgX/we-need-a-science-of-evals
  • I agree with point 3. I'm generally quite happy with what we learned from the conservative evals and the role they played in raising public awareness of the risks. I'd like to see evals orgs find more robust ways to evaluate performance and move toward more independence from the AGI labs.
Studying The Alien Mind
Alexandre Variengien · 2y

I really appreciate the naturalistic experimentation approach – the fact that it tries to poke at the unknown unknowns, discovering new capabilities or failure modes of Large Language Models (LLMs).

I'm particularly excited by the idea of developing a framework to understand hidden variables and create a phenomenological model of LLM behavior. This seems like a promising way to "carve LLM abilities at their joints," moving closer to enumeration rather than the current approach of 1) coming up with an idea, 2) asking, "Can the LLM do this?" and 3) testing it. We lack access to a comprehensive list of what LLMs can do inherently. I'm very interested in anything that moves us closer to this, where human creativity is no longer the bottleneck in understanding LLMs. A constrained psychological framework could be helpful in uncovering non-obvious areas to explore. It also offers a way to evaluate the frameworks we build: do they merely describe known data, or do they suggest experiments and point toward phenomena we wouldn't have discovered on our own?

However, I believe there are unique challenges in LLM psychology that make it more complex:

  • Researchers are humans. We have an intrinsic understanding of what it's like to be human, including which capabilities and phenomena are interesting to study. Researchers can draw upon all of human history and literature to find phenomena worth exploring. In many ways, the hundreds of years of stories, novels, poems, and movies pre-digest the work for psychologists by drawing detailed pictures of feelings, characters, and behaviors, and by surfacing interesting phenomena to study. LLMs, however, are i) extremely recent, and ii) a type of non-localized intelligence we have no prior examples of. This means we should expect significant blind spots.
  • LLMs appear quite brittle. Findings might be highly sensitive to i) the base model, ii) fine-tuning, and iii) the pre-prompt. Studying LLMs might mean exploring all the personas they can instantiate, potentially a vastly larger space than the range of human brains.
  • There's also the risk of being confused by results and not knowing how to proceed. For instance, if you find high sensitivity to the exact tokens used, affecting certain properties in ways that seem illogical, you might have a lot of data but no framework to make sense of it.

I really like the concept of species-specific experiments. However, you should be careful not to project too much of your prior models into these experiments. The ideas of latent patterns and shadows could already carry implicit assumptions and constrain what we might imagine as experiments. I think this field requires epistemology on steroids, because i) experiments are cheap (see the sketch below), so most of our time is spent digesting data, which makes it easy to go off track and continually study our pet theories, and ii) our human priors are probably ill-suited to understanding LLMs.
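
To make the "experiments are cheap" point concrete, here is a minimal sketch of a token-sensitivity probe. The model, prompts, and target are placeholder choices for illustration, not a claim about any particular finding.

```python
# Minimal sketch: how much does a causal LM's prediction shift under
# near-synonymous rewordings? Assumes `pip install transformers torch`;
# the model, prompts, and target below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Prompts that differ only in surface tokens, not in meaning.
prompts = [
    "The capital of France is",
    "France's capital city is",
    "The capital city of France is",
]
# First token of the expected continuation.
target_id = tokenizer(" Paris", return_tensors="pt").input_ids[0, 0]

for prompt in prompts:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # next-token logits
    logprob = torch.log_softmax(logits, dim=-1)[target_id].item()
    print(f"{prompt!r}: log P(first token of ' Paris') = {logprob:.3f}")
```

Running dozens of such variations takes minutes; interpreting why the numbers move is where all the time goes.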

The case for training frontier AIs on Sumerian-only corpus
Alexandre Variengien · 2y

What I really like about ancient languages is that there's no online community the model could exploit. Even low-resource modern languages have online forums an AI could use as an entry point.

But this consideration might be eclipsed by the fact that a rogue AI would have access to a translator before trying online manipulation, or by another scenario I'm not considering.

I agree that the lack of direct access to the CoT is one of the major drawbacks. Though we could have a slightly smarter reporter that also answers questions about CoT interpretation.

Posts

  • Thick practices for AI tools · 3d · 0 comments
  • Our ancestors didn't know their faces · 8d · 5 comments
  • Breaking Books: A tool to bring books to the social sphere · 8d · 1 comment
  • Letter to a close friend · 10d · 0 comments
  • Solving a problem with mindware · 11d · 0 comments
  • The Connection · 4mo · 0 comments
  • Bird's eye view: An interactive representation to see large collection of text "from above". · 11mo · 4 comments
  • My guess at Conjecture's vision: triggering a narrative bifurcation · 2y · 12 comments
  • The case for training frontier AIs on Sumerian-only corpus · 2y · 16 comments
  • A Universal Emergent Decomposition of Retrieval Tasks in Language Models · 2y · 3 comments