LESSWRONG

Oleg Trott
Columbia PhD, co-winner of the most well-funded ML competition ever, creator of the most cited molecular docking program: olegtrott.com

Comments
How unusual is the fact that there is no AI monopoly?
Oleg Trott · 11mo · 30

"why didn't the first person to come up with the idea of using computers to predict the next element in a sequence patent that idea, in full generality"

 

Patents are valid for about 20 years. But Bengio et al. used NNs to predict the next word back in 2000:

https://papers.nips.cc/paper_files/paper/2000/file/728f206c2a01bf572b5940d7d9a8fa4c-Paper.pdf

So this idea is old. Only some specific architectural aspects are new.

Does VETLM solve AI superalignment?
Oleg Trott · 11mo · 10

I suspect this labeling and using the labels is still harder than you think though, since individual tokens don't have truth values.

 

Why should they?

You could label each paragraph, for example. Then, when the LM is trained, the correct label could come before each paragraph, as a special token: <true>, <false>, <unknown> and perhaps <mixed>.

Then, during generation, you'd feed it <true> as part of the prompt, and again whenever it generates a paragraph break.

Similarly, you could do this on a per-sentence basis.
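The scheme above can be sketched in a few lines. This is only an illustration of the idea, not an implementation: the label names come from the comment, while the helper names and the string-level handling (real training would operate on token IDs, with the labels registered as special tokens) are assumptions.

```python
# Illustrative sketch of per-paragraph truth labeling for LM training.
# Assumed: labels from the comment; string-level handling for clarity.

LABELS = ("true", "false", "unknown", "mixed")

def build_training_text(labeled_paragraphs):
    """Prefix each paragraph with its label as a special token,
    e.g. ("true", "Water is wet.") -> "<true> Water is wet."."""
    parts = []
    for label, paragraph in labeled_paragraphs:
        if label not in LABELS:
            raise ValueError(f"unknown label: {label}")
        parts.append(f"<{label}> {paragraph}")
    return "\n\n".join(parts)

def condition_prompt(prompt):
    """At generation time, start the continuation with <true>; the same
    token would also be re-inserted after each generated paragraph break."""
    return prompt.rstrip() + "\n\n<true> "
```

The per-sentence variant would be identical, just with sentence boundaries in place of paragraph breaks.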

Does VETLM solve AI superalignment?
Oleg Trott · 1y · 30

The idea that we're going to produce a similar amount of perfectly labeled data doesn't seem plausible.

 

That's not at all the idea. Allow me to quote myself:

Here’s what I think we could do. Internet text is vast – on the order of a trillion words. But we could label some of it as “true” and “false”. The rest will be “unknown”.

You must have missed the words "some of" in it. I'm not suggesting labeling all of the text, or even a large fraction of it. Just enough to teach the model the concept of right and wrong.

It shouldn't take long, especially since I'm assuming a human-level ML algorithm here, that is, one with data efficiency comparable to that of humans.

Does VETLM solve AI superalignment?
Answer by Oleg Trott · Aug 08, 2024 · -10

Carlson's interview, BTW. It discusses LessWrong in the first half of the video. Between X and YouTube, the interview got 4M views -- possibly the most high-profile exposure of this site?

 

 

I'm kind of curious about the factual accuracy: the "debugging" / struggle sessions, the polycules, and the 2017 psychosis -- did that happen?

Does VETLM solve AI superalignment?
Oleg Trott · 1y · 10

What do VELM and VETLM offer which those other implementable proposals don't? And what problems do VELM and VETLM not solve?

 

VETLM solves superalignment, I believe. It's implementable (unlike CEV), and it should not be susceptible to wireheading (unlike RLHF, instruction following, etc.). Most importantly, it's intended to work with an arbitrarily good ML algorithm -- the stronger, the better.

So, will it self-improve, self-replace, escape, let you turn it off, etc.? Yes, if it thinks that this is what its creators would have wanted.

Will it be transparent? To the point where it can self-introspect, yes -- again, if it thinks that being transparent is what its creators would have wanted. If it thinks that this is a worthy goal to pursue, it will self-replace with increasingly transparent and introspective systems.

Does VETLM solve AI superalignment?
Oleg Trott · 1y · 10

New proposals are useful mainly insofar as they overcome some subset of barriers which stopped other solutions.

 

CEV was stopped by being unimplementable, and possibly divergent:

The main problems with CEV include, firstly, the great difficulty of implementing such a program - “If one attempted to write an ordinary computer program using ordinary computer programming skills, the task would be a thousand lightyears beyond hopeless.” Secondly, the possibility that human values may not converge. Yudkowsky considered CEV obsolete almost immediately after its publication in 2004.

VELM and VETLM are easily implementable (on top of a superior ML algorithm). So does this fit the bill?

New Blog Post Against AI Doom
Oleg Trott · 1y · 50

That post was completely ignored here: 0 comments and 0 upvotes during the first 24 hours.

I don't know if it's the timing or the content.

On HN, which is where I saw it, it was ranked #1 briefly, as I recall. But then it got "flagged", apparently. 

AI existential risk probabilities are too unreliable to inform policy
Oleg Trott · 1y · 54

Machine Learning Street Talk interview with one of the authors:

The Assassination of Trump's Ear is Evidence for Time-Travel
Oleg Trott · 1y · 10

There was an article in New Scientist recently about "sending particles back in time". I was a physics major, but I might have skipped the time travel class, so I don't have an opinion on this. But Sabine Hossenfelder posted a video, arguing that New Scientist misrepresented the actual research.

The $100B plan with "70% risk of killing us all" w Stephen Fry [video]
Oleg Trott · 1y · 10

Side note: the link didn't make it to the front page of HN, despite early upvotes. Other links with worse stats (votes at a certain age) rose to the very top. Anyways, it's currently ranked 78. I guess I don't really understand how HN ranks things. I hope someone will explain this to me. Does the source "youtube" vs "nytimes" matter? Do flag-votes count as silent mega-downvotes? Does the algorithm punish posts with numbers in them?

Posts

-1 · Does VETLM solve AI superalignment? [Question] · 1y · 10 comments
18 · AI existential risk probabilities are too unreliable to inform policy · 1y · 5 comments
35 · The $100B plan with "70% risk of killing us all" w Stephen Fry [video] · 1y · 8 comments
3 · Recursion in AI is scary. But let’s talk solutions. · 1y · 10 comments
11 · Alignment: "Do what I would have wanted you to do" · 1y · 48 comments
9 · Fix simple mistakes in ARC-AGI, etc. · 1y · 9 comments
87 · I'm a bit skeptical of AlphaFold 3 · 1y · 14 comments