LESSWRONG
Hannes Thurnherr

Comments
Is training data going to be diluted by AI-generated content?
Hannes Thurnherr · 3y

I hadn't thought of training the models by evaluating which image the user selects. And thanks for correcting me on my claim about DALL·E 2's training data.

What do you mean by "training a model to detect its own errors"? Maybe this is a naive question (I am an ML newcomer), but isn't that impossible by definition? Why would a model make a mistake if it's capable of identifying it as one? Or do you mean that, through continuous improvement, the model could correct the mistakes it made in the past, after some time has passed?

The problem of dilution remains for GPTs in my view. Widespread use seems likely over the coming years and the resulting text is unlikely to be properly labeled as AI-generated. Thus it seems likely that the text produced by today's models will get absorbed into the training data of future GPTs, which will cause them to at least partially attempt to emulate their predecessors. Am I making a mistake somewhere in this thought process?
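The feedback loop above can be sketched with a toy simulation. This is a deliberate caricature, not real LLM training: the "model" is just a Gaussian fit to the current data pool, and each generation's "training data" consists entirely of the previous model's samples. All function names and parameters here are illustrative.

```python
import numpy as np

def generation_loop(n_samples=20, n_generations=200, seed=0):
    """Toy model of training-data dilution: each generation fits a
    Gaussian 'model' to the previous generation's samples, then emits
    the next generation's data by sampling from that fit."""
    rng = np.random.default_rng(seed)
    # Generation 0: original "human-written" data from N(0, 1).
    data = rng.normal(0.0, 1.0, n_samples)
    stds = [data.std()]
    for _ in range(n_generations):
        mu, sigma = data.mean(), data.std()   # "train" the model
        data = rng.normal(mu, sigma, n_samples)  # next pool is 100% model output
        stds.append(data.std())
    return stds

stds = generation_loop()
print(f"generation 0 std: {stds[0]:.3f}, final std: {stds[-1]:.3g}")
```

In this extreme case (no fresh human data ever enters the pool), the spread of the data collapses toward zero over generations: each model "emulates its predecessor" and sampling noise compounds. Mixing in a fraction of original data each round slows the collapse, which is roughly the intuition behind worrying about how much of the future training corpus is properly labeled.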

Is training data going to be diluted by AI-generated content?
Hannes Thurnherr · 3y

I tend to agree with you, but I am not sure that our methods of distinguishing AI-generated from human-generated content will ever reach the reliability required for this to "work". As long as that distinction remains imperfect, at least some of the feedback loop will persist, which will slow down development.

Posts

4 · Decompiling Tracr Transformers - An interpretability experiment · 1y · 0 comments
5 · Sentience in Silicon: The Challenges of AI Consciousness · 2y · 2 comments
10 · Is training data going to be diluted by AI-generated content? · 3y · 7 comments