The real details are here: What is Glaze?

I wonder how robust this technique is to adversarial attacks. It claims to work at the pixel level, cloaking an image by distorting the pixels in ways that are invisible to the human eye but throw off a generative AI model.
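For intuition, here is a minimal sketch of the general idea behind pixel-level cloaking, not Glaze's actual algorithm: nudge an image's features, as seen by some encoder, toward a different target style while keeping the pixel-space change within a small budget. The VGG16 encoder, the epsilon budget, and the optimizer settings below are illustrative assumptions, not details from the paper.

```python
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumption: a VGG16 feature extractor stands in for the image encoder of the
# text-to-image model being targeted.
encoder = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16].to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

def cloak(image, target_style_image, eps=8 / 255, steps=100, lr=0.01):
    """Return image + delta, with ||delta||_inf <= eps, whose encoder features
    are pushed toward those of target_style_image."""
    with torch.no_grad():
        target_feat = encoder(target_style_image)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        loss = torch.nn.functional.mse_loss(encoder(perturbed), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation within a small, ideally imperceptible, budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta).clamp(0, 1).detach()

# Usage: both inputs are (1, 3, H, W) tensors in [0, 1] on the same device.
# cloaked = cloak(image, target_style_image)
```

The relevant point for the robustness question is that the perturbation is optimized against a particular encoder; a different model, or a re-photographed or re-scaled copy of the image, may not respond to it at all.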

But what about a high-resolution photograph of an artwork, or an image of one on a computer monitor? I often take photographs of artworks I find compelling, then use Midjourney to style-copy them into novel images.

Certainly this is not scalable to mass data scraping (or could it be?), but I have observed that Midjourney can style-copy with high fidelity from a handful of never-before-seen images.

The creators, in all fairness, do not claim it is future-proof.


1 comment:

My gut feeling is that this only works against specific models at specific times. Have you tried out their sample images in Midjourney?