Modern deep learning approaches usually transform inputs into a modality-specific form. For example, the most common deep learning approach to image classification involves decoding image file bytes into an RGB tensor which is passed into a neural network. Instead, we investigate performing classification directly on file bytes, without the need for decoding files at inference time. Using file bytes as model inputs enables the development of models which can operate on multiple input modalities. Our model, *ByteFormer*, achieves an ImageNet Top-1 classification accuracy of 77.33% when training and testing directly on TIFF file bytes using a transformer backbone with configuration similar to DeiT-Ti (72.2% accuracy when operating on RGB images). Without modifications or hyperparameter tuning, ByteFormer achieves 95.42% classification accuracy when operating on WAV files from the Speech Commands v2 dataset (compared to state-of-the-art accuracy of 98.7%). Additionally, we demonstrate that ByteFormer has applications in privacy-preserving inference. ByteFormer is capable of performing inference on particular obfuscated input representations with no loss of accuracy. We also demonstrate ByteFormer's ability to perform inference with a hypothetical privacy-preserving camera which avoids forming full images by consistently masking 90% of pixel channels, while still achieving 71.35% accuracy on ImageNet.
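The core idea — treat each of the 256 possible byte values as a token and run a standard transformer over the raw file — can be sketched as below. This is a minimal illustration, not the paper's exact ByteFormer (which also shortens the byte sequence, e.g. with a strided convolution, before attention); the class and parameter names here are made up for the example, with the embedding width and head count loosely following DeiT-Ti (192-dim, 3 heads).

```python
import torch
import torch.nn as nn

class ByteClassifier(nn.Module):
    """Hypothetical sketch: classify a file from its raw bytes."""
    def __init__(self, num_classes=1000, dim=192, heads=3, depth=2, max_len=512):
        super().__init__()
        # One learned embedding per possible byte value (0..255).
        self.byte_embed = nn.Embedding(256, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, byte_ids):
        # byte_ids: (batch, seq_len) integers in [0, 255]
        x = self.byte_embed(byte_ids) + self.pos_embed[:, : byte_ids.size(1)]
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # mean-pool tokens, then classify

# The raw bytes of any file (TIFF, WAV, ...) become the token sequence directly.
raw = bytes(range(256))  # stand-in for real file contents
ids = torch.tensor(list(raw)).unsqueeze(0)  # shape (1, 256)
logits = ByteClassifier(num_classes=10)(ids)
```

The point is that nothing in the pipeline is modality-specific: the same model consumes image files and audio files identically, since both are just byte sequences.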

https://arxiv.org/abs/2306.00238
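The privacy-preserving camera described in the abstract could be sketched as a fixed mask applied at capture time, so that a full image never exists; this is an assumed illustration of "consistently masking 90% of pixel channels", not the paper's actual camera model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a sensor's raw RGB readout (random values, not a real image).
img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

# A fixed, camera-specific mask that keeps only ~10% of pixel channels;
# the other ~90% are zeroed before anything leaves the sensor.
mask = rng.random(img.shape) < 0.10
captured = np.where(mask, img, 0)
```

Because the mask is consistent across captures, a model trained on masked data can learn to classify it, while the captured signal alone never forms a complete image.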

2 comments

Can't an adversary run the "privacy-preserving" inputs through their own model?

The ByteFormer model you're discussing sounds pretty groundbreaking! It's fascinating how it can classify images directly from TIFF file bytes with such high accuracy (77.33% on ImageNet), outperforming the same DeiT-Ti-style backbone operating on decoded RGB images (72.2%). Even more impressive is its ability to handle WAV files from the Speech Commands v2 dataset with minimal effort, scoring a 95.42% classification accuracy.

The part about privacy-preserving inference is super intriguing. Operating on obfuscated inputs without losing accuracy could be a game-changer for data privacy. The idea of a privacy-preserving camera that ByteFormer can work with, even when 90% of pixel channels are masked, yet still pulling off a 71.35% accuracy, is pretty cool.


All in all, ByteFormer seems like it's paving the way for more efficient and versatile deep learning models, especially in terms of handling different data types and protecting privacy.