Contrastive Learning: Effective Anomaly Detection with Auto-Encoders

How to improve auto-encoder performance in anomaly detection tasks with Contrastive Learning and Keras

Facundo Santiago
7 min read · Aug 9, 2020


I’m sure you have heard about auto-encoders before. They are neural networks trained to learn efficient data representations in an unsupervised way. They have proven useful in a variety of tasks, like data denoising and dimensionality reduction. However, in a vanilla configuration they seldom perform well at anomaly detection. Hence, in this post we are going to explore how to build an effective anomaly detection model using an autoencoder and contrastive learning (in some of the literature you will find it referred to as negative learning). Full implementation code is available on GitHub.

What’s an autoencoder?

Autoencoders can be seen as an encoder-decoder data compression algorithm: an encoder compresses the input data (from the original space to an encoded space, or latent space) and a decoder decompresses it (from the latent space back to the original space). The idea here is that if the autoencoder has learned an efficient data representation in the latent space, then it should be able to reconstruct the instances correctly on the other end. Easy peasy.
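To make this concrete, here is a minimal Keras sketch of a dense autoencoder scored by reconstruction error. The layer sizes, `input_dim`, the random placeholder data, and the 95th-percentile threshold are illustrative assumptions, not the exact architecture from this post:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 30   # number of input features (illustrative)
latent_dim = 8   # size of the latent space (illustrative)

# Encoder: compresses the input from the original space to the latent space
inputs = keras.Input(shape=(input_dim,))
x = layers.Dense(16, activation="relu")(inputs)
latent = layers.Dense(latent_dim, activation="relu")(x)

# Decoder: decompresses the latent code back to the original space
x = layers.Dense(16, activation="relu")(latent)
outputs = layers.Dense(input_dim, activation="sigmoid")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Placeholder "normal" data so the sketch runs end to end;
# in practice you would train on your real normal instances only
x_normal = np.random.rand(1000, input_dim).astype("float32")
autoencoder.fit(x_normal, x_normal, epochs=5, batch_size=64, verbose=0)

# Anomaly score = reconstruction error: instances the model cannot
# reconstruct well are flagged as anomalies
reconstructions = autoencoder.predict(x_normal, verbose=0)
errors = np.mean(np.square(x_normal - reconstructions), axis=1)
is_anomaly = errors > np.percentile(errors, 95)  # illustrative threshold
```

In this vanilla setup the anomaly score is just the reconstruction error; the contrastive (negative) learning idea this post builds toward changes how the model is trained so that anomalous inputs reconstruct even worse, widening that gap.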

