I’m sure you have heard about auto-encoders before. They are neural networks trained to learn efficient data representations in an unsupervised way. They have proven useful in a variety of tasks like data denoising or dimensionality reduction. However, in a vanilla configuration they seldom work well. Hence, in this post we are going to explore how we can construct an efficient anomaly detection model using an autoencoder and contrastive learning (in some of the literature you will find it referred to as negative learning). Full implementation code is available on GitHub.
Autoencoders can be seen as an encoder-decoder data compression algorithm where an…
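To make the encoder-decoder idea concrete, here is a minimal sketch of autoencoder-based anomaly detection, assuming scikit-learn is available. An `MLPRegressor` with a narrow hidden layer is trained to reconstruct its own input; points that reconstruct poorly are flagged as anomalies. The layer size, data, and threshold are illustrative choices, not the configuration from the post.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_normal = rng.normal(0, 1, size=(500, 8))     # "normal" training data
X_anomalous = rng.normal(5, 1, size=(5, 8))    # clearly shifted points

# A bottleneck of 3 units forces a compressed representation
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(X_normal, X_normal)                     # learn to reproduce the input

def reconstruction_error(model, X):
    """Per-sample mean squared reconstruction error."""
    return np.mean((model.predict(X) - X) ** 2, axis=1)

# Flag anything that reconstructs worse than 99% of the training data
threshold = np.percentile(reconstruction_error(ae, X_normal), 99)
flags = reconstruction_error(ae, X_anomalous) > threshold
```

The key design choice is the bottleneck: because the network can only memorize what fits through the compressed layer, it learns the structure of normal data and fails to reproduce anything far from it.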
Machine Learning-based technologies have not historically been developed with security and privacy in mind. They have enjoyed a certain leeway, given the challenge it poses to look at the data without seeing the data. However, as the technology moves into decision systems, core operations, and more delicate applications, this needs to change.
You can think of security in Machine Learning in two dimensions, which also happen to occur at different times:
While Machine Learning keeps achieving considerable successes in solving and improving all kinds of problems, an ever-growing number of disciplines rely on it. However, this success crucially depends on machine learning experts performing manual tasks. AutoML promises to change this reality and turn it into an experience of the type “data in, model out”.
For an introduction to the Automated Machine Learning topic and the description of the methods used by Google AutoML (ENAS), Microsoft Automated ML, AutoKeras and auto-sklearn, visit:
In this post, I will try to explain and help you build some intuition about how AutoGluon, H2O…
For an introduction to the subject see:
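To give a feel for the “data in, model out” idea, here is a toy sketch assuming scikit-learn. Real AutoML systems like AutoGluon or auto-sklearn search model families and hyperparameters far more cleverly; this simply cross-validates a few candidate models and returns the best one. The `auto_fit` helper and candidate list are illustrative, not any library’s API.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def auto_fit(X, y):
    """Pick the candidate with the best cross-validated accuracy, then fit it."""
    candidates = [
        LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(random_state=0),
        RandomForestClassifier(random_state=0),
    ]
    best = max(candidates, key=lambda m: cross_val_score(m, X, y).mean())
    return best.fit(X, y)

X, y = load_iris(return_X_y=True)
model = auto_fit(X, y)   # "data in, model out"
```

Everything an expert would normally decide by hand — which model family, which settings, how to validate — is what AutoML tooling automates at scale.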
As we saw in my introduction to model interpretability, the most straightforward way to get an interpretable machine learning model is to use an algorithm that creates interpretable models from the very beginning. This includes models like Linear Models, Decision Trees, etc.
Surrogate models try to extend this idea by training an interpretable model to “mimic” the behavior of a black-box model hoping that by understanding the “mimic” model we will get an understanding of how the black-box model behaves. …
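The “mimic” idea can be sketched in a few lines, assuming scikit-learn. A gradient-boosted ensemble plays the role of the black box; a shallow decision tree is then trained on the black box’s predictions (not the true labels), so inspecting the tree approximates the ensemble. The dataset and models are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# The opaque model we want to explain
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
y_bb = black_box.predict(X)            # the black box's outputs, not the labels

# An interpretable model trained to mimic those outputs
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# Fidelity: how often the mimic agrees with the black box
fidelity = accuracy_score(y_bb, surrogate.predict(X))
```

The number to watch is fidelity: a surrogate is only a trustworthy explanation of the black box to the extent that it actually agrees with it.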
If we had to name two trends in modern applications, containers and machine learning would come to mind. More and more applications are taking advantage of containerized microservices architectures in order to enable improved elasticity, fault-tolerance, and scalability — whether on-premises or in the cloud. At the same time, Machine Learning has increasingly become a basic requirement in any application.
However, the two do not always play well together, and this is the case with Spark. Spark is nowadays the industry’s de facto standard for big data analytics, whether through Cloudera, Azure Databricks, AWS EMR, or any other distribution. …
For an introduction to the topic Model Interpretability see:
In a previous post, I wrote about why checking model fairness is such a critical task. I also wrote a post showing the different methods available to do so, including Shapley Values and Feature Importance. Now it’s time for a new method: LIME.
I showed that Shapley Values are conceptually great, but require a lot of computing time. For more than a few features, the exact solution becomes intractable. This is the reason why most of the packages, like SHAP, implement an approximation method since it is likely…
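A rough sketch of such a Monte Carlo approximation, assuming scikit-learn: instead of enumerating all 2^n feature coalitions, we sample random feature permutations and average each feature’s marginal contribution. The `shapley_sample` helper, model, and data are illustrative, not SHAP’s actual implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 3 * X[:, 0] + 1 * X[:, 1] + rng.normal(scale=0.1, size=200)
model = LinearRegression().fit(X, y)

def shapley_sample(model, X, x, feature, n_samples=500):
    """Approximate the Shapley value of `feature` for instance `x` by sampling."""
    n = X.shape[1]
    total = 0.0
    for _ in range(n_samples):
        z = X[rng.integers(len(X))]        # random background instance
        order = rng.permutation(n)         # random coalition ordering
        pos = int(np.where(order == feature)[0][0])
        # x's values for features up to (and including) `feature`, z's for the rest
        with_f = z.copy()
        with_f[order[: pos + 1]] = x[order[: pos + 1]]
        without_f = z.copy()
        without_f[order[:pos]] = x[order[:pos]]
        total += model.predict(with_f[None])[0] - model.predict(without_f[None])[0]
    return total / n_samples

x = X[0]
phi0 = shapley_sample(model, X, x, feature=0)
```

For a linear model this should land close to coefficient × (feature value minus its mean), which gives a handy sanity check on the approximation.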
Following the sequence of posts about model interpretability, it is time to talk about a different method to explain model predictions: Feature Importance, or more precisely, Permutation Feature Importance. It belongs to the family of model-agnostic methods, which, as explained before, are methods that don’t rely on any particularity of the model we want to interpret.
For information about the other methods for interpretability see: Model interpretability — Making your model confess: Shapley values
One of the most basic questions we might ask about a model is what features have the biggest impact on predictions. This concept is called feature…
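The idea is simple enough to sketch directly, assuming scikit-learn: shuffling one column breaks its relationship with the target, and the resulting drop in score is that feature’s importance. The dataset and model here are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=300, n_features=5, n_informative=2, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Shuffle each feature n_repeats times and measure the score drop
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]   # most important first
```

Because it only needs predictions and a score, this works unchanged on any fitted model — exactly what “model-agnostic” means.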
In a previous post, I wrote about why checking model fairness is such a critical task. I am starting here a series of posts where I will share with you some ways you can achieve different levels of interpretability from your model and make it confess. Today I will introduce you to the topic and our first method: Shapley values.
Miller (2017), in “Explanation in Artificial Intelligence: Insights from the Social Sciences,” defines interpretability as “the degree to which a human can understand the cause of a decision in a model”. So it means it’s something that you achieve in…
Two weeks ago OpenAI published its new language model, which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs reading comprehension, machine translation, question answering, and summarization — all without task-specific training and in an unsupervised fashion. In contrast to Burger King’s AI-generated ads, it is really impressive!
This achievement has a lot of implications. It’s not that a language model will come and kill us all (is it?), but it will certainly affect how people consume information, as now you have potentially more synthetic, automatically generated, controlled sources of digital content which you may not be…
Azure Machine Learning Services (AML) provides a cloud-based environment you can use to develop, train, test, deploy, manage, and track machine learning models. I’m going to cover three main topics in a sequence of posts:
Solution Architect at the Office of CTO @ Microsoft. Machine Learning and Advanced Analytics. Sensemaking by engaging first hand. Frustrated sociologist.