Introduction to Deep Learning: Complete Autoencoder with MNIST

❈ The Cat: columnist for the Python Chinese community, designer of the community's new logo, a data analyst from a non-CS background, and addicted to Keras. Did a small thing at Cambridge; still a beginner in deep learning. ❈

Abstract: Implement an Autoencoder with TensorFlow, briefly explain what an Autoencoder is and what it is used for, and give a short introduction to the VAE. The VAE code is available in the Python Chinese community's GitHub repository.

The Autoencoder is one of the classic ideas in deep learning and a natural first step for beginners. An autoencoder is a data compression algorithm in which the compression and decompression functions are data-specific, lossy, and learned automatically from examples. In most contexts where autoencoders come up, both functions are implemented with neural networks.

Here, I will walk through building an Autoencoder for the MNIST data set.

First, download the MNIST data. A friendly reminder: for various reasons, the built-in MNIST download can be very slow, so it is recommended to download the files directly from THE MNIST DATABASE. After the download finishes, create an MNIST_data folder and put the files in it.
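Loading the data then looks something like this; a minimal sketch using the TensorFlow 1.x tutorial helper, assuming the four .gz files sit in ./MNIST_data (the variable names here are illustrative):

```python
from tensorflow.examples.tutorials.mnist import input_data

# Reads the four MNIST .gz files from ./MNIST_data (downloads them if absent).
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)

img = mnist.train.images[0]  # one image, already flattened to a 784-d float vector
```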

A side note: why are the images 28*28?

The original digits are 20*20 pixels, centered in a 28*28 field; the extra border keeps strokes near the edges inside the receptive fields of the feature detectors instead of letting them fall off the boundary.

It is often assumed that an autoencoder learns useful representations of data without any labels. Strictly speaking, though, the autoencoder is not a true Unsupervised Learning algorithm but a Self-Supervised Learning one, and self-supervised learning is a branch of supervised learning whose labels are generated from the input data itself.

To obtain a self-supervised model, you need a sensible objective and a loss function. Each MNIST image is flattened into a vector of length 784 before being fed to the Autoencoder, and the pixel values in the data set have been normalized to the range [0, 1].

We first build a very simple Autoencoder with a single ReLU hidden layer, which performs the compression. The encoder is the input layer plus the hidden layer, and the decoder is the hidden layer plus the output layer. Put differently: the input passes through the shared hidden layer in the middle, which transforms it, and then on to the output layer to produce the reconstruction. Because the images were normalized to [0, 1], we apply a Sigmoid function at the output layer so the reconstruction falls in the same range, as shown in the sketch below.
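Putting that together in code; a minimal sketch in TensorFlow 1.x, where the 32-unit hidden layer, the learning rate, and the placeholder names are illustrative choices, not fixed by the article:

```python
import tensorflow as tf

n_inputs = 28 * 28   # each flattened MNIST image
n_hidden = 32        # assumed size of the compressed representation

inputs_ = tf.placeholder(tf.float32, (None, n_inputs), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, n_inputs), name='targets')

# Encoder: a single ReLU hidden layer compresses 784 values down to n_hidden.
encoded = tf.layers.dense(inputs_, n_hidden, activation=tf.nn.relu)

# Decoder: expand back to 784 logits; Sigmoid squashes them into [0, 1].
logits = tf.layers.dense(encoded, n_inputs, activation=None)
decoded = tf.nn.sigmoid(logits, name='output')

# Pixel-wise Sigmoid cross-entropy is a natural loss for targets in [0, 1].
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
```

Training then amounts to feeding each batch as both input and target, which is exactly what makes the setup self-supervised:

```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for e in range(20):
        batch = mnist.train.next_batch(200)[0]
        batch_cost, _ = sess.run([cost, opt],
                                 feed_dict={inputs_: batch, targets_: batch})
    print('final training loss: {:.4f}'.format(batch_cost))
```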

By the way, here is where the 784 comes from: each 28*28 image is flattened into a single vector, and 28*28 = 784 pixels.
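You can check this directly on the loaded data; the tutorial loader from above already returns images pre-flattened:

```python
batch = mnist.train.next_batch(32)[0]
print(batch.shape)  # (32, 784) -- each row is one flattened 28*28 image
```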

Reference: https://cloud.tencent.com/developer/article/1033647 (Introduction to Deep Learning: Complete Autoencoder with MNIST, Tencent Cloud Community)