Introduction to Deep Learning: Complete Autoencoder with MNIST (continued)

Column

❈ The Cat: columnist for the Python Chinese community, designer of the community's new logo, a data analyst from a non-CS background who is addicted to Keras, did a small stint at Cambridge, and is still a beginner at deep learning. ❈

See the last episode:

Introduction to Deep Learning: Complete Autoencoder with MNIST

In fact, an autoencoder is usually not very good at data compression. Take image compression as an example: it is hard to train an autoencoder that matches the performance of JPEG, and even to get close you have to restrict the inputs to a fairly narrow class of images (for example, the kind of images JPEG handles poorly). Because an autoencoder depends on the characteristics of the data it was trained on, it is not practical for compressing arbitrary real-world data; you only get acceptable results on the specified type of data, and who knows what new requirements will appear later? So it is rarely used this way in practice. In 2012 it was found that using autoencoders for layer-by-layer pre-training of convolutional neural networks makes deep networks trainable, but people soon realized that a good weight initialization scheme is far more effective than laborious layer-by-layer pre-training. Batch Normalization, which appeared in 2014, allows even deeper networks to be trained effectively, and by late 2015 residual learning (ResNet) made it possible to train networks of essentially arbitrary depth.

Variational Autoencoder (VAE)

VAE is a younger and more interesting variant of the autoencoder. It imposes constraints on the latent code, so that the encoder learns a latent-variable model of the input data. The original VAE paper on arXiv is listed in the Reference section at the end of this article; interested readers can go through it, although papers are not written for everyone and some readers may find them tiring. There are already plenty of articles about VAE, so here I will only briefly describe my own understanding.
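To make "constraints on the latent code" concrete, here is a minimal Keras-style sketch (my own illustration, not code from this article's GitHub repo): the encoder outputs a mean and a log-variance instead of a single deterministic code, and the code z is then sampled from that distribution via the reparameterization trick. The layer sizes (784-256-2) are assumptions chosen only to keep the example small.

```python
# Minimal sketch: an encoder that parameterizes a distribution over the code z.
# Sizes are illustrative; this is not the article's original code.
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

original_dim = 784   # a flattened 28x28 MNIST image
latent_dim = 2       # dimensionality of the latent variable z

x = Input(shape=(original_dim,))
h = Dense(256, activation='relu')(x)
z_mean = Dense(latent_dim)(h)       # mean of q(z|x)
z_log_var = Dense(latent_dim)(h)    # log-variance of q(z|x)

def sampling(args):
    # Reparameterization trick: z = mean + sigma * epsilon, epsilon ~ N(0, I),
    # so the sampling step stays differentiable with respect to the encoder.
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=K.shape(z_mean))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

z = Lambda(sampling)([z_mean, z_log_var])   # the constrained, stochastic code
encoder = Model(x, z_mean)                  # deterministic part, handy for plotting codes
```

The point of the constraint is exactly this: instead of letting the encoder emit any code it likes, the training loss (shown later in this article) pushes q(z|x) towards a standard normal prior, which is what turns the autoencoder into a latent-variable model.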

VAE and GAN are both generative models. All generative models define a probability distribution over data points X living in some high-dimensional space, and the job of a generative model is to produce new samples that are close to X. If you want to sound impressive, you can say: "for a vector X in that high-dimensional space there is a latent variable z inside, which lets us define the data through a probability density function (PDF, and yes, it really is called a PDF) p(X)." In short, not very human language.

GAN comes from game theory: it looks for a Nash equilibrium between the generator network and the adversarial (discriminator) network, which means that each player's strategy is simultaneously the best response to the other player's strategy. You can picture it as a clever good guy and a clever bad guy squaring off: the worst case is that they destroy each other, and the Nash equilibrium is the pair of responses that leaves both of them best off. For instance, when the good guy and the bad guy are about to fight, a peacemaker steps in and says "don't fight, don't fight"; if the two sides are evenly matched, neither will come out ahead. Not fighting looks like a gain of zero, while any kind of fight causes losses, so at this point not fighting is the best outcome for both sides, and that is the Nash equilibrium. VAE is quite different from GAN in how it is trained: VAE is rooted in Bayesian inference, it models the probability distribution of the data and then draws new data from that distribution.

Rather than piling up mathematical formulas and proofs, let me tell a short story to build the intuition.

Here is the example: I ask you to guess a thing (the data). Say I ask you to guess an animal that has four legs and can swim; all the hippos and platypuses you have ever scrolled past immediately come to mind.

In the process of guessing what the animal is, the candidates we imagine, the hippo or the platypus, are the latent variable.

First, to get to a hippo or a platypus at all, we have to narrow the scope: we should not be guessing airplanes and cannons, but animals, so our brain confines itself to the set of animals. But here is the problem: what if you do not even know that it is an animal? In VAE we use Variational Inference (VI), a technique commonly used in Bayesian inference and typically much cheaper than MCMC (Markov Chain Monte Carlo). VI uses the KL divergence to measure how different one distribution is from another; put loosely, KL divergence can tell you how different one person is from another person, and how different a person is from a dog.

When we do not know what the answer is, here meaning when we do not know that the target is the set of animals, we can take other candidate sets, say plants or inorganic things, and compare them against our target. In the real world this kind of comparison is lightning fast: the famous Cell paper tells us that face recognition uses only about two hundred cells, while our algorithms... let's not even go there. Take that as a small gap between the real world and the logical world of mathematics. I may not be explaining this very well, so please point out my mistakes. In this way we can, quickly and reasonably well, home in on the set we want (the standard formulas are given below).
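For readers who do want one formula after all: variational inference approximates the intractable posterior p(z|X) with a simpler distribution q(z|X), the KL divergence measures how far apart two distributions are, and training a VAE maximizes the resulting lower bound on log p(X). These are the standard textbook expressions, not a derivation of my own.

```latex
% KL divergence: how different distribution q is from distribution p.
D_{\mathrm{KL}}\big(q(z \mid X)\,\|\,p(z \mid X)\big)
  = \mathbb{E}_{q(z \mid X)}\!\left[ \log \frac{q(z \mid X)}{p(z \mid X)} \right]

% Rearranging log p(X) gives the evidence lower bound (ELBO) that VAE training maximizes:
\log p(X) \;\ge\; \mathbb{E}_{q(z \mid X)}\big[\log p(X \mid z)\big]
           \;-\; D_{\mathrm{KL}}\big(q(z \mid X)\,\|\,p(z)\big)
```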

Yes, VAE plays a game of "approximately": guess and guess, and in the end it might hand you a rhino instead of a hippo. But that does not stop VAE from performing well as a high-capacity generative model.

Of course, a VAE is quite a bit harder to implement than a plain autoencoder. Written in raw TensorFlow it runs to well over a hundred lines (mainly because my own skill is limited), and pasting all of it here would feel like padding, so interested readers can go take a look on GitHub. A heavily trimmed sketch is given below as a reference point.
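The sketch below (mine, not the GitHub code referred to above) shows what a compact Keras version can look like, following the standard recipe of reparameterized sampling plus a loss that combines reconstruction error with the KL term; the layer sizes, number of epochs and batch size are arbitrary assumptions.

```python
# A compact, illustrative VAE on MNIST in Keras (standard recipe, assumed hyperparameters).
import keras.backend as K
from keras.datasets import mnist
from keras.layers import Input, Dense, Lambda
from keras.losses import binary_crossentropy
from keras.models import Model

original_dim, intermediate_dim, latent_dim = 784, 256, 2

# Encoder: x -> (z_mean, z_log_var) -> sampled code z
x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sampling(args):
    # Reparameterization trick keeps the sampling step differentiable.
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=K.shape(z_mean))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon

z = Lambda(sampling)([z_mean, z_log_var])

# Decoder: z -> reconstructed x
h_decoded = Dense(intermediate_dim, activation='relu')(z)
x_decoded = Dense(original_dim, activation='sigmoid')(h_decoded)

vae = Model(x, x_decoded)

# VAE loss = reconstruction error + KL(q(z|x) || N(0, I))
reconstruction_loss = original_dim * binary_crossentropy(x, x_decoded)
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae.add_loss(K.mean(reconstruction_loss + kl_loss))
vae.compile(optimizer='rmsprop')

# Train on flattened MNIST images scaled to [0, 1]; no labels are needed.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, original_dim).astype('float32') / 255.
x_test = x_test.reshape(-1, original_dim).astype('float32') / 255.
vae.fit(x_train, epochs=10, batch_size=128, validation_data=(x_test, None))
```

The KL term is exactly what was discussed above: it pulls the learned q(z|x) towards a standard normal prior, so that after training you can sample z from N(0, I), push it through the decoder, and get new MNIST-like digits.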

Reference

  1. Udacity Deep Learning-Autoencoder
  2. Keras Documentation
  3. Why Does Unsupervised Pre-training Help Deep Learning?
  4. VAE original paper: Auto-Encoding Variational Bayes (arXiv)
  5. Variational Autoencoder: Intuition and Implementation
Source: https://cloud.tencent.com/developer/article/1033652