Multilayer Perceptron (MLP)
This is how you should learn it! (Atomic-learning)
The Basics
A neural network consists of a series of stacked layers. Each layer contains units that are connected to the previous layer’s units through a set of weights. As we shall see, there are many different types of layers, but one of the most common is the fully connected (or dense) layer that connects all units in the layer directly to every unit in the previous layer.
Neural networks where all adjacent layers are fully connected are called multilayer perceptrons (MLPs).
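To make "fully connected" concrete, here is a minimal NumPy sketch (the layer sizes, random weights, and function name `dense` are illustrative assumptions, not from the original text): a dense layer is just a weight matrix multiplying the previous layer's outputs, plus a bias.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def dense(x, W, b):
    # Every unit in this layer receives a weighted sum of ALL units in
    # the previous layer: a matrix-vector product plus a per-unit bias.
    return W @ x + b

x = rng.normal(size=4)        # 4 units in the previous layer
W = rng.normal(size=(3, 4))   # 3 units here, each with 4 incoming weights
b = np.zeros(3)               # one bias per unit

print(dense(x, W, b))         # 3 weighted sums (pre-activations)
```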
The input (e.g., an image) is transformed by each layer in turn, in what is known as a forward pass through the network, until it reaches the output layer. Specifically, each unit applies a nonlinear transformation to a weighted sum of its inputs and passes the result on to the next layer. In the final output layer, a single unit outputs the probability that the original input belongs to a particular category (e.g., smiling).
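A forward pass can be sketched by chaining such layers, applying a nonlinearity to each weighted sum. In the sketch below, the ReLU hidden activation, the sigmoid output, and all layer sizes are assumptions chosen for illustration; the sigmoid squashes the final weighted sum into (0, 1), giving the kind of probability described above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def relu(z):
    return np.maximum(0.0, z)        # a common nonlinear transformation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes to (0, 1): a probability

# Two hidden layers and a single-unit output layer (sizes are arbitrary).
layers = [
    (rng.normal(size=(8, 16)), np.zeros(8)),
    (rng.normal(size=(4, 8)),  np.zeros(4)),
    (rng.normal(size=(1, 4)),  np.zeros(1)),
]

def forward(x):
    # Each layer: a nonlinearity applied to a weighted sum of its inputs.
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return sigmoid(W @ x + b)        # e.g., P(input is "smiling")

x = rng.normal(size=16)              # stand-in for a flattened input image
print(forward(x))                    # a single probability
```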
The magic of deep neural networks lies in finding the set of weights for each layer that results in the most accurate predictions. The process of finding these weights is what we mean by training the network.
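As a minimal illustration of what "finding the weights" means, the sketch below trains a single dense unit with a sigmoid output by gradient descent on binary cross-entropy; the toy data, learning rate, and single-layer setup are all assumptions made for brevity, not the method from the text.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: 100 examples with 3 features, labels from a hidden rule.
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

W = np.zeros(3)   # the weights we are searching for
b = 0.0
lr = 0.1          # learning rate (illustrative)

for step in range(500):
    p = sigmoid(X @ W + b)            # forward pass: predicted probabilities
    # Gradient of binary cross-entropy with respect to W and b.
    grad_W = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    W -= lr * grad_W                  # nudge weights toward better predictions
    b -= lr * grad_b

print("accuracy:", np.mean((sigmoid(X @ W + b) > 0.5) == y))
```

Real networks repeat exactly this loop over many layers at once, with the gradients computed automatically by backpropagation.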