Thinking about Neural Network Layers

Neural network layers fall into two broad types. One type tries to expose information that the machine learning model can use to learn; the other helps the model use the information it already has to learn.

Examples of the first type are the number of filters in a convolutional layer, skip connections, and max pooling.
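Max pooling is a good illustration of "exposing" information: it surfaces the strongest activation in each window while discarding exact position. A minimal 1-D sketch (the function name and window/stride defaults are my own, not from any particular library):

```python
def max_pool_1d(xs, window=2, stride=2):
    """Downsample by keeping the max in each window, exposing the
    strongest local activation and discarding its exact position."""
    return [max(xs[i:i + window])
            for i in range(0, len(xs) - window + 1, stride)]

print(max_pool_1d([1, 3, 2, 5, 4, 0]))  # → [3, 5, 4]
```

The model downstream no longer has to learn that small positional shifts are irrelevant; the layer bakes that in.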

Examples of the second type are dropout and batch normalization.
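Dropout, by contrast, does not add information; it regularizes how the model learns from what it already has. A sketch of the standard "inverted dropout" formulation (the signature here is illustrative, not taken from a specific framework):

```python
import random

def dropout(xs, p=0.5, training=True):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p) so the expected
    value is unchanged. At inference time, pass values through."""
    if not training:
        return list(xs)
    return [0.0 if random.random() < p else x / (1 - p) for x in xs]
```

Because each unit can vanish at any step, no unit can rely on a specific co-activated neighbor, which pushes the network toward redundant, more robust representations.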

New efforts in architecture development typically focus on one of two things: exposing more relevant information by constraining what kind of information the network has access to, or helping the network exploit the information it already has by making it easier to learn from.

If your model performs poorly, ask yourself: what kind of information would help the model make better choices on this dataset? Then create a layer that exposes that information to the model as it trains.
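As a concrete (and entirely hypothetical) example of exposing information: if a model keeps failing to relate two raw input columns, you can compute the relationship yourself and feed it in as an extra feature, instead of hoping the network learns the arithmetic:

```python
def expose_ratio_feature(rows):
    """Hypothetical feature-exposing step: suppose the model struggles
    to relate two raw columns (say, total price and area). Appending
    their ratio as an extra input means the network no longer has to
    learn division on its own."""
    return [row + [row[0] / row[1]] for row in rows]

print(expose_ratio_feature([[10.0, 2.0]]))  # → [[10.0, 2.0, 5.0]]
```

The same idea applies inside the network: a custom layer that computes a quantity you know matters hands the model that information directly.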