4.2. Implementation of Multilayer Perceptron from Scratch

Now that we have characterized multilayer perceptrons (MLPs) mathematically, let us try to implement one ourselves.

import d2l
from mxnet import gluon, np, npx
npx.set_np()

To compare against our previous results achieved with (linear) softmax regression (Section 3.6), we will continue to work with the Fashion-MNIST image classification dataset (Section 3.5).

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)

4.2.1. Initializing Model Parameters

Recall that Fashion-MNIST contains \(10\) classes, and that each image consists of a \(28 \times 28 = 784\) grid of grayscale pixel values. Again, we will disregard the spatial structure among the pixels (for now), so we can think of this as simply a classification dataset with \(784\) input features and \(10\) classes. To begin, we will implement an MLP with one hidden layer and \(256\) hidden units. Note that we can regard both of these quantities as hyperparameters and ought in general to set them based on performance on validation data. Typically, we choose layer widths in powers of \(2\), which tends to be computationally efficient because of how memory is allotted and addressed in hardware.

Again, we will represent our parameters with several ndarrays. Note that for every layer, we must keep track of one weight matrix and one bias vector. As always, we call attach_grad to allocate memory for the gradients (of the loss) with respect to these parameters.

num_inputs, num_outputs, num_hiddens = 784, 10, 256

W1 = np.random.normal(scale=0.01, size=(num_inputs, num_hiddens))
b1 = np.zeros(num_hiddens)
W2 = np.random.normal(scale=0.01, size=(num_hiddens, num_outputs))
b2 = np.zeros(num_outputs)
params = [W1, b1, W2, b2]

for param in params:
    param.attach_grad()
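
After the call to attach_grad, each parameter carries a gradient buffer of matching shape. As an illustrative check (not in the original):

W1.grad.shape  # matches W1.shape: (784, 256)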

4.2.2. Activation Function

To make sure we know how everything works, we will implement the ReLU activation ourselves using the maximum function rather than invoking npx.relu directly.

def relu(X):
    return np.maximum(X, 0)
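
As a quick sanity check (illustrative values, not part of the original), negative entries are clamped to zero while nonnegative entries pass through unchanged:

relu(np.array([-2.0, 0.0, 3.0]))  # yields array([0., 0., 3.])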

4.2.3. The Model

Because we are disregarding spatial structure, we reshape each 2D image into a flat vector of length num_inputs. Finally, we implement our model with just a few lines of code.
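
Concretely, for a minibatch of inputs \(\mathbf{X}\) with one flattened example per row, the model below computes

\[\mathbf{H} = \mathrm{ReLU}(\mathbf{X} \mathbf{W}_1 + \mathbf{b}_1), \qquad \mathbf{O} = \mathbf{H} \mathbf{W}_2 + \mathbf{b}_2,\]

where \(\mathbf{O}\) contains the unnormalized outputs (logits) for the \(10\) classes.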

def net(X):
    # Flatten each 2D image into a row vector of length num_inputs
    X = X.reshape(-1, num_inputs)
    # Hidden layer, then the output layer (returning logits)
    H = relu(np.dot(X, W1) + b1)
    return np.dot(H, W2) + b2
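
As a quick shape check (illustrative, not in the original), a minibatch of two \(28 \times 28\) images yields one row of \(10\) logits per example:

net(np.zeros((2, 1, 28, 28))).shape  # (2, 10)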

4.2.4. The Loss Function

To ensure numerical stability, and because we already implemented the softmax function from scratch (Section 3.6), we leverage Gluon’s integrated function for calculating the softmax and cross-entropy loss. Recall our earlier discussion of these intricacies (Section 4.1). We encourage the interested reader to examine the source code for mxnet.gluon.loss.SoftmaxCrossEntropyLoss to deepen their knowledge of implementation details.

loss = gluon.loss.SoftmaxCrossEntropyLoss()
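
To see why combining the softmax and the cross-entropy helps numerically, here is a rough sketch of a stable cross-entropy computed directly from the logits via the log-sum-exp trick. This is an illustration only (the function name is ours), not Gluon’s actual implementation:

def stable_cross_entropy(logits, y):
    # Shifting by the row-wise maximum leaves the softmax unchanged
    # but keeps np.exp from overflowing
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-probability of the correct class for each example
    return -log_probs[range(len(y)), y]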

4.2.5. Training

Fortunately, the training loop for MLPs is exactly the same as for softmax regression. Leveraging the d2l package again, we call the train_ch3 function (see Section 3.6), setting the number of epochs to \(10\) and the learning rate to \(0.5\).

num_epochs, lr = 10, 0.5
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs,
              lambda batch_size: d2l.sgd(params, lr, batch_size))
[Output: plot of training loss, training accuracy, and test accuracy over the training epochs]
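
The updater we pass to train_ch3 applies minibatch stochastic gradient descent via d2l.sgd, defined earlier in the book. A rough sketch of such an updater (the actual d2l implementation may differ in details):

def sgd(params, lr, batch_size):
    # Update each parameter in place, averaging gradients over the minibatch
    for param in params:
        param[:] = param - lr * param.grad / batch_size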

To evaluate the learned model, we apply it on some test data.

d2l.predict_ch3(net, test_iter)
[Output: a sample of test images with their true and predicted labels]

This looks a bit better than our previous result using simple linear models, and gives us some signal that we are on the right path.

4.2.6. Summary

We saw that implementing a simple MLP is easy, even when done manually. That said, with a large number of layers, this can still get messy (e.g., naming and keeping track of our model’s parameters).
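
For instance, a hypothetical helper (not part of d2l) that builds and registers parameters for an arbitrary stack of fully-connected layers might look like the following sketch, avoiding the need to name W1, b1, W2, b2, ... by hand:

def init_params(layer_sizes):
    # layer_sizes, e.g., [784, 256, 128, 10], lists the width of each layer
    params = []
    for num_in, num_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        W = np.random.normal(scale=0.01, size=(num_in, num_out))
        b = np.zeros(num_out)
        params += [W, b]
    for param in params:
        param.attach_grad()
    return params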

4.2.7. Exercises

  1. Change the value of the hyperparameter num_hiddens and see how this hyperparameter influences your results. Determine the best value of this hyperparameter, keeping all others constant.

  2. Try adding an additional hidden layer to see how it affects the results.

  3. How does changing the learning rate alter your results? Fixing the model architecture and other hyperparameters (including number of epochs), what learning rate gives you the best results?

  4. What is the best result you can get by optimizing over all the parameters (learning rate, iterations, number of hidden layers, number of hidden units per layer) jointly?

  5. Describe why it is much more challenging to deal with multiple hyperparameters.

  6. What is the smartest strategy you can think of for structuring a search over multiple hyperparameters?

4.2.8. Discussions
