# 3.6. Implementation of Softmax Regression from Scratch

Just as we implemented linear regression from scratch, we believe that
multiclass logistic (softmax) regression is similarly fundamental and
you ought to know the gory details of how to implement it from scratch.
As with linear regression, after doing things by hand we will breeze
through an implementation in Gluon for comparison. To begin, let’s
import our packages (only `autograd` and `nd` are needed here because
we will be doing the heavy lifting ourselves).

```
import d2l
from mxnet import autograd, nd, gluon
from IPython import display
```

We will work with the Fashion-MNIST dataset just introduced, setting up an iterator with batch size 256.

```
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
```

## 3.6.1. Initialize Model Parameters

Just as in linear regression, we represent each example as a vector. Since each example is a \(28 \times 28\) image, we can flatten it, treating each example as a \(784\)-dimensional vector. In the future, we’ll talk about more sophisticated strategies for exploiting the spatial structure in images, but for now we treat each pixel location as just another feature.
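The following is a minimal sketch of this flattening, using a hypothetical batch of zeros rather than real data (it reuses the `nd` import from above; only the shapes matter here):

```
# Minimal sketch: flatten a hypothetical batch of 28x28 images into
# 784-dimensional feature vectors
images = nd.zeros((256, 1, 28, 28))   # a made-up batch of 256 single-channel images
flat = images.reshape((-1, 28 * 28))  # one row of 784 features per example
flat.shape                            # (256, 784)
```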

Recall that in softmax regression, we have as many outputs as there are categories. Because our dataset has \(10\) categories, our network will have an output dimension of \(10\). Consequently, our weights will constitute a \(784 \times 10\) matrix and the biases will constitute a \(1 \times 10\) vector. As with linear regression, we will initialize our weights \(W\) with Gaussian noise and our biases to take the initial value \(0\).

```
num_inputs = 784
num_outputs = 10
W = nd.random.normal(scale=0.01, shape=(num_inputs, num_outputs))
b = nd.zeros(num_outputs)
```

Recall that we need to *attach gradients* to the model parameters. More
literally, we are allocating memory for future gradients to be stored
and notifying MXNet that we want gradients to be calculated with
respect to these parameters in the first place.

```
W.attach_grad()
b.attach_grad()
```

## 3.6.2. The Softmax

Before implementing the softmax regression model, let’s briefly review
how operators such as `sum` work along specific dimensions in an
NDArray. Given a matrix `X`, we can sum over all elements (the default) or
only over elements in the same column (`axis=0`) or the same row
(`axis=1`). Note that if `X` is an array with shape `(2, 3)` and
we sum over the columns (`X.sum(axis=0)`), the result will be a (1D)
vector with shape `(3,)`. If we want to keep the number of axes in the
original array (resulting in a 2D array with shape `(1, 3)`), rather
than collapsing out the dimension that we summed over, we can specify
`keepdims=True` when invoking `sum`.

```
X = nd.array([[1, 2, 3], [4, 5, 6]])
X.sum(axis=0, keepdims=True), X.sum(axis=1, keepdims=True)
```

```
(
[[5. 7. 9.]]
<NDArray 1x3 @cpu(0)>,
[[ 6.]
[15.]]
<NDArray 2x1 @cpu(0)>)
```

We are now ready to implement the softmax function. Recall that softmax
consists of three steps: First, we exponentiate each term (using `exp`).
Then, we sum over each row (we have one row per example in the batch) to
get the normalization constant for each example. Finally, we divide
each row by its normalization constant, ensuring that the result sums to
\(1\). Before looking at the code, let’s recall what this looks like
expressed as an equation:

\(\mathrm{softmax}(\mathbf{X})_{ij} = \frac{\exp(X_{ij})}{\sum_k \exp(X_{ik})}\)

The denominator, or normalization constant, is also sometimes called the partition function (and its logarithm the log-partition function). The origins of that name are in statistical physics, where a related equation models the distribution over an ensemble of particles.

```
def softmax(X):
    X_exp = X.exp()
    partition = X_exp.sum(axis=1, keepdims=True)
    return X_exp / partition  # The broadcast mechanism is applied here
```

As you can see, for any random input, we turn each element into a non-negative number. Moreover, each row sums up to 1, as is required for a probability. Note that while this looks correct mathematically, we were a bit sloppy in our implementation because we failed to take precautions against numerical overflow or underflow due to large (or very small) elements of the matrix, as we did in Section 2.5.

```
X = nd.random.normal(shape=(2, 5))
X_prob = softmax(X)
X_prob, X_prob.sum(axis=1)
```

```
(
[[0.21324193 0.33961776 0.1239742 0.27106097 0.05210521]
[0.11462264 0.3461234 0.19401033 0.29583326 0.04941036]]
<NDArray 2x5 @cpu(0)>,
[1.0000001 1. ]
<NDArray 2 @cpu(0)>)
```
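For reference, a common way to guard against overflow is to subtract each row’s maximum before exponentiating, which leaves the softmax output mathematically unchanged. The sketch below only illustrates that trick and is not used in the rest of this section:

```
def stable_softmax(X):
    # Subtracting the row-wise maximum does not change the result,
    # but keeps the arguments of exp() from becoming too large
    X_shifted = X - X.max(axis=1, keepdims=True)
    X_exp = X_shifted.exp()
    partition = X_exp.sum(axis=1, keepdims=True)
    return X_exp / partition
```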

## 3.6.3. The Model

Now that we have defined the softmax operation, we can implement the
softmax regression model. The code below defines the forward pass
through the network. Note that we flatten each original image in the
batch into a vector of length `num_inputs` with the `reshape`
function before passing the data through our model.

```
def net(X):
    return softmax(nd.dot(X.reshape((-1, num_inputs)), W) + b)
```

## 3.6.4. The Loss Function

Next, we need to implement the cross-entropy loss function, introduced in Section 3.4. This may be the most common loss function in all of deep learning because, at the moment, classification problems far outnumber regression problems.

Recall that cross-entropy takes the negative log likelihood of the
predicted probability assigned to the true label, \(-\log p(y|x)\).
Rather than iterating over the predictions with a Python `for` loop
(which tends to be inefficient), we can use the `pick` function, which
allows us to select the appropriate terms from the matrix of softmax
entries easily. Below, we illustrate the `pick` function on a toy
example, with 3 categories and 2 examples.

```
y_hat = nd.array([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y = nd.array([0, 2], dtype='int32')
nd.pick(y_hat, y)
```

```
[0.1 0.5]
<NDArray 2 @cpu(0)>
```

Now we can implement the cross-entropy loss function efficiently with just one line of code.

```
def cross_entropy(y_hat, y):
    return - nd.pick(y_hat, y).log()
```
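As a quick sanity check on the toy example above (the expected values below are computed by hand, not reproduced from a run): the loss for the first example should be \(-\log 0.1 \approx 2.30\) and for the second \(-\log 0.5 \approx 0.69\).

```
# Sanity check on the toy y_hat and y defined above:
# expect roughly [2.30, 0.69], i.e. -log(0.1) and -log(0.5)
cross_entropy(y_hat, y)
```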

## 3.6.5. Classification Accuracy

Given the predicted probability distribution `y_hat`, we typically
choose the class with the highest predicted probability whenever we must
output a *hard* prediction. Indeed, many applications require that we
make a choice. Gmail must categorize an email into Primary, Social,
Updates, or Forums. It might estimate probabilities internally, but at
the end of the day it has to choose one among the categories.

When predictions are consistent with the actual category `y`, they are
correct. The classification accuracy is the fraction of all predictions
that are correct. Although we cannot optimize accuracy directly (it is
not differentiable), it’s often the performance metric that we care most
about, and we will nearly always report it when training classifiers.

To compute accuracy we do the following: First, we execute
`y_hat.argmax(axis=1)` to gather the predicted classes (given by the
indices of the largest entries in each row). The result has the same shape
as the variable `y`. Now we just need to check how frequently the two
match. Since the equality operator `==` is datatype-sensitive (e.g., an
`int` and a `float32` are never equal), we also need to convert both
to the same type (we pick `float32`). The result is an NDArray
containing entries of 0 (false) and 1 (true). Summing these entries
yields the number of correct predictions; dividing by the number of
examples yields the accuracy.

```
# Save to the d2l package.
def accuracy(y_hat, y):
    return (y_hat.argmax(axis=1) == y.astype('float32')).sum().asscalar()
```

We will continue to use the variables `y_hat` and `y` defined earlier in
the `pick` example as the predicted probability distribution and the
labels, respectively. We can see that the first example’s predicted category is
2 (the largest element of the row is 0.6 with an index of 2), which is
inconsistent with the actual label, 0. The second example’s predicted
category is 2 (the largest element of the row is 0.5 with an index of
2), which is consistent with the actual label, 2. Therefore, the
classification accuracy rate for these two examples is 0.5.

```
accuracy(y_hat, y) / len(y)
```

```
0.5
```

Similarly, we can evaluate the accuracy for model `net` on the dataset
(accessed via `data_iter`).

```
# Save to the d2l package.
def evaluate_accuracy(net, data_iter):
    metric = Accumulator(2)  # num_correct_examples, num_examples
    for X, y in data_iter:
        y = y.astype('float32')
        metric.add(accuracy(net(X), y), y.size)
    return metric[0] / metric[1]
```

Here `Accumulator` is a utility class that accumulates sums over multiple
numbers.

```
# Save to the d2l package.
class Accumulator(object):
    """Sum a list of numbers over time."""
    def __init__(self, n):
        self.data = [0.0] * n
    def add(self, *args):
        self.data = [a + b for a, b in zip(self.data, args)]
    def reset(self):
        self.data = [0] * len(self.data)
    def __getitem__(self, i):
        return self.data[i]
```
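To make the interface concrete, here is a small hypothetical usage of `Accumulator` (the numbers are made up for illustration):

```
# Hypothetical example: accumulate (num_correct, num_examples) over two batches
metric = Accumulator(2)
metric.add(1, 4)       # 1 correct prediction out of 4 examples
metric.add(3, 4)       # 3 correct predictions out of another 4 examples
metric[0] / metric[1]  # 4 / 8 = 0.5
```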

Because we initialized the `net` model with random weights, the
accuracy of this model should be close to random guessing, i.e., 0.1 for
10 classes.

```
evaluate_accuracy(net, test_iter)
```

```
0.0925
```

## 3.6.6. Model Training

The training loop for softmax regression should look strikingly familiar
if you read through our implementation of linear regression in
Section 3.2. Here we refactor the implementation
to make it reusable. First, we define a function to train for one data
epoch. Note that `updater` is a general function to update the model
parameters, which accepts the batch size as an argument. It can be
either a wrapper of `d2l.sgd` or a Gluon trainer.

```
# Save to the d2l package.
def train_epoch_ch3(net, train_iter, loss, updater):
    metric = Accumulator(3)  # train_loss_sum, train_acc_sum, num_examples
    if isinstance(updater, gluon.Trainer):
        updater = updater.step
    for X, y in train_iter:
        # Compute gradients and update parameters
        with autograd.record():
            y_hat = net(X)
            l = loss(y_hat, y)
        l.backward()
        updater(X.shape[0])
        metric.add(l.sum().asscalar(), accuracy(y_hat, y), y.size)
    # Return training loss and training accuracy
    return metric[0] / metric[2], metric[1] / metric[2]
```

Before showing the implementation of the training function, we define a utility class that plots data in animation. Again, it aims to simplify the code in later chapters.

```
# Save to the d2l package.
class Animator(object):
    def __init__(self, xlabel=None, ylabel=None, legend=[], xlim=None,
                 ylim=None, xscale='linear', yscale='linear', fmts=None,
                 nrows=1, ncols=1, figsize=(3.5, 2.5)):
        """Incrementally plot multiple lines."""
        d2l.use_svg_display()
        self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize)
        if nrows * ncols == 1: self.axes = [self.axes, ]
        # Use a lambda to capture the axis-configuration arguments
        self.config_axes = lambda: d2l.set_axes(
            self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
        self.X, self.Y, self.fmts = None, None, fmts

    def add(self, x, y):
        """Add multiple data points into the figure."""
        if not hasattr(y, "__len__"): y = [y]
        n = len(y)
        if not hasattr(x, "__len__"): x = [x] * n
        if not self.X: self.X = [[] for _ in range(n)]
        if not self.Y: self.Y = [[] for _ in range(n)]
        if not self.fmts: self.fmts = ['-'] * n
        for i, (a, b) in enumerate(zip(x, y)):
            if a is not None and b is not None:
                self.X[i].append(a)
                self.Y[i].append(b)
        self.axes[0].cla()
        for x, y, fmt in zip(self.X, self.Y, self.fmts):
            self.axes[0].plot(x, y, fmt)
        self.config_axes()
        display.display(self.fig)
        display.clear_output(wait=True)
```

The training function then runs multiple epochs and visualizes the training progress.

```
# Save to the d2l package.
def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater):
    animator = Animator(xlabel='epoch', xlim=[1, num_epochs],
                        ylim=[0.3, 0.9],
                        legend=['train loss', 'train acc', 'test acc'])
    for epoch in range(num_epochs):
        train_metrics = train_epoch_ch3(net, train_iter, loss, updater)
        test_acc = evaluate_accuracy(net, test_iter)
        animator.add(epoch + 1, train_metrics + (test_acc,))
```

Again, we use minibatch stochastic gradient descent to optimize the
loss function of the model. Note that the number of epochs
(`num_epochs`) and the learning rate (`lr`) are both adjustable
hyperparameters. By changing their values, we may be able to increase
the classification accuracy of the model. In practice we’ll want to
split our data three ways into training, validation, and test data,
using the validation data to choose the best values of our
hyperparameters.

```
num_epochs, lr = 10, 0.1
updater = lambda batch_size: d2l.sgd([W, b], lr, batch_size)
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, updater)
```

## 3.6.7. Prediction

Now that training is complete, our model is ready to classify some images. Given a series of images, we will compare their actual labels (first line of text output) and the model predictions (second line of text output).

```
# Save to the d2l package.
def predict_ch3(net, test_iter, n=6):
    for X, y in test_iter:
        break
    trues = d2l.get_fashion_mnist_labels(y.asnumpy())
    preds = d2l.get_fashion_mnist_labels(net(X).argmax(axis=1).asnumpy())
    titles = [true + '\n' + pred for true, pred in zip(trues, preds)]
    d2l.show_images(X[0:n].reshape((n, 28, 28)), 1, n, titles=titles[0:n])

predict_ch3(net, test_iter)
```

## 3.6.8. Summary

With softmax regression, we can train models for multi-category classification. The training loop is very similar to that in linear regression: retrieve and read data, define models and loss functions, then train models using optimization algorithms. As you’ll soon find out, most common deep learning models have similar training procedures.

## 3.6.9. Exercises

- In this section, we directly implemented the softmax function based on the mathematical definition of the softmax operation. What problems might this cause (hint: try to calculate the size of \(\exp(50)\))?
- The function `cross_entropy` in this section is implemented according to the definition of the cross-entropy loss function. What could be the problem with this implementation (hint: consider the domain of the logarithm)?
- What solutions can you think of to fix the two problems above?
- Is it always a good idea to return the most likely label? E.g., would you do this for medical diagnosis?
- Assume that we want to use softmax regression to predict the next word based on some features. What are some problems that might arise from a large vocabulary?