# 11.3. Gradient Descent

In this section we are going to introduce the basic concepts underlying gradient descent. This is brief by necessity. See e.g., [Boyd & Vandenberghe, 2004] for an in-depth introduction to convex optimization. Although the latter is rarely used directly in deep learning, an understanding of gradient descent is key to understanding stochastic gradient descent algorithms. For instance, the optimization problem might diverge due to an overly large learning rate. This phenomenon can already be seen in gradient descent. Likewise, preconditioning is a common technique in gradient descent and carries over to more advanced algorithms. Let’s start with a simple special case.

## 11.3.1. Gradient Descent in One Dimension

Gradient descent in one dimension is an excellent example to explain why the gradient descent algorithm may reduce the value of the objective function. Consider some continuously differentiable real-valued function \(f: \mathbb{R} \rightarrow \mathbb{R}\). Using a Taylor expansion (Section 17.3) we obtain that

\[f(x + \epsilon) = f(x) + \epsilon f'(x) + \mathcal{O}(\epsilon^2). \tag{11.3.1}\]

That is, in first approximation \(f(x+\epsilon)\) is given by the function value \(f(x)\) and the first derivative \(f'(x)\) at \(x\). It is not unreasonable to assume that for small \(\epsilon\) moving in the direction of the negative gradient will decrease \(f\). To keep things simple we pick a fixed step size \(\eta > 0\) and choose \(\epsilon = -\eta f'(x)\). Plugging this into the Taylor expansion above we get

\[f(x - \eta f'(x)) = f(x) - \eta f'^2(x) + \mathcal{O}(\eta^2 f'^2(x)).\]

If the derivative \(f'(x) \neq 0\) does not vanish we make progress since \(\eta f'^2(x)>0\). Moreover, we can always choose \(\eta\) small enough for the higher order terms to become irrelevant. Hence we arrive at

\[f(x - \eta f'(x)) \lessapprox f(x).\]

This means that, if we use the update

\[x \leftarrow x - \eta f'(x)\]

to iterate \(x\), the value of the function \(f(x)\) might decline. Therefore, in gradient descent we first choose an initial value \(x\) and a constant \(\eta > 0\) and then use them to iterate \(x\) until a stopping condition is reached, for example, when the magnitude of the gradient \(|f'(x)|\) is small enough or the number of iterations has reached a certain value.
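As a minimal sketch in plain Python (the helper name `gd_until_converged` and the tolerance `tol` are illustrative, not part of the original code), the iteration with a gradient-magnitude stopping condition might look like this:

```python
def gd_until_converged(f_grad, x, eta=0.2, tol=1e-6, max_iter=1000):
    """Iterate x <- x - eta * f'(x) until |f'(x)| < tol or max_iter is hit."""
    for i in range(max_iter):
        g = f_grad(x)
        if abs(g) < tol:  # stopping condition on the gradient magnitude
            break
        x -= eta * g
    return x, i

# Minimize f(x) = x**2, whose derivative is 2 * x, starting from x = 10
x_opt, num_steps = gd_until_converged(lambda x: 2 * x, x=10.0)
```

With \(\eta = 0.2\) each step multiplies \(x\) by \(1 - 2\eta = 0.6\), so the loop stops after a few dozen iterations rather than exhausting `max_iter`.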

For simplicity we choose the objective function \(f(x)=x^2\) to illustrate how to implement gradient descent. Although we know that \(x=0\) is the solution to minimize \(f(x)\), we still use this simple function to observe how \(x\) changes. As always, we begin by importing all required modules.

```
%matplotlib inline
import d2l
from mxnet import np, npx
npx.set_np()

def f(x):
    return x**2  # Objective function

def gradf(x):
    return 2 * x  # Its derivative
```

Next, we use \(x=10\) as the initial value and set \(\eta=0.2\). Using gradient descent to iterate \(x\) for 10 steps we can see that, eventually, the value of \(x\) approaches the optimal solution.

```
def gd(eta):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * gradf(x)
        results.append(x)
    print('epoch 10, x:', x)
    return results

res = gd(0.2)
```

```
epoch 10, x: 0.06046617599999997
```

The progress of optimizing over \(x\) can be plotted as follows.

```
def show_trace(res):
    n = max(abs(min(res)), abs(max(res)))
    f_line = np.arange(-n, n, 0.01)
    d2l.set_figsize((3.5, 2.5))
    d2l.plot([f_line, res], [[f(x) for x in f_line], [f(x) for x in res]],
             'x', 'f(x)', fmts=['-', '-o'])

show_trace(res)
```

### 11.3.1.1. Learning Rate

The learning rate \(\eta\) can be set by the algorithm designer. If we use a learning rate that is too small, \(x\) will update very slowly, requiring more iterations to obtain a good solution. To show what happens in such a case, consider the progress in the same optimization problem for \(\eta = 0.05\). As we can see, even after 10 steps we are still very far from the optimal solution.

```
show_trace(gd(0.05))
```

```
epoch 10, x: 3.4867844009999995
```

Conversely, if we use an excessively high learning rate, \(\left|\eta f'(x)\right|\) might be too large for the first-order Taylor expansion formula. That is, the term \(\mathcal{O}(\eta^2 f'^2(x))\) in (11.3.1) might become significant. In this case, we cannot guarantee that the iteration of \(x\) will be able to lower the value of \(f(x)\). For example, when we set the learning rate to \(\eta=1.1\), \(x\) overshoots the optimal solution \(x=0\) and gradually diverges.

```
show_trace(gd(1.1))
```

```
epoch 10, x: 61.917364224000096
```

### 11.3.1.2. Local Minima

To illustrate what happens for nonconvex functions consider the case of \(f(x) = x \cdot \cos(cx)\) for some constant \(c\). This function has infinitely many local minima. Depending on our choice of learning rate and on how well conditioned the problem is, we may end up with one of many solutions. The example below illustrates how an (unrealistically) high learning rate will lead to a poor local minimum.

```
c = 0.15 * np.pi

def f(x):
    return x * np.cos(c * x)

def gradf(x):
    return np.cos(c * x) - c * x * np.sin(c * x)

show_trace(gd(2))
```

```
epoch 10, x: -1.528165927635083
```

## 11.3.2. Multivariate Gradient Descent

Now that we have a better intuition of the univariate case, let’s consider the situation where \(\mathbf{x} \in \mathbb{R}^d\). That is, the objective function \(f: \mathbb{R}^d \to \mathbb{R}\) maps vectors into scalars. Correspondingly its gradient is multivariate, too. It is a vector consisting of \(d\) partial derivatives:

\[\nabla f(\mathbf{x}) = \bigg[\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \ldots, \frac{\partial f(\mathbf{x})}{\partial x_d}\bigg]^\top.\]

Each partial derivative element \(\partial f(\mathbf{x})/\partial x_i\) in the gradient indicates the rate of change of \(f\) at \(\mathbf{x}\) with respect to the input \(x_i\). As before in the univariate case we can use the corresponding Taylor approximation for multivariate functions to get some idea of what we should do. In particular, we have that

\[f(\mathbf{x} + \boldsymbol{\epsilon}) = f(\mathbf{x}) + \boldsymbol{\epsilon}^\top \nabla f(\mathbf{x}) + \mathcal{O}(\|\boldsymbol{\epsilon}\|^2).\]

In other words, up to second order terms in \(\boldsymbol{\epsilon}\) the direction of steepest descent is given by the negative gradient \(-\nabla f(\mathbf{x})\). Choosing a suitable learning rate \(\eta > 0\) yields the prototypical gradient descent algorithm:

\(\mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f(\mathbf{x}).\)

To see how the algorithm behaves in practice let’s construct an objective function \(f(\mathbf{x})=x_1^2+2x_2^2\) with a two-dimensional vector \(\mathbf{x} = [x_1, x_2]^\top\) as input and a scalar as output. The gradient is given by \(\nabla f(\mathbf{x}) = [2x_1, 4x_2]^\top\). We will observe the trajectory of \(\mathbf{x}\) by gradient descent from the initial position \([-5, -2]\). We need two more helper functions. The first uses an update function and applies it \(20\) times to the initial value. The second helper visualizes the trajectory of \(\mathbf{x}\).

```
# Saved in the d2l package for later use
def train_2d(trainer, steps=20):
    """Optimize a 2-dim objective function with a customized trainer."""
    # s1 and s2 are internal state variables and will
    # be used later in the chapter
    x1, x2, s1, s2 = -5, -2, 0, 0
    results = [(x1, x2)]
    for i in range(steps):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results

# Saved in the d2l package for later use
def show_trace_2d(f, results):
    """Show the trace of 2D variables during optimization."""
    d2l.set_figsize((3.5, 2.5))
    d2l.plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    d2l.plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    d2l.plt.xlabel('x1')
    d2l.plt.ylabel('x2')
```

Next, we observe the trajectory of the optimization variable \(\mathbf{x}\) for learning rate \(\eta = 0.1\). We can see that after 20 steps the value of \(\mathbf{x}\) approaches its minimum at \([0, 0]\). Progress is fairly well-behaved albeit rather slow.

```
def f(x1, x2):
    return x1 ** 2 + 2 * x2 ** 2  # Objective

def gradf(x1, x2):
    return (2 * x1, 4 * x2)  # Gradient

def gd(x1, x2, s1, s2):
    (g1, g2) = gradf(x1, x2)  # Compute gradient
    return (x1 - eta * g1, x2 - eta * g2, 0, 0)  # Update variables

eta = 0.1
show_trace_2d(f, train_2d(gd))
```

```
epoch 20, x1 -0.057646, x2 -0.000073
```

## 11.3.3. Adaptive Methods

As we could see in Section 11.3.1.1, getting the
learning rate \(\eta\) “just right” is tricky. If we pick it too
small, we make no progress. If we pick it too large, the solution
oscillates and in the worst case it might even diverge. What if we could
determine \(\eta\) automatically or get rid of having to select a
step size at all? Second order methods that look not only at the value
and gradient of the objective but also at its *curvature* can help in
this case. While these methods cannot be applied to deep learning
directly due to the computational cost, they provide useful intuition
into how to design advanced optimization algorithms that mimic many of
the desirable properties of the algorithms outlined below.

### 11.3.3.1. Newton’s Method

Reviewing the Taylor expansion of \(f\) there is no need to stop after the first term. In fact, we can write it as

\[f(\mathbf{x} + \boldsymbol{\epsilon}) = f(\mathbf{x}) + \boldsymbol{\epsilon}^\top \nabla f(\mathbf{x}) + \frac{1}{2} \boldsymbol{\epsilon}^\top \nabla \nabla^\top f(\mathbf{x}) \boldsymbol{\epsilon} + \mathcal{O}(\|\boldsymbol{\epsilon}\|^3). \tag{11.3.7}\]

To avoid cumbersome notation we define
\(H_f := \nabla \nabla^\top f(\mathbf{x})\) to be the *Hessian* of
\(f\). This is a \(d \times d\) matrix. For small \(d\) and
simple problems \(H_f\) is easy to compute. For deep networks, on
the other hand, \(H_f\) may be prohibitively large, due to the cost
of storing \(\mathcal{O}(d^2)\) entries. Furthermore it may be too
expensive to compute via backprop as we would need to apply backprop to
the backpropagation call graph. For now let’s ignore such considerations
and look at what algorithm we’d get.

After all, the minimum of \(f\) satisfies \(\nabla f(\mathbf{x}) = 0\). Taking derivatives of (11.3.7) with regard to \(\boldsymbol{\epsilon}\) and ignoring higher order terms we arrive at

\[\nabla f(\mathbf{x}) + H_f \boldsymbol{\epsilon} = 0 \text{ and hence } \boldsymbol{\epsilon} = -H_f^{-1} \nabla f(\mathbf{x}).\]

That is, we need to invert the Hessian \(H_f\) as part of the optimization problem.

For \(f(x) = \frac{1}{2} x^2\) we have \(\nabla f(x) = x\) and \(H_f = 1\). Hence for any \(x\) we obtain \(\epsilon = -x\). In other words, a single step is sufficient to converge perfectly without the need for any adjustment! Alas, we got a bit lucky here since the Taylor expansion was exact. Let’s see what happens in other problems.
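As a quick plain-Python check (the quadratic \(f(x) = \frac{a}{2}(x-b)^2\) and the values of \(a, b\) below are illustrative, not from the original), one Newton step \(\epsilon = -f'(x)/f''(x) = -(x - b)\) lands exactly on the minimizer regardless of the starting point:

```python
def newton_step(x, grad, hess):
    """A single Newton update: x - f'(x) / f''(x)."""
    return x - grad(x) / hess(x)

# f(x) = 0.5 * a * (x - b)**2 has f'(x) = a * (x - b) and f''(x) = a
a, b = 3.0, 4.0  # illustrative constants
x_new = newton_step(10.0, grad=lambda x: a * (x - b), hess=lambda x: a)
# x_new is exactly b = 4.0, the minimizer
```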

```
c = 0.5

def f(x):
    return np.cosh(c * x)  # Objective

def gradf(x):
    return c * np.sinh(c * x)  # Derivative

def hessf(x):
    return c**2 * np.cosh(c * x)  # Hessian

# Hide learning rate for now
def newton(eta=1):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * gradf(x) / hessf(x)
        results.append(x)
    print('epoch 10, x:', x)
    return results

show_trace(newton())
```

```
epoch 10, x: 0.0
```

Now let’s see what happens when we have a *nonconvex* function, such as
\(f(x) = x \cos(c x)\). After all, note that in Newton’s method we
end up dividing by the Hessian. This means that if the second derivative
is *negative* we would walk into the direction of *increasing*
\(f\). That is a fatal flaw of the algorithm. Let’s see what happens
in practice.

```
c = 0.15 * np.pi

def f(x):
    return x * np.cos(c * x)

def gradf(x):
    return np.cos(c * x) - c * x * np.sin(c * x)

def hessf(x):
    return - 2 * c * np.sin(c * x) - x * c**2 * np.cos(c * x)

show_trace(newton())
```

```
epoch 10, x: 26.83413291324767
```

This went spectacularly wrong. How can we fix it? One way would be to “fix” the Hessian by taking its absolute value instead. Another strategy is to bring back the learning rate. This seems to defeat the purpose, but not quite. Having second order information allows us to be cautious whenever the curvature is large and to take longer steps whenever the objective is flat. Let’s see how this works with a slightly smaller learning rate, say \(\eta = 0.5\). As we can see, we have quite an efficient algorithm.

```
show_trace(newton(0.5))
```

```
epoch 10, x: 7.269860168684531
```
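The absolute-value fix mentioned above can be sketched as follows (a hypothetical `newton_abs` variant in plain Python, not part of the original code): dividing by \(|f''(x)|\) ensures the step always points downhill, while the damping factor \(\eta = 0.5\) guards against overshooting.

```python
import math

c = 0.15 * math.pi

def gradf(x):
    return math.cos(c * x) - c * x * math.sin(c * x)

def hessf(x):
    return -2 * c * math.sin(c * x) - x * c**2 * math.cos(c * x)

def newton_abs(x=10.0, eta=0.5, steps=10):
    """Damped Newton's method with the Hessian replaced by its absolute value."""
    for _ in range(steps):
        x -= eta * gradf(x) / abs(hessf(x))  # |f''| keeps the step a descent direction
    return x

x_final = newton_abs()  # settles near the local minimum around x ~ 7.3
```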

### 11.3.3.2. Convergence Analysis

We only analyze the convergence rate for convex and three times differentiable \(f\), where at its minimum \(x^*\) the second derivative is nonzero, i.e., where \(f''(x^*) > 0\). The multivariate proof is a straightforward extension of the argument below and omitted since it does not help us much in terms of intuition.

Denote by \(x_k\) the value of \(x\) at the \(k\)-th iteration and let \(e_k := x_k - x^*\) be the distance from optimality. By Taylor series expansion we have that the condition \(f'(x^*) = 0\) can be written as

\[0 = f'(x_k - e_k) = f'(x_k) - e_k f''(x_k) + \frac{1}{2} e_k^2 f'''(\xi_k).\]

This holds for some \(\xi_k \in [x_k - e_k, x_k]\). Recall that we have the update \(x_{k+1} = x_k - f'(x_k) / f''(x_k)\). Dividing the above expansion by \(f''(x_k)\) yields

\[e_k - \frac{f'(x_k)}{f''(x_k)} = \frac{1}{2} e_k^2 \frac{f'''(\xi_k)}{f''(x_k)}.\]

Plugging in the update equation yields \(e_{k+1} = \frac{1}{2} e_k^2 \, f'''(\xi_k)/f''(x_k)\). Consequently, whenever we are in a region where \(f'''(\xi_k)/(2 f''(x_k)) \leq c\) is bounded, we have a quadratically decreasing error \(e_{k+1} \leq c e_k^2\).
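The quadratic rate can be observed numerically. In the plain-Python check below, the choice \(f(x) = x - \log x\), with \(f'(x) = 1 - 1/x\), \(f''(x) = 1/x^2\), and minimizer \(x^* = 1\), is illustrative rather than from the original; for it the Newton update simplifies to \(x \leftarrow 2x - x^2\), so \(e_{k+1} = -e_k^2\) exactly.

```python
def newton_errors(x, grad, hess, steps=4):
    """Run Newton's method and record |x_k - 1|, the distance to the minimizer."""
    errors = []
    for _ in range(steps):
        x -= grad(x) / hess(x)
        errors.append(abs(x - 1.0))
    return errors

# f(x) = x - log(x) on x > 0: f'(x) = 1 - 1/x, f''(x) = 1/x**2
errors = newton_errors(0.5, grad=lambda x: 1 - 1 / x, hess=lambda x: 1 / x**2)
# errors start 0.25, 0.0625, 0.00390625, ... each the square of the previous one
```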

As an aside, optimization researchers call this *quadratic* convergence, whereas a condition such as \(e_{k+1} \leq \alpha e_k\) for some \(\alpha < 1\) would be called a *linear* rate of convergence. Note that this analysis comes with a number of caveats. First, we do not really have much of a guarantee for when we will reach the region of rapid convergence; we only know that once we reach it, convergence will be very quick. Second, the analysis requires that \(f\) is well-behaved up to higher order derivatives. It comes down to ensuring that \(f\) does not have any “surprising” properties in terms of how it might change its values.

### 11.3.3.3. Preconditioning

Quite unsurprisingly computing and storing the full Hessian is very expensive. It is thus desirable to find alternatives. One way to improve matters is to avoid computing the Hessian in its entirety and only compute its *diagonal* entries. While this is not quite as good as the full Newton method, it is still much better than not using it. Moreover, estimates for the main diagonal elements are what drives some of the innovation in stochastic gradient descent optimization algorithms. This leads to update algorithms of the form

\[\mathbf{x} \leftarrow \mathbf{x} - \eta \, \mathrm{diag}(H_f)^{-1} \nabla f(\mathbf{x}).\]

To see why this might be a good idea consider a situation where one variable denotes height in millimeters and the other one denotes height in kilometers. Assuming that for both the natural scale is in meters we have a terrible mismatch in parameterizations. Using preconditioning removes this. Effectively preconditioning with gradient descent amounts to selecting a different learning rate for each coordinate.
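A sketch of this effect in plain Python (the badly scaled quadratic and the helper name `precond_gd` are illustrative): on \(f(\mathbf{x}) = \frac{1}{2}(a_1 x_1^2 + a_2 x_2^2)\) the diagonal of the Hessian is \((a_1, a_2)\), and dividing each gradient coordinate by its curvature gives every coordinate the same effective step size despite a \(10^8\)-fold scale mismatch.

```python
a1, a2 = 1e-4, 1e4  # mimics a millimeters-vs-kilometers mismatch

def grad(x1, x2):
    return a1 * x1, a2 * x2  # gradient of 0.5 * (a1 * x1**2 + a2 * x2**2)

def precond_gd(x1, x2, eta=0.5, steps=10):
    """Gradient descent preconditioned by the diagonal Hessian (a1, a2)."""
    for _ in range(steps):
        g1, g2 = grad(x1, x2)
        x1 -= eta * g1 / a1  # each coordinate is rescaled by its own curvature
        x2 -= eta * g2 / a2
    return x1, x2

x1_new, x2_new = precond_gd(-5.0, -2.0)
# both coordinates contract by the same factor per step despite the scale gap
```

Plain gradient descent with any single \(\eta\) would either diverge on the steep coordinate or crawl on the flat one; the preconditioner removes that trade-off.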

### 11.3.3.4. Gradient Descent with Line Search

One of the key problems in gradient descent is that we might overshoot the goal or make insufficient progress. A simple fix for the problem is to use line search in conjunction with gradient descent. That is, we use the direction given by \(\nabla f(\mathbf{x})\) and then perform binary search to find the step length \(\eta\) that minimizes \(f(\mathbf{x} - \eta \nabla f(\mathbf{x}))\).

This algorithm converges rapidly (for an analysis and proof see e.g., [Boyd & Vandenberghe, 2004]). However, for the purpose of deep learning this is not quite so feasible, since each step of the line search would require us to evaluate the objective function on the entire dataset. This is way too costly to accomplish.
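Even though line search is impractical at deep-learning scale, it is easy to sketch in one dimension. In the plain-Python sketch below (the helpers `line_search` and `gd_with_line_search` are illustrative, not from the original), the step length is found by bisection on the derivative \(\frac{d}{d\eta} f(x - \eta f'(x)) = -f'(x)\, f'(x - \eta f'(x))\), assuming it changes sign on the bracket \([0, 1]\):

```python
def line_search(phi_grad, lo=0.0, hi=1.0, iters=50):
    """Bisection for a root of phi'(eta), assuming a sign change on [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if phi_grad(mid) > 0:  # past the minimum: shrink from the right
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def gd_with_line_search(grad, x, steps=5):
    """Gradient descent where each step length is chosen by line search."""
    for _ in range(steps):
        g = grad(x)
        eta = line_search(lambda t: -g * grad(x - t * g))
        x -= eta * g
    return x

# On f(x) = x**2 (with f'(x) = 2 * x) the optimal step length is eta = 0.5
x_opt = gd_with_line_search(lambda x: 2 * x, x=10.0)
```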

## 11.3.4. Summary

- Learning rates matter. Too large and we diverge, too small and we do not make progress.
- Gradient descent can get stuck in local minima.
- In high dimensions adjusting the learning rate is complicated.
- Preconditioning can help with scale adjustment.
- Newton’s method is a lot faster *once* it has started working properly in convex problems.
- Beware of using Newton’s method without any adjustments for nonconvex problems.

## 11.3.5. Exercises

1. Experiment with different learning rates and objective functions for gradient descent.
2. Implement line search to minimize a convex function in the interval \([a, b]\).
    1. Do you need derivatives for binary search, i.e., to decide whether to pick \([a, (a+b)/2]\) or \([(a+b)/2, b]\)?
    2. How rapid is the rate of convergence for the algorithm?
    3. Implement the algorithm and apply it to minimizing \(\log (\exp(x) + \exp(-2x - 3))\).
3. Design an objective function defined on \(\mathbb{R}^2\) where gradient descent is exceedingly slow. Hint: scale different coordinates differently.
4. Implement the lightweight version of Newton’s method using preconditioning:
    1. Use the diagonal Hessian as a preconditioner.
    2. Use the absolute values of that rather than the actual (possibly signed) values.
    3. Apply this to the problem above.
5. Apply the algorithm above to a number of objective functions (convex or not). What happens if you rotate coordinates by \(45\) degrees?