5.2. Parameter Management

Once we have chosen an architecture and set our hyperparameters, we proceed to the training loop, where our goal is to find parameter values that minimize our objective function. After training, we will need these parameters in order to make future predictions. Additionally, we will sometimes wish to extract the parameters, either to reuse them in some other context, to save our model to disk so that it may be executed in other software, or for examination in the hopes of gaining scientific understanding.

Most of the time, we will be able to ignore the nitty-gritty details of how parameters are declared and manipulated, relying on Gluon to do the heavy lifting. However, when we move away from stacked architectures with standard layers, we will sometimes need to get into the weeds of declaring and manipulating parameters. In this section, we cover the following:

  • Accessing parameters for debugging, diagnostics, and visualizations.

  • Parameter initialization.

  • Sharing parameters across different model components.

We start by focusing on an MLP with one hidden layer.

from mxnet import init, np, npx
from mxnet.gluon import nn
npx.set_np()

net = nn.Sequential()
net.add(nn.Dense(256, activation='relu'))
net.add(nn.Dense(10))
net.initialize()  # Use the default initialization method

x = np.random.uniform(size=(2, 20))
net(x)  # Forward computation
array([[ 0.06240272, -0.03268593,  0.02582653,  0.02254182, -0.03728798,
        -0.04253786,  0.00540613, -0.01364186, -0.09915452, -0.02272738],
       [ 0.02816677, -0.03341204,  0.03565666,  0.02506382, -0.04136416,
        -0.04941845,  0.01738528,  0.01081961, -0.09932579, -0.01176298]])

5.2.1. Parameter Access

Let us start with how to access parameters from the models that you already know. When a model is defined via the Sequential class, we can first access any layer by indexing into the model as though it were a list. Each layer’s parameters are conveniently located in its params attribute. We can inspect the parameters of the net defined above.

print(net[0].params)
print(net[1].params)
dense0_ (
  Parameter dense0_weight (shape=(256, 20), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
)
dense1_ (
  Parameter dense1_weight (shape=(10, 256), dtype=float32)
  Parameter dense1_bias (shape=(10,), dtype=float32)
)

The output tells us a few important things. First, each fully-connected layer contains two parameters, e.g., dense0_weight and dense0_bias, corresponding to that layer’s weights and biases, respectively. Both are stored as single precision floats. Note that the names of the parameters allow us to uniquely identify each layer’s parameters, even in a network containing hundreds of layers.
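
Since each layer stores its parameters in a dictionary, we can also iterate over that dictionary to list every name and shape explicitly. The snippet below is a small illustrative sketch, not part of the original notebook:

# Illustrative sketch: iterate over the first layer's parameter dictionary,
# printing each parameter's unique name and its shape
for name, param in net[0].params.items():
    print(name, param.shape)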

5.2.1.1. Targeted Parameters

Note that each parameter is represented as an instance of the Parameter class. To do anything useful with the parameters, we first need to access the underlying numerical values. There are several ways to do this. Some are simpler while others are more general. To begin, given a layer, we can access one of its parameters via the bias or weight attributes, and further access that parameter’s value via its data() method. The following code extracts the bias from the second neural network layer.

print(net[1].bias)
print(net[1].bias.data())
Parameter dense1_bias (shape=(10,), dtype=float32)
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]

Parameters are complex objects, containing data, gradients, and additional information. That’s why we need to request the data explicitly. Note that the bias vector consists of zeroes because we have not updated the network since it was initialized. We can also access each parameter by name, e.g., dense0_weight as follows. Under the hood this is possible because each layer contains a parameter dictionary.

print(net[0].params['dense0_weight'])
print(net[0].params['dense0_weight'].data())
Parameter dense0_weight (shape=(256, 20), dtype=float32)
[[ 0.06700657 -0.00369488  0.0418822  ... -0.05517294 -0.01194733
  -0.00369594]
 [-0.03296221 -0.04391347  0.03839272 ...  0.05636378  0.02545484
  -0.007007  ]
 [-0.0196689   0.01582889 -0.00881553 ...  0.01509629 -0.01908049
  -0.02449339]
 ...
 [-0.02055008 -0.02618652  0.06762936 ... -0.02315108 -0.06794678
  -0.04618235]
 [ 0.02802853  0.06672969  0.05018687 ... -0.02206502 -0.01315478
  -0.03791244]
 [-0.00638592  0.00914261  0.06667828 ... -0.00800052  0.03406764
  -0.03954004]]

Note that unlike the biases, the weights are nonzero, because weights are initialized randomly. In addition to data, each Parameter also provides a grad() method for accessing the gradient. It has the same shape as the weight. Because we have not invoked backpropagation for this network yet, its values are all 0.

net[0].weight.grad()
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]])

5.2.1.2. All Parameters at Once

When we need to perform operations on all parameters, accessing them one-by-one can grow tedious. The situation becomes especially unwieldy when we work with more complex Blocks (e.g., nested Blocks), since we would need to recurse through the entire tree to extract each sub-Block’s parameters. To avoid this, each Block comes with a collect_params method that returns all Parameters in a single dictionary. We can invoke collect_params on a single layer or a whole network as follows:

# parameters only for the first layer
print(net[0].collect_params())
# parameters of the entire network
print(net.collect_params())
dense0_ (
  Parameter dense0_weight (shape=(256, 20), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
)
sequential0_ (
  Parameter dense0_weight (shape=(256, 20), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
  Parameter dense1_weight (shape=(10, 256), dtype=float32)
  Parameter dense1_bias (shape=(10,), dtype=float32)
)

This provides us with a third way of accessing the parameters of the network:

net.collect_params()['dense1_bias'].data()
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

Throughout the book we encounter Blocks that name their sub-Blocks in various ways. Sequential simply numbers them. We can exploit this naming convention by leveraging one clever feature of collect_params: it allows us to filter the parameters returned by using regular expressions.

print(net.collect_params('.*weight'))
print(net.collect_params('dense0.*'))
sequential0_ (
  Parameter dense0_weight (shape=(256, 20), dtype=float32)
  Parameter dense1_weight (shape=(10, 256), dtype=float32)
)
sequential0_ (
  Parameter dense0_weight (shape=(256, 20), dtype=float32)
  Parameter dense0_bias (shape=(256,), dtype=float32)
)

5.2.1.3. Collecting Parameters from Nested Blocks

Let us see how the parameter naming conventions work if we nest multiple blocks inside each other. For that we first define a function that produces Blocks (a Block factory, so to speak) and then combine these inside yet larger Blocks.

def block1():
    net = nn.Sequential()
    net.add(nn.Dense(32, activation='relu'))
    net.add(nn.Dense(16, activation='relu'))
    return net

def block2():
    net = nn.Sequential()
    for i in range(4):
        net.add(block1())
    return net

rgnet = nn.Sequential()
rgnet.add(block2())
rgnet.add(nn.Dense(10))
rgnet.initialize()
rgnet(x)
array([[-4.1923025e-09,  1.9830502e-09,  8.9444063e-10,  6.2912990e-09,
        -3.3241778e-09,  5.4330038e-09,  1.6013515e-09, -3.7408681e-09,
         8.5468477e-09, -6.4805539e-09],
       [-3.7507064e-09,  1.4866974e-09,  6.8314709e-10,  5.6925784e-09,
        -2.6349172e-09,  4.8626667e-09,  1.4280275e-09, -3.4603027e-09,
         7.4127922e-09, -5.7896132e-09]])

Now that we have designed the network, let us see how it is organized. Notice below that while calling collect_params() produces a dictionary of named parameters, accessing collect_params as an attribute (without calling it) reveals our network’s structure.

print(rgnet.collect_params)
print(rgnet.collect_params())
<bound method Block.collect_params of Sequential(
  (0): Sequential(
    (0): Sequential(
      (0): Dense(20 -> 32, Activation(relu))
      (1): Dense(32 -> 16, Activation(relu))
    )
    (1): Sequential(
      (0): Dense(16 -> 32, Activation(relu))
      (1): Dense(32 -> 16, Activation(relu))
    )
    (2): Sequential(
      (0): Dense(16 -> 32, Activation(relu))
      (1): Dense(32 -> 16, Activation(relu))
    )
    (3): Sequential(
      (0): Dense(16 -> 32, Activation(relu))
      (1): Dense(32 -> 16, Activation(relu))
    )
  )
  (1): Dense(16 -> 10, linear)
)>
sequential1_ (
  Parameter dense2_weight (shape=(32, 20), dtype=float32)
  Parameter dense2_bias (shape=(32,), dtype=float32)
  Parameter dense3_weight (shape=(16, 32), dtype=float32)
  Parameter dense3_bias (shape=(16,), dtype=float32)
  Parameter dense4_weight (shape=(32, 16), dtype=float32)
  Parameter dense4_bias (shape=(32,), dtype=float32)
  Parameter dense5_weight (shape=(16, 32), dtype=float32)
  Parameter dense5_bias (shape=(16,), dtype=float32)
  Parameter dense6_weight (shape=(32, 16), dtype=float32)
  Parameter dense6_bias (shape=(32,), dtype=float32)
  Parameter dense7_weight (shape=(16, 32), dtype=float32)
  Parameter dense7_bias (shape=(16,), dtype=float32)
  Parameter dense8_weight (shape=(32, 16), dtype=float32)
  Parameter dense8_bias (shape=(32,), dtype=float32)
  Parameter dense9_weight (shape=(16, 32), dtype=float32)
  Parameter dense9_bias (shape=(16,), dtype=float32)
  Parameter dense10_weight (shape=(10, 16), dtype=float32)
  Parameter dense10_bias (shape=(10,), dtype=float32)
)

Since the layers are hierarchically nested, we can also access them as though indexing through nested lists. For instance, we can access the first major block, within it the second subblock, and within that the bias of the first layer, as follows:

rgnet[0][1][0].bias.data()
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])

5.2.2. Parameter Initialization

Now that we know how to access the parameters, let us look at how to initialize them properly. We discussed the need for initialization in Section 4.8. By default, MXNet initializes weight matrices by drawing uniformly from \(U[-0.07, 0.07]\) and sets all bias parameters to \(0\). However, we will often want to initialize our weights according to various other protocols. MXNet’s init module provides a variety of preset initialization methods. If we want to create a custom initializer, we need to do some extra work.
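
As a quick check (an illustrative sketch, not part of the original notebook), we can print one row of the first layer’s weights, which were drawn from \(U[-0.07, 0.07]\) when we called net.initialize() earlier:

# Illustrative sketch: the default initializer draws weights from U[-0.07, 0.07]
print(net[0].weight.data()[0])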

5.2.2.1. Built-in Initialization

Let us begin by calling on built-in initializers. The code below initializes all weight parameters as Gaussian random variables with standard deviation \(0.01\), while bias parameters remain zero.

# force_reinit ensures that variables are freshly initialized
# even if they were already initialized previously
net.initialize(init=init.Normal(sigma=0.01), force_reinit=True)
net[0].weight.data()[0]
array([-9.8788980e-03,  5.3957910e-03, -7.0842835e-03, -7.4317548e-03,
       -1.4880489e-02,  6.4959107e-03, -8.2659349e-03,  1.8743129e-02,
        1.6201857e-02,  1.4534278e-03,  2.2331164e-03,  1.5926110e-02,
       -1.2915777e-02, -8.8592555e-05, -1.7293986e-03, -7.2338698e-03,
        8.7698260e-03, -4.9947016e-03, -9.6906107e-03,  2.0079101e-03])

We can also initialize all parameters to a given constant value (say, \(1\)), by using the initializer Constant(1).

net.initialize(init=init.Constant(1), force_reinit=True)
net[0].weight.data()[0]
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1., 1.])

We can also apply different initializers to certain Blocks. For example, below we initialize the first layer with the Xavier initializer and initialize the second layer to a constant value of 42.

net[0].weight.initialize(init=init.Xavier(), force_reinit=True)
net[1].initialize(init=init.Constant(42), force_reinit=True)
print(net[0].weight.data()[0])
print(net[1].weight.data()[0, 0])
[-0.06319056 -0.10960881  0.11757872 -0.07595599 -0.0849717   0.0851637
  0.08330765  0.04028694 -0.0305525   0.02012795 -0.03856885  0.1375024
  0.10155623 -0.05016676 -0.02575382 -0.14205234  0.14225402  0.02719662
 -0.0888046  -0.00962897]
42.0

5.2.2.2. Custom Initialization

Sometimes, the initialization methods we need are not provided in the init module. In these cases, we can define a subclass of Initializer. Usually, we only need to implement the _init_weight function which takes an ndarray argument (data) and assigns to it the desired initialized values. In the example below, we define an initializer for the following strange distribution:

\[w \sim \begin{cases} U[5, 10] & \text{ with probability } \frac{1}{4} \\ 0 & \text{ with probability } \frac{1}{2} \\ U[-10, -5] & \text{ with probability } \frac{1}{4} \end{cases} \tag{5.2.1}\]

class MyInit(init.Initializer):
    def _init_weight(self, name, data):
        print('Init', name, data.shape)
        data[:] = np.random.uniform(-10, 10, data.shape)
        data *= np.abs(data) >= 5

net.initialize(MyInit(), force_reinit=True)
net[0].weight.data()[0]
Init dense0_weight (256, 20)
Init dense1_weight (10, 256)
array([-5.172625 , -7.0209026,  5.1446533, -9.844563 ,  8.545956 ,
       -0.       ,  0.       , -0.       ,  5.107664 ,  9.658335 ,
        5.8564453,  7.4483128,  0.       ,  0.       , -0.       ,
        7.9034443,  0.       ,  5.4223766,  8.5655575,  5.1224785])

Note that we always have the option of setting parameters directly by calling data() to access the underlying ndarray. A note for advanced users: if you want to adjust parameters within an autograd scope, you need to use set_data to avoid confusing the automatic differentiation mechanics.

net[0].weight.data()[:] += 1
net[0].weight.data()[0, 0] = 42
net[0].weight.data()[0]
array([42.       , -6.0209026,  6.1446533, -8.844563 ,  9.545956 ,
        1.       ,  1.       ,  1.       ,  6.107664 , 10.658335 ,
        6.8564453,  8.448313 ,  1.       ,  1.       ,  1.       ,
        8.903444 ,  1.       ,  6.4223766,  9.5655575,  6.1224785])
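
As a minimal sketch of the set_data route mentioned above (not part of the original notebook), we can pass a complete replacement value instead of writing into the array returned by data():

# Illustrative sketch: set_data replaces the parameter's value in one step,
# which is the safe way to modify it within an autograd scope
net[0].weight.set_data(net[0].weight.data() + 1)
net[0].weight.data()[0]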

5.2.3. Tied Parameters

Often, we want to share parameters across multiple layers. Later we will see that when learning word embeddings, it might be sensible to use the same parameters both for encoding and decoding words. We discussed one such case when we introduced Section 5.1. Let us see how to do this a bit more elegantly. In the following we allocate a dense layer and then use its parameters specifically to set those of another layer.

net = nn.Sequential()
# We need to give the shared layer a name such that we can reference its
# parameters
shared = nn.Dense(8, activation='relu')
net.add(nn.Dense(8, activation='relu'),
        shared,
        nn.Dense(8, activation='relu', params=shared.params),
        nn.Dense(10))
net.initialize()

x = np.random.uniform(size=(2, 20))
net(x)

# Check whether the parameters are the same
print(net[1].weight.data()[0] == net[2].weight.data()[0])
net[1].weight.data()[0, 0] = 100
# Make sure that they are actually the same object rather than just having the
# same value
print(net[1].weight.data()[0] == net[2].weight.data()[0])
[ True  True  True  True  True  True  True  True]
[ True  True  True  True  True  True  True  True]

This example shows that the parameters of the second and third layer are tied. They are not just equal; they are represented by the same exact ndarray. Thus, if we change one of the parameters, the other one changes, too. You might wonder: when parameters are tied, what happens to the gradients? Since the model parameters contain gradients, the gradients of the second hidden layer and the third hidden layer are added together in shared.params.grad() during backpropagation.
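
The snippet below is an illustrative sketch (not part of the original notebook): it runs one forward and backward pass on the tied network above and confirms that both shared layers report the same accumulated gradient.

from mxnet import autograd

# Illustrative sketch: after backpropagation, the tied layers expose the same
# gradient values because they share a single underlying Parameter
with autograd.record():
    y = net(x)
y.backward()
print(net[1].weight.grad()[0] == net[2].weight.grad()[0])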

5.2.4. Summary

  • We have several ways to access, initialize, and tie model parameters.

  • We can use custom initialization.

  • Gluon has a sophisticated mechanism for accessing parameters in a unique and hierarchical manner.

5.2.5. Exercises

  1. Use the FancyMLP defined in Section 5.1 and access the parameters of the various layers.

  2. Look at the MXNet documentation and explore different initializers.

  3. Try accessing the model parameters after net.initialize() and before net(x) to observe the shape of the model parameters. What changes? Why?

  4. Construct a multilayer perceptron containing a shared parameter layer and train it. During the training process, observe the model parameters and gradients of each layer.

  5. Why is sharing parameters a good idea?

5.2.6. Discussions
