# 11.2. Asynchronous Computing¶

MXNet utilizes asynchronous programming to improve computing performance. Understanding how asynchronous programming works helps us to develop more efficient programs, and also to keep memory overhead under control when memory resources are limited. First, we will import the packages and modules needed for this section’s experiments.

import d2l
from mxnet import autograd, gluon, nd
from mxnet.gluon import nn
import os
import subprocess
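If the d2l package is unavailable, the Timer used throughout this section can be approximated with a minimal stand-in. This is a hypothetical sketch, not the actual d2l implementation (the real d2l.Timer keeps a record of multiple measurements); start/stop is all this section needs:

```python
import time

class SimpleTimer:
    """Minimal stand-in for the d2l.Timer used in this section (sketch only)."""

    def __init__(self):
        self.start()

    def start(self):
        # Record the current time as the starting point
        self.tik = time.time()

    def stop(self):
        # Return the seconds elapsed since the last call to start
        return time.time() - self.tik
```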


## 11.2.1. Asynchronous Programming in MXNet¶

Broadly speaking, MXNet includes the front-end directly used by users for interaction, as well as the back-end used by the system to perform the computation. For example, users can write MXNet programs in various front-end languages, such as Python, R, Scala and C++. Regardless of the front-end programming language used, the execution of MXNet programs occurs primarily in the C++ back-end. In other words, front-end MXNet programs written by users are passed on to the back-end to be computed. The back-end possesses its own threads that continuously collect and execute queued tasks.

Through the interaction between front-end and back-end threads, MXNet is able to implement asynchronous programming. Asynchronous programming means that the front-end threads continue to execute subsequent instructions without having to wait for the back-end threads to return the results from the current instruction. For simplicity’s sake, assume that the Python front-end thread calls the following four instructions.

a = nd.ones((1, 2))
b = nd.ones((1, 2))
c = a * b + 2
c

[[3. 3.]]
<NDArray 1x2 @cpu(0)>


In asynchronous computing, whenever the Python front-end thread executes one of the first three statements, it simply pushes the task onto the back-end queue and returns. Only when the last statement needs to print the result does the Python front-end thread wait for the C++ back-end thread to finish computing the result of the variable c. One benefit of such a design is that the Python front-end thread in this example does not need to perform actual computations. Thus, there is little impact on the program’s overall performance, regardless of Python’s performance. MXNet will deliver consistently high performance, regardless of the front-end language’s performance, provided the C++ back-end can meet the efficiency requirements.
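The division of labor can be pictured in plain Python: issuing a statement only records the work, and the value is computed when it is actually read. The toy sketch below (hypothetical names, no real MXNet machinery) mimics that behavior:

```python
log = []

class LazyResult:
    """Toy stand-in for an NDArray whose computation is deferred."""

    def __init__(self, name, thunk):
        self.name, self.thunk = name, thunk
        log.append('queued ' + name)   # front-end returns immediately

    def read(self):
        # The value is needed (e.g. for printing): compute it now
        log.append('computed ' + self.name)
        return self.thunk()

c = LazyResult('c = a * b + 2', lambda: [[3.0, 3.0]])
log.append('front-end moved on')       # happens before c is computed
print(c.read())
```

The log shows that the front-end moves on to its next statement before the value of `c` is ever computed, which is exactly the ordering described above.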

The following example uses timing to demonstrate the effect of asynchronous programming. As we can see, the statement y = nd.dot(x, x).sum() returns almost immediately, without waiting for the variable y to be calculated. Only when the print function needs the value of y must it wait for the calculation to finish.

timer = d2l.Timer()
x = nd.random.uniform(shape=(2000, 2000))
y = nd.dot(x, x).sum()
print('Workloads are queued. Time %.4f sec' % timer.stop())

print('sum =', y)
print('Workloads are finished. Time %.4f sec' % timer.stop())

Workloads are queued. Time 0.0007 sec
sum =
[2.0003661e+09]
<NDArray 1 @cpu(0)>
Workloads are finished. Time 0.1492 sec


In truth, whether or not the current result has already been calculated in memory is irrelevant, unless we need to print or save the computation results. So long as the data is stored in NDArray and the operators provided by MXNet are used, MXNet will utilize asynchronous programming by default to attain superior computing performance.

## 11.2.2. Use of the Synchronization Function to Allow the Front-End to Wait for the Computation Results¶

In addition to the print function we just introduced, there are other ways to make the front-end thread wait for the completion of the back-end computations. The wait_to_read function can be used to make the front-end wait for the complete computation of the NDArray results, and then execute the following statement. Alternatively, we can use the waitall function to make the front-end wait for the completion of all previous computations. The latter is a common method used in performance testing.

Below, we use the wait_to_read function as an example. The time output includes the calculation time of y.

timer.start()
y = nd.dot(x, x)
print('Done in %.4f sec' % timer.stop())

Done in 0.0380 sec


Below, we use waitall as an example. The time output includes the calculation time of y and z respectively.

timer.start()
y = nd.dot(x, x)
z = nd.dot(x, x)
nd.waitall()
print('Done in %.4f sec' % timer.stop())

Done in 0.0732 sec


Additionally, any operation that converts an NDArray into another data structure that does not support asynchronous computing will force the front-end to wait for the computation results. For example, calling the asnumpy and asscalar functions:

timer.start()
y = nd.dot(x, x)
y.asnumpy()
print('Done in %.4f sec' % timer.stop())

Done in 0.0393 sec

timer.start()
y = nd.dot(x, x)
y.norm().asscalar()
print('Done in %.4f sec' % timer.stop())

Done in 0.1345 sec


The wait_to_read, waitall, asnumpy, asscalar and print functions described above will cause the front-end to wait for the back-end computation results. Such functions are often referred to as synchronization functions.

## 11.2.3. Using Asynchronous Programming to Improve Computing Performance¶

In the following example, we use a for loop to continuously assign values to the variable y. When the synchronization function wait_to_read is used inside the for loop, every computation is synchronized; when the synchronization function waitall is used only after the loop, the computations run asynchronously.

timer.start()
for _ in range(1000):
    y = x + 1
    y.wait_to_read()
print('Synchronous. Done in %.4f sec' % timer.stop())

timer.start()
for _ in range(1000):
    y = x + 1
nd.waitall()
print('Asynchronous. Done in %.4f sec' % timer.stop())

Synchronous. Done in 0.3380 sec
Asynchronous. Done in 0.1949 sec


We have observed that certain aspects of computing performance can be improved by making use of asynchronous programming. To explain this, we will slightly simplify the interaction between the Python front-end thread and the C++ back-end thread. In each loop, the interaction between front and back-ends can be largely divided into three stages:

1. The front-end orders the back-end to insert the calculation task y = x + 1 into the queue.
2. The back-end then receives the computation tasks from the queue and performs the actual computations.
3. The back-end then returns the computation results to the front-end.

Assume that the durations of these three stages are $$t_1, t_2, t_3$$, respectively. If we do not use asynchronous programming, the total time taken to perform 1000 computations is approximately $$1000 (t_1+ t_2 + t_3)$$. If asynchronous programming is used, the total time taken to perform 1000 computations can be reduced to $$t_1 + 1000 t_2 + t_3$$ (assuming $$1000t_2 > 999t_1$$), since the front-end does not have to wait for the back-end to return computation results for each loop.
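This timing model can be simulated in pure Python with a producer thread standing in for the front-end and a worker thread standing in for the back-end. All names and sleep constants below are illustrative only, not MXNet internals:

```python
import queue
import threading
import time

def simulate(n_tasks, t2=0.001):
    """Simulate a front-end queuing n_tasks while one back-end thread
    computes them; return (time to enqueue, total time)."""
    tasks = queue.Queue()

    def backend():
        while True:
            task = tasks.get()
            if task is None:
                return
            time.sleep(t2)          # stand-in for the actual computation
            tasks.task_done()

    worker = threading.Thread(target=backend)
    worker.start()

    start = time.time()
    for i in range(n_tasks):
        tasks.put(i)                # front-end: enqueue and move on
    queued = time.time() - start

    tasks.join()                    # analogue of nd.waitall()
    total = time.time() - start
    tasks.put(None)                 # tell the back-end to shut down
    worker.join()
    return queued, total

queued, total = simulate(100)
print('queued in %.4f sec, finished in %.4f sec' % (queued, total))
```

The enqueue loop finishes almost immediately (roughly $$n t_1$$), while the join at the end plays the role of waitall and dominates the total time (roughly $$n t_2$$).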

## 11.2.4. The Impact of Asynchronous Programming on Memory¶

In order to explain the impact of asynchronous programming on memory usage, recall what we learned in the previous chapters. Throughout the model training process implemented in the previous chapters, we usually evaluated things like the loss or accuracy of the model in each mini-batch. Detail-oriented readers may have discovered that such evaluations often make use of synchronization functions, such as asscalar or asnumpy. If these synchronization functions are removed, the front-end will pass a large number of mini-batch computing tasks to the back-end in a very short time, which might cause a spike in memory usage. When each mini-batch makes use of a synchronization function, the front-end will only pass one mini-batch task to the back-end per iteration, which will typically reduce memory use.

Because the deep learning model is usually large and memory resources are usually limited, we recommend the use of synchronization functions for each mini-batch throughout model training, for example by using the asscalar or asnumpy functions to evaluate model performance. Similarly, we also recommend utilizing synchronization functions for each mini-batch prediction (such as directly printing out the current batch’s prediction results), in order to reduce memory usage during model prediction.
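The memory effect can be illustrated with a toy queue model: when the producer (front-end) never waits, pending tasks pile up; when it synchronizes after each task, at most one task is ever pending. This is a pure-Python sketch with illustrative constants, not MXNet code:

```python
import queue
import threading
import time

def max_backlog(sync_each_task, n_tasks=50):
    """Return the largest number of pending tasks observed in the queue."""
    tasks = queue.Queue()
    stop = threading.Event()

    def backend():
        while not stop.is_set():
            try:
                tasks.get(timeout=0.01)
            except queue.Empty:
                continue
            time.sleep(0.002)       # the back-end is slower than the front-end
            tasks.task_done()

    worker = threading.Thread(target=backend)
    worker.start()

    backlog = 0
    for i in range(n_tasks):
        tasks.put(i)
        backlog = max(backlog, tasks.qsize())
        if sync_each_task:
            tasks.join()            # analogue of a per-batch asscalar call
    tasks.join()
    stop.set()
    worker.join()
    return backlog

print(max_backlog(True), max_backlog(False))
```

With per-task synchronization the backlog never exceeds one pending task; without it, nearly all tasks sit in the queue at once, which is the queue-depth analogue of the memory spike discussed above.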

Next, we will demonstrate asynchronous programming’s impact on memory. We will first define a data retrieval function data_iter, which upon being called, will start timing and regularly print out the time taken to retrieve data batches.

def data_iter():
    timer.start()
    num_batches, batch_size = 100, 1024
    for i in range(num_batches):
        X = nd.random.normal(shape=(batch_size, 512))
        y = nd.ones((batch_size,))
        yield X, y
        if (i + 1) % 50 == 0:
            print('batch %d, time %.4f sec' % (i + 1, timer.stop()))


The multilayer perceptron, optimization algorithm, and loss function are defined below.

net = nn.Sequential()
net.add(nn.Dense(512, activation='relu'),
        nn.Dense(1))
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.005})
loss = gluon.loss.L2Loss()


A helper function to monitor memory use is defined here. It should be noted that this function can only be run on Linux or macOS operating systems.

def get_mem():
    res = subprocess.check_output(['ps', 'u', '-p', str(os.getpid())])
    return int(str(res).split()[15]) / 1e3
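As an alternative that avoids parsing ps output, Python's standard resource module can report memory on Unix-like systems. Note this is a sketch with one important caveat: ru_maxrss is the peak resident set size (reported in kilobytes on Linux but in bytes on macOS), not the current usage, so the value only ever grows:

```python
import resource
import sys

def get_peak_mem():
    """Return the process's peak resident set size in megabytes."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        rss /= 1024          # macOS reports bytes; convert to kilobytes
    return rss / 1e3         # kilobytes -> megabytes

print('peak memory: %.1f MB' % get_peak_mem())
```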


Now we can begin testing. To initialize the net parameters, we first run the system once. See Section 5.3 for further discussion related to initialization.

for X, y in data_iter():
break


When training the model net, the synchronization function asscalar can naturally be used to record the loss of each mini-batch as a Python scalar rather than an NDArray, and to print out the model loss after each iteration. At this point, the generation interval of each mini-batch increases, but the memory overhead stays small.

l_sum, mem = 0, get_mem()
for X, y in data_iter():
    l = loss(y, net(X))
    # Use of the asscalar synchronization function
    l_sum += l.mean().asscalar()
    l.backward()
    trainer.step(X.shape[0])
nd.waitall()
print('increased memory: %f MB' % (get_mem() - mem))

batch 50, time 2.1001 sec
batch 100, time 4.3568 sec
increased memory: 4.944000 MB


If the synchronization function is removed, each mini-batch’s generation interval becomes much shorter, but memory usage may spike during training. This is because, with the default asynchronous programming, the front-end passes all mini-batch computations to the back-end in a short amount of time. As a result, a large number of intermediate results cannot be released and end up piled up in memory. In this experiment, we can see that all data (X and y) is generated in well under a second. However, because the back-end cannot train fast enough, this data can only be held in memory and cannot be cleared in time, resulting in extra memory usage.

mem = get_mem()
for X, y in data_iter():
    l = loss(y, net(X))
    l.backward()
    trainer.step(X.shape[0])
nd.waitall()
print('increased memory: %f MB' % (get_mem() - mem))

batch 50, time 0.0782 sec
batch 100, time 0.1554 sec
increased memory: 196.624000 MB


## 11.2.5. Summary¶

• MXNet includes the front-end used directly by users for interaction and the back-end used by the system to perform the computation.
• MXNet can improve computing performance through the use of asynchronous programming.
• We recommend using at least one synchronization function for each mini-batch training or prediction to avoid passing on too many computation tasks to the back-end in a short period of time.

## 11.2.6. Exercises¶

• In the section “Use of Asynchronous Programming to Improve Computing Performance”, we mentioned that using asynchronous computation can reduce the total amount of time needed to perform 1000 computations to $$t_1 + 1000 t_2 + t_3$$. Why do we have to assume $$1000t_2 > 999t_1$$ here?