23.8. The d2l API Document
This section lists the classes and functions (sorted alphabetically) in the d2l package, showing where each is defined in the book so that you can find more detailed implementations and explanations. See also the source code in the GitHub repository.
23.8.1. Classes
- class d2l.torch.AdditiveAttention(num_hiddens, dropout, **kwargs)
Bases: Module
Additive attention.
Defined in Section 11.3.2.2.
- forward(queries, keys, values, valid_lens)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
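A minimal usage sketch, adapted from the book's attention-scoring example; the tensor shapes and hyperparameter values below are illustrative assumptions:

    import torch
    from d2l import torch as d2l

    # Illustrative shapes: batch of 2, one query, ten key-value pairs.
    # Additive attention allows queries and keys of different feature sizes.
    queries = torch.normal(0, 1, (2, 1, 20))
    keys = torch.normal(0, 1, (2, 10, 2))
    values = torch.normal(0, 1, (2, 10, 4))
    valid_lens = torch.tensor([2, 6])  # attend to the first 2 (resp. 6) pairs only

    attention = d2l.AdditiveAttention(num_hiddens=8, dropout=0.1)
    attention.eval()  # disable dropout
    d2l.check_shape(attention(queries, keys, values, valid_lens), (2, 1, 4))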
- class d2l.torch.AddNorm(norm_shape, dropout)
Bases: Module
The residual connection followed by layer normalization.
Defined in Section 11.7.2.
- forward(X, Y)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
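A quick shape check in the style of the book's Transformer section; the shape below is an illustrative assumption. Both inputs must share one shape, which is also the output shape:

    import torch
    from d2l import torch as d2l

    add_norm = d2l.AddNorm(4, dropout=0.5)  # normalize over a trailing axis of size 4
    shape = (2, 3, 4)
    d2l.check_shape(add_norm(torch.ones(shape), torch.ones(shape)), shape)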
- class d2l.torch.AttentionDecoder
Bases: Decoder
The base attention-based decoder interface.
Defined in Section 11.4.
- property attention_weights
- training: bool
- class d2l.torch.Classifier(plot_train_per_epoch=2, plot_valid_per_epoch=1)
Bases: Module
The base class of classification models.
Defined in Section 4.3.
- accuracy(Y_hat, Y, averaged=True)
Compute the number of correct predictions.
Defined in Section 4.3.
- layer_summary(X_shape)
Defined in Section 7.6.
- loss(Y_hat, Y, averaged=True)
Defined in Section 4.5.
- training: bool
- class d2l.torch.DataModule(root='../data', num_workers=4)
Bases: HyperParameters
The base class of data.
Defined in Section 3.2.2.
- get_tensorloader(tensors, train, indices=slice(0, None, None))
Defined in Section 3.3.
- class d2l.torch.Decoder
Bases: Module
The base decoder interface for the encoder-decoder architecture.
Defined in Section 10.6.
- forward(X, state)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- class d2l.torch.DotProductAttention(dropout)
Bases: Module
Scaled dot product attention.
Defined in Section 11.3.2.2.
- forward(queries, keys, values, valid_lens=None)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
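A sketch analogous to the additive-attention one above (shapes illustrative); here queries and keys must share the same feature dimension:

    import torch
    from d2l import torch as d2l

    queries = torch.normal(0, 1, (2, 1, 2))  # feature size matches the keys
    keys = torch.normal(0, 1, (2, 10, 2))
    values = torch.normal(0, 1, (2, 10, 4))
    valid_lens = torch.tensor([2, 6])

    attention = d2l.DotProductAttention(dropout=0.5)
    attention.eval()
    d2l.check_shape(attention(queries, keys, values, valid_lens), (2, 1, 4))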
- class d2l.torch.Encoder
Bases: Module
The base encoder interface for the encoder-decoder architecture.
Defined in Section 10.6.
- forward(X, *args)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- class d2l.torch.EncoderDecoder(encoder, decoder)
Bases: Classifier
The base class for the encoder-decoder architecture.
Defined in Section 10.6.
- forward(enc_X, dec_X, *args)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- predict_step(batch, device, num_steps, save_attention_weights=False)
Defined in Section 10.7.6.
- training: bool
- class d2l.torch.FashionMNIST(batch_size=64, resize=(28, 28))
Bases: DataModule
The Fashion-MNIST dataset.
Defined in Section 4.2.
- get_dataloader(train)
Defined in Section 4.2.
- text_labels(indices)
Return text labels.
Defined in Section 4.2.
- visualize(batch, nrows=1, ncols=8, labels=[])
Defined in Section 4.2.
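A sketch of loading and inspecting one batch, following the Section 4.2 usage; the batch_size and resize values are illustrative, and train_dataloader is inherited from DataModule (Section 3.2.2):

    from d2l import torch as d2l

    data = d2l.FashionMNIST(batch_size=64, resize=(32, 32))
    X, y = next(iter(data.train_dataloader()))
    print(X.shape, y.shape)         # e.g. torch.Size([64, 1, 32, 32]) torch.Size([64])
    print(data.text_labels(y[:4]))  # human-readable class names
    data.visualize((X, y))          # show the first images in the batch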
- class d2l.torch.GRU(num_inputs, num_hiddens, num_layers, dropout=0)
Bases: RNN
The multi-layer GRU model.
Defined in Section 10.3.
- training: bool
- class d2l.torch.HyperParameters
Bases: object
The base class of hyperparameters.
- save_hyperparameters(ignore=[])
Save function arguments into class attributes.
Defined in Section 23.7.
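The book (Section 3.2.1) demonstrates save_hyperparameters with a toy class along these lines; class B and its arguments are purely illustrative:

    from d2l import torch as d2l

    class B(d2l.HyperParameters):
        def __init__(self, a, b, c):
            # Saves a and b as self.a and self.b; c is excluded.
            self.save_hyperparameters(ignore=['c'])
            print('self.a =', self.a, 'self.b =', self.b)
            print('There is no self.c =', not hasattr(self, 'c'))

    b = B(a=1, b=2, c=3)  # self.a = 1 self.b = 2 / There is no self.c = True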
- class d2l.torch.LeNet(lr=0.1, num_classes=10)
Bases: Classifier
The LeNet-5 model.
Defined in Section 7.6.
- training: bool
- class d2l.torch.LinearRegression(lr)
Bases: Module
The linear regression model implemented with high-level APIs.
Defined in Section 3.5.
- configure_optimizers()
Defined in Section 3.5.
- forward(X)
Defined in Section 3.5.
- get_w_b()
Defined in Section 3.5.
- loss(y_hat, y)
Defined in Section 3.5.
- training: bool
- class d2l.torch.LinearRegressionScratch(num_inputs, lr, sigma=0.01)
Bases: Module
The linear regression model implemented from scratch.
Defined in Section 3.4.
- configure_optimizers()
Defined in Section 3.4.
- forward(X)
Defined in Section 3.4.
- loss(y_hat, y)
Defined in Section 3.4.
- training: bool
- class d2l.torch.Module(plot_train_per_epoch=2, plot_valid_per_epoch=1)
Bases: Module, HyperParameters
The base class of models.
Defined in Section 3.2.
- apply_init(inputs, init=None)
Defined in Section 6.4.
- configure_optimizers()
Defined in Section 4.3.
- forward(X)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- class d2l.torch.MTFraEng(batch_size, num_steps=9, num_train=512, num_val=128)
Bases: DataModule
The English-French dataset.
Defined in Section 10.5.
- build(src_sentences, tgt_sentences)
Defined in Section 10.5.3.
- get_dataloader(train)
Defined in Section 10.5.3.
- class d2l.torch.MultiHeadAttention(num_hiddens, num_heads, dropout, bias=False, **kwargs)
Bases: Module
Multi-head attention.
Defined in Section 11.5.
- forward(queries, keys, values, valid_lens)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- transpose_output(X)
Reverse the operation of transpose_qkv.
Defined in Section 11.5.
- transpose_qkv(X)
Transposition for parallel computation of multiple attention heads.
Defined in Section 11.5.
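A shape-check sketch following the Section 11.5 example (all sizes illustrative); the output keeps the query's batch and length dimensions, with num_hiddens features:

    import torch
    from d2l import torch as d2l

    num_hiddens, num_heads = 100, 5
    attention = d2l.MultiHeadAttention(num_hiddens, num_heads, dropout=0.5)
    batch_size, num_queries, num_kvpairs = 2, 4, 6
    valid_lens = torch.tensor([3, 2])
    X = torch.ones((batch_size, num_queries, num_hiddens))  # queries
    Y = torch.ones((batch_size, num_kvpairs, num_hiddens))  # keys and values
    d2l.check_shape(attention(X, Y, Y, valid_lens),
                    (batch_size, num_queries, num_hiddens))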
- class d2l.torch.PositionalEncoding(num_hiddens, dropout, max_len=1000)
Bases: Module
Positional encoding.
Defined in Section 11.6.
- forward(X)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
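A sketch adapted from the Section 11.6 example (dimensions illustrative): with an all-zeros input and no dropout, the output is exactly the sinusoidal encoding table:

    import torch
    from d2l import torch as d2l

    encoding_dim, num_steps = 32, 60
    pos_encoding = d2l.PositionalEncoding(encoding_dim, dropout=0)
    X = pos_encoding(torch.zeros((1, num_steps, encoding_dim)))
    print(X.shape)  # torch.Size([1, 60, 32])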
- class d2l.torch.PositionWiseFFN(ffn_num_hiddens, ffn_num_outputs)
Bases: Module
The positionwise feed-forward network.
Defined in Section 11.7.
- forward(X)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
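The FFN transforms the last axis from its input size to ffn_num_outputs, identically at every position; a sketch with illustrative sizes:

    import torch
    from d2l import torch as d2l

    ffn = d2l.PositionWiseFFN(ffn_num_hiddens=4, ffn_num_outputs=8)
    ffn.eval()
    d2l.check_shape(ffn(torch.ones((2, 3, 4))), (2, 3, 8))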
- class d2l.torch.ProgressBoard(xlabel=None, ylabel=None, xlim=None, ylim=None, xscale='linear', yscale='linear', ls=['-', '--', '-.', ':'], colors=['C0', 'C1', 'C2', 'C3'], fig=None, axes=None, figsize=(3.5, 2.5), display=True)
Bases: HyperParameters
The board that plots data points in animation.
Defined in Section 3.2.
- draw(x, y, label, every_n=1)
Defined in Section 23.7.
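A sketch from the Section 3.2.1 usage: draw animates points as they arrive, averaging every every_n calls into one plotted point (the curves are illustrative):

    import numpy as np
    from d2l import torch as d2l

    board = d2l.ProgressBoard('x')
    for x in np.arange(0, 10, 0.1):
        board.draw(x, np.sin(x), 'sin', every_n=2)
        board.draw(x, np.cos(x), 'cos', every_n=10)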
- class d2l.torch.Residual(num_channels, use_1x1conv=False, strides=1)
Bases: Module
The Residual block of ResNet models.
Defined in Section 8.6.
- forward(X)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
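With the default arguments the block preserves its input shape so the skip connection can be added directly; use_1x1conv lets it change channels and stride. A sketch adapted from Section 8.6 (shapes illustrative):

    import torch
    from d2l import torch as d2l

    X = torch.randn(4, 3, 6, 6)
    blk = d2l.Residual(3)  # shape-preserving
    d2l.check_shape(blk(X), (4, 3, 6, 6))
    blk = d2l.Residual(6, use_1x1conv=True, strides=2)  # change channels, halve spatial size
    d2l.check_shape(blk(X), (4, 6, 3, 3))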
- class d2l.torch.ResNeXtBlock(num_channels, groups, bot_mul, use_1x1conv=False, strides=1)
Bases: Module
The ResNeXt block.
Defined in Section 8.6.2.
- forward(X)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- class d2l.torch.RNN(num_inputs, num_hiddens)
Bases: Module
The RNN model implemented with high-level APIs.
Defined in Section 9.6.
- forward(inputs, H=None)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- class d2l.torch.RNNLM(rnn, vocab_size, lr=0.01)
Bases: RNNLMScratch
The RNN-based language model implemented with high-level APIs.
Defined in Section 9.6.
- output_layer(hiddens)
Defined in Section 9.5.
- training: bool
- class d2l.torch.RNNLMScratch(rnn, vocab_size, lr=0.01)
Bases: Classifier
The RNN-based language model implemented from scratch.
Defined in Section 9.5.
- forward(X, state=None)
Defined in Section 9.5.
- one_hot(X)
Defined in Section 9.5.
- output_layer(rnn_outputs)
Defined in Section 9.5.
- predict(prefix, num_preds, vocab, device=None)
Defined in Section 9.5.
- training: bool
- class d2l.torch.RNNScratch(num_inputs, num_hiddens, sigma=0.01)
Bases: Module
The RNN model implemented from scratch.
Defined in Section 9.5.
- forward(inputs, state=None)
Defined in Section 9.5.
- training: bool
- class d2l.torch.Seq2Seq(encoder, decoder, tgt_pad, lr)
Bases: EncoderDecoder
The RNN encoder-decoder for sequence-to-sequence learning.
Defined in Section 10.7.3.
- configure_optimizers()
Defined in Section 4.3.
- training: bool
- class d2l.torch.Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers, dropout=0)
Bases: Encoder
The RNN encoder for sequence-to-sequence learning.
Defined in Section 10.7.
- forward(X, *args)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
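A shape-check sketch adapted from Section 10.7 (sizes illustrative): the encoder returns the per-step outputs along with the final hidden state of every layer:

    import torch
    from d2l import torch as d2l

    vocab_size, embed_size, num_hiddens, num_layers = 10, 8, 16, 2
    batch_size, num_steps = 4, 9
    encoder = d2l.Seq2SeqEncoder(vocab_size, embed_size, num_hiddens, num_layers)
    X = torch.zeros((batch_size, num_steps), dtype=torch.long)  # token indices
    enc_outputs, enc_state = encoder(X)
    d2l.check_shape(enc_outputs, (num_steps, batch_size, num_hiddens))
    d2l.check_shape(enc_state, (num_layers, batch_size, num_hiddens))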
- class d2l.torch.SGD(params, lr)
Bases: HyperParameters
Minibatch stochastic gradient descent.
Defined in Section 3.4.
- class d2l.torch.SoftmaxRegression(num_outputs, lr)
Bases: Classifier
The softmax regression model.
Defined in Section 4.5.
- forward(X)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
- class d2l.torch.SyntheticRegressionData(w, b, noise=0.01, num_train=1000, num_val=1000, batch_size=32)
Bases: DataModule
Synthetic data for linear regression.
Defined in Section 3.3.
- get_dataloader(train)
Defined in Section 3.3.
- class d2l.torch.TimeMachine(batch_size, num_steps, num_train=10000, num_val=5000)
Bases: DataModule
The Time Machine dataset.
Defined in Section 9.2.
- build(raw_text, vocab=None)
Defined in Section 9.2.
- get_dataloader(train)
Defined in Section 9.3.3.
- class d2l.torch.Trainer(max_epochs, num_gpus=0, gradient_clip_val=0)
Bases: HyperParameters
The base class for training models with data.
Defined in Section 3.2.2.
- clip_gradients(grad_clip_val, model)
Defined in Section 9.5.
- fit_epoch()
Defined in Section 3.4.
- prepare_batch(batch)
Defined in Section 6.7.
- prepare_model(model)
Defined in Section 6.7.
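The classes above compose into the book's basic training loop; this sketch assumes the fit method that Section 3.2.2 adds to Trainer, and all hyperparameter values are illustrative:

    import torch
    from d2l import torch as d2l

    model = d2l.LinearRegression(lr=0.03)
    data = d2l.SyntheticRegressionData(w=torch.tensor([2, -3.4]), b=4.2)
    trainer = d2l.Trainer(max_epochs=3)
    trainer.fit(model, data)  # runs fit_epoch over the dataloaders, plotting losses
    w, b = model.get_w_b()    # recover the learned parameters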
- class d2l.torch.TransformerEncoder(vocab_size, num_hiddens, ffn_num_hiddens, num_heads, num_blks, dropout, use_bias=False)
Bases: Encoder
The Transformer encoder.
Defined in Section 11.7.4.
- forward(X, valid_lens)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
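A shape check adapted from the Section 11.7.4 example (all sizes illustrative): the output keeps the token input's first two dimensions, with num_hiddens features:

    import torch
    from d2l import torch as d2l

    encoder = d2l.TransformerEncoder(vocab_size=200, num_hiddens=24, ffn_num_hiddens=48,
                                     num_heads=8, num_blks=2, dropout=0.5)
    valid_lens = torch.tensor([3, 2])
    d2l.check_shape(encoder(torch.ones((2, 100), dtype=torch.long), valid_lens),
                    (2, 100, 24))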
- class d2l.torch.TransformerEncoderBlock(num_hiddens, ffn_num_hiddens, num_heads, dropout, use_bias=False)
Bases: Module
The Transformer encoder block.
Defined in Section 11.7.2.
- forward(X, valid_lens)
Defines the computation performed at every call; should be overridden by all subclasses.
Note: although the forward pass must be defined within this function, one should call the Module instance itself afterwards rather than this method, since the instance takes care of running the registered hooks while a direct call silently ignores them.
- training: bool
23.8.2. Functions
- d2l.torch.add_to_class(Class)
Register functions as methods in the created class.
Defined in Section 3.2.
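The book (Section 3.2.1) demonstrates the decorator with a toy class along these lines; class A, attribute b, and method do are purely illustrative:

    from d2l import torch as d2l

    class A:
        def __init__(self):
            self.b = 1

    a = A()

    @d2l.add_to_class(A)  # attach do as a method of A, even after a was created
    def do(self):
        print('Class attribute "b" is', self.b)

    a.do()  # prints: Class attribute "b" is 1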
- d2l.torch.bleu(pred_seq, label_seq, k)
Compute the BLEU score.
Defined in Section 10.7.6.
- d2l.torch.check_len(a, n)
Check the length of a list.
Defined in Section 9.5.
- d2l.torch.check_shape(a, shape)
Check the shape of a tensor.
Defined in Section 9.5.
- d2l.torch.corr2d(X, K)
Compute 2D cross-correlation.
Defined in Section 7.2.
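A worked example from Section 7.2: sliding the 2x2 kernel over the 3x3 input produces a 2x2 output; for instance, the top-left entry is 0*0 + 1*1 + 3*2 + 4*3 = 19:

    import torch
    from d2l import torch as d2l

    X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
    K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
    print(d2l.corr2d(X, K))  # tensor([[19., 25.], [37., 43.]])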
- d2l.torch.cpu()
Get the CPU device.
Defined in Section 6.7.
- d2l.torch.gpu(i=0)
Get a GPU device.
Defined in Section 6.7.
- d2l.torch.init_cnn(module)
Initialize weights for CNNs.
Defined in Section 7.6.
- d2l.torch.init_seq2seq(module)
Initialize weights for Seq2Seq.
Defined in Section 10.7.
- d2l.torch.masked_softmax(X, valid_lens)
Perform softmax operation by masking elements on the last axis.
Defined in Section 11.3.
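A sketch adapted from Section 11.3 (the input values are random): positions beyond each example's valid length receive zero probability, and the remaining entries of each row still sum to one:

    import torch
    from d2l import torch as d2l

    # Keep 2 entries per row in the first example and 3 in the second.
    print(d2l.masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3])))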
- d2l.torch.num_gpus()
Get the number of available GPUs.
Defined in Section 6.7.
- d2l.torch.plot(X, Y=None, xlabel=None, ylabel=None, legend=[], xlim=None, ylim=None, xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'), figsize=(3.5, 2.5), axes=None)
Plot data points.
Defined in Section 2.4.
- d2l.torch.set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
Set the axes for matplotlib.
Defined in Section 2.4.
- d2l.torch.set_figsize(figsize=(3.5, 2.5))
Set the figure size for matplotlib.
Defined in Section 2.4.
- d2l.torch.show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5), cmap='Reds')
Show heatmaps of matrices.
Defined in Section 11.1.
- d2l.torch.show_list_len_pair_hist(legend, xlabel, ylabel, xlist, ylist)
Plot the histogram for list length pairs.
Defined in Section 10.5.
- d2l.torch.try_all_gpus()
Return all available GPUs, or [cpu(),] if no GPU exists.
Defined in Section 6.7.
- d2l.torch.try_gpu(i=0)
Return gpu(i) if it exists, otherwise return cpu().
Defined in Section 6.7.
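The device helpers are safe to call whether or not GPUs are present; a quick sketch (the printed values depend on the machine):

    from d2l import torch as d2l

    print(d2l.cpu(), d2l.num_gpus())  # the CPU device and the GPU count
    print(d2l.try_gpu())              # gpu(0) if one exists, otherwise cpu()
    print(d2l.try_gpu(10))            # cpu() unless at least 11 GPUs exist
    print(d2l.try_all_gpus())         # every available GPU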
- d2l.torch.use_svg_display()
Use the SVG format to display a plot in Jupyter.
Defined in Section 2.4.