espnet.nets.chainer_backend.rnn.training.CustomParallelUpdater
class espnet.nets.chainer_backend.rnn.training.CustomParallelUpdater(train_iters, optimizer, converter, devices, accum_grad=1)
Bases: MultiprocessParallelUpdater
Custom parallel updater for Chainer.
Defines the main update routine.
- Parameters:
- train_iters (iterator | dict[str, iterator]) – Dataset iterator for the training dataset. It can also be a dictionary that maps strings to iterators. If this is just an iterator, the iterator is registered under the name 'main'.
- optimizer (optimizer | dict[str, optimizer]) – Optimizer to update parameters. It can also be a dictionary that maps strings to optimizers. If this is just an optimizer, the optimizer is registered under the name 'main'.
- converter (espnet.asr.chainer_backend.asr.CustomConverter) – Converter function to build input arrays. Each batch extracted by the main iterator and the device option are passed to this function. chainer.dataset.concat_examples() is used by default.
- devices – Devices to which the training data is sent. A negative value indicates host memory (CPU).
- accum_grad (int) – Number of gradient-accumulation steps. If set to 2, the parameters are updated once every two batches, so the effective batch size is doubled.
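The effect of accum_grad on update frequency can be sketched in plain Python (a simplified, framework-free illustration; the real updater accumulates Chainer gradients across devices):

```python
def run_updates(num_batches, accum_grad):
    """Count how often parameters are updated when gradients are
    accumulated over `accum_grad` consecutive batches."""
    updates = 0
    pending = 0  # batches whose gradients are accumulated but not yet applied
    for _ in range(num_batches):
        pending += 1
        if pending == accum_grad:
            updates += 1  # the optimizer step fires here
            pending = 0   # gradients are cleared after the step
    return updates

# With accum_grad=2, 8 batches yield 4 parameter updates; each update has
# seen gradients from 2 batches, doubling the effective batch size.
```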
Initialize Custom Parallel Updater.
update()
Update optimizers.
update_core()
Execute main update routine of the custom parallel updater.
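The shape of that routine can be sketched as a minimal, framework-free class (hypothetical names; the real implementation accumulates Chainer gradients, sums them across the worker devices, and scales the loss by 1 / accum_grad):

```python
class SketchUpdater:
    """Toy stand-in for the accumulate-then-step pattern of update_core().

    A single float plays the role of the accumulated gradient buffer.
    """

    def __init__(self, accum_grad):
        self.accum_grad = accum_grad
        self.forward_count = 0
        self.accumulated = 0.0  # stand-in for accumulated parameter gradients

    def update_core(self, batch_grad):
        # Accumulate the gradient contributed by the current batch.
        self.accumulated += batch_grad
        self.forward_count += 1
        if self.forward_count < self.accum_grad:
            return None  # keep accumulating; no parameter update yet
        # Average over the accumulated batches, apply the step, reset buffers.
        step = self.accumulated / self.accum_grad
        self.accumulated = 0.0
        self.forward_count = 0
        return step
```

With accum_grad=1 this degenerates to an ordinary updater that steps on every batch.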