espnet2.train.distributed_utils.get_node_rank
espnet2.train.distributed_utils.get_node_rank(prior=None, launcher: str | None = None) → int | None
Get Node Rank.
Use for “multiprocessing distributed” mode. In this case the initial RANK equals the node ID, and the real rank is set to (nGPU * NodeID) + LOCAL_RANK in torch.distributed.
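The rank arithmetic described above can be sketched as follows. This is an illustrative helper, not part of the ESPnet API; the function name and parameters are hypothetical.

```python
def compute_global_rank(node_id: int, ngpu_per_node: int, local_rank: int) -> int:
    """Hypothetical helper illustrating the formula from the docstring:

    global rank = (nGPU * NodeID) + LOCAL_RANK

    In "multiprocessing distributed" mode the launcher assigns RANK per node
    (so RANK is really the node ID), and each spawned worker on that node
    gets a LOCAL_RANK in [0, nGPU).
    """
    return ngpu_per_node * node_id + local_rank


# Example: the 4th worker (local_rank=3) on node 2 of a cluster
# with 4 GPUs per node gets global rank 4 * 2 + 3 = 11.
print(compute_global_rank(node_id=2, ngpu_per_node=4, local_rank=3))
```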