Bash utility tools

ESPnet provides several command-line bash tools under utils/. The help message for each tool is reproduced below.

asr_align_wav.sh

No help text is provided; as the name suggests, this script aligns a given transcript with a WAV file.

clean_corpus.sh

Usage: clean_corpus.sh [options] <data-dir> <langs>
e.g.: clean_corpus.sh data/train "en de"
Options:
  --maxframes        # maximum number of input frames
  --maxchars         # maximum number of characters
  --utt_extra_files  # extra text files for the target sequence
  --no_feat          # set to true for MT recipes

convert_fbank.sh

Usage: convert_fbank.sh [options] <data-dir> [<log-dir> [<fbank-dir>]]
e.g.: convert_fbank.sh data/train exp/griffin_lim/train wav
Note: <log-dir> defaults to <data-dir>/log, and <fbank-dir> defaults to <data-dir>/data
Options:
  --nj <nj>                  # number of parallel jobs
  --fs <fs>                  # sampling rate
  --fmax <fmax>              # maximum frequency
  --fmin <fmin>              # minimum frequency
  --n_fft <n_fft>            # number of FFT points (default=1024)
  --n_shift <n_shift>        # shift size in points (default=256)
  --win_length <win_length>  # window length in points (default=)
  --n_mels <n_mels>          # number of mel bins (default=80)
  --iters <iters>            # number of Griffin-Lim iterations (default=64)
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.

data2json.sh

Usage: data2json.sh <data-dir> <dict>
e.g. data2json.sh data/train data/lang_1char/train_units.txt
Options:
  --nj <nj>                                        # number of parallel jobs
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.
  --feat <feat-scp>                                # feat.scp or feat1.scp,feat2.scp,...
  --oov <oov-word>                                 # Default: <unk>
  --out <outputfile>                               # If omitted, write to stdout
  --filetype <mat|hdf5|sound.hdf5>                 # Specify the format of feats file
  --preprocess-conf <json>                         # Apply preprocess to feats when creating shape.scp
  --verbose <num>                                  # Default: 0
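
A fuller invocation attaching features and writing the JSON to a file instead of stdout (paths are illustrative):
e.g. data2json.sh --feat dump/train/feats.scp --out data/train/data.json data/train data/lang_1char/train_units.txt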

download_from_google_drive.sh

Usage: download_from_google_drive.sh <share-url> [<download_dir> <file_ext>]
e.g.: download_from_google_drive.sh https://drive.google.com/open?id=1zF88bRNbJhw9hNBq3NrDg8vnGGibREmg downloads zip
Options:
    <download_dir>: directory to save downloaded file. (Default=downloads)
    <file_ext>: file extension of the file to be downloaded. (Default=zip)

dump.sh

Usage: dump.sh <scp> <cmvnark> <logdir> <dumpdir>
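
No further help is provided; a plausible invocation following the argument order above (paths are illustrative):
e.g. dump.sh data/train/feats.scp data/train/cmvn.ark exp/dump_feats/train dump/train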

dump_pcm.sh

Usage: dump_pcm.sh [options] <data-dir> [<log-dir> [<pcm-dir>]]
e.g.: dump_pcm.sh data/train exp/dump_pcm/train pcm
Note: <log-dir> defaults to <data-dir>/log, and <pcm-dir> defaults to <data-dir>/data
Options:
  --nj <nj>                                        # number of parallel jobs
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.
  --write-utt2num-frames <true|false>              # If true, write utt2num_frames file.
  --filetype <mat|hdf5|sound.hdf5>                 # Specify the format of feats file

eval_source_separation.sh

Usage: eval_source_separation.sh <reffiles> <enhfiles> <dir>
    e.g. eval_source_separation.sh reference.scp enhanced.scp outdir

Multiple sources are also supported:
    e.g. eval_source_separation.sh "ref1.scp,ref2.scp" "enh1.scp,enh2.scp" outdir

Options:
  --nj <nj>                                        # number of parallel jobs
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.

feat_to_shape.sh

Usage: feat_to_shape.sh [options] <input-scp> <output-scp> [<log-dir>]
e.g.: feat_to_shape.sh data/train/feats.scp data/train/shape.scp data/train/log
Options:
  --nj <nj>                                        # number of parallel jobs
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.
  --filetype <mat|hdf5|sound.hdf5>                 # Specify the format of feats file
  --preprocess-conf <json>                         # Apply preprocess to feats when creating shape.scp
  --verbose <num>                                  # Default: 0

generate_wav.sh

Usage:
  generate_wav.sh [options] <model-path> <data-dir> [<log-dir> [<fbank-dir>]]
Example:
  generate_wav.sh ljspeech.wavenet.ns.v1/checkpoint-1000000.pkl data/train exp/wavenet_vocoder/train wav
Note:
  <log-dir> defaults to <data-dir>/log, and <fbank-dir> defaults to <data-dir>/data
Options:
  --nj <nj>             # number of parallel jobs
  --fs <fs>             # sampling rate (default=22050)
  --n_fft <n_fft>       # number of FFT points (default=1024)
  --n_shift <n_shift>   # shift size in points (default=256)
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.

make_fbank.sh

Usage: make_fbank.sh [options] <data-dir> [<log-dir> [<fbank-dir>]]
e.g.: make_fbank.sh data/train exp/make_fbank/train fbank
Note: <log-dir> defaults to <data-dir>/log, and <fbank-dir> defaults to <data-dir>/data
Options:
  --nj <nj>                                        # number of parallel jobs
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.
  --filetype <mat|hdf5|sound.hdf5>                 # Specify the format of feats file

make_stft.sh

Usage: make_stft.sh [options] <data-dir> [<log-dir> [<stft-dir>]]
e.g.: make_stft.sh data/train exp/make_stft/train stft
Note: <log-dir> defaults to <data-dir>/log, and <stft-dir> defaults to <data-dir>/data
Options:
  --nj <nj>                                        # number of parallel jobs
  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs.
  --filetype <mat|hdf5|sound.hdf5>                 # Specify the format of feats file

pack_model.sh

Usage: pack_model.sh --lm <lm> --dict <dict> <tr_conf> <dec_conf> <cmvn> <e2e>, for example:
<lm>:       exp/train_rnnlm/rnnlm.model.best
<dict>:     data/lang_char
<tr_conf>:  conf/train.yaml
<dec_conf>: conf/decode.yaml
<cmvn>:     data/tr_it/cmvn.ark
<e2e>:      exp/tr_it_pytorch_train/results/model.last10.avg.best
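
Putting the example values above together, the full command line would be:
    pack_model.sh --lm exp/train_rnnlm/rnnlm.model.best --dict data/lang_char conf/train.yaml conf/decode.yaml data/tr_it/cmvn.ark exp/tr_it_pytorch_train/results/model.last10.avg.best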

recog_wav.sh

Usage:
    recog_wav.sh [options] <wav_file>

Options:
    --backend <chainer|pytorch>     # chainer or pytorch (Default: pytorch)
    --ngpu <ngpu>                   # Number of GPUs (Default: 0)
    --decode_dir <directory_name>   # Name of directory to store decoding temporary data
    --models <model_name>           # Model name (e.g. tedlium2.transformer.v1)
    --cmvn <path>                   # Location of cmvn.ark
    --lang_model <path>             # Location of language model
    --recog_model <path>            # Location of E2E model
    --decode_config <path>          # Location of configuration file
    --api <api_version>             # API version (v1 or v2; available only with the pytorch backend)

Example:
    # Record audio from microphone input as example.wav
    rec -c 1 -r 16000 example.wav trim 0 5

    # Decode using model name
    recog_wav.sh --models tedlium2.transformer.v1 example.wav

    # Decode with streaming mode (only RNN with API v1 is supported)
    recog_wav.sh --models tedlium2.rnn.v2 --api v1 example.wav

    # Decode using model file
    recog_wav.sh --cmvn cmvn.ark --lang_model rnnlm.model.best --recog_model model.acc.best --decode_config conf/decode.yaml example.wav

    # Decode with GPU (requires batchsize > 0 in the configuration file)
    recog_wav.sh --ngpu 1 example.wav

Available models:
    - tedlium2.rnn.v1
    - tedlium2.rnn.v2
    - tedlium2.transformer.v1
    - tedlium3.transformer.v1
    - librispeech.transformer.v1
    - librispeech.transformer.v1.transformerlm.v1
    - commonvoice.transformer.v1
    - csj.transformer.v1

reduce_data_dir.sh

Usage: reduce_data_dir.sh <srcdir> <turnlist> <destdir>
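
A plausible invocation, assuming <turnlist> is a file listing the utterances to keep (paths are illustrative):
e.g. reduce_data_dir.sh data/train local/keep_utts.list data/train_reduced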

remove_longshortdata.sh

Usage: remove_longshortdata.sh <olddatadir> <newdatadir>
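
As the name suggests, this script removes utterances that are too long or too short; a plausible invocation (paths are illustrative):
e.g. remove_longshortdata.sh data/train data/train_trim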

score_bleu.sh

No help text is provided; as the name suggests, this script computes BLEU scores for translation output.

score_sclite.sh

Usage: score_sclite.sh <data-dir> <dict>
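
For example, scoring a decoding directory against the training dictionary (paths are illustrative):
e.g. score_sclite.sh exp/train/decode_test data/lang_1char/train_units.txt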

score_sclite_case.sh

No help text is provided; this is a variant of score_sclite.sh that takes case information into account when scoring.

score_sclite_wo_dict.sh

Usage: score_sclite_wo_dict.sh <data-dir>
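
For example (path is illustrative):
e.g. score_sclite_wo_dict.sh exp/train/decode_test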

show_result.sh

No help text is provided; as the name suggests, this script collects and displays the results of finished experiments.

speed_perturb.sh

Usage: speed_perturb.sh [options] <data-dir> <destination-dir> <fbankdir>
e.g.: speed_perturb.sh data/train data/train_sp fbank
Options:
  --cases                              # target case information (e.g., lc.rm, lc, tc)
  --speeds                             # speeds used in speed perturbation (e.g., 0.9, 1.0, 1.1)
  --langs                              # all languages (source + target)
  --write_utt2num_frames               # write utt2num_frames in steps/make_fbank_pitch.sh
  --cmd <run.pl|queue.pl <queue opts>> # how to run jobs
  --nj <nj>                            # number of parallel jobs

synth_wav.sh

Usage:
    $ synth_wav.sh <text_file>

Note:
    This script does not include the text frontend part. Please clean the
    input text manually. Also, you need to modify the feature configuration
    according to the model. The default setting is for ljspeech models, so
    if you want to use other pretrained models, please modify the parameters
    yourself. For our provided models, you can find the settings in the
    tables at https://github.com/espnet/espnet#tts-demo.
    If you are a beginner, we strongly recommend trying the following Colab
    notebook first instead of this script; it covers the whole procedure
    from text frontend through feature generation to waveform generation.
    https://colab.research.google.com/github/espnet/notebook/blob/master/tts_realtime_demo.ipynb

Example:
    # make a text file and then synthesize it
    # (for the default model, ljspeech, we use upper-case char sequence as the input)
    echo "THIS IS A DEMONSTRATION OF TEXT TO SPEECH." > example.txt
    synth_wav.sh example.txt

    # you can also use multiple lines of text
    echo "THIS IS A DEMONSTRATION OF TEXT TO SPEECH." > example.txt
    echo "TEXT TO SPEECH IS A TECHQNIQUE TO CONVERT TEXT INTO SPEECH." >> example.txt
    synth_wav.sh example.txt

    # you can specify a pretrained model
    synth_wav.sh --models ljspeech.transformer.v3 example.txt

    # you can also specify the vocoder model
    synth_wav.sh --vocoder_models ljspeech.wavenet.mol.v2 example.txt

Available models:
    - ljspeech.tacotron2.v1
    - ljspeech.tacotron2.v2
    - ljspeech.tacotron2.v3
    - ljspeech.transformer.v1
    - ljspeech.transformer.v2
    - ljspeech.transformer.v3
    - ljspeech.fastspeech.v1
    - ljspeech.fastspeech.v2
    - ljspeech.fastspeech.v3
    - libritts.tacotron2.v1
    - libritts.transformer.v1
    - jsut.transformer.v1
    - jsut.tacotron2.v1
    - csmsc.transformer.v1
    - csmsc.fastspeech.v3

Available vocoder models:
    - ljspeech.wavenet.softmax.ns.v1
    - ljspeech.wavenet.mol.v1
    - ljspeech.parallel_wavegan.v1
    - libritts.wavenet.mol.v1
    - jsut.wavenet.mol.v1
    - jsut.parallel_wavegan.v1
    - csmsc.wavenet.mol.v1
    - csmsc.parallel_wavegan.v1

Model details:
    | Model name              | Lang | Fs [Hz] | Mel range [Hz] | FFT / Shift / Win [pt] | Input type |
    | ----------------------- | ---- | ------- | -------------- | ---------------------- | ---------- |
    | ljspeech.tacotron2.v1   | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.tacotron2.v2   | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.tacotron2.v3   | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.transformer.v1 | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.transformer.v2 | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.transformer.v3 | EN   | 22.05k  | None           | 1024 / 256 / None      | phn        |
    | ljspeech.fastspeech.v1  | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.fastspeech.v2  | EN   | 22.05k  | None           | 1024 / 256 / None      | char       |
    | ljspeech.fastspeech.v3  | EN   | 22.05k  | None           | 1024 / 256 / None      | phn        |
    | libritts.tacotron2.v1   | EN   | 24k     | 80-7600        | 1024 / 256 / None      | char       |
    | libritts.transformer.v1 | EN   | 24k     | 80-7600        | 1024 / 256 / None      | char       |
    | jsut.tacotron2.v1       | JP   | 24k     | 80-7600        | 2048 / 300 / 1200      | phn        |
    | jsut.transformer.v1     | JP   | 24k     | 80-7600        | 2048 / 300 / 1200      | phn        |
    | csmsc.transformer.v1    | ZH   | 24k     | 80-7600        | 2048 / 300 / 1200      | pinyin     |
    | csmsc.fastspeech.v3     | ZH   | 24k     | 80-7600        | 2048 / 300 / 1200      | pinyin     |

Vocoder model details:
    | Model name                     | Lang | Fs [Hz] | Mel range [Hz] | FFT / Shift / Win [pt] | Model type       |
    | ------------------------------ | ---- | ------- | -------------- | ---------------------- | ---------------- |
    | ljspeech.wavenet.softmax.ns.v1 | EN   | 22.05k  | None           | 1024 / 256 / None      | Softmax WaveNet  |
    | ljspeech.wavenet.mol.v1        | EN   | 22.05k  | None           | 1024 / 256 / None      | MoL WaveNet      |
    | ljspeech.parallel_wavegan.v1   | EN   | 22.05k  | None           | 1024 / 256 / None      | Parallel WaveGAN |
    | libritts.wavenet.mol.v1        | EN   | 24k     | None           | 1024 / 256 / None      | MoL WaveNet      |
    | jsut.wavenet.mol.v1            | JP   | 24k     | 80-7600        | 2048 / 300 / 1200      | MoL WaveNet      |
    | jsut.parallel_wavegan.v1       | JP   | 24k     | 80-7600        | 2048 / 300 / 1200      | Parallel WaveGAN |
    | csmsc.wavenet.mol.v1           | ZH   | 24k     | 80-7600        | 2048 / 300 / 1200      | MoL WaveNet      |
    | csmsc.parallel_wavegan.v1      | ZH   | 24k     | 80-7600        | 2048 / 300 / 1200      | Parallel WaveGAN |

translate_wav.sh

Usage:
    translate_wav.sh [options] <wav_file>

Options:
    --ngpu <ngpu>                   # Number of GPUs (Default: 0)
    --decode_dir <directory_name>   # Name of directory to store decoding temporary data
    --models <model_name>           # Model name (e.g. must_c.transformer.v1.en-fr)
    --cmvn <path>                   # Location of cmvn.ark
    --trans_model <path>            # Location of E2E model
    --decode_config <path>          # Location of configuration file
    --api <api_version>             # API version (v1 or v2)

Example:
    # Record audio from microphone input as example.wav
    rec -c 1 -r 16000 example.wav trim 0 5

    # Decode using model name
    translate_wav.sh --models must_c.transformer.v1.en-fr example.wav

    # Decode using model file
    translate_wav.sh --cmvn cmvn.ark --trans_model model.acc.best --decode_config conf/decode.yaml example.wav

    # Decode with GPU (requires batchsize > 0 in the configuration file)
    translate_wav.sh --ngpu 1 example.wav

Available models:
    - must_c.transformer.v1.en-fr
    - fisher_callhome_spanish.transformer.v1.es-en

trim_silence.sh

Usage: trim_silence.sh [options] <data-dir> <log-dir>
e.g.: trim_silence.sh data/train exp/trim_silence/train
Options:
  --fs <fs>                      # sampling frequency (default=16000)
  --win_length <win_length>      # window length in point (default=1024)
  --shift_length <shift_length>  # shift length in point (default=256)
  --threshold <threshold>        # power threshold in dB (default=60)
  --min_silence <sec>            # minimum silence length in seconds (default=0.01)
  --normalize <bit>              # audio bit depth (default=16)
  --cmd <cmd>                    # how to run jobs (default=run.pl)
  --nj <nj>                      # number of parallel jobs (default=32)

update_json.sh

Usage: update_json.sh <json> <data-dir> <dict>
e.g. update_json.sh data/train/data.json data/train data/lang_1char/train_units.txt
Options:
  --oov <oov-word>                                 # Default: <unk>
  --verbose <num>                                  # Default: 0

spm_decode

usage: spm_decode [-h] --model MODEL [--input INPUT]
                  [--input_format {piece,id}]

optional arguments:
  --model MODEL         sentencepiece model to use for decoding
  --input INPUT         input file to decode
  --input_format {piece,id}
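
Since no output option is listed, the decoded text goes to stdout; a plausible invocation (paths are illustrative):
  spm_decode --model data/lang_char/bpe.model --input data/test/text.bpe --input_format piece > data/test/text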

spm_encode

usage: spm_encode [-h] --model MODEL [--inputs INPUTS [INPUTS ...]]
                  [--outputs OUTPUTS [OUTPUTS ...]]
                  [--output_format {piece,id}] [--min-len N] [--max-len N]

optional arguments:
  --model MODEL         sentencepiece model to use for encoding
  --inputs INPUTS [INPUTS ...]
                        input files to filter/encode
  --outputs OUTPUTS [OUTPUTS ...]
                        path to save encoded outputs
  --output_format {piece,id}
  --min-len N           filter sentence pairs with fewer than N tokens
  --max-len N           filter sentence pairs with more than N tokens
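
For example, encoding raw text into BPE pieces (paths are illustrative):
  spm_encode --model data/lang_char/bpe.model --output_format piece --inputs data/train/text.raw --outputs data/train/text.bpe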