espnetez.preprocess.sentencepiece.add_special_tokens
espnetez.preprocess.sentencepiece.add_special_tokens(tokenizer, converter, embedding, special_tokens, insert_after='<st_zho>')
Add special tokens to the tokenizer and extend the embedding accordingly. For detailed usage, please refer to the ESPnetEZ demo notebook for the SLU task.
Parameters:
- tokenizer – Sentencepiece tokenizer.
- converter – Sentencepiece converter.
- embedding – nn.Embedding object.
- special_tokens (list) – List of special tokens to add.
- insert_after (str) – Existing token after which the special tokens are inserted (default: '<st_zho>').
Returns: Tuple of (tokenizer, converter, embedding) – the updated tokenizer, the updated converter, and the updated embedding.
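To illustrate the idea behind this function, the sketch below shows conceptually how tokens might be inserted into a vocabulary after an anchor token, with the embedding grown to match. This is a plain-Python stand-in, not the ESPnetEZ implementation: the helper name, the list-based vocabulary, and the zero-initialized rows are all assumptions for illustration; the real function operates on a SentencePiece tokenizer/converter and an `nn.Embedding`.

```python
def add_special_tokens_sketch(vocab, embedding_rows, special_tokens,
                              insert_after="<st_zho>"):
    """Illustrative only: insert ``special_tokens`` into ``vocab`` right
    after ``insert_after`` and add matching zero-initialized rows to
    ``embedding_rows`` (a list of vectors standing in for an embedding
    weight matrix)."""
    idx = vocab.index(insert_after) + 1  # position just after the anchor token
    dim = len(embedding_rows[0])
    new_vocab = vocab[:idx] + special_tokens + vocab[idx:]
    new_rows = (
        embedding_rows[:idx]
        + [[0.0] * dim for _ in special_tokens]  # fresh rows for new tokens
        + embedding_rows[idx:]
    )
    return new_vocab, new_rows

vocab = ["<unk>", "<st_zho>", "hello", "world"]
rows = [[0.1, 0.2]] * len(vocab)
new_vocab, new_rows = add_special_tokens_sketch(
    vocab, rows, ["<task_slu>", "<lang_en>"]
)
```

After the call, `new_vocab` is `["<unk>", "<st_zho>", "<task_slu>", "<lang_en>", "hello", "world"]` and `new_rows` has one row per token, so token indices and embedding rows stay aligned.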