Under "Tips and Tricks" in the seq2seq examples README (https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#tips-and-tricks), it says:

> Both finetuning and eval are 30% faster with --fp16. For that you need to install apex.

But the Pegasus documentation (https://huggingface.co/transformers/master/model_doc/pegasus.html#examples) says:

> FP16 is not supported (help/ideas on this appreciated!).

Also, on that same documentation page, the link labeled

> Script to fine-tune pegasus on the XSUM dataset.

leads to a 404: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh