
google-pegasus: the Google AI team's PEGASUS architecture, a Transformer encoder-decoder


Open-source project name:

google-pegasus

Open-source project address:

https://gitee.com/mirrors/google-pegasus

Open-source project introduction:

PEGASUS library

Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence models, or PEGASUS, uses the self-supervised objective Gap Sentences Generation (GSG) to train a Transformer encoder-decoder model. The paper can be found on arXiv. Accepted at ICML 2020.

If you use this code or these models, please cite the following paper:

@misc{zhang2019pegasus,
    title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
    author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
    year={2019},
    eprint={1912.08777},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Results update

We train a PEGASUS model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results (ROUGE-1/ROUGE-2/ROUGE-L) are reported in this table.

dataset         C4                   HugeNews             Mixed & Stochastic
xsum            45.20/22.06/36.99    47.21/24.56/39.25    47.60/24.83/39.64
cnn_dailymail   43.90/21.20/40.76    44.17/21.47/41.11    44.16/21.56/41.30
newsroom        45.07/33.39/41.28    45.15/33.51/41.33    45.98/34.20/42.18
multi_news      46.74/17.95/24.26    47.52/18.72/24.91    47.65/18.75/24.95
gigaword        38.75/19.96/36.14    39.12/19.86/36.24    39.65/20.47/36.76
wikihow         43.07/19.70/34.79    41.35/18.51/33.42    46.39/22.12/38.41 *
reddit_tifu     26.54/8.94/21.64     26.63/9.01/21.60     27.99/9.81/22.94
big_patent      53.63/33.16/42.25    53.41/32.89/42.07    52.29/33.08/41.66 *
arxiv           44.70/17.27/25.80    44.67/17.18/25.73    44.21/16.95/25.67
pubmed          45.49/19.90/27.69    45.09/19.56/27.42    45.97/20.15/28.25
aeslc           37.69/21.85/36.84    37.40/21.22/36.45    37.68/21.25/36.51
billsum         57.20/39.56/45.80    57.31/40.19/45.82    59.67/41.58/47.59

The "Mixed & Stochastic" model has the following changes:

  • trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
  • trained for 1.5M steps instead of 500k (we observe slower convergence on pretraining perplexity).
  • the model uniformly samples a gap sentence ratio between 15% and 45%.
  • important sentences are sampled with 20% uniform noise added to their importance scores (illustrated in the sketch below).
  • the SentencePiece tokenizer is updated to be able to encode the newline character.

(*) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:

  • the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' SentencePiece tokenizer does not encode the newline and loses this information.
  • we update the BigPatent dataset to preserve casing; some formatting cleanup is also changed, please refer to the change in TFDS.
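
The stochastic gap-sentence selection described above can be sketched roughly as follows. This is a minimal plain-Python illustration, not the library's implementation: the importance score is a simple unigram-overlap F1 standing in for ROUGE-1, the mask token is a placeholder, and only the sampled 15%-45% ratio and the 20% uniform noise come from the list above.

import random

def rouge1_f(candidate, reference):
    """Unigram-overlap F1, standing in for the ROUGE-1 importance score."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    overlap = len(set(cand) & set(ref))
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def select_gap_sentences(sentences, min_ratio=0.15, max_ratio=0.45, noise=0.20):
    """Pick gap sentences with a sampled ratio and noisy importance scores."""
    ratio = random.uniform(min_ratio, max_ratio)   # sampled gap-sentence ratio
    n_gap = max(1, round(ratio * len(sentences)))
    scored = []
    for i, sent in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        score = rouge1_f(sent, rest) * random.uniform(1 - noise, 1 + noise)  # 20% noise
        scored.append((score, i))
    picked = set(i for _, i in sorted(scored, reverse=True)[:n_gap])
    # "<mask_1>" is a placeholder mask token; targets are the removed sentences.
    inputs = " ".join("<mask_1>" if i in picked else s for i, s in enumerate(sentences))
    targets = " ".join(sentences[i] for i in sorted(picked))
    return inputs, targets

doc = ["PEGASUS is an encoder-decoder model.",
       "It is pretrained with gap-sentence generation.",
       "Selected sentences are masked and become the target."]
inputs, targets = select_gap_sentences(doc)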

Setup

create an instance on google cloud with GPU (optional)

Please create a project first and create an instance

gcloud compute instances create \
  ${VM_NAME} \
  --zone=${ZONE} \
  --machine-type=n1-highmem-8 \
  --accelerator type=nvidia-tesla-v100,count=1 \
  --boot-disk-size=500GB \
  --image-project=ml-images \
  --image-family=tf-1-15 \
  --maintenance-policy TERMINATE --restart-on-failure

install library and dependencies

Clone the library from GitHub and install the requirements.

git clone https://github.com/google-research/pegasus
cd pegasus
export PYTHONPATH=.
pip3 install -r requirements.txt

Download vocab, pretrained and fine-tuned checkpoints of all experiments from Google Cloud.

Alternatively, in a terminal, follow the instructions to install gsutil. Then

mkdir ckpt
gsutil cp -r gs://pegasus_ckpt/ ckpt/

Finetuning on downstream datasets

on existing dataset

Finetune on an existing dataset aeslc.

python3 pegasus/bin/train.py --params=aeslc_transformer \
--param_overrides=vocab_filename=ckpt/pegasus_ckpt/c4.unigram.newline.10pct.96000.model \
--train_init_checkpoint=ckpt/pegasus_ckpt/model.ckpt-1500000 \
--model_dir=ckpt/pegasus_ckpt/aeslc

If you would like to finetune on a subset of a dataset, please refer to the example of input pattern.

Evaluate on the finetuned dataset.

python3 pegasus/bin/evaluate.py --params=aeslc_transformer \
--param_overrides=vocab_filename=ckpt/pegasus_ckpt/c4.unigram.newline.10pct.96000.model,batch_size=1,beam_size=5,beam_alpha=0.6 \
--model_dir=ckpt/pegasus_ckpt/aeslc

Note that the above example uses a single GPU, so the batch_size is much smaller than in the results reported in the paper.

add new finetuning dataset

Two types of dataset format are supported: TensorFlow Datasets (TFDS) or TFRecords.

This tutorial shows how to add a new dataset in TFDS. (The fine-tuning dataset is expected to be supervised; please provide supervised_keys in the dataset info, as in the sketch below.)
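
A minimal TFDS builder for such a supervised dataset might look like the sketch below, assuming the current tensorflow_datasets GeneratorBasedBuilder API; the class name (which TFDS turns into the dataset name new_tfds_dataset), the CSV paths, and the column names are hypothetical.

import csv

import tensorflow_datasets as tfds


class NewTfdsDataset(tfds.core.GeneratorBasedBuilder):
  """Hypothetical summarization dataset, registered by TFDS as `new_tfds_dataset`."""

  VERSION = tfds.core.Version("1.0.0")

  def _info(self):
    return tfds.core.DatasetInfo(
        builder=self,
        features=tfds.features.FeaturesDict({
            "inputs": tfds.features.Text(),
            "targets": tfds.features.Text(),
        }),
        # Required for fine-tuning: marks the dataset as supervised and names
        # the (input, target) pair.
        supervised_keys=("inputs", "targets"),
    )

  def _split_generators(self, dl_manager):
    # Placeholder paths; replace with real data locations or dl_manager downloads.
    return {
        "train": self._generate_examples("train.csv"),
        "validation": self._generate_examples("validation.csv"),
    }

  def _generate_examples(self, path):
    with open(path) as f:
      for idx, row in enumerate(csv.DictReader(f)):
        yield idx, {"inputs": row["document"], "targets": row["summary"]}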

The TFRecord format requires each record to be a tf.Example of {"inputs": tf.string, "targets": tf.string}.
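
A minimal writer for such records is sketched below; the output file name and the example pair are placeholders, while the feature names "inputs" and "targets" match the requirement above.

import tensorflow as tf


def _bytes_feature(text):
  """Wraps a Python string as a tf.train bytes feature."""
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=[text.encode("utf-8")]))


examples = [
    ("some long source document ...", "its reference summary"),
]

# Each record is a serialized tf.Example with string features "inputs" and "targets".
with tf.io.TFRecordWriter("new_dataset_files.tfrecord-00000-of-00001") as writer:
  for inputs, targets in examples:
    example = tf.train.Example(features=tf.train.Features(feature={
        "inputs": _bytes_feature(inputs),
        "targets": _bytes_feature(targets),
    }))
    writer.write(example.SerializeToString())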

For example, if you registered a TFDS dataset called new_tfds_dataset for training and evaluation, and have some files in tfrecord format called new_dataset_files.tfrecord* for test, they can be registered in /pegasus/params/public_params.py.

@registry.register("new_params")
def my_param(param_overrides):
  return public_params.transformer_params(
      {
          "train_pattern": "tfds:new_tfds_dataset,train",
          "dev_pattern": "tfds:new_tfds_dataset,validation",
          "test_pattern": "tfrecord:new_dataset_files.tfrecord*",
          "max_input_len": 512,
          "max_output_len": 128,
          "train_steps": 10000,
          "learning_rate": 0.0001,
          "batch_size": 8,
      }, param_overrides)
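
After registering new_params, training can be launched the same way as the aeslc example above, selecting the config by its registered name (the vocab and checkpoint paths follow the earlier commands; the model_dir is a placeholder):

python3 pegasus/bin/train.py --params=new_params \
--param_overrides=vocab_filename=ckpt/pegasus_ckpt/c4.unigram.newline.10pct.96000.model \
--train_init_checkpoint=ckpt/pegasus_ckpt/model.ckpt-1500000 \
--model_dir=ckpt/pegasus_ckpt/new_params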

Evaluation metrics.

Evaluation results can be found in model_dir. Summarization metrics are automatically calculated for each evaluation point.

  • ROUGE is the main metric for summarization quality (see the example after this list).

  • BLEU is an alternative quality metric for language generation.

  • Extractive Fragments Coverage & Density are metrics that measure the abstractiveness of the summary.

  • Repetition Rates measures generation repetition failure modes.

  • Length statistics measure the length distribution of decodes compared to the gold summary.
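
As a stand-alone illustration of the headline metric, ROUGE can be computed with Google's rouge-score package; this is only a sketch and not necessarily the exact scorer the evaluation pipeline uses internally, and the target/prediction strings are made up.

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

target = "pegasus is pretrained with a gap-sentences generation objective"
prediction = "pegasus pretrains with gap sentence generation"

# Each entry holds precision, recall and F-measure for that ROUGE variant.
scores = scorer.score(target, prediction)
for name, value in scores.items():
    print(name, round(value.fmeasure, 4))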

Several types of output files can be found in model_dir:

  • text_metrics-*.txt: above metrics in text format. Each row contains metric name, 95% lower bound value, mean value, 95% upper bound value.
  • inputs-*.txt, targets-*.txt, predictions-*.txt: raw text files of model inputs/outputs.

Pre-training

Pretraining (on C4 or any other corpus) requires a custom-built TensorFlow that includes ops for on-the-fly parsing, which process raw text documents into model input and target ids. Please refer to pegasus/ops/pretrain_parsing_ops.cc and pegasus/data/parsers.py for details.

Acknowledgements

Contains parts of code and design for training and evaluation of summarization models originally by Ben Goodrich [email protected].

