- Description:
GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics.
GEM aims to: (1) measure NLG progress across 13 datasets spanning many NLG tasks and languages; (2) provide an in-depth analysis of data and models, presented via data statements and challenge sets; and (3) develop standards for evaluating generated text using both automated and human metrics.
More information can be found at https://gem-benchmark.com.
Additional Documentation: Explore on Papers With Code
Homepage: https://gem-benchmark.com
Source code:
tfds.text.gem.Gem
Versions:
1.0.0: Initial version.
1.0.1: Update bad links filter for MLSum.
1.1.0 (default): Release of the Challenge Sets.
Supervised keys (See as_supervised doc): None
Figure (tfds.show_examples): Not supported.
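All configs below are exposed through the standard TFDS API. A minimal sketch of loading one of them (here the default gem/common_gen; any other config name listed below can be substituted), assuming TensorFlow Datasets is installed and the data can be downloaded:

```python
import tensorflow_datasets as tfds

# Load the default config; replace "gem/common_gen" with e.g. "gem/xsum"
# or "gem/web_nlg_en" to get a different config.
ds, info = tfds.load("gem/common_gen", split="train", with_info=True)

print(info.features)  # per-config feature structure, as documented below
for example in ds.take(1):
    print(example["gem_id"].numpy())  # every config carries a 'gem_id'
```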
gem/common_gen (default config)
Config description: CommonGen is a constrained text generation task, associated with a benchmark dataset, that explicitly tests machines for generative commonsense reasoning: given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts.
Download size:
1.84 MiB
Dataset size:
16.84 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 1,497 |
'train' | 67,389 |
'validation' | 993 |
- Feature structure:
FeaturesDict({
'concept_set_id': int32,
'concepts': Sequence(string),
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'target': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
concept_set_id | Tensor | | int32 | |
concepts | Sequence(Tensor) | (None,) | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
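A short sketch of how the common_gen features above can be read in Python; only the byte-string decoding is an assumption beyond what the schema states:

```python
import tensorflow_datasets as tfds

ds = tfds.load("gem/common_gen", split="validation")

for ex in ds.take(1):
    # 'concepts' is a variable-length Sequence(string); 'target' is the
    # reference sentence and 'references' holds the human references.
    concepts = [c.decode("utf-8") for c in ex["concepts"].numpy()]
    target = ex["target"].numpy().decode("utf-8")
    print(concepts, "->", target)
```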
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{lin2020commongen,
title = "CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning",
author = "Lin, Bill Yuchen and
Zhou, Wangchunshu and
Shen, Ming and
Zhou, Pei and
Bhagavatula, Chandra and
Choi, Yejin and
Ren, Xiang",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.165",
pages = "1823--1840",
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/cs_restaurants
Config description: The task is generating responses in the context of a (hypothetical) dialogue system that provides information about restaurants. The input is a basic intent/dialogue act type and a list of slots (attributes) and their values. The output is a natural language sentence.
Download size:
1.46 MiB
Dataset size:
2.71 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 842 |
'train' | 3,569 |
'validation' | 781 |
- Feature structure:
FeaturesDict({
'dialog_act': string,
'dialog_act_delexicalized': string,
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'target': string,
'target_delexicalized': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
dialog_act | Tensor | | string | |
dialog_act_delexicalized | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
target_delexicalized | Tensor | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{cs_restaurants,
address = {Tokyo, Japan},
title = {Neural {Generation} for {Czech}: {Data} and {Baselines} },
shorttitle = {Neural {Generation} for {Czech} },
url = {https://www.aclweb.org/anthology/W19-8670/},
urldate = {2019-10-18},
booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)},
author = {Dušek, Ondřej and Jurčíček, Filip},
month = oct,
year = {2019},
pages = {563--574}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/dart
Config description: DART is a large, open-domain structured DAta Record to Text generation corpus with high-quality sentence annotations; each input is a set of entity-relation triples following a tree-structured ontology.
Download size:
28.01 MiB
Dataset size:
33.78 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 6,959 |
'train' | 62,659 |
'validation' | 2,768 |
- Feature structure:
FeaturesDict({
'dart_id': int32,
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'subtree_was_extended': bool,
'target': string,
'target_sources': Sequence(string),
'tripleset': Sequence(string),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
dart_id | Tensor | | int32 | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
subtree_was_extended | Tensor | | bool | |
target | Tensor | | string | |
target_sources | Sequence(Tensor) | (None,) | string | |
tripleset | Sequence(Tensor) | (None,) | string | |
- Examples (tfds.as_dataframe):
- Citation:
@article{radev2020dart,
title={Dart: Open-domain structured data record to text generation},
author={Radev, Dragomir and Zhang, Rui and Rau, Amrit and Sivaprasad, Abhinand and Hsieh, Chiachun and Rajani, Nazneen Fatema and Tang, Xiangru and Vyas, Aadit and Verma, Neha and Krishna, Pranav and others},
journal={arXiv preprint arXiv:2007.02871},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/e2e_nlg
Config description: The E2E dataset is designed for a limited-domain data-to-text task -- generating restaurant descriptions/recommendations based on up to 8 different attributes (name, area, price range, etc.).
Download size:
13.99 MiB
Dataset size:
16.92 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 4,693 |
'train' | 33,525 |
'validation' | 4,299 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'meaning_representation': string,
'references': Sequence(string),
'target': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
meaning_representation | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{e2e_cleaned,
address = {Tokyo, Japan},
title = {Semantic {Noise} {Matters} for {Neural} {Natural} {Language} {Generation} },
url = {https://www.aclweb.org/anthology/W19-8652/},
booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)},
author = {Dušek, Ondřej and Howcroft, David M and Rieser, Verena},
year = {2019},
pages = {421--426},
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/mlsum_de
Config description: MLSum is a large-scale multilingual summarization dataset built from online news outlets; this config covers the German portion.
Download size:
345.98 MiB
Dataset size:
963.60 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'challenge_test_covid' | 5,058 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 10,695 |
'train' | 220,748 |
'validation' | 11,392 |
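The challenge sets introduced in version 1.1.0 (such as challenge_test_covid above) are ordinary named splits, so they can be requested like any other split; a minimal sketch, assuming the data has already been downloaded and prepared:

```python
import tensorflow_datasets as tfds

covid_test = tfds.load("gem/mlsum_de", split="challenge_test_covid")
print(covid_test.cardinality().numpy())  # 5,058 examples, per the table above
```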
- Feature structure:
FeaturesDict({
'date': string,
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'target': string,
'text': string,
'title': string,
'topic': string,
'url': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
date | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
text | Tensor | | string | |
title | Tensor | | string | |
topic | Tensor | | string | |
url | Tensor | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{scialom-etal-2020-mlsum,
title = "{MLSUM}: The Multilingual Summarization Corpus",
author = {Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/mlsum_es
Config description: MLSum is a large-scale multilingual summarization dataset built from online news outlets; this config covers the Spanish portion.
Download size:
501.27 MiB
Dataset size:
1.29 GiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'challenge_test_covid' | 1,938 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 13,366 |
'train' | 259,888 |
'validation' | 9,977 |
- Feature structure:
FeaturesDict({
'date': string,
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'target': string,
'text': string,
'title': string,
'topic': string,
'url': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
date | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
text | Tensor | | string | |
title | Tensor | | string | |
topic | Tensor | | string | |
url | Tensor | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{scialom-etal-2020-mlsum,
title = "{MLSUM}: The Multilingual Summarization Corpus",
author = {Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/schema_guided_dialog
Config description: The Schema-Guided Dialogue (SGD) dataset contains 18K multi-domain task-oriented dialogues between a human and a virtual assistant, covering 17 domains ranging from banks and events to media, calendar, travel, and weather.
Download size:
17.00 MiB
Dataset size:
201.19 MiB
Auto-cached (documentation): Yes (challenge_test_backtranslation, challenge_test_bfp02, challenge_test_bfp05, challenge_test_nopunc, challenge_test_scramble, challenge_train_sample, challenge_validation_sample, test, validation); only when shuffle_files=False (train)
Splits:
Split | Examples |
---|---|
'challenge_test_backtranslation' | 500 |
'challenge_test_bfp02' | 500 |
'challenge_test_bfp05' | 500 |
'challenge_test_nopunc' | 500 |
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 10,000 |
'train' | 164,982 |
'validation' | 10,000 |
- Feature structure:
FeaturesDict({
'context': Sequence(string),
'dialog_acts': Sequence({
'act': ClassLabel(shape=(), dtype=int64, num_classes=18),
'slot': string,
'values': Sequence(string),
}),
'dialog_id': string,
'gem_id': string,
'gem_parent_id': string,
'prompt': string,
'references': Sequence(string),
'service': string,
'target': string,
'turn_id': int32,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
context | Sequence(Tensor) | (None,) | string | |
dialog_acts | Sequence | | | |
dialog_acts/act | ClassLabel | | int64 | |
dialog_acts/slot | Tensor | | string | |
dialog_acts/values | Sequence(Tensor) | (None,) | string | |
dialog_id | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
prompt | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
service | Tensor | | string | |
target | Tensor | | string | |
turn_id | Tensor | | int32 | |
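The nested dialog_acts feature pairs a ClassLabel with slot/value strings. A sketch of mapping the act label back to its name, assuming the usual Sequence/FeaturesDict accessors on the dataset info:

```python
import tensorflow_datasets as tfds

ds, info = tfds.load("gem/schema_guided_dialog", split="validation", with_info=True)

# The Sequence feature wraps a FeaturesDict; its 'act' entry is the ClassLabel.
act_label = info.features["dialog_acts"].feature["act"]

for ex in ds.take(1):
    for act_id in ex["dialog_acts"]["act"].numpy():
        print(act_label.int2str(int(act_id)))
    print(ex["target"].numpy().decode("utf-8"))
```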
- Examples (tfds.as_dataframe):
- Citation:
@article{rastogi2019towards,
title={Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset},
author={Rastogi, Abhinav and Zang, Xiaoxue and Sunkara, Srinivas and Gupta, Raghav and Khaitan, Pranav},
journal={arXiv preprint arXiv:1909.05855},
year={2019}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/totto
Config description: ToTTo is a Table-to-Text NLG task. The task is as follows: Given a Wikipedia table with row names, column names and table cells, with a subset of cells highlighted, generate a natural language description for the highlighted part of the table.
Download size:
180.75 MiB
Dataset size:
645.86 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 7,700 |
'train' | 121,153 |
'validation' | 7,700 |
- Feature structure:
FeaturesDict({
'example_id': string,
'gem_id': string,
'gem_parent_id': string,
'highlighted_cells': Sequence(Sequence(int32)),
'overlap_subset': string,
'references': Sequence(string),
'sentence_annotations': Sequence({
'final_sentence': string,
'original_sentence': string,
'sentence_after_ambiguity': string,
'sentence_after_deletion': string,
}),
'table': Sequence(Sequence({
'column_span': int32,
'is_header': bool,
'row_span': int32,
'value': string,
})),
'table_page_title': string,
'table_section_text': string,
'table_section_title': string,
'table_webpage_url': string,
'target': string,
'totto_id': int32,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
example_id | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
highlighted_cells | Sequence(Sequence(Tensor)) | (None, None) | int32 | |
overlap_subset | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
sentence_annotations | Sequence | | | |
sentence_annotations/final_sentence | Tensor | | string | |
sentence_annotations/original_sentence | Tensor | | string | |
sentence_annotations/sentence_after_ambiguity | Tensor | | string | |
sentence_annotations/sentence_after_deletion | Tensor | | string | |
table | Sequence | | | |
table/column_span | Tensor | | int32 | |
table/is_header | Tensor | | bool | |
table/row_span | Tensor | | int32 | |
table/value | Tensor | | string | |
table_page_title | Tensor | | string | |
table_section_text | Tensor | | string | |
table_section_title | Tensor | | string | |
table_webpage_url | Tensor | | string | |
target | Tensor | | string | |
totto_id | Tensor | | int32 | |
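A minimal sketch of inspecting a ToTTo example; note that reading each highlighted_cells entry as a (row, column) index into the nested table is an assumption carried over from the upstream ToTTo release, not something the schema itself states:

```python
import tensorflow_datasets as tfds

ds = tfds.load("gem/totto", split="validation")

for ex in ds.take(1):
    print(ex["table_page_title"].numpy().decode("utf-8"))
    # Assumed to be (row, column) index pairs into the nested 'table' feature.
    print(ex["highlighted_cells"].numpy())
    print(ex["target"].numpy().decode("utf-8"))
```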
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{parikh2020totto,
title={ToTTo: A Controlled Table-To-Text Generation Dataset},
author={Parikh, Ankur and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={1173--1186},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/web_nlg_en
Config description: WebNLG is a bi-lingual dataset (English, Russian) of parallel DBpedia triple sets and short texts that cover about 450 different DBpedia properties. The WebNLG data was originally created to promote the development of RDF verbalisers able to generate short text and to handle micro-planning.
Download size:
12.57 MiB
Dataset size:
19.91 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'challenge_test_numbers' | 500 |
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 502 |
'challenge_validation_sample' | 499 |
'test' | 1,779 |
'train' | 35,426 |
'validation' | 1,667 |
- Feature structure:
FeaturesDict({
'category': string,
'gem_id': string,
'gem_parent_id': string,
'input': Sequence(string),
'references': Sequence(string),
'target': string,
'webnlg_id': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
category | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
input | Sequence(Tensor) | (None,) | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
webnlg_id | Tensor | | string | |
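The input feature is the set of DBpedia triples serialised as strings. A common (but not prescribed) way to feed them to a sequence-to-sequence model is a simple linearisation, sketched below:

```python
import tensorflow_datasets as tfds

ds = tfds.load("gem/web_nlg_en", split="validation")

for ex in ds.take(1):
    # Join the triple strings into a single source sequence.
    triples = [t.decode("utf-8") for t in ex["input"].numpy()]
    prompt = " | ".join(triples)
    print(prompt, "->", ex["target"].numpy().decode("utf-8"))
```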
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{gardent2017creating,
author = "Gardent, Claire
and Shimorina, Anastasia
and Narayan, Shashi
and Perez-Beltrachini, Laura",
title = "Creating Training Corpora for NLG Micro-Planners",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2017",
publisher = "Association for Computational Linguistics",
pages = "179--188",
location = "Vancouver, Canada",
doi = "10.18653/v1/P17-1017",
url = "http://www.aclweb.org/anthology/P17-1017"
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/web_nlg_ru
Config description: WebNLG is a bi-lingual dataset (English, Russian) of parallel DBpedia triple sets and short texts that cover about 450 different DBpedia properties. The WebNLG data was originally created to promote the development of RDF verbalisers able to generate short text and to handle micro-planning.
Download size:
7.49 MiB
Dataset size:
11.30 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'challenge_test_scramble' | 500 |
'challenge_train_sample' | 501 |
'challenge_validation_sample' | 500 |
'test' | 1,102 |
'train' | 14,630 |
'validation' | 790 |
- Feature structure:
FeaturesDict({
'category': string,
'gem_id': string,
'gem_parent_id': string,
'input': Sequence(string),
'references': Sequence(string),
'target': string,
'webnlg_id': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
category | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
input | Sequence(Tensor) | (None,) | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
webnlg_id | Tensor | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{gardent2017creating,
author = "Gardent, Claire
and Shimorina, Anastasia
and Narayan, Shashi
and Perez-Beltrachini, Laura",
title = "Creating Training Corpora for NLG Micro-Planners",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2017",
publisher = "Association for Computational Linguistics",
pages = "179--188",
location = "Vancouver, Canada",
doi = "10.18653/v1/P17-1017",
url = "http://www.aclweb.org/anthology/P17-1017"
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_auto_asset_turk
Config description: WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia as a resource to train sentence simplification systems. ASSET and TURK are high-quality simplification datasets used for testing.
Download size:
121.01 MiB
Dataset size:
202.40 MiB
Auto-cached (documentation): Yes (challenge_test_asset_backtranslation, challenge_test_asset_bfp02, challenge_test_asset_bfp05, challenge_test_asset_nopunc, challenge_test_turk_backtranslation, challenge_test_turk_bfp02, challenge_test_turk_bfp05, challenge_test_turk_nopunc, challenge_train_sample, challenge_validation_sample, test_asset, test_turk, validation); only when shuffle_files=False (train)
Splits:
Split | Examples |
---|---|
'challenge_test_asset_backtranslation' | 359 |
'challenge_test_asset_bfp02' | 359 |
'challenge_test_asset_bfp05' | 359 |
'challenge_test_asset_nopunc' | 359 |
'challenge_test_turk_backtranslation' | 359 |
'challenge_test_turk_bfp02' | 359 |
'challenge_test_turk_bfp05' | 359 |
'challenge_test_turk_nopunc' | 359 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test_asset' | 359 |
'test_turk' | 359 |
'train' | 483,801 |
'validation' | 20,000 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'target': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | | string | |
target | Tensor | | string | |
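Because ASSET and TURK are separate evaluation sets, they are exposed as two distinct test splits; a minimal sketch of loading both at once:

```python
import tensorflow_datasets as tfds

test_asset, test_turk = tfds.load(
    "gem/wiki_auto_asset_turk", split=["test_asset", "test_turk"]
)
print(test_asset.cardinality().numpy(), test_turk.cardinality().numpy())  # 359 each
```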
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{jiang-etal-2020-neural,
title = "Neural {CRF} Model for Sentence Alignment in Text Simplification",
author = "Jiang, Chao and
Maddela, Mounica and
Lan, Wuwei and
Zhong, Yang and
Xu, Wei",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.709",
doi = "10.18653/v1/2020.acl-main.709",
pages = "7943--7960",
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/xsum
Config description: The dataset is for the task of abstractive summarization in its extreme form: summarizing a document in a single sentence.
Download size:
246.31 MiB
Dataset size:
78.89 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'challenge_test_backtranslation' | 500 |
'challenge_test_bfp_02' | 500 |
'challenge_test_bfp_05' | 500 |
'challenge_test_covid' | 401 |
'challenge_test_nopunc' | 500 |
'challenge_train_sample' | 500 |
'challenge_validation_sample' | 500 |
'test' | 1,166 |
'train' | 23,206 |
'validation' | 1,117 |
- Feature structure:
FeaturesDict({
'document': string,
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'target': string,
'xsum_id': string,
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
document | Tensor | | string | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
target | Tensor | | string | |
xsum_id | Tensor | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{Narayan2018dont,
author = "Shashi Narayan and Shay B. Cohen and Mirella Lapata",
title = "Don't Give Me the Details, Just the Summary! {T}opic-Aware Convolutional Neural Networks for Extreme Summarization",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing ",
year = "2018",
address = "Brussels, Belgium",
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_arabic_ar
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
56.25 MiB
Dataset size:
291.42 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 5,841 |
'train' | 20,441 |
'validation' | 2,919 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'ar': Text(shape=(), dtype=string),
'en': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'ar': Text(shape=(), dtype=string),
'en': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | | string | |
source_aligned | Translation | | | |
source_aligned/ar | Text | | string | |
source_aligned/en | Text | | string | |
target | Tensor | | string | |
target_aligned | Translation | | | |
target_aligned/ar | Text | | string | |
target_aligned/en | Text | | string | |
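The source_aligned and target_aligned Translation features hold the aligned Arabic/English pair for the same article; a minimal sketch of reading them:

```python
import tensorflow_datasets as tfds

ds = tfds.load("gem/wiki_lingua_arabic_ar", split="validation")

for ex in ds.take(1):
    # Each Translation feature is a dict keyed by language code.
    print(ex["source_aligned"]["ar"].numpy().decode("utf-8")[:100])
    print(ex["source_aligned"]["en"].numpy().decode("utf-8")[:100])
```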
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_chinese_zh
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
31.38 MiB
Dataset size:
122.06 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 3,775 |
'train' | 13,211 |
'validation' | 1,886 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'zh': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'zh': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | | string | |
source_aligned | Translation | | | |
source_aligned/en | Text | | string | |
source_aligned/zh | Text | | string | |
target | Tensor | | string | |
target_aligned | Translation | | | |
target_aligned/en | Text | | string | |
target_aligned/zh | Text | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_czech_cs
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
13.84 MiB
Dataset size:
58.05 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 1,438 |
'train' | 5,033 |
'validation' | 718 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'cs': Text(shape=(), dtype=string),
'en': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'cs': Text(shape=(), dtype=string),
'en': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | | string | |
source_aligned | Translation | | | |
source_aligned/cs | Text | | string | |
source_aligned/en | Text | | string | |
target | Tensor | | string | |
target_aligned | Translation | | | |
target_aligned/cs | Text | | string | |
target_aligned/en | Text | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_dutch_nl
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
53.88 MiB
Dataset size:
237.97 MiB
Auto-cached (documentation): Yes (test, validation); only when shuffle_files=False (train)
Splits:
Split | Examples |
---|---|
'test' | 6,248 |
'train' | 21,866 |
'validation' | 3,123 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'nl': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'nl': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | | string | |
source_aligned | Translation | | | |
source_aligned/en | Text | | string | |
source_aligned/nl | Text | | string | |
target | Tensor | | string | |
target_aligned | Translation | | | |
target_aligned/en | Text | | string | |
target_aligned/nl | Text | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_english_en
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
112.56 MiB
Dataset size:
657.51 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 28,614 |
'train' | 99,020 |
'validation' | 13,823 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | | | | |
gem_id | Tensor | | string | |
gem_parent_id | Tensor | | string | |
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | | string | |
source_aligned | Translation | | | |
source_aligned/en | Text | | string | |
target | Tensor | | string | |
target_aligned | Translation | | | |
target_aligned/en | Text | | string | |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a} }o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_french_fr
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
113.26 MiB
Dataset size:
522.28 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 12,731 |
'train' | 44,556 |
'validation' | 6,364 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'fr': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'fr': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/fr | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/fr | Text | string |
- Examples (tfds.as_dataframe):
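The snippet below is a minimal usage sketch rather than part of the catalog: assuming tensorflow_datasets is installed, it loads this config with tfds.load and prints one record, using only the split names and feature keys documented above.

```python
import tensorflow_datasets as tfds

# Supervised keys are None, so each element is a feature dictionary.
ds = tfds.load("gem/wiki_lingua_french_fr", split="validation")

for example in ds.take(1):
    # Scalar string features arrive as byte tensors; decode them for display.
    print(example["source"].numpy().decode("utf-8")[:200])
    print(example["target"].numpy().decode("utf-8")[:200])
    # 'references' is a variable-length sequence of reference summaries.
    print([r.decode("utf-8") for r in example["references"].numpy()])
```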
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_german_de
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
102.65 MiB
Dataset size:
452.46 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 11,669 |
'train' | 40,839 |
'validation' | 5,833 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'de': Text(shape=(), dtype=string),
'en': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'de': Text(shape=(), dtype=string),
'en': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/de | Text | string | ||
source_aligned/en | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/de | Text | string | ||
target_aligned/en | Text | string |
- Examples (tfds.as_dataframe):
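As a hedged illustration (not an official catalog example), the aligned Translation features can be read as nested dictionaries keyed by language code; the sketch below assumes the German config has been prepared locally.

```python
import tensorflow_datasets as tfds

ds = tfds.load("gem/wiki_lingua_german_de", split="test")

for example in ds.take(1):
    # source_aligned/target_aligned hold the German text and its English
    # counterpart for the same article.
    src_de = example["source_aligned"]["de"].numpy().decode("utf-8")
    src_en = example["source_aligned"]["en"].numpy().decode("utf-8")
    tgt_en = example["target_aligned"]["en"].numpy().decode("utf-8")
    print(src_de[:120])
    print(src_en[:120])
    print(tgt_en[:120])
```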
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_hindi_hi
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
20.07 MiB
Dataset size:
138.06 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 1,984 |
'train' | 6,942 |
'validation' | 991 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'hi': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'hi': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/hi | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/hi | Text | string |
- Examples (tfds.as_dataframe):
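Since the catalog references tfds.as_dataframe, here is a small sketch (an assumption about typical usage, not an official example) that renders a few Hindi records as a pandas DataFrame for quick inspection.

```python
import tensorflow_datasets as tfds

ds, info = tfds.load("gem/wiki_lingua_hindi_hi", split="validation", with_info=True)

# tfds.as_dataframe flattens nested features (e.g. source_aligned/en)
# into columns, which makes eyeballing the Hindi/English alignment easy.
df = tfds.as_dataframe(ds.take(3), info)
print(df[["source", "target"]])
```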
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_indonesian_id
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
80.08 MiB
Dataset size:
370.63 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 9,497 |
'train' | 33,237 |
'validation' | 4,747 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'id': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'id': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/id | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/id | Text | string |
- Examples (tfds.as_dataframe):
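Because the supervised keys are None, a (source, target) pair has to be built by hand before feeding a summarization model; the following is one plausible way to do so under that assumption, not a prescribed recipe.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load("gem/wiki_lingua_indonesian_id", split="train")

# Project the feature dictionary down to an (article, summary) pair.
pairs = ds.map(
    lambda ex: (ex["source"], ex["target"]),
    num_parallel_calls=tf.data.AUTOTUNE,
)

for src, tgt in pairs.take(1):
    print(src.numpy()[:100], tgt.numpy()[:100])
```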
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_italian_it
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
84.80 MiB
Dataset size:
374.40 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 10,189 |
'train' | 35,661 |
'validation' | 5,093 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'it': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'it': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/it | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/it | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_japanese_ja
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
21.75 MiB
Dataset size:
103.19 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 2,530 |
'train' | 8,853 |
'validation' | 1,264 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'ja': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'ja': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/ja | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/ja | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_korean_ko
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
22.26 MiB
Dataset size:
102.35 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 2,436 |
'train' | 8,524 |
'validation' | 1,216 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'ko': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'ko': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/ko | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/ko | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_portuguese_pt
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
131.17 MiB
Dataset size:
570.46 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 16,331 |
'train' | 57,159 |
'validation' | 8,165 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'pt': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'pt': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/pt | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/pt | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_russian_ru
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
101.36 MiB
Dataset size:
564.69 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 10,580 |
'train' | 37,028 |
'validation' | 5,288 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'ru': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'ru': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/ru | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/ru | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_spanish_es
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
189.06 MiB
Dataset size:
849.75 MiB
Auto-cached (documentation): No
Splits:
Split | Examples |
---|---|
'test' | 22,632 |
'train' | 79,212 |
'validation' | 11,316 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'es': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'es': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/es | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/es | Text | string |
- Examples (tfds.as_dataframe):
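To check the schema without materializing the data, one can inspect the builder's info object; this is a sketch of that approach, assuming the standard TFDS registration of the Spanish config.

```python
import tensorflow_datasets as tfds

builder = tfds.builder("gem/wiki_lingua_spanish_es")

# info.features mirrors the feature structure documented above,
# including the nested Translation({'en', 'es'}) fields.
print(builder.info.features)
print(builder.info.features["source_aligned"])
```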
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_thai_th
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
28.60 MiB
Dataset size:
193.77 MiB
Auto-cached (documentation): Yes (test, validation), Only when shuffle_files=False (train)
Splits:
Split | Examples |
---|---|
'test' | 2,950 |
'train' | 10,325 |
'validation' | 1,475 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'th': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'th': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/th | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/th | Text | string |
- Examples (tfds.as_dataframe):
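As noted in the auto-cache entry above, the Thai train split is only auto-cached when file shuffling is disabled; a minimal sketch of loading it that way follows (assuming tensorflow_datasets is installed).

```python
import tensorflow_datasets as tfds

train_ds, info = tfds.load(
    "gem/wiki_lingua_thai_th",
    split="train",
    shuffle_files=False,  # keeps the split eligible for auto-caching
    with_info=True,
)

print(info.splits["train"].num_examples)  # 10,325 per the split table above
```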
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_turkish_tr
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
6.73 MiB
Dataset size:
30.75 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 900 |
'train' | 3,148 |
'validation' | 449 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'tr': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'tr': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/tr | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/tr | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
gem/wiki_lingua_vietnamese_vi
Config description: WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems.
Download size:
36.27 MiB
Dataset size:
179.77 MiB
Auto-cached (documentation): Yes
Splits:
Split | Examples |
---|---|
'test' | 3,917 |
'train' | 13,707 |
'validation' | 1,957 |
- Feature structure:
FeaturesDict({
'gem_id': string,
'gem_parent_id': string,
'references': Sequence(string),
'source': string,
'source_aligned': Translation({
'en': Text(shape=(), dtype=string),
'vi': Text(shape=(), dtype=string),
}),
'target': string,
'target_aligned': Translation({
'en': Text(shape=(), dtype=string),
'vi': Text(shape=(), dtype=string),
}),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
gem_id | Tensor | string | ||
gem_parent_id | Tensor | string | ||
references | Sequence(Tensor) | (None,) | string | |
source | Tensor | string | ||
source_aligned | Translation | |||
source_aligned/en | Text | string | ||
source_aligned/vi | Text | string | ||
target | Tensor | string | ||
target_aligned | Translation | |||
target_aligned/en | Text | string | ||
target_aligned/vi | Text | string |
- Examples (tfds.as_dataframe):
- Citation:
@inproceedings{ladhak-wiki-2020,
title={WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
author={Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
booktitle={Findings of EMNLP, 2020},
year={2020}
}
@article{gehrmann2021gem,
author = {Sebastian Gehrmann and
Tosin P. Adewumi and
Karmanya Aggarwal and
Pawan Sasanka Ammanamanchi and
Aremu Anuoluwapo and
Antoine Bosselut and
Khyathi Raghavi Chandu and
Miruna{-}Adriana Clinciu and
Dipanjan Das and
Kaustubh D. Dhole and
Wanyu Du and
Esin Durmus and
Ondrej Dusek and
Chris Emezue and
Varun Gangal and
Cristina Garbacea and
Tatsunori Hashimoto and
Yufang Hou and
Yacine Jernite and
Harsh Jhamtani and
Yangfeng Ji and
Shailza Jolly and
Dhruv Kumar and
Faisal Ladhak and
Aman Madaan and
Mounica Maddela and
Khyati Mahajan and
Saad Mahamood and
Bodhisattwa Prasad Majumder and
Pedro Henrique Martins and
Angelina McMillan{-}Major and
Simon Mille and
Emiel van Miltenburg and
Moin Nadeem and
Shashi Narayan and
Vitaly Nikolaev and
Rubungo Andre Niyongabo and
Salomey Osei and
Ankur P. Parikh and
Laura Perez{-}Beltrachini and
Niranjan Ramesh Rao and
Vikas Raunak and
Juan Diego Rodriguez and
Sashank Santhanam and
Jo{\~{a}}o Sedoc and
Thibault Sellam and
Samira Shaikh and
Anastasia Shimorina and
Marco Antonio Sobrevilla Cabezudo and
Hendrik Strobelt and
Nishant Subramani and
Wei Xu and
Diyi Yang and
Akhila Yerukola and
Jiawei Zhou},
title = {The {GEM} Benchmark: Natural Language Generation, its Evaluation and
Metrics},
journal = {CoRR},
volume = {abs/2102.01672},
year = {2021},
url = {https://arxiv.org/abs/2102.01672},
archivePrefix = {arXiv},
eprint = {2102.01672}
}
Note that each GEM dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.