References:
canonical
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:id_liputan6/canonical')
- Description:
In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from this http URL,
an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop
benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual
BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have
low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive
summarization models.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 10972 |
'train' | 193883 |
'validation' | 10972 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"clean_article": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"clean_summary": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"extractive_summary": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
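As a minimal sketch (assuming `tensorflow_datasets` is installed along with the `datasets` package that backs the `huggingface:` namespace, and that the split and feature names above are current), the canonical config can be loaded and a single record inspected like this:

import tensorflow_datasets as tfds

# Load the 'train' split of the canonical config.
ds = tfds.load('huggingface:id_liputan6/canonical', split='train')

# Each record carries the string features listed above:
# id, url, clean_article, clean_summary, extractive_summary.
for example in tfds.as_numpy(ds.take(1)):
    print(example['id'].decode('utf-8'))
    print(example['clean_summary'].decode('utf-8'))

Under tfds.as_numpy, string features come back as bytes, hence the explicit UTF-8 decoding.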
xtreme
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:id_liputan6/xtreme')
- Description:
In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from this http URL,
an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop
benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual
BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have
low ROUGE scores, and expose both issues with ROUGE itself, as well as with extractive and abstractive
summarization models.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 3862 |
'validation' | 4948 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"clean_article": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"clean_summary": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"extractive_summary": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
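A similar sketch for the xtreme config (same assumptions as above); since only 'validation' and 'test' splits are listed, the example counts can be checked against the split table via the dataset info object:

import tensorflow_datasets as tfds

# Load the 'validation' split of the xtreme config and keep the metadata.
ds, info = tfds.load('huggingface:id_liputan6/xtreme',
                     split='validation', with_info=True)

# Compare the reported split sizes with the table above.
print(info.splits['validation'].num_examples)
print(info.splits['test'].num_examples)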