References:
To load this dataset in TFDS, use the following command:
import tensorflow_datasets as tfds
ds = tfds.load('huggingface:wiki_split')
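A minimal usage sketch, assuming your TFDS installation can resolve the `huggingface:` community namespace; the split names ('train', 'validation', 'test') match the split table below.

```python
import tensorflow_datasets as tfds

# Load a single split; with_info=True also returns the dataset metadata,
# which includes the split sizes listed in this card.
train_ds, info = tfds.load('huggingface:wiki_split', split='train', with_info=True)
print(info.splits)
```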
- Description:
One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia.
Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although
the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
- License: No known license
- Version: 0.1.0
- Splits:
Split | Examples |
---|---|
'test' | 5000 |
'train' | 989944 |
'validation' | 5000 |
- Features:
{
"complex_sentence": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"simple_sentence_1": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"simple_sentence_2": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
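
A minimal sketch of reading the three string features from a single example; the field names come from the feature spec above, and each value is a scalar string tensor that has to be decoded from bytes.

```python
import tensorflow_datasets as tfds

ds = tfds.load('huggingface:wiki_split', split='validation')
for example in ds.take(1):
    # Each example is a dict keyed by the feature names above.
    complex_sentence = example['complex_sentence'].numpy().decode('utf-8')
    simple_1 = example['simple_sentence_1'].numpy().decode('utf-8')
    simple_2 = example['simple_sentence_2'].numpy().decode('utf-8')
    print(complex_sentence)
    print(simple_1, '|', simple_2)
```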