References:
wikitext-103-v1
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:wikitext/wikitext-103-v1')
- Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License.
- License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 4358 |
'train' | 1801350 |
'validation' | 3760 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
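As a quick check after loading, the snippet below pulls the train split and prints the first few text rows. This is a minimal sketch: it assumes TensorFlow Datasets is installed along with the Hugging Face `datasets` package, which backs the `huggingface:` namespace.

import tensorflow_datasets as tfds

# Load the train split of wikitext-103-v1 through the Hugging Face
# namespace (TFDS delegates this to the `datasets` library).
ds = tfds.load('huggingface:wikitext/wikitext-103-v1', split='train')

# Each example is a dict with a single string feature 'text'.
for example in ds.take(3):
    print(example['text'].numpy().decode('utf-8'))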
wikitext-2-v1
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:wikitext/wikitext-2-v1')
- Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License.
- License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 4358 |
'train' | 36718 |
'validation' | 3760 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
wikitext-103-raw-v1
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:wikitext/wikitext-103-raw-v1')
- Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License.
- License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 4358 |
'train' | 1801350 |
'validation' | 3760 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
wikitext-2-raw-v1
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:wikitext/wikitext-2-raw-v1')
- Description:
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License.
- License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 4358 |
'train' | 36718 |
'validation' | 3760 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
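The raw configs keep the original article text (the non-raw configs are the word-level tokenized variants with rare words replaced by <unk>), so they can also be loaded directly with the Hugging Face `datasets` library. The sketch below assumes that library is installed and uses the same config name as the TFDS command above.

from datasets import load_dataset

# Load the validation split of the raw config directly from Hugging Face.
raw = load_dataset('wikitext', 'wikitext-2-raw-v1', split='validation')

print(raw.num_rows)      # expected to match the split table above (3760)
print(raw[0]['text'])    # first raw text row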