large_spanish_corpus

References:

JRC

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/JRC')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     3410620
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
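
As a quick sanity check on the load command above, the sketch below is only an illustration (it assumes the 'huggingface:' community namespace is available in your TFDS installation); it loads the 'train' split of the JRC config and prints the first few raw text examples:

import tensorflow_datasets as tfds

# Load the 'train' split of the JRC config.
ds = tfds.load('huggingface:large_spanish_corpus/JRC', split='train')

# Each example is a dict with a single 'text' string feature.
for example in ds.take(3):
    print(example['text'].numpy().decode('utf-8'))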

EMEA

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/EMEA')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     1221233
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

GlobalVoices

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/GlobalVoices')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     897075
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

ECB

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/ECB')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     1875738
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

DOGC

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/DOGC')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     10917053
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

all_wikis

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/all_wikis')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     28109484
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

TED

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/TED')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     157910
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

multiUN

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/multiUN')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     13127490
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

Europarl

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/Europarl')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     2174141
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

NewsCommentary11

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/NewsCommentary11')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     288771
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

UN

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/UN')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     74067
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

EUBookShop

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/EUBookShop')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     8214959
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

ParaCrawl

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/ParaCrawl')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     15510649
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

OpenSubtitles2018

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/OpenSubtitles2018')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     213508602
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

DGT

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/DGT')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     3168368
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

combined

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/combined')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split       Examples
'train'     302656160
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
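
The description notes that the "combined" config loads all corpora and that the amount of data returned can be controlled through the "split" argument. The sketch below is only an illustration using standard TFDS subsplit notation; the slice value is arbitrary, and the exact per-corpus sampling behaviour depends on the underlying Hugging Face builder:

import tensorflow_datasets as tfds

# Load only the first 100,000 examples of the combined 'train' split
# using TFDS subsplit notation (an illustrative value, not a recommendation).
sample = tfds.load('huggingface:large_spanish_corpus/combined',
                   split='train[:100000]')

# Inspect one raw text example from the subsample.
for example in sample.take(1):
    print(example['text'].numpy().decode('utf-8'))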