References:
X-CSQA-en
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-en')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
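The listing above is the full feature schema. As a minimal sketch of how a loaded config might be inspected (assuming `tensorflow_datasets` is installed and can resolve the `huggingface:` prefix; the split and feature names are taken from the tables above):

```python
import tensorflow_datasets as tfds

# Load the English X-CSQA config; `ds` is a dict keyed by split name
# ('test' and 'validation'), each value a tf.data.Dataset.
ds = tfds.load('huggingface:xcsr/X-CSQA-en')

# Peek at one validation example; the fields mirror the schema above.
for example in tfds.as_numpy(ds['validation'].take(1)):
    print(example['id'], example['lang'], example['answerKey'])
    print(example['question'])  # nested stem / choices structure
```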
X-CSQA-zh
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-zh')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-de
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-de')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-es
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-es')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-fr
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-fr')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-it
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-it')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-jap
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-jap')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-nl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-nl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-pl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-pl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-pt
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-pt')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-ru
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-ru')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-ar
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-ar')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-vi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-vi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-hi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-hi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-sw
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-sw')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-ur
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-ur')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
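All X-CSQA configs above share the same schema and split sizes, differing only in the language suffix. A hedged sketch for loading several of them in one loop (the suffixes are copied from the config names listed above):

```python
import tensorflow_datasets as tfds

# Language suffixes of the X-CSQA configs documented above.
langs = ['en', 'zh', 'de', 'es', 'fr', 'it', 'jap', 'nl',
         'pl', 'pt', 'ru', 'ar', 'vi', 'hi', 'sw', 'ur']

# One entry per language; each value is itself a dict of splits.
xcsqa = {lang: tfds.load(f'huggingface:xcsr/X-CSQA-{lang}') for lang in langs}
```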
X-CODAH-en
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-en')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
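The X-CODAH configs differ from X-CSQA only in the extra `question_tag` field and the split sizes. A minimal sketch of reading one, under the same assumptions as the X-CSQA example above:

```python
import tensorflow_datasets as tfds

ds = tfds.load('huggingface:xcsr/X-CODAH-en')

for example in tfds.as_numpy(ds['test'].take(1)):
    # `question_tag` is a string field specific to X-CODAH (see schema above).
    print(example['question_tag'], example['answerKey'])
```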
X-CODAH-zh
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-zh')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-de
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-de')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-es
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-es')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-fr
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-fr')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-it
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-it')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-jap
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-jap')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-nl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-nl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-pl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-pl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-pt
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-pt')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-ru
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-ru')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-ar
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-ar')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-vi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-vi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-hi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-hi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-sw
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-sw')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-ur
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-ur')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and test in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis, before more human-translated datasets will be available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}