References:
X-CSQA-en
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-en')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
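As a quick sanity check, the following is a minimal sketch (not part of the original dataset card) of how one example of this config can be loaded and inspected; it assumes a tensorflow_datasets installation in which the huggingface:xcsr community builders are available, and the field names simply follow the feature schema above.

import tensorflow_datasets as tfds

# Load only the validation split of the English config.
ds = tfds.load('huggingface:xcsr/X-CSQA-en', split='validation')

# Inspect a single example; keys mirror the schema above
# (id, lang, nested question with stem/choices, answerKey).
for example in tfds.as_numpy(ds.take(1)):
    print(example['id'], example['lang'], example['answerKey'])
    print(example['question'])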
X-CSQA-zh
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-zh')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-de
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-de')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-es
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-es')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-fr
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-fr')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-it
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-it')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-jap
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-jap')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-nl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-nl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-pl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-pl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-pt
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-pt')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-ru
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-ru')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-ar
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-ar')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-vi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-vi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-hi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-hi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-sw
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-sw')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CSQA-ur
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CSQA-ur')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1074 |
'validation' | 1000 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-en
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-en')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
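Compared with the X-CSQA configs, each X-CODAH config adds a question_tag field (see the schema above). Below is a hedged sketch, under the same TFDS assumptions as the earlier example, that tallies the tags over the test split.

from collections import Counter

import tensorflow_datasets as tfds

# Count question_tag values across the X-CODAH-en test split.
ds = tfds.load('huggingface:xcsr/X-CODAH-en', split='test')
tag_counts = Counter(example['question_tag'].decode('utf-8')
                     for example in tfds.as_numpy(ds))
print(tag_counts.most_common())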
X-CODAH-zh
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-zh')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-de
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-de')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-es
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-es')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-fr
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-fr')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-it
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-it')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-jap
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-jap')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-nl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-nl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-pl
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-pl')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-pt
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-pt')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-ru
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-ru')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-ar
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-ar')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-vi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-vi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-hi
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-hi')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-sw
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-sw')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
X-CODAH-ur
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:xcsr/X-CODAH-ur')
- Description:
To evaluate multi-lingual language models (ML-LMs) for commonsense reasoning in a cross-lingual zero-shot transfer setting (X-CSR), i.e., training in English and testing in other languages, we create two benchmark datasets, namely X-CSQA and X-CODAH. Specifically, we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR. As our goal is to evaluate different ML-LMs in a unified evaluation protocol for X-CSR, we argue that such translated examples, although they might contain noise, can serve as a starting benchmark for us to obtain meaningful analysis before more human-translated datasets become available in the future.
- License: No known license
- Version: 1.1.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'validation' | 300 |
- Features:
{
"id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"lang": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tag": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question": {
"feature": {
"stem": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"choices": {
"feature": {
"label": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
}
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"answerKey": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
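The description repeated in each section defines the X-CSR protocol as training in English and evaluating in the other languages. The loop below is only an illustrative sketch of that evaluation pass: the language suffixes come from the config names listed in this document, and evaluate_accuracy is a hypothetical placeholder for your own model and metric, not an API of TFDS or of this dataset.

import tensorflow_datasets as tfds

# Language suffixes of the X-CSQA/X-CODAH configs listed above.
LANGS = ['en', 'zh', 'de', 'es', 'fr', 'it', 'jap', 'nl',
         'pl', 'pt', 'ru', 'ar', 'vi', 'hi', 'sw', 'ur']

def evaluate_accuracy(model, examples):
    # Hypothetical placeholder: score `model` on an iterable of examples.
    raise NotImplementedError

for lang in LANGS:
    val_ds = tfds.load(f'huggingface:xcsr/X-CSQA-{lang}', split='validation')
    # A model fine-tuned only on English data would be scored here, e.g.:
    # accuracy = evaluate_accuracy(english_model, tfds.as_numpy(val_ds))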