codah
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:codah/codah')
- Description:
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'train' | 2776 |
- Features:
{
"id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question_category": {
"num_classes": 6,
"names": [
"Idioms",
"Reference",
"Polysemy",
"Negation",
"Quantitative",
"Others"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
"question_propmt": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"candidate_answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"correct_answer_idx": {
"dtype": "int32",
"id": null,
"_type": "Value"
}
}
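The feature schema above maps directly onto the tensors returned by tfds.load. Below is a minimal sketch, assuming the huggingface:codah community builder is available in your TFDS installation and downloads are allowed, that loads the train split and decodes one example:

import tensorflow_datasets as tfds

# Load the single-split "codah" configuration; with_info=True also returns the
# DatasetInfo object that carries the feature schema shown above.
ds, info = tfds.load('huggingface:codah/codah', split='train', with_info=True)

# The ClassLabel feature maps integer ids back to the category names.
category_names = info.features['question_category'].names

for example in ds.take(1):
    category = category_names[int(example['question_category'].numpy())]
    # Note: 'question_propmt' is the feature name as published, typo included.
    prompt = example['question_propmt'].numpy().decode('utf-8')
    answers = [a.decode('utf-8') for a in example['candidate_answers'].numpy()]
    correct = answers[int(example['correct_answer_idx'].numpy())]
    print(f'[{category}] {prompt} -> {correct}')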
fold_0
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:codah/fold_0')
- Description:
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 555 |
'train' | 1665 |
'validation' | 556 |
- Features:
{
"id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question_category": {
"num_classes": 6,
"names": [
"Idioms",
"Reference",
"Polysemy",
"Negation",
"Quantitative",
"Others"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
"question_propmt": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"candidate_answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"correct_answer_idx": {
"dtype": "int32",
"id": null,
"_type": "Value"
}
}
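Each fold configuration exposes the three splits listed above, so train, validation, and test can be requested in a single call. A minimal sketch, under the same assumption that the community builder is available:

import tensorflow_datasets as tfds

# Request all three splits of fold_0 at once; the per-split sizes should match
# the table above (train 1665, validation 556, test 555).
train_ds, val_ds, test_ds = tfds.load(
    'huggingface:codah/fold_0', split=['train', 'validation', 'test'])

# Shuffle and batch the training split; keep validation and test in file order.
train_batches = train_ds.shuffle(1_000, seed=0).batch(32)
val_batches = val_ds.batch(32)
test_batches = test_ds.batch(32)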
fold_1
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:codah/fold_1')
- Description:
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 555 |
'train' | 1665 |
'validation' | 556 |
- Features:
{
"id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question_category": {
"num_classes": 6,
"names": [
"Idioms",
"Reference",
"Polysemy",
"Negation",
"Quantitative",
"Others"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
"question_propmt": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"candidate_answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"correct_answer_idx": {
"dtype": "int32",
"id": null,
"_type": "Value"
}
}
fold_2
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:codah/fold_2')
- Description:
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 555 |
'train' | 1665 |
'validation' | 556 |
- Features:
{
"id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question_category": {
"num_classes": 6,
"names": [
"Idioms",
"Reference",
"Polysemy",
"Negation",
"Quantitative",
"Others"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
"question_propmt": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"candidate_answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"correct_answer_idx": {
"dtype": "int32",
"id": null,
"_type": "Value"
}
}
fold_3
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:codah/fold_3')
- Description:
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 555 |
'train' | 1665 |
'validation' | 556 |
- Features:
{
"id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question_category": {
"num_classes": 6,
"names": [
"Idioms",
"Reference",
"Polysemy",
"Negation",
"Quantitative",
"Others"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
"question_propmt": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"candidate_answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"correct_answer_idx": {
"dtype": "int32",
"id": null,
"_type": "Value"
}
}
fold_4
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:codah/fold_4')
- Description:
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of SWAG. As opposed to other automatically generated NLI datasets, CODAH is adversarially constructed by humans who can view feedback from a pre-trained model and use this information to design challenging commonsense questions. Our experimental results show that CODAH questions present a complementary extension to the SWAG dataset, testing additional modes of common sense.
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 556 |
'train' | 1665 |
'validation' | 555 |
- Features:
{
"id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question_category": {
"num_classes": 6,
"names": [
"Idioms",
"Reference",
"Polysemy",
"Negation",
"Quantitative",
"Others"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
},
"question_propmt": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"candidate_answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"correct_answer_idx": {
"dtype": "int32",
"id": null,
"_type": "Value"
}
}
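The five fold_* configurations each contain 2776 examples in total (the same count as the codah configuration), which suggests they are alternative train/validation/test partitions of the same data. A hypothetical cross-validation-style evaluation loop, using a trivial placeholder predictor, might look like this:

import tensorflow_datasets as tfds

# Hypothetical 5-fold evaluation sketch: score a model on each fold's test
# split and average the accuracies. The "always pick the first candidate"
# predictor below is only a placeholder for a real scorer.
fold_accuracies = []
for i in range(5):
    test_ds = tfds.load(f'huggingface:codah/fold_{i}', split='test')
    correct = total = 0
    for example in tfds.as_numpy(test_ds):
        prediction = 0  # placeholder: always choose the first candidate answer
        correct += int(prediction == example['correct_answer_idx'])
        total += 1
    fold_accuracies.append(correct / total)

print('mean test accuracy:', sum(fold_accuracies) / len(fold_accuracies))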