References:
train
To load this dataset in TFDS, use the following command:
import tensorflow_datasets as tfds
ds = tfds.load('huggingface:textvqa/train')
- Description:
TextVQA requires models to read and reason about text in images to answer questions about them.
Specifically, models need to incorporate a new modality of text present in the images and reason
over it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images
from the OpenImages dataset.
- License: CC BY 4.0
- Version: 0.5.1
- Splits:
Split | Examples |
---|---|
'test' | 5734 |
'train' | 34602 |
'validation' | 5000 |
- Features:
{
"image_id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tokens": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image": {
"decode": true,
"id": null,
"_type": "Image"
},
"image_width": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"image_height": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"flickr_original_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"flickr_300k_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image_classes": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"set_name": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
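The `question_tokens` feature is a `Sequence` of strings with `length: -1`, i.e. a variable-length token list derived from `question`. As a rough illustration of the relationship between the two fields, a minimal lowercase whitespace tokenizer is sketched below (an assumption for illustration only; the dataset's actual tokenizer is not specified here and may differ):

```python
def tokenize(question: str) -> list[str]:
    # Lowercase and split on whitespace; produces a variable-length
    # list of tokens, matching the Sequence-of-strings shape of
    # the question_tokens feature.
    return question.lower().strip().split()

print(tokenize("What does the sign say?"))
# ['what', 'does', 'the', 'sign', 'say?']
```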
val
To load this dataset in TFDS, use the following command:
ds = tfds.load('huggingface:textvqa/val')
- Description:
TextVQA requires models to read and reason about text in images to answer questions about them.
Specifically, models need to incorporate a new modality of text present in the images and reason
over it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images
from the OpenImages dataset.
- License: CC BY 4.0
- Version: 0.5.1
- Splits:
Split | Examples |
---|---|
'test' | 5734 |
'train' | 34602 |
'validation' | 5000 |
- Features:
{
"image_id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tokens": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image": {
"decode": true,
"id": null,
"_type": "Image"
},
"image_width": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"image_height": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"flickr_original_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"flickr_300k_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image_classes": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"set_name": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
test
To load this dataset in TFDS, use the following command:
ds = tfds.load('huggingface:textvqa/test')
- Description:
TextVQA requires models to read and reason about text in images to answer questions about them.
Specifically, models need to incorporate a new modality of text present in the images and reason
over it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images
from the OpenImages dataset.
- License: CC BY 4.0
- Version: 0.5.1
- Splits:
Split | Examples |
---|---|
'test' | 5734 |
'train' | 34602 |
'validation' | 5000 |
- Features:
{
"image_id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tokens": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image": {
"decode": true,
"id": null,
"_type": "Image"
},
"image_width": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"image_height": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"flickr_original_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"flickr_300k_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image_classes": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"set_name": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
textvqa
To load this dataset in TFDS, use the following command:
ds = tfds.load('huggingface:textvqa/textvqa')
- Description:
TextVQA requires models to read and reason about text in images to answer questions about them.
Specifically, models need to incorporate a new modality of text present in the images and reason
over it to answer TextVQA questions. TextVQA dataset contains 45,336 questions over 28,408 images
from the OpenImages dataset.
- License: CC BY 4.0
- Version: 0.5.1
- Splits:
Split | Examples |
---|---|
'test' | 5734 |
'train' | 34602 |
'validation' | 5000 |
- Features:
{
"image_id": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_id": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"question": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"question_tokens": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image": {
"decode": true,
"id": null,
"_type": "Image"
},
"image_width": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"image_height": {
"dtype": "int32",
"id": null,
"_type": "Value"
},
"flickr_original_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"flickr_300k_url": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"answers": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"image_classes": {
"feature": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"length": -1,
"id": null,
"_type": "Sequence"
},
"set_name": {
"dtype": "string",
"id": null,
"_type": "Value"
}
}
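The `answers` feature is a variable-length list of human-annotated answers per question. TextVQA is typically scored with the standard VQA soft-accuracy metric, under which a prediction counts as fully correct when at least 3 annotators gave it. A minimal sketch of that metric, assuming (as in the VQA convention) roughly 10 reference answers per question:

```python
def vqa_accuracy(prediction: str, answers: list[str]) -> float:
    # Standard VQA soft accuracy: credit grows with the number of
    # annotators who agree with the prediction, saturating at 3 matches.
    prediction = prediction.strip().lower()
    matches = sum(1 for a in answers if a.strip().lower() == prediction)
    return min(matches / 3.0, 1.0)

# Hypothetical 10-annotator answer list for illustration.
refs = ["coca cola"] * 4 + ["coke"] * 6
print(vqa_accuracy("coke", refs))       # 1.0
print(vqa_accuracy("coca cola", refs))  # 1.0
print(vqa_accuracy("pepsi", refs))      # 0.0
```

Note that the `test` split ships without public reference answers on the evaluation server, so this metric is only directly computable on `train` and `validation` examples.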