task01
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:poleval2019_cyberbullying/task01')
- Description:
In Task 6-1, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets
that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and
related phenomena.
In Task 6-2, the participants shall distinguish between three classes of tweets: 0 (non-harmful),
1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech,
some of them even putting those two phenomena in the same group. The specific conditions on which we based
our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research,
will be summarized in an introductory paper for the task; however, the main and definitive condition to
distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying),
or a public person/entity/large group (hate-speech).
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'train' | 10041 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"label": {
"num_classes": 2,
"names": [
"0",
"1"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
}
}
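As a usage sketch (not part of the original task description), the snippet below loads the 'train' split of task01 and prints a few examples. The field names "text" and "label" come from the features listed above; the decoding and printing logic is illustrative only and assumes the Hugging Face community dataset is available through your TFDS installation.

import tensorflow_datasets as tfds

# Load the binary-label task01 training split (assumption: the community
# dataset loads in your environment).
train_ds = tfds.load('huggingface:poleval2019_cyberbullying/task01', split='train')

for example in train_ds.take(3):
    text = example['text'].numpy().decode('utf-8')   # tweet text ("text" feature)
    label = int(example['label'].numpy())            # 0 = non-harmful, 1 = harmful
    print(label, text)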
task02
Use the following command to load this dataset in TFDS:
ds = tfds.load('huggingface:poleval2019_cyberbullying/task02')
- Description:
In Task 6-1, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets
that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and
related phenomena.
In Task 6-2, the participants shall distinguish between three classes of tweets: 0 (non-harmful),
1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech,
some of them even putting those two phenomena in the same group. The specific conditions on which we based
our annotations for both cyberbullying and hate-speech, which have been worked out during ten years of research,
will be summarized in an introductory paper for the task; however, the main and definitive condition to
distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying),
or a public person/entity/large group (hate-speech).
- License: No known license
- Version: 1.0.0
- Splits:
Split | Examples |
---|---|
'test' | 1000 |
'train' | 10041 |
- Features:
{
"text": {
"dtype": "string",
"id": null,
"_type": "Value"
},
"label": {
"num_classes": 3,
"names": [
"0",
"1",
"2"
],
"names_file": null,
"id": null,
"_type": "ClassLabel"
}
}
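As a similar sketch for task02 (again an illustration, not part of the original description), the snippet below tallies how many training tweets fall into each of the three classes listed in the features above; it assumes the same TFDS/Hugging Face setup as the task01 example.

import collections
import tensorflow_datasets as tfds

# Load the three-class task02 training split and count the label distribution
# (0 = non-harmful, 1 = cyberbullying, 2 = hate-speech).
train_ds = tfds.load('huggingface:poleval2019_cyberbullying/task02', split='train')

label_counts = collections.Counter()
for example in train_ds.as_numpy_iterator():
    label_counts[int(example['label'])] += 1

print(label_counts)  # class frequencies over the 10041 training tweets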