2019:Music Detection Results
Revision as of 06:50, 30 October 2019
Introduction
These are the results for the 2019 running of the Music and/or Speech Detection tasks. For background information about this task set, please refer to the 2019:Music and/or Speech Detection page.
General Legend
Sub code | Abstract | Contributors
---|---|---
MMG1 | | Blai Meléndez-Catalán, Emilio Molina, Emilia Gómez
MMG2 | | Blai Meléndez-Catalán, Emilio Molina, Emilia Gómez
MMG3 | | Blai Meléndez-Catalán, Emilio Molina, Emilia Gómez
Statistics notation
Accuracy = segment-level accuracy
<class>_P = segment-level precision for the class <class>
<class>_R = segment-level recall for the class <class>
<class>_F = segment-level F-measure for the class <class>
<class>_F_500_on = onset-only event-level F-measure (500 ms tolerance) for the class <class>
<class>_F_500_onoff = onset-offset event-level F-measure (500 ms tolerance) for the class <class>
<class>_F_1000_on = onset-only event-level F-measure (1000 ms tolerance) for the class <class>
<class>_F_1000_onoff = onset-offset event-level F-measure (1000 ms tolerance) for the class <class>
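The evaluation code behind these statistics is not shown on this page. As a rough illustration only, assuming each file is cut into fixed-length segments carrying a single reference and a single estimated class label, the segment-level statistics above could be computed along these lines (the function name and label strings are hypothetical):

```python
def segment_metrics(reference, estimated, classes):
    """Segment-level accuracy plus per-class precision, recall and F-measure.

    reference/estimated: equal-length lists of class labels, one per
    evaluation segment (e.g. one per fixed-length frame).
    Returns (accuracy, {class: (precision, recall, f_measure)}).
    """
    assert len(reference) == len(estimated)
    pairs = list(zip(reference, estimated))
    accuracy = sum(r == e for r, e in pairs) / len(pairs)
    per_class = {}
    for c in classes:
        tp = sum(r == c and e == c for r, e in pairs)  # correctly labeled c
        fp = sum(r != c and e == c for r, e in pairs)  # spuriously labeled c
        fn = sum(r == c and e != c for r, e in pairs)  # missed c segments
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        per_class[c] = (p, r, f)
    return accuracy, per_class
```

The class-wise F values in the tables below are the usual harmonic mean of the corresponding precision and recall columns.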
Datasets description
Task 1: Music Detection
Segment-level Evaluation
Sub code | Accuracy | Music_P | Music_R | Music_F | No-Music_P | No-Music_R | No-Music_F
---|---|---|---|---|---|---|---
MMG1 | 0.8713 | 0.9056 | 0.8513 | 0.8776 | 0.8354 | 0.8953 | 0.8643
MMG2 | 0.8928 | 0.9186 | 0.8803 | 0.8990 | 0.8648 | 0.9079 | 0.8858
MMG3 | 0.9178 | 0.9026 | 0.9511 | 0.9262 | 0.9381 | 0.8787 | 0.9074
Event-level Evaluation
Sub code | Music_F_500_on | Music_F_500_onoff | Music_F_1000_on | Music_F_1000_onoff
---|---|---|---|---
MMG1 | 0.5177 | 0.2693 | 0.5813 | 0.3502
MMG2 | 0.5177 | 0.2693 | 0.5813 | 0.3502
MMG3 | 0.4403 | 0.1991 | 0.4973 | 0.2788
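The event-level scores depend on how estimated events are matched to reference events; the exact matching procedure used by the task is not specified on this page. As a sketch under that caveat, an onset-only event-level F-measure with a time tolerance (0.5 s for the _500_on columns, 1.0 s for the _1000_on columns) could use greedy one-to-one matching of onset times, given in seconds:

```python
def event_f_onset(ref_onsets, est_onsets, tolerance=0.5):
    """Onset-only event-level F-measure (hypothetical sketch).

    An estimated event counts as a true positive if its onset lies within
    `tolerance` seconds of a not-yet-matched reference onset; matching is
    greedy and one-to-one. Onset times are in seconds.
    """
    ref = sorted(ref_onsets)
    est = sorted(est_onsets)
    matched = set()  # indices of reference onsets already claimed
    tp = 0
    for e in est:
        for i, r in enumerate(ref):
            if i not in matched and abs(e - r) <= tolerance:
                matched.add(i)
                tp += 1
                break
    fp = len(est) - tp  # estimated events with no reference match
    fn = len(ref) - tp  # reference events never matched
    p = tp / len(est) if est else 0.0
    r = tp / len(ref) if ref else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

The onset-offset (_onoff) variants would additionally require the event's offset to fall within a tolerance of the matched reference offset, which is why they score consistently lower in the tables.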
Task 2: Relative Music Loudness Estimation
Segment-level Evaluation
Sub code | Accuracy | Fg-Music_P | Fg-Music_R | Fg-Music_F | Bg-Music_P | Bg-Music_R | Bg-Music_F | No-Music_P | No-Music_R | No-Music_F
---|---|---|---|---|---|---|---|---|---|---
MMG1 | 0.8152 | 0.7874 | 0.7392 | 0.7626 | 0.7941 | 0.7647 | 0.7791 | 0.8398 | 0.8834 | 0.8610
MMG2 | 0.8414 | 0.8741 | 0.7055 | 0.7808 | 0.8035 | 0.8154 | 0.8095 | 0.8648 | 0.9079 | 0.8858
MMG3 | 0.8749 | 0.8375 | 0.8291 | 0.8333 | 0.8238 | 0.8896 | 0.8554 | 0.9387 | 0.8773 | 0.9070
Event-level Evaluation
Sub code | Fg-Music_F_500_on | Fg-Music_F_500_onoff | Fg-Music_F_1000_on | Fg-Music_F_1000_onoff | Bg-Music_F_500_on | Bg-Music_F_500_onoff | Bg-Music_F_1000_on | Bg-Music_F_1000_onoff | No-Music_F_500_on | No-Music_F_500_onoff | No-Music_F_1000_on | No-Music_F_1000_onoff
---|---|---|---|---|---|---|---|---|---|---|---|---
MMG1 | 0.3298 | 0.1775 | 0.4106 | 0.2742 | 0.3853 | 0.1388 | 0.4463 | 0.2024 | 0.5254 | 0.3123 | 0.5927 | 0.3925
MMG2 | 0.3298 | 0.1775 | 0.4106 | 0.2742 | 0.3853 | 0.1388 | 0.4463 | 0.2024 | 0.5254 | 0.3123 | 0.5927 | 0.3925
MMG3 | 0.3298 | 0.1775 | 0.4106 | 0.2742 | 0.3853 | 0.1388 | 0.4463 | 0.2024 | 0.5254 | 0.3123 | 0.5927 | 0.3925