2013:Audio Chord Estimation Results MIREX 2009

==Introduction==

This year, we have started a new evaluation battery for audio chord estimation. This page contains the results of these new evaluations for the Isophonics dataset, a.k.a. the MIREX 2009 dataset. It comprises the collected Beatles, Queen, and Zweieck datasets from Queen Mary, University of London, and has been used for audio chord estimation in MIREX for many years.

==Why evaluate differently?==

* Researchers interested in automatic chord estimation have been dissatisfied with the traditional evaluation techniques used for this task at MIREX.
* Numerous alternatives have been proposed in the literature (Harte, 2010; Mauch, 2010; Pauwels & Peeters, 2013).
* At ISMIR 2010 in Utrecht, a group discussed alternatives and developed the Utrecht Agreement for updating the task, but until this year, nobody had implemented any of the suggestions.

==What’s new?==

===More precise recall estimation===

* MIREX typically uses chord symbol recall (CSR) to estimate how well the predicted chords match the ground truth: the total duration of segments where the predictions match the ground truth, divided by the total duration of the song.
* In previous years, MIREX has approximated CSR by sampling both the ground-truth and the automatic annotations every 10 ms.
* Following Harte (2010), we instead treat the ground-truth and estimated annotations as continuous segmentations of the audio, which is both more precise and more computationally efficient.
* Moreover, because pieces of music come in a wide variety of lengths, we believe it is better to weight the CSR by the length of the song. This final number is referred to as the weighted chord symbol recall (WCSR). A minimal sketch of the computation follows below.
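
The sketch below illustrates segment-based CSR and length-weighted WCSR under an assumed annotation format of (start, end, label) tuples; it is an illustration of the idea, not the evaluation code used for MIREX, and the function names are placeholders.

<pre>
# Illustrative sketch of segment-based CSR and length-weighted WCSR.
# Assumed annotation format: a list of (start, end, label) tuples that
# covers the whole song.  Not the official MIREX implementation.

def chord_symbol_recall(reference, estimate, match):
    """Duration on which the estimate matches the reference, divided by
    the total annotated duration of the song."""
    total = sum(end - start for start, end, _ in reference)
    correct = 0.0
    for r_start, r_end, r_label in reference:
        for e_start, e_end, e_label in estimate:
            overlap = min(r_end, e_end) - max(r_start, e_start)
            if overlap > 0 and match(r_label, e_label):
                correct += overlap
    return correct / total

def weighted_csr(songs, match):
    """WCSR over a collection: each song's CSR weighted by its length,
    i.e. total correctly labelled duration / total duration."""
    correct_time, total_time = 0.0, 0.0
    for reference, estimate in songs:
        length = reference[-1][1] - reference[0][0]
        correct_time += length * chord_symbol_recall(reference, estimate, match)
        total_time += length
    return correct_time / total_time
</pre>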

===Advanced chord vocabularies===

* We computed WCSR with five different chord vocabulary mappings:
*# Chord root note only;
*# Major and minor;
*# Seventh chords;
*# Major and minor with inversions; and
*# Seventh chords with inversions.
* With the exception of no-chords, calculating the vocabulary mapping involves examining the root note, the bass note, and the relative interval structure of the chord labels.
* A mapping exists if the root notes and bass notes match and the interval structure of the output label is the largest possible subset of the interval structure of the input label, given the vocabulary.
* For instance, in the major-and-minor case, G:7(#9) is mapped to G:maj because the interval set of G:maj, {1, 3, 5}, is a subset of the interval set of G:7(#9), {1, 3, 5, b7, #9}. In the seventh-chord case, G:7(#9) is mapped to G:7 instead, because the interval set of G:7, {1, 3, 5, b7}, is also a subset of that of G:7(#9) but is larger than that of G:maj (a sketch of this rule follows the table below).
* Our recommendations are motivated by the frequencies of chord qualities in the Billboard corpus of American popular music (Burgoyne et al., 2011).
{| class="wikitable"
|+ Most Frequent Chord Qualities in the Billboard Corpus
! Quality !! Freq. (%) !! Cum. Freq. (%)
|-
| maj || 52 || 52
|-
| min || 13 || 65
|-
| 7 || 10 || 75
|-
| min7 || 8 || 83
|-
| maj7 || 3 || 86
|}
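
To make the subset rule concrete, here is a small sketch of quality mapping using hand-written interval sets for reduced major/minor and seventh-chord vocabularies; the dictionaries and names are illustrative assumptions only, and the actual evaluation parses full chord-label syntax rather than these toy sets.

<pre>
# Illustrative sketch of the vocabulary-mapping rule: a chord quality is
# reduced to the largest vocabulary entry whose interval set is a subset
# of the chord's own interval set.  The dictionaries below are abridged
# examples, not the full vocabularies used in the evaluation.

VOCAB_MAJMIN = {
    "maj": {"1", "3", "5"},
    "min": {"1", "b3", "5"},
}
VOCAB_SEVENTHS = dict(VOCAB_MAJMIN, **{
    "7":    {"1", "3", "5", "b7"},
    "min7": {"1", "b3", "5", "b7"},
    "maj7": {"1", "3", "5", "7"},
})

def map_quality(intervals, vocab):
    """Return the largest vocabulary quality whose interval set is a
    subset of `intervals`, or None if nothing in the vocabulary matches."""
    candidates = [(len(ivs), name) for name, ivs in vocab.items()
                  if ivs <= intervals]
    return max(candidates)[1] if candidates else None

# G:7(#9) has the interval set {1, 3, 5, b7, #9}.
g7_sharp9 = {"1", "3", "5", "b7", "#9"}
print(map_quality(g7_sharp9, VOCAB_MAJMIN))    # -> 'maj'
print(map_quality(g7_sharp9, VOCAB_SEVENTHS))  # -> '7'
</pre>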

===Evaluation of segmentation===

* The chord transcription literature includes several other evaluation metrics, which mainly focus on the segmentation of the transcription.
* We propose to include the directional Hamming distance in the evaluation. The directional Hamming distance is calculated by finding, for each annotated segment, the maximally overlapping segment in the other annotation, and then summing the differences (Abdallah et al., 2005; Mauch, 2010).
* Depending on the order of application, the directional Hamming distance yields a measure of over- or under-segmentation. To keep the scaling consistent with WCSR values (1.0 is best and 0.0 is worst), we report 1 − over-segmentation and 1 − under-segmentation, as well as the harmonic mean of these values (cf. Harte, 2010); a sketch follows below.
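
Below is a small sketch of how the directional Hamming distance and the derived segmentation scores could be computed from (start, end) interval lists; the function names are assumptions, and the assignment of the two directions to "over" and "under" segmentation follows the common convention rather than being taken from the MIREX code.

<pre>
# Illustrative sketch of the directional Hamming distance and the
# segmentation scores derived from it.  Segments are (start, end) pairs
# covering the piece; this is not the official evaluation code.

def directional_hamming(seg_a, seg_b):
    """For each segment in seg_a, find the maximally overlapping segment
    in seg_b and sum the unmatched remainder, normalised by duration."""
    total = seg_a[-1][1] - seg_a[0][0]
    missed = 0.0
    for a_start, a_end in seg_a:
        best = 0.0
        for b_start, b_end in seg_b:
            best = max(best, min(a_end, b_end) - max(a_start, b_start))
        missed += (a_end - a_start) - best
    return missed / total

def segmentation_scores(reference, estimate):
    """Return (1 - over-segmentation, 1 - under-segmentation, harmonic
    mean), so that 1.0 is best and 0.0 is worst, as for WCSR."""
    over = 1.0 - directional_hamming(reference, estimate)
    under = 1.0 - directional_hamming(estimate, reference)
    harmonic = 2 * over * under / (over + under) if (over + under) > 0 else 0.0
    return over, under, harmonic
</pre>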

===Comparative Statistics===

* ''coming soon...''

==Submissions==

{| class="wikitable"
! Submission
! Abstract
! Contributors
|-
| CB3
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/CB3.pdf PDF]
| Taemin Cho & Juan P. Bello
|-
| CB4
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/CB4.pdf PDF]
| Taemin Cho & Juan P. Bello
|-
| CF2
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/CF2.pdf PDF]
| Chris Cannam, Matthias Mauch, Matthew E. P. Davies, Simon Dixon, Christian Landone, Katy Noland, Mark Levy, Massimiliano Zanoni, Dan Stowell & Luís A. Figueira
|-
| KO1
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/KO1.pdf PDF]
| Maksim Khadkevich & Maurizio Omologo
|-
| KO2
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/KO2.pdf PDF]
| Maksim Khadkevich & Maurizio Omologo
|-
| NG1
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/NG1.pdf PDF]
| Nikolay Glazyrin
|-
| NG2
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/NG2.pdf PDF]
| Nikolay Glazyrin
|-
| NMSD1
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/NMSD1.pdf PDF]
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez & Tijl De Bie
|-
| NMSD2
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/NMSD2.pdf PDF]
| Yizhao Ni, Matt Mcvicar, Raul Santos-Rodriguez & Tijl De Bie
|-
| PP3
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/PP3.pdf PDF]
| Johan Pauwels & Geoffroy Peeters
|-
| PP4
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/PP4.pdf PDF]
| Johan Pauwels & Geoffroy Peeters
|-
| SB8
| style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2013/SB8.pdf PDF]
| Nikolaas Steenbergen & John Ashley Burgoyne
|}

==Results==

===Summary===

All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted on WCSR for the major-minor vocabulary. Algorithms that performed training as part of the evaluation are marked with an asterisk; all others were submitted pre-trained.

{| class="wikitable"
! Algorithm !! Root Only !! MajMin !! MajMin + Inv !! Sevenths !! Sevenths + Inv !! Mean Seg !! Under-Seg !! Over-Seg
|-
| CB4* || 84 || 82 || 79 || 69 || 66 || 89 || 86 || 92
|-
| KO1 || 83 || 82 || 80 || 76 || 73 || 88 || 86 || 91
|-
| CB3 || 84 || 82 || 79 || 69 || 66 || 89 || 87 || 92
|-
| NMSD2 || 83 || 81 || 79 || 69 || 66 || 88 || 86 || 90
|-
| NMSD1 || 83 || 81 || 78 || 66 || 64 || 87 || 85 || 90
|-
| KO2* || 81 || 79 || 77 || 69 || 66 || 87 || 84 || 91
|-
| CF2 || 79 || 75 || 72 || 55 || 52 || 87 || 87 || 86
|-
| NG1 || 77 || 75 || 72 || 66 || 63 || 85 || 82 || 88
|-
| PP3 || 77 || 75 || 72 || 67 || 64 || 86 || 84 || 88
|-
| PP4 || 75 || 73 || 70 || 61 || 58 || 86 || 82 || 89
|-
| NG2 || 73 || 71 || 68 || 43 || 41 || 84 || 83 || 84
|-
| SB8 || 9 || 7 || 7 || 7 || 6 || 56 || 92 || 40
|}

download these results as csv

===Comparative Statistics===

* ''coming soon...''

===Complete Results===

* ''coming soon...''

===Algorithmic Output===

* ''coming soon...''