Difference between revisions of "2007:Audio Onset Detection Results"

From MIREX Wiki
[[Category: Results]]

==Introduction==
These are the results for the 2007 running of the Audio Onset Detection task set. For background information about this task set please refer to the [[Audio Onset Detection]] page.
  
 
The aim of the Audio Onset Detection task is to find the time locations at which all musical events in a recording begin. The dataset consists of 85 recordings across 9 different "classes" (e.g. solo drums, polyphonic pitched, etc.). For each sound file, ground truth annotations produced by 3-5 listeners were used for the evaluation. Each algorithm was tested across 10-20 different parameterizations (e.g. thresholds) in order to produce Precision vs. Recall Operating Characteristic (P-ROC) curves. The primary evaluation metric used was the F1-Measure (the equally weighted harmonic mean of precision and recall).
*Note: There were a few faulty ground truth annotations in the 2005 and 2006 runs of this task. These have been removed for this year's evaluation. Thanks to Dan Stowell for finding these.
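The per-file scoring described above can be sketched as follows. This is an illustrative reconstruction, not the official MIREX evaluation code: the `evaluate_onsets` helper and the ±50 ms matching tolerance are assumptions, and the real evaluator may pair detections with ground truth differently.

```python
def evaluate_onsets(detected, reference, tolerance=0.05):
    """Match detected onset times to reference (ground truth) onset
    times within +/- tolerance seconds, then report precision, recall,
    and F1. Each reference onset may be matched at most once."""
    detected = sorted(detected)
    reference = sorted(reference)
    ref_used = [False] * len(reference)
    matched = 0
    for d in detected:
        # Find the closest still-unmatched reference onset in the window.
        best, best_dist = None, tolerance
        for i, r in enumerate(reference):
            if not ref_used[i] and abs(d - r) <= best_dist:
                best, best_dist = i, abs(d - r)
        if best is not None:
            ref_used[best] = True
            matched += 1
    precision = matched / len(detected) if detected else 0.0
    recall = matched / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```

For example, with detections at 0.10 s, 0.52 s, and 1.00 s against ground truth onsets at 0.10 s, 0.50 s, 0.90 s, and 1.30 s, two detections match, giving precision 2/3 and recall 1/2.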
  
 
===General Legend===

====Team ID====
'''lacoste''' = [https://www.music-ir.org/evaluation/MIREX/2006_abstracts/OD_dixon.pdf Simon Dixon]<br />
'''lee''' = [https://www.music-ir.org/evaluation/MIREX/2006_abstracts/OD_du.pdf Yunfeng Du, Ming Li, Jian Liu]<br />
'''roebel''' = [https://www.music-ir.org/evaluation/MIREX/2006_abstracts/OD_roebel.pdf A. Röbel]<br />
'''stowell''' = [https://www.music-ir.org/evaluation/MIREX/2006_abstracts/AME_BT_OD_TE_brossier.pdf Paul Brossier]<br />
'''zhou''' = [https://www.music-ir.org/evaluation/MIREX/2006_abstracts/AME_BT_OD_TE_brossier.pdf Paul Brossier]<br />
 
 
*Dixon's NWPD submission was modified by Andreas Ehmann, and requires the author's verification.
 
  
 
==Overall Summary Results==
 
''Revision as of 13:42, 15 September 2007''

===MIREX 2006 Audio Onset Detection Summary Results - Peak F-measure performance across all parameterizations===

file /nema-raid/www/mirex/results/onset06_sum.csv not found

===MIREX 2006 Audio Onset Detection Summary Plot===

[[File:Onset06 summary.png]]

===MIREX 2006 Audio Onset Detection Runtime Data===

file /nema-raid/www/mirex/results/onset06_runtime.csv not found

==Results by Class==

==Individual Results==
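The summary results report each algorithm's peak F-measure across its parameterizations. Assuming each parameterization yields one (precision, recall) operating point on the P-ROC curve, the selection can be sketched as below; `peak_f_measure` is a hypothetical helper, not part of the MIREX tooling.

```python
def peak_f_measure(pr_points):
    """Given (precision, recall) pairs from a sweep of detector
    parameterizations (e.g. detection thresholds), return the peak
    F1 value along the resulting P-ROC curve."""
    best = 0.0
    for p, r in pr_points:
        if p + r > 0:
            # F1 is the harmonic mean of precision and recall.
            best = max(best, 2 * p * r / (p + r))
    return best
```

For instance, a sweep producing the points (0.9, 0.5), (0.8, 0.8), and (0.5, 0.9) peaks at F1 = 0.8, at the balanced operating point.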