2007:Audio Onset Detection

Proposers

Originally proposed (2005) by Paul Brossier and Pierre Leveau [1]. The task has run in 2005 and 2006.

Participants

Description

The text of this section is largely copied from the 2006 page

The onset detection contest is a continuation of the 2005 Onset Detection contest. The main interest in a repeated evaluation is that in 2005 there was not enough time to run the algorithms with different parameters, so the initial goal of creating and comparing ROC curves could not be achieved. Having established the basic framework, this year's goal is to allow participants to submit their algorithms with a number of different parameter sets, so that the ROC curves of the algorithms can be computed and compared.
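As a rough illustration of how one point of such a curve might be obtained for a single parameter set, the sketch below greedily matches detected onsets against annotated ones within a tolerance window. The 50 ms tolerance, the function name and the example times are assumptions for illustration only, and this is not the official evaluation procedure.

# Illustrative sketch only: one (TP, FP, FN) count for one parameter set on
# one file. The 50 ms tolerance window is an assumed value, not taken from
# this page.
def match_onsets(detected, annotated, tolerance=0.05):
    """Greedily match detected onset times (seconds) to annotated ones."""
    used = [False] * len(annotated)
    tp = 0
    for d in sorted(detected):
        # Pick the closest unused annotation within the tolerance window.
        best, best_dist = None, tolerance
        for i, a in enumerate(annotated):
            if not used[i] and abs(d - a) <= best_dist:
                best, best_dist = i, abs(d - a)
        if best is not None:
            used[best] = True
            tp += 1
    fp = len(detected) - tp   # detections with no matching annotation
    fn = len(annotated) - tp  # annotated onsets that were missed
    return tp, fp, fn

# Example with made-up times: three hits, no false alarm, one missed onset.
print(match_onsets([0.51, 1.02, 2.47], [0.50, 1.00, 1.80, 2.50]))  # (3, 0, 1)

Sweeping a detection threshold or other parameter and plotting the resulting hit and false-alarm rates would give the kind of curve referred to above.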

Input data

Essentially the same as in 2005/2006.

Audio format:

The data are monophonic sound files, with the associated onset times and data about the annotation robustness (a short format-check sketch follows the list below).

  • CD-quality (PCM, 16-bit, 44100 Hz)
  • single channel (mono)
  • file length between 2 and 36 seconds (total time: 14 minutes)
  • file names: <AudioFileName>.wav
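The following sketch (Python standard library only; the file name is a placeholder) shows one way a participant might check that an input file matches this format. It is not part of the submission requirements.

import wave

# Sketch: check that a file matches the contest audio format
# (PCM, 16-bit, 44100 Hz, mono). "example.wav" is a placeholder name.
with wave.open("example.wav", "rb") as wf:
    assert wf.getnchannels() == 1, "expected a single (mono) channel"
    assert wf.getsampwidth() == 2, "expected 16-bit samples"
    assert wf.getframerate() == 44100, "expected a 44100 Hz sample rate"
    print("duration: %.2f s" % (wf.getnframes() / wf.getframerate()))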

Audio content:

The dataset is subdivided into classes, because onset detection is sometimes performed in applications dedicated to a single type of signal (e.g. segmentation of a single track in a mix, drum transcription, segmentation of databases of complex mixes). The performance of each algorithm will be assessed on the whole dataset and also on each class separately. The dataset contains 85 files, divided into the following classes and annotated as follows:

  • 30 solo drum excerpts cross-annotated by 3 people
  • 30 solo monophonic pitched instruments excerpts cross-annotated by 3 people
  • 10 solo polyphonic pitched instruments excerpts cross-annotated by 3 people
  • 15 complex mixes cross-annotated by 5 people

Moreover, the monophonic pitched instruments class is divided into 6 sub-classes: brass (2 excerpts), winds (4), sustained strings (6), plucked strings (9), bars and bells (4), singing voice (5).

Output data

The onset detection algorithms will return onset times in a text file: <Results of evaluated Algo path>/<AudioFileName>.output.

Onset file format

Each line contains one onset time in the form <onset time (in seconds)>\n, where \n denotes the end of the line. The < and > characters are not included.
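For illustration, here is a minimal sketch of writing detections in this format; the output file name and the onset values are made up.

# Sketch: write detected onset times, one per line, in seconds.
# "example.output" and the onset values below are placeholders.
onsets = [0.512, 1.004, 2.471]
with open("example.output", "w") as f:
    for t in onsets:
        f.write("%.6f\n" % t)  # e.g. "0.512000" followed by an end of line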

README file

A README file accompanying each submission should contain explicit instructions on how to run the program. In particular, each command line to run should be specified, using %input% for the input sound file and %output% for the resulting text file.

For instance, to test the program foobar with different values for parameters param1 and param2, the README file would look like:

foobar -param1 .1 -param2 1 -i %input% -o %output%
foobar -param1 .1 -param2 2 -i %input% -o %output%
foobar -param1 .2 -param2 1 -i %input% -o %output%
foobar -param1 .2 -param2 2 -i %input% -o %output%
foobar -param1 .3 -param2 1 -i %input% -o %output%
...

For a submission using MATLAB, the README file could look like:

matlab -r "foobar(.1,1,'%input%','%output%');quit;"
matlab -r "foobar(.1,2,'%input%','%output%');quit;"
matlab -r "foobar(.2,1,'%input%','%output%');quit;" 
matlab -r "foobar(.2,2,'%input%','%output%');quit;"
matlab -r "foobar(.3,1,'%input%','%output%');quit;"
...

The different command lines needed to evaluate the performance of each parameter set over the whole database will be generated automatically from each line in the README file that contains both the '%input%' and '%output%' strings.
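As an illustration of that substitution (this is not the actual evaluation harness; the audio file names and results directory below are placeholders), the expansion could look like this:

import os

# Sketch: expand each README command template over the audio files.
# The file names and results directory are placeholders.
audio_files = ["drum_01.wav", "mix_05.wav"]
results_dir = "/path/to/results"

with open("README") as f:
    # Only lines containing both placeholder strings are command templates.
    templates = [line.strip() for line in f
                 if "%input%" in line and "%output%" in line]

for template in templates:
    for wav in audio_files:
        out = os.path.join(results_dir, os.path.splitext(wav)[0] + ".output")
        print(template.replace("%input%", wav).replace("%output%", out))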


Evaluation procedures

Dataset(s)

I (Dan) am happy to use the dataset as used in 2005/2006 - any comments/agreement/disagreement re that?

Note: I found some problems with the dataset - a couple of the files are faulty (e.g. they're annotations of the wrong audio). At Queen Mary's we've been replacing those faulty files with new annotations, and we'd be happy to share the "fixed" dataset. I'd suggest that it's better to use accurate annotations, even though that sacrifices an element of comparability against the 05/06 results. (Still, it's only a small fraction of files that were at fault, so the results will be largely comparable.)