2010:Audio Melody Extraction
Description
The aim of the MIREX audio melody extraction evaluation is to identify the melody pitch contour from polyphonic musical audio. Pitch is expressed as the fundamental frequency of the main melodic voice, and is reported in a frame-based manner on an evenly-spaced time-grid.
The task consists of two parts:
- Voicing detection: deciding whether a particular time frame contains a "melody pitch" or not,
- Pitch detection: deciding the most likely melody pitch for each time frame.
We structure the submission format so that these two parts can be evaluated independently from a single output file. That is, it is possible (via a negative pitch value) to report a pitch estimate even for frames judged unvoiced. Algorithms that do not discriminate between melodic and non-melodic parts are also welcome!
Data
Collections
- MIREX09 database: 374 Karaoke recordings of Chinese songs. Each recording is mixed at three different levels of signal-to-accompaniment ratio {-5 dB, 0 dB, +5 dB} for a total of 1122 audio clips. Instruments: singing voice (male, female), synthetic accompaniment.
- MIREX08 database: 4 excerpts of 1 min. from North Indian classical vocal performances. Instruments: singing voice (male, female), tanpura (Indian instrument, perpetual background drone), harmonium (secondary melodic instrument) and tabla (pitched percussion). There are two different mixtures of each of the 4 excerpts with differing amounts of accompaniment, for a total of 8 audio clips.
- MIREX05 database: 25 phrase excerpts of 10-40 sec from the following genres: Rock, R&B, Pop, Jazz, Solo classical piano.
- ADC04 database: dataset from the 2004 Audio Description Contest. 20 excerpts of about 20 s each.
- Manually annotated reference data (10 ms time grid)
Audio Formats
- CD-quality (PCM, 16-bit, 44100 Hz)
- single channel (mono)
Submission Format
Submissions to this task will have to conform to the format detailed below. Submissions should be packaged and contain at least two files: the algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.
Input Data
Participating algorithms will have to read audio in the following format:
- Sample rate: 44.1 KHz
- Sample size: 16 bit
- Number of channels: 1 (mono)
- Encoding: WAV
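Purely as an illustration (the task prescribes no implementation language, and the function name here is ours), input in this format can be read with Python's standard-library wave module:

import struct
import wave

def read_mono_wav(path):
    """Read a 16-bit mono PCM WAV file; return (sample_rate, samples)."""
    with wave.open(path, 'rb') as wf:
        assert wf.getnchannels() == 1, 'expected mono input'
        assert wf.getsampwidth() == 2, 'expected 16-bit samples'
        rate = wf.getframerate()                       # should be 44100
        frames = wf.readframes(wf.getnframes())
        samples = struct.unpack('<%dh' % wf.getnframes(), frames)  # little-endian int16
    return rate, samples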
Output Data
The melody extraction algorithms will return the melody contour in an ASCII text file for each input .wav audio file. The specification of this output file is immediately below.
Output File Format (Audio Melody Extraction)
The Audio Melody Extraction output file format is a tab-delimited ASCII text format. Fundamental frequencies of the main melody are reported on a 10 ms grid. If an algorithm estimates that there is no melody present within a given time frame, it is to report a NEGATIVE frequency estimate. This allows the algorithm to output a pitch estimate even if its voiced/unvoiced detection mechanism is incorrect, so pitch accuracy and segmentation performance can be evaluated separately. If the algorithm performs no segmentation, it can report all fundamental frequencies as positive (and the segmentation aspects of the evaluation will be ignored). If the time stamps in the algorithm output are not on a 10 ms grid, they will be resampled using 0th-order interpolation during evaluation; we therefore encourage the use of a 10 ms frame hop size. Each line of the output file should look like:
<timestamp (seconds)>\t<frequency (Hz)>\n
where \t denotes a tab and \n denotes the end of a line. The < and > characters are not included. An example output file would look something like:
0.00	-439.3
0.01	-439.4
0.02	440.2
0.03	440.3
0.04	440.2
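As a non-authoritative illustration (no implementation language is prescribed; the function name is ours), a Python sketch that writes estimates in this format on a 10 ms grid:

def write_melody_output(path, f0s, hop=0.01):
    """Write one tab-separated <timestamp, frequency> pair per line on a
    fixed time grid. Frames judged unvoiced carry a negative value whose
    magnitude is still the best pitch guess for that frame."""
    with open(path, 'w') as out:
        for i, f0 in enumerate(f0s):
            out.write('%.2f\t%.4f\n' % (i * hop, f0))

# Reproduces the example above:
# write_melody_output('out.txt', [-439.3, -439.4, 440.2, 440.3, 440.2])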
Algorithm Calling Format
The submitted algorithm must take as arguments a SINGLE .wav file on which to perform melody extraction, as well as the full output path and filename of the output file. Specifying the output path and file name is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command line in either of the following ways:
foobar %input %output
foobar -i %input -o %output
Moreover, if your submission takes additional parameters, foobar could be called like:
foobar .1 %input %output
foobar -param1 .1 -i %input -o %output
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must take string arguments giving the full paths and names of the input and output files. Parameters may also be specified as input arguments of the function. For example:
foobar('%input','%output')
foobar(.1,'%input','%output')
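For illustration only, a Python submission could expose the flagged calling convention above with a small wrapper like the sketch below (Python submissions are not discussed on this page; param1 is the hypothetical parameter from the examples):

import argparse

def main():
    # Matches the flagged form shown above: foobar -param1 .1 -i %input -o %output
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', required=True, help='input .wav file')
    parser.add_argument('-o', required=True, help='full path of the output text file')
    parser.add_argument('-param1', type=float, default=0.1)  # hypothetical parameter
    args = parser.parse_args()
    # ... run melody extraction on args.i and write the result to args.o ...

if __name__ == '__main__':
    main()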
README File
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to be run should be specified, using %input for the input sound file and %output for the resulting text file.
For instance, to test the program foobar with a specific value for parameter param1, the README file would look like:
foobar -param1 .1 -i %input -o %output
...
For a submission using MATLAB, the README file could look like:
matlab -r "foobar(.1,'%input','%output');quit;"
...
Evaluation procedures
Describe the evaluation measures, etc.
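This section is left as a stub in this revision. Purely as an illustration of one of the measures referenced under Output Format below, the following Python sketch computes Raw Pitch Accuracy under the common convention that a voiced reference frame counts as correct when the estimate lies within a quarter tone (50 cents) of the reference; the official evaluation code may differ in detail:

import math

def raw_pitch_accuracy(ref_f0s, est_f0s, cents_tol=50.0):
    """Fraction of voiced reference frames (ref > 0) whose estimate lies
    within cents_tol of the reference pitch. Negative estimates are
    treated as pitch guesses for frames the algorithm judged unvoiced."""
    voiced = [(r, abs(e)) for r, e in zip(ref_f0s, est_f0s) if r > 0]
    if not voiced:
        return 0.0
    hits = sum(1 for r, e in voiced
               if e > 0 and abs(1200.0 * math.log2(e / r)) <= cents_tol)
    return hits / len(voiced)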
Output Format
- In order to allow for generalization among potential approaches (e.g. frame size, hop size), submitted algorithms should output pitch estimates, in Hz, at discrete instants in time.
- The output file therefore successively contains a time stamp, a space or tab, and the corresponding frequency value, followed by a new line.
- The time grid of the reference file is 10 ms, yet the submission may use a different time grid as output (for example 5.8 ms); such output is resampled onto the reference grid during evaluation (see the sketch after this list).
- Instants identified as unvoiced (no dominant melody present) can be reported either as 0 Hz or as a negative pitch value. If negative pitch values are given, the statistics for Raw Pitch Accuracy and Raw Chroma Accuracy may be improved.
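A minimal sketch of the 0th-order resampling step mentioned above, assuming it means sample-and-hold onto the evaluation grid (the function name, and the exact behaviour of the official scripts, are our assumptions):

import bisect

def resample_0th_order(times, freqs, hop=0.01):
    """Resample (time, frequency) estimates onto a uniform grid: each
    grid instant takes the most recent estimate at or before it
    (sample-and-hold, i.e. 0th-order interpolation)."""
    n_frames = int(round(times[-1] / hop)) + 1
    grid = []
    for k in range(n_frames):
        t = k * hop
        j = bisect.bisect_right(times, t) - 1   # last estimate with time <= t
        grid.append((t, freqs[max(j, 0)]))
    return grid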
Relevant Development Collections
- MIR-1K: MIR-1K for MIREX (note that this is not the dataset used for evaluation; the MIREX 2009 dataset used for evaluation last year was created in the same way but has different content and singers).
- Graham's collection: the test set is available here, and further explanations are given on the pages http://www.ee.columbia.edu/~graham/mirex_melody/ and http://labrosa.ee.columbia.edu/projects/melody/
- For the ISMIR 2004 Audio Description Contest, the Music Technology Group of Pompeu Fabra University assembled a diverse set of audio segments and corresponding melody transcriptions, including audio excerpts from such genres as Rock, R&B, Pop, Jazz, Opera, and MIDI (full test set with the reference transcriptions, 28.6 MB).
Potential Participants
- Chao-Ling Leon Hsu and Jyh-Shing Roger Jang (Department of Computer Science, National Tsing-Hua University, Hsinchu, Taiwan)