Difference between revisions of "2008:Audio Melody Extraction"

From MIREX Wiki
Revision as of 06:01, 4 August 2008

[this page is for now largely a copy/paste of the MIREX06 page: Audio_Melody_Extraction]

Goal

To extract the melody line from polyphonic audio.

Description

The aim of the MIREX audio melody extraction evaluation is to identify the melody pitch contour from polyphonic musical audio. The task consists of two parts: voicing detection (deciding whether a particular time frame contains a "melody pitch" or not) and pitch detection (deciding the most likely melody pitch for each time frame). We structure the submission to allow these parts to be done independently, i.e. it is possible (via a negative pitch value) to guess a pitch even for frames judged unvoiced. Algorithms that do not discriminate between melodic and non-melodic parts are also welcome!

(The audio melody extraction evaluation will essentially be a re-run of the previous contest, i.e. the same test data will be used.)
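The frame-level scoring implied above can be sketched as follows. This is only an illustration, not the official evaluation code: the function name, the 50-cent (quarter-tone) tolerance, and the exact metric definitions are my assumptions about how Raw Pitch Accuracy, Raw Chroma Accuracy, and voicing recall are typically computed.

```python
import math

def melody_metrics(est_hz, ref_hz, tol_cents=50.0):
    """Frame-level melody metrics on a shared time grid (a sketch).

    est_hz / ref_hz: per-frame frequencies in Hz, 0 for unvoiced.
    A negative estimate means "frame judged unvoiced, but |value| is
    still a pitch guess", so it can score on Raw Pitch / Raw Chroma
    Accuracy without counting as a voiced decision.
    """
    pitch_hits = chroma_hits = voiced_ref = voiced_hits = 0
    for est, ref in zip(est_hz, ref_hz):
        if ref <= 0:            # reference unvoiced: not scored here
            continue
        voiced_ref += 1
        if est > 0:             # positive value = "voiced" decision
            voiced_hits += 1
        if est == 0:
            continue            # no pitch guess at all: a miss
        cents = 1200.0 * abs(math.log2(abs(est) / ref))
        if cents <= tol_cents:
            pitch_hits += 1
        # fold the error onto one octave for chroma accuracy
        folded = abs(cents - 1200.0 * round(cents / 1200.0))
        if folded <= tol_cents:
            chroma_hits += 1
    raw_pitch = pitch_hits / voiced_ref
    raw_chroma = chroma_hits / voiced_ref
    voicing_recall = voiced_hits / voiced_ref
    return raw_pitch, raw_chroma, voicing_recall
```

For example, an estimate of -441 Hz against a 220 Hz reference misses Raw Pitch Accuracy (it is an octave off) but scores on Raw Chroma Accuracy, while leaving the frame counted as unvoiced.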

Dataset:

  • MIREX05 database : 25 phrase excerpts of 10-40 sec from the following genres: Rock, R&B, Pop, Jazz, Solo classical piano
  • ISMIR04 database : 20 excerpts of about 20s each
  • CD-quality (PCM, 16-bit, 44100 Hz)
  • single channel (mono)
  • manually annotated reference data (10 ms time grid)

Output Format:

  • In order to allow for generalization among potential approaches (e.g. different frame sizes, hop sizes, etc.), submitted algorithms should output pitch estimates, in Hz, at discrete instants in time
  • each line of the output file therefore contains a time stamp, then a space or tab, then the corresponding frequency value, followed by a newline
  • the time grid of the reference file is 10 ms, yet the submission may use a different time grid for its output (for example 5.8 ms)
  • instants identified as unvoiced (no dominant melody) can be reported either as 0 Hz or as a negative pitch value; reporting negative pitch values may improve the Raw Pitch Accuracy and Raw Chroma Accuracy statistics

Relevant Test Collections

  • For the ISMIR 2004 Audio Description Contest, the Music Technology Group of the Pompeu Fabra University assembled a diverse set of audio segments and corresponding melody transcriptions, including audio excerpts from genres such as Rock, R&B, Pop, Jazz, Opera, and MIDI. (full test set with the reference transcriptions (28.6 MB))
  • Graham's collection: the test set and further explanations can be found on the pages http://www.ee.columbia.edu/~graham/mirex_melody/ and http://labrosa.ee.columbia.edu/projects/melody/

Potential Participants

  • Jean-Louis Durrieu (TELECOM ParisTech, formerly ENST), durrieu@enst.fr
  • Pablo Cancela (pcancela@gmail.com)
  • Vishweshwara Rao (Indian Institute of Technology), vishu_rao@iitb.ac.in
  • Karin Dressler (kadressler@gmail.com)
  • Matti Ryynänen and Anssi Klapuri (Tampere University of Technology), matti.ryynanen <at> tut.fi, anssi.klapuri <at> tut.fi

JL's Comments 11/07/08

We propose to re-run the Audio Melody Extraction task this year. It was dropped last year, but there has probably been further research on this topic since 2006. Anyone interested?

Vishu's comments 14/07/08

May I also suggest that we additionally have a separate evaluation for cases where the main melody is carried by the human singing voice, as opposed to other musical instruments? I ask this for two reasons: first, for most popular music the melody is indeed carried by the human voice; second, while our predominant-F0 detector is quite generic, our voicing detector is 'tuned' to the human voice and so is less likely to perform well for other instruments.

JL's Comments 15/07/08

Concerning the vocal/non-vocal distinction: this has been done in previous evaluations of audio melody extraction (see https://www.music-ir.org/mirex/2006/index.php/Audio_Melody_Extraction_Results for the results of the MIREX06 task). I guess separate results for vocal and vocal+non-vocal should be possible once again.

I had another concern: does anyone know of an extra corpus? It would be nice to have some more material to test the algorithms on. Maybe some more classical excerpts? Does anyone know a way to obtain such data, I mean with a separated track of the main melody, so that the annotation work can be half done by an automatic algorithm?

Vishu's comments : Multi-track Audio available 22/07/08

We are in possession of about 4 min 15 sec of Indian classical vocal performances with separated tracks of the main melody. For a 10 ms hop, there are about 21000 vocal frames. Would this data be of interest?

Karin's comments 22/07/08

Hi Vishu and others! Any new data is appreciated, and a classical Indian performance would definitely add an interesting new genre :-) I have only made minor changes to my own melody extraction algorithm, since I have shifted my priorities to MIDI note estimation (onset/offset and tone height) of the melody voice. Anyway, I am interested in a new evaluation of my algorithm. I know that the ISMIR 2004 dataset has annotated MIDI notes available. Maybe we could also evaluate the extracted MIDI melody notes, at least for this data set! Is there anyone else interested in this evaluation?

JL's Comments 30/07/08

Hi everyone!
A few comments...
To Vishu: could you upload anything to mert? I would also like to know how you annotated the data. The people who did the ground truth for ISMIR2004 (E. Gomez in particular) told me that they used 46.44 ms long windows (for a 44.1 kHz sampling rate, that's 2048 samples, hence the "strange" number), with a 5.8 ms hop size. This ground truth was modified by Andreas (Ehmann) so that the hop size became 10 ms in MIREX05.
The ground truth for both collections gives as first column the time stamp of the _center_ of the window (at least, that's what they did for ISMIR04), and as second column the corresponding frequency in Hz.
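Since the reference files sit on a fixed grid (5.8 ms for ISMIR04, 10 ms after the MIREX05 conversion) while submissions may output on any grid, scoring implies resampling the submitted track onto the reference instants. A minimal nearest-neighbour sketch of that step; this is my own illustration under that assumption, not the official evaluator's method:

```python
def resample_to_grid(times, freqs, grid_hop=0.010, duration=None):
    """Nearest-neighbour resampling of a pitch track onto a fixed grid.

    times/freqs: the submitted (time, Hz) pairs, times ascending.
    Returns one frequency per grid instant 0, grid_hop, 2*grid_hop, ...
    """
    if duration is None:
        duration = times[-1]
    n = int(round(duration / grid_hop)) + 1
    out = []
    j = 0
    for i in range(n):
        t = i * grid_hop
        # advance j while the next timestamp is at least as close to t
        while j + 1 < len(times) and abs(times[j + 1] - t) <= abs(times[j] - t):
            j += 1
        out.append(freqs[j])
    return out
```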
To Karin: It's nice to see former participants come back and risk their algorithms on the same task again! I think that's also rather important for further studies: that way, we can directly compare ourselves to the state of the art!