2005:Audio Drum Det

From MIREX Wiki
Revision as of 15:00, 5 October 2005 by Tim (talk | contribs)

Proposer

Koen Tanghe (Ghent University) koen [dot] tanghe [at] ugent [dot] be

Title

Drum detection from polyphonic audio.


Description

The task consists of determining the positions (localization) and corresponding drum class names (labeling) of drum events in polyphonic music. This rhythmic information is highly relevant for today's popular music genres: it can help in determining tempo and (sub)genre, and can also be queried for directly (typical rhythmic sequences/patterns).

1) Input data

The only input for this task is a set of sound file excerpts adhering to the format and content requirements mentioned below.

Audio format:

  • CD-quality (PCM, 16-bit, 44100 Hz)
  • mono and stereo
  • 30 seconds excerpts
  • files are named as "001.wav" to "999.wav" (or with another extension depending on the chosen format)

Audio content:

  • polyphonic music with drums (most)
  • polyphonic music without drums (some)
  • different genres / playing styles
  • both live performances and sequenced music
  • different types of drum sets (acoustic, electronic, ...)
  • at least 50 files
  • participants receive at least 10 files in advance

[Perfe 02/25/05: I would vote for mono and more than 50 files; the 10 files to be given to participants should be randomly drawn from the total pool of N available annotated files unless there is some bias in the collection that is related to genre or other important class; I mean, if there are 30% of electronic percussion files, then 3 out of 10 files should contain that. In that case, a stratified sampling should be used]
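Perfe's stratified-sampling suggestion can be sketched in a few lines of Python. The class names, file names, and proportions below are purely illustrative assumptions, not properties of the actual collection:

```python
import random

def stratified_sample(files_by_class, n_total, seed=0):
    """Draw n_total files so each class keeps roughly its share of the pool.

    files_by_class maps a class name (e.g. "electronic") to its list of
    files; both are hypothetical here. Per-class counts are rounded, so
    the total can be off by one for awkward proportions.
    """
    rng = random.Random(seed)
    pool_size = sum(len(files) for files in files_by_class.values())
    sample = []
    for cls, files in files_by_class.items():
        k = round(n_total * len(files) / pool_size)
        sample.extend(rng.sample(files, min(k, len(files))))
    return sample
```

For instance, with 15 electronic-percussion files in a pool of 50 (30%), a 10-file draw would contain 3 of them, as suggested in the comment.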




[Masataka 03/07/2005: I agree with Perfe's comments above. In addition, our team prefers to use whole songs (not excerpts). I would like to make sure that the input audio signals contain sounds of various musical instruments (some of them including vocals, too), and that the actual drum sounds (sound samples) included in the input mixture are not known in advance, because we have to deal with those situations in practical applications.]

2) Output results

The output of this task is, for each sound file, an ASCII text file with two columns, where each line represents one drum event. The first column is the position (in seconds) of the drum event, and the second column is the label of the drum event at that position. Multiple drum events may occur at the same time, so several lines may share the same value in the first column. The output files have the same names as the audio files, but with the extension ".txt" (so "001.txt" for "001.wav").

Classes and labels that are considered:

  • BD (bass drum)
  • SD (snare drum)
  • HH (hihat)
  • CY (cymbal)
  • TM (tom)
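To make the format concrete, here is a minimal Python sketch that parses a hypothetical annotation file in the two-column format described above (the sample content and helper name are illustrative, not part of the proposal):

```python
# Hypothetical annotation content: "<onset time in seconds> <drum label>"
# per line; events sharing an onset time appear on separate lines.
sample = """0.000 BD
0.000 HH
0.498 SD
0.996 BD
0.996 CY
"""

def parse_annotation(text):
    """Return a list of (time, label) tuples, one per drum event."""
    events = []
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        time_str, label = line.split()
        events.append((float(time_str), label))
    return events

events = parse_annotation(sample)
print(events[:2])  # the first two events share the same onset time
```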

[Perfe 02/25/05: What about adding an "other" class? How are we going to manage the combination of sounds?]

[Masataka 03/07/2005: How about adding the option of evaluating only BD, SD, and HH?]

Participants

  • James Bergstra and Douglas Eck (University of Montreal), james.bergstra@umontreal.ca, eckdoug@iro.umontreal.ca
  • Balaji Thoshkahna (Indian Institute of Science,Bangalore), balajitn@ee.iisc.ernet.in
  • Olivier Gillet and Gaël Richard (ENST), olivier.gillet@enst.fr, gael.richard@enst.fr
  • George Tzanetakis (University of Victoria), gtzan@cs.uvic.ca
  • Christian Dittmar (Fraunhofer), dmr@idmt.fraunhofer.de
  • Jouni Paulus (Tampere University of Technology), jouni.paulus@tut.fi
  • Kazuyoshi Yoshii (Kyoto University), Masataka Goto (AIST), Hiroshi G. Okuno (Kyoto University), yoshii@kuis.kyoto-u.ac.jp, m.goto@aist.go.jp, okuno@i.kyoto-u.ac.jp
  • Koen Tanghe (IPEM, Ghent University), koen [dot] tanghe [at] ugent [dot] be

Other Potential Participants

  • Vegard Sandvold (Notam), Fabien Gouyon (MTG, University of Pompeu Fabra), Perfecto Herrera (UPF)

vegardsa[at]student[dot]matnat[dot]uio[dot]no, fabien[dot]gouyon[at]iua[dot]upf[dot]es, perfe[at]iua[dot]upf[dot]es, likely

  • Christian Uhle (Fraunhofer)

uhle[at]idmt[dot]fraunhofer[dot]de, ???

  • Derry FitzGerald (Cork Institute of Technology)

derry[dot]fitzgerald[at]cit[dot]ie, likely

Evaluation Procedures

Comparison rules: Questions to be answered:

  • when do we consider a detected event as "correct"?
  • when do we consider a detected event as "false"?
  • when do we consider a ground truth event as "missed"?
  • what's the maximum difference in time between real drum event position and detected drum event position that can be allowed?
  • is detecting an event at a valid ground truth position but classifying it incorrectly as bad as not detecting the event at all?
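The questions above can be made concrete with a small sketch. The Python below implements one possible convention; the 50 ms tolerance and the greedy, label-aware matching are assumptions for illustration, not a decided procedure:

```python
def match_events(detected, truth, tol=0.05):
    """Greedily match detected (time, label) events to ground-truth events.

    A detection counts as correct if an unmatched ground-truth event with
    the same label lies within `tol` seconds; a mislabeled detection is
    treated as a false alarm plus a miss. Both choices are assumptions.
    Returns (correct, false_alarms, missed).
    """
    unmatched = list(truth)
    correct = 0
    for t, label in detected:
        # closest unmatched ground-truth event with the same label
        candidates = [(abs(t - gt), i)
                      for i, (gt, gl) in enumerate(unmatched)
                      if gl == label and abs(t - gt) <= tol]
        if candidates:
            _, best = min(candidates)
            del unmatched[best]
            correct += 1
    false_alarms = len(detected) - correct
    missed = len(unmatched)
    return correct, false_alarms, missed
```

With these counts, precision = correct / (correct + false_alarms) and recall = correct / (correct + missed), which connects directly to the evaluation measures discussed below.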

Evaluation measures: which performance measure are we going to use? precision, recall, accuracy, F-measure, ...?

Drum detection may serve several goals, so the evaluation should assess the algorithms relative to the initial goal or application. In our case, I believe the interest is "obtaining metadata that describe the drum track of a file". In this context, the ideal would be a kind of perceptual distance in the metadata domain, but do such distances exist? Is it possible to define one without conducting lengthy perceptual experiments? One possibility would be to use a distance similar to the one we used for our drum loop query system (to be published soon in the special issue of JIIS). The basic idea is to compute an edit distance between the obtained metadata strings and the ground truth metadata strings. This edit distance accounts for deletions, insertions and confusions, but also takes desynchronization between events into account and allows coefficients to be associated with confusions (for example, it is often less dramatic to miss a charley (hi-hat) hit than a bass drum hit).
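The distance from the JIIS drum-loop paper is not reproduced here; as a hedged illustration, the sketch below shows a generic weighted edit distance over drum label sequences. The cost tables are purely hypothetical, chosen only to make a missed bass drum more expensive than a missed hi-hat:

```python
# Hypothetical per-label costs: missing a BD is penalized more than a HH.
DELETE_COST = {"BD": 2.0, "SD": 1.5, "HH": 0.5, "CY": 0.5, "TM": 1.0}
INSERT_COST = DELETE_COST  # symmetric costs, for simplicity

def confusion_cost(a, b):
    """Cost of confusing label a with label b (0 if identical)."""
    return 0.0 if a == b else 1.0

def edit_distance(seq_a, seq_b):
    """Dynamic-programming edit distance with per-label weights."""
    n, m = len(seq_a), len(seq_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + DELETE_COST[seq_a[i - 1]]
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + INSERT_COST[seq_b[j - 1]]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + DELETE_COST[seq_a[i - 1]],   # deletion
                d[i][j - 1] + INSERT_COST[seq_b[j - 1]],   # insertion
                d[i - 1][j - 1]
                + confusion_cost(seq_a[i - 1], seq_b[j - 1]),  # confusion
            )
    return d[n][m]
```

The timing-desynchronization term of the published distance is omitted here; this sketch only shows how confusion coefficients plug into the recurrence.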


Relevant Test Collections

Ground truth annotations:

For each sound file to be analyzed, there is a corresponding annotation file using the same format as described in "2) Output results". The ground truth files are obtained by manual annotation by people who have experience with drum sounds (drummers?). RWC and Magnatune are potentially excellent sources. For annotation, it would be important to include a cross-check (several drummers, three being an ideal minimum, annotating the same files). This would be quite similar to the methodology followed for the onset detection evaluation (see P. Leveau's paper at the last ISMIR). This would yield an excellent ground truth annotation, and would also make it possible to evaluate which kinds of confusions are never made and which ones are made often, what the acceptable maximum difference in time between real and detected drum events is, etc. However, as always, this requires more effort and time. Another option (for sequenced music) is to use *audio recordings* of MIDI sequences, and use the drum tracks of the MIDI files to obtain the ground truth annotations.

[Perfe 02/25/05: In case MIDI files are used, I'd suggest adding some "human touch" MIDI post-processing, plus some basic audio production tricks such as compression and reverb, in order to make the audio as close as possible to the complexity of real recordings; I would not use more than 30% MIDI files, if needed]

[Masataka 03/07/2005: For annotation, I've been working on labeling all the onset times of BD, SD, and HH on more than 50 songs in the RWC Music Database (RWC-MDB-P-2001). I have a plan to put them on http://staff.aist.go.jp/m.goto/RWC-MDB/ so that they can be available for RWC-MDB users.]

[Christian Dittmar 04/06/2005: I just enlisted for participation and I wanted to let you know of our small annotated database. It comprises 44 audio snippets of approximately 30 seconds duration. They are 44.1 kHz / 16 bit / mono, unfortunately not copyright-free. The annotation was done by 3 different listeners, all experienced with drum sounds and musical rules. 17 different instrument classes are featured, including Kick, Snare, Tom, Hihat and Cymbal (though not every class occurs in every sample).]

Review 1

Problem is both clearly defined and interesting in terms of current research.

The audio format and content are fine; however, it would be nice to include more than 50 files, although this would probably make the transcription task too difficult/time-consuming. Either mono or stereo recordings should be chosen; I suggest polling participants to see whether anyone intends to use stereo information or whether all participants will down-mix to mono. There is no mention of transcribed datasets, so this will have to be done from scratch; the proposed use of the RWC or Magnatune databases is therefore a good idea. I am unsure whether the use of synthesized MIDI files is valid unless they are produced using samples rather than synthesized drum sounds, and even then several different samples of each sound would be needed to ensure enough variance for a proper evaluation. I agree that ground truth annotations should be produced by 2-3 non-participating transcribers.

The output result format is fine; however, there may be more classes of drum/percussive sounds that should be considered, such as maracas or tambourine. Obviously this will depend on the content of the audio files used, and these could form an abstract grouping if there are insufficient training examples for separate groups.

The evaluation procedures section contains more questions than answers. Obviously this task is quite dependent on the onset detection/segmentation. Paul Brossier proposed for the onset detection evaluation that events detected within 50 ms of the transcribed position be considered correct; I assume this holds for the drum detection proposal. I think it would be interesting, and not too taxing, to have two tracks: one supplying the ground-truth segmentation (requiring only the classification of detected events) and another performing the whole task.

Will submissions be run once or cross-validated? As there is going to be a very small dataset, a high number of folds should be used, although this should be limited so that every fold contains at least one example of each class.

F-measure (mean and variance for cross-validated results) would seem to be the most applicable evaluation metric if the whole task is performed. Will precision and recall be given equal weighting in the F-measure? See Speech & Language Processing, Jurafsky and Martin, 2000, p. 578, for the generalization of the F-measure: F = (b^2+1)PR / (b^2*P + R). When b=1, P and R have equal weight; b>1 gives more weight to R, b<1 to P. A simple accuracy result would be fine if segmentation is supplied. Statistical significance of differences between algorithms should be estimated, and it would be interesting to see the statistical significance of differences between using the ground-truth segmentation and the detected segmentation, thereby allowing us to assess whether the segmentation or the event classification was at fault.
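The generalized F-measure is straightforward to compute directly; a minimal sketch:

```python
def f_measure(precision, recall, beta=1.0):
    """Generalized F-measure: F = (beta^2 + 1) * P * R / (beta^2 * P + R).

    beta = 1 weights precision and recall equally; beta > 1 shifts the
    weight towards recall and beta < 1 towards precision (the limits
    are R as beta grows and P as beta approaches 0).
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0  # avoid division by zero when both are zero
    return ((beta ** 2 + 1) * precision * recall
            / (beta ** 2 * precision + recall))
```

With beta = 1 this reduces to the harmonic mean of precision and recall, the form mentioned in Masataka's comment below.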

Finally, given the list of potential participants and their publications, I think we can be confident of sufficient participation to run the evaluation.

Recommendation: Refine proposal and accept

[Masataka 03/07/2005: I would also vote for the use of F-measure that is the harmonic mean of the recall rate and the precision rate. I think the above-mentioned 50ms threshold for onset-deviation errors is too large for drum sounds: how about using 25ms, for example? We found it sufficient and appropriate when evaluating our method in our ISMIR 2004 paper: http://staff.aist.go.jp/m.goto/PAPER/ISMIR2004yoshii.pdf]

Review 2

The problem is well described and its applications are of great concern to the MIR community. However, the evaluation procedures and test data contain more questions than answers. The proposal should be much more affirmative. Precise evaluation metrics need to be defined (so that every participant can implement them in a reproducible way), and the choice of the test data has to be discussed (is it relevant to test algorithms on MIDI data only if different synthesizers are used, or is it necessary to use audio data?). This proposal is not mature enough yet, and the participants should make some effort to improve it.

Another issue is that the problem is not MIR in itself, but rather mid-level sound description. If the main applications are tempo induction and subgenre classification, why not evaluate the performance for these applications directly? This would be more relevant for MIR, and annotation would be far less time-consuming. I think this issue has to be seriously considered by the participants in case they do not already own a sufficient amount of annotated data.

[Perfe 02/25/05: I do not agree that it is not MIR. It is MIR, and it is high-level description. The main application is knowing whether the song has drums, whether there are lots of drums or only some sparse hits, and whether there are lots of cymbals or not (hence, some genres could be discarded). The direct application, on the other hand, is still a bit far off, as there are some perceptual issues involved, and perceptual issues require some time to be sorted out]

Downie's Comments

1. Am intrigued by the idea that MIDI or some other symbolic representation could be used to bring together the generation and ground truth tasks. Where does quantization fit into this (i.e., it is hard to "swing" midi files)?

2. If MIDI files are used for generation/ground truth, would it be necessary to introduce background music to make the task more difficult? I suppose the MIDI file could generate the background music also... I wonder if there are some other tricks we might be missing.

Open issues that need to be finalized

1 Description

1.1 Input data

1.1.1 number of audio channels (mono/stereo)?

1.1.2 audio fragment length?

1.1.3 number of files?

1.2 Output results

1.2.1 which drum classes do we consider?

1.2.2 do we use an "other" class?

2 Participants

2.1 status for Christian Uhle ?

2.2 everyone: mail Emmanuel Vincent before June 12

3 Evaluation procedures

3.1 we still need a concrete formal procedure that is sufficiently worked-out so that the organizers can use it

3.2 onset-deviation errors: what's the limit?

3.3 what data do the participants receive in advance?

3.4 we should also have an efficiency/speed measure, because that is an important evaluation criterion too

4 Relevant test collections

4.1 what data will be available (with certainty)?

4.2 do we add audio from MIDI files or not, and if so, how many?

4.3 do we use cross-annotations, and if so: how do we handle that?