2009:Audio Chord Detection
Description
The text of this section is copied from the 2008 page. This task was first run in 2008. Please add your comments and discussions for 2009.
For many applications in music information retrieval, extracting the harmonic structure is very desirable, for example for segmenting pieces into characteristic segments, for finding similar pieces, or for semantic analysis of music.
The extraction of the harmonic structure requires detecting as many chords as possible in a piece. This includes characterising each chord by its root and type, as well as establishing the chronological order of the chords with their onsets and durations.
Although some publications are available on this topic [1,2,3,4,5], comparison of the results is difficult because different measures are used to assess the performance. To overcome this problem, an accurately defined methodology is needed. This includes a repertoire of detectable chords, a defined test set along with ground truth, and unambiguous calculation rules to measure the performance.
To this end, we suggest introducing the new evaluation task Audio Chord Detection.
The deadline for this task is TBA.
Discussions for 2009
Data
As this task is intended for music information retrieval, the analysis should be performed on real-world audio, not on resynthesized MIDI or special renditions of single chords. We suggest that the test bed consist of WAV files in CD quality (a sampling rate of 44.1 kHz and a resolution of 16 bits). A representative test bed should consist of more than 50 songs from different genres such as pop, rock, and jazz.
For each song in the test bed, a ground truth is needed. This should comprise all detectable chords in the piece with their root, type, and temporal position (onset and duration) in a machine-readable format that is still to be specified.
To define the ground truth, a set of detectable chords has to be identified. We propose to use the following set of chord types, built upon each of the twelve semitones:

Triads: major, minor, diminished, augmented, suspended 4
Quads: major-major 7, major-minor 7, major add9, major 7/#5, minor-major 7, minor-minor 7, minor add9, minor 7/b5, maj7/sus4, 7/sus4
An approach for text annotation of musical chords is presented in [6].
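For illustration only (the ground-truth format is still to be specified, see above), an annotation file using the chord syntax of Harte et al. [6] might list one chord per line as onset time, offset time, and label; all times and labels below are made up:

 0.000   2.612  N
 2.612  11.459  B:maj
11.459  21.410  E:min7
21.410  25.332  A:7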
We could contribute excerpts of approximately 30 pop and rock songs, including ground truth.
Evaluation
Two common measures from the field of information retrieval are recall and precision. They can be used to evaluate a chord detection system.

Recall: the number of time units where the chords have been correctly identified by the algorithm, divided by the number of time units that contain detectable chords in the ground truth.

Precision: the number of time units where the chords have been correctly identified by the algorithm, divided by the total number of time units where the algorithm detected a chord.
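As a minimal sketch of how such a frame-based evaluation could be computed (an illustration only, not an official scoring script; the segment representation, the 100 ms hop size, and the "N" no-chord label are assumptions):

def label_at(segments, t):
    # segments: list of (start, end, label) tuples in seconds.
    # Returns the label active at time t, or "N" (no chord) if none covers t.
    for start, end, label in segments:
        if start <= t < end:
            return label
    return "N"

def frame_recall_precision(ref, est, hop=0.1):
    # Sample both annotations on a fixed time grid and count matches.
    duration = max(ref[-1][1], est[-1][1])
    n_frames = int(duration / hop)
    correct = ref_frames = est_frames = 0
    for i in range(n_frames):
        t = (i + 0.5) * hop            # evaluate at the frame centre
        r, e = label_at(ref, t), label_at(est, t)
        if r != "N":
            ref_frames += 1            # ground truth contains a chord here
        if e != "N":
            est_frames += 1            # the algorithm detected a chord here
        if r != "N" and r == e:        # exact string match; enharmonic
            correct += 1               # equivalence would need normalisation
    recall = correct / float(ref_frames) if ref_frames else 0.0
    precision = correct / float(est_frames) if est_frames else 0.0
    return recall, precision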
Points to discuss:
- The Precision measure has not been used last year, and I believe it should not because (unlike in beat extraction) we can assume a contiguous sequence of chords, i.e. all time units should feature a chord label. --Matthias 11:04, 27 June 2009 (UTC)
- I would like to disagree with Matthias' previous point: I think we cannot assume that there is a chord present in every frame; think for instance of a drum solo, an a cappella break, ethnic music, or simply the beginning and ending of a file. In melody extraction or beat detection, there also isn't a continuity assumption. I must say that at the moment our system isn't able to generate a no-chord either, so it is not in my personal interest to add this to the evaluation, but I feel it should be part of a general chord extraction system. I've also learned from some preliminary experiments that with the current frame-based evaluation it is actually not even beneficial to include such a no-chord generator, because of the inequality of the prior probabilities of a chord and a no-chord (14% for a N.C. in our little dataset; I suspect it to be even less for the Beatles set). The consequence is that the chord/no-chord distinction must be very accurate in order to increase the performance.
A related, minor topic is the naming of this task. Why isn't it "audio chord extraction", just like "melody extraction"? For me, "chord detection" is making the distinction between chords and no-chords, and "chord extraction" is naming detected chords. Anyway, just nitpicking on that one. --Johan 15:43, 16 July 2009 (CET)
- I think we can assume a contiguous sequence of chords if we treat "no chord" as a chord. --Matthias 16:01, 7 August 2009 (UTC)
- I believe we should move forward in two ways to get a more meaningful evaluation:
- evaluate separate recall measures for several chord classes; my proposal is major, minor, diminished, augmented, and dominant (meaning major chords with a minor seventh). A final recall score can then be calculated as a (weighted) average of the recall on the different chords. --Matthias 11:04, 27 June 2009 (UTC)
- We use just the triads major, minor, diminished, augmented, dominant, which I think is a more sensible distinction. Once you start using quads, why limit yourself to dominant 7 and not use minor 7, major 7, full diminished, etc.? So I'm more in favour of either just triads (maybe add sus too) or more quads. --Johan 15:48, 16 July 2009 (CET)
- Segmentation should be considered. For example, a chord extraction algorithm that has reasonable recall may still be heavily fragmented, thus producing an output that is difficult for humans to read. One measure to check for similarity in segmentation is the directional Hamming distance (or divergence); see the sketch after this discussion. --Matthias 11:04, 27 June 2009 (UTC)
- Agreed; while the frame-based evaluation is certainly easy, it is not the most musically sensible. An evaluation on a note-level or chord-segment basis might be a little too complicated for now, but this is a start. --Johan 15:51, 16 July 2009 (CET)
- Do you think we could consider several evaluation cases for chord detection with various chord dictionaries? (For instance, one with major/minor triads, one with major/minor/diminished/augmented/dominant, etc.) That way, each participant can choose a case that can be handled by his/her algorithm. --Helene (IRCAM)
- to Helene: Several (maybe two) evaluation cases could be good, but I think everyone's algorithm should be tested on every task; choosing the one you want would mean you can't compare it to other people's methods.
- to Johan: in your chord list, did you mean to give the same list as I did? Anyway, I'd like to add "dominant" because it is used often, and musically (in jazz harmony theory) there is a functional difference between "dominant" and "major" (but not between "minor" and "minor 7", and not between "major" and "major 7" or "major 6"). --Matthias 16:01, 7 August 2009 (UTC)
- Something to consider when broadening the set of chords used is the inequality in the prior probabilities of different chords (much like the problem with chords/no-chords I mentioned above). When looking for augmented and diminished triads in the Beatles set in addition to major and minor, I'm quite positive that the (or at least my) overall performance will decrease. Some processing/selecting could level the priors, but just limiting the data set to the duration of the least frequent chord won't leave us with much data, I'm afraid. The thing is, of course, that the inequality is also there in reality, so I'm not really convinced myself that this should be done. Another option is not changing the data, but letting the evaluation take it into account. --Johan 16:23, 16 July 2009 (CET)
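For reference, a minimal sketch of the directional Hamming distance mentioned in the discussion above (one possible formulation for illustration, not an agreed-upon definition; the segment representation is an assumption):

def directional_hamming(ref, est):
    # ref, est: lists of (start, end) segment boundaries in seconds.
    # For each reference segment, find the single best-overlapping
    # estimated segment; whatever it does not cover counts as error.
    # Swapping the arguments distinguishes fragmentation from over-merging.
    missed = 0.0
    for r_start, r_end in ref:
        best_overlap = max(
            (min(r_end, e_end) - max(r_start, e_start)
             for e_start, e_end in est
             if min(r_end, e_end) > max(r_start, e_start)),
            default=0.0)
        missed += (r_end - r_start) - best_overlap
    total_duration = ref[-1][1] - ref[0][0]
    return missed / total_duration    # 0 = identical segmentation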
- Should chord data be expressed in absolute (e.g. "F major-minor 7") or relative (e.g. "C: IV major-minor 7") terms?
- Should different inversions of chords be considered in the evaluation process?
- What temporal resolution should be used for ground truth and results?
- How should enharmonic and other confusions of chords be handled?
- How will Ground Truth be determined?
- What degree of chordal/tonal complexity will the music contain?
- Will we include any atonal or polytonal music in the Ground Truth dataset?
- What is the maximal acceptable onset deviation between ground truth and result?
- What file format should be used for ground truth and output?
Submission Format
Submissions have to conform to the specified format below:
extractFeaturesAndTrain "/path/to/trainFileList.txt" "/path/to/scratch/dir"
where trainFileList.txt contains the path to each WAV file. The features extracted at this stage can be stored under "/path/to/scratch/dir". The ground-truth files for the supervised learning will be in the same path, with a ".txt" extension appended. For example, for "/path/to/trainFile1.wav" there will be a corresponding ground-truth file called "/path/to/trainFile1.wav.txt".
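For illustration (the paths are hypothetical), trainFileList.txt would simply contain one path per line:

/path/to/trainFile1.wav
/path/to/trainFile2.wav

with the corresponding ground-truth files "/path/to/trainFile1.wav.txt" and "/path/to/trainFile2.wav.txt" sitting alongside the audio.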
For testing:
doChordID.sh "/path/to/testFileList.txt" "/path/to/scratch/dir" "/path/to/results/dir"
If there is no training, you can ignore the second argument here. In the results directory, there should be one file for each test file, with the same name as the test file plus a ".txt" extension. The results file should be structured as described below by Matti.
Programs can use their working directory if they need to keep temporary cache files or internal debugging info. Stdout and stderr will be logged.
Potential Participants
- Johan Pauwels/Ghent University, Belgium (firstname.lastname@elis.ugent.be) (still interested as of Aug 5th, but on holiday from 8-20 Aug)
- Matthias Mauch, Centre for Digital Music, Queen Mary, University of London --Matthias 10:33, 27 June 2009 (UTC)
- Laurent Oudre, TELECOM ParisTech, France (firstname.lastname@telecom-paristech.fr) (still interested, probably 2 algorithms)
- Maksim Khadkevich, Fondazione Bruno Kessler, Italy (lastname_at_fbk_dot_eu)
- Thomas Rocher, LaBRI Université Bordeaux 1, France (firstname.lastname@labri.fr)
- Yushi Ueda, The University of Tokyo, Japan (lastname@hil.t.u-tokyo.ac.jp)
- Your name here
Bibliography
1. Harte, C.A. and Sandler, M.B. (2005). Automatic chord identification using a quantised chromagram. Proceedings of the 118th Audio Engineering Society Convention.
2. Sailer, C. and Rosenbauer, K. (2006). A bottom-up approach to chord detection. Proceedings of the International Computer Music Conference 2006.
3. Shenoy, A. and Wang, Y. (2005). Key, chord, and rhythm tracking of popular music recordings. Computer Music Journal 29(3), 75-86.
4. Sheh, A. and Ellis, D.P.W. (2003). Chord segmentation and recognition using EM-trained hidden Markov models. Proceedings of the 4th International Conference on Music Information Retrieval.
5. Yoshioka, T. et al. (2004). Automatic chord transcription with concurrent recognition of chord symbols and boundaries. Proceedings of the 5th International Conference on Music Information Retrieval.
6. Harte, C., Sandler, M., Abdallah, S. and Gómez, E. (2005). Symbolic representation of musical chords: a proposed syntax for text annotations. Proceedings of the 6th International Conference on Music Information Retrieval.
7. Papadopoulos, H. and Peeters, G. (2007). Large-scale study of chord estimation algorithms based on chroma representation and HMM. Proceedings of the 5th International Conference on Content-Based Multimedia Indexing.