2006:Audio Cover Song Identification Results
Introduction
These are the results for the 2006 running of the Audio Cover Song Identification task. For background information about this task, please refer to the Audio Cover Song page.
Each system was given a collection of 1000 songs that included 30 different classes (sets) of cover songs, each class/set represented by 11 different versions of a particular song. Each of the 330 cover songs was used as a query, and each system was required to return 10 results per query. Systems were evaluated on the number of songs from the same class/set as the query that appeared among the 10 returned results, as sketched below. Special note: some systems (e.g., KWL, KWT, LR and TP) participated as a by-product of the Audio Music Similarity and Retrieval task; that is, they were not specifically written to detect cover song variants. Other systems (e.g., CS, DE, KL1 and KL2) were specifically written to detect cover song variants.
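As a rough, hypothetical illustration of this per-query scoring (not the actual MIREX evaluation code; the variable names returnedClasses and queryClass are assumptions), the MATLAB sketch below counts how many of the 10 items returned for each toy query share that query's cover class:
 % Toy sketch of the per-query scoring (not the actual MIREX evaluator).
 % returnedClasses(q,:) holds the cover-class labels of the 10 results
 % returned for query q; queryClass(q) is the class of query q itself.
 queryClass      = [1; 2];                    % two toy queries
 returnedClasses = [1 1 3 1 2 4 1 5 6 7; ...  % four results from class 1
                    2 3 2 2 2 2 2 2 2 2];     % nine results from class 2
 % Count how many returned items match each query's class.
 coversFound = sum(returnedClasses == repmat(queryClass, 1, 10), 2)
 % coversFound = [4; 9]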
General Legend
Team ID
CS = Christian Sailer and Karin Dressler (1) (abstract: https://www.music-ir.org/evaluation/MIREX/2006_abstracts/CS_sailer.pdf)
DE = Daniel P. W. Ellis (1) (abstract: https://www.music-ir.org/evaluation/MIREX/2006_abstracts/CS_ellis.pdf)
KL1 = Kyogu Lee 1 (1)
KL2 = Kyogu Lee 2 (1)
KWL = Kris West (Likely) (2)
KWT = Kris West (Trans) (2)
LR = Thomas Lidy and Andreas Rauber (2)
TP = Tim Pohle (2)
(1) Denotes submissions specifically designed to detect cover song variants.
(2) Denotes submissions not specifically designed to detect cover song variants (see the Audio Music Similarity and Retrieval Results).
Calculating Summary Measures
MRR = Mean Reciprocal Rank. Reciprocal rank is the reciprocal of the rank (1/rank) of the first correctly identified cover for each query. These values are averaged within each cover song group as well as over all queries.
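As a minimal sketch of this computation in MATLAB (assuming, as a convention adopted here, that a query with no correct cover among its 10 results contributes a reciprocal rank of 0):
 % Minimal MRR sketch. ranks(q) is the rank (1-10) of the first correctly
 % identified cover for query q; Inf marks a query with no correct cover
 % among its 10 results, so 1/Inf contributes a reciprocal rank of 0.
 ranks = [1 3 Inf 2 10];   % hypothetical first-hit ranks for five queries
 rr    = 1 ./ ranks;       % per-query reciprocal ranks
 MRR   = mean(rr)          % MRR = (1 + 1/3 + 0 + 1/2 + 1/10) / 5 = 0.3867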
Overall Summary Results
(Results table unavailable: /nema-raid/www/mirex/results/coversong_overall.csv not found.)
Audio Cover Song Identification Runtime Data
(Results table unavailable: /nema-raid/www/mirex/results/ac06_runtime.csv not found.)
Friedman Test with Multiple Comparisons Results (p=0.05)
The Friedman test was run in MATLAB against the MRR summary data over the 30 song groups.
Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer', 'estimate', 'friedman', 'alpha', 0.05);
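For context, a hedged sketch of how the stats input to that command could be produced; the CSV file name and its numeric-only 30-groups-by-8-systems layout are assumptions, not the actual MIREX scripts:
 % Hypothetical pipeline sketch. Assumes mrr is a 30x8 numeric matrix:
 % one row per cover song group, one column per system
 % (CS, DE, KL1, KL2, KWL, KWT, LR, TP).
 mrr = csvread('coversong_mrr.csv');         % assumed numeric-only CSV
 [p, tbl, stats] = friedman(mrr, 1, 'off');  % Friedman test, display off
 [c, m, h, gnames] = multcompare(stats, 'ctype', 'tukey-kramer', ...
     'estimate', 'friedman', 'alpha', 0.05);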
(Results tables unavailable: /nema-raid/www/mirex/results/cover_song_sum_friedman.csv and /nema-raid/www/mirex/results/audiocover_friedman_mrr.csv not found.)
Mean Reciprocal Rank (MRR) Results
(Results table unavailable: /nema-raid/www/mirex/results/coversong_mrr.csv not found.)
Raw Results
This table reports the total number of correctly identified covers for each query and the maximum number of covers identified for each cover song group. (Results table unavailable: /nema-raid/www/mirex/results/coversong_results.csv not found.)