2009:Audio Tag Classification (Mood Set) Results
Revision as of 21:37, 13 October 2009
General Legend
Team ID
BP1 = Juan José Burred, Geoffroy Peeters
BP2 = Juan José Burred, Geoffroy Peeters
CL1 = Chuan Cao, Ming Li
CL2 = Chuan Cao, Ming Li
CL3 = Chuan Cao, Ming Li
CL4 = Chuan Cao, Ming Li
GP = Geoffroy Peeters
GT1 = George Tzanetakis
GT2 = George Tzanetakis
LWW1 = Hung-Yi Lo, Ju-Chiang Wang, Hsin-Min Wang
LWW2 = Hung-Yi Lo, Ju-Chiang Wang, Hsin-Min Wang
HCB = Matthew D. Hoffman, David M. Blei, Perry R. Cook
Overall Summary Results (Binary)
file /nema-raid/www/mirex/results/tag/Mood/summary_binary.csv not found
Summary Binary Relevance F-Measure (Average Across All Folds)
file /nema-raid/www/mirex/results/tag/Mood/binary_avg_Fmeasure.csv not found
Summary Binary Accuracy (Average Across All Folds)
file /nema-raid/www/mirex/results/tag/Mood/binary_avg_Accuracy.csv not found
Summary Positive Example Accuracy (Average Across All Folds)
file /nema-raid/www/mirex/results/tag/Mood/binary_avg_positive_example_Accuracy.csv not found
Summary Negative Example Accuracy (Average Across All Folds)
file /nema-raid/www/mirex/results/tag/Mood/binary_avg_negative_example_Accuracy.csv not found
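The binary summary metrics above follow standard per-tag definitions. As a rough sketch (assumed definitions for illustration, not the actual MIREX evaluation code), each tag's accuracy, F-measure, and positive/negative example accuracy can be computed from per-clip 0/1 ground truth and predictions:

```python
def binary_tag_metrics(y_true, y_pred):
    """Per-tag binary metrics from parallel 0/1 lists (assumed
    standard definitions, not the MIREX scoring code).

    Returns (accuracy, F-measure, positive example accuracy,
    negative example accuracy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fmeasure = (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)
    pos_acc = recall                            # accuracy on positive examples
    neg_acc = tn / (tn + fp) if tn + fp else 0.0  # accuracy on negative examples
    return accuracy, fmeasure, pos_acc, neg_acc
```

The averaged figures in the tables would then be means of these per-tag values over tags and folds.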
file /nema-raid/www/mirex/results/tag/rounded/tag.affinity_tag_AUC_ROC.csv not found
Overall Summary Results (Affinity)
file /nema-raid/www/mirex/results/tag/Mood/summary_affinity.csv not found
Summary AUC-ROC Tag (Average Across All Folds)
file /nema-raid/www/mirex/results/tag/Mood/affinity_tag_AUC_ROC.csv not found
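AUC-ROC for a tag can be read as the probability that a randomly chosen positive clip receives a higher affinity score than a randomly chosen negative one. A minimal pairwise sketch (assumed definition for illustration, not the MIREX scoring code):

```python
def auc_roc(scores, labels):
    """Pairwise AUC-ROC: fraction of (positive, negative) clip pairs
    where the positive clip gets the higher affinity score; ties
    count as half a win. Assumes at least one positive and one
    negative example."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0 and a random one about 0.5, which is why the affinity tables are centered on that scale.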
Friedman's Test Results
Tag F-measure (Binary) Friedman Test
The following table and plot show the results of Friedman's ANOVA with Tukey-Kramer multiple comparisons computed over the F-measure for each tag in the test, averaged over all folds.
file /nema-raid/www/mirex/results/tag/Mood/binary_FMeasure.friedman.tukeyKramerHSD.csv not found
https://music-ir.org/mirex/2009/results/tag/Mood/small.binary_FMeasure.friedman.tukeyKramerHSD.png
Per Track F-measure (Binary) Friedman Test
The following table and plot show the results of Friedman's ANOVA with Tukey-Kramer multiple comparisons computed over the F-measure for each track in the test, averaged over all folds.
file /nema-raid/www/mirex/results/tag/Mood/binary_FMeasure_per_track.friedman.tukeyKramerHSD.csv not found
Per Track AUC-ROC (Affinity) Friedman Test
The following table and plot show the results of Friedman's ANOVA with Tukey-Kramer multiple comparisons computed over the Area Under the ROC curve (AUC-ROC) for each track/clip in the test, averaged over all folds.
file /nema-raid/www/mirex/results/tag/Mood/affinity.AUC_ROC_TRACK.friedman.tukeyKramerHSD.csv not found
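Friedman's ANOVA ranks the systems within each tag (or track) and tests whether the mean ranks differ significantly. A pure-Python sketch of the Friedman chi-square statistic (standard textbook formula, without the tie correction some implementations apply; not the actual MIREX evaluation code):

```python
def friedman_statistic(scores):
    """Friedman chi-square statistic for a blocks-by-systems score
    matrix (e.g. rows = tags, columns = systems, values = F-measure).
    Uses average ranks for ties; omits the tie-correction factor."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend over any run of tied values
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)
```

The statistic is compared against a chi-square distribution with k-1 degrees of freedom; the Tukey-Kramer step in the plots above then identifies which system pairs differ.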
Assorted Results Files for Download
General Results
affinity_tag_fold_AUC_ROC.csv
affinity_clip_AUC_ROC.csv
binary_avg_Accuracy.csv
binary_per_fold_Accuracy.csv
binary_per_fold_Fmeasure.csv
binary_per_fold_negative_example_Accuracy.csv
binary_per_fold_per_track_Accuracy.csv
binary_per_fold_per_track_Fmeasure.csv
binary_per_fold_per_track_negative_example_Accuracy.csv
binary_per_fold_per_track_positive_example_Accuracy.csv
binary_per_fold_positive_example_Accuracy.csv
Friedman's Test Results
Results By Algorithm
(.tar.gz)
LB = L. Barrington, D. Turnbull, G. Lanckriet
BBE 1 = T. Bertin-Mahieux, Y. Bengio, D. Eck (KNN)
BBE 2 = T. Bertin-Mahieux, Y. Bengio, D. Eck (NNet)
BBE 3 = T. Bertin-Mahieux, D. Eck, P. Lamere, Y. Bengio (Thierry/Lamere Boosting)
TB = Bertin-Mahieux (dumb/smurf)
ME1 = M. I. Mandel, D. P. W. Ellis 1
ME2 = M. I. Mandel, D. P. W. Ellis 2
ME3 = M. I. Mandel, D. P. W. Ellis 3
GP1 = G. Peeters 1
GP2 = G. Peeters 2
TTKV = K. Trohidis, G. Tsoumakas, G. Kalliris, I. Vlahavas