Revision as of 11:01, 10 October 2006
Introduction
These are the results for the 2006 running of the Audio Music Similarity and Retrieval task set. For background information about this task set, please refer to the Audio Music Similarity and Retrieval page.
Each system was given 5000 songs chosen from the "uspop", "uscrap" and "cover song" collections, and each returned a 5000x5000 distance matrix. 60 songs were randomly selected as queries, and the 5 most highly ranked songs out of the 5000 were extracted for each query (after filtering out the query itself, results from the same artist, and members of the cover song collection). For each query, the returned results from all participants were then pooled and evaluated by human graders using the Evalutron 6000 system, with each query/candidate pair judged by 3 different graders. Each grader provided one categorical score with 3 categories (NS, SS, VS, as explained below) and one fine score in the range 0 to 10.
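The candidate-extraction step described above can be sketched in a few lines of Python. This is an illustration under stated assumptions, not the actual MIREX code; all function and variable names here are hypothetical.

```python
import numpy as np

def top_candidates(dist, query, artist_of, in_cover_set, n=5):
    """Return indices of the n nearest songs to `query`, skipping the
    query itself, songs by the same artist, and cover-song items.

    dist: full NxN distance matrix as submitted by a system.
    artist_of: maps song index -> artist id.
    in_cover_set: boolean array marking cover-song-collection members.
    (Names are illustrative, not from the MIREX implementation.)"""
    order = np.argsort(dist[query])           # nearest first
    picked = []
    for cand in order:
        if cand == query:
            continue                          # drop the query itself
        if artist_of[cand] == artist_of[query]:
            continue                          # artist filter
        if in_cover_set[cand]:
            continue                          # cover songs excluded
        picked.append(int(cand))
        if len(picked) == n:
            break
    return picked

# Tiny example: 4 songs, two artists, song 3 in the cover set.
dist = np.array([[0.0, 0.2, 0.5, 0.1],
                 [0.2, 0.0, 0.4, 0.6],
                 [0.5, 0.4, 0.0, 0.3],
                 [0.1, 0.6, 0.3, 0.0]])
artist_of = [0, 0, 1, 1]
in_cover_set = np.array([False, False, False, True])
print(top_candidates(dist, 0, artist_of, in_cover_set, n=2))  # [2]
```

With query song 0, its nearest neighbour (song 3) is filtered as a cover-set member and song 1 as a same-artist match, leaving only song 2.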
Summary Data on Human Evaluations (Evalutron 6000)
Number of evaluators = 24
Number of evaluations per query/candidate pair = 3
Number of queries per grader = 7-8
Size of the candidate lists = Maximum 30 (with no overlap)
Number of randomly selected queries = 60
General Legend
Team ID
EP = Elias Pampalk
TP = Tim Pohle
VS = Vitor Soares
LR = Thomas Lidy and Andreas Rauber
KWT = Kris West (Trans)
KWL = Kris West (Likely)
Broad Categories
NS = Not Similar
SS = Somewhat Similar
VS = Very Similar
Calculating Summary Measures
Fine(1) = Sum of fine-grained human similarity decisions (0-10).
PSum(1) = Sum of human broad similarity decisions: NS=0, SS=1, VS=2.
WCsum(1) = 'World Cup' scoring: NS=0, SS=1, VS=3 (rewards Very Similar).
SDsum(1) = 'Stephen Downie' scoring: NS=0, SS=1, VS=4 (strongly rewards Very Similar).
Greater0(1) = NS=0, SS=1, VS=1 (binary relevance judgement).
Greater1(1) = NS=0, SS=0, VS=1 (binary relevance judgement using only Very Similar).
(1) Normalized to the range 0 to 1.
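The summary measures above can be computed directly from the per-candidate judgements. The following is a minimal Python sketch using the weightings defined above; the function and variable names are mine, not taken from the Evalutron output format.

```python
# Weighting schemes for the broad NS/SS/VS judgements, as defined above.
BROAD_WEIGHTS = {
    "PSum":     {"NS": 0, "SS": 1, "VS": 2},
    "WCsum":    {"NS": 0, "SS": 1, "VS": 3},   # 'World Cup' scoring
    "SDsum":    {"NS": 0, "SS": 1, "VS": 4},   # 'Stephen Downie' scoring
    "Greater0": {"NS": 0, "SS": 1, "VS": 1},   # binary relevance
    "Greater1": {"NS": 0, "SS": 0, "VS": 1},   # binary, VS only
}

def summary_measures(broad, fine):
    """broad: list of 'NS'/'SS'/'VS' judgements for a candidate list.
    fine: list of fine scores in 0-10.
    Each measure is normalised to 0-1 by its maximum attainable sum."""
    out = {"Fine": sum(fine) / (10 * len(fine))}
    for name, weights in BROAD_WEIGHTS.items():
        top = max(weights.values())
        out[name] = sum(weights[b] for b in broad) / (top * len(broad))
    return out

scores = summary_measures(["VS", "SS", "NS"], [8.0, 5.0, 1.0])
```

For this toy input, PSum is (2+1+0)/(2*3) = 0.5 and Fine is 14/30.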
Overall Summary Results
(Results table missing: /nema-raid/www/mirex/results/mirex06_as_overalllist.csv)
http://staff.aist.go.jp/elias.pampalk/papers/mirex06/friedman.png
This figure shows the official ranking of the submissions computed using a Friedman test. The blue lines indicate significance boundaries at the p=0.05 level. As can be seen, the differences are not significant. For a more detailed description and discussion see [1].
Audio Music Similarity and Retrieval Runtime Data
(Results table missing: /nema-raid/www/mirex/results/as06_runtime.csv)
For a description of the computers the submissions ran on, see MIREX_2006_Equipment.
Friedman Test with Multiple Comparisons Results (p=0.05)
The Friedman test was run in MATLAB against the Fine summary data over the 60 queries.
Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer','estimate', 'friedman', 'alpha', 0.05);
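The Friedman statistic itself is straightforward to compute from within-query ranks. Below is a minimal NumPy sketch of that statistic, assuming no tied scores; this is not the MATLAB `multcompare` analysis used for the official results, and the data shapes are illustrative.

```python
import numpy as np

def friedman_statistic(scores):
    """scores: (n_queries, k_systems) array of per-query scores
    (e.g. the Fine summary measure for each system on each query).
    Returns the Friedman chi-square statistic. Assumes no ties,
    since the double-argsort ranking does not average tied ranks."""
    n, k = scores.shape
    # Rank systems within each query: 1 = lowest score.
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    mean_ranks = ranks.mean(axis=0)
    return 12 * n / (k * (k + 1)) * ((mean_ranks - (k + 1) / 2) ** 2).sum()

# Degenerate example: 3 queries on which system 3 always wins.
stat = friedman_statistic(np.array([[0.1, 0.5, 0.9]] * 3))
print(stat)  # 6.0
```

A large statistic (compared against a chi-square distribution with k-1 degrees of freedom) indicates that the systems' rankings differ more than chance would predict.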
(Results table missing: /nema-raid/www/mirex/results/AV_sum_friedman.csv)
(Results table missing: /nema-raid/www/mirex/results/AV_fine_result.csv)
Summary Results by Query
(Results table missing: /nema-raid/www/mirex/results/mirex06_as_uberlist.csv)
Raw Scores
The raw data derived from the Evalutron 6000 human evaluations are located on the Audio Music Similarity and Retrieval Raw Data page.
Query Meta Data
(Results table missing: /nema-raid/www/mirex/results/as06_queries.csv)
Results from Automatic Evaluation
(Results table missing: /nema-raid/www/mirex/results/as06_nonhuman_results.csv)
Introduction to automatic evaluation
Automated evaluation of music similarity techniques based on a metadata catalogue has several advantages:
- It does not require costly human 'graders'
- It allows testing of incremental changes in indexing algorithms
- It can achieve complete coverage of the test collection
- It provides a target for machine-learning, feature-selection and optimisation experiments
- It can predict the visualisation performance of an indexing technique
- It can identify indexing 'anomalies' in the indices tested
Automated 'pseudo-objective' evaluation of music similarity estimation techniques was introduced by Logan and Salomon [1] and was shown by Pampalk [2] to be highly correlated with careful human-based evaluations. The results of this contest support the conclusions of Pampalk [2], although further work is required to fully understand the evaluation statistics.
Description of evaluation statistics
The evaluation statistics are:
- Neighbourhood clustering (artist, genre, album): average % of the top N results for each query with the same label as the query
- Artist-filtered genre neighbourhood: average % of the top N results for each query belonging to the same genre label, ignoring matches from the same artist (ensures that results reflect musical rather than audio similarity)
- Mean artist-filtered genre neighbourhood: normalised form of the above statistic that weights each genre equally, penalising lop-sided performance
- Normalised average distance between examples: average distance between examples with the same label; indicates the degree of clustering and the potential for visual organisation of a collection
- Always similar (hubs): largest number of times any single example appears in the top N results for other queries; a result that appears too often will adversely affect perceived performance without affecting the other statistics
- Never similar (orphans): % of examples that never appear in a top N result list and therefore cannot be retrieved by search
- Triangular inequality (metric space): indicates whether the function produces a metric distance space, and therefore which visualisation techniques may be applied to it
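Several of the statistics above can be sketched in a few lines given a full distance matrix. The following Python is an illustration only (the names and the n-neighbour convention are my assumptions, not the MIREX evaluation code): it computes the artist-filtered genre neighbourhood, the hub count, the orphan percentage, and a brute-force triangle-inequality check.

```python
import numpy as np

def automatic_stats(dist, genre, artist, n=5):
    """Sketch of three automatic statistics from a full distance matrix.
    Returns (artist-filtered genre %, hub count, orphan %).
    Assumes every query has at least n valid neighbours."""
    size = len(dist)
    appear = np.zeros(size, dtype=int)        # retrieval count per song
    genre_hits = 0
    for q in range(size):
        # Top-n neighbours, skipping the query and same-artist matches.
        top = [c for c in np.argsort(dist[q])
               if c != q and artist[c] != artist[q]][:n]
        for c in top:
            appear[c] += 1
            genre_hits += int(genre[c] == genre[q])
    pct_genre = 100.0 * genre_hits / (size * n)
    hubs = int(appear.max())                       # 'always similar'
    orphans = 100.0 * float((appear == 0).mean())  # 'never similar'
    return pct_genre, hubs, orphans

def violates_triangle_inequality(dist):
    """True if any triple breaks d(a,c) <= d(a,b) + d(b,c)."""
    d = np.asarray(dist, dtype=float)
    idx = range(len(d))
    return any(d[a, c] > d[a, b] + d[b, c] + 1e-12
               for a in idx for b in idx for c in idx)
```

The triangle check is O(N^3) and only practical for small collections; for a 5000x5000 matrix one would sample triples instead.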
Other Results from Automatic Evaluation
See Audio Music Similarity and Retrieval Other Automatic Evaluation Results page.
References
1. Logan and Salomon (ICME 2001), A Music Similarity Function Based On Signal Analysis (http://gatekeeper.research.compaq.com/pub/compaq/CRL/publications/logan/icme2001_logan.pdf). One of the first papers on this topic. Reports a small-scale listening test (2 users) in which the users rated items in a playlist as similar or not similar to the query song. Automatic evaluation is also reported: the percentage of the top 5, 10 and 20 most similar songs in the same genre/artist/album as the query.
2. E. Pampalk, Computational Models of Music Similarity and their Application in Music Information Retrieval (http://www.ofai.at/~elias.pampalk/publications/pampalk06thesis.pdf). PhD thesis, Vienna University of Technology, Austria, March 2006.