2006:Audio Music Similarity and Retrieval
Overview
This page is devoted to discussions of the evaluation of Audio Music Similarity algorithms at MIREX 2006. Discussions have already begun on the MIREX 06 "AudioSim06" contest planning list and will be briefly digested here. A full digest of the discussions is available to subscribers from the MIREX 06 "AudioSim06" contest planning list archives.
As consensus is achieved on the planning list, a full proposal (Audio Music Similarity proposal) will be produced for the format of the evaluation, including pseudocode for the evaluation metric and suggested formats for submitted algorithms. A skeleton of the proposal is already available on the Audio Music Similarity proposal page.
Moderators
- Kris West (University of East Anglia, UK) - kw@cmp.uea.ac.uk
- Elias Pampalk (Austrian Research Institute for Artificial Intelligence (OFAI)) - elias.pampalk@gmail.com
- Paul Lamere (Sun Microsystems Laboratories, USA) - paul.lamere@sun.com
Introduction
Although automatically extracting genre and artist labels from audio is an interesting task in its own right, I (KW) believe such tasks are often used to evaluate more general music similarity techniques that compare two songs based on their audio content. These techniques are hard to evaluate directly, for example with listening tests, as it is not practical to have a human listener rank the similarities of even a small test collection for a number of queries, which might require many hours of listening. Therefore, we have begun discussing other methods of evaluating music similarity techniques, such as the method described by Logan & Salomon (A Music Similarity Function Based on Signal Analysis, ICME 2001), where the 5, 10 or 20 most similar songs are retrieved for each query and the average number of songs in the same genre, by the same artist and from the same album is calculated. This evaluation could be extended to multiple genres if data is available. I believe it is also important that we evaluate other characteristics of these algorithms, such as descriptor extraction time, query time and memory footprint (which may indicate the applicability of a technique to a particular application).
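The full proposal will contain pseudocode for the agreed evaluation metric; in the meantime, the following is a minimal sketch of the kind of statistic described above, assuming a precomputed song-to-song distance matrix and per-song genre, artist and album labels. The function name and data layout are illustrative assumptions, not part of any agreed specification.

 import numpy as np
 
 def topn_agreement(dist, genres, artists, albums, n=5):
     """Illustrative top-N statistic in the spirit of Logan & Salomon (ICME 2001):
     for each query song, retrieve its n nearest neighbours and report the average
     fraction that shares the query's genre, artist and album label."""
     num_songs = dist.shape[0]
     totals = {"genre": 0.0, "artist": 0.0, "album": 0.0}
     labels = {"genre": genres, "artist": artists, "album": albums}
     for q in range(num_songs):
         # sort all songs by distance to the query and drop the query itself
         neighbours = [i for i in np.argsort(dist[q]) if i != q][:n]
         for key, lab in labels.items():
             totals[key] += np.mean([lab[i] == lab[q] for i in neighbours])
     # average the per-query fractions over all queries
     return {key: total / num_songs for key, total in totals.items()}

An artist filter (excluding the query artist's own tracks from the retrieved list) is often added so that artist and album effects do not inflate the genre score; that refinement is omitted here for brevity.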
Important threads on the discussion list
Types of evaluation
Paul Lamere
There have been a number of papers describing similarity evaluation, including those by Whitman, Berenzweig, Ellis and Logan. The methods used generally fall into the following buckets:
- Subjective precision via user tests
- Expert opinion (similar artist lists from music editors like All Music Guide)
- Playlist Co-occurrence
- User Collection Co-occurrence
- Objective statistics based upon album, artist and genre labels (Top-N, average distance)
For a standard, annual evaluation like MIREX, the first four types of evaluation seem problematic.
1 - Subjective precision - it is very expensive to collect this data for a large music collection, and the results would likely be unreliable unless many users were evaluated.
2 - Expert opinion - expert opinion will usually rate the similarity of artists but not of songs. The judgements are also not symmetric: Coldplay may sound like the Beatles, but no one ever says the Beatles sound like Coldplay. In addition, this data generally only exists for popular artists (i.e. not for the artists typically found in 'free' collections of music).
3, 4 - Playlist co-occurrence and user collection co-occurrence - these work for popular music, but usually do not provide enough coverage for less popular music, and are generally not suitable for our test collections (such as Magnatune or Epitonic), since this music is not listened to by enough people.
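To make the coverage problem concrete, here is a minimal sketch of how playlist co-occurrence similarity is typically estimated; the function and data layout are illustrative assumptions, not something taken from the discussion above. Over a niche catalogue most track pairs never appear in the same playlist, so the counts stay at zero and the resulting similarities are uninformative.

 from collections import Counter
 from itertools import combinations
 
 def cooccurrence_similarity(playlists):
     """Illustrative playlist co-occurrence estimate: count how often two tracks
     appear in the same playlist and normalise by track popularity. Pairs that
     never co-occur (the common case for niche catalogues) get no entry, i.e.
     an implicit similarity of zero."""
     pair_counts = Counter()
     track_counts = Counter()
     for playlist in playlists:
         tracks = set(playlist)
         track_counts.update(tracks)
         for a, b in combinations(sorted(tracks), 2):
             pair_counts[(a, b)] += 1
     similarity = {}
     for (a, b), count in pair_counts.items():
         # simple normalisation; many other weightings are possible
         similarity[(a, b)] = count / min(track_counts[a], track_counts[b])
     return similarity

A few thousand playlists over a popular catalogue already yield useful counts, whereas the same procedure run over Magnatune- or Epitonic-style listening data leaves most pairs unseen.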
Opt-in survey of Audio music similarity researchers
In this section we would like to take a brief 'opt-in' survey of researchers actively working in this field. Please feel free to add yourself to the list (or email your details to the moderators listed above).
- Kris West (University of East Anglia, UK) - homepage publications
- Elias Pampalk (Austrian Research Institute for Artificial Intelligence (OFAI)) - homepage publications