2011:Audio Music Similarity and Retrieval with Web access

Revision as of 10:09, 25 August 2010 by Bfields (talk | contribs) (initial write up)

Description

As the size of digital music collections grows, music similarity plays an increasingly important role as an aid to music discovery. A music similarity system can help a music consumer find new music by retrieving the tracks most musically similar to specific query songs (or nearest to songs that the consumer already likes). As more information about music is put on the web, the web becomes a growing resource for understanding similarities between pieces of music that are not derived from the audio content itself, including but not limited to popularity, social links and cultural data. The web offers a readily accessible way to find this information, though it comes with its own set of problems. Many websites also offer APIs that can provide useful information to assist in forming a similarity judgement, in a way analogous to a shared library; however, these tools cannot be used (or compared) without access to the web.

This page presents the Audio Music Similarity Evaluation with Web Access, including the submission rules and formats. Background information can also be found here that should help explain some of the reasoning behind the approach taken in the evaluation. The intention of this track is to evaluate music similarity search (a music search engine that takes a single song as a query, also known as query-by-example), not playlist generation or music recommendation.

The basic idea with this task is to run it in parallel with the standard Audio Music Similarity and Retrieval task (hereafter referred to as AMS). The queries (and by extension the evaluation) will be the same; algorithms will simply be able to communicate with the web as one of the ways to determine similarity.


The Audio Music Similarity and Retrieval task has been run in MIREX 2010, 2009, 2007, and 2006.

  1. Audio Music Similarity and Retrieval task in MIREX 2010 (Results)
  2. Audio Music Similarity and Retrieval task in MIREX 2009 (Results)
  3. Audio Music Similarity and Retrieval task in MIREX 2007 (Results)
  4. Audio Music Similarity and Retrieval task in MIREX 2006 (Results)


Issues To Be Resolved

In splitting the task off from the vanilla AMS to allow web access for algorithms, some new issues are raised. Solutions to these issues will need to be agreed upon by participants and IMIRSEL before the task is run. If there are any other issues that need to be resolved, please feel free to add them below to facilitate discussion.

Track Labels

Given that most useful web data is about named artists or tracks, a label for the audio data will be needed.

Two possible solutions exist here:

  1. the dataset is given exactly as it is for the standard local AMS task, and it is left to the individual algorithms to determine labels (artist, title, MBzID, etc.) via some sort of fingerprinter
  2. metadata is provided (what metadata? Artist name and track title? Some kind of unique ID?)

Going with (1) has the advantage of being more directly comparable to the original AMS task, since the task is basically still the same (blind, audio-only similarity); however, it effectively adds a second task of audio fingerprinting as a preprocessing step. Alternatively, providing label data is more in line with a real-world problem, though it represents a considerable departure from the original AMS task.
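To make the fingerprinting preprocess in option (1) concrete, here is a toy sketch. Everything in it is invented for illustration: `toy_fingerprint` (a hash of coarsely quantised samples) and the in-memory `REFERENCE_INDEX` are stand-ins for a robust audio fingerprinter and a remote lookup service, which is what a real submission would need.

```python
import hashlib

def toy_fingerprint(samples):
    """Hash a coarsely quantised version of the samples, so that small
    amplitude differences still map to the same fingerprint."""
    quantised = bytes((s // 8) % 256 for s in samples)
    return hashlib.sha1(quantised).hexdigest()

# Hypothetical reference index: fingerprint -> (artist, title).
REFERENCE_INDEX = {}

def register(samples, artist, title):
    """Add a known, labelled track to the reference index."""
    REFERENCE_INDEX[toy_fingerprint(samples)] = (artist, title)

def identify(samples):
    """Return (artist, title) for an unlabelled track, or None if unknown.
    This is the label-recovery step an algorithm would run before it can
    query the web for artist- or track-level data."""
    return REFERENCE_INDEX.get(toy_fingerprint(samples))
```

The quantisation step gives a small robustness margin against amplitude jitter; real fingerprinters instead derive features from the spectrum so that matching survives encoding and level changes.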

How Much Web

How much web access will be allowed in the task is an open question. A starting point is to allow any data available via a public, non-authenticated HTTP request over port 80 (basically the open public web). Alternatively, this could be narrowed to an agreed-upon whitelist of base domains or allowable services. There will also almost certainly need to be a ban on uploading the raw, unprocessed audio content to third-party sites, for both copyright and bandwidth reasons.
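The whitelist variant could be enforced with a check like the following sketch. The domains in `ALLOWED_DOMAINS` are purely illustrative; the actual list would be agreed by participants before the task runs.

```python
from urllib.parse import urlsplit

# Illustrative whitelist of base domains; the real list is to be agreed.
ALLOWED_DOMAINS = {"musicbrainz.org", "last.fm"}

def is_request_allowed(url):
    """Return True only for plain HTTP requests on port 80 to a
    whitelisted base domain (or one of its subdomains)."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return False  # no https, ftp, etc.
    if parts.port not in (None, 80):
        return False  # non-standard ports are out
    host = (parts.hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
```

A check of this shape could sit in an HTTP proxy in front of the submitted algorithms, so enforcement would not depend on each submission policing itself.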

Runtime Limits

This is basically up to IMIRSEL to set, but it needs to be settled quickly, as it will determine how much crawling can be done. 72 hours was allowed for last year's AMS task.

Submission Requirements

Given the nature of the task, stricter disclosure of some kind will be required of all submitted algorithms. One option is to require that all code run locally be published (an OSS licence is preferred but not required) along with the standard abstract. Rather than preventing 'get the answer from some website' submissions outright, this simply requires that authors admit that is what they are doing. Fully disclosing the algorithm might be enough as well, though given the nature of the task, binary-only submissions present particularly difficult problems.


Evaluation

Evaluation will most likely be the same as for AMS, via the Evalutron.

Participant Interest List

Please include your name, institution and contact details.

  1. Ben Fields, Goldsmiths University of London, b (dot) fields (at) gold (dot) ac (dot) uk