2005:Audio Artist

Description

Automatic identification of the performing artist from musical audio.

1) Input data

The input for this task is a set of sound file excerpts adhering to the audio format, content, and metadata requirements described below.

Audio format:

  • CD-quality (Wave, 16-bit, 44100 Hz or 22050 Hz, Mono or Stereo)
  • Whole files are provided; algorithms may use segments at the authors' discretion (a loading sketch follows this list)
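
Submissions will need to read these excerpts from disk. The following is a minimal loading sketch using Python's standard wave and array modules, assuming 16-bit PCM input as specified above; the function name and return convention are illustrative, not part of the task definition:

    import array
    import wave

    def load_wav(path):
        """Load a 16-bit PCM WAV file; return interleaved samples and the rate."""
        with wave.open(path, "rb") as wf:
            n_channels = wf.getnchannels()           # 1 (mono) or 2 (stereo)
            rate = wf.getframerate()                 # 44100 or 22050 Hz per the spec
            frames = wf.readframes(wf.getnframes())  # raw PCM bytes
        # "h" = signed 16-bit; assumes a little-endian host, since WAV data
        # is little-endian
        samples = array.array("h", frames)
        return samples, rate, n_channels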

Audio content:

  • 3 databases: Epitonic, Magnatune, and USPOP2002
  • The data set should include at least 75 different artists or groups, working in any genre
  • Both live performances and sequenced music are eligible
  • Each artist should be represented by a minimum of 10 examples.
  • A cross-album component should be enforced for the actual contest (training and test examples for an artist drawn from different albums), so that systems identify the artist rather than the production characteristics of a single album
  • A tuning database will NOT be provided. However, the RWC Magnatune database used for the 2004 Audio Description contest is still available (Training part 1 [1], Training part 2 [2])

Metadata:

  • By definition, each example must have an artist or group label corresponding to one of the output classes.
  • Artist labels are assumed to be correct.
  • A genre label may also be supplied.
  • The training set should be defined by a text file with one entry per line, in the following format, where the angle brackets are for clarity only and should be omitted (a parsing sketch follows this list):
    <example path and filename>\t<artist label>\t<genre label>\n
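
A minimal sketch of reading this training list in Python; the function name is illustrative, and the genre field is treated as optional since it "may also be supplied":

    def read_training_list(path):
        """Parse one tab-separated entry per line: filename, artist, optional genre."""
        examples = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                if not line.strip():
                    continue                        # skip blank lines
                fields = line.rstrip("\n").split("\t")
                genre = fields[2] if len(fields) > 2 else None  # genre may be absent
                examples.append((fields[0], fields[1], genre))
        return examples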

2) Output results

  • Results should be output to a text file with one entry per line, in the following format (a writing sketch follows):
    <example path and filename>\t<artist classification>\n
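
A matching sketch for writing this results file (names are illustrative):

    def write_results(path, predictions):
        """Write (filename, predicted artist) pairs, one tab-separated pair per line."""
        with open(path, "w", encoding="utf-8") as f:
            for filename, artist in predictions:
                f.write(f"{filename}\t{artist}\n")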

3) Maximum running time

  • The maximum running time for a single iteration of a submitted algorithm will be 24 hours, allowing a maximum of 72 hours for 3-fold cross-validation (a fold-splitting sketch follows)
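
Three-fold cross-validation means each submission is trained and tested three times on complementary partitions of the data, which is why the 72-hour budget is three iterations of the 24-hour limit. One simple way to build such folds is to deal each artist's examples round-robin across the three partitions; the sketch below is illustrative only, ignores the cross-album constraint discussed above, and is not the official contest split:

    import random
    from collections import defaultdict

    def three_fold_split(examples, seed=0):
        """examples: (filename, artist, genre) tuples; returns three disjoint folds."""
        by_artist = defaultdict(list)
        for ex in examples:
            by_artist[ex[1]].append(ex)      # group examples by artist label
        rng = random.Random(seed)
        folds = [[], [], []]
        for files in by_artist.values():
            rng.shuffle(files)
            for i, ex in enumerate(files):   # deal round-robin into the 3 folds
                folds[i % 3].append(ex)
        return folds

Each fold then serves once as the test set while the remaining two folds form the training set.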