2017:Automatic Lyrics-to-Audio Alignment

Description

The end goal of automatic lyrics-to-audio alignment is to synchronize an audio recording of singing with its corresponding written lyrics. The start and end timestamps of lyrics units can be estimated at different levels of granularity: phonemes, words, lyric lines, and phrases.
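As a concrete illustration, a word-level alignment can be represented as a list of lyrics units, each carrying its start and end timestamp in seconds. The Python sketch below is a hypothetical representation for illustration only; the class name, the example lyrics, and the timestamps are not part of any prescribed MIREX format.

 from dataclasses import dataclass
 
 @dataclass
 class AlignedUnit:
     """One aligned lyrics unit (phoneme, word, line, or phrase)."""
     text: str      # the lyrics unit itself, e.g. a single word
     start: float   # start time in seconds within the audio recording
     end: float     # end time in seconds within the audio recording
 
 # Hypothetical word-level alignment for the opening of a song
 alignment = [
     AlignedUnit("twinkle", 12.40, 12.86),
     AlignedUnit("twinkle", 12.90, 13.38),
     AlignedUnit("little",  13.42, 13.79),
     AlignedUnit("star",    13.83, 14.70),
 ]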


Task-specific mailing list

Data

The evaluation dataset contains 11 popular-music songs with timestamp annotations for words and sentences. The audio comes in two versions: the original recording with instrumental accompaniment and an a cappella version containing the singing voice only.

You can read in detail about how the dataset was made in Recognition of Phonemes in A-cappella Recordings using Temporal Patterns and Mel Frequency Cepstral Coefficients (http://smcnetwork.org/system/files/smc2012-198.pdf). The dataset has been kindly provided by Jens Kofod Hansen.
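To illustrate how such word-level timestamp annotations might be used, the sketch below compares a predicted alignment against reference word start times using a simple mean absolute deviation in seconds. Both the metric and the (word, start time) pairing are assumptions made for this example only; they are not the official MIREX evaluation procedure.

 # Hypothetical comparison of predicted vs. reference word start times.
 # Both inputs are lists of (word, start_seconds) tuples; pairing by index
 # assumes both lists cover the same words in the same order.
 
 def mean_absolute_alignment_error(reference, predicted):
     """Average absolute deviation (seconds) between reference and
     predicted word start times."""
     assert len(reference) == len(predicted), "alignments must cover the same words"
     deviations = [abs(r_start - p_start)
                   for (_, r_start), (_, p_start) in zip(reference, predicted)]
     return sum(deviations) / len(deviations)
 
 # Example usage with made-up timestamps
 reference = [("twinkle", 12.40), ("twinkle", 12.90), ("little", 13.42)]
 predicted = [("twinkle", 12.55), ("twinkle", 12.84), ("little", 13.60)]
 print(mean_absolute_alignment_error(reference, predicted))  # 0.13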

Evaluation

Submission Format

Audio Format

Command line calling format

I/O format

Packaging submissions

Time and hardware limits

Submission opening date

Submission closing date

Potential Participants