<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Bfields</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Bfields"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Bfields"/>
	<updated>2026-05-08T03:59:29Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=MIREX_Impact_Citations&amp;diff=7889</id>
		<title>MIREX Impact Citations</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=MIREX_Impact_Citations&amp;diff=7889"/>
		<updated>2011-05-12T08:16:55Z</updated>

		<summary type="html">&lt;p&gt;Bfields: /* Submitted Citations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
Dear MIREX Participant:&lt;br /&gt;
&lt;br /&gt;
We are gathering evidence of impact for MIREX. We would appreciate it greatly if you would post citations to papers (yours or others') that have used MIREX data and/or results. Any standard citation format is fine. DOIs or URLs to accessible copies are especially appreciated. I have started out with some samples from IMIRSEL.&lt;br /&gt;
&lt;br /&gt;
In a similar vein, we are also gathering citations to MIREX-related comments as further evidence of impact for MIREX on the [[MIREX Impact Statements]] page.&lt;br /&gt;
&lt;br /&gt;
We are collecting these statements and citations as evidence of influence and success to submit to funding agencies, research administrators and/or future collaborators. &lt;br /&gt;
&lt;br /&gt;
Thank you very much.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Submitted Citations=&lt;br /&gt;
&lt;br /&gt;
# J.S. Downie, D. Byrd, and T. Crawford. &amp;quot;Ten Years of ISMIR: Reflections on Challenges and Opportunities&amp;quot;, in the 10th International Symposium on Music Information Retrieval (ISMIR), Kobe, 2009, pp. 13-18. Available: http://ismir2009.ismir.net/proceedings/keynote1.pdf&lt;br /&gt;
# M. Bay, A.F. Ehmann and J.S. Downie, &amp;quot;Evaluation of Multiple-F0 Estimation and Tracking Systems&amp;quot;, in the 10th International Symposium on Music Information Retrieval (ISMIR), Kobe, 2009, pp. 315-320. Available: http://ismir2009.ismir.net/proceedings/PS2-21.pdf &lt;br /&gt;
# E. Law, K. West, M. Mandel, M. Bay and J. S. Downie, &amp;quot;Evaluation of Algorithms Using Games: The Case of Music Tagging&amp;quot;, in the 10th International Symposium on Music Information Retrieval (ISMIR), Kobe, 2009, pp. 387-392. Available: http://ismir2009.ismir.net/proceedings/OS5-5.pdf&lt;br /&gt;
# J. S. Downie, M. Bay, A. F. Ehmann and M. C. Jones, &amp;quot;Audio Cover Song Identification: MIREX 2006-2007 Results and analysis&amp;quot;, in the 9th International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, 2008, pp. 468-473. Available: http://ismir2008.ismir.net/papers/ISMIR2008_265.pdf&lt;br /&gt;
# X. Hu, J. S. Downie, C. Laurier, M. Bay and A. F. Ehmann, &amp;quot;The 2007 MIREX Audio Mood Classification Task: Lessons learned&amp;quot;, in the 9th International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, 2008, pp. 462-467. Available: http://ismir2008.ismir.net/papers/ISMIR2008_263.pdf&lt;br /&gt;
# J. S. Downie, A. F. Ehmann and J. H. Lee, &amp;quot;The Music Information Retrieval Evaluation eXchange (MIREX): Community-led formal evaluations&amp;quot;, in the Digital Humanities 2008 , Oulu Finland, 2008, pp. 239-240.&lt;br /&gt;
# M. McVicar and T. De Bie, &amp;quot;Exploiting Online Resources to Improve Chord Recognition Accuracy&amp;quot;, late-breaking abstract in the 11th International Conference on Music Information Retrieval (ISMIR 2010), Utrecht, Netherlands, 2010.&lt;br /&gt;
# M. McVicar and T. De Bie, &amp;quot;Enhancing chord recognition accuracy using web resources&amp;quot;, 3rd ACM Workshop on Machine Learning and Music, October 25, 2010, Firenze, Italy.&lt;br /&gt;
# M. McVicar, Y. Ni, R. Santos-Rodriguez, T. De Bie, &amp;quot;Using online chord databases to enhance chord recognition&amp;quot;, Journal of New Music Research, Special issue on Music and Machine Learning (in press).&lt;br /&gt;
# R. Zhou, M. Mattavelli and G. Zoia, &amp;quot;Music Onset Detection Based on Resonator Time-frequency Image&amp;quot;, IEEE Transactions on Audio, Speech and Language Processing, vol. 16, no. 8, 2008, pp. 1685-1695. &lt;br /&gt;
# R. Zhou, J. D. Reiss, M. Mattavelli and G. Zoia, &amp;quot;A Computationally Efficient Method for Polyphonic Pitch Estimation&amp;quot;, EURASIP Journal on Advances in Signal Processing, vol. 2009 (2009), Article ID 729494, 11 pages.&lt;br /&gt;
# M. Haro and P. Herrera, “From Low-level to Song-level Percussion Descriptors of Polyphonic Music,” in 10th International Society for Music Information Retrieval Conference (ISMIR 2009), Kobe, Japan, 2009. Available: http://ismir2009.ismir.net/proceedings/PS2-9.pdf&lt;br /&gt;
# J. Urbano, J. Morato, M. Marrero and D. Martín, &amp;quot;Crowdsourcing Preference Judgments for Evaluation of Music Similarity Tasks&amp;quot;, ACM SIGIR Workshop on Crowdsourcing for Search Evaluation, pp. 9-16, 2010. [http://julian-urbano.info/publications/012-crowdsourcing-preference-judgments-evaluation-music-similarity-tasks/012-paper.pdf PDF]&lt;br /&gt;
# J. Urbano, M. Marrero, D. Martín and J. Lloréns, &amp;quot;Improving the Generation of Ground Truths based on Partially Ordered Lists&amp;quot;, International Society for Music Information Retrieval Conference, pp. 285-290, 2010. [http://julian-urbano.info/publications/010-improving-generation-ground-truths-based-partially-ordered-lists/010-paper.pdf PDF]&lt;br /&gt;
# J. Urbano, J. Lloréns, J. Morato and S. Sánchez-Cuadrado, &amp;quot;Using the Shape of Music to Compute the Similarity between Symbolic Musical Pieces&amp;quot;, International Symposium on Computer Music Modeling and Retrieval, pp. 385-396, 2010. [http://julian-urbano.info/publications/009-using-shape-music-compute-similarity-between-symbolic-musical-pieces/009-paper.pdf PDF]&lt;br /&gt;
# D. Bogdanov, J. Serrà, N. Wack, P. Herrera and X. Serra. Unifying low-level and high-level music similarity measures. IEEE Transactions on Multimedia. In press. [http://mtg.upf.edu/node/1689]&lt;br /&gt;
# J. Wu, E. Vincent, S. Raczynski, T. Nishimoto, N. Ono and S. Sagayama. Multipitch estimation by joint modeling of harmonic and transient sounds. In Proc. 2011 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), to appear. [http://hal.inria.fr/inria-00567175/PDF/wu_ICASSP11.pdf]&lt;br /&gt;
#  E. Vincent, N. Bertin and R. Badeau. Adaptive harmonic spectral decomposition for multiple pitch estimation. IEEE Transactions on Audio, Speech and Language Processing, 18(3), pp. 528-537, 2010. [http://hal.inria.fr/inria-00544094/PDF/vincent_TASLP10.pdf]&lt;br /&gt;
# E. Benetos and S. Dixon. Multiple-instrument polyphonic music transcription using a convolutive probabilistic model. In Proc. 8th Sound and Music Computing Conf. (SMC), to appear.&lt;br /&gt;
# B. Fields, K. Jacobson, C. Rhodes, M. d'Inverno, M. Sandler, M. Casey, &amp;quot;Analysis and Exploitation of Musician Social Networks for Recommendation and Discovery,&amp;quot; IEEE Transactions on Multimedia, in press. [http://dx.doi.org/10.1109/TMM.2011.2111365]&lt;br /&gt;
# B. Fields. Contextualize Your Listening: The Playlist as Recommendation Engine, PhD Thesis, Goldsmiths, University of London, April 2011. [http://benfields.net/bfields_thesis.pdf]&lt;br /&gt;
# B. Fields, K. Page, T. Crawford, D. De Roure, &amp;quot;The Segment Ontology: bridging music-generic and domain-specific,&amp;quot; in Workshop on Advances in Music Information Research (AdMIRe), co-located with the IEEE International Conference on Multimedia and Expo (ICME), Barcelona, Spain, July, 2011. [http://benfields.net/papers/admire2011.pdf]&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=Audio_Music_Similarity_and_Retrieval&amp;diff=7797</id>
		<title>Audio Music Similarity and Retrieval</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=Audio_Music_Similarity_and_Retrieval&amp;diff=7797"/>
		<updated>2010-11-20T16:59:56Z</updated>

		<summary type="html">&lt;p&gt;Bfields: /* Participation in previous years */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
 &lt;br /&gt;
As the size of digital music collections grows,&lt;br /&gt;
music similarity has an increasingly important role as an aid to&lt;br /&gt;
music discovery. A music similarity system can help a music consumer&lt;br /&gt;
find new music by finding the music that is most musically similar to&lt;br /&gt;
specific query songs (or is nearest to songs that the consumer&lt;br /&gt;
already likes). &lt;br /&gt;
 &lt;br /&gt;
This page presents the Audio Music Similarity&lt;br /&gt;
Evaluation, including the submission rules and formats. Additionally,&lt;br /&gt;
background information can be found here that should help explain&lt;br /&gt;
some of the reasoning behind the approach taken in the evaluation.&lt;br /&gt;
The intention of the Music Audio Search track is to evaluate music&lt;br /&gt;
similarity searches (a music search engine that takes a single song&lt;br /&gt;
as a query, a.k.a. query-by-example), not playlist generation or music&lt;br /&gt;
recommendation. &lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
== Participation in previous years ==&lt;br /&gt;
                           &lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
'''Year'''&lt;br /&gt;
| &lt;br /&gt;
'''Participating Algorithms ''' &lt;br /&gt;
| &lt;br /&gt;
'''URL'''&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
2010&lt;br /&gt;
| &lt;br /&gt;
8&lt;br /&gt;
| &lt;br /&gt;
https://www.music-ir.org/mirex/wiki/2010:Audio_Music_Similarity_and_Retrieval_Results&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
2009&lt;br /&gt;
| &lt;br /&gt;
15&lt;br /&gt;
| &lt;br /&gt;
https://www.music-ir.org/mirex/wiki/2009:Audio_Music_Similarity_and_Retrieval_Results&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
2007&lt;br /&gt;
| &lt;br /&gt;
12&lt;br /&gt;
| &lt;br /&gt;
https://www.music-ir.org/mirex/wiki/2007:Audio_Music_Similarity_and_Retrieval_Results&lt;br /&gt;
|-&lt;br /&gt;
| &lt;br /&gt;
2006&lt;br /&gt;
| &lt;br /&gt;
6&lt;br /&gt;
| &lt;br /&gt;
https://www.music-ir.org/mirex/wiki/2006:Audio_Music_Similarity_and_Retrieval_Results&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Audio_Music_Similarity_and_Retrieval_Results&amp;diff=7788</id>
		<title>2010:Audio Music Similarity and Retrieval Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Audio_Music_Similarity_and_Retrieval_Results&amp;diff=7788"/>
		<updated>2010-09-15T13:47:56Z</updated>

		<summary type="html">&lt;p&gt;Bfields: /* Team ID */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2010 running of the Audio Music Similarity and Retrieval task set. For background information about this task set please refer to the Audio Music Similarity and Retrieval page.&lt;br /&gt;
&lt;br /&gt;
Each system was given 7000 songs chosen from IMIRSEL's &amp;quot;uspop&amp;quot;, &amp;quot;uscrap&amp;quot;, &amp;quot;american&amp;quot;, &amp;quot;classical&amp;quot; and &amp;quot;sundry&amp;quot; collections. Each system then returned a 7000x7000 distance matrix. 100 songs were randomly selected from the 10 genre groups (10 per genre) as queries, and the 5 most highly ranked songs out of the 7000 were extracted for each query (after filtering out the query itself; returned results from the same artist were also omitted). Then, for each query, the returned results (candidates) from all participants were grouped and evaluated by human graders using the Evalutron 6000 grading system. Each individual query/candidate set was evaluated by a single grader. For each query/candidate pair, graders provided two scores: one categorical '''BROAD''' score with 3 categories (NS, SS, VS, as explained below) and one '''FINE''' score (in the range from 0 to 100). A description and analysis are provided below.&lt;br /&gt;
&lt;br /&gt;
The systems read in 30 second audio clips as their raw data. The same 30 second clips were used in the grading stage. &lt;br /&gt;
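The candidate-extraction step described above can be sketched as follows (an illustrative sketch only, not IMIRSEL's actual code; the artist_of lookup from track index to artist name is an assumption):

```python
def top_candidates(dist_row, query_idx, artist_of, n_results=5):
    """Return the indices of the n_results nearest songs to the query,
    skipping the query itself and any song by the query's artist
    (the artist filter described above)."""
    # Rank all tracks by their distance to the query (smallest first).
    ranked = sorted(range(len(dist_row)), key=dist_row.__getitem__)
    out = []
    for idx in ranked:
        if idx == query_idx or artist_of[idx] == artist_of[query_idx]:
            continue  # filter the query itself and same-artist results
        out.append(idx)
        if len(out) == n_results:
            break
    return out
```

Here dist_row would be one row of a system's 7000x7000 distance matrix.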
&lt;br /&gt;
&lt;br /&gt;
=== General Legend ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Team ID ====&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
	|- style=&amp;quot;background: yellow;&amp;quot;&lt;br /&gt;
	! width=&amp;quot;80&amp;quot; | Sub code &lt;br /&gt;
	! width=&amp;quot;200&amp;quot; | Submission name &lt;br /&gt;
	! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract &lt;br /&gt;
	! width=&amp;quot;440&amp;quot; | Contributors&lt;br /&gt;
	|-&lt;br /&gt;
	! BWL1&lt;br /&gt;
	| MTG-AMS ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/BWL1.pdf PDF] || [http://mtg.upf.edu Dmitry Bogdanov], [http://mtg.upf.edu Joan Serrà], [http://mtg.upf.edu Nicolas Wack], [http://mtg.upf.edu Perfecto Herrera]&lt;br /&gt;
	|-&lt;br /&gt;
	! PS1&lt;br /&gt;
	| PS09 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2009/PS.pdf PDF] || [http://www.cp.jku.at/ Tim Pohle], [http://www.cp.jku.at/ Dominik Schnitzer]&lt;br /&gt;
	|-&lt;br /&gt;
	! PSS1&lt;br /&gt;
	| PSS10 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/PSS1.pdf PDF] || [http://www.cp.jku.at/ Tim Pohle], [http://www.cp.jku.at Klaus Seyerlehner], [http://www.cp.jku.at/ Dominik Schnitzer]&lt;br /&gt;
	|-&lt;br /&gt;
	! RZ1&lt;br /&gt;
	| RND ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/RZ1.pdf PDF] || [http://www.cp.jku.at Rainer Zufall]&lt;br /&gt;
	|-&lt;br /&gt;
	! SSPK2&lt;br /&gt;
	| cbmr_sim ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/SSPK2.pdf PDF] || [http://www.cp.jku.at Klaus Seyerlehner], [http://www.cp.jku.at Markus Schedl], [http://www.cp.jku.at Tim Pohle], [http://www.cp.jku.at Peter Knees]&lt;br /&gt;
	|-&lt;br /&gt;
	! TLN1&lt;br /&gt;
	| MarsyasSimilarity ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/TNL1.pdf PDF] || [http://www.cs.uvic.ca/~gtzan George Tzanetakis], [http://sness.net Steven Ness], [http://recherche.ircam.fr/equipes/analyse-synthese/home.html Mathieu Lagrange]&lt;br /&gt;
	|-&lt;br /&gt;
	! TLN2&lt;br /&gt;
	| Post-Processing 1 of Marsyas similarity results ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/TLN1.pdf PDF] || [http://www.cs.uvic.ca/~gtzan George Tzanetakis], [http://recherche.ircam.fr/equipes/analyse-synthese/home.html Mathieu Lagrange], [http://sness.net Steven Ness]&lt;br /&gt;
	|-&lt;br /&gt;
	! TLN3&lt;br /&gt;
	| Post-Processing 2 of Marsyas similarity results ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2010/TLN2.pdf PDF] || [http://www.cs.uvic.ca/~gtzan George Tzanetakis], [http://recherche.ircam.fr/equipes/analyse-synthese/home.html Mathieu Lagrange], [http://sness.net Steven Ness]&lt;br /&gt;
	|}&lt;br /&gt;
&lt;br /&gt;
====Broad Categories====&lt;br /&gt;
'''NS''' = Not Similar&amp;lt;br /&amp;gt;&lt;br /&gt;
'''SS''' = Somewhat Similar&amp;lt;br /&amp;gt;&lt;br /&gt;
'''VS''' = Very Similar&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=====Understanding Summary Measures=====&lt;br /&gt;
'''Fine''' = ranges from 0 (failure) to 100 (perfection). &amp;lt;br /&amp;gt;&lt;br /&gt;
'''Broad''' = ranges from 0 (failure) to 2 (perfection), as each query/candidate pair is scored with either NS=0, SS=1 or VS=2. &amp;lt;br /&amp;gt;&lt;br /&gt;
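These summary measures amount to a simple averaging over the graded pairs; a minimal sketch (illustrative only, not the official evaluation code):

```python
# Mapping of the categorical BROAD labels to numeric scores.
BROAD_VALUES = {"NS": 0, "SS": 1, "VS": 2}

def mean_scores(judgments):
    """judgments: list of (broad_label, fine_score) pairs, one per
    graded query/candidate pair.  Returns the mean BROAD score
    (0 to 2) and the mean FINE score (0 to 100)."""
    broad = sum(BROAD_VALUES[b] for b, _ in judgments) / len(judgments)
    fine = sum(f for _, f in judgments) / len(judgments)
    return broad, fine
```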
&lt;br /&gt;
==Human Evaluation==&lt;br /&gt;
===Overall Summary Results===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv p=3&amp;gt;2010/ams/AMS2010summary_evalutron.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
'''Note:''' RZ1 is a random baseline, included for comparison purposes. &lt;br /&gt;
&lt;br /&gt;
===Friedman's Tests===&lt;br /&gt;
====Friedman's Test (FINE Scores)====&lt;br /&gt;
The Friedman test was run in MATLAB against the '''Fine''' summary data over the 100 queries.&amp;lt;br /&amp;gt;&lt;br /&gt;
Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer','estimate', 'friedman', 'alpha', 0.05);&lt;br /&gt;
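For intuition, the per-query ranking that the Friedman test operates on can be sketched in Python (an illustrative sketch, not the MATLAB pipeline actually used; tie handling is omitted):

```python
def average_ranks(scores):
    """scores[q][s]: the mean FINE score of system s on query q.
    Ranks the systems within each query (rank 1 = highest score)
    and averages those ranks across queries; the Friedman test and
    the Tukey-Kramer HSD comparisons operate on these average ranks."""
    n_sys = len(scores[0])
    totals = [0.0] * n_sys
    for row in scores:
        # Order systems by descending score for this query.
        order = sorted(range(n_sys), key=row.__getitem__, reverse=True)
        for rank, sys_idx in enumerate(order, start=1):
            totals[sys_idx] += rank
    return [t / len(scores) for t in totals]
```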
&lt;br /&gt;
&amp;lt;csv p=3&amp;gt;2010/ams/evalutron.fine.friedman.tukeyKramerHSD.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:2010AMS.evalutron.fine.friedman.tukeyKramerHSD.png|500px]]&lt;br /&gt;
&lt;br /&gt;
====Friedman's Test (BROAD Scores)====&lt;br /&gt;
The Friedman test was run in MATLAB against the '''BROAD''' summary data over the 100 queries.&amp;lt;br /&amp;gt;&lt;br /&gt;
Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer','estimate', 'friedman', 'alpha', 0.05);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv p=3&amp;gt;2010/ams/evalutron.cat.friedman.tukeyKramerHSD.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:2010AMS.evalutron.cat.friedman.tukeyKramerHSD.png|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Summary Results by Query===&lt;br /&gt;
====FINE Scores====&lt;br /&gt;
These are the mean FINE scores per query assigned by Evalutron graders. The FINE scores for the 5 candidates returned per algorithm, per query, have been averaged. Values are bounded between 0 and 100. A perfect score would be 100. Genre labels have been included for reference. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv p=1&amp;gt;2010/ams/fine_scores.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====BROAD Scores====&lt;br /&gt;
These are the mean BROAD scores per query assigned by Evalutron graders. The BROAD scores for the 5 candidates returned per algorithm, per query, have been averaged. Values are bounded between 0 (not similar) and 2 (very similar). A perfect score would be 2. Genre labels have been included for reference. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv p=1&amp;gt;2010/ams/cat_scores.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Raw Scores===&lt;br /&gt;
The raw data derived from the Evalutron 6000 human evaluations are located on the [[2010:Audio Music Similarity and Retrieval Raw Data]] page.&lt;br /&gt;
&lt;br /&gt;
==Metadata and Distance Space Evaluation==&lt;br /&gt;
The following reports provide evaluation statistics based on analysis of the distance space and metadata matches, including:&lt;br /&gt;
* Neighbourhood clustering by artist, album and genre&lt;br /&gt;
* Artist-filtered genre clustering&lt;br /&gt;
* How often the triangular inequality holds&lt;br /&gt;
* Statistics on 'hubs' (tracks similar to many tracks) and orphans (tracks that are not similar to any other tracks at N results).&lt;br /&gt;
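Two of these statistics, triangle-inequality violations and hub/orphan counts, can be sketched for a small distance matrix (an illustrative sketch under assumed list-of-lists input, not the code behind the reports):

```python
import operator

def distance_stats(D, n_results=5, tol=1e-9):
    """D: n-by-n distance matrix as a list of lists.
    Returns (violations, hub_counts): the number of ordered triples
    (i, j, k) breaking the triangle inequality, and for each track
    how often it appears in other tracks' top-n_results lists
    (a count of 0 marks an orphan; a large count marks a hub)."""
    n = len(D)
    violations = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Violation when D[i][k] exceeds D[i][j] + D[j][k].
                if operator.gt(D[i][k] - (D[i][j] + D[j][k]), tol):
                    violations += 1
    hub_counts = [0] * n
    for i in range(n):
        ranked = sorted((j for j in range(n) if j != i),
                        key=D[i].__getitem__)
        for j in ranked[:n_results]:
            hub_counts[j] += 1
    return violations, hub_counts
```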
&lt;br /&gt;
=== Reports ===&lt;br /&gt;
	&lt;br /&gt;
'''BWL1''' = [https://music-ir.org/mirex/results/2010/ams/statistics/BWL1/report.txt Dmitry Bogdanov, Joan Serrà, Nicolas Wack, Perfecto Herrera]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''PS1''' = [https://music-ir.org/mirex/results/2010/ams/statistics/PS1/report.txt Tim Pohle, Dominik Schnitzer]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''PSS1''' = [https://music-ir.org/mirex/results/2010/ams/statistics/PSS1/report.txt Tim Pohle, Klaus Seyerlehner, Dominik Schnitzer]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''RZ1''' = [https://music-ir.org/mirex/results/2010/ams/statistics/RZ1/report.txt Rainer Zufall]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''SSPK2''' = [https://music-ir.org/mirex/results/2010/ams/statistics/SSPK2/report.txt Klaus Seyerlehner, Markus Schedl, Tim Pohle, Peter Knees]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''TLN1''' = [https://music-ir.org/mirex/results/2010/ams/statistics/TLN1/report.txt George Tzanetakis, Mathieu Lagrange, Steven Ness]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''TLN2''' = [https://music-ir.org/mirex/results/2010/ams/statistics/TLN2/report.txt George Tzanetakis, Mathieu Lagrange, Steven Ness]&amp;lt;br /&amp;gt;&lt;br /&gt;
'''TLN3''' = [https://music-ir.org/mirex/results/2010/ams/statistics/TLN3/report.txt George Tzanetakis, Mathieu Lagrange, Steven Ness]&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Run Times ==&lt;br /&gt;
&amp;lt;csv&amp;gt;2010/ams/audiosim.runtime.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011:MIREX_Home&amp;diff=7775</id>
		<title>2011:MIREX Home</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011:MIREX_Home&amp;diff=7775"/>
		<updated>2010-08-30T16:42:52Z</updated>

		<summary type="html">&lt;p&gt;Bfields: /* Welcome to MIREX 2011 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2011==&lt;br /&gt;
This is the main page for the seventh running of the Music Information Retrieval Evaluation eXchange (MIREX 2011). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2011. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2011 community will hold its annual meeting as part of [http://ismir2011.ismir.net/ The 12th International Conference on Music Information Retrieval], ISMIR 2011, which will be held in Miami, Florida, the week of October 23, 2011. The MIREX plenary (working lunch) and poster sessions will be held at a time to be determined during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===MIREX 2011 Evaluation Tasks===&lt;br /&gt;
&lt;br /&gt;
* [[2011:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio Artist Identification&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2011:Audio Cover Song Identification]]&lt;br /&gt;
* [[2011:Audio Tag Classification]] &lt;br /&gt;
* [[2011:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2011:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2011:Audio Onset Detection]]&lt;br /&gt;
* [[2011:Audio Key Detection]]&lt;br /&gt;
* [[2011:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2011:Query by Singing/Humming]]&lt;br /&gt;
* [[2011:Audio Melody Extraction]]&lt;br /&gt;
* [[2011:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2011:Audio Chord Estimation]]&lt;br /&gt;
* [[2011:Query by Tapping]]&lt;br /&gt;
* [[2011:Audio Beat Tracking]]&lt;br /&gt;
* [[2011:Structural Segmentation]]&lt;br /&gt;
* [[2011:Audio Tempo Estimation]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review article that explains the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format about the submitted programme(s) at submission time, to help us and the community better understand how the algorithm works&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2011 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2011 poster session at ISMIR 2011 (date to be determined)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission a [https://spreadsheets.google.com/embeddedform?formkey=dDltRjc4NDBDdkZiaF9qZXV0bU5ScUE6MA dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2011, a submission with difficult-to-satisfy dependencies may be rejected if the IMIRSEL team has not been given sufficient notice of them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2011==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2011 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2011 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2011, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2011 wiki) will be used to embody and disseminate task proposals; however, task-related discussions should be conducted on the EvalFest list rather than on this wiki, and then summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework. NEMA will be released to the community at or before ISMIR 2011, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
Please create an account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2010 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2010 is available at:&lt;br /&gt;
&lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011:Audio_Music_Similarity_and_Retrieval_with_Web_access&amp;diff=7767</id>
		<title>2011:Audio Music Similarity and Retrieval with Web access</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011:Audio_Music_Similarity_and_Retrieval_with_Web_access&amp;diff=7767"/>
		<updated>2010-08-25T16:09:05Z</updated>

		<summary type="html">&lt;p&gt;Bfields: initial write up&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
As the size of digital music collections grows, music similarity has an increasingly important role as an aid to music discovery.  A music similarity system can help a music consumer find new music by finding the music that is most musically similar to specific query songs (or is nearest to songs that the consumer already likes).  Additionally, as more information about music is put on the web, the web becomes a growing resource for understanding non-content-derived similarities between pieces of music, including but not limited to popularity, social links and cultural data.  The web offers a readily accessible way to find this information, though it comes with its own set of problems.  Many websites also offer APIs that can provide useful information to assist in forming a similarity judgement, in a way that is analogous to a shared library; however, these tools cannot be used (or compared) without access to the web.&lt;br /&gt;
&lt;br /&gt;
This page presents the Audio Music Similarity Evaluation with Web Access, including the submission rules and formats. Additionally, background information can be found here that should help explain some of the reasoning behind the approach taken in the evaluation. The intention of the Music Audio Search track is to evaluate music similarity searches (a music search engine that takes a single song as a query, a.k.a. query-by-example), not playlist generation or music recommendation.&lt;br /&gt;
&lt;br /&gt;
Basically, the idea with this task is to run it in parallel with the standard [[2011:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task]] (hereafter referred to as AMS).  The queries (and by extension the evaluation) will be the same; algorithms will simply be able to communicate with the web as one of the ways to determine similarity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Audio Music Similarity and Retrieval task has been run in MIREX 2010, 2009, 2007, and 2006. &lt;br /&gt;
&lt;br /&gt;
[[2010:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2010]] || [[2010:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2009:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2009]] || [[2009:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2007:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2007]] || [[2007:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2006:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2006]] || [[2006:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Issues To Be Resolved==&lt;br /&gt;
&lt;br /&gt;
In splitting the task off from the vanilla AMS to allow web access for algorithms, some new issues are raised.  Solutions to these issues will need to be agreed to by participants and IMIRSEL prior to the running of the task.  If there are any other issues that need to be resolved please feel free to add them below to facilitate discussion.&lt;br /&gt;
&lt;br /&gt;
===Track Labels===&lt;br /&gt;
&lt;br /&gt;
Given that most useful web data is about named artists or tracks, a label for the audio data will be needed.  &lt;br /&gt;
&lt;br /&gt;
Two possible solutions exist here:&lt;br /&gt;
#the dataset can be given exactly as it is for the standard local AMS task and it is left to the individual algorithms to determine labels (artist, title, MBzID, etc) via some sort of fingerprinter&lt;br /&gt;
#metadata is provided (what metadata?  artist name, track title?  Some kind of unique id?)&lt;br /&gt;
&lt;br /&gt;
Going with (1) has the advantage of being more directly comparable to the original AMS task, since the task is basically still the same (blind, audio-only similarity); however, it effectively adds a second task of audio fingerprinting as a preprocess. Alternatively, providing label data is more in line with a real-world problem, though it represents a considerable departure from the original AMS task.&lt;br /&gt;
&lt;br /&gt;
===How Much Web===&lt;br /&gt;
&lt;br /&gt;
How much web access will be allowed in the task is an open question.  A starting point is to allow any data to be used that is available via a public, non-authenticated HTTP request over port 80 (basically the open public web).  Alternatively, this could be reduced to an agreed-upon whitelist of base domains/allowable services.  Also, there will almost certainly need to be a ban on uploading the raw, unprocessed audio content to third-party sites, for both copyright and bandwidth reasons.&lt;br /&gt;
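To make the starting-point policy concrete, a submission could guard its outgoing requests with a check along the following lines (a minimal sketch; the ALLOWED_DOMAINS whitelist is purely illustrative and would need to be agreed by participants):&lt;br /&gt;

```python
from urllib.parse import urlparse

# Hypothetical whitelist; the actual list would be agreed by participants.
ALLOWED_DOMAINS = {"musicbrainz.org", "last.fm"}

def is_request_allowed(url, whitelist=None):
    """Return True if a URL fits the proposed access policy:
    plain, unauthenticated HTTP on port 80 (the open public web),
    optionally restricted to a whitelist of base domains."""
    parts = urlparse(url)
    if parts.scheme != "http":
        return False          # no https, ftp, etc.
    if parts.port not in (None, 80):
        return False          # port 80 only
    if parts.username or parts.password:
        return False          # no authenticated requests
    if whitelist is not None:
        host = parts.hostname or ""
        return any(host == d or host.endswith("." + d) for d in whitelist)
    return True
```

Passing whitelist=None corresponds to the open-web option; passing the agreed list corresponds to the restricted option.&lt;br /&gt;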
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
This is basically up to IMIRSEL to set, but it needs to be settled quickly, as it will determine how much crawling can be done. 72 hours was allowed for last year's AMS task.&lt;br /&gt;
&lt;br /&gt;
===Submission Requirements===&lt;br /&gt;
&lt;br /&gt;
Given the nature of the task, stricter disclosure of some kind will be required of all submitted algorithms.  One option here is to have all the code that is run locally be published (OSS licence preferred but not required) along with the standard abstract.  Rather than preventing 'get the answer from some website' type submissions, this simply requires that the authors admit that's what they're doing.  Fully disclosing the algorithm might be enough as well, though due to the nature of the task, running binaries presents particularly difficult problems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Evaluation==&lt;br /&gt;
&lt;br /&gt;
Evaluation will most likely be the same as for AMS, via the Evalutron.&lt;br /&gt;
&lt;br /&gt;
==Participant Interest List==&lt;br /&gt;
&lt;br /&gt;
Please include your name, institution and contact details.&lt;br /&gt;
&lt;br /&gt;
#Ben Fields, Goldsmiths University of London, b (dot) fields (at) gold (dot) ac (dot) uk&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011_talk:Audio_Music_Similarity_and_Retrieval&amp;diff=7766</id>
		<title>2011 talk:Audio Music Similarity and Retrieval</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011_talk:Audio_Music_Similarity_and_Retrieval&amp;diff=7766"/>
		<updated>2010-08-25T11:31:13Z</updated>

		<summary type="html">&lt;p&gt;Bfields: Created page with 'The new spin off task I'm proposing for next year  (2011:Audio_Music_Similarity_and_Retrieval_with_Web_access) should almost certainly be merged through with this task (as th…'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The new spin off task I'm proposing for next year  ([[2011:Audio_Music_Similarity_and_Retrieval_with_Web_access]]) should almost certainly be merged through with this task (as the evaluation will probably be done in unison and the i/o formats will be the same)&lt;br /&gt;
&lt;br /&gt;
--[[User:Bfields|Bfields]] 11:31, 25 August 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011:Audio_Music_Similarity_and_Retrieval&amp;diff=7765</id>
		<title>2011:Audio Music Similarity and Retrieval</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011:Audio_Music_Similarity_and_Retrieval&amp;diff=7765"/>
		<updated>2010-08-25T11:24:49Z</updated>

		<summary type="html">&lt;p&gt;Bfields: date updates and an addition of the pre flag up top&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;pre&amp;gt;Note: I just moved this over from last year's task write up as a starting point - bfields &amp;lt;/pre&amp;gt;&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
As the size of digital music collections grows, music similarity has an increasingly important role as an aid to music discovery.  A music similarity system can help a music consumer find new music by finding the music that is most musically similar to specific query songs (or is nearest to songs that the consumer already likes).  &lt;br /&gt;
&lt;br /&gt;
This page presents the Audio Music Similarity Evaluation, including the submission rules and formats. Additionally, background information can be found here that should help explain some of the reasoning behind the approach taken in the evaluation. The intention of the Music Audio Search track is to evaluate music similarity searches (a music search engine that takes a single song as a query, a.k.a. query-by-example), not playlist generation or music recommendation.&lt;br /&gt;
&lt;br /&gt;
The Audio Music Similarity and Retrieval task has been run in MIREX 2010, 2009, 2007, and 2006. &lt;br /&gt;
&lt;br /&gt;
[[2010:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2010]] || [[2010:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2009:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2009]] || [[2009:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2007:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2007]] || [[2007:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2006:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2006]] || [[2006:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
Continuing with the changes that took place last year, task-specific lists have been dropped.  Instead, we are asking that all discussions take place on the MIREX  [https://mail.lis.illinois.edu/mailman/listinfo/evalfest &amp;quot;EvalFest&amp;quot; list]. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Collection statistics: 7000 30-second audio clips drawn from 10 genres (700 clips from each genre).&lt;br /&gt;
&lt;br /&gt;
The genres that the data was drawn from are:&lt;br /&gt;
*Blues&lt;br /&gt;
*Jazz&lt;br /&gt;
*Country/Western&lt;br /&gt;
*Baroque&lt;br /&gt;
*Classical&lt;br /&gt;
*Romantic&lt;br /&gt;
*Electronica&lt;br /&gt;
*Hip-Hop&lt;br /&gt;
*Rock&lt;br /&gt;
*HardRock/Metal&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Audio formats ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 22 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV&lt;br /&gt;
* Clip length: 30 seconds from the middle of each file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
Two distinct evaluations will be performed&lt;br /&gt;
* Human Evaluation&lt;br /&gt;
* Objective statistics derived from the results lists&lt;br /&gt;
&lt;br /&gt;
Note that at MIREX 2006, participating algorithms were required to return full distance matrices showing the distance between all tracks; in subsequent years, however, we have also supported a sparse distance matrix format (detailed below), where only the distances of the top 100 results for each query in the collection are returned.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Human Evaluation ===&lt;br /&gt;
The primary evaluation will involve subjective judgments by human evaluators of the retrieved sets using IMIRSEL's Evalutron 6000 system. This year algorithms will be presented with the same 30 second preview clip that will be reviewed by the human evaluators. &lt;br /&gt;
&lt;br /&gt;
* Evaluator question: Given a search based on track A, the following set of results was returned by all systems. Please place each returned track into one of three classes (not similar, somewhat similar, very similar) and provide an indication, on a continuous scale of 0 - 10, of how similar the track is to the query. &lt;br /&gt;
* ~120 randomly selected queries,  5 results per query, 1 set of ears, ~10 participating labs&lt;br /&gt;
* A higher number of queries is preferred, as IR research indicates that most of the variance is in the queries&lt;br /&gt;
* The songs by the same artist as the query will be filtered out of each result list (artist filtering) to avoid colouring an evaluator's judgement (a cover song or a song by the same artist in a result list is likely to reduce the relative ranking of other similar but independent songs; use of songs by the same artist may also allow over-fitting to affect the results)&lt;br /&gt;
* It will be possible for researchers to use this data for other types of system comparisons after MIREX 2011 results have been finalized.&lt;br /&gt;
* Human evaluation to be designed and led by IMIRSEL following a similar format to that used at MIREX 2006 (see: [[2006:Evalutron6000_Issues|Evalutron Issues in MIREX 2006]]).&lt;br /&gt;
* Human evaluators will be drawn from the participating labs (and any volunteers from IMIRSEL or on the MIREX lists)&lt;br /&gt;
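The artist filtering described above could be sketched as follows (illustrative only; artist_of stands in for whatever metadata mapping the evaluation actually uses):&lt;br /&gt;

```python
def artist_filter(results, artist_of, query):
    """Drop results whose artist matches the query's artist, so a
    cover song or same-artist track cannot colour the evaluator's
    judgement of the remaining results.

    results:   ordered [(filename, distance), ...] for one query
    artist_of: {filename: artist name} metadata mapping (hypothetical)
    """
    query_artist = artist_of.get(query)
    return [(name, dist) for name, dist in results
            if artist_of.get(name) != query_artist]
```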
&lt;br /&gt;
=== Objective Statistics derived from the distance matrix ===&lt;br /&gt;
Statistics of each distance matrix will be calculated including:&lt;br /&gt;
&lt;br /&gt;
* Average % of Genre, Artist and Album matches in the top 5, 10, 20 &amp;amp; 50 results - Precision at 5, 10, 20 &amp;amp; 50&lt;br /&gt;
* Average % of Genre matches in the top 5, 10, 20 &amp;amp; 50 results after artist filtering of results&lt;br /&gt;
* Average % of available Genre, Artist and Album matches in the top 5, 10, 20 &amp;amp; 50 results - Recall at 5, 10, 20 &amp;amp; 50 (just normalising scores when fewer than 20 matches for an artist, album or genre are available in the database)&lt;br /&gt;
* Always similar - Maximum # times a file was in the top 5, 10, 20 &amp;amp; 50 results&lt;br /&gt;
* % File never similar (never in a top 5, 10, 20 &amp;amp; 50 result list)&lt;br /&gt;
* % of 'test-able' song triplets where triangular inequality holds&lt;br /&gt;
** Note that as we are not requiring full distance matrices this year we will only be testing triangles that are found in the sparse distance matrix.&lt;br /&gt;
* Plot of the &amp;quot;number of times similar&amp;quot; curve - a plot of song number vs. the number of times the song appeared in a top-20 list, with songs sorted according to the number of times they appeared in a top-20 list (to produce the curve). Systems with a sharp rise at the end of this plot have &amp;quot;hubs&amp;quot;, while a long 'zero' tail shows many never-similar results.&lt;br /&gt;
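As an illustration of the triangular-inequality statistic (a sketch only, not the IMIRSEL implementation), the fraction of test-able triplets can be computed from a sparse set of distances like this:&lt;br /&gt;

```python
from itertools import permutations

def triangle_inequality_rate(dist):
    """dist: {(query, result): distance} taken from a sparse matrix.
    Tests every ordered triplet (a, b, c) for which all three
    distances d(a,b), d(b,c), d(a,c) are present in the sparse data,
    and returns the fraction where d(a,c) <= d(a,b) + d(b,c)."""
    tracks = {t for pair in dist for t in pair}
    tested = held = 0
    for a, b, c in permutations(tracks, 3):
        if (a, b) in dist and (b, c) in dist and (a, c) in dist:
            tested += 1
            if dist[(a, c)] <= dist[(a, b)] + dist[(b, c)]:
                held += 1
    return held / tested if tested else 0.0
```

Only triangles whose three sides all appear in the sparse matrix are counted, matching the note above.&lt;br /&gt;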
&lt;br /&gt;
&lt;br /&gt;
=== Runtimes ===&lt;br /&gt;
In addition, computation times for feature extraction/index-building and querying &lt;br /&gt;
will be measured.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
Submissions to this task will have to conform to the format detailed below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Implementation details ===&lt;br /&gt;
Scratch folders will be provided for all submissions for the storage of feature files and any model or index files to be produced. Executables will have to accept the path to their scratch folder as a command line parameter. Executables will also have to track which feature files correspond to which audio files internally. To facilitate this process, unique filenames will be assigned to each audio track.&lt;br /&gt;
&lt;br /&gt;
The audio files to be used in the task will be specified in a simple ASCII list file. This file will contain one path per line with no header line. Executables will have to accept the path to these list files as a command line parameter. The formats for the list files are specified below. &lt;br /&gt;
&lt;br /&gt;
Multi-processor compute nodes (2, 4 or 8 cores) will be used to run this task. Hence, participants could attempt to use parallelism. Ideally, the number of threads to use should be specified as a command line parameter. Alternatively, implementations may be provided in hard-coded 2, 4 or 8 thread configurations. Single threaded submissions will, of course, be accepted but may be disadvantaged by time constraints.&lt;br /&gt;
&lt;br /&gt;
Submissions will have to output either a full distance matrix or a search results file with the top 100 search results for each track in the collection. This list of results will be used to extract the artist-filtered results to present to the human evaluators and will facilitate the computation of the objective statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== I/O formats ===&lt;br /&gt;
In this section the input and output files used in this task are described,&lt;br /&gt;
as are the command line calling format requirements for submissions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Audio collection list file (input)====&lt;br /&gt;
The list file passed for feature extraction and indexing will be a simple ASCII list file. This file will contain one path per line with no header line; all paths will be absolute (full paths).&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
&lt;br /&gt;
   /aDirectory/collectionFolder/b002342.wav&lt;br /&gt;
   /aDirectory/collectionFolder/a005921.wav&lt;br /&gt;
   ...&lt;br /&gt;
&lt;br /&gt;
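Reading this list file is straightforward; here is a minimal sketch (pairing the filename portion with the full path is an assumption about how a submission might track its feature files, not part of the specification):&lt;br /&gt;

```python
import os

def parse_collection_list(lines):
    """Parse the lines of an audio collection list file: one absolute
    path per line, no header line. Returns (filename, full_path) pairs;
    the filename portion (e.g. 'b002342.wav') is the unique ID used in
    the output formats."""
    paths = [ln.strip() for ln in lines if ln.strip()]
    return [(os.path.basename(p), p) for p in paths]

def read_collection_list(path):
    # path to the list file is given on the command line
    with open(path) as f:
        return parse_collection_list(f)
```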
&lt;br /&gt;
==== Distance matrix output files ====&lt;br /&gt;
Participants should return one of two available output file formats, a full distance matrix or a sparse distance matrix. The sparse distance matrix format is preferred (as the dense distance matrices can be very large).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Sparse Distance Matrix =====&lt;br /&gt;
If computation time or exhaustive search is a concern, or a full matrix is not a normal output of the indexing algorithm employed, the sparse distance matrix format detailed below may be used:&lt;br /&gt;
&lt;br /&gt;
A simple ASCII file listing a name for the algorithm and the top 100 search results for every track in the collection. &lt;br /&gt;
&lt;br /&gt;
This file should start with a header line with a name for the algorithm and should be followed by the results for one query per line, prefixed by the filename portion of the query path. This should be followed by a tab character and a tab-separated, ordered list of the top 100 search results. Each result should include the result filename (e.g. a034728.wav) and the distance (e.g. 17.1 or 0.23) separated by a comma.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MyAlgorithm (my.email@address.com)&lt;br /&gt;
&amp;lt;example 1 filename&amp;gt;\t&amp;lt;result 1 name&amp;gt;,&amp;lt;result 1 distance&amp;gt;,\t&amp;lt;result 2 name&amp;gt;,&amp;lt;result 2 distance&amp;gt;, ... \t&amp;lt;result 100 name&amp;gt;,&amp;lt;result 100 distance&amp;gt;&lt;br /&gt;
&amp;lt;example 2 filename&amp;gt;\t&amp;lt;result 1 name&amp;gt;,&amp;lt;result 1 distance&amp;gt;,\t&amp;lt;result 2 name&amp;gt;,&amp;lt;result 2 distance&amp;gt;, ... \t&amp;lt;result 100 name&amp;gt;,&amp;lt;result 100 distance&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MyAlgorithm (my.email@address.com)&lt;br /&gt;
a009342.wav	b229311.wav,0.16	a023821.wav,0.19	a001329,0.24  ... etc.&lt;br /&gt;
a009343.wav	a661931.wav,0.12	a043322.wav,0.17	c002346,0.21  ... etc.&lt;br /&gt;
a009347.wav	a671239.wav,0.13	c112393.wav,0.20	b083293,0.25  ... etc.&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The path to which this list file should be written must be accepted as a parameter on the command line.&lt;br /&gt;
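A minimal sketch of producing this file (the function names and data layout are illustrative, not part of the specification):&lt;br /&gt;

```python
def format_sparse_matrix(algo_name, results):
    """Render the sparse distance-matrix format as a string.

    algo_name: header line naming the algorithm
    results:   [(query_filename, hits), ...] where hits is the ordered
               top-100 list of (result_filename, distance) pairs
    """
    lines = [algo_name]
    for query, hits in results:
        cells = ["%s,%s" % (name, dist) for name, dist in hits]
        lines.append(query + "\t" + "\t".join(cells))
    return "\n".join(lines) + "\n"

def write_sparse_matrix(out_path, algo_name, results):
    # out_path is the results-file location given on the command line
    with open(out_path, "w") as f:
        f.write(format_sparse_matrix(algo_name, results))
```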
&lt;br /&gt;
&lt;br /&gt;
===== Full Distance Matrix =====&lt;br /&gt;
Full distance matrix files should be generated in the following format: &lt;br /&gt;
&lt;br /&gt;
* A simple ASCII file listing a name for the algorithm on the first line,&lt;br /&gt;
* Numbered paths for each file appearing in the matrix; these can be in any order (i.e. the files don't have to be in the same order as they appeared in the list file) but should index into the columns/rows of the distance matrix.&lt;br /&gt;
* A line beginning with 'Q/R' followed by a tab and a tab separated list of the numbers 1 to N, where N is the number of files covered by the matrix.&lt;br /&gt;
* One line per file in the matrix giving the distances from that file to each other file in the matrix. All distances should be zero or positive (0.0+) and should not be infinite or NaN. Values should be separated by a single tab character. Obviously the diagonal of the matrix (the distance of a track to itself) should be zero.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Distance matrix header text with system name&lt;br /&gt;
1\t&amp;lt;/path/to/audio/file/1.wav&amp;gt;&lt;br /&gt;
2\t&amp;lt;/path/to/audio/file/2.wav&amp;gt;&lt;br /&gt;
3\t&amp;lt;/path/to/audio/file/3.wav&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
N\t&amp;lt;/path/to/audio/file/N.wav&amp;gt;&lt;br /&gt;
Q/R\t1\t2\t3\t...\tN&lt;br /&gt;
1\t0.0\t&amp;lt;dist 1 to 2&amp;gt;\t&amp;lt;dist 1 to 3&amp;gt;\t...\t&amp;lt;dist 1 to N&amp;gt;&lt;br /&gt;
2\t&amp;lt;dist 2 to 1&amp;gt;\t0.0\t&amp;lt;dist 2 to 3&amp;gt;\t...\t&amp;lt;dist 2 to N&amp;gt;&lt;br /&gt;
3\t&amp;lt;dist 3 to 1&amp;gt;\t&amp;lt;dist 3 to 2&amp;gt;\t0.0\t...\t&amp;lt;dist 3 to N&amp;gt;&lt;br /&gt;
...\t...\t...\t...\t...\t...&lt;br /&gt;
N\t&amp;lt;dist N to 1&amp;gt;\t&amp;lt;dist N to 2&amp;gt;\t&amp;lt;dist N to 3&amp;gt;\t...\t0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example distance matrix 0.1&lt;br /&gt;
1    /path/to/audio/file/1.wav&lt;br /&gt;
2    /path/to/audio/file/2.wav&lt;br /&gt;
3    /path/to/audio/file/3.wav&lt;br /&gt;
4    /path/to/audio/file/4.wav&lt;br /&gt;
Q/R   1        2        3        4&lt;br /&gt;
1     0.00000  1.24100  50.2e-4  0.42559&lt;br /&gt;
2     1.24100  0.00000  0.62640  0.23564&lt;br /&gt;
3     50.2e-4  0.62640  0.00000  0.38000&lt;br /&gt;
4     0.42559  0.23564  0.38000  0.00000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
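A sketch of emitting the full-matrix format (illustrative only; the fixed '%.5f' formatting is an assumption, and any parseable non-negative decimal notation should do):&lt;br /&gt;

```python
def format_full_matrix(header, paths, dist):
    """Render a full distance matrix as a string.

    header: free-text first line naming the system
    paths:  list of N audio file paths (any order)
    dist:   N x N nested list, dist[i][j] >= 0, zero diagonal
    """
    n = len(paths)
    lines = [header]
    # numbered paths that index into the matrix rows/columns
    for i, p in enumerate(paths, 1):
        lines.append("%d\t%s" % (i, p))
    # the Q/R column-index line
    lines.append("Q/R\t" + "\t".join(str(i) for i in range(1, n + 1)))
    # one row of tab-separated distances per file
    for i in range(n):
        row = "\t".join("%.5f" % dist[i][j] for j in range(n))
        lines.append("%d\t%s" % (i + 1, row))
    return "\n".join(lines) + "\n"
```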
&lt;br /&gt;
==== Example submission calling formats ====&lt;br /&gt;
   extractFeatures.sh /path/to/scratch/folder /path/to/collectionListFile.txt&lt;br /&gt;
   Query.sh /path/to/scratch/folder /path/to/collectionListFile.txt /path/to/outputResultsFile.txt&lt;br /&gt;
      &lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
   doAudioSim.sh -numThreads 8 /path/to/scratch/folder /path/to/collectionListFile.txt /path/to/outputResultsFile.txt&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AMS evaluation software ==&lt;br /&gt;
The legacy software for performing various AMS-related functions is available [https://www.music-ir.org/mirex/results/2010/AMS_tools.zip here] ([https://www.music-ir.org/mirex/results/2010/AMS_TOOLS_README%20.txt README file]). It may be used to benchmark systems prior to submission and to check distance matrix file formats.&lt;br /&gt;
&lt;br /&gt;
This tool set supports the following functions:&lt;br /&gt;
* the import of collection metadata from a delimited text file (e.g. TAB or CSV)&lt;br /&gt;
* the selection of a stratified random list of queries from the collection (i.e. an equal number of queries are chosen for each class of a particular metadata field, such as genre).&lt;br /&gt;
* the generation of results from distance matrices based on a list of pre-chosen queries.&lt;br /&gt;
* (pseudo-)objective statistical evaluation of distance matrices by comparing query metadata to the metadata of the top N results retrieved. Supports artist, album, genre and artist-filtered genre (where results from the same artist as the query are skipped). Additionally, the number of tracks never returned as results for any query (orphans) and the largest hub (the track similar to the most other tracks) are measured. Finally, the number of cases where the triangular inequality holds is computed.&lt;br /&gt;
* preparation and post processing of results for the IMIRSEL Evalutron 6k  human evaluation interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission opening date ==&lt;br /&gt;
&lt;br /&gt;
TBA&lt;br /&gt;
&lt;br /&gt;
== Submission closing date ==&lt;br /&gt;
TBA&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011:Audio_Music_Similarity_and_Retrieval&amp;diff=7764</id>
		<title>2011:Audio Music Similarity and Retrieval</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011:Audio_Music_Similarity_and_Retrieval&amp;diff=7764"/>
		<updated>2010-08-25T11:12:39Z</updated>

		<summary type="html">&lt;p&gt;Bfields: Brought in page from MIREX 2010&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
As the size of digital music collections grows, music similarity has an increasingly important role as an aid to music discovery.  A music similarity system can help a music consumer find new music by finding the music that is most musically similar to specific query songs (or is nearest to songs that the consumer already likes).  &lt;br /&gt;
&lt;br /&gt;
This page presents the Audio Music Similarity Evaluation, including the submission rules and formats. Additionally, background information can be found here that should help explain some of the reasoning behind the approach taken in the evaluation. The intention of the Music Audio Search track is to evaluate music similarity searches (a music search engine that takes a single song as a query, a.k.a. query-by-example), not playlist generation or music recommendation.&lt;br /&gt;
&lt;br /&gt;
The Audio Music Similarity and Retrieval task has been run in MIREX 2009, 2007, and 2006. &lt;br /&gt;
&lt;br /&gt;
[[2009:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2009]] || [[2009:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2007:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2007]] || [[2007:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2006:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2006]] || [[2006:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
In the past we have used a specific mailing list for the discussion of this task and related tasks (e.g., [[2010:Audio Classification (Train/Test) Tasks]], [[2010:Audio Cover Song Identification]], [[2010:Audio Tag Classification]], [[2010:Audio Music Similarity and Retrieval]]). This year, however, we are asking that all discussions take place on the MIREX  [https://mail.lis.illinois.edu/mailman/listinfo/evalfest &amp;quot;EvalFest&amp;quot; list]. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Collection statistics: 7000 30-second audio clips drawn from 10 genres (700 clips from each genre).&lt;br /&gt;
&lt;br /&gt;
The genres that the data was drawn from are:&lt;br /&gt;
*Blues&lt;br /&gt;
*Jazz&lt;br /&gt;
*Country/Western&lt;br /&gt;
*Baroque&lt;br /&gt;
*Classical&lt;br /&gt;
*Romantic&lt;br /&gt;
*Electronica&lt;br /&gt;
*Hip-Hop&lt;br /&gt;
*Rock&lt;br /&gt;
*HardRock/Metal&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Audio formats ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 22 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV&lt;br /&gt;
* Clip length: 30 seconds from the middle of each file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
Two distinct evaluations will be performed&lt;br /&gt;
* Human Evaluation&lt;br /&gt;
* Objective statistics derived from the results lists&lt;br /&gt;
&lt;br /&gt;
Note that at MIREX 2006, participating algorithms were required to return full distance matrices showing the distance between all tracks; in subsequent years, however, we have also supported a sparse distance matrix format (detailed below), where only the distances of the top 100 results for each query in the collection are returned.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Human Evaluation ===&lt;br /&gt;
The primary evaluation will involve subjective judgments by human evaluators of the retrieved sets using IMIRSEL's Evalutron 6000 system. This year algorithms will be presented with the same 30 second preview clip that will be reviewed by the human evaluators. &lt;br /&gt;
&lt;br /&gt;
* Evaluator question: Given a search based on track A, the following set of results was returned by all systems. Please place each returned track into one of three classes (not similar, somewhat similar, very similar) and provide an indication, on a continuous scale of 0 - 10, of how similar the track is to the query. &lt;br /&gt;
* ~120 randomly selected queries, 5 results per query, 1 set of ears, ~10 participating labs&lt;br /&gt;
* A higher number of queries is preferred, as IR research indicates that most of the variance is in the queries&lt;br /&gt;
* The songs by the same artist as the query will be filtered out of each result list (artist filtering) to avoid colouring an evaluator's judgement (a cover song or a song by the same artist in a result list is likely to reduce the relative ranking of other similar but independent songs; use of songs by the same artist may also allow over-fitting to affect the results)&lt;br /&gt;
* It will be possible for researchers to use this data for other types of system comparisons after MIREX 2010 results have been finalized.&lt;br /&gt;
* Human evaluation to be designed and led by IMIRSEL following a similar format to that used at MIREX 2006 (see: [[2006:Evalutron6000_Issues|Evalutron Issues in MIREX 2006]]).&lt;br /&gt;
* Human evaluators will be drawn from the participating labs (and any volunteers from IMIRSEL or on the MIREX lists)&lt;br /&gt;
&lt;br /&gt;
=== Objective Statistics derived from the distance matrix ===&lt;br /&gt;
Statistics of each distance matrix will be calculated including:&lt;br /&gt;
&lt;br /&gt;
* Average % of Genre, Artist and Album matches in the top 5, 10, 20 &amp;amp; 50 results - Precision at 5, 10, 20 &amp;amp; 50&lt;br /&gt;
* Average % of Genre matches in the top 5, 10, 20 &amp;amp; 50 results after artist filtering of results&lt;br /&gt;
* Average % of available Genre, Artist and Album matches in the top 5, 10, 20 &amp;amp; 50 results - Recall at 5, 10, 20 &amp;amp; 50 (just normalising scores when fewer than 20 matches for an artist, album or genre are available in the database)&lt;br /&gt;
* Always similar - Maximum # times a file was in the top 5, 10, 20 &amp;amp; 50 results&lt;br /&gt;
* % File never similar (never in a top 5, 10, 20 &amp;amp; 50 result list)&lt;br /&gt;
* % of 'test-able' song triplets where triangular inequality holds&lt;br /&gt;
** Note that as we are not requiring full distance matrices this year we will only be testing triangles that are found in the sparse distance matrix.&lt;br /&gt;
* Plot of the &amp;quot;number of times similar&amp;quot; curve - a plot of song number vs. the number of times the song appeared in a top-20 list, with songs sorted according to the number of times they appeared in a top-20 list (to produce the curve). Systems with a sharp rise at the end of this plot have &amp;quot;hubs&amp;quot;, while a long 'zero' tail shows many never-similar results.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Runtimes ===&lt;br /&gt;
In addition, computation times for feature extraction/index-building and querying &lt;br /&gt;
will be measured.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
Submissions to this task will have to conform to the format detailed below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Implementation details ===&lt;br /&gt;
Scratch folders will be provided for all submissions for the storage of feature files and any model or index files to be produced. Executables will have to accept the path to their scratch folder as a command line parameter. Executables will also have to track which feature files correspond to which audio files internally. To facilitate this process, unique filenames will be assigned to each audio track.&lt;br /&gt;
&lt;br /&gt;
The audio files to be used in the task will be specified in a simple ASCII list file. This file will contain one path per line with no header line. Executables will have to accept the path to these list files as a command line parameter. The formats for the list files are specified below. &lt;br /&gt;
&lt;br /&gt;
Multi-processor compute nodes (2, 4 or 8 cores) will be used to run this task. Hence, participants could attempt to use parallelism. Ideally, the number of threads to use should be specified as a command line parameter. Alternatively, implementations may be provided in hard-coded 2, 4 or 8 thread configurations. Single threaded submissions will, of course, be accepted but may be disadvantaged by time constraints.&lt;br /&gt;
&lt;br /&gt;
Submissions will have to output either a full distance matrix or a search results file with the top 100 search results for each track in the collection. This list of results will be used to extract the artist-filtered results to present to the human evaluators and will facilitate the computation of the objective statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== I/O formats ===&lt;br /&gt;
In this section the input and output files used in this task are described,&lt;br /&gt;
as are the command line calling format requirements for submissions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Audio collection list file (input)====&lt;br /&gt;
The list file passed for feature extraction and indexing will be a simple ASCII list file. This file will contain one path per line with no header line; all paths will be absolute (full paths).&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
&lt;br /&gt;
   /aDirectory/collectionFolder/b002342.wav&lt;br /&gt;
   /aDirectory/collectionFolder/a005921.wav&lt;br /&gt;
   ...&lt;br /&gt;
&lt;br /&gt;
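As a sanity check before submission, the list-file format above can be read with a few lines of code. This is an illustrative sketch, not part of the task specification; the function name is an assumption.

```python
# Illustrative sketch (not part of the task spec): read a collection list file,
# which contains one absolute audio path per line and no header line.
def read_collection_list(path):
    """Return the list of absolute audio file paths, skipping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]
```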
&lt;br /&gt;
==== Distance matrix output files ====&lt;br /&gt;
Participants should return one of two available output file formats, a full distance matrix or a sparse distance matrix. The sparse distance matrix format is preferred (as the dense distance matrices can be very large).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Sparse Distance Matrix =====&lt;br /&gt;
If exhaustive search is computationally expensive or not a normal output of the indexing algorithm employed, the sparse distance matrix format detailed below may be used:&lt;br /&gt;
&lt;br /&gt;
A simple ASCII file listing a name for the algorithm and the top 100 search results for every track in the collection. &lt;br /&gt;
&lt;br /&gt;
This file should start with a header line with a name for the algorithm and should be followed by the results for one query per line, prefixed by the filename portion of the query path. This should be followed by a tab character and a tab separated, ordered list of the top 100 search results. Each result should include the result filename (e.g. a034728.wav) and the distance (e.g. 17.1 or 0.23) separated by a comma.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MyAlgorithm (my.email@address.com)&lt;br /&gt;
&amp;lt;example 1 filename&amp;gt;\t&amp;lt;result 1 name&amp;gt;,&amp;lt;result 1 distance&amp;gt;\t&amp;lt;result 2 name&amp;gt;,&amp;lt;result 2 distance&amp;gt;\t ... \t&amp;lt;result 100 name&amp;gt;,&amp;lt;result 100 distance&amp;gt;&lt;br /&gt;
&amp;lt;example 2 filename&amp;gt;\t&amp;lt;result 1 name&amp;gt;,&amp;lt;result 1 distance&amp;gt;\t&amp;lt;result 2 name&amp;gt;,&amp;lt;result 2 distance&amp;gt;\t ... \t&amp;lt;result 100 name&amp;gt;,&amp;lt;result 100 distance&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MyAlgorithm (my.email@address.com)&lt;br /&gt;
a009342.wav	b229311.wav,0.16	a023821.wav,0.19	a001329.wav,0.24  ... etc.&lt;br /&gt;
a009343.wav	a661931.wav,0.12	a043322.wav,0.17	c002346.wav,0.21  ... etc.&lt;br /&gt;
a009347.wav	a671239.wav,0.13	c112393.wav,0.20	b083293.wav,0.25  ... etc.&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The path to which this list file should be written must be accepted as a parameter on the command line.&lt;br /&gt;
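The sparse format above is straightforward to emit programmatically. The following is a minimal sketch, assuming the results have already been computed and ordered; the function and variable names are illustrative, not part of the spec.

```python
# Illustrative sketch (names are assumptions, not part of the spec): write the
# sparse results format described above. `results` maps each query filename to
# an ordered list of (result_filename, distance) pairs, best match first.
def write_sparse_results(out_path, algorithm_header, results):
    with open(out_path, "w") as f:
        f.write(algorithm_header + "\n")
        for query, hits in results.items():
            # each result cell is "<name>,<distance>"; cells are tab separated
            cells = ["%s,%s" % (name, dist) for name, dist in hits]
            f.write(query + "\t" + "\t".join(cells) + "\n")
```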
&lt;br /&gt;
&lt;br /&gt;
===== Full Distance Matrix =====&lt;br /&gt;
Full distance matrix files should be generated in the following format: &lt;br /&gt;
&lt;br /&gt;
* A simple ASCII file listing a name for the algorithm on the first line,&lt;br /&gt;
* Numbered paths for each file appearing in the matrix; these can be in any order (i.e. the files don't have to be in the same order as they appeared in the list file) but should index into the columns/rows of the distance matrix.&lt;br /&gt;
* A line beginning with 'Q/R' followed by a tab and a tab separated list of the numbers 1 to N, where N is the number of files covered by the matrix.&lt;br /&gt;
* One line per file in the matrix, giving the distances from that file to each other file in the matrix. All distances should be zero or positive (0.0+) and should not be infinite or NaN. Values should be separated by a single tab character. Obviously the diagonal of the matrix (the distance of a track to itself) should be zero.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Distance matrix header text with system name&lt;br /&gt;
1\t&amp;lt;/path/to/audio/file/1.wav&amp;gt;&lt;br /&gt;
2\t&amp;lt;/path/to/audio/file/2.wav&amp;gt;&lt;br /&gt;
3\t&amp;lt;/path/to/audio/file/3.wav&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
N\t&amp;lt;/path/to/audio/file/N.wav&amp;gt;&lt;br /&gt;
Q/R\t1\t2\t3\t...\tN&lt;br /&gt;
1\t0.0\t&amp;lt;dist 1 to 2&amp;gt;\t&amp;lt;dist 1 to 3&amp;gt;\t...\t&amp;lt;dist 1 to N&amp;gt;&lt;br /&gt;
2\t&amp;lt;dist 2 to 1&amp;gt;\t0.0\t&amp;lt;dist 2 to 3&amp;gt;\t...\t&amp;lt;dist 2 to N&amp;gt;&lt;br /&gt;
3\t&amp;lt;dist 3 to 1&amp;gt;\t&amp;lt;dist 3 to 2&amp;gt;\t0.0\t...\t&amp;lt;dist 3 to N&amp;gt;&lt;br /&gt;
...\t...\t...\t...\t...\t...&lt;br /&gt;
N\t&amp;lt;dist N to 1&amp;gt;\t&amp;lt;dist N to 2&amp;gt;\t&amp;lt;dist N to 3&amp;gt;\t...\t0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example distance matrix 0.1&lt;br /&gt;
1    /path/to/audio/file/1.wav&lt;br /&gt;
2    /path/to/audio/file/2.wav&lt;br /&gt;
3    /path/to/audio/file/3.wav&lt;br /&gt;
4    /path/to/audio/file/4.wav&lt;br /&gt;
Q/R   1        2        3        4&lt;br /&gt;
1     0.00000  1.24100  0.2e-4   0.42559&lt;br /&gt;
2     1.24100  0.00000  0.62640  0.23564&lt;br /&gt;
3     50.2e-4  0.62640  0.00000  0.38000&lt;br /&gt;
4     0.42559  0.23567  0.38000  0.00000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
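A sketch of a reader for the full matrix format, together with the basic sanity checks the format implies (zero diagonal, no negative, infinite or NaN entries), might look as follows. It assumes tab-separated values as specified above; all names are illustrative, not part of the spec.

```python
import math

# Illustrative sketch (not part of the task spec): parse the full distance
# matrix format described above and apply the sanity checks it implies.
def read_distance_matrix(path):
    with open(path) as f:
        lines = [ln.rstrip("\n") for ln in f]
    header = lines[0]                      # algorithm/system name line
    files = {}
    i = 1
    while not lines[i].startswith("Q/R"):  # numbered "index\tpath" lines
        idx, file_path = lines[i].split("\t", 1)
        files[int(idx)] = file_path
        i += 1
    n = len(files)
    matrix = []
    for row_line in lines[i + 1 : i + 1 + n]:
        cells = row_line.split("\t")
        matrix.append([float(c) for c in cells[1:]])  # drop the row index
    for r in range(n):
        assert matrix[r][r] == 0.0, "diagonal (self-distance) must be zero"
        for d in matrix[r]:
            # d >= 0.0 is False for NaN, so this also rejects NaN entries
            assert d >= 0.0 and not math.isinf(d), "bad distance value"
    return header, files, matrix
```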
==== Example submission calling formats ====&lt;br /&gt;
   extractFeatures.sh /path/to/scratch/folder /path/to/collectionListFile.txt&lt;br /&gt;
   Query.sh /path/to/scratch/folder /path/to/collectionListFile.txt /path/to/outputResultsFile.txt&lt;br /&gt;
      &lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
   doAudioSim.sh -numThreads 8 /path/to/scratch/folder /path/to/collectionListFile.txt /path/to/outputResultsFile.txt&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== AMS evaluation software ==&lt;br /&gt;
The legacy software for performing various AMS related functions is available [https://www.music-ir.org/mirex/results/2010/AMS_tools.zip here] ([https://www.music-ir.org/mirex/results/2010/AMS_TOOLS_README%20.txt README file]). It may be used to benchmark systems prior to submission and to check distance matrix file formats.&lt;br /&gt;
&lt;br /&gt;
This tool set supports the following functions:&lt;br /&gt;
* the import of collection metadata from a delimited text file (e.g. TAB or CSV)&lt;br /&gt;
* the selection of a stratified random list of queries from the collection (i.e. an equal number of queries are chosen for each class of a particular metadata field, such as genre).&lt;br /&gt;
* the generation of results from distance matrices based on a list of pre-chosen queries.&lt;br /&gt;
* (pseudo-)objective statistical evaluation of distance matrices by comparing query metadata to the metadata of the top N results retrieved. Supports artist, album, genre and artist-filtered genre (where results from the same artist as the query are skipped). Additionally, the number of tracks never returned as results for any possible query (orphans) and the largest hub (the track similar to the most other tracks) are measured. Finally, the number of cases where the triangle inequality holds is counted.&lt;br /&gt;
* preparation and post-processing of results for the IMIRSEL Evalutron 6k human evaluation interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission opening date ==&lt;br /&gt;
&lt;br /&gt;
Friday 4th June 2010&lt;br /&gt;
&lt;br /&gt;
== Submission closing date ==&lt;br /&gt;
TBA&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011_talk:MIREX_Home&amp;diff=7763</id>
		<title>2011 talk:MIREX Home</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011_talk:MIREX_Home&amp;diff=7763"/>
		<updated>2010-08-25T11:10:11Z</updated>

		<summary type="html">&lt;p&gt;Bfields: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Just wanted to note that I cleaned up the welcome text to actually refer to ISMIR 2011's dates and locations instead of ISMIR 2010's (Utrecht -&amp;gt; Miami, August -&amp;gt; October)&lt;br /&gt;
&lt;br /&gt;
--[[User:Bfields|Bfields]] 11:09, 25 August 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011_talk:MIREX_Home&amp;diff=7762</id>
		<title>2011 talk:MIREX Home</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011_talk:MIREX_Home&amp;diff=7762"/>
		<updated>2010-08-25T11:09:54Z</updated>

		<summary type="html">&lt;p&gt;Bfields: Created page with 'Just wanted to note that I cleaned up the welcome text to actually refer to ISMIR 2011's dates and locations instead of ISMIR 2010's (Utrecht -&amp;gt; Miami, August -&amp;gt; October) --~~~~'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Just wanted to note that I cleaned up the welcome text to actually refer to ISMIR 2011's dates and locations instead of ISMIR 2010's (Utrecht -&amp;gt; Miami, August -&amp;gt; October)&lt;br /&gt;
--[[User:Bfields|Bfields]] 11:09, 25 August 2010 (UTC)&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011:MIREX_Home&amp;diff=7761</id>
		<title>2011:MIREX Home</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011:MIREX_Home&amp;diff=7761"/>
		<updated>2010-08-25T11:08:23Z</updated>

		<summary type="html">&lt;p&gt;Bfields: /* Welcome to MIREX 2011 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2011==&lt;br /&gt;
This is the main page for the seventh running of the Music Information Retrieval Evaluation eXchange (MIREX 2011). The International Music Information Retrieval Systems Evaluation Laboratory ([https://music-ir.org/evaluation IMIRSEL]) at the Graduate School of Library and Information Science ([http://www.lis.illinois.edu GSLIS]), University of Illinois at Urbana-Champaign ([http://www.illinois.edu UIUC]) is the principal organizer of MIREX 2011. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2011 community will hold its annual meeting as part of [http://ismir2011.ismir.net/ The 12th International Conference on Music Information Retrieval], ISMIR 2011, which will be held in Miami, Florida, the week of October 23rd, 2011. The MIREX plenary (working lunch) and poster sessions will be held at a time to be determined during the conference.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===MIREX 2011 Evaluation Tasks===&lt;br /&gt;
&lt;br /&gt;
* [[2011:Audio Classification (Train/Test) Tasks]], incorporating:&lt;br /&gt;
** Audio Artist Identification&lt;br /&gt;
** Audio US Pop Genre Classification&lt;br /&gt;
** Audio Latin Genre Classification&lt;br /&gt;
** Audio Music Mood Classification&lt;br /&gt;
** Audio Classical Composer Identification&lt;br /&gt;
* [[2011:Audio Cover Song Identification]]&lt;br /&gt;
* [[2011:Audio Tag Classification]] &lt;br /&gt;
* [[2011:Audio Music Similarity and Retrieval]]&lt;br /&gt;
* [[2011:Symbolic Melodic Similarity]]&lt;br /&gt;
* [[2011:Audio Onset Detection]]&lt;br /&gt;
* [[2011:Audio Key Detection]]&lt;br /&gt;
* [[2011:Real-time Audio to Score Alignment (a.k.a Score Following)]]&lt;br /&gt;
* [[2011:Query by Singing/Humming]]&lt;br /&gt;
* [[2011:Audio Melody Extraction]]&lt;br /&gt;
* [[2011:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]]&lt;br /&gt;
* [[2011:Audio Chord Estimation]]&lt;br /&gt;
* [[2011:Query by Tapping]]&lt;br /&gt;
* [[2011:Audio Beat Tracking]]&lt;br /&gt;
* [[2011:Structural Segmentation]]&lt;br /&gt;
* [[2011:Audio Tempo Estimation]]&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review article that explains the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format, describing the submitted programme(s), when submitting them, to help us and the community better understand how the algorithm works.&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2011 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same)&lt;br /&gt;
# present a poster at the MIREX 2011 poster session at ISMIR 2011&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before or are unsure whether IMIRSEL/NEMA currently supports some of the software/architecture dependencies for your submission a [https://spreadsheets.google.com/embeddedform?formkey=dDltRjc4NDBDdkZiaF9qZXV0bU5ScUE6MA dependency request form is available]. Please submit details of your dependencies on this form and the IMIRSEL team will attempt to satisfy them for you. &lt;br /&gt;
&lt;br /&gt;
Due to the high volume of submissions expected at MIREX 2011, submissions with difficult-to-satisfy dependencies of which the team has not been given sufficient notice may be rejected.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2011==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2011 the best yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2011 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2011, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest for discussion of MIREX task proposals and other MIREX related issues. This wiki (the MIREX 2011 wiki) will be used to embody and disseminate task proposals; however, task related discussions should be conducted on the EvalFest mailing list rather than on this wiki, and summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework. This framework will be released to the community at or before ISMIR 2011, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
Please create an account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2010 Wikis==&lt;br /&gt;
Content from the MIREX 2005 - 2010 wikis is available at:&lt;br /&gt;
&lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Bfields</name></author>
		
	</entry>
</feed>