<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Heywhoah</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Heywhoah"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Heywhoah"/>
	<updated>2026-04-30T10:12:45Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2011:Audio_Similarity_2011_Graders&amp;diff=8218</id>
		<title>2011:Audio Similarity 2011 Graders</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2011:Audio_Similarity_2011_Graders&amp;diff=8218"/>
		<updated>2011-09-28T16:51:22Z</updated>

		<summary type="html">&lt;p&gt;Heywhoah: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=AMS 2011 Graders=&lt;br /&gt;
&lt;br /&gt;
Welcome to the AMS grader sign-up page. Please give us your name and email contact information. If you obscure your email, please make it relatively obvious to us how to parse the address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Template:&amp;lt;/b&amp;gt; Name. Location. &amp;lt;Email&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sample:&amp;lt;/b&amp;gt; J. Stephen Downie. Illinois, USA. &amp;lt;jdownie@illinois.edu&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Special Comments==&lt;br /&gt;
We are under time constraints this year because ISMIR 2011 &lt;br /&gt;
begins Monday, 24th of October. We need to have all the final  &lt;br /&gt;
results calculated and posted by the 14 October target date (fingers crossed).&lt;br /&gt;
&lt;br /&gt;
We hope to open the Evalutron 6000 (E6K) v.2 grading system by Friday, 30th&lt;br /&gt;
September. To meet our goal, we must have all the AMS and SMS &lt;br /&gt;
similarity grades entered into the E6K by Wednesday, Oct. 12th. So, if you are kind enough to sign up to be a grader, please understand that we really need you to complete your assigned grading &lt;br /&gt;
by Wednesday, Oct. 12th.&lt;br /&gt;
&lt;br /&gt;
If you are a SMS or AMS participant, we ask that you do what you can to &lt;br /&gt;
encourage adults over 18 years of age to be graders.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We are looking for 50 graders for AMS this year. If we make our quota of 50 graders, each grader will be responsible for two query lists. If we fall short and get around 34 graders, we will be asking each grader to grade 3 queries. In this worst-case scenario, we still expect the grading process to take between 2.5 and 3 hours (or less) for each grader.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This year the query lists seem to be moderate in length so tackling two query lists should not be too onerous. For safety's sake, we would like to see, say, an extra 5 or so names on the sign up sheet below. The &amp;quot;extra names&amp;quot; on the sign up sheet will be considered &amp;quot;back up&amp;quot; graders. We will assign grading tasks in the order of the names as they appear below.&lt;br /&gt;
&lt;br /&gt;
=Sign Up Area=&lt;br /&gt;
&lt;br /&gt;
# Brian McFee. California, USA. &amp;lt;bmcfee@cs.ucsd.edu&amp;gt;&lt;br /&gt;
# Steve Tjoa. San Francisco, CA, USA. &amp;lt;steve at imagine-research com&amp;gt;&lt;br /&gt;
# Jia-Min Ren. Hsinchu, Taiwan. &amp;lt;jmren at mirlab org&amp;gt;&lt;br /&gt;
# Sally Jo Cunningham, Hamilton, New Zealand.  &amp;lt;sallyjo@cs.waikato.ac.nz&amp;gt;&lt;br /&gt;
# Sungkyun Chang. Suwon, Korea. &amp;lt;rayno1 at snu.ac.kr&amp;gt;&lt;br /&gt;
# Yin-Tzu Lin. Taipei, Taiwan. &amp;lt;known at cmlab.csie.ntu.edu.tw&amp;gt;&lt;br /&gt;
# Franz de Leon. Southampton, UK. &amp;lt;fadl1d09@ecs.soton.ac.uk&amp;gt;&lt;br /&gt;
# Simone Sammartino. Málaga, Spain. &amp;lt;ssammartino@ic.uma.es&amp;gt;&lt;br /&gt;
# Arthur Flexer, OFAI, Austria &amp;lt;arthur.flexer at ofai.at&amp;gt;&lt;br /&gt;
# Dominik Schnitzer, OFAI, Austria &amp;lt;dominik.schnitzer at ofai.at&amp;gt;&lt;br /&gt;
# Jan Schlueter, OFAI, Austria &amp;lt;jan.schlueter at ofai.at&amp;gt;&lt;br /&gt;
# Cristina de la Bandera. Málaga, Spain. &amp;lt;cdelabandera@ic.uma.es&amp;gt;&lt;br /&gt;
# Bart Stasiak. Lodz, Poland. &amp;lt;basta -@- ics.p.lodz.pl&amp;gt;&lt;br /&gt;
# Thierry Bertin-Mahieux. New York, USA. &amp;lt;tb2332@columbia.edu&amp;gt;&lt;br /&gt;
# Benjamin Martin. Bordeaux, France. &amp;lt;benjamin.martin@labri.fr&amp;gt;&lt;br /&gt;
# Ruofeng Chen. Georgia, USA. &amp;lt;ruofengchen (at) gatech (dot) edu&amp;gt;&lt;br /&gt;
# Bo Xie. Atlanta, USA. &amp;lt;bo.xie (at) gatech (dot) edu&amp;gt;&lt;br /&gt;
# Ming Li. Beijing, China. &amp;lt;liming.ioa (at) gmail dot com&amp;gt;&lt;br /&gt;
# Chung-Che Wang. Hsinchu, Taiwan. &amp;lt;geniusturtle (at) mirlab dot org&amp;gt;&lt;br /&gt;
# Peter Knees, cp.jku, Austria. &amp;lt;peter.knees (at) jku.at&amp;gt;&lt;br /&gt;
# Markus Schedl, cp.jku, Austria &amp;lt;markus.schedl (at) jku.at&amp;gt;&lt;br /&gt;
# Audrey Laplante. Montréal, Canada. &amp;lt;audrey.laplante (at) umontreal.ca&amp;gt;&lt;br /&gt;
# Ajay Ramaseshan. Espoo, Finland. &amp;lt;ajayram (at) cis.hut.fi&amp;gt;&lt;br /&gt;
# Matt Hoffman. New York, New York, USA. &amp;lt;mdhoffma@cs.princeton.edu&amp;gt;&lt;/div&gt;</summary>
		<author><name>Heywhoah</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Audio_Similarity_2010_Graders&amp;diff=7237</id>
		<title>2010:Audio Similarity 2010 Graders</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Audio_Similarity_2010_Graders&amp;diff=7237"/>
		<updated>2010-07-14T01:02:01Z</updated>

		<summary type="html">&lt;p&gt;Heywhoah: /* Sign Up Area */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=AMS 2010 Graders=&lt;br /&gt;
&lt;br /&gt;
Welcome to the AMS grader sign-up page. Please give us your name and email contact information. If you obscure your email, please make it relatively obvious to us how to parse the address.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Template:&amp;lt;/b&amp;gt; Name. Location. &amp;lt;Email&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Sample:&amp;lt;/b&amp;gt; J. Stephen Downie. Illinois, USA. &amp;lt;jdownie@illinois.edu&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Special Comments=&lt;br /&gt;
We are under extraordinary time constraints this year because ISMIR 2010 &lt;br /&gt;
is being held starting 9 August and we need to have all the final &lt;br /&gt;
results calculated and posted by our 2 August target date (fingers crossed).&lt;br /&gt;
&lt;br /&gt;
We hope to open the Evalutron 6000 (E6K) v.2 grading system by 20 &lt;br /&gt;
July. To meet our 2 August goal, we must have all the AMS and SMS &lt;br /&gt;
similarity grades entered into the E6K by 27 July. YES, THIS GIVES US &lt;br /&gt;
ONLY ONE WEEK! So, if you are kind enough to sign up to be a grader, &lt;br /&gt;
please understand that we really need you to complete your assigned grading &lt;br /&gt;
by 27 July.&lt;br /&gt;
&lt;br /&gt;
If you are a SMS or AMS participant, we ask that you do what you can to &lt;br /&gt;
encourage adults over 18 years of age to be graders.&lt;br /&gt;
&lt;br /&gt;
We are looking for &amp;lt;b&amp;gt;50&amp;lt;/b&amp;gt; graders for AMS this year. Each grader will be responsible for two query lists. This year the query lists seem to be moderate in length so tackling two query lists should not be too onerous. For safety's sake, we would like to see, say, an extra five or so names on the sign up sheet below. The &amp;quot;extra names&amp;quot; on the sign up sheet will be considered &amp;quot;back up&amp;quot; graders. We will assign grading tasks in the order of the names as they appear below.&lt;br /&gt;
&lt;br /&gt;
=Sign Up Area=&lt;br /&gt;
Martin Ariel Hartmann. Buenos Aires, Argentina. &amp;lt;martin.hartmann@jyu.fi&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sally Jo Cunningham. Hamilton, New Zealand. &amp;lt;sallyjo@cs.waikato.ac.nz&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Matt Hoffman. New York, USA. &amp;lt;mdhoffma@cs.princeton.edu&amp;gt;&lt;/div&gt;</summary>
		<author><name>Heywhoah</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2010:Audio_Music_Similarity_and_Retrieval&amp;diff=7147</id>
		<title>2010:Audio Music Similarity and Retrieval</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2010:Audio_Music_Similarity_and_Retrieval&amp;diff=7147"/>
		<updated>2010-06-05T13:22:06Z</updated>

		<summary type="html">&lt;p&gt;Heywhoah: Added link to 2009 page and results.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
As the size of digital music collections grows, music similarity has an increasingly important role as an aid to music discovery.  A music similarity system can help a music consumer find new music by finding the music that is most musically similar to specific query songs (or is nearest to songs that the consumer already likes).  &lt;br /&gt;
&lt;br /&gt;
This page presents the Audio Music Similarity Evaluation, including the submission rules and formats. Additionally, background information can be found here that should help explain some of the reasoning behind the approach taken in the evaluation. The intention of the Music Audio Search track is to evaluate music similarity searches (a music search engine that takes a single song as a query, a.k.a. query-by-example), not playlist generation or music recommendation.&lt;br /&gt;
&lt;br /&gt;
The Audio Music Similarity and Retrieval task has been run in MIREX 2009, 2007, and 2006. &lt;br /&gt;
&lt;br /&gt;
[[2009:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2009]] || [[2009:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2007:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2007]] || [[2007:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
[[2006:Audio_Music_Similarity_and_Retrieval|Audio Music Similarity and Retrieval task in MIREX 2006]] || [[2006:Audio_Music_Similarity_and_Retrieval_Results|Results]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Task specific mailing list ===&lt;br /&gt;
In the past we have used a specific mailing list for the discussion of this task and related tasks (e.g., [[2010:Audio Classification (Train/Test) Tasks]], [[2010:Audio Cover Song Identification]], [[2010:Audio Tag Classification]], [[2010:Audio Music Similarity and Retrieval]]). This year, however, we are asking that all discussions take place on the MIREX  [https://mail.lis.illinois.edu/mailman/listinfo/evalfest &amp;quot;EvalFest&amp;quot; list]. If you have a question or comment, simply include the task name in the subject heading.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Collection statistics: 7000 30-second audio clips drawn from 10 genres (700 clips from each genre).&lt;br /&gt;
&lt;br /&gt;
The genres that the data was drawn from are:&lt;br /&gt;
*Blues&lt;br /&gt;
*Jazz&lt;br /&gt;
*Country/Western&lt;br /&gt;
*Baroque&lt;br /&gt;
*Classical&lt;br /&gt;
*Romantic&lt;br /&gt;
*Electronica&lt;br /&gt;
*Hip-Hop&lt;br /&gt;
*Rock&lt;br /&gt;
*HardRock/Metal&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Audio formats ===&lt;br /&gt;
Participating algorithms will have to read audio in the following format:&lt;br /&gt;
&lt;br /&gt;
* Sample rate: 22 kHz&lt;br /&gt;
* Sample size: 16 bit&lt;br /&gt;
* Number of channels: 1 (mono)&lt;br /&gt;
* Encoding: WAV&lt;br /&gt;
* Clip length: 30 seconds from the middle of each file&lt;br /&gt;
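As an illustrative aside, a clip's conformance to this spec can be checked with the Python standard-library wave module. This is a minimal sketch of our own (the function name is a placeholder, not part of the MIREX tooling, and it assumes the 22 kHz rate denotes the usual 22050 Hz):

```python
import wave

# Hypothetical helper, not part of the MIREX tooling: return True when a
# clip matches the required format (22050 Hz, 16-bit, mono WAV).
def clip_matches_spec(path):
    with wave.open(path, "rb") as w:
        rate_ok = w.getframerate() == 22050
        width_ok = w.getsampwidth() == 2   # 16-bit samples are 2 bytes wide
        mono_ok = w.getnchannels() == 1
        return rate_ok and width_ok and mono_ok
```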
&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
Two distinct evaluations will be performed:&lt;br /&gt;
* Human Evaluation&lt;br /&gt;
* Objective statistics derived from the results lists&lt;br /&gt;
&lt;br /&gt;
Note that at MIREX 2006 participating algorithms were required to return full distance matrices showing the distance between all tracks; in subsequent years, however, we have also supported a sparse distance matrix format (detailed below) where only the distances of the top 100 results for each query in the collection are returned.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Human Evaluation ===&lt;br /&gt;
The primary evaluation will involve subjective judgments by human evaluators of the retrieved sets using IMIRSEL's Evalutron 6000 system. This year algorithms will be presented with the same 30 second preview clip that will be reviewed by the human evaluators. &lt;br /&gt;
&lt;br /&gt;
* Evaluator question: Given a search based on track A, the following set of results was returned by all systems. Please place each returned track into one of three classes (not similar, somewhat similar, very similar) and provide an indication, on a continuous scale of 0-10, of how similar the track is to the query. &lt;br /&gt;
* ~120 randomly selected queries,  5 results per query, 1 set of eyes, ~10 participating labs&lt;br /&gt;
* Higher number of queries preferred as IR research indicates variance is in queries&lt;br /&gt;
* Songs by the same artist as the query will be filtered out of each result list (artist filtering) to avoid colouring an evaluator's judgement (a cover song or a song by the same artist in a result list is likely to reduce the relative ranking of other similar but independent songs; use of songs by the same artist may also allow over-fitting to affect the results)&lt;br /&gt;
* It will be possible for researchers to use this data for other types of system comparisons after MIREX 2010 results have been finalized.&lt;br /&gt;
* Human evaluation to be designed and led by IMIRSEL following a similar format to that used at MIREX 2006 (see: [[2006:Evalutron6000_Issues|Evalutron Issues in MIREX 2006]]).&lt;br /&gt;
* Human evaluators will be drawn from the participating labs (and any volunteers from IMIRSEL or on the MIREX lists)&lt;br /&gt;
&lt;br /&gt;
=== Objective Statistics derived from the distance matrix ===&lt;br /&gt;
Statistics of each distance matrix will be calculated including:&lt;br /&gt;
&lt;br /&gt;
* Average % of Genre, Artist and Album matches in the top 5, 10, 20 &amp;amp; 50 results - Precision at 5, 10, 20 &amp;amp; 50&lt;br /&gt;
* Average % of Genre matches in the top 5, 10, 20 &amp;amp; 50 results after artist filtering of results&lt;br /&gt;
* Average % of available Genre, Artist and Album matches in the top 5, 10, 20 &amp;amp; 50 results - Recall at 5, 10, 20 &amp;amp; 50 (just normalising scores when less than 20 matches for an artist, album or genre are available in the database)&lt;br /&gt;
* Always similar - Maximum # times a file was in the top 5, 10, 20 &amp;amp; 50 results&lt;br /&gt;
* % File never similar (never in a top 5, 10, 20 &amp;amp; 50 result list)&lt;br /&gt;
* % of 'test-able' song triplets where triangular inequality holds&lt;br /&gt;
** Note that as we are not requiring full distance matrices this year we will only be testing triangles that are found in the sparse distance matrix.&lt;br /&gt;
* Plot of the &amp;quot;number of times similar&amp;quot; curve: song number vs. the number of times the song appeared in a top-20 list, with songs sorted by that count (to produce the curve). Systems with a sharp rise at the end of this plot have &amp;quot;hubs&amp;quot;, while a long zero tail indicates many never-similar results.&lt;br /&gt;
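As a sketch of how the triangle-inequality statistic might be computed over a sparse matrix (the function name and data layout are our own assumptions, not MIREX code), suppose the distances have been loaded into a dict keyed by sorted (track, track) pairs:

```python
import itertools
import operator

# Illustrative sketch, not MIREX code: given distances keyed by pairs of
# filenames (each pair stored in sorted order), return the percentage of
# fully test-able triplets where the triangle inequality holds.
def triangle_inequality_pct(dist):
    tested = held = 0
    tracks = sorted({t for pair in dist for t in pair})
    for a, b, c in itertools.combinations(tracks, 3):
        # a triplet is test-able only if all three pairwise distances
        # appear in the sparse matrix
        try:
            ab, bc, ac = dist[(a, b)], dist[(b, c)], dist[(a, c)]
        except KeyError:
            continue
        tested += 1
        # the triangle holds when each side is at most the sum of the
        # other two (operator.le tests "less than or equal")
        if (operator.le(ac, ab + bc) and operator.le(ab, ac + bc)
                and operator.le(bc, ab + ac)):
            held += 1
    return 100.0 * held / tested if tested else 0.0
```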
&lt;br /&gt;
&lt;br /&gt;
=== Runtimes ===&lt;br /&gt;
In addition, computation times for feature extraction/index-building and querying &lt;br /&gt;
will be measured.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission format ==&lt;br /&gt;
Submission to this task will have to conform to a specified format detailed below. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Implementation details ===&lt;br /&gt;
Scratch folders will be provided for all submissions for the storage of feature files and any model or index files to be produced. Executables will have to accept the path to their scratch folder as a command line parameter. Executables will also have to track which feature files correspond to which audio files internally. To facilitate this process, unique filenames will be assigned to each audio track.&lt;br /&gt;
&lt;br /&gt;
The audio files to be used in the task will be specified in a simple ASCII list file. This file will contain one path per line with no header line. Executables will have to accept the path to these list files as a command line parameter. The formats for the list files are specified below. &lt;br /&gt;
&lt;br /&gt;
Multi-processor compute nodes (2, 4 or 8 cores) will be used to run this task. Hence, participants could attempt to use parallelism. Ideally, the number of threads to use should be specified as a command line parameter. Alternatively, implementations may be provided in hard-coded 2, 4 or 8 thread configurations. Single threaded submissions will, of course, be accepted but may be disadvantaged by time constraints.&lt;br /&gt;
&lt;br /&gt;
Submissions will have to output either a full distance matrix or a search results file with the top 100 search results for each track in the collection. This list of results will be used to extract the artist-filtered results to present to the human evaluators and will facilitate the computation of the objective statistics.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== I/O formats ===&lt;br /&gt;
This section describes the input and output files used in this task, as&lt;br /&gt;
well as the command-line calling format requirements for submissions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Audio collection list file (input)====&lt;br /&gt;
The list file passed for feature extraction and indexing will be a simple ASCII list file. This file will contain one path per line with no header line, all paths will be absolute (full paths).&lt;br /&gt;
&lt;br /&gt;
e.g.&lt;br /&gt;
&lt;br /&gt;
   /aDirectory/collectionFolder/b002342.wav&lt;br /&gt;
   /aDirectory/collectionFolder/a005921.wav&lt;br /&gt;
   ...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Distance matrix output files ====&lt;br /&gt;
Participants should return one of two available output file formats, a full distance matrix or a sparse distance matrix. The sparse distance matrix format is preferred (as the dense distance matrices can be very large).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Sparse Distance Matrix =====&lt;br /&gt;
If computation time or exhaustive search is a concern, or a full matrix is not a normal output of the indexing algorithm employed, the sparse distance matrix format detailed below may be used:&lt;br /&gt;
&lt;br /&gt;
A simple ASCII file listing a name for the algorithm and the top 100 search results for every track in the collection. &lt;br /&gt;
&lt;br /&gt;
This file should start with a header line giving a name for the algorithm, followed by the results for one query per line, prefixed by the filename portion of the query path. This should be followed by a tab character and a tab-separated, ordered list of the top 100 search results. Each result should include the result filename (e.g. a034728.wav) and the distance (e.g. 17.1 or 0.23) separated by a comma.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MyAlgorithm (my.email@address.com)&lt;br /&gt;
&amp;lt;example 1 filename&amp;gt;\t&amp;lt;result 1 name&amp;gt;,&amp;lt;result 1 distance&amp;gt;\t&amp;lt;result 2 name&amp;gt;,&amp;lt;result 2 distance&amp;gt;\t ... \t&amp;lt;result 100 name&amp;gt;,&amp;lt;result 100 distance&amp;gt;&lt;br /&gt;
&amp;lt;example 2 filename&amp;gt;\t&amp;lt;result 1 name&amp;gt;,&amp;lt;result 1 distance&amp;gt;\t&amp;lt;result 2 name&amp;gt;,&amp;lt;result 2 distance&amp;gt;\t ... \t&amp;lt;result 100 name&amp;gt;,&amp;lt;result 100 distance&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
MyAlgorithm (my.email@address.com)&lt;br /&gt;
a009342.wav	b229311.wav,0.16	a023821.wav,0.19	a001329.wav,0.24  ... etc.&lt;br /&gt;
a009343.wav	a661931.wav,0.12	a043322.wav,0.17	c002346.wav,0.21  ... etc.&lt;br /&gt;
a009347.wav	a671239.wav,0.13	c112393.wav,0.20	b083293.wav,0.25  ... etc.&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The path to which this list file should be written must be accepted as a parameter on the command line.&lt;br /&gt;
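For illustration only, writing the sparse format described above takes just a few lines. This Python sketch uses placeholder names of our own and assumes the results are held as a dict mapping each query filename to an ordered list of (result filename, distance) pairs, nearest first:

```python
# Minimal sketch, not an official MIREX tool: write the sparse distance
# matrix format (header line, then one tab-separated line per query).
def write_sparse_matrix(path, algorithm_header, results):
    # results maps each query filename to an ordered list of
    # (result_filename, distance) pairs, nearest first
    with open(path, "w") as out:
        out.write(algorithm_header + "\n")
        for query, hits in results.items():
            cells = ["{0},{1}".format(name, dist) for name, dist in hits]
            out.write(query + "\t" + "\t".join(cells) + "\n")
```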
&lt;br /&gt;
&lt;br /&gt;
===== Full Distance Matrix =====&lt;br /&gt;
Full distance matrix files should be generated in the following format: &lt;br /&gt;
&lt;br /&gt;
* A simple ASCII file listing a name for the algorithm on the first line,&lt;br /&gt;
* Numbered paths for each file appearing in the matrix; these can be in any order (i.e. the files don't have to be in the same order as they appeared in the list file) but should index into the columns/rows of the distance matrix.&lt;br /&gt;
* A line beginning with 'Q/R' followed by a tab and a tab-separated list of the numbers 1 to N, where N is the number of files covered by the matrix.&lt;br /&gt;
* One line per file in the matrix giving the distances of that file to each other file in the matrix. All distances should be zero or positive (0.0+) and should not be infinite or NaN. Values should be separated by a single tab character. Obviously, the diagonal of the matrix (the distance of a track to itself) should be zero.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Distance matrix header text with system name&lt;br /&gt;
1\t&amp;lt;/path/to/audio/file/1.wav&amp;gt;&lt;br /&gt;
2\t&amp;lt;/path/to/audio/file/2.wav&amp;gt;&lt;br /&gt;
3\t&amp;lt;/path/to/audio/file/3.wav&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
N\t&amp;lt;/path/to/audio/file/N.wav&amp;gt;&lt;br /&gt;
Q/R\t1\t2\t3\t...\tN&lt;br /&gt;
1\t0.0\t&amp;lt;dist 1 to 2&amp;gt;\t&amp;lt;dist 1 to 3&amp;gt;\t...\t&amp;lt;dist 1 to N&amp;gt;&lt;br /&gt;
2\t&amp;lt;dist 2 to 1&amp;gt;\t0.0\t&amp;lt;dist 2 to 3&amp;gt;\t...\t&amp;lt;dist 2 to N&amp;gt;&lt;br /&gt;
3\t&amp;lt;dist 3 to 1&amp;gt;\t&amp;lt;dist 3 to 2&amp;gt;\t0.0\t...\t&amp;lt;dist 3 to N&amp;gt;&lt;br /&gt;
...\t...\t...\t...\t...\t...&lt;br /&gt;
N\t&amp;lt;dist N to 1&amp;gt;\t&amp;lt;dist N to 2&amp;gt;\t&amp;lt;dist N to 3&amp;gt;\t...\t0.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example distance matrix 0.1&lt;br /&gt;
1    /path/to/audio/file/1.wav&lt;br /&gt;
2    /path/to/audio/file/2.wav&lt;br /&gt;
3    /path/to/audio/file/3.wav&lt;br /&gt;
4    /path/to/audio/file/4.wav&lt;br /&gt;
Q/R   1        2        3        4&lt;br /&gt;
1     0.00000  1.24100  50.2e-4  0.42559&lt;br /&gt;
2     1.24100  0.00000  0.62640  0.23564&lt;br /&gt;
3     50.2e-4  0.62640  0.00000  0.38000&lt;br /&gt;
4     0.42559  0.23564  0.38000  0.00000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
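The constraints above (zero diagonal; finite, nonnegative distances) can be checked with a short helper. This sketch and its name are our own illustration, not MIREX code; it holds the matrix as a list of lists of floats:

```python
import math
import operator

# Illustrative helper, not part of the MIREX tooling: check that a full
# distance matrix has a zero diagonal and only finite, nonnegative entries.
def matrix_is_valid(m):
    n = len(m)
    for i in range(n):
        if m[i][i] != 0.0:
            return False          # distance of a track to itself must be 0
        for j in range(n):
            d = m[i][j]
            # reject NaN, infinity, and negative distances
            if math.isnan(d) or math.isinf(d) or operator.lt(d, 0.0):
                return False
    return True
```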
==== Example submission calling formats ====&lt;br /&gt;
   extractFeatures.sh /path/to/scratch/folder /path/to/collectionListFile.txt&lt;br /&gt;
   Query.sh /path/to/scratch/folder /path/to/collectionListFile.txt /path/to/outputResultsFile.txt&lt;br /&gt;
      &lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
   doAudioSim.sh -numThreads 8 /path/to/scratch/folder /path/to/collectionListFile.txt /path/to/outputResultsFile.txt&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. python, java, bash, matlab.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submission opening date ==&lt;br /&gt;
&lt;br /&gt;
Friday 4th June 2010&lt;br /&gt;
&lt;br /&gt;
== Submission closing date ==&lt;br /&gt;
TBA&lt;/div&gt;</summary>
		<author><name>Heywhoah</name></author>
		
	</entry>
</feed>