<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jyh-Shing+Roger+Jang</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jyh-Shing+Roger+Jang"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Jyh-Shing_Roger_Jang"/>
	<updated>2026-04-29T20:13:43Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10358</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10358"/>
		<updated>2014-08-12T15:08:55Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 1,000 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
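As a rough illustration (not part of the official procedure), the top-1 hit rate can be computed from a matcher result file and a ground-truth mapping; the function name and file contents below are hypothetical:

```python
# Sketch: compute top-1 hit rate from tab-separated result lines.
# ground_truth maps each query path to its correct database path.

def top1_hit_rate(result_lines, ground_truth):
    hits = 0
    total = 0
    for line in result_lines:
        query_path, db_path = line.rstrip("\n").split("\t")
        total += 1
        if ground_truth.get(query_path) == db_path:
            hits += 1
    return hits / total if total else 0.0

# Hypothetical example with one correct and one incorrect match:
truth = {
    "./AFP/query/q000001.wav": "./AFP/database/000004.mp3",
    "./AFP/query/q000002.wav": "./AFP/database/000054.mp3",
}
results = [
    "./AFP/query/q000001.wav\t./AFP/database/000004.mp3",
    "./AFP/query/q000002.wav\t./AFP/database/000099.mp3",
]
print(top1_hit_rate(results, truth))  # 0.5
```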
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
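As a sketch of the expected I/O (match_fn here is a placeholder for the submission's actual fingerprint lookup, not a real routine), a matcher could produce this file as follows:

```python
# Sketch: read a query list and write the tab-separated result file.
# match_fn stands in for the submission's actual fingerprint lookup.

def write_results(query_list_path, result_path, match_fn):
    with open(query_list_path) as f:
        queries = [line.strip() for line in f if line.strip()]
    with open(result_path, "w") as out:
        for q in queries:
            out.write(q + "\t" + match_fn(q) + "\n")
```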
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/50 of real time (For instance, a music clip of 4 minutes should take 4.8 sec on average to index. For a database of 10000 songs, the total time should be around 10000*4/60/50 = 13.33 hours.) || 50KB per minute of music. (For a database of 10000 songs, the total database storage should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 5 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 5*5692/3600 = 7.91 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10357</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10357"/>
		<updated>2014-08-12T14:56:06Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 1,000 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/50 of real time (For instance, a music clip of 4 minutes should take 4.8 sec on average to index. For a database of 10000 songs, the total time should be around 10000*4/60/50 = 13.33 hours.) || 50KB per minute of music. (For a database of 10000 songs, the total database storage should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10356</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10356"/>
		<updated>2014-08-12T14:43:35Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Submission Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 1,000 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, a music clip of 4 minutes should take 2.4 sec on average to index. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB per minute of music. (For a database of 10000 songs, the total database storage should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10355</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10355"/>
		<updated>2014-08-12T14:39:47Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Query set */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 1,000 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, a music clip of 4 minutes should take 2.4 sec on average to index. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB per minute of music. (For a database of 10000 songs, the total database storage should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10354</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10354"/>
		<updated>2014-08-12T14:38:27Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 1,000 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, a music clip of 4 minutes should take 2.4 sec on average to index. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB per minute of music. (For a database of 10000 songs, the total database storage should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10350</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10350"/>
		<updated>2014-08-07T01:01:33Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, a music clip of 4 minutes should take 2.4 sec on average to index. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB per minute of music. (For a database of 10000 songs, the total database storage should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
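The budget figures in the table can be sanity-checked with a little arithmetic (a back-of-the-envelope sketch; the 4-minute average song duration is the table's own assumption):

```python
# Sanity-check the budget arithmetic stated in the limits table.
SONGS = 10_000
MIN_PER_SONG = 4                      # assumed average song duration (minutes)
QUERIES = 5_692                       # total queries across both query sets

builder_hours = SONGS * MIN_PER_SONG / 100 / 60     # 1/100 of real time
storage_gb = 50 * MIN_PER_SONG * SONGS / 1_000_000  # 50 KB per minute of music
matcher_hours = 2 * QUERIES / 3600                  # 2 seconds per query

print(round(builder_hours, 1), round(storage_gb, 1), round(matcher_hours, 1))
# prints: 6.7 2.0 3.2
```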
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10349</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10349"/>
		<updated>2014-08-07T00:48:27Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, with the naming convention uniqueKey.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed into the directory %dir4db%. The size of the database file(s) is restricted to a certain amount, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more features extracted for AFP almost always lead to better accuracy, we need to set hard limits on memory and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, it should take 2.4 sec on average to construct the database. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10348</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10348"/>
		<updated>2014-08-07T00:46:27Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, with the naming convention uniqueKey.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed into the directory %dir4db%. The size of the database file(s) is restricted to a certain amount, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more features extracted for AFP almost always lead to better accuracy, we need to set hard limits on memory and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, it should take 2.4 sec on average to construct the database. For a database of 10000 songs, the total time should be less than 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be less than 50*10000*4/1000000 = 2 GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10347</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10347"/>
		<updated>2014-08-07T00:45:48Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, with the naming convention uniqueKey.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed into the directory %dir4db%. The size of the database file(s) is restricted to a certain amount, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more features extracted for AFP almost always lead to better accuracy, we need to set hard limits on memory and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, it should take 2.4 sec on average to construct the database. For a database of 10000 songs, the total time should be less than 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be less than 50*10000*4/1000000 = 2 GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5719 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10346</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10346"/>
		<updated>2014-08-07T00:43:25Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, with the naming convention uniqueKey.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed into the directory %dir4db%. The size of the database file(s) is restricted to a certain amount, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more features extracted for AFP almost always lead to better accuracy, we need to set hard limits on memory and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, it should take 2.4 sec on average to construct the database. For a database of 10000 songs, the total time should be less than 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be less than 50*10000*4/1000000 = 2 GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5719 queries of around 10 seconds, the total query time should be around 2*5719/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10345</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10345"/>
		<updated>2014-08-07T00:39:39Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, with the naming convention uniqueKey.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed into the directory %dir4db%. The size of the database file(s) is restricted to a certain amount, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more features extracted for AFP almost always lead to better accuracy, we need to set hard limits on memory and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, it should take 2.4 sec on average to construct the database. For a database of 10000 songs, the total time should be less than 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be less than 50*10000*4/1000000 = 2 GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10344</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10344"/>
		<updated>2014-08-07T00:33:18Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, with the naming convention uniqueKey.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed into the directory %dir4db%. The size of the database file(s) is restricted to a certain amount, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more features extracted for AFP almost always lead to better accuracy, we need to set hard limits on memory and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, it should take 2.4 sec on average to construct the database. For a database of 10000 songs, the total time should be less than 10000*4/60/100 = 6.7 hours.) || 3 GB for the database file(s)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10333</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10333"/>
		<updated>2014-07-27T07:11:50Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dbName%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
The output file %dbName% contains all the information of the database to be used for audio fingerprinting. (The size of the database file is restricted to a certain amount, as explained next.)&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dbName% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to set hard limits on storage and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB for the database file %afpDb%&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10332</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10332"/>
		<updated>2014-07-27T07:11:16Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dbName%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
The output file %dbName% contains all the information of the database to be used for audio fingerprinting. (The size of the database file is restricted to a certain amount, as explained next.)&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dbName% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to set hard limits on storage and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB for %afpDb%&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10331</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10331"/>
		<updated>2014-07-27T07:08:43Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dbName%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
The output file %dbName% contains all the information of the database to be used for audio fingerprinting. (The size of the database file is restricted to a certain amount, as explained next.)&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dbName% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to set hard limits on storage and runtime. The time/storage limits of the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10330</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10330"/>
		<updated>2014-07-27T07:05:43Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dbName%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
The output file %dbName% contains all the information of the database to be used for audio fingerprinting. (The size of the database file is restricted to a certain amount, as explained next.)&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dbName% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10329</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10329"/>
		<updated>2014-07-27T07:05:11Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dbName%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
The output file %dbName% contains all the information of the database to be used for audio fingerprinting. (The size of the database file is restricted to a certain amount, as explained next.)&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dbName% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
where these two fields are separated by a tab.&lt;br /&gt;
&lt;br /&gt;
Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10328</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10328"/>
		<updated>2014-07-27T06:57:28Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dbName%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
The output file %dbName% contains all the information of the database to be used for audio fingerprinting. (The size of the database file is restricted to a certain amount, as explained next.)&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dbName% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10327</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10327"/>
		<updated>2014-07-27T06:49:44Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by a unique key. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir4db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %db_dir% %file.query.list% %resultFile%&lt;br /&gt;
where %db_dir% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10326</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10326"/>
		<updated>2014-07-27T06:43:28Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1264 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via this link.&lt;br /&gt;
&lt;br /&gt;
The query set consists of mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. The recordings were captured with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %db_dir%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %db_dir%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %db_dir% %file.query.list% %resultFile%&lt;br /&gt;
where %db_dir% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
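For reference, the top-1 hit rate used in the evaluation can be computed from such a result file. A minimal sketch in Python (the function name and the ground-truth mapping are illustrative assumptions, not part of the submission format):

```python
# Hypothetical sketch: compute the top-1 hit rate from a matcher result file.
# ground_truth is an assumed dict mapping each query key (e.g. "q0001")
# to the correct database key (e.g. "00204").
def top1_hit_rate(result_file, ground_truth):
    hits = 0
    total = 0
    for line in open(result_file):
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        query, candidate = parts
        total += 1
        if ground_truth.get(query) == candidate:
            hits += 1
    return hits / total if total else 0.0
```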
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10325</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10325"/>
		<updated>2014-07-25T16:18:19Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) 10-sec clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These mono recordings, with a 44.1 kHz sampling rate and 16-bit resolution, were obtained via different brands of smartphones, at various locations and with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %db_dir%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %db_dir%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %db_dir% %file.query.list% %resultFile%&lt;br /&gt;
where %db_dir% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10324</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10324"/>
		<updated>2014-07-25T16:16:14Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) 10-sec clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These mono recordings, with a 44.1 kHz sampling rate and 16-bit resolution, were obtained via different brands of smartphones, at various locations and with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %db_dir%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %db_dir%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10323</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10323"/>
		<updated>2014-07-25T16:08:34Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Evaluation Procedures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) 10-sec clips of mp3 format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of mp3 format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained via different brands of smartphone, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10322</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10322"/>
		<updated>2014-07-25T16:07:57Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Query set */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) 10-sec clips of mp3 format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of mp3 format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained via different brands of smartphone, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10321</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10321"/>
		<updated>2014-07-25T16:01:24Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Submission Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 10-second clips of mp3 format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of mp3 format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained via different brands of smartphone, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10320</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10320"/>
		<updated>2014-07-25T16:00:15Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Submission Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 10-second clips of mp3 format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of mp3 format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained via different brands of smartphone, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10319</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10319"/>
		<updated>2014-07-25T15:58:30Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Evaluation Procedures */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 10-second clips of mp3 format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of mp3 format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained via different brands of smartphone, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the two parts of the query set, with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10318</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10318"/>
		<updated>2014-07-25T15:57:25Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Query set */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 10-second clips of mp3 format: These are hidden and not available for download&lt;br /&gt;
* 1264 10-sec clips of mp3 format: These recordings are noisy versions of George's music genre dataset. You can download the query set via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained with different brands of smartphones, at various locations and with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
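As a reference for how this metric can be scored from the matcher's result file described below, here is a minimal sketch in Python (the ground-truth file name and format are assumptions for illustration, not part of the official procedure):&lt;br /&gt;

```python
# Minimal sketch: top-1 hit rate from a matcher result file.
# Assumes a hypothetical ground-truth file whose lines read
# "query_key correct_db_key", mirroring the result-file format.

def top1_hit_rate(result_path, truth_path):
    truth = {}
    with open(truth_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                truth[parts[0]] = parts[1]
    hits = total = 0
    with open(result_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2 and parts[0] in truth:
                total += 1
                if truth[parts[0]] == parts[1]:
                    hits += 1
    return hits / total if total else 0.0
```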
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10317</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10317"/>
		<updated>2014-07-25T15:54:48Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Query */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 10-second clips in mp3 format: This part is hidden and not available for download&lt;br /&gt;
* 1264 10-second clips in mp3 format: This part is open and available for download via this link &lt;br /&gt;
&lt;br /&gt;
These recordings were obtained with different brands of smartphones, at various locations and with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10316</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10316"/>
		<updated>2014-07-25T15:50:35Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query ===&lt;br /&gt;
* 1,264 10-second clips&lt;br /&gt;
* mono, 44.1 kHz, 16 bit resolution&lt;br /&gt;
* Recorded by various brands of smartphones, containing environmental noise&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10315</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10315"/>
		<updated>2014-07-25T15:49:23Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.)&lt;br /&gt;
&lt;br /&gt;
=== Query ===&lt;br /&gt;
* 1,264 10-second clips&lt;br /&gt;
* mono, 44.1 kHz, 16 bit resolution&lt;br /&gt;
* Recorded by various brands of smartphones, containing environmental noise&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10314</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10314"/>
		<updated>2014-07-25T15:47:22Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database&lt;br /&gt;
&lt;br /&gt;
=== Query ===&lt;br /&gt;
* 1,264 10-second clips&lt;br /&gt;
* mono, 44.1 kHz, 16 bit resolution&lt;br /&gt;
* Recorded by various brands of smartphones, containing environmental noise&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10313</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10313"/>
		<updated>2014-07-25T15:46:34Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database&lt;br /&gt;
&lt;br /&gt;
=== Query ===&lt;br /&gt;
* 1,264 10-second clips&lt;br /&gt;
* mono, 44.1 kHz, 16 bit resolution&lt;br /&gt;
* Recorded by various brands of smartphones, containing environmental noise&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10312</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10312"/>
		<updated>2014-07-25T15:41:17Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task involves querying a database using exact but noisy recordings.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* 10,000 songs (*.mp3) in the database&lt;br /&gt;
&lt;br /&gt;
=== Query ===&lt;br /&gt;
* 1,264 10-second clips&lt;br /&gt;
* mono, 44.1 kHz, 16 bit resolution&lt;br /&gt;
* Recorded by various brands of smartphones, containing environmental noise&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
Top-1 hit rate&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file.db.list% %dir_db%&lt;br /&gt;
where %file.db.list% is the input list of database audio files, each named as uniq_key.wav. For example:&lt;br /&gt;
 ./AFP/database/00001.wav&lt;br /&gt;
 ./AFP/database/00002.wav&lt;br /&gt;
 ./AFP/database/00003.wav&lt;br /&gt;
 ./AFP/database/00004.wav&lt;br /&gt;
 ...&lt;br /&gt;
Output file(s) should be placed into %dir_db%&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %dir_db% %file.query.list% %resultFile%&lt;br /&gt;
where %dir_db% is the directory for the built database.&lt;br /&gt;
&lt;br /&gt;
%file.query.list% is the input list of query clips, for example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query. The format should be:&lt;br /&gt;
 %main_query_file_name% %main_top_1_candidate_file_name%&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 q0001 00204&lt;br /&gt;
 q0002 08964&lt;br /&gt;
 q0003 05566&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified. The time/storage limits of different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage (hard disk) limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || N/A&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9152</id>
		<title>2013:Query by Singing/Humming</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9152"/>
		<updated>2013-02-17T04:05:12Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2010 page. Please add your comments and discussions for 2013. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take sung or hummed queries from real-world users as input. More information can be found in:&lt;br /&gt;
&lt;br /&gt;
* [[2009:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2008:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2007:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2006:Query_by_Singing/Humming]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: Classic QBSH evaluation ===&lt;br /&gt;
This is the classic QBSH problem, where we need to find the ground-truth MIDI from a user's singing or humming.&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from MIR-QBSH and IOACAS corpora described below.&lt;br /&gt;
* '''Database''': ground-truth and noise MIDI files (which are monophonic), comprised of ground-truth MIDIs from the MIR-QBSH corpus (48) and the IOACAS corpus (106), along with a cleaned version of the Essen database (2000+ MIDIs, which are not available to participants) &lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: Variants QBSH evaluation ===&lt;br /&gt;
This is based on Prof. Downie's idea that queries are variants of &amp;quot;ground-truth&amp;quot; MIDIs. This subtask has become more important since user-contributed singing/humming is now an important part of the song database to be searched, as evidenced by the QBSH search service at [http://www.midomi.com/ www.midomi.com].&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and IOACAS corpus.&lt;br /&gt;
* '''Database''': human singing/humming snippets (.wav) from all available corpora (excluding the query input being searched).&lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
Following Rainer Typke's suggestion, participants are encouraged to submit separate tracker and matcher modules instead of integrated ones, so that algorithms can share intermediate steps. Trackers and matchers from different submissions could then work together through the same pre-defined interface, making it possible to find the best combination.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Currently we have 2 publicly available corpora for QBSH:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBSH-corpus.rar MIR-QBSH corpus], which is comprised of 4431 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs. Manually labeled pitch for each recording is available. &lt;br /&gt;
&lt;br /&gt;
* The [http://mirlab.org/dataSet/public/IOACAS_QBH.rar IOACAS corpus], comprised of 759 queries and 298 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no &amp;quot;singing from the beginning&amp;quot; guarantee.&lt;br /&gt;
&lt;br /&gt;
The noise MIDIs will be the 5000+ Essen collection (which can be accessed from http://www.esac-data.org/).&lt;br /&gt;
&lt;br /&gt;
To build a large test set that reflects real-world queries, it is suggested that every participant make a contribution to the evaluation corpus. Since this is sometimes hard in practice, we shall adopt a &amp;quot;no hidden dataset&amp;quot; policy if there are not enough user-contributed corpora.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Corpus Contribution ==&lt;br /&gt;
Every participant will be asked to contribute 100 to 200 WAV queries (8 kHz, 16-bit) as well as the ground-truth MIDIs as test data. &lt;br /&gt;
&lt;br /&gt;
A simple tool for recording query data will be made public soon. You may need .NET 2.0 or above installed on your system in order to run this program. The generated files conform to the format used in the IOACAS corpus. Of course, you are also welcome to use your own program to record the query data.&lt;br /&gt;
&lt;br /&gt;
If there are not enough user-contributed corpora, then we shall adopt &amp;quot;no hidden dataset&amp;quot; policy for QBSH task as usual.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
=== Breakdown Version ===&lt;br /&gt;
1. Database indexing/building. Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database midi files named as uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 ./QBSH/midiDatabase/00001.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00002.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00003.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (For task 2, %dbMidi.list% is in fact a list of wav files in the database.)&lt;br /&gt;
&lt;br /&gt;
2. Pitch tracker. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_tracker %queryWave.list% %dir_query_pitch%&lt;br /&gt;
&lt;br /&gt;
where %queryWave.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryWave/query_00001.wav&lt;br /&gt;
 queryWave/query_00002.wav&lt;br /&gt;
 queryWave/query_00003.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For each input file dir_query/query_xxxxx.wav in %queryWave.list%, a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch should be output, giving the pitch sequence in MIDI note scale at a resolution of 10 ms: &lt;br /&gt;
&lt;br /&gt;
 0&lt;br /&gt;
 0&lt;br /&gt;
 62.23&lt;br /&gt;
 62.25&lt;br /&gt;
 62.21&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Thus a query of x seconds should yield a pitch file with 100*x lines. Frames of silence/rest are set to 0.  &lt;br /&gt;
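This frame count can be sanity-checked as sketched below in Python, under the 10 ms frame-rate convention above (the function name and arguments are illustrative, not part of the submission interface):&lt;br /&gt;

```python
# Minimal sketch: check that a pitch file has 100 lines per second
# of query audio, with 0 marking silence/rest frames.

def check_pitch_file(pitch_path, duration_seconds):
    with open(pitch_path) as f:
        values = [float(line) for line in f if line.strip()]
    expected = int(round(duration_seconds * 100))  # 10 ms per frame
    voiced = sum(1 for v in values if v != 0.0)    # non-zero frames
    return len(values) == expected, voiced
```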
&lt;br /&gt;
3. Pitch matcher. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%&lt;br /&gt;
&lt;br /&gt;
where %queryPitch.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch&lt;br /&gt;
 queryPitch/query_00002.pitch&lt;br /&gt;
 queryPitch/query_00003.pitch&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
and the result file gives the top-10 candidates (if available) for each query: &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch: 00025 01003 02200 ... &lt;br /&gt;
 queryPitch/query_00002.pitch: 01547 02313 07653 ... &lt;br /&gt;
 queryPitch/query_00003.pitch: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
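The top-10 hit rate can then be scored from this result file roughly as follows (a minimal Python sketch; the ground-truth mapping passed in is a hypothetical input for illustration):&lt;br /&gt;

```python
# Minimal sketch: top-10 hit rate from the result file above.
# 'truth' maps each query pitch file to its correct database key.

def top10_hit_rate(result_path, truth):
    hits = total = 0
    with open(result_path) as f:
        for line in f:
            if ':' not in line:
                continue
            query, _, cands = line.partition(':')
            query = query.strip()
            if query in truth:
                total += 1
                if truth[query] in cands.split():
                    hits += 1
    return hits / total if total else 0.0
```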
&lt;br /&gt;
=== Integrated Version ===&lt;br /&gt;
If you want to pack everything together, the command format should be much simpler:&lt;br /&gt;
&lt;br /&gt;
 qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. The result file should have the same format as mentioned previously. (For task 2, %dbMidi.list% is in fact a list of wav files in the database to be retrieved.)&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file with the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. Python, Java, bash, MATLAB.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9151</id>
		<title>2013:Query by Singing/Humming</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9151"/>
		<updated>2013-02-17T04:00:04Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Subtask 1: Classic QBSH evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2010 page. Please add your comments and discussions for 2013. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take as input queries sung or hummed by real-world users. More information can be found in:&lt;br /&gt;
&lt;br /&gt;
* [[2009:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2008:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2007:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2006:Query_by_Singing/Humming]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: Classic QBSH evaluation ===&lt;br /&gt;
This is the classic QBSH problem: find the ground-truth MIDI from a user's singing or humming.&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from the MIR-QBSH and IOACAS corpora described below.&lt;br /&gt;
* '''Database''': ground-truth and noise MIDI files (all monophonic). Comprised of the ground-truth MIDIs from the MIR-QBSH corpus (48) and the IOACAS corpus (106), along with a cleaned version of the Essen database (2000+ MIDIs, not available to participants) &lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
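&lt;br /&gt;
The top-10 hit rate above can be computed from the candidate lists and a ground-truth mapping. A minimal sketch (both dict arguments are assumptions about how results and ground truth are stored, not part of the spec):&lt;br /&gt;

```python
# Sketch: mean top-10 hit rate. `results` maps each query to its ranked
# candidate list; `ground_truth` maps each query to its correct database key.
# A query scores 1 if the correct key appears in its top 10, else 0.
def top10_hit_rate(results, ground_truth):
    hits = 0
    for query, truth in ground_truth.items():
        candidates = results.get(query, [])[:10]
        if truth in candidates:
            hits += 1
    return hits / float(len(ground_truth))
```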
&lt;br /&gt;
=== Subtask 2: Variants QBSH evaluation ===&lt;br /&gt;
This is based on Prof. Downie's idea that queries are variants of &amp;quot;ground-truth&amp;quot; MIDI. This has become more important since user-contributed singing/humming is an important part of the song database to be searched, as evidenced by the QBSH search service at [http://www.midomi.com/ www.midomi.com].&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.&lt;br /&gt;
* '''Database''': human singing/humming snippets (.wav) from all available corpora (excluding the query input being searched).&lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
Following Rainer Typke's suggestion, participants are encouraged to submit separate tracker and matcher modules rather than integrated ones, so that algorithms can share intermediate steps. Trackers and matchers from different submissions could then work together through the same pre-defined interface, making it possible to find the best combination.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Currently we have 2 publicly available corpora for QBSH:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBSH-corpus.rar MIR-QBSH corpus], comprised of 4431 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs. A manually labeled pitch file for each recording is available. &lt;br /&gt;
&lt;br /&gt;
* The [http://mirlab.org/dataSet/public/IOACAS_QBH.rar IOACAS corpus], comprised of 759 queries and 298 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no &amp;quot;singing from beginning&amp;quot; guarantee.&lt;br /&gt;
&lt;br /&gt;
The noise MIDIs will be the 5000+ Essen collection (available from http://www.esac-data.org/).&lt;br /&gt;
&lt;br /&gt;
To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus. Since this is sometimes hard in practice, we shall adopt a &amp;quot;no hidden dataset&amp;quot; policy if there are not enough user-contributed corpora.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Corpus Contribution ==&lt;br /&gt;
Every participant will be asked to contribute 100~200 wave queries (8 kHz, 16-bit) as well as the ground-truth MIDIs as test data. &lt;br /&gt;
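&lt;br /&gt;
Contributed recordings can be checked against this format before submission. A minimal sketch using Python's standard wave module (the function name is illustrative, not part of the task interface):&lt;br /&gt;

```python
# Sketch: verify a contributed query wav is 8 kHz, 16-bit PCM as requested.
import wave

def matches_corpus_format(wav_path):
    with wave.open(wav_path, "rb") as w:
        # sampwidth is in bytes, so 16-bit audio has sampwidth 2
        return w.getframerate() == 8000 and w.getsampwidth() == 2
```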
&lt;br /&gt;
A simple tool for recording query data will be made public soon. You may need .NET 2.0 or above installed on your system to run this program. The generated files conform to the format used in the ThinkIT corpus. Of course, you are also welcome to use your own program to record the query data.&lt;br /&gt;
&lt;br /&gt;
If there are not enough user-contributed corpora, then we shall adopt &amp;quot;no hidden dataset&amp;quot; policy for QBSH task as usual.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
=== Breakdown Version ===&lt;br /&gt;
1. Database indexing/building. Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 ./QBSH/midiDatabase/00001.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00002.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00003.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (For task 2, %dbMidi.list% is in fact a list of wav files in the database.)&lt;br /&gt;
&lt;br /&gt;
2. Pitch tracker. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_tracker %queryWave.list% %dir_query_pitch%&lt;br /&gt;
&lt;br /&gt;
where %queryWave.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryWave/query_00001.wav&lt;br /&gt;
 queryWave/query_00002.wav&lt;br /&gt;
 queryWave/query_00003.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For each input file dir_query/query_xxxxx.wav listed in %queryWave.list%, the tracker outputs a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch, which gives the pitch sequence on the MIDI note scale at a resolution of 10 ms: &lt;br /&gt;
&lt;br /&gt;
 0&lt;br /&gt;
 0&lt;br /&gt;
 62.23&lt;br /&gt;
 62.25&lt;br /&gt;
 62.21&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Thus a query of x seconds should yield a pitch file with 100*x lines. Frames of silence/rest are set to 0.  &lt;br /&gt;
&lt;br /&gt;
3. Pitch matcher. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%&lt;br /&gt;
&lt;br /&gt;
where %queryPitch.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch&lt;br /&gt;
 queryPitch/query_00002.pitch&lt;br /&gt;
 queryPitch/query_00003.pitch&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
and the result file gives the top-10 candidates (if any) for each query: &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch: 00025 01003 02200 ... &lt;br /&gt;
 queryPitch/query_00002.pitch: 01547 02313 07653 ... &lt;br /&gt;
 queryPitch/query_00003.pitch: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
=== Integrated Version ===&lt;br /&gt;
If you pack everything together, the command format is much simpler:&lt;br /&gt;
&lt;br /&gt;
 qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. The result file should have the same format as mentioned previously. (For task 2, %dbMidi.list% is in fact a list of wav files in the database to be retrieved.)&lt;br /&gt;
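&lt;br /&gt;
A harness driving the integrated submission would simply build and run this command line. A minimal sketch (the executable name comes from the format above; the use of subprocess and the injectable runner are assumptions about the evaluation harness, not part of the spec):&lt;br /&gt;

```python
# Sketch: build the integrated-version command line and run it.
import subprocess

def run_integrated(db_list, query_list, result_file, workspace,
                   runner=subprocess.call):
    # argv order follows the spec:
    #   qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%
    cmd = ["qbshProgram", db_list, query_list, result_file, workspace]
    return runner(cmd)
```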
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. Python, Java, bash, MATLAB.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9150</id>
		<title>2013:Query by Singing/Humming</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9150"/>
		<updated>2013-02-17T03:51:04Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Subtask 1: Classic QBSH evaluation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2010 page. Please add your comments and discussions for 2013. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take as input queries sung or hummed by real-world users. More information can be found in:&lt;br /&gt;
&lt;br /&gt;
* [[2009:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2008:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2007:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2006:Query_by_Singing/Humming]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: Classic QBSH evaluation ===&lt;br /&gt;
This is the classic QBSH problem: find the ground-truth MIDI from a user's singing or humming.&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from the two corpora described below.&lt;br /&gt;
* '''Database''': ground-truth and noise MIDI files (all monophonic). Comprised of the 48+106 ground-truth MIDIs from Roger Jang's and ThinkIT's corpora, along with a cleaned version of the Essen database (2000+ MIDIs, as used last year) &lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: Variants QBSH evaluation ===&lt;br /&gt;
This is based on Prof. Downie's idea that queries are variants of &amp;quot;ground-truth&amp;quot; MIDI. This has become more important since user-contributed singing/humming is an important part of the song database to be searched, as evidenced by the QBSH search service at [http://www.midomi.com/ www.midomi.com].&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.&lt;br /&gt;
* '''Database''': human singing/humming snippets (.wav) from all available corpora (excluding the query input being searched).&lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
Following Rainer Typke's suggestion, participants are encouraged to submit separate tracker and matcher modules rather than integrated ones, so that algorithms can share intermediate steps. Trackers and matchers from different submissions could then work together through the same pre-defined interface, making it possible to find the best combination.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Currently we have 2 publicly available corpora for QBSH:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBSH-corpus.rar MIR-QBSH corpus], comprised of 4431 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs. A manually labeled pitch file for each recording is available. &lt;br /&gt;
&lt;br /&gt;
* The [http://mirlab.org/dataSet/public/IOACAS_QBH.rar IOACAS corpus], comprised of 759 queries and 298 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no &amp;quot;singing from beginning&amp;quot; guarantee.&lt;br /&gt;
&lt;br /&gt;
The noise MIDIs will be the 5000+ Essen collection (available from http://www.esac-data.org/).&lt;br /&gt;
&lt;br /&gt;
To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus. Since this is sometimes hard in practice, we shall adopt a &amp;quot;no hidden dataset&amp;quot; policy if there are not enough user-contributed corpora.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Corpus Contribution ==&lt;br /&gt;
Every participant will be asked to contribute 100~200 wave queries (8 kHz, 16-bit) as well as the ground-truth MIDIs as test data. &lt;br /&gt;
&lt;br /&gt;
A simple tool for recording query data will be made public soon. You may need .NET 2.0 or above installed on your system to run this program. The generated files conform to the format used in the ThinkIT corpus. Of course, you are also welcome to use your own program to record the query data.&lt;br /&gt;
&lt;br /&gt;
If there are not enough user-contributed corpora, then we shall adopt &amp;quot;no hidden dataset&amp;quot; policy for QBSH task as usual.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
=== Breakdown Version ===&lt;br /&gt;
1. Database indexing/building. Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 ./QBSH/midiDatabase/00001.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00002.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00003.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (For task 2, %dbMidi.list% is in fact a list of wav files in the database.)&lt;br /&gt;
&lt;br /&gt;
2. Pitch tracker. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_tracker %queryWave.list% %dir_query_pitch%&lt;br /&gt;
&lt;br /&gt;
where %queryWave.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryWave/query_00001.wav&lt;br /&gt;
 queryWave/query_00002.wav&lt;br /&gt;
 queryWave/query_00003.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For each input file dir_query/query_xxxxx.wav listed in %queryWave.list%, the tracker outputs a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch, which gives the pitch sequence on the MIDI note scale at a resolution of 10 ms: &lt;br /&gt;
&lt;br /&gt;
 0&lt;br /&gt;
 0&lt;br /&gt;
 62.23&lt;br /&gt;
 62.25&lt;br /&gt;
 62.21&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Thus a query of x seconds should yield a pitch file with 100*x lines. Frames of silence/rest are set to 0.  &lt;br /&gt;
&lt;br /&gt;
3. Pitch matcher. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%&lt;br /&gt;
&lt;br /&gt;
where %queryPitch.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch&lt;br /&gt;
 queryPitch/query_00002.pitch&lt;br /&gt;
 queryPitch/query_00003.pitch&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
and the result file gives the top-10 candidates (if any) for each query: &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch: 00025 01003 02200 ... &lt;br /&gt;
 queryPitch/query_00002.pitch: 01547 02313 07653 ... &lt;br /&gt;
 queryPitch/query_00003.pitch: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
=== Integrated Version ===&lt;br /&gt;
If you pack everything together, the command format is much simpler:&lt;br /&gt;
&lt;br /&gt;
 qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. The result file should have the same format as mentioned previously. (For task 2, %dbMidi.list% is in fact a list of wav files in the database to be retrieved.)&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. Python, Java, bash, MATLAB.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9149</id>
		<title>2013:Query by Singing/Humming</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2013:Query_by_Singing/Humming&amp;diff=9149"/>
		<updated>2013-02-17T03:48:03Z</updated>

		<summary type="html">&lt;p&gt;Jyh-Shing Roger Jang: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
&lt;br /&gt;
The text of this section is copied from the 2010 page. Please add your comments and discussions for 2013. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take as input queries sung or hummed by real-world users. More information can be found in:&lt;br /&gt;
&lt;br /&gt;
* [[2009:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2008:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2007:Query_by_Singing/Humming]]&lt;br /&gt;
* [[2006:Query_by_Singing/Humming]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Subtask 1: Classic QBSH evaluation ===&lt;br /&gt;
This is the classic QBSH problem: find the ground-truth MIDI from a user's singing or humming.&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and the ThinkIT corpus.&lt;br /&gt;
* '''Database''': ground-truth and noise MIDI files (all monophonic). Comprised of the 48+106 ground-truth MIDIs from Roger Jang's and ThinkIT's corpora, along with a cleaned version of the Essen database (2000+ MIDIs, as used last year) &lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
=== Subtask 2: Variants QBSH evaluation ===&lt;br /&gt;
This is based on Prof. Downie's idea that queries are variants of &amp;quot;ground-truth&amp;quot; MIDI. This has become more important since user-contributed singing/humming is an important part of the song database to be searched, as evidenced by the QBSH search service at [http://www.midomi.com/ www.midomi.com].&lt;br /&gt;
* '''Queries''': human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.&lt;br /&gt;
* '''Database''': human singing/humming snippets (.wav) from all available corpora (excluding the query input being searched).&lt;br /&gt;
* '''Output''': top-10 candidate list. &lt;br /&gt;
* '''Evaluation''': Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).&lt;br /&gt;
&lt;br /&gt;
Following Rainer Typke's suggestion, participants are encouraged to submit separate tracker and matcher modules rather than integrated ones, so that algorithms can share intermediate steps. Trackers and matchers from different submissions could then work together through the same pre-defined interface, making it possible to find the best combination.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
Currently we have 2 publicly available corpora for QBSH:&lt;br /&gt;
&lt;br /&gt;
* Roger Jang's [http://mirlab.org/dataSet/public/MIR-QBSH-corpus.rar MIR-QBSH corpus], comprised of 4431 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs. A manually labeled pitch file for each recording is available. &lt;br /&gt;
&lt;br /&gt;
* The [http://mirlab.org/dataSet/public/IOACAS_QBH.rar IOACAS corpus], comprised of 759 queries and 298 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no &amp;quot;singing from beginning&amp;quot; guarantee.&lt;br /&gt;
&lt;br /&gt;
The noise MIDIs will be the 5000+ Essen collection (available from http://www.esac-data.org/).&lt;br /&gt;
&lt;br /&gt;
To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus. Since this is sometimes hard in practice, we shall adopt a &amp;quot;no hidden dataset&amp;quot; policy if there are not enough user-contributed corpora.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Corpus Contribution ==&lt;br /&gt;
Every participant will be asked to contribute 100~200 wave queries (8 kHz, 16-bit) as well as the ground-truth MIDIs as test data. &lt;br /&gt;
&lt;br /&gt;
A simple tool for recording query data will be made public soon. You may need .NET 2.0 or above installed on your system to run this program. The generated files conform to the format used in the ThinkIT corpus. Of course, you are also welcome to use your own program to record the query data.&lt;br /&gt;
&lt;br /&gt;
If there are not enough user-contributed corpora, then we shall adopt &amp;quot;no hidden dataset&amp;quot; policy for QBSH task as usual.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
&lt;br /&gt;
=== Breakdown Version ===&lt;br /&gt;
1. Database indexing/building. Command format should look like this: &lt;br /&gt;
&lt;br /&gt;
 indexing %dbMidi.list% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
where %dbMidi.list% is the input list of database MIDI files, each named uniq_key.mid. For example: &lt;br /&gt;
&lt;br /&gt;
 ./QBSH/midiDatabase/00001.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00002.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00003.mid&lt;br /&gt;
 ./QBSH/midiDatabase/00004.mid&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Output indexed files are placed into %dir_workspace_root%. (For task 2, %dbMidi.list% is in fact a list of wav files in the database.)&lt;br /&gt;
&lt;br /&gt;
2. Pitch tracker. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_tracker %queryWave.list% %dir_query_pitch%&lt;br /&gt;
&lt;br /&gt;
where %queryWave.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryWave/query_00001.wav&lt;br /&gt;
 queryWave/query_00002.wav&lt;br /&gt;
 queryWave/query_00003.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
For each input file dir_query/query_xxxxx.wav listed in %queryWave.list%, the tracker outputs a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch, which gives the pitch sequence on the MIDI note scale at a resolution of 10 ms: &lt;br /&gt;
&lt;br /&gt;
 0&lt;br /&gt;
 0&lt;br /&gt;
 62.23&lt;br /&gt;
 62.25&lt;br /&gt;
 62.21&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Thus a query of x seconds should yield a pitch file with 100*x lines. Frames of silence/rest are set to 0.  &lt;br /&gt;
&lt;br /&gt;
3. Pitch matcher. Command format: &lt;br /&gt;
&lt;br /&gt;
 pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%&lt;br /&gt;
&lt;br /&gt;
where %queryPitch.list% looks like &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch&lt;br /&gt;
 queryPitch/query_00002.pitch&lt;br /&gt;
 queryPitch/query_00003.pitch&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
and the result file gives the top-10 candidates (if any) for each query: &lt;br /&gt;
&lt;br /&gt;
 queryPitch/query_00001.pitch: 00025 01003 02200 ... &lt;br /&gt;
 queryPitch/query_00002.pitch: 01547 02313 07653 ... &lt;br /&gt;
 queryPitch/query_00003.pitch: 03142 00320 00973 ... &lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
=== Integrated Version ===&lt;br /&gt;
If you pack everything together, the command format is much simpler:&lt;br /&gt;
&lt;br /&gt;
 qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%&lt;br /&gt;
&lt;br /&gt;
You can use %dir_workspace_root% to store any temporary indexing/database structures. The result file should have the same format as mentioned previously. (For task 2, %dbMidi.list% is in fact a list of wav files in the database to be retrieved.)&lt;br /&gt;
&lt;br /&gt;
=== Packaging submissions ===&lt;br /&gt;
All submissions should be statically linked to all libraries (the presence of &lt;br /&gt;
dynamically linked libraries cannot be guaranteed).&lt;br /&gt;
&lt;br /&gt;
All submissions should include a README file including the following &lt;br /&gt;
information:&lt;br /&gt;
&lt;br /&gt;
* Command line calling format for all executables and an example formatted set of commands&lt;br /&gt;
* Number of threads/cores used or whether this should be specified on the command line&lt;br /&gt;
* Expected memory footprint&lt;br /&gt;
* Expected runtime&lt;br /&gt;
* Any required environments (and versions), e.g. Python, Java, bash, MATLAB.&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Due to the potentially high number of participants in this and other audio tasks,&lt;br /&gt;
hard limits on the runtime of submissions are specified. &lt;br /&gt;
 &lt;br /&gt;
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;/div&gt;</summary>
		<author><name>Jyh-Shing Roger Jang</name></author>
		
	</entry>
</feed>