<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chung-Che+Wang</id>
	<title>MIREX Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://music-ir.org/mirex/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chung-Che+Wang"/>
	<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/wiki/Special:Contributions/Chung-Che_Wang"/>
	<updated>2026-04-29T22:08:25Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.1</generator>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting_Results&amp;diff=13250</id>
		<title>2020:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting_Results&amp;diff=13250"/>
		<updated>2020-09-15T07:41:24Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2020 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2020:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
    ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
    ! LPL1&lt;br /&gt;
    | NTES_MUSIC_A  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2020/LPL1.pdf PDF] || Huaping Liu, [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Peng Li], [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Songsheng Pan]&lt;br /&gt;
    |-&lt;br /&gt;
    ! LPL2&lt;br /&gt;
    | NTES_MUSIC_B  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2020/LPL2.pdf PDF] || Huaping Liu, [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Peng Li], [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Songsheng Pan]&lt;br /&gt;
    |-&lt;br /&gt;
    ! LPL3&lt;br /&gt;
    | NTES_MUSIC_C  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2020/LPL3.pdf PDF] || Huaping Liu, [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Peng Li], [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Songsheng Pan]&lt;br /&gt;
    |-&lt;br /&gt;
    ! LPL4&lt;br /&gt;
    | NTES_MUSIC_D  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2020/LPL4.pdf PDF] || Huaping Liu, [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Peng Li], [http://ir.netease.com/phoenix.zhtml?c=122303&amp;amp;p=irol-IRHome Songsheng Pan]&lt;br /&gt;
    |-&lt;br /&gt;
    ! XXZC1-3&lt;br /&gt;
    | KUGOU  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2020/XXZC.pdf PDF] || Xiaoguang Xuan, Chunzhi Xiao, Chaogang Zhang, Chuanyi Chen&lt;br /&gt;
    |-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is provided for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24 cores CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with 8 cores CPU and 32 GB RAM, hosted on 2.4 GHz, 32 cores CPU and 128 GB RAM&lt;br /&gt;
* D: 2.1 GHz, 24 cores CPU, 32 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2020/afp/result_2020.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting&amp;diff=13245</id>
		<title>2020:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting&amp;diff=13245"/>
		<updated>2020-09-03T16:13:08Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no benchmark dataset for evaluating it. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) were removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) was kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
Processed songs are listed in the appendix part of this page.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of songs from the GTZAN music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link]&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the list of database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is limited, as explained below.&lt;br /&gt;
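As a concrete illustration, the builder's handling of %file_list_for_db% and %dir_for_db% can be sketched as follows. This is a hypothetical skeleton, not part of any submission; fingerprint extraction itself is omitted.&lt;br /&gt;

```python
import os

def build_database(file_list_path, db_dir):
    """Read the database file list and prepare the output directory.

    Returns the list of song paths; a real builder would also extract a
    fingerprint per song and write its index files into db_dir (total
    size capped at roughly 2 GB for 10,000 songs).
    """
    os.makedirs(db_dir, exist_ok=True)   # create %dir_for_db% if missing
    os.makedirs("tmp", exist_ok=True)    # temporary files are only allowed here
    with open(file_list_path) as f:
        songs = [line.strip() for line in f if line.strip()]
    return songs
```

Note that the directories are created up front, matching the requirement in the special notes that missing folders be created automatically before any audio is read.&lt;br /&gt;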
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
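Given the tab-separated result format above, the top-1 hit rate can be computed with a short script like the following. This is a sketch only; the ground-truth mapping it expects is hypothetical, since the real one is hidden.&lt;br /&gt;

```python
def top1_hit_rate(result_path, ground_truth):
    """Score a result file of tab-separated (query, retrieved) path pairs.

    ground_truth maps each in-vocabulary query path to its correct
    database path; queries absent from it are ignored, matching the rule
    that only in-vocabulary queries count toward accuracy.
    """
    hits = 0
    scored = 0
    with open(result_path) as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 2:
                continue  # skip malformed or empty lines
            query, retrieved = parts
            if query in ground_truth:
                scored += 1
                if retrieved == ground_truth[query]:
                    hits += 1
    return hits / scored if scored else 0.0
```
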
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50 * 10000 * 4 / 1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* We will run participants' submissions on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run conveniently at the same time, writing temporary files in the directories of the database audio files (i.e. the paths in %file_list_for_db%) is not allowed. Please write temporary files (such as *.wav files or any other intermediate formats) only in ./tmp or ./temp. Note that if any file required by the matcher is produced (written, copied, and so on) by the builder, it is considered part of the database and should be placed in %dir_for_db%.&lt;br /&gt;
* Please check for the existence of the folders (%dir_for_db%, ./tmp, and ./temp) in the builder before reading any audio. If any of the intended folders does not exist, please create it automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed. That is, it may be given as &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it in the submission and call it with something like &amp;quot;./ffmpeg&amp;quot;.&lt;br /&gt;
* The sampling rate of the query files is 44.1 kHz; other sampling rates such as 8 kHz will not be provided.&lt;br /&gt;
* Some out-of-vocabulary queries are listed in %file_list_for_query%, but only queries with a corresponding song in the database (i.e. 5692 of them) are counted when computing accuracy. Also, since we will use a very small set to test participants' submissions before running them on the whole set, participants are encouraged to output something in %result_file% even for queries that are likely unseen.&lt;br /&gt;
* Participants will be asked to modify their submission if any of the above specifications is not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
1. Processed songs in the GTZAN dataset&lt;br /&gt;
&lt;br /&gt;
Processed songs of each genre are listed below; in each line, the first identifier is the kept song and the following ones are the removed songs:&lt;br /&gt;
* Disco 50, 51, 70&lt;br /&gt;
* Disco 55, 60, 89&lt;br /&gt;
* Disco 71, 74&lt;br /&gt;
* Hiphop 39, 45&lt;br /&gt;
* Jazz 34, 53&lt;br /&gt;
* Jazz 35, 55&lt;br /&gt;
* Jazz 37, 60&lt;br /&gt;
* Jazz 39, 65&lt;br /&gt;
* Jazz 40, 67&lt;br /&gt;
* Jazz 43, 69&lt;br /&gt;
* Jazz 44, 70&lt;br /&gt;
* Jazz 45, 71&lt;br /&gt;
* Metal 4, 13&lt;br /&gt;
* Metal 34, 94&lt;br /&gt;
* Metal 40, 61&lt;br /&gt;
* Metal 43, 64&lt;br /&gt;
* Metal 44, 65&lt;br /&gt;
* Metal 45, 66&lt;br /&gt;
* Pop 15, 22&lt;br /&gt;
* Pop 45, 46&lt;br /&gt;
* Pop 47, 80&lt;br /&gt;
* Pop 54, 60&lt;br /&gt;
* Pop 56, 59&lt;br /&gt;
* Reggae 3, 54&lt;br /&gt;
* Reggae 5, 56&lt;br /&gt;
* Reggae 10, 60&lt;br /&gt;
* Reggae 13, 58&lt;br /&gt;
* Reggae 41, 69&lt;br /&gt;
* Reggae 73, 74&lt;br /&gt;
* Reggae 80, 81, 82&lt;br /&gt;
* Reggae 75, 91, 92&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting&amp;diff=13211</id>
		<title>2020:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting&amp;diff=13211"/>
		<updated>2020-08-14T21:43:27Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no benchmark dataset for evaluating it. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) were removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) was kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
Processed songs are listed in the appendix part of this page.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of songs from the GTZAN music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link]&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the list of database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is limited, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50 * 10000 * 4 / 1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* We will run participants' submissions on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run conveniently at the same time, writing temporary files in the directories of the database audio files (i.e. the paths in %file_list_for_db%) is not allowed. Please write temporary files (such as *.wav files or any other intermediate formats) only in ./tmp or ./temp. Note that if any file required by the matcher is produced (written, copied, and so on) by the builder, it is considered part of the database and should be placed in %dir_for_db%.&lt;br /&gt;
* Please check for the existence of the folders (%dir_for_db%, ./tmp, and ./temp) in the builder before reading any audio. If any of the intended folders does not exist, please create it automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed. That is, it may be given as &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it in the submission and call it with something like &amp;quot;./ffmpeg&amp;quot;.&lt;br /&gt;
* The sampling rate of the query files is 44.1 kHz; other sampling rates such as 8 kHz will not be provided.&lt;br /&gt;
* Some out-of-vocabulary queries are listed in %file_list_for_query%, but only queries with a corresponding song in the database (i.e. 5692 of them) are counted when computing accuracy. Also, since we will use a very small set to test participants' submissions before running them on the whole set, participants are encouraged to output something in %result_file% even for queries that are likely unseen.&lt;br /&gt;
* Participants will be asked to modify their submission if any of the above specifications is not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
1. Processed songs in the GTZAN dataset&lt;br /&gt;
&lt;br /&gt;
Processed songs of each genre are listed below; in each line, the first identifier is the kept song and the following ones are the removed songs:&lt;br /&gt;
* Disco 50, 51, 70&lt;br /&gt;
* Disco 55, 60, 89&lt;br /&gt;
* Disco 71, 74&lt;br /&gt;
* Hiphop 39, 45&lt;br /&gt;
* Jazz 34, 53&lt;br /&gt;
* Jazz 35, 55&lt;br /&gt;
* Jazz 37, 60&lt;br /&gt;
* Jazz 39, 65&lt;br /&gt;
* Jazz 40, 67&lt;br /&gt;
* Jazz 43, 69&lt;br /&gt;
* Jazz 44, 70&lt;br /&gt;
* Jazz 45, 71&lt;br /&gt;
* Metal 4, 13&lt;br /&gt;
* Metal 34, 94&lt;br /&gt;
* Metal 40, 61&lt;br /&gt;
* Metal 43, 64&lt;br /&gt;
* Metal 44, 65&lt;br /&gt;
* Metal 45, 66&lt;br /&gt;
* Pop 15, 22&lt;br /&gt;
* Pop 45, 46&lt;br /&gt;
* Pop 47, 80&lt;br /&gt;
* Pop 54, 60&lt;br /&gt;
* Pop 56, 59&lt;br /&gt;
* Reggae 3, 54&lt;br /&gt;
* Reggae 5, 56&lt;br /&gt;
* Reggae 10, 60&lt;br /&gt;
* Reggae 13, 58&lt;br /&gt;
* Reggae 41, 69&lt;br /&gt;
* Reggae 73, 74&lt;br /&gt;
* Reggae 80, 81, 82&lt;br /&gt;
* Reggae 75, 91, 92&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting&amp;diff=13210</id>
		<title>2020:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2020:Audio_Fingerprinting&amp;diff=13210"/>
		<updated>2020-08-14T21:42:20Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no benchmark dataset for evaluating it. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) were removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) was kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
Processed songs are listed in the appendix part of this page.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of songs from the GTZAN music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link]&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the list of database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is limited, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50 * 10000 * 4 / 1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* We will run participants' submissions on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run conveniently at the same time, writing temporary files in the directories of the database audio files (i.e. the paths in %file_list_for_db%) is not allowed. Please write temporary files (such as *.wav files or any other intermediate formats) only in ./tmp or ./temp. Note that if any file required by the matcher is produced (written, copied, and so on) by the builder, it is considered part of the database and should be placed in %dir_for_db%.&lt;br /&gt;
* Please check for the existence of the folders (%dir_for_db%, ./tmp, and ./temp) in the builder before reading any audio. If any of the intended folders does not exist, please create it automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed. That is, it may be given as &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it in the submission and call it with something like &amp;quot;./ffmpeg&amp;quot;.&lt;br /&gt;
* The sampling rate of the query files is 44.1 kHz; other sampling rates such as 8 kHz will not be provided.&lt;br /&gt;
* Some out-of-vocabulary queries are listed in %file_list_for_query%, but only queries with a corresponding song in the database (i.e. 5692 of them) are counted when computing accuracy. Also, since we will use a very small set to test participants' submissions before running them on the whole set, participants are encouraged to output something in %result_file% even for queries that are likely unseen.&lt;br /&gt;
* Participants will be asked to modify their submission if any of the above specifications is not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2020:Main_Page&amp;diff=13209</id>
		<title>2020:Main Page</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2020:Main_Page&amp;diff=13209"/>
		<updated>2020-08-14T21:27:03Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* MIREX 2020 Deadline Dates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Welcome to MIREX 2020==&lt;br /&gt;
&lt;br /&gt;
This is the main page for the 16th running of the Music Information Retrieval Evaluation eXchange (MIREX 2020). The International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) at [https://ischool.illinois.edu School of Information Sciences], University of Illinois at Urbana-Champaign ([https://illinois.edu UIUC]) is the principal organizer of MIREX 2020. &lt;br /&gt;
&lt;br /&gt;
The MIREX 2020 community will hold its annual meeting as part of [https://ismir.github.io/ISMIR2020/ The 21st International Society for Music Information Retrieval Conference], ISMIR 2020, which will be held in Montréal, Canada, October 11–15, 2020.&lt;br /&gt;
&lt;br /&gt;
J. Stephen Downie&amp;lt;br&amp;gt;&lt;br /&gt;
Director, IMIRSEL&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Task Leadership Model==&lt;br /&gt;
&lt;br /&gt;
As in previous years, we aim to improve how tasks are distributed for MIREX 2020. To do so, we need leaders to help us organize and run each task.&lt;br /&gt;
&lt;br /&gt;
To volunteer to lead a task, please complete the form [https://forms.gle/9idWhxPgdisxFW55A here]. Current information about task captains can be found on the [[2020:Task Captains]] page. Please direct any communication to the [https://lists.ischool.illinois.edu/lists/admin/evalfest EvalFest] mailing list.&lt;br /&gt;
&lt;br /&gt;
What does it mean to lead a task?&lt;br /&gt;
* Update wiki pages as needed&lt;br /&gt;
* Communicate with submitters and troubleshoot submissions&lt;br /&gt;
* Execute and evaluate submissions&lt;br /&gt;
* Publish final results&lt;br /&gt;
&lt;br /&gt;
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.&lt;br /&gt;
&lt;br /&gt;
We really need leaders to help us!&lt;br /&gt;
==MIREX 2020 Deadline Dates==&lt;br /&gt;
* '''August 30th 2020'''&lt;br /&gt;
** [[2020:Audio Fingerprinting]] &amp;lt;TC: Chung-Che Wang&amp;gt;&lt;br /&gt;
** [[2020:Audio Classification (Train/Test) Tasks]] &amp;lt;TC: Yun Hao (IMIRSEL)&amp;gt;, including&lt;br /&gt;
*** Audio US Pop Genre Classification&lt;br /&gt;
*** Audio Latin Genre Classification&lt;br /&gt;
*** Audio Music Mood Classification&lt;br /&gt;
*** Audio Classical Composer Identification&lt;br /&gt;
** [[2020:Audio K-POP Mood Classification]] &amp;lt;TC: Yun Hao (IMIRSEL)&amp;gt;&lt;br /&gt;
** [[2020:Audio K-POP Genre Classification]] &amp;lt;TC: Yun Hao (IMIRSEL)&amp;gt;&lt;br /&gt;
** [[2020:Audio Tag Classification]] &amp;lt;TC: Emre Demir&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* '''September 6th 2020'''&lt;br /&gt;
** [[2020:Audio Chord Estimation]] &amp;lt;TC: Johan Pauwels&amp;gt;&lt;br /&gt;
** [[2020:Audio Cover Song Identification]] &amp;lt;TC: Yun Hao (IMIRSEL)&amp;gt;&lt;br /&gt;
** [[2020:Audio Downbeat Estimation]] &amp;lt;TC: Mickaël Zehren&amp;gt;&lt;br /&gt;
** [[2020:Audio Key Detection]] &amp;lt;TC: Johan Pauwels&amp;gt;&lt;br /&gt;
** [[2020:Audio Melody Extraction]] &amp;lt;TC: An-Qi Huang&amp;gt;&lt;br /&gt;
** [[2020:Patterns for Prediction]] (offshoot of [[2017:Discovery of Repeated Themes &amp;amp; Sections]]) &amp;lt;TC: Berit Janssen, Iris Ren, and Tom Collins&amp;gt;&lt;br /&gt;
** [[2020:Query by Singing/Humming]] &amp;lt;TC: Makarand Velankar&amp;gt;&lt;br /&gt;
** [[2020:Multiple Fundamental Frequency Estimation &amp;amp; Tracking]] &amp;lt;TC: Yun Hao (IMIRSEL)&amp;gt;&lt;br /&gt;
** [[2020:Lyrics Transcription]] (former: Automatic Lyrics-to-Audio Alignment) &amp;lt;TC: Georgi Dzhambazov, Daniel Stoller&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==MIREX 2020 Submission Instructions==&lt;br /&gt;
* Be sure to read through the rest of this page&lt;br /&gt;
* Be sure to read through the task pages for which you are submitting&lt;br /&gt;
* Be sure to follow the [[2009:Best Coding Practices for MIREX | Best Coding Practices for MIREX]]&lt;br /&gt;
* Be sure to follow the  [[MIREX 2019 Submission Instructions]] including both the tutorial video and the text&lt;br /&gt;
* The MIREX 2020 Submission System is coming soon at: https://www.music-ir.org/mirex/sub/.&lt;br /&gt;
&lt;br /&gt;
==MIREX 2020 Evaluation==&lt;br /&gt;
&lt;br /&gt;
===Note to New Participants===&lt;br /&gt;
Please take the time to read the following review articles that explain the history and structure of MIREX.&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen (2008). The Music Information Retrieval Evaluation Exchange (2005-2007):&amp;lt;br&amp;gt;&lt;br /&gt;
A window into music information retrieval research. ''Acoustical Science and Technology 29'' (4): 247-255. &amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://dx.doi.org/10.1250/ast.29.247 http://dx.doi.org/10.1250/ast.29.247]&lt;br /&gt;
&lt;br /&gt;
Downie, J. Stephen, Andreas F. Ehmann, Mert Bay and M. Cameron Jones. (2010).&amp;lt;br&amp;gt;&lt;br /&gt;
The Music Information Retrieval Evaluation eXchange: Some Observations and Insights.&amp;lt;br&amp;gt;&lt;br /&gt;
''Advances in Music Information Retrieval'' Vol. 274, pp. 93-115&amp;lt;br&amp;gt;&lt;br /&gt;
Available at: [http://bit.ly/KpM5u5 http://bit.ly/KpM5u5]&lt;br /&gt;
&lt;br /&gt;
===Runtime Limits===&lt;br /&gt;
&lt;br /&gt;
We reserve the right to stop any process that exceeds runtime limits for each task.  We will do our best to notify you in enough time to allow revisions, but this may not be possible in some cases. Please respect the published runtime limits.&lt;br /&gt;
&lt;br /&gt;
===Note to All Participants===&lt;br /&gt;
&lt;br /&gt;
Because MIREX is premised upon the sharing of ideas and results, '''ALL''' MIREX participants are expected to:&lt;br /&gt;
&lt;br /&gt;
# submit a DRAFT 2-3 page extended abstract PDF in the ISMIR format describing the submitted program(s), to help us and the community better understand how the algorithm works.&lt;br /&gt;
# submit a FINALIZED 2-3 page extended abstract PDF in the ISMIR format prior to ISMIR 2020 for posting on the respective results pages (sometimes the same abstract can be used for multiple submissions; in many cases the DRAFT and FINALIZED abstracts are the same).&lt;br /&gt;
# present a poster at the MIREX 2020 poster session at ISMIR 2020, if there is a physical component to the conference.&lt;br /&gt;
&lt;br /&gt;
===Software Dependency Requests===&lt;br /&gt;
If you have not submitted to MIREX before or are unsure whether IMIRSEL currently supports some of the software/architecture dependencies for your submission, please contact the [mailto:yunhao2@illinois.edu IMIRSEL team] as early as possible. Failure to notify the team might result in your submission being rejected.&lt;br /&gt;
&lt;br /&gt;
Finally, you will also be expected to detail your software/architecture dependencies in a README file to be provided to the submission system.&lt;br /&gt;
&lt;br /&gt;
==Getting Involved in MIREX 2020==&lt;br /&gt;
MIREX is a community-based endeavour. Be a part of the community and help make MIREX 2020 the best yet.&lt;br /&gt;
&lt;br /&gt;
===Mailing List Participation===&lt;br /&gt;
If you are interested in formal MIR evaluation, you should also subscribe to the &amp;quot;MIREX&amp;quot; (aka &amp;quot;EvalFest&amp;quot;) mail list and participate in the community discussions about defining and running MIREX 2020 tasks. Subscription information at: &lt;br /&gt;
[https://mail.lis.illinois.edu/mailman/listinfo/evalfest EvalFest Central]. &lt;br /&gt;
&lt;br /&gt;
If you are participating in MIREX 2020, it is VERY IMPORTANT that you are subscribed to EvalFest. Deadlines, task updates and other important information will be announced via this mailing list. Please use EvalFest to discuss MIREX task proposals and other MIREX-related issues. This wiki (the MIREX 2020 wiki) will be used to record and disseminate task proposals; task-related discussions should be conducted on the EvalFest mailing list rather than on this wiki, but should be summarized here. &lt;br /&gt;
&lt;br /&gt;
Where possible, definitions or example code for new evaluation metrics or tasks should be provided to the IMIRSEL team, who will embody them in software as part of the NEMA analytics framework. The framework will be released to the community at or before ISMIR 2020, providing a standardised set of interfaces and outputs for disciplined evaluation procedures across a great many MIR tasks.&lt;br /&gt;
&lt;br /&gt;
===Wiki Participation===&lt;br /&gt;
If you find that you cannot edit a MIREX wiki page, you will need to create a new account via: [[Special:Userlogin]].&lt;br /&gt;
&lt;br /&gt;
Please note that because of &amp;quot;spam-bots&amp;quot;, MIREX wiki registration requests may be moderated by IMIRSEL members. It might take up to 24 hours for approval (Thank you for your patience!).&lt;br /&gt;
&lt;br /&gt;
==MIREX 2005 - 2019 Wikis==&lt;br /&gt;
Content from MIREX 2005 - 2019 is available at:&lt;br /&gt;
'''[[2019:Main_Page|MIREX 2019]]'''&lt;br /&gt;
'''[[2018:Main_Page|MIREX 2018]]'''&lt;br /&gt;
'''[[2017:Main_Page|MIREX 2017]]''' &lt;br /&gt;
'''[[2016:Main_Page|MIREX 2016]]''' &lt;br /&gt;
'''[[2015:Main_Page|MIREX 2015]]''' &lt;br /&gt;
'''[[2014:Main_Page|MIREX 2014]]''' &lt;br /&gt;
'''[[2013:Main_Page|MIREX 2013]]''' &lt;br /&gt;
'''[[2012:Main_Page|MIREX 2012]]''' &lt;br /&gt;
'''[[2011:Main_Page|MIREX 2011]]''' &lt;br /&gt;
'''[[2010:Main_Page|MIREX 2010]]''' &lt;br /&gt;
'''[[2009:Main_Page|MIREX 2009]]''' &lt;br /&gt;
'''[[2008:Main_Page|MIREX 2008]]''' &lt;br /&gt;
'''[[2007:Main_Page|MIREX 2007]]''' &lt;br /&gt;
'''[[2006:Main_Page|MIREX 2006]]''' &lt;br /&gt;
'''[[2005:Main_Page|MIREX 2005]]'''&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Fingerprinting&amp;diff=13117</id>
		<title>2019:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Fingerprinting&amp;diff=13117"/>
		<updated>2019-11-04T23:43:08Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Special notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), and there is exactly one database song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is modified to point to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: These recordings are noisy versions of clips from the GTZAN (George Tzanetakis's) music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with top-1 hit rate as the performance index.&lt;br /&gt;
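The official scoring code is not published on this page; the sketch below shows how a top-1 hit rate could be computed from a matcher's result file, assuming a hypothetical ground-truth mapping from each query path to its correct database file (all paths and values are illustrative):

```python
# Sketch: compute top-1 hit rate from tab-separated matcher output.
# The ground-truth mapping is hypothetical; the real one is hidden.

def top1_hit_rate(result_lines, ground_truth):
    """result_lines: iterable of 'query_path\\tdb_path' strings.
    ground_truth: dict mapping query path to the correct db path.
    Queries absent from ground_truth (out-of-vocabulary) are not scored."""
    hits = total = 0
    for line in result_lines:
        query, _, retrieved = line.rstrip("\n").partition("\t")
        if query not in ground_truth:   # OOV query: skipped in scoring
            continue
        total += 1
        hits += (retrieved == ground_truth[query])
    return hits / total if total else 0.0

results = [
    "./AFP/query/q000001.wav\t./AFP/database/0000004.mp3",
    "./AFP/query/q000002.wav\t./AFP/database/0000054.mp3",
]
truth = {
    "./AFP/query/q000001.wav": "./AFP/database/0000004.mp3",
    "./AFP/query/q000002.wav": "./AFP/database/0000099.mp3",
}
print(top1_hit_rate(results, truth))  # 1 hit out of 2 scored queries -> 0.5
```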
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file listing the database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
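The tab-separated result file shown above takes only a few lines to produce; a minimal sketch (the match list and output file name are illustrative):

```python
# Sketch: write matcher results in the required tab-separated format,
# one "query_path<TAB>db_path" pair per line.

def write_results(matches, result_file):
    with open(result_file, "w") as f:
        for query_path, db_path in matches:
            f.write(f"{query_path}\t{db_path}\n")

# Illustrative matches only; a real matcher fills these in.
matches = [
    ("./AFP/query/q000001.wav", "./AFP/database/0000004.mp3"),
    ("./AFP/query/q000002.wav", "./AFP/database/0000054.mp3"),
]
write_results(matches, "result.txt")
```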
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we must impose hard limits on runtime and storage. (The runtime and storage limits also implicitly limit memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
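As a quick check of the builder storage budget in the table above:

```python
# Storage budget check: 50 KB of fingerprint data per minute of music,
# with an average song length of 4 minutes and 10,000 database songs.
KB_PER_MIN = 50
AVG_MINUTES = 4
NUM_SONGS = 10_000

total_kb = KB_PER_MIN * AVG_MINUTES * NUM_SONGS
total_gb = total_kb / 1_000_000   # KB -> GB, decimal units as in the task text
print(total_gb)  # 2.0
```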
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* We will run participants' submissions on Linux without Matlab. C/Python source or executables are most welcome.&lt;br /&gt;
* To run multiple submissions at the same time conveniently, writing temporary files in the directories of database audio files (i.e. the paths in %file_list_for_db%) is not allowed. Please write temporary files (such as *.wav files or any other intermediate formats) only in ./tmp or ./temp . Note that if any file required by the matcher is produced (written, copied, and so on) by the builder, then it is considered part of the database and should be placed in %dir_for_db%.&lt;br /&gt;
* Please check for the existence of the folders (%dir_for_db%, ./tmp, and ./temp) in the builder before reading any audio. If any of the intended folders do not exist, please create them automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed. That is, it may be given as &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it within the submission and use something like &amp;quot;./ffmpeg&amp;quot; to call it.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other sampling rates such as 8 kHz will not be provided.&lt;br /&gt;
* Some out-of-vocabulary queries are listed in %file_list_for_query%, but only queries that have a corresponding song in the database (i.e. 5692 of them) are considered when computing accuracy. In addition, since we will use a very small set to test participants' submissions before running on the whole set, participants are encouraged to output something in %result_file% even for queries that are likely to be unseen.&lt;br /&gt;
* Participants will be asked to modify the submission if any of the above specifications are not followed.&lt;br /&gt;
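The folder-existence and trailing-slash notes above can be handled defensively at start-up; a minimal sketch (the helper name is hypothetical):

```python
import os

def prepare_dirs(dir_for_db):
    """Create the required folders up front and normalize %dir_for_db%,
    which may arrive with or without a trailing slash."""
    dir_for_db = dir_for_db.rstrip("/") or "/"   # "db/" and "db" both become "db"
    for d in (dir_for_db, "./tmp", "./temp"):
        os.makedirs(d, exist_ok=True)            # no error if already present
    return dir_for_db

db_dir = prepare_dirs("db/")
print(db_dir)  # db
```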
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2019:Audio_Fingerprinting&amp;diff=12988</id>
		<title>2019:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2019:Audio_Fingerprinting&amp;diff=12988"/>
		<updated>2019-07-30T14:34:16Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), and there is exactly one database song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is modified to point to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: These recordings are noisy versions of clips from the GTZAN (George Tzanetakis's) music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with top-1 hit rate as the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file listing the database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we must impose hard limits on runtime and storage. (The runtime and storage limits also implicitly limit memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* This year, we will run participants' submissions on Linux without Matlab. C/Python source or executables are most welcome.&lt;br /&gt;
* To run multiple submissions at the same time conveniently, writing temporary files in the directories of database audio files (i.e. the paths in %file_list_for_db%) is not allowed. Please write temporary files (such as *.wav files or any other intermediate formats) only in ./tmp or ./temp . Note that if any file required by the matcher is produced (written, copied, and so on) by the builder, then it is considered part of the database and should be placed in %dir_for_db%.&lt;br /&gt;
* Please check for the existence of the folders (%dir_for_db%, ./tmp, and ./temp) in the builder before reading any audio. If any of the intended folders do not exist, please create them automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed. That is, it may be given as &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it within the submission and use something like &amp;quot;./ffmpeg&amp;quot; to call it.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other sampling rates such as 8 kHz will not be provided.&lt;br /&gt;
&lt;br /&gt;
* Participants will be asked to modify the submission if any of the above specifications are not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12648</id>
		<title>2018:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12648"/>
		<updated>2018-08-08T22:51:21Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on such technology, including Shazam, Soundhound, Intonow, Viggle, etc. Though the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), and there is exactly one database song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is modified to point to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: These are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: These recordings are noisy versions of clips from the GTZAN (George Tzanetakis's) music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with top-1 hit rate as the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit the algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file listing the database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we must impose hard limits on runtime and storage. (The runtime and storage limits also implicitly limit memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* This year, we will run participants' submissions on Linux without Matlab. C/Python source or executables are most welcome.&lt;br /&gt;
* To run multiple submissions at the same time conveniently, writing temporary files in the directories of database audio files (i.e. the paths in %file_list_for_db%) is not allowed. Please write temporary files (such as *.wav files or any other intermediate formats) only in ./tmp or ./temp . Note that if any file required by the matcher is produced (written, copied, and so on) by the builder, then it is considered part of the database and should be placed in %dir_for_db%.&lt;br /&gt;
* Please check for the existence of the folders (%dir_for_db%, ./tmp, and ./temp) in the builder before reading any audio. If any of the intended folders do not exist, please create them automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed. That is, it may be given as &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it within the submission and use something like &amp;quot;./ffmpeg&amp;quot; to call it.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other sampling rates such as 8 kHz will not be provided.&lt;br /&gt;
&lt;br /&gt;
* Participants will be asked to modify the submission if any of the above specifications are not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12592</id>
		<title>2018:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12592"/>
		<updated>2018-07-26T13:29:05Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is still no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: these are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [https://drive.google.com/open?id=1elI15BomiiNfCXLxpBjhdI3nB6bN9UEp this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the input list of database audio files, named with the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
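The builder's outer loop implied by this interface can be sketched as follows. This is only an illustration: extract_fingerprint() is a placeholder (a real submission would decode the mp3, e.g. via ffmpeg, and compute its fingerprint), and the single-file database layout is just one possible choice:&lt;br /&gt;

```python
import os

def extract_fingerprint(mp3_path):
    # Placeholder for the actual AFP front end (decoding + feature
    # extraction); here we only return a dummy value.
    return "0"

def run_builder(file_list_for_db, dir_for_db):
    """Fingerprint every file listed in %file_list_for_db% and store
    the result under %dir_for_db% as one tab-separated text file."""
    os.makedirs(dir_for_db, exist_ok=True)
    db_path = os.path.join(dir_for_db, "fingerprints.txt")
    with open(file_list_for_db) as lst, open(db_path, "w") as db:
        for line in lst:
            mp3_path = line.strip()
            if not mp3_path:
                continue
            key = os.path.basename(mp3_path)       # e.g. 000001.mp3
            db.write(f"{key}\t{extract_fingerprint(mp3_path)}\n")
    return db_path
```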
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one per line, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
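The matcher's final step is simply writing these tab-separated lines. A minimal sketch (the function name and the `matches` mapping are illustrative):&lt;br /&gt;

```python
def write_results(matches, result_file):
    """Write one '<query path>\t<database path>' line per query.

    `matches` maps each query wav path to the single retrieved
    database mp3 path (there are no out-of-vocabulary queries).
    """
    with open(result_file, "w") as out:
        for query_path, db_path in matches.items():
            out.write(f"{query_path}\t{db_path}\n")
```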
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* This year, participants' submissions will be run on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run at the same time, writing temporary files in the database folder is not allowed. Please write temporary files (such as *.wav files or other intermediate formats) only in ./tmp or ./temp.&lt;br /&gt;
* In the builder, please check that the folders (%dir_for_db%, ./tmp, and ./temp) exist before reading any audio. If any of them does not exist, please create it automatically.&lt;br /&gt;
* A trailing slash on %dir_for_db% is not guaranteed; it may be given as either &amp;quot;db&amp;quot; or &amp;quot;db/&amp;quot;.&lt;br /&gt;
* ffmpeg is available on the system, but its version is not guaranteed. If a specific version is needed, please include it in the submission and call it with something like &amp;quot;./ffmpeg&amp;quot;.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other rates such as 8 kHz will not be provided.&lt;br /&gt;
&lt;br /&gt;
* Participants will be asked to modify the submission if any of the above specifications are not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12472</id>
		<title>2018:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12472"/>
		<updated>2018-05-25T13:03:56Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Special notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is still no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: these are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the input list of database audio files, named with the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one per line, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* This year, participants' submissions will be run on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run conveniently, writing temporary files in the database folder is not allowed. Please write temporary files (such as *.wav files or other intermediate formats) only in ./tmp or ./temp.&lt;br /&gt;
* In the builder, please check that the folders (%dir_for_db%, ./tmp, and ./temp) exist before reading any audio. If any of them does not exist, please create it automatically.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other rates such as 8 kHz will not be provided.&lt;br /&gt;
* Participants will be asked to modify the submission if any of the above specifications are not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12459</id>
		<title>2018:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12459"/>
		<updated>2018-05-14T23:58:05Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is still no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: these are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the input list of database audio files, named with the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one per line, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* This year, participants' submissions will be run on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run conveniently, writing temporary files in the database folder is not allowed. Please write temporary files only in ./tmp or ./temp, one file per process/thread, with at most 32 files in total.&lt;br /&gt;
* In the builder, please check that the folders (%dir_for_db%, ./tmp, and ./temp) exist before reading any audio. If any of them does not exist, please create it automatically.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other rates such as 8 kHz will not be provided.&lt;br /&gt;
* Participants will be asked to modify the submission if any of the above specifications are not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12458</id>
		<title>2018:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2018:Audio_Fingerprinting&amp;diff=12458"/>
		<updated>2018-05-14T23:54:51Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is still no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: these are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %file_list_for_db% %dir_for_db%&lt;br /&gt;
where %file_list_for_db% is a file containing the input list of database audio files, named with the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the information of the database to be used for audio fingerprinting, should be placed in the directory %dir_for_db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %file_list_for_query% %dir_for_db% %result_file%&lt;br /&gt;
where %file_list_for_query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one per line, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %query_file_path%	%db_file_path%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Special notes ==&lt;br /&gt;
* This year, participants' submissions will be run on Linux without Matlab. C/Python source code or executables are most welcome.&lt;br /&gt;
* To allow multiple submissions to run conveniently, writing temporary files in the database folder is not allowed. Please write temporary files only in ./tmp or ./temp, one file per process/thread, with at most 32 files in total.&lt;br /&gt;
* In the builder, please check that the folders (%dir_for_db%, ./tmp, and ./temp) exist before reading any audio. If any of them does not exist, please create it automatically.&lt;br /&gt;
* The sampling rate of query files is 44.1 kHz; other rates such as 8 kHz will not be provided.&lt;br /&gt;
* Participants will be asked to modify the submission if any of the rules are not followed.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting&amp;diff=11752</id>
		<title>2016:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting&amp;diff=11752"/>
		<updated>2016-07-27T00:05:24Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) examples. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is still no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one song (which has corresponding queries) is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in wav format: these are hidden and not available for download&lt;br /&gt;
* 1062 clips in wav format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with top-1 hit rate as the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the list of database audio files, named following the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted to a certain amount, as explained below.&lt;br /&gt;
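A minimal sketch of the expected builder entry point is shown below; fingerprint_file() is a hypothetical placeholder, since the actual fingerprinting method is entirely up to the submission. The stub only illustrates the required file-list and output-directory handling.&lt;br /&gt;

```python
# Sketch only: fingerprint_file() is a hypothetical placeholder,
# not a real fingerprint extractor.
import os
import sys

def fingerprint_file(path):
    # A real submission would extract compact audio fingerprints here.
    return os.path.basename(path).encode("utf-8")

def build(file_list_path, db_dir):
    os.makedirs(db_dir, exist_ok=True)
    with open(file_list_path) as f:
        paths = [line.strip() for line in f if line.strip()]
    for path in paths:
        # The naming convention uniqueKey.mp3 makes the stem the database key.
        key = os.path.splitext(os.path.basename(path))[0]
        with open(os.path.join(db_dir, key + ".fp"), "wb") as out:
            out.write(fingerprint_file(path))

if __name__ == "__main__" and len(sys.argv) == 3:
    build(sys.argv[1], sys.argv[2])
```

Invoked as builder %fileList4db% %dir4db%, this writes one .fp file per database song into %dir4db%.&lt;br /&gt;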
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
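Given a result file in this two-column format, the top-1 hit rate used for evaluation can be computed with a short script (a sketch; ground_truth is a hypothetical dict mapping each query path to the correct database path):&lt;br /&gt;

```python
# Sketch: compute top-1 hit rate from a tab-separated result file.
def top1_hit_rate(result_path, ground_truth):
    hits = 0
    total = 0
    with open(result_path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            query, retrieved = line.split("\t")
            total += 1
            if ground_truth.get(query) == retrieved:
                hits += 1
    return hits / total
```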
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features almost always leads to better AFP accuracy, we must put hard limits on runtime and storage. (The runtime and storage limits also implicitly limit memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. Assuming an average duration of 4 minutes, the total storage for 10,000 songs should be around 50*10000*4/1000000 = 2 GB.&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
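The 2 GB figure in the table follows from the quoted arithmetic (a quick check; the 4-minute average duration is the task's own assumption):&lt;br /&gt;

```python
# 50 KB of fingerprint data per minute of music, 10,000 songs,
# assumed average duration of 4 minutes per song.
kb_per_minute = 50
num_songs = 10000
avg_minutes = 4
total_kb = kb_per_minute * num_songs * avg_minutes  # 2,000,000 KB
total_gb = total_kb / 1000000  # 2.0 GB
```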
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, &amp;quot;The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval,&amp;quot; Journal of New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting_Results&amp;diff=11751</id>
		<title>2016:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting_Results&amp;diff=11751"/>
		<updated>2016-07-14T08:52:15Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2016 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2016:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
    ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT1&lt;br /&gt;
    | ACRCloud_1.2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT1.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT2&lt;br /&gt;
    | ACRCloud_1.0  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT2.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT3&lt;br /&gt;
    | ACRCloud_0.8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT3.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT4&lt;br /&gt;
    | ACRCloud_0.6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT4.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
* D: 2.1 GHz, 24-core CPU, 32 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/afp/result_2016.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting_Results&amp;diff=11750</id>
		<title>2016:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting_Results&amp;diff=11750"/>
		<updated>2016-07-14T04:49:52Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2016 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2016:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
    ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT1&lt;br /&gt;
    | ACRCloud_1.2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT1.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT2&lt;br /&gt;
    | ACRCloud_1.0  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT2.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT3&lt;br /&gt;
    | ACRCloud_0.8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT3.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT4&lt;br /&gt;
    | WW_6          ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT4.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
* D: 2.1 GHz, 24-core CPU, 32 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/afp/result_2016.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11749</id>
		<title>2015:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11749"/>
		<updated>2016-07-14T04:49:35Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2015:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ3&lt;br /&gt;
        | CYCG3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ3.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ4&lt;br /&gt;
        | CYCG4 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ4.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ5&lt;br /&gt;
        | CYCG5 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ5.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW3&lt;br /&gt;
        | ACRCloud_1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW3.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW4&lt;br /&gt;
        | ACRCloud_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW4.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP4&lt;br /&gt;
        | Sogou_AFP_V1_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP4.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP5&lt;br /&gt;
        | Sogou_AFP_V2_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP5.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP6&lt;br /&gt;
        | Sogou_AFP_V3_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP6.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW1&lt;br /&gt;
        | fingerprint_Mask1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW1.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW2&lt;br /&gt;
        | fingerprint_MASK2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW2.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
        ! TS1&lt;br /&gt;
        | STELLAR  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/TS1.pdf PDF] || [http://www.bafta.org/initiatives/commercial/research Toby Stokes]&lt;br /&gt;
        |-&lt;br /&gt;
        ! HLLC1&lt;br /&gt;
        | AudioFingerprint  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/HLLC1.pdf PDF] || chi hao hsieh, [http://idv.kh.usc.edu.tw/hungyi/ Hung-Yi Lo], Wei-Bin Liang, Chia-Ping Chen&lt;br /&gt;
        |-&lt;br /&gt;
----&amp;gt;&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
* D: 2.1 GHz, 24-core CPU, 32 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/afp/result_2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
* Correction: &amp;quot;YCP&amp;quot; should be &amp;quot;YPC&amp;quot;, &amp;quot;GB&amp;quot; should be &amp;quot;MB&amp;quot;, all &amp;quot;C&amp;quot; should be &amp;quot;D&amp;quot;&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
Last update: 2015/10/13.&lt;br /&gt;
Note that some of the submissions are still running.&lt;br /&gt;
----&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting_Results&amp;diff=11748</id>
		<title>2016:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2016:Audio_Fingerprinting_Results&amp;diff=11748"/>
		<updated>2016-07-14T04:30:11Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2016 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2016:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
    |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
    ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
    ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
    ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT1&lt;br /&gt;
    | ACRCloud_1.2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT1.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT2&lt;br /&gt;
    | ACRCloud_1.0  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT2.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT3&lt;br /&gt;
    | ACRCloud_0.8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT3.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
    ! AT4&lt;br /&gt;
    | WW_6          ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2016/AT4.pdf PDF] || [http://www.acrcloud.com ACRCloud Team]&lt;br /&gt;
    |-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2016/afp/result_2016.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11744</id>
		<title>2015:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11744"/>
		<updated>2016-07-11T03:31:45Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2015:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ3&lt;br /&gt;
        | CYCG3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ3.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ4&lt;br /&gt;
        | CYCG4 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ4.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ5&lt;br /&gt;
        | CYCG5 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ5.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW3&lt;br /&gt;
        | ACRCloud_1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW3.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW4&lt;br /&gt;
        | ACRCloud_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW4.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP4&lt;br /&gt;
        | Sogou_AFP_V1_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP4.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP5&lt;br /&gt;
        | Sogou_AFP_V2_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP5.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP6&lt;br /&gt;
        | Sogou_AFP_V3_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP6.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW1&lt;br /&gt;
        | fingerprint_Mask1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW1.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW2&lt;br /&gt;
        | fingerprint_MASK2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW2.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
        ! TS1&lt;br /&gt;
        | STELLAR  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/TS1.pdf PDF] || [http://www.bafta.org/initiatives/commercial/research Toby Stokes]&lt;br /&gt;
        |-&lt;br /&gt;
        ! HLLC1&lt;br /&gt;
        | AudioFingerprint  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/HLLC1.pdf PDF] || chi hao hsieh, [http://idv.kh.usc.edu.tw/hungyi/ Hung-Yi Lo], Wei-Bin Liang, Chia-Ping Chen&lt;br /&gt;
        |-&lt;br /&gt;
----&amp;gt;&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/afp/result_2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
* Correction: &amp;quot;YCP&amp;quot; should be &amp;quot;YPC&amp;quot;, &amp;quot;GB&amp;quot; should be &amp;quot;MB&amp;quot;&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
Last update: 2015/10/13.&lt;br /&gt;
Note that some of the submissions are still running.&lt;br /&gt;
----&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11577</id>
		<title>2015:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11577"/>
		<updated>2015-10-26T15:06:15Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2015:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ3&lt;br /&gt;
        | CYCG3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ3.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ4&lt;br /&gt;
        | CYCG4 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ4.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ5&lt;br /&gt;
        | CYCG5 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ5.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW3&lt;br /&gt;
        | ACRCloud_1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW3.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW4&lt;br /&gt;
        | ACRCloud_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW4.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP4&lt;br /&gt;
        | Sogou_AFP_V1_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP4.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP5&lt;br /&gt;
        | Sogou_AFP_V2_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP5.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP6&lt;br /&gt;
        | Sogou_AFP_V3_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP6.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW1&lt;br /&gt;
        | fingerprint_Mask1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW1.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW2&lt;br /&gt;
        | fingerprint_MASK2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW2.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
        ! TS1&lt;br /&gt;
        | STELLAR  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/TS1.pdf PDF] || [http://www.bafta.org/initiatives/commercial/research Toby Stokes]&lt;br /&gt;
        |-&lt;br /&gt;
        ! HLLC1&lt;br /&gt;
        | AudioFingerprint  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/HLLC1.pdf PDF] || chi hao hsieh, [http://idv.kh.usc.edu.tw/hungyi/ Hung-Yi Lo], Wei-Bin Liang, Chia-Ping Chen&lt;br /&gt;
        |-&lt;br /&gt;
----&amp;gt;&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine status, and the number of cores used differ between submissions. The IDs and specs of the machines are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/afp/result_2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
Last update: 2015/10/13.&lt;br /&gt;
Note that some of the submissions are still running.&lt;br /&gt;
----&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11330</id>
		<title>2015:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11330"/>
		<updated>2015-10-18T11:55:25Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2015:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ3&lt;br /&gt;
        | CYCG3 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ3.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ4&lt;br /&gt;
        | CYCG4 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ4.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! CZ5&lt;br /&gt;
        | CYCG5 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/CZ5.pdf PDF] || [http://www.kugou.com ChuanYi Chen], [http://www.kugou.com Chaogang Zhang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW3&lt;br /&gt;
        | ACRCloud_1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW3.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! SW4&lt;br /&gt;
        | ACRCloud_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/SW4.pdf PDF] || [http://www.acrcloud.com Steve Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP4&lt;br /&gt;
        | Sogou_AFP_V1_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP4.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao],[http://corp.sogou.com/  Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP5&lt;br /&gt;
        | Sogou_AFP_V2_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP5.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao], [http://corp.sogou.com/ Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! YCP6&lt;br /&gt;
        | Sogou_AFP_V3_20150914  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/YCP6.pdf PDF] || [http://blog.csdn.net/yutianzuijin Guangchao Yao], [http://corp.sogou.com/ Yiqian Pan], [http://www.sogou.com Wei Chen]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW1&lt;br /&gt;
        | fingerprint_Mask1  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW1.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ZW2&lt;br /&gt;
        | fingerprint_MASK2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/ZW2.pdf PDF] || [http://hccl.ioa.ac.cn/ Zhichao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&amp;lt;!----&lt;br /&gt;
        ! TS1&lt;br /&gt;
        | STELLAR  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/TS1.pdf PDF] || [http://www.bafta.org/initiatives/commercial/research Toby Stokes]&lt;br /&gt;
        |-&lt;br /&gt;
----&amp;gt;&lt;br /&gt;
        ! HLLC1&lt;br /&gt;
        | AudioFingerprint  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2015/HLLC1.pdf PDF] || Chi Hao Hsieh, [http://idv.kh.usc.edu.tw/hungyi/ Hung-Yi Lo], Wei-Bin Liang, Chia-Ping Chen&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/afp/result_2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last update: 2015/10/13.&lt;br /&gt;
Note that some of the submissions are still running.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11313</id>
		<title>2015:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11313"/>
		<updated>2015-10-13T10:17:31Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2015:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP1&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/afp/result_2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last update: 2015/10/13.&lt;br /&gt;
Note that some of the submissions are still running.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11312</id>
		<title>2015:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2015:Audio_Fingerprinting_Results&amp;diff=11312"/>
		<updated>2015-10-13T10:14:23Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: Created page with &amp;quot;== Introduction == These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2015:Audio Fi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2015 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2015:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP1&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2015/afp/result_2015.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Last update: 2015/10/13.&lt;br /&gt;
Note that some of the submissions are still running.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10765</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10765"/>
		<updated>2014-10-30T07:59:52Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP1&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP2&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10764</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10764"/>
		<updated>2014-10-29T22:53:37Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
 Some of the submissions are still running; once their results are announced, this text will be deleted.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP1&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP2&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10763</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10763"/>
		<updated>2014-10-29T07:12:41Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
 Some of the submissions are still running; once their results are announced, this text will be deleted.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP1&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10762</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10762"/>
		<updated>2014-10-29T07:11:57Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
 Some of the submissions are still running; once their results are announced, this text will be deleted.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DP1&lt;br /&gt;
        | audfprint-master  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/DP1.pdf PDF] || [http://www.ee.columbia.edu/~dpwe/ Dan Ellis]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported alongside the top-1 hit rate in the table below. Note that running times are provided for reference only, since machine specs, machine load, and the number of cores used differ across submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a machine with a 2.4 GHz, 32-core CPU and 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10760</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10760"/>
		<updated>2014-10-27T00:33:55Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
 Some of the submissions are still running; once their results are announced, this text will be deleted.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine load, and the number of cores used differ between submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a 2.4 GHz, 32-core machine with 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10759</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10759"/>
		<updated>2014-10-25T12:46:10Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
Database size and running time are reported together with the top-1 hit rate in the following table. Note that running time is given for reference only, because machine specs, machine load, and the number of cores used differ between submissions. Machine IDs and specs are:&lt;br /&gt;
&lt;br /&gt;
* A and B: 1.9 GHz, 24-core CPU, 64 GB RAM&lt;br /&gt;
* C: a VM with an 8-core CPU and 32 GB RAM, hosted on a 2.4 GHz, 32-core machine with 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10758</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10758"/>
		<updated>2014-10-25T12:23:13Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Summary Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        | MRAF_8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10755</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10755"/>
		<updated>2014-10-25T03:32:39Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        | OS Fingerprinting ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || Guang Yang&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        | MRAF  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        | MRAF_2  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        | MRAF_3  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        | MRAF_4  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        | MRAF_5  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        | MRAF_6  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        | MRAF_7 ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |MRAF_ 8  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        | MRAF_9  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] || [http://www.doreso.com Lei Wang],  [http://www.doreso.com Runtao Wang]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10745</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10745"/>
		<updated>2014-10-24T07:50:16Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Overall Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de_Carvalho_Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10744</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10744"/>
		<updated>2014-10-24T07:48:02Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de_Carvalho_Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
===Overall Results===&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category: Results]]&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10743</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10743"/>
		<updated>2014-10-24T07:46:39Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Summary Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de_Carvalho_Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
===Overall Results===&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10742</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10742"/>
		<updated>2014-10-24T07:45:58Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Summary Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de_Carvalho_Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10741</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10741"/>
		<updated>2014-10-24T07:45:41Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Summary Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de_Carvalho_Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;csv p=3&amp;gt;2014/afp/result_2014.csv&amp;lt;/csv&amp;gt;&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10740</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10740"/>
		<updated>2014-10-24T05:33:34Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These are the results for the 2014 running of the Audio Fingerprinting task. For background information about this task set, please refer to the [[2014:Audio Fingerprinting]] page.&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de Carvalho Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10739</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10739"/>
		<updated>2014-10-24T04:25:19Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de Carvalho Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10738</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10738"/>
		<updated>2014-10-24T04:24:53Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* General Legend */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! GY1&lt;br /&gt;
        |  ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/GY1.pdf PDF] || [http://compmus.ime.usp.br Antonio de Carvalho Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW2&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW2.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW3&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW3.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW4&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW4.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW5&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW5.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW6&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW6.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW7&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW7.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW8&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW8.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW9&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW9.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! WW1&lt;br /&gt;
        |   ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2014/WW1.pdf PDF] ||&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10737</id>
		<title>2014:Audio Fingerprinting Results</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting_Results&amp;diff=10737"/>
		<updated>2014-10-24T04:05:24Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
== General Legend ==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; style=&amp;quot;text-align: left; width: 800px;&amp;quot;&lt;br /&gt;
        |- style=&amp;quot;background: yellow&amp;quot;&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; | Sub code&lt;br /&gt;
        ! width=&amp;quot;200&amp;quot; | Submission name&lt;br /&gt;
        ! width=&amp;quot;80&amp;quot; style=&amp;quot;text-align: center;&amp;quot; | Abstract&lt;br /&gt;
        ! width=&amp;quot;540&amp;quot; | Contributors&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! DB1&lt;br /&gt;
        | PPM-DJ ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2012/DB1.pdf PDF] || [http://compmus.ime.usp.br Antonio de Carvalho Junior], [http://www.pet.di.ufpb.br Leonardo Batista]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ULMS1&lt;br /&gt;
        | ShapeH ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2012/ULMS1.pdf PDF] || [http://julian-urbano.info Julián Urbano], [http://www.kr.inf.uc3m.es Juan Lloréns], [http://sites.google.com/site/jorgemorato/ Jorge Morato], [http://www.kr.inf.uc3m.es Sonia Sánchez-Cuadrado]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        ! ULMS2&lt;br /&gt;
        | ShapeL ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2012/ULMS2.pdf PDF] || [http://julian-urbano.info Julián Urbano], [http://www.kr.inf.uc3m.es Juan Lloréns], [http://sites.google.com/site/jorgemorato/ Jorge Morato], [http://www.kr.inf.uc3m.es Sonia Sánchez-Cuadrado]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
        ! ULMS3&lt;br /&gt;
        | ShapeG ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2012/ULMS3.pdf PDF] || [http://julian-urbano.info Julián Urbano], [http://www.kr.inf.uc3m.es Juan Lloréns], [http://sites.google.com/site/jorgemorato/ Jorge Morato], [http://www.kr.inf.uc3m.es Sonia Sánchez-Cuadrado]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ULMS4&lt;br /&gt;
        | ShapeTime ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2012/ULMS4.pdf PDF] || [http://julian-urbano.info Julián Urbano], [http://www.kr.inf.uc3m.es Juan Lloréns], [http://sites.google.com/site/jorgemorato/ Jorge Morato], [http://www.kr.inf.uc3m.es Sonia Sánchez-Cuadrado]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        ! ULMS5&lt;br /&gt;
        | Time ||  style=&amp;quot;text-align: center;&amp;quot; | [https://www.music-ir.org/mirex/abstracts/2012/ULMS5.pdf PDF] || [http://julian-urbano.info Julián Urbano], [http://www.kr.inf.uc3m.es Juan Lloréns], [http://sites.google.com/site/jorgemorato/ Jorge Morato], [http://www.kr.inf.uc3m.es Sonia Sánchez-Cuadrado]&lt;br /&gt;
        |-&lt;br /&gt;
&lt;br /&gt;
        |}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Summary Results==&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10475</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10475"/>
		<updated>2014-09-23T12:42:21Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluating it. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), and exactly one song corresponds to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled by the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing was removed from the database.&lt;br /&gt;
* If exactly one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) were removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one of them was kept in the database. Note that if a query clip corresponds to a removed song, the query's ground truth was changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of songs from the GTZAN music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with the top-1 hit rate as the performance measure.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, whose names follow the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
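For illustration, a minimal builder skeleton matching this command contract might look like the following. This is a hypothetical sketch, not the actual evaluation harness: the fingerprint function is a stand-in for a real feature extractor, and storing the whole database as one pickle file named afp.db is an arbitrary choice for the example.

```python
import os
import pickle
import sys

def fingerprint(mp3_path):
    # Placeholder: a real builder would decode the MP3 and extract
    # landmark/spectral features here. We just return the file name.
    return os.path.basename(mp3_path)

def builder(file_list_4_db, dir_4_db):
    # Read one database file path per line, skipping blank lines.
    with open(file_list_4_db) as f:
        paths = [line.strip() for line in f if line.strip()]
    # Map each source file path to its (placeholder) fingerprint.
    db = {p: fingerprint(p) for p in paths}
    # Write the database file(s) into %dir4db%, as the task requires.
    os.makedirs(dir_4_db, exist_ok=True)
    with open(os.path.join(dir_4_db, "afp.db"), "wb") as out:
        pickle.dump(db, out)

if __name__ == "__main__" and len(sys.argv) == 3:
    builder(sys.argv[1], sys.argv[2])
```

A real submission would replace the fingerprint placeholder and keep the resulting files within the 50 KB-per-minute storage budget described below.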
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
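The matcher's I/O contract can likewise be sketched as follows. Again this is a hypothetical skeleton: the lookup function stands in for an actual fingerprint search over the database in %dir4db% and here simply returns a dummy path.

```python
import sys

def lookup(query_path, dir_4_db):
    # Placeholder: a real matcher would load the fingerprint database
    # from dir_4_db and search it with the query's fingerprint.
    return "./AFP/database/000001.mp3"

def matcher(file_list_4_query, dir_4_db, result_file):
    # Read one query clip path per line, skipping blank lines.
    with open(file_list_4_query) as f:
        queries = [line.strip() for line in f if line.strip()]
    # Emit one tab-separated "queryFilePath dbFilePath" line per query.
    with open(result_file, "w") as out:
        for q in queries:
            out.write(q + "\t" + lookup(q, dir_4_db) + "\n")

if __name__ == "__main__" and len(sys.argv) == 4:
    matcher(sys.argv[1], sys.argv[2], sys.argv[3])
```

The tab between the two fields is required by the result-file format above.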
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. (For a database of 10,000 songs averaging 4 minutes each, the total storage should be around 50*10000*4/1000000 = 2 GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 24 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
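The storage arithmetic in the table can be checked directly (assuming, as the table does, 10,000 songs at roughly 4 minutes each):

```python
# Storage budget: 50 KB per minute of music.
kb_per_minute = 50
songs = 10000
avg_minutes = 4

# The table rounds 1 GB to 1e6 KB.
total_gb = kb_per_minute * songs * avg_minutes / 1e6
print(total_gb)  # 2.0
```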
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, "The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval," J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10474</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10474"/>
		<updated>2014-09-23T12:40:39Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Time and hardware limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluating it. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), and exactly one song corresponds to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled by the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing was removed from the database.&lt;br /&gt;
* If exactly one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) were removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one of them was kept in the database. Note that if a query clip corresponds to a removed song, the query's ground truth was changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of songs from the GTZAN music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with the top-1 hit rate as the performance measure.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, whose names follow the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we impose hard limits on runtime and storage. (These limits also implicitly bound memory usage.) The time and storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 50 KB per minute of music. (For a database of 10,000 songs averaging 4 minutes each, the total storage should be around 50*10000*4/1000000 = 2 GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 12 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, "The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval," J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10447</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10447"/>
		<updated>2014-09-03T03:28:03Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Submission Format */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for evaluating it. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), and exactly one song corresponds to each query. (That is, there are no out-of-vocabulary queries in the query set.) 965 of the files are from the GTZAN data set; all the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled by the following principles:&lt;br /&gt;
* If none of the songs in a repetition set has corresponding queries, then nothing was removed from the database.&lt;br /&gt;
* If exactly one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) were removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one of them was kept in the database. Note that if a query clip corresponds to a removed song, the query's ground truth was changed to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of songs from the GTZAN music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link] &lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (both parts), with the top-1 hit rate as the performance measure.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, whose names follow the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
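The top-1 hit rate used for evaluation is simply the fraction of queries whose retrieved database file matches the ground truth. A minimal sketch of that computation, assuming a hypothetical ground-truth file in the same two-column format (the helper names are illustrative, not part of the task definition):&lt;br /&gt;

```python
# Minimal top-1 hit rate computation for the AFP task (illustrative sketch).
# Both files use the task's two-column format: query path, a tab, database path.

def load_pairs(path):
    """Read a tab-separated file into a {query_path: db_path} dict."""
    pairs = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                query_path, db_path = line.split("\t")
                pairs[query_path] = db_path
    return pairs

def top1_hit_rate(result_file, groundtruth_file):
    """Fraction of queries whose retrieved database file matches the ground truth."""
    results = load_pairs(result_file)
    truth = load_pairs(groundtruth_file)
    hits = sum(1 for q, db in results.items() if truth.get(q) == db)
    return hits / len(truth)
```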
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more extracted features almost always lead to better AFP accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/50 of real time (For instance, if a music clip has a duration of 4 minutes, processing it for the database should take 4.8 sec on average. For a database of 10000 songs, the total time should be around 10000*4/60/50 = 13.33 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 5 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 5*5692/3600 = 7.91 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
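The totals quoted in the table follow from simple arithmetic; a sketch that reproduces them (the 4-minute average song length and the 5692-query count are the figures the table itself assumes):&lt;br /&gt;

```python
# Reproduce the budget arithmetic behind the limits table (illustrative sketch;
# the 4-minute average song length is the figure assumed in the table).
N_SONGS = 10000      # database size
AVG_MIN = 4          # assumed average song length, in minutes
N_QUERIES = 5692     # total number of queries (both parts of the query set)

# builder: 1/50 of real time, summed over the whole database, in hours
builder_hours = N_SONGS * AVG_MIN / 60 / 50       # about 13.33 hours

# storage: 50 KB per minute of music, summed over the database, in GB
storage_gb = 50 * N_SONGS * AVG_MIN / 1000000     # 2 GB

# matcher: 5 sec per query, summed over all queries, in hours
matcher_hours = 5 * N_QUERIES / 3600              # about 7.91 hours
```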
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, "The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval," J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10404</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10404"/>
		<updated>2014-08-30T09:12:26Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set have corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If exactly one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one of them is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is modified to point to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download.&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All query clips are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, named according to the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more extracted features almost always lead to better AFP accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/50 of real time (For instance, if a music clip has a duration of 4 minutes, processing it for the database should take 4.8 sec on average. For a database of 10000 songs, the total time should be around 10000*4/60/50 = 13.33 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 5 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 5*5692/3600 = 7.91 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, "The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval," J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10403</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10403"/>
		<updated>2014-08-30T09:12:14Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1][2]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set have corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If exactly one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one of them is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is modified to point to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download.&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All query clips are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, named according to the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more extracted features almost always lead to better AFP accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/50 of real time (For instance, if a music clip has a duration of 4 minutes, processing it for the database should take 4.8 sec on average. For a database of 10000 songs, the total time should be around 10000*4/60/50 = 13.33 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 5 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 5*5692/3600 = 7.91 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, "The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval," J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;br /&gt;
[2] Faults in the GTZAN Music Genre Dataset, available at http://imi.aau.dk/~bst/research/GTZANtable2/ , 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10402</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10402"/>
		<updated>2014-08-30T09:06:10Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 965 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)&lt;br /&gt;
&lt;br /&gt;
The GTZAN data set was purified according to [1]. Exact repetitions were handled according to the following principles:&lt;br /&gt;
* If none of the songs in a repetition set have corresponding queries, then nothing is removed from the database.&lt;br /&gt;
* If exactly one of the songs in a repetition set has corresponding queries, then all the other songs (which have no corresponding queries) are removed from the database.&lt;br /&gt;
* If two or more of the songs in a repetition set have corresponding queries, then only one of them is kept in the database. Note that if a query clip corresponds to a removed song, then the query's ground truth is modified to point to the kept song.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download.&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All query clips are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, named according to the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/000001.mp3&lt;br /&gt;
 ./AFP/database/000002.mp3&lt;br /&gt;
 ./AFP/database/000003.mp3&lt;br /&gt;
 ./AFP/database/000004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q000001.wav&lt;br /&gt;
 ./AFP/query/q000002.wav&lt;br /&gt;
 ./AFP/query/q000003.wav&lt;br /&gt;
 ./AFP/query/q000004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q000001.wav	./AFP/database/0000004.mp3&lt;br /&gt;
 ./AFP/query/q000002.wav	./AFP/database/0000054.mp3&lt;br /&gt;
 ./AFP/query/q000003.wav	./AFP/database/0001002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more extracted features almost always lead to better AFP accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/50 of real time (For instance, if a music clip has a duration of 4 minutes, processing it for the database should take 4.8 sec on average. For a database of 10000 songs, the total time should be around 10000*4/60/50 = 13.33 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 5 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 5*5692/3600 = 7.91 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[1] Bob L. Sturm, "The State of the Art Ten Years After a State of the Art: Future Research in Music Information Retrieval," J. New Music Research, vol. 43, no. 2, pp. 147–172, 2014.&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10353</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10353"/>
		<updated>2014-08-11T05:50:44Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) 1,000 of the files are from the GTZAN data set; the others are mainly English and Chinese pop songs. This data set is hidden and not available for download. Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download.&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All query clips are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, named according to the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more extracted features almost always lead to better AFP accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, processing it for the database should take 2.4 sec on average. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10352</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10352"/>
		<updated>2014-08-11T05:41:32Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Database */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating methodologies in audio fingerprinting.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there is no out-of-vocabulary query in the query set.) This data set is hidden and not available for download. Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips of wav format: These are hidden and not available for download.&lt;br /&gt;
* 1062 clips of wav format: These recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link].&lt;br /&gt;
&lt;br /&gt;
All query clips are mono recordings of 8-12 sec, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken down into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file listing the database audio files, named according to the convention uniqueKey.mp3. For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained next.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because more extracted features almost always lead to better AFP accuracy, we need to put hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time/storage limits for each step are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 1/100 of real time (For instance, if a music clip has a duration of 4 minutes, processing it for the database should take 2.4 sec on average. For a database of 10000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.) || 50KB for 1 minute of music. (For a database of 10000 songs, the total storage for the database should be around 50*10000*4/1000000 = 2GB.)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 2 secs for each query of around 10 seconds (Thus for 5692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.) || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
== Bibliography ==&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10343</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10343"/>
		<updated>2014-08-06T03:10:26Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Query set */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no public benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4630 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link]&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations and under various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by its unique key (e.g., uniqueKey.mp3). For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained below.&lt;br /&gt;
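As an illustration of this command interface only, a trivial builder skeleton might look like the following Python sketch. The fingerprinting step is a placeholder (it records just each file's unique key); a real submission would extract acoustic features here:&lt;br /&gt;

```python
# Skeleton of the "builder" interface: builder %fileList4db% %dir4db%
# The "fingerprint" below is a placeholder (the file's unique key only);
# a real system would extract audio features from each database file.
import os
import sys

def build_database(file_list_path, db_dir):
    """Read the file list and write a database index into db_dir."""
    os.makedirs(db_dir, exist_ok=True)
    index = []
    with open(file_list_path) as f:
        for line in f:
            audio_path = line.strip()
            if not audio_path:
                continue
            # uniqueKey is the base file name without extension
            unique_key = os.path.splitext(os.path.basename(audio_path))[0]
            index.append(unique_key + "\t" + audio_path)
    # Write a single index file; real systems may emit several database files,
    # as long as the total size stays within the storage limit.
    with open(os.path.join(db_dir, "index.txt"), "w") as out:
        out.write("\n".join(index))

if __name__ == "__main__" and len(sys.argv) == 3:
    build_database(sys.argv[1], sys.argv[2])
```

The matcher would then load whatever files the builder wrote from %dir4db% and emit the tab-separated result file described below.&lt;br /&gt;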
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one line per query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on memory and runtime. The time/storage limits for the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB for the database file(s)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10338</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10338"/>
		<updated>2014-07-28T05:42:53Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Query set */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no public benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4657 clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link]&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations and under various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by its unique key (e.g., uniqueKey.mp3). For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one line per query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on memory and runtime. The time/storage limits for the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB for the database file(s)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
	<entry>
		<id>https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10337</id>
		<title>2014:Audio Fingerprinting</title>
		<link rel="alternate" type="text/html" href="https://music-ir.org/mirex/w/index.php?title=2014:Audio_Fingerprinting&amp;diff=10337"/>
		<updated>2014-07-28T03:22:36Z</updated>

		<summary type="html">&lt;p&gt;Chung-Che Wang: /* Query set */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description ==&lt;br /&gt;
This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on such technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no public benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.&lt;br /&gt;
&lt;br /&gt;
== Data ==&lt;br /&gt;
=== Database ===&lt;br /&gt;
* The database contains 10,000 songs (*.mp3), with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.&lt;br /&gt;
&lt;br /&gt;
=== Query set ===&lt;br /&gt;
The query set has two parts:&lt;br /&gt;
* 4000 (???) clips in WAV format: these are hidden and not available for download.&lt;br /&gt;
* 1062 clips in WAV format: these recordings are noisy versions of George's music genre dataset. You can download this part of the query set via [http://mirlab.org/dataSet/public/queryPublic_George.rar this link]&lt;br /&gt;
&lt;br /&gt;
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations and under various kinds of environmental noise.&lt;br /&gt;
&lt;br /&gt;
== Evaluation Procedures ==&lt;br /&gt;
The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.&lt;br /&gt;
&lt;br /&gt;
== Submission Format ==&lt;br /&gt;
Participants are required to submit their algorithm broken into the following two parts:&lt;br /&gt;
&lt;br /&gt;
1. Database Builder&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 builder %fileList4db% %dir4db%&lt;br /&gt;
where %fileList4db% is a file containing the input list of database audio files, each named by its unique key (e.g., uniqueKey.mp3). For example:&lt;br /&gt;
 ./AFP/database/00001.mp3&lt;br /&gt;
 ./AFP/database/00002.mp3&lt;br /&gt;
 ./AFP/database/00003.mp3&lt;br /&gt;
 ./AFP/database/00004.mp3&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained below.&lt;br /&gt;
&lt;br /&gt;
2. Matcher&lt;br /&gt;
&lt;br /&gt;
Command format:&lt;br /&gt;
 matcher %fileList4query% %dir4db% %resultFile%&lt;br /&gt;
where %fileList4query% is a file containing the list of query clips. For example:&lt;br /&gt;
 ./AFP/query/q0001.wav&lt;br /&gt;
 ./AFP/query/q0002.wav&lt;br /&gt;
 ./AFP/query/q0003.wav&lt;br /&gt;
 ./AFP/query/q0004.wav&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
The result file gives the retrieved result for each query, one line per query, in the following format:&lt;br /&gt;
&lt;br /&gt;
 %queryFilePath%	%dbFilePath%&lt;br /&gt;
&lt;br /&gt;
where these two fields are separated by a tab. Here is a more specific example:&lt;br /&gt;
&lt;br /&gt;
 ./AFP/query/q0001.wav	./AFP/database/00004.mp3&lt;br /&gt;
 ./AFP/query/q0002.wav	./AFP/database/00054.mp3&lt;br /&gt;
 ./AFP/query/q0003.wav	./AFP/database/01002.mp3&lt;br /&gt;
 ..&lt;br /&gt;
&lt;br /&gt;
== Time and hardware limits ==&lt;br /&gt;
Because extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on memory and runtime. The time/storage limits for the different steps are shown in the following table:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; |-&lt;br /&gt;
! Steps !! Time limit !! Storage limit&lt;br /&gt;
|-&lt;br /&gt;
| builder || 24 hours || 3 GB for the database file(s)&lt;br /&gt;
|-&lt;br /&gt;
| matcher || 10 hours || None&lt;br /&gt;
|}&lt;br /&gt;
Submissions that exceed these limitations may not receive a result.&lt;br /&gt;
&lt;br /&gt;
== Potential Participants ==&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
name / email&lt;br /&gt;
&lt;br /&gt;
= Bibliography =&lt;/div&gt;</summary>
		<author><name>Chung-Che Wang</name></author>
		
	</entry>
</feed>