SIGIR 2003: Workshop on the
Evaluation of Music Information Retrieval (MIR) Systems, August 1,
2003, Toronto, Canada.
If you have not already done so, please
familiarize yourself with the background information found at
At the above URL you will find example
White Papers from the previous workshops. A general introduction to the
"MIR/MDL Evaluation Frameworks Project," of which this workshop is a
part, can be found at:
Information about SIGIR 2003: http://www.sigir2003.org/
If you have any comments, suggestions,
or questions, please contact me, J. Stephen Downie, at firstname.lastname@example.org.
Special thanks to the Andrew W. Mellon Foundation for its support of
the "MIR/MDL Evaluation Frameworks Project".
Two classes of participants are
envisioned: 1) presenters; and 2) audience members. Presenters
will submit written briefing documents (i.e., White Papers) prior to the
workshop. I plan on including these briefing documents in the growing
collection at http://www.music-ir.org/evaluation.
Based upon prior experience, there will be 8 to 12 formal presenters. Audience members,
while not acting as formal presenters, will be encouraged to respond to
the presentations, as active debate on the recommendations being put forward
is a key goal of the workshop.
We are looking for the submission of scholarly "White
Papers" or "Position Papers". These papers will be scholarly
in the sense that they must conform to the standards of attribution and
referencing generally accepted in the academic world. The "White Paper"
notion, however, does afford us latitude as to content. Your personal
and professional expertise, experiences, and judgements are explicitly
sought to help the MIR/MDL communities design and manage an ongoing
framework for meaningful and rigorous MIR/MDL evaluation. As such, if
you have an opinion or argument, either for, or against, a particular
aspect of MIR/MDL evaluation and you can provide sufficient background
information (i.e., supporting references) that will help clarify your
opinion or argument, then I am almost certain that your submission will
be accepted. Remember, the MIR/MDL research communities are
multi-disciplinary, and this fact should be taken into account as you
present your opinions (i.e., provide pointers to background information
so that non-subject experts can better appreciate your arguments and opinions).
Basic Open Questions and Topics:
The following non-exclusive (and by no means
all-encompassing) list of open questions should help you understand
just a few of the many possible paper topics:
These are just a few of the possible
questions/topics that can be addressed; the underlying questions are:
- As a music librarian, are there
issues that evaluation standards must address for their results to be
credible? Do you know of possible collections that might form the basis
of a test collection? What prior research should we be considering?
- As a musicologist, what things need
examination that are possibly being overlooked?
- As a digital library (DL) developer,
what standards for evaluation should we borrow from the traditional DL
community? Any perils or pitfalls that we should consider?
- As an audio engineer, what do you
need to test your approaches? What methods have worked in other
contexts that might or might not work in the MIR/MDL context?
- As an information retrieval
specialist, what lessons have you learned about other traditional IR
evaluation frameworks? Any suggestions about what to avoid or consider
as we build our MIR/MDL framework from "scratch"?
- As an intellectual property expert,
what rights and responsibilities will we have as we strive to build and
distribute our test collections?
- As an interface/human computer
interaction (HCI) expert, what tests should we consider to validate our
many different types of interfaces?
- As a business person, what format
of results will help you make selection decisions? Are there business
research models and methods that should be considered?
- As a computer scientist, what are
the strengths and weaknesses of the CS approach to validation in the
MIR/MDL context?
- How do we determine, and then
appropriately classify, the tasks that should make up the legitimate
purviews of the MIR/MDL domains?
- What do we mean by "success"? What
do we mean by "failure"?
- How will we decide that one MIR/MDL
approach works better than another?
- How do we best decide which MIR/MDL
approach is best suited for a particular task?
Major Workshop Themes
Major general themes to be addressed in
the workshop include:
To address these major themes,
participants will be prompted to provide recommendations and commentary
on specific sub-components of the themes. For example, a non-exclusive
list of possible presentations includes suggestions and commentary on:
- How do we adequately comprehend the
complex nature of music information so that we can properly construct
our evaluation recommendations?
- How do we adequately capture the
complex nature of music queries so proposed experiments and protocols
are well-grounded in reality?
- How do we deal with the “relevance”
problem in the MIR context (i.e., what does “relevance” really mean in
the MIR context)?
- How do we continue the expansion
of a comprehensive collection of music materials to be used in
evaluation?
- How do we manage the interplay
between TREC-like and other potential evaluation paradigms?
- How do we integrate the evaluation
of MIR systems with the larger framework of MIR evaluation (i.e., What
aspects are held in common and what are unique to MIR?)?
- How best to ground evaluation
methods in real-world requirements.
- How to facilitate the creation of
data-rich query records that are both grounded in real-world
requirements and neutral with respect to the retrieval technique(s) being
evaluated.
- How the possible adoption, and
subsequent validation, of a “reasonable person” approach to “relevance”
assessment might address the MIR “relevance” problem.
- How to develop new models and
theories of “relevance” in the MIR context.
- How to evaluate the utility, within
the MIR context, of already-established evaluation metrics (e.g.,
precision and recall, etc.).
- How to support the ongoing
acquisition of music information (audio, symbolic and metadata) to
enhance the development of a secure, yet accessible, research
environment that allows researchers to remotely participate in the use
of the large-scale testbed collection.
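For readers outside the IR community, the already-established metrics named above can be sketched in a few lines. The function and the toy query result below are purely illustrative assumptions for exposition; they are not part of any proposed framework:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for a single query.

    retrieved: the ranked list of items a system returned.
    relevant:  the ground-truth set of items judged relevant.
    """
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical query: the system returns 4 pieces, 2 of which are
# relevant, out of 3 relevant pieces in the whole collection.
p, r = precision_recall(["a", "b", "c", "d"], {"a", "c", "e"})
# p == 0.5 (2 of 4 retrieved are relevant)
# r == 2/3 (2 of 3 relevant pieces were retrieved)
```

Whether such set-based measures transfer meaningfully to music queries, where "relevance" itself is contested, is exactly the kind of question a White Paper might take up.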
Unlike many other Workshops, you need not
necessarily attend the Workshop for your submission to be considered for
inclusion in the "White Paper" collection. It is much more important
that your expertise be heard and considered by those interested in
MIR/MDL evaluation issues.
If you plan on attending and/or
submitting a "White Paper" please drop me an email at email@example.com that briefly outlines your
intentions at your earliest convenience. I will be structuring Workshop
presentation schedules with an eye toward being as inclusive as
possible.
Due to email constraints, I would
appreciate it if submissions were, where possible, mounted on the Web and the
URL of the submission emailed to me. I or my assistant will then
download your submission.
Papers should range between 2,000 and 5,000 words.
Deadline for submission: 5
We will be using the basic ACM-style
formatting structure. This is the structure that was used for the 2nd
International Symposium on Music Information Retrieval, Bloomington, IN
(ISMIR 2001). The formatting instructions and templates can be found at
the ISMIR 2001 site http://ismir2001.indiana.edu/template.html.
You may use any standard bibliographic reference style, including those
in the Author-Date genre.
Turnaround time from submission to
printing/mounting on the Web will be VERY TIGHT so I ask
that submissions be made in the PDF format if at all possible. In cases
of undue hardship MS-Word or RTF files *may be* considered. Please be
considerate of this formatting request: we simply do not have the
resources or the time to reformat documents into shape for printing.
music-ir.org is hosted by the ISRL (Information Science Research Laboratories), which is part of GSLIS (the Graduate School of Library and Information Science) at UIUC (the University of Illinois at Urbana-Champaign).
J. Stephen Downie
15 April 2003