The following prizes were awarded at the Interspeech 2015 conference, as the result of the associated challenges and award selections:
- Christian Benoît Award
- Best SpeCom Paper
- Best CSL Paper (already awarded at EUSIPCO 2015)
- Best Student Papers
- Show & Tell Prize
- ComParE Award: Degree of Nativeness
- ComParE Award: Degree of Parkinson’s Condition
- ComParE Award: Eating Condition
- ZeroSpeech: Best Paper Award
The names of the individual winners can be found in the announcements made officially before the plenary at the closing ceremony of the Interspeech 2015 conference: Awards_Interspeech2015.
The whole Interspeech 2015 team congratulates all winners and explicitly thanks, and expresses its gratitude to, the authors of all the excellent works nominated for these prizes and awards.
For a copy of the calls and nominations, please scroll down.
Eighth Christian Benoît Award
supported by the International Speech Communication Association, the Association Francophone de la Communication Parlée, and GIPSA-lab
Deadline Jun 8, 2015 (extended)
The Christian Benoît Award is presented periodically by the Association Christian Benoît(**). It is given to promising young scientists in the domain of SPEECH and FACE-TO-FACE COMMUNICATION, and may concern basic or applied research projects. The award provides the selected scientist with financial support for the development of a short-term research project that
- Illustrates concretely the achievements of her/his research work
- Could help promote this work to the scientific community and to grant agencies
- Gives an overall view of the state of the art in the research domain
The proposed research project can take the form of a demonstrator, a technical product, or a pedagogical multimedia product (movie, website, interactive software…).
The award is valued at 7,500 Euro(*).
The commitments of the elected scientist are:
- To attend the Interspeech 2015 Conference in Dresden, Germany
- To deliver the final product of the project within 2 years
- To present her/his results at a workshop such as, among others, AVSP, ISSP, or Speech Prosody.
In the application, the candidate should provide
- A statement of research interests (2 pages max),
- A detailed curriculum vitae, including a selection of the publications most relevant to the project,
- A description of the proposed short-term research project (15 pages max). The description should include a presentation of the scientific and/or pedagogical objectives and of the methodological aspects, its connection to the applicant's previous research work, and a detailed provisional budget.
Applications will be evaluated by an international committee comprising experts in the field of speech and face-to-face communication and representatives of the institutions supporting the award.
Applications should be sent to Pascal.Perrier@gipsa-lab.fr before Jun 8, 2015.
Electronic submissions are mandatory.
The successful candidate will be notified by June 15, 2015. The award will be delivered at the Interspeech 2015 Conference in Dresden (Germany). For further information, please contact Pascal Perrier.
* 3,500 Euros will be paid immediately; the remaining 4,000 Euros will be available upon receipt of the multimedia project by the Christian Benoît Association. Travel and registration costs for attending the Interspeech 2015 Conference must be covered by this grant.
** For details about the Association Christian Benoît and the past awardees of the Christian Benoît Award see http://www.gipsa-lab.fr/acb/
ISCA Best Student Paper Awards
Each year, ISCA presents the ISCA Best Student Paper Award to the student authors of the three best INTERSPEECH papers that have a student as first author. The decision is based on the anonymous paper reviews and on the paper presentation at the conference.
Finalists for Best Student Papers
- T. Villa-Cañas, J.D. Arias-Londoño, J.R. Orozco-Arroyave, J.F. Vargas-Bonilla, E. Nöth
Low-frequency components analysis in running speech for the automatic detection of Parkinson’s disease
PaperID 1037 Mon-O-4-3 11:40-12:00
- Guillaume Barbier, Pascal Perrier, Lucie Ménard, Yohan Payan, Mark Tiede, Joseph Perkell
Speech planning in 4-year-old children versus adults: Acoustic and articulatory analyses
PaperID 1117 Mon-O-6-4 15:30-15:50
- Benjamin Milde and Chris Biemann
Using Representation Learning and Out-of-domain Data for a Paralinguistic Speech Task
PaperID 1022 Mon-SP2b-5 17:50-18:00
- Reza Sahraeian, Dirk Van Compernolle, Febe de Wet
Under-resourced Speech Recognition based on the Speech Manifold
PaperID 1025 Tue-P-10-1 09:00-11:00
- Satyabrata Parida, Pattem Ashok Kumar, Prasanta Kumar Ghosh
Estimation of the air-tissue boundaries of the vocal tract in the mid-sagittal plane from electromagnetic articulograph data
PaperID 388 Wed-P-18-1 09:00-11:00
- Nicholas Ruiz, Qin Gao, William Lewis, Marcello Federico
Adapting Machine Translation Models toward Misrecognized Speech with Text-to-Speech Pronunciation Rules and Acoustic Confusability
PaperID 1492 Wed-O-30-1 14:00-14:20
- Qing Wang, Jun Du, Xiao Bao, Zi-Rui Wang, Li-Rong Dai, Chin-Hui Lee
A Universal VAD Based on Jointly Trained Deep Neural Networks
PaperID 1578 Wed-O-31-2 14:20-14:40
- Vimal Manohar, Daniel Povey, Sanjeev Khudanpur
Semi-supervised Maximum Mutual Information Training of Deep Neural Network Acoustic Models
PaperID 1418 Wed-O-35-2 16:50-17:10
- Elise Michon, Emmanuel Dupoux, Alejandrina Cristia
Salient dimensions in implicit phonotactic learning
PaperID 1318 Wed-O-36-3 17:10-17:30
- Vijayaditya Peddinti, Daniel Povey, Sanjeev Khudanpur
A time delay neural network architecture for efficient modeling of long temporal contexts
PaperID 1474 Thu-P-27-1 09:00-11:00
- Huy Phan, Lars Hertel, Marco Maass, Radoslaw Mazur, and Alfred Mertins
Representing Nonspeech Audio Signals through Speech Classification Models
PaperID 875 Thu-O-45-6 15:40-16:00
- Raphael Ullmann, Ramya Rasipuram, Mathew Magimai-Doss, and Hervé Bourlard
Objective Intelligibility Assessment of Text-to-Speech Systems Through Utterance Verification
PaperID 1017 Thu-O-47-6 15:40-16:00