| Time | Speaker(s) | Title |
|---|---|---|
| 9:30 – 9:45 | Ilan Kernerman | Opening words. Dictionaries vs. lexicographic data & the need for scratching up new data |
| 9:45 – 10:00 | Jorge Gracia | Previous experiences in translation inference: the Apertium RDF case |
| 10:00 – 10:15 | Noam Ordan | Shared task description |
| 10:15 – 10:30 | Morris Alper & Noam Ordan | Baseline computation and evaluation process |
| 10:30 – 11:00 | *coffee break* | |
| 11:00 – 11:30 | Thomas Proisl, Philipp Heinrich, Stefan Evert and Besim Kabashi | Translation inference across dictionaries via a combination of graph-based methods and co-occurrence statistics |
| 11:30 – 12:00 | Kathrin Donandt, Christian Chiarcos and Maxim Ionov | Using Machine Learning for Translation Inference Across Dictionaries |
| 12:00 – 12:15 | Tom Knorr | Translation Inference Across Dictionaries TIAD 2017 – Shared Task Data Analysis |
| 12:15 – 12:30 | Uliana Sentsova | Report on TIAD participation |
Call for Participation
We are pleased to invite participation in the first shared task on Translation Inference Across Dictionaries. TIAD-2017 aims to explore the best methods and techniques for automatically generating new bilingual dictionaries from existing resources. It will make use of cross-lingual lexicographic data offered by K Dictionaries Ltd (KD), and the results of the participating systems will be validated against KD’s existing resources as well as by human translators. The systems will be presented at a workshop held as part of the first Language, Data and Knowledge conference in Galway, Ireland, on 18 June 2017 (http://ldk2017.org). The papers describing the participating systems will be published on CEUR-WS (http://ceur-ws.org).
Each system should be described in a 6–8 page paper formatted according to the LNCS guidelines [https://springer.com/computer/lncs?SGWID=0-164-6-793341-0]. Submissions need not be full scientific papers, but each must contain a focused description of the participant’s system together with a discussion of the results: how the data were processed, which algorithms were applied, what results were obtained, and conclusions and ideas for future improvements. All papers will be peer-reviewed prior to publication to confirm that these aspects are well covered.
- Jorge Gracia, Ontology Engineering Group, Universidad Politécnica de Madrid
- Noam Ordan, K Dictionaries and The Arab Academic College of Education, Haifa
- Ilan Kernerman, K Dictionaries, Tel Aviv
Irith Ben-Arroyo Hartman, University of Haifa, Israel
Thierry Declerck, German Research Center for Artificial Intelligence, Germany
Thierry Fontenelle, Translation Center for the Bodies of the EU, Luxembourg
Mikel Forcada, Universidad de Alicante, Spain
Jorge Gracia, Universidad Politécnica de Madrid, Spain
Miloš Jakubíček, Lexical Computing, Czech Republic
Jelena Kallas, Institute of the Estonian Language, Estonia
Ilan Kernerman, K Dictionaries, Israel
Iztok Kosem, Trojina Institute and University of Ljubljana, Slovenia
Nikola Ljubešić, University of Zagreb, Croatia
Shervin Malmasi, Harvard University, USA
John McCrae, National University of Ireland, Galway
Elena Montiel-Ponsoda, Universidad Politécnica de Madrid, Spain
Preslav Nakov, Hamad Bin Khalifa University, Qatar
Noam Ordan, K Dictionaries and The Arab Academic College of Education, Israel
Georg Rehm, German Research Center for Artificial Intelligence, Germany
Victor Rodriguez-Doncel, Universidad Politécnica de Madrid, Spain
Liling Tan, Saarland University / Nanyang Technological University
Carole Tiberius, Institute of Dutch Language, Netherlands
Marta Villegas, Spain
Marcos Zampieri, University of Cologne, Germany
Noam Ordan: email@example.com