Invited Speakers

Zoltán Szlávik – IBM

Zoltán Szlávik leads the IBM Benelux Center for Advanced Studies, studying, among other topics, (Enterprise) Crowdsourcing, Nichesourcing and Cognitive Computing, and applying these concepts to real-world industrial challenges in prototypes and first-of-a-kind implementations. He is responsible for University Programmes in the Netherlands, collaborating closely with several universities on research, innovation and education. He is also affiliated with VU University Amsterdam (Web and Media Group) and TU Delft (Delft Data Science). Prior to this role he was a Data Scientist in IBM Netherlands’ Advanced Analytics department, working directly on various client projects. Before joining IBM, he was a postdoctoral researcher and lecturer at VU University Amsterdam, focusing on Data Mining, Machine Learning and Recommender Systems, with a keen interest in research with social impact. He received his PhD in Information Retrieval (IIR, Summarisation) from Queen Mary University of London.

Even the changes are changing: A new age of cognitive computing

Watson, the computer that achieved prominence in 2011 by defeating the all-time best players of the American TV quiz show Jeopardy!, was a herald of a new age. Suddenly it was possible for machines to understand us in our own language. In the six years since, new AI and cognitive capabilities for machines have been announced at a dizzying pace, yet in the 60 years before Watson, AI moved comparatively slowly. A few simple factors have helped move us into the cognitive computing age, such as the availability of data and the power of computation, but there is a deeper reason. Artificial intelligence itself has had to change, from a view of machines as perfect rational thinkers to an understanding that cognition is intimately tied to perception, which is imperfect, and that knowledge and reasoning are therefore inherently subjective and ambiguous. Once we accept this, we can continue changing.

Kathleen McKeown – Columbia University

Kathleen McKeown is the Director of the Data Science Institute and the Henry and Gertrude Rothschild Professor of Computer Science at Columbia University. She served as Department Chair from 1998 to 2003 and as Vice Dean for Research for the School of Engineering and Applied Science for two years. A leading scholar and researcher in the field of natural language processing, McKeown focuses her research on big data; her interests include text summarization, question answering, natural language generation, and multilingual applications. She has received numerous honors and awards, including being named an AAAI Fellow, a Founding Fellow of the Association for Computational Linguistics, and an ACM Fellow. Early on she received the National Science Foundation Presidential Young Investigator Award and a National Science Foundation Faculty Award for Women. In 2010, she won both the Columbia Great Teacher Award (an honor bestowed by the students) and the Anita Borg Woman of Vision Award for Innovation.

Data Science and the Analysis of Language: From Fact to Fiction

Data science holds the promise to solve many of society’s most pressing challenges, but much of the necessary data is locked within the volumes of unstructured data on the web, including language, speech and video. In this talk, I will describe how data science approaches are being used in research projects that draw from language data along a continuum from fact to fiction. I will present research on tracking disasters over time and on the use of data science in understanding subjective, personal narratives from those who have experienced disaster. I will then turn to work on monitoring social media to identify posts by gang-involved youth threatening violence or mourning loss. I will close with research on the analysis of novels to provide evidence for or against literary theory.

Antal van den Bosch – Radboud University

Antal van den Bosch is director of the Meertens Institute in Amsterdam and Professor of Language and Speech Technology at the Centre for Language Studies of Radboud University, Nijmegen, the Netherlands. He is a member of the Royal Netherlands Academy of Arts and Sciences and an ECCAI Fellow. His research focuses on developing machine learning and language technology, and lies at the intersection of the two fields: computers that learn to understand and generate natural language. His interests include text mining as social and cultural data mining, digital humanities, the Dutch language, and open-source natural language processing software development.

Processing text as socio-economic and cultural data

We are in a better position than ever to treat text as a ‘big data’ resource and to perform large-scale sweeps in search of patterns of human social behavior and communicative and cultural expression. I review recent developments that link text analytics to social sciences and humanities research. Scholars from these areas, only moderately impressed by the capacity to “extract and label all entities in a text”, challenge text analytics developers to consider textual corpora as, for example, networks of interconnected texts, or as texts that can be grouped by individual language users and social groups. Doing this reveals how intimately text is related to demographics and behavior, and how automation can help social sciences and humanities research.

Graham R. Isaac – NUI Galway

After studying Celtic languages and literatures in Aberystwyth, Graham Isaac held university teaching and research posts variously in Galway, Bonn, Freiburg, Berlin and Aberystwyth. He has been a lecturer in Welsh in Galway since 2005. His research specialises in the comparative-historical and typological linguistics of the Celtic languages and in aspects of medieval Irish and Welsh literatures.

Some features of the grammars of Irish and Welsh