
Invited Talks

ACL 2022 Will Feature Two New Types of Invited Talks

  1. Spotlight Talks for Young Rising Stars

This year at ACL 2022, we are introducing a new type of invited talk, given as part of a plenary session with five speakers: Spotlight Talks by Young Rising Stars (STIRS). A short note on this new initiative and our selection process: the STIRS talks are given by young rising stars in NLP, selected on the basis of their scholarship, who have just begun their careers as new faculty members or the equivalent in industry (up to five years after their PhD). These talks will highlight some of their best work. We will hold one plenary session for these talks. We collected nominations from over 70 members of the ACL Exec and the ACL Fellows, and then made the final selection. The five speakers and their talks are listed below.

Eunsol Choi

University of Texas at Austin, USA

Title: Information-seeking Language Understanding in a Dynamic World


Although knowledge is currently accessible in ways previously considered impossible, much of it remains out of reach of non-expert information seekers. Knowledge-rich natural language systems are key to lowering such barriers of expertise and cost. Question answering (QA) is an intuitive and flexible interface to fulfill this goal. However, current paradigms in QA dramatically simplify the problem, discounting the fundamental roles that a dynamically changing world and users play in information-seeking interactions. In this talk, I will outline my lab’s vision for natural language systems that serve information-seeking users, and show how our work addresses fundamental challenges in expanding the QA paradigm towards building such systems. This includes expanding the scope of QA systems to include temporal and geographical contexts, modeling lengthy answers with a discourse structure, and leveraging interactions with users to design continually learning systems.


Eunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin and a visiting researcher at Google AI. Her research spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in dynamic real-world contexts. She received a Ph.D. in Computer Science and Engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University. She is a recipient of a Facebook Research Fellowship, a Google Faculty Research Award, and an outstanding paper award at EMNLP 2021.


Ryan Cotterell

ETH Zürich, Switzerland

Title: A Hot Take on Sampling from Probabilistic Text Generators


Today’s neural language models appear to model the distribution over sentences well, i.e., they are able to assign high probability to held-out text. However, those same models frequently underperform when used to generate text; indeed, the text that neural language models place high probability on is often dull and repetitive. So, an inquisitive mind might ask: What’s up with that? In this talk, I will explore this apparent contradiction through an information-theoretic lens. Specifically, I take the position that humans use language as a communication channel and, in so doing, tend to utter sentences that are concise and efficient, but also easy to understand. In that light, I assert that, when we use language models to generate text, we ought to adopt a similar principle. This principle leads to a simple sampling strategy, which I christen typical sampling. Rather than always choosing words from the high-probability region of the distribution at every iteration, typical sampling selects words with information content (negative log-probability) close to the entropy of the conditional distribution p(y | y1, …, yk), i.e., the average information content of the distribution. And, if you’re one of those researchers who primarily cares about making the nUmBeRs gO uP, there’s something for you, too! We find that typical sampling outperforms several recently proposed sampling algorithms in terms of quality while consistently reducing the number of degenerate repetitions.
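The sampling rule described in this abstract can be sketched in a few lines: given a next-token distribution, keep the tokens whose surprisal (negative log-probability) is closest to the distribution's entropy, up to some cumulative probability mass, and sample from that set. The sketch below is our own minimal illustration, not the speaker's reference implementation; the function name and the truncation-mass parameter are assumptions for the example.

```python
import math

def typical_sampling_filter(probs, mass=0.95):
    """Minimal sketch of typical sampling (not the authors' implementation).

    Rank tokens by how close their surprisal, -log p, is to the entropy of
    the distribution, then keep the closest tokens until their cumulative
    probability reaches `mass`. Returns the indices of the kept tokens,
    i.e., the candidate set one would then sample from.
    """
    # Entropy of the conditional distribution: H = -sum_i p_i * log(p_i)
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    # Sort token indices by |surprisal - entropy|; zero-probability
    # tokens are pushed to the end (infinite deviation).
    ranked = sorted(
        range(len(probs)),
        key=lambda i: abs(-math.log(probs[i]) - entropy)
        if probs[i] > 0 else math.inf,
    )
    # Keep the most "typical" tokens until the mass threshold is met.
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= mass:
            break
    return kept
```

In a full decoder one would renormalize the kept probabilities and sample from them at each generation step; the point of the filter is that it excludes both very high-probability (low-information, often repetitive) tokens and very low-probability tokens, rather than simply truncating the tail.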



Ryan is an assistant professor of computer science at ETH Zürich, where he has been since 2020. Prior to that, he spent a year as a lecturer at the University of Cambridge. He defended his PhD in 2019 at Johns Hopkins University, where his advisor was Jason Eisner. Ryan likes probability, word formation, and hierarchical structure, inter alia.


Sebastian Ruder

Google Research, Germany

Title: Scaling NLP Systems to the Next 1000 Languages


Research in natural language processing (NLP) has seen striking advances in recent years but most of this success has focused on English and a few other languages with large amounts of available data. In this talk, I will discuss my research on making NLP systems more multilingual. In order to make progress in this area, we need to design datasets reflective of a language's context and models capable of modelling a language's phenomena. I will highlight challenges such as evaluation in the face of limited labeled data and generalizing to typologically diverse languages and discuss approaches that help mitigate them. Finally, I will provide an outlook of data and modelling strategies on the way to scaling NLP systems to the next 1000 languages.


Sebastian Ruder is a research scientist at Google Research working on NLP for under-represented languages and based in Berlin. He was previously a research scientist at DeepMind, London. He completed his Ph.D. in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics, while working as a research scientist at Dublin-based text analytics startup AYLIEN. Previously, he studied Computational Linguistics at the University of Heidelberg, Germany and at Trinity College, Dublin. He is interested in transfer learning for NLP and making ML and NLP more accessible.


Swabha Swayamdipta

University of Southern California (starting in fall), USA

Title: The Devil’s in the Data: Mapping and Generating Datasets for Robust Generalization


As large language models continue to dominate the field of Natural Language Processing, there is a rush towards investment in scale. However, for datasets, the bedrock of NLP, is scale indeed the panacea it is purported to be? I will address this question by presenting data maps: two-dimensional representations of datasets obtained through a model’s training dynamics, i.e., how the model’s confidence evolves over training. These insights into the data landscape will lead to a novel data collection strategy involving humans and generative models that prioritizes instances from specific regions of the data map. I will showcase a new benchmark for the natural language inference task, WaNLI, which challenges state-of-the-art models and leads to better generalization. Overall, I will argue for a renewed emphasis on data quality over scale, which could potentially bolster successes in this new era of NLP.


Swabha Swayamdipta is the soon-to-be Gabilan Assistant Professor of Computer Science at the University of Southern California, and a postdoctoral researcher at the Allen Institute for AI. Her research interests are in natural language processing, with a focus on studying data distributions to uncover and address spurious biases and annotation artifacts, towards improving robustness and generalization. Swabha received her PhD from Carnegie Mellon University and her Master’s from Columbia University. Her work has received an outstanding paper award at NeurIPS 2021 and an honorable mention for the best paper at ACL 2020.


Diyi Yang

Georgia Tech, USA

Title: Socially Aware NLP for Social Impact


Natural language processing (NLP) has had increasing success and produced extensive industrial applications. Yet despite being sufficient to enable these applications, current NLP systems often ignore the social part of language (e.g., who says it, in what context, and for what goals), which severely limits the functionality of these applications and the growth of the field. In this talk, I introduce socially aware NLP to show how we can study and build NLP systems from a social perspective, using a set of specific studies around hate speech, dialect-inclusive language understanding, and positive reframing. I conclude by discussing how to build this interdisciplinary subfield of socially aware language technologies for social impact.


Diyi Yang is an assistant professor in the School of Interactive Computing at Georgia Tech. Her research interests are computational social science and natural language processing. Her research goal is to understand the social aspects of language and then build socially aware NLP systems to better support human-human and human-computer interaction. Her work has received multiple best paper nominations or awards at ICWSM, EMNLP, SIGCHI, and CSCW.  She is a recipient of Forbes 30 under 30 in Science (2020),  IEEE “AI 10 to Watch” (2020), the Intel Rising Star Faculty Award (2021),  Microsoft Research Faculty Fellowship (2021), and NSF CAREER Award (2022).

Moderator: Bonnie Webber

University of Edinburgh, UK


Bonnie Webber received her PhD from Harvard University and taught at the University of Pennsylvania in Philadelphia for 20 years before joining the School of Informatics at the University of Edinburgh, where she is now professor emeritus.


Known for early research on "cooperative question-answering" and extended research on discourse anaphora and discourse relations, she has served as President of the Association for Computational Linguistics (ACL) and Deputy Chair of the European COST action IS1312, "TextLink: Structuring Discourse in Multilingual Europe". Along with Aravind Joshi, Rashmi Prasad, Alan Lee and Eleni Miltsakaki, she is co-developer of the Penn Discourse TreeBank, including the recently released PDTB-3.


She is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computational Linguistics (ACL), and the Royal Society of Edinburgh (RSE). In July, she was awarded the ACL Lifetime Achievement Award for 2020. In both the RSE and the ACL, she continues to work towards ensuring that women are recognized for their achievements in the NLP community and in Science and Technology more generally.

2. The Next Big Ideas Talks

The second new type of invited talk to be featured at this year’s ACL 2022 as a plenary session is The Next Big Ideas Talks. Below we present our amazing lineup of speakers, in alphabetical order, each of whom will tell us what they see as The Next Big Idea in a 10-minute presentation followed by a 20-minute QA session moderated by Iryna Gurevych. To keep the next big ideas a surprise, we will post them after the event.



Marco Baroni

Universitat Pompeu Fabra, Spain


Marco Baroni received a PhD in Linguistics from the University of California, Los Angeles, in 2000. After several positions in research and industry, he joined the Center for Mind/Brain Sciences of the University of Trento, where he became associate professor in 2013. From 2016 to 2021, Marco worked in the Paris Facebook Artificial Intelligence Research lab. In 2019, he became an ICREA research professor, affiliated with the Linguistics Department of Pompeu Fabra University in Barcelona. Marco's work in the areas of multimodal and compositional distributed semantics has received widespread recognition, including a Google Research Award, an ERC Grant, the IJCAI-JAIR best paper prize, and the ACL test-of-time award. Marco was recently awarded another ERC grant to conduct research on improving communication between artificial neural networks, taking inspiration from human language and other animal communication systems.


Eduard Hovy

University of Melbourne and CMU, Australia & USA


Eduard Hovy is the Executive Director of Melbourne Connect (a research and tech transfer centre at the University of Melbourne), a professor at the University of Melbourne’s School of Computing and Information Systems, and a research professor at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. In 2020–21 he served as Program Manager in DARPA’s Information Innovation Office (I2O), where he managed programs in Natural Language Technology and Data Analytics. Dr. Hovy holds adjunct professorships in CMU’s Machine Learning Department and at USC (Los Angeles). He completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987 and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). Dr. Hovy’s research focuses on computational semantics of language and addresses various areas in Natural Language Processing and Data Analytics, including in-depth machine reading of text, information extraction, automated text summarization, question answering, the semi-automated construction of large lexicons and ontologies, and machine translation. In early 2022 his Google h-index was 95, with over 54,000 citations. He is the author or co-editor of eight books and around 400 technical articles and is a popular invited speaker. From 2003 to 2015 he was co-Director of Research for the Department of Homeland Security’s Center of Excellence for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. In 2001 Dr. Hovy served as President of the Association for Computational Linguistics (ACL), in 2001–03 as President of the International Association for Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society (DGS). He regularly co-teaches Ph.D.-level courses and has served on Advisory and Review Boards for research institutes and funding organizations in Germany, Italy, the Netherlands, Ireland, Singapore, and the USA.


Heng Ji

University of Illinois at Urbana-Champaign, USA


Heng Ji is a professor in the Computer Science Department, and an affiliated faculty member of the Electrical and Computer Engineering Department, at the University of Illinois at Urbana-Champaign. She is an Amazon Scholar. She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge Base Population, and Knowledge-guided Generation. She was selected as a "Young Scientist" and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. Her awards include the "AI's 10 to Watch" Award from IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, Bosch Research Awards in 2014–2018, an Amazon AWS Faculty Award in 2021, Best-of-ICDM 2013 and Best-of-SDM 2013 paper awards, the ACL 2020 Best Demo Paper Award, and the NAACL 2021 Best Demo Paper Award. She was invited by the Secretary of the Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020–2021. She has served as Program Committee Co-Chair of many conferences, including NAACL-HLT 2018, and has been the coordinator of the NIST TAC Knowledge Base Population track since 2010.


Mirella Lapata

University of Edinburgh, Scotland


Mirella Lapata is professor of natural language processing in the School of Informatics at the University of Edinburgh. Her research focuses on getting computers to understand, reason with, and generate natural language. She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Spärck Jones award, and a Fellow of the ACL and the Royal Society of Edinburgh. She has also received best paper awards at leading NLP conferences and has served on the editorial boards of the Journal of Artificial Intelligence Research, the Transactions of the ACL, and Computational Linguistics. She was president of SIGDAT (the group that organizes EMNLP) in 2018.

Hang Li


ByteDance Technology, China


Hang Li is currently a Director of the AI Lab at ByteDance Technology. He is also a Fellow of the ACL, a Fellow of the IEEE, and a Distinguished Scientist of the ACM. He graduated from Kyoto University and earned his Ph.D. from the University of Tokyo. He worked at NEC Research as a researcher and at Microsoft Research Asia as a senior researcher and research manager. Prior to joining ByteDance, he was a director and chief scientist of Noah's Ark Lab at Huawei Technologies.


Dan Roth

University of Pennsylvania and AWS AI Labs, USA


Dan Roth is the Eduardo D. Glandt Distinguished Professor in the Department of Computer and Information Science at the University of Pennsylvania, lead of NLP Science at AWS AI Labs, and a Fellow of the AAAS, the ACM, AAAI, and the ACL.

In 2017 Roth was awarded the John McCarthy Award, the highest award the AI community gives to mid-career AI researchers. Roth was recognized “for major conceptual and theoretical advances in the modeling of natural language understanding, machine learning, and reasoning.”

Roth has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine learning based tools for natural language applications that are being used widely. Roth was the Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR) and a program chair of AAAI, ACL, and CoNLL. Roth has been involved in several startups; most recently, he was a co-founder and chief scientist of NexLP, a startup that leverages the latest advances in Natural Language Processing (NLP), Cognitive Analytics, and Machine Learning in the legal and compliance domains. NexLP was acquired by Reveal in 2020. Prof. Roth received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995.


Thamar Solorio

University of Houston and Bloomberg, USA


Thamar Solorio is a Professor of Computer Science at the University of Houston (UH) and a visiting scientist at Bloomberg LP. She holds graduate degrees in Computer Science from the Instituto Nacional de Astrofísica, Óptica y Electrónica in Puebla, Mexico. Her research interests include information extraction from social media data, enabling technology for code-switched data, stylistic modeling of text, and, more recently, multimodal approaches for online content understanding. She is the director and founder of the Research in Text Understanding and Language Analysis Lab at UH. She is the recipient of an NSF CAREER award for her work on authorship attribution, and of the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She is serving a second term as an elected board member of the North American Chapter of the Association for Computational Linguistics.

Moderator: Iryna Gurevych


Technical University Darmstadt, Germany


Iryna Gurevych (PhD 2003, U. Duisburg-Essen, Germany) is professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt in Germany. Her main research interests are in machine learning for large-scale language understanding and text semantics. Iryna’s work has received numerous awards, including the ACL Fellow award in 2020 and the first-ever Hessian LOEWE Distinguished Chair award (2.5 million euros) in 2021. Iryna is co-director of the NLP program within ELLIS, a European network of excellence in machine learning. She is currently vice-president of the Association for Computational Linguistics. Most recently, Iryna won one of the highly coveted ERC Advanced Grants, receiving 2.5 million euros from the European Research Council (ERC) for her project “InterText – Modeling Text as a Living Object in a Cross-Document Context”.
