Aim and Scope

Leveraging the foundation built in the prior workshops RoboNLP 2017, SpLU 2018, SpLU-RoboNLP 2019, and SpLU 2020, this workshop aims to realize the long-term goal of natural conversation with machines in our homes, workplaces, hospitals, and warehouses. It also highlights the importance of spatial semantics in communicating about the physical world and grounding language in perception. Human-robot dialogue often involves understanding grounded spatial descriptions, a capability that invariably requires spatial semantics tied to the physical environments where robots are embodied. The main goal of this joint workshop is to bring in the perspectives of researchers working on physical robot systems and with human users, and to align spatial language understanding representations, learning approaches, datasets, and benchmarks with the goals and constraints encountered in human-robot interaction (HRI) and robotics. Such constraints include the high cost of real-robot experiments, human-in-the-loop training and evaluation settings, the scarcity of embodied data, and the need to handle non-verbal communication.

Topics of Interest

  1. Achieving Common Ground in Human-Robot Interaction
  2. Aligning and Translating Language to Situated Actions
  3. Cognitive and Linguistically Motivated Spatial Language Representations
  4. Evaluation Metrics for Language Grounding and Human-Robot Communication
  5. Human-Computer Interactions Through Natural or Structured Language
  6. Instruction Understanding and Spatial Reasoning Based on Multimodal Information for Navigation, Articulation, and Manipulation
  7. Interactive Situated Dialogue for Physical Tasks
  8. Language-based Game Playing for Grounding
  9. Reasoning over Spatial Language (e.g., Based on Qualitative and Quantitative Spatial Representations)
  10. Spatial Language and Skill Learning via Grounded Dialogue
  11. Spatial Information Extraction from Text (e.g., Locative Descriptions, Navigation Instructions)
  12. (Spatial) Language Generation for Embodied Tasks
  13. (Spatially) Grounded Knowledge Representations

Schedule

Invited Speakers

Thora Tenbrink, Bangor University   [Slides]

Beyond physical robots: How to achieve joint spatial reference with a smart environment

Abstract: Interacting with a smart environment involves joint understanding of where things and people are or where they should be. Face-to-face interaction between humans, or between humans and robots, implies clearly identifiable perspectives on the environment that can be used to establish such a joint understanding. A smart environment, in contrast, is ubiquitous and thus perspective-independent. In this talk I will review the implications of this situation in terms of the challenges for establishing joint spatial reference between humans and smart systems, and present a somewhat unconventional solution as an opportunity.

Bio: Thora Tenbrink is a Professor of Linguistics at Bangor University (Wales, UK), who uses linguistic analysis to understand how people think. She is the author of "Cognitive Discourse Analysis: An Introduction" (Cambridge University Press, 2020) and "Space, Time, and the Use of Language" (Mouton de Gruyter, 2007), and has co-edited several further books on spatial language, representation, and dialogue.


Jean Oh, Carnegie Mellon University

Core Challenges of Embodied Vision-Language Planning

Abstract: Service dogs and police dogs work real jobs in human environments. Are embodied AI agents intelligent enough to perform service tasks in a real, physical space? Embodied AI is generally considered one of the ultimate AI problems, requiring complex, integrated intelligence that combines multiple subfields of AI, including natural language understanding, visual understanding, planning, reasoning, inference, and prediction. While steep progress has been made in several of these subfields in recent years, the field of embodied AI remains extremely challenging. In this talk, we will focus on the Embodied Vision-Language Planning (EVLP) problem to understand the unique technical challenges posed at the intersection of computer vision, natural language understanding, and planning. We will review several examples of the EVLP problem to discuss current approaches, training environments, and evaluation methodologies. Through an in-depth investigation of current progress on the EVLP problem, this talk aims to assess where we stand in EVLP and to facilitate future interdisciplinary research on core challenges that have not yet been fully addressed.

Bio: Jean Oh is an Associate Research Professor at the Robotics Institute at Carnegie Mellon University. She is passionate about creating persistent robots that can co-exist and collaborate with humans in shared environments, learning to improve themselves over time through continuous training, exploration, and interaction. Jean's current research focuses on autonomous social navigation, natural language direction following, and creative AI. Her team has won two Best Paper Awards in Cognitive Robotics at the IEEE International Conference on Robotics and Automation (ICRA), for work on following natural language directions in unknown environments (2015) and on socially compliant robot navigation in human crowds (2018). Jean received her Ph.D. in Language and Information Technologies from Carnegie Mellon University, her M.S. in Computer Science from Columbia University, and her B.S. in Biotechnology from Yonsei University in South Korea.


Karthik Narasimhan, Princeton University

Language-guided policy learning for better generalization and safety

Abstract: Recent years have seen exciting developments in autonomous agents that can understand natural language in interactive settings. As we gear up to transfer some of these advances into real-world systems (e.g., physical robots, autonomous cars, or virtual assistants), we encounter unique challenges that stem from these agents operating in an ever-changing, chaotic world. In this talk, I will focus on our recent efforts to address two of these challenges through a combination of NLP and reinforcement learning: 1) grounding novel concepts to their linguistic symbols through interaction, and 2) specifying safety constraints during policy learning. First, I will present a new benchmark of tasks we designed specifically to measure an agent's ability to ground new concepts for generalization, along with a new model for grounding entities and dynamics without any prior mapping provided. Next, I will show how we can train control policies with safety constraints specified in natural language. This will encourage more widespread use of methods for safety-aware policy learning, which otherwise require domain expertise to specify constraints. Scaling up these techniques can help bring us closer to deploying learning systems that interact seamlessly and responsibly with humans in everyday life.

Bio: Karthik Narasimhan is an assistant professor in the Computer Science department at Princeton University. His research spans the areas of natural language processing and reinforcement learning, with a view towards building intelligent agents that learn to operate in the world through both their own experience and leveraging existing human knowledge. Karthik received his PhD from MIT in 2017, and spent a year as a visiting research scientist at OpenAI prior to joining Princeton in 2018. His work has received a best paper award at EMNLP 2016 and an honorable mention for best paper at EMNLP 2015.


Maja Matarić, University of Southern California

Socially Assistive Robotics: What it Takes to Get Personalized Embodied Systems into Homes for Support of Health, Wellness, Education, and Training

Abstract: The nexus of advances in robotics, NLU, and machine learning has created opportunities for personalized robots at the ultimate robotics frontier: the home. The current pandemic has both caused and exposed unprecedented levels of need for health and wellness, education, and training support worldwide, needs that must increasingly be addressed in the home. Socially assistive robotics has the potential to address those needs through personalized and affordable in-home support. This talk will discuss human-robot interaction methods for socially assistive robotics that use multimodal interaction data and expressive, persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education, and training activities. Methods and results will be presented for modeling, learning, and personalizing user motivation, engagement, and coaching of healthy children and adults, stroke patients, Alzheimer's patients, and children with autism spectrum disorders, in short-term and long-term (month+) deployments in schools, therapy centers, and homes. Research and commercial implications and pathways will be discussed. Originally presented at the Cornell CS Colloquium.

Bio: Maja Matarić is the Chan Soon-Shiong Distinguished Professor in the Computer Science Department, Neuroscience Program, and Department of Pediatrics, and Interim Vice President for Research at the University of Southern California. She is the founding director of the USC Robotics and Autonomous Systems Center (RASC), co-director of the USC Robotics Research Lab, and lead of the Viterbi K-12 STEM Center. She received her PhD in Computer Science and Artificial Intelligence from MIT in 1994, her MS in Computer Science from MIT in 1990, and her BS in Computer Science from the University of Kansas in 1987.


Submissions

Long Papers

Technical papers: ACL style, 8 pages excluding references

Short Papers

Position statements describing previously unpublished work or demos: ACL style, 4 pages excluding references

ACL Style files: Template

Submissions website: Softconf

Non-Archival option: ACL workshops are traditionally archival. To allow dual submission of work to SpLU-RoboNLP 2021 and other conferences or journals, we also offer a non-archival track. Space permitting, authors of these submissions will still present their work at the workshop, and the papers will be hosted on the workshop website, but they will not be included in the official proceedings. Please submit through Softconf and indicate that the paper is a cross submission (non-archival) at the bottom of the submission form.

Important Dates

Organizing Committee

  • Malihe Alikhani, University of Pittsburgh (malihe@pitt.edu)
  • Valts Blukis, NVIDIA (valts@cs.cornell.edu)
  • Parisa Kordjamshidi, Michigan State University (kordjams@msu.edu)
  • Aishwarya Padmakumar, Amazon Alexa AI (padmakua@amazon.com)
  • Hao Tan, University of North Carolina (haotan@cs.unc.edu)

Contact: splu-robonlp-2021@googlegroups.com

Program Committee

  • Jacob Arkin, University of Rochester
  • Jonathan Berant, Tel-Aviv University
  • Steven Bethard, University of Arizona
  • Johan Bos, University of Groningen
  • Volkan Cirik, Carnegie Mellon University
  • Guillem Collell, KU Leuven
  • Simon Dobnik, University of Gothenburg, Sweden
  • Fethiye Irmak Dogan, KTH Royal Institute of Technology
  • Frank Ferraro, University of Maryland, Baltimore County
  • Daniel Fried, University of California, Berkeley
  • Felix Gervits, Tufts University
  • Yicong Hong, Australian National University
  • Drew Arad Hudson, Stanford University
  • Xavier Hinaut, Inria
  • Gabriel Ilharco, University of Washington
  • Siddharth Karamcheti, Stanford University
  • Hyounghun Kim, UNC Chapel Hill
  • Jacob Krantz, Oregon State University
  • Stephanie Lukin, Army Research Laboratory
  • Lei Li, ByteDance AI Lab
  • Roshanak Mirzaee, Michigan State University
  • Ray Mooney, University of Texas, Austin
  • Mari Broman Olsen, Microsoft
  • Natalie Parde, University of Illinois, Chicago
  • Christopher Paxton, NVIDIA
  • Roma Patel, Brown University
  • Nisha Pillai, University of Maryland, Baltimore County
  • Preeti Ramaraj, University of Michigan
  • Kirk Roberts, University of Texas, Houston
  • Anna Rohrbach, University of California, Berkeley
  • Mohit Shridhar, University of Washington
  • Ayush Shrivastava, Georgia Tech
  • Jivko Sinapov, Tufts University
  • Kristin Stock, Massey University, New Zealand
  • Alane Suhr, Cornell University
  • Rosario Scalise, University of Washington
  • Morgan Ulinski, Columbia University
  • Xin Wang, University of California, Santa Cruz
  • Shiqi Zhang, SUNY Binghamton
If you are interested in joining the program committee and reviewing submissions, please email the organizers at splu-robonlp-2021@googlegroups.com, mentioning your prior reviewing experience and including a link to your publication record.