Nathaniel Weir

GitHub

CV

nweir@jhu.edu

I am a final-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University, where I research natural language processing and artificial intelligence. I am advised by Benjamin Van Durme and supported by an NSF Graduate Research Fellowship.

I am on the job market! I am looking for both postdoc and industry opportunities. Please reach me at nweir@jhu.edu.

I recently interned with the Aristo reasoning team at the Allen Institute for Artificial Intelligence under Peter Clark. I also interned with Microsoft Semantic Machines under Harsh Jhamtani and with the Deep Learning & Language group at Microsoft Research under Marc-Alexandre Côté and Eric Yuan.

As an undergraduate at Brown University, I worked with Ugur Cetintemel and Carsten Binnig in the Database Group, where we built one of the first neural approaches to parsing natural language into SQL.

My research interests include:

Projects I have worked on:

Neuro-symbolic reasoning as entailment tree search

Nathaniel Weir, Kate Sanders, Orion Weller, Shreya Sharma, Dongwei Jiang, Zhengping Zhang, Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Jansen, Peter Clark, Benjamin Van Durme. Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic. preprint.

Nathaniel Weir, Peter Clark and Benjamin Van Durme. NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning. IJCAI 2024.

Ontology-constrained dialogue tree generation

Nathaniel Weir, Ryan Thomas, Randolph d'Amore, Kellie Hill, Benjamin Van Durme, and Harsh Jhamtani. Ontologically Faithful Generation of Non-Player Character Dialogues. preprint (arXiv:2212.10618).

Knowledge-guided natural language generation

Nathaniel Weir, Joao Sedoc, and Benjamin Van Durme. COD3S: Diverse Generation with Discrete Semantic Signatures. EMNLP 2020.

Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, and Benjamin Van Durme. InFillmore: Frame-Guided Language Generation with Bidirectional Context. *SEM 2021.

Language-guided policy search for grounded agents

Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm Van Seijen, and Benjamin Van Durme. One-Shot Learning from a Demonstration with Hierarchical Latent Language. AAMAS (poster).

Semantic probing of neural language models

Nathaniel Weir, Adam Poliak, and Benjamin Van Durme. Probing Neural Language Models for Human Tacit Assumptions. CogSci 2020.

Neural semantic parsing

Nathaniel Weir. Bootstrapping Generalization in Neural Text-to-SQL Semantic Parsing Models. Undergraduate honors thesis, Brown University, 2019.

Nathaniel Weir et al. DBPal: A Fully Pluggable NL2SQL Training Pipeline. SIGMOD 2020.


Full List of Publications:

Nathaniel Weir, Peter Clark and Benjamin Van Durme. NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning. IJCAI 2024.

Nathaniel Weir, Kate Sanders, Orion Weller, Shreya Sharma, Dongwei Jiang, Zhengping Zhang, Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Jansen, Peter Clark, Benjamin Van Durme. Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic. preprint.

Kate Sanders, Nathaniel Weir, and Benjamin Van Durme. TV-TREES: Multimodal Entailment Trees for Neuro-Symbolic Video Reasoning. preprint.

Dongwei Jiang, Jingyu Zhang, Orion Weller, Nathaniel Weir, Benjamin Van Durme, Daniel Khashabi. SELF-[IN]CORRECT: LLMs Struggle with Refining Self-Generated Responses. preprint.

Orion Weller, Marc Marone, Nathaniel Weir, Dawn Lawrie, Daniel Khashabi, and Benjamin Van Durme. "According to ..." Prompting Language Models Improves Quoting from Pre-Training Data. EACL 2024.

Nathaniel Weir, Ryan Thomas, Randolph d'Amore, Kellie Hill, Benjamin Van Durme, and Harsh Jhamtani. Ontologically Faithful Generation of Non-Player Character Dialogues. preprint.

Orion Weller, Aleem Khan, Nathaniel Weir, Dawn Lawrie, and Benjamin Van Durme. Defending Against Poisoning Attacks in Open-Domain Question Answering. EACL 2024.

Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm Van Seijen, and Benjamin Van Durme. One-Shot Learning from a Demonstration with Hierarchical Latent Language. AAMAS (poster).

Jiefu Ou, Nathaniel Weir, Anton Belyy, Felix Yu, and Benjamin Van Durme. InFillmore: Frame-Guided Language Generation with Bidirectional Context. *SEM 2021.

Nathaniel Weir, Joao Sedoc, and Benjamin Van Durme. COD3S: Diverse Generation with Discrete Semantic Signatures. EMNLP 2020.

Nathaniel Weir, Adam Poliak, and Benjamin Van Durme. Probing Neural Language Models for Human Tacit Assumptions. CogSci 2020.

Nathaniel Weir, Prasetya Utama, Alex Galakatos, Andrew Crotty, Amir Ilkhechi, Shekar Ramaswamy, Rohin Bhushan, Nadja Geisler, Benjamin Hattasch, Steffen Eger, Carsten Binnig, and Ugur Cetintemel. DBPal: A Fully Pluggable NL2SQL Training Pipeline. SIGMOD 2020. Presented as talks at IBM AI Systems Day 2018 and North East Database Day 2019.

Nathaniel Weir. Bootstrapping Generalization in Neural Text-to-SQL Semantic Parsing Models. Undergraduate honors thesis, Brown University, 2019.

Fuat Basik, Benjamin Hattasch, Amir Ilkhechi, Arif Usta, Shekar Ramaswamy, Prasetya Utama, Nathaniel Weir, Carsten Binnig, and Ugur Cetintemel. DBPal: A Learned NL-Interface for Databases. SIGMOD (demo), 2018.

Prasetya Utama, Nathaniel Weir, Carsten Binnig, and Ugur Cetintemel. Voice-based Data Exploration: Chatting with your Database. Workshop on Search-Oriented Conversational AI, 2017.


Teaching

I co-taught EN.601.470/670: Artificial Agents with Benjamin Van Durme.

I was a teaching assistant at Brown for