I am a first-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University. I am advised by Benjamin Van Durme.
I’m most interested in exploring how artificial agents can efficiently generalize knowledge to new domains in a human-inspired way. While humans possess remarkably capable generalization skills, current deep, task-oriented systems are strikingly bad at applying even simple concepts in ways not seen during training. I want to build models that bridge this gap in cognitive ability.
My primary areas of interest fall within these categories:
Nathaniel Weir, Adam Poliak, and Benjamin Van Durme. On the Existence of Tacit Assumptions in Neural Language Models. Proceedings of the 42nd Annual Conference of the Cognitive Science Society. 2020
Nathaniel Weir, Prasetya Utama, Alex Galakatos, Andrew Crotty, Amir Ilkhechi, Shekar Ramaswamy, Rohin Bhushan, Nadja Geisler, Benjamin Hattasch, Steffen Eger, Carsten Binnig, and Ugur Cetintemel. DBPal: A Fully Pluggable NL2SQL Training Pipeline. Proceedings of SIGMOD. 2020.
Nathaniel Weir. Bootstrapping Generalization in Neural Text-to-SQL Semantic Parsing Models. Undergraduate honors thesis, Brown University.
Prasetya Utama, Nathaniel Weir, Fuat Basik, Carsten Binnig, Ugur Cetintemel, Benjamin Hattasch, Amir Ilkhechi, Shekar Ramaswamy, and Arif Usta. An End-to-end Neural Natural Language Interface for Databases. Preprint; presented as a talk at the 2018 IBM AI Systems Day (slides). 2018.
Fuat Basik, Benjamin Hattasch, Amir Ilkhechi, Arif Usta, Shekar Ramaswamy, Prasetya Utama, Nathaniel Weir, Carsten Binnig, and Ugur Cetintemel. DBPal: A Learned NL-Interface for Databases. Proceedings of SIGMOD (demo). 2018.
Prasetya Utama, Nathaniel Weir, Carsten Binnig, and Ugur Cetintemel. Voice-based Data Exploration: Chatting with your Database. Proceedings of the 2017 Workshop on Search-Oriented Conversational AI. 2017.
I was a teaching assistant at Brown for