Biography

I’m a cognitive scientist and PhD candidate at the NYU Center for Data Science, advised by Brenden Lake and Todd Gureckis. I’m excited about understanding the human mind and leveraging ideas from human cognition to develop more human-like artificial intelligence. My research combines human experiments, data analysis, and computational modeling to study cognition. I am particularly interested in cognitive goals and the use of structured, program-like representations to capture them, and I’m keen to explore how richer goal representations could facilitate exploration and generalization in artificial agents.

In my non-academic life, I enjoy playing ultimate frisbee, making homemade hot sauces, and making friends with all the dogs in Brooklyn.

Interests
  • Human goal representation and generation
  • Generating and inferring human-like goals with artificial agents
  • Computational cognitive science
  • Intent inference from ambiguous inputs by large language models
Education
  • PhD in Data Science, 2019–present

    New York University

  • MPhil in Data Science, 2023

    New York University

  • BSc in Computational Sciences, 2015–2019

    Minerva University

Recent Publications

(2024). Goals as Reward-Producing Programs. Accepted in principle, Nature Machine Intelligence.

(2024). Toward Complex and Structured Goals in Reinforcement Learning. Finding the Frame @ RLC 2024.

(2024). Spatial relation categorization in infants and deep neural networks. Cognition.

(2024). Toward Human-AI Alignment in Large-Scale Multi-Player Games. Wordplay @ ACL 2024, Association for Computational Linguistics.

(2023). Generating Human-Like Goals by Synthesizing Reward-Producing Programs. Intrinsically Motivated Open-Ended Learning @ NeurIPS 2023.

(2022). Creativity, Compositionality, and Common Sense in Human Goal Generation. Proceedings of the 44th Annual Meeting of the Cognitive Science Society, CogSci 2022.

(2022). A model of mood as integrated advantage. Psychological Review.

(2021). Examining Infant Relation Categorization Through Deep Neural Networks. Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, CogSci 2021.

(2020). Investigating Simple Object Representations in Model-Free Deep Reinforcement Learning. Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, CogSci 2020.

(2020). Systematically Comparing Neural Network Architectures in Relation Learning. Object-Oriented Learning (OOL): Perception, Representation, and Reasoning @ ICML 2020.

(2020). Sequential mastery of multiple visual tasks: Networks naturally learn to learn and forget to forget. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

(2019). Contrasting the effects of prospective attention and retrospective decay in representation learning. The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making.

(2019). Momentum and mood in policy-gradient reinforcement learning. The 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making.

Summer Schools

Brains, Minds, and Machines Summer Course
Attended the 2021 Brains, Minds, and Machines summer course in Woods Hole, MA.
Machine Learning Summer School
Attended the July 2019 MLSS in London, England.