Deepak Nathani


Hello! I am Deepak Nathani. I am currently a 3rd year PhD student at UC Santa Barbara, advised by Prof. William Wang in the UCSB NLP lab. My research revolves around the field of Natural Language Generation, with a focus on Reasoning in LLMs, Tool Use, and Autonomous Agents. During my PhD, I have been fortunate to intern at GenAI at Meta and at AWS Translate.

Before joining the PhD program, I worked as a Pre-Doctoral Researcher at Google Research, India where I was advised by Dr. Partha Talukdar. At Google, I worked on Controllable Text Generation and Conversational AI. I also did a brief stint as a Software Engineering AMTS at Salesforce.com.

I graduated from IIT Hyderabad with a B.Tech in Mechanical Engineering and a second major in Computer Science. During my time as an undergraduate, I worked on various research problems with Dr. Manohar Kaul.

Among other things, I enjoy playing games, listening to music, and reading books. Most important of all, I love food. 😉

Research

I am interested in developing frameworks and methods that enhance the capabilities of large language models through automated feedback and tool use, and in applying them to accelerate scientific discovery. My work is anchored in the following themes:

  1. AI Research Agents and Tool Use: Developing frameworks for autonomous AI agents [Preprint] that can effectively use tools and improve their reasoning abilities through structured feedback mechanisms [EMNLP 2023].
  2. Controllable Text Generation: Developing methods for controlled text generation in multilingual and low-resource settings [ACL 2022] to make text generation more accessible across different languages and domains.
  3. Healthcare and Behavior Science Applications: Creating personalized coaching systems and synthetic data generation methods for healthcare applications while ensuring privacy and ethical considerations [PLOS Digital Health 2024, IEEE CAI 2024, UMAP 2022].
  4. Graph Learning and Knowledge Graphs: Investigating novel approaches to graph representation learning and knowledge graph completion [ACL 2019] using few-shot learning [ICLR 2020] and topological methods [Complex Networks 2019, ICML 2018].

News

Mar 7, 2025 I gave a talk on MLGym at Ploutus. slides, talk
Feb 20, 2025 Excited to announce the release of my Meta Internship Project: MLGym! Check out our paper, codebase, and join our Discord community to get started with MLGym today!
Apr 17, 2024 I will join the GenAI team at Meta London as a Research Scientist Intern starting in June!
Oct 7, 2023 Our paper titled MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models will appear at EMNLP 2023. Details coming out soon!
Sep 25, 2023 Completed my Summer Internship at Amazon Web Services (AWS) AI labs under the supervision of Xing Niu, Shuoyang Ding and Prashant Mathur.
Nov 25, 2022 I am looking for summer research internships in 2023. If you think I will be a good fit for your team, let me know!
Sep 29, 2022 I have started my PhD in the NLP group at the University of California, Santa Barbara. I will be working with Prof. William Wang.
Dec 20, 2019 Our paper titled “Few-Shot Learning on Graphs via Super-Classes Based on Graph Spectral Measures” was accepted as a poster presentation at ICLR 2020! Paper is available here.
Oct 7, 2019 Our paper titled “A Persistent Homology Perspective to the Link Prediction Problem” was accepted as a poster presentation at Complex Networks 2019! Link.
Jul 12, 2019 The blog for our ACL 2019 paper is now available here.
Jun 5, 2019 The arXiv preprint and code are now available for our ACL 2019 paper.
May 14, 2019 Our paper titled “Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs” was accepted as a long paper at ACL 2019! Paper and code coming soon!