About Me
I am a fourth-year student at the University of Chicago earning a bachelor's in computer science and linguistics. I'm looking for opportunities to work and connect with people in the AI space, particularly in agentic AI, NLP, AI-powered search and recommendation, and their applications.
On a personal level, I enjoy learning languages, reading, playing music, wrestling, and cooking. Check out my Goodreads or give my band (from years ago) a listen!
About Projects
Most of my projects are geared toward LLM interpretability and LLM-based applications.
Currently, I am a visiting student researcher in the Speech and Language group at TTIC, working with Dr. Zhewei Sun, and a Research Assistant at Knowledge Lab with Austin Kozlowski. Descriptions of our research can be found below.
Knowledge Lab: AutoInterp
As a Research Assistant, I optimize Codex-based report generation and analysis for AutoInterp, an autonomous agentic system for mechanistic interpretability research.
I also designed an agentic data analysis pipeline for non-technical domains where users can input datasets and query them, producing grounded and reproducible reports.
Research with Dr. Zhewei Sun (TTIC)
My current research focuses on developing NLP techniques to better process and model informal language across languages.
Using BeautifulSoup, we collected novel datasets for fine-tuning models to more accurately capture slang cross-linguistically. We also extended and modified LLM architectures with PyTorch on distributed GPUs, building on previous work on embedding slang terms.
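As a rough illustration of the data-collection step, the sketch below parses (term, definition) pairs out of HTML with BeautifulSoup. The CSS class names (`entry`, `term`, `definition`) are hypothetical placeholders, not the actual sources or markup used in the research.

```python
from bs4 import BeautifulSoup

def extract_entries(html: str) -> list[dict]:
    """Extract (term, definition) pairs from a page of slang entries."""
    soup = BeautifulSoup(html, "html.parser")
    entries = []
    for item in soup.select("div.entry"):  # hypothetical selector
        term = item.select_one("span.term")
        definition = item.select_one("p.definition")
        if term and definition:
            entries.append({
                "term": term.get_text(strip=True),
                "definition": definition.get_text(strip=True),
            })
    return entries
```

In practice a scraper like this would sit behind a fetching layer (e.g. `requests`) with rate limiting, and the cleaned pairs would be serialized into a fine-tuning dataset.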
Prepost AI / Epistemic Hygienist
Engineered a multi-agent architecture that integrates NLI nodes and RAG retrieval to mitigate "bullshit" assertions and promote nuanced social media speech.
The system gathers external context from sources like Wikipedia and the BBC to perform structured reasoning. I validated performance across diverse agentic designs using models such as Gemma 3, Llama 3.1, and GPT-OSS.
Isolated Knowledge Updates in LLMs
Quantified the limits of localized model editing using arithmetic contrapositives on the Qwen2.5-1.5B model.
Implemented dynamic control over the edit-preservation tradeoff using an inference-time "alpha sweep" over LoRA task vectors, achieving 100% edit efficacy. Successfully isolated targeted factual updates using a 50-item multi-domain locality set.
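The alpha sweep amounts to scaling a task-vector delta before merging it into the base weights, then evaluating edit efficacy and locality at each scale. A minimal sketch with toy NumPy arrays, where in practice the delta would be the trained LoRA update for each edited layer:

```python
import numpy as np

def merge_with_alpha(base: np.ndarray, delta: np.ndarray, alpha: float) -> np.ndarray:
    """Edited weights: base + alpha * task-vector delta."""
    return base + alpha * delta

# Toy stand-ins for a weight matrix and its LoRA task vector.
base = np.zeros((2, 2))
delta = np.ones((2, 2))

# The sweep: at each alpha, one would score edit efficacy (does the new
# fact hold?) against locality (is the 50-item holdout set unchanged?).
candidates = {alpha: merge_with_alpha(base, delta, alpha)
              for alpha in (0.0, 0.5, 1.0, 1.5)}
```

Small alphas preserve unrelated behavior; larger alphas strengthen the edit, which is exactly the tradeoff being swept.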
*Note: The associated repository is currently private.
Nabu - Personal AI Language Tutor
Developed an AI-native desktop application for real-time conversation with a personalized AI tutor that expands vocabulary using spaced repetition.
Crafted the workflows, prompts, and API endpoints, integrating AI tools and APIs such as LangGraph, Whisper, and ElevenLabs.
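The spaced-repetition scheduling behind the vocabulary review can be sketched as follows, loosely in the style of the SM-2 family: correct recalls grow the review interval multiplicatively by an ease factor, while misses reset it. The constants here are illustrative, not Nabu's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class Card:
    word: str
    interval_days: float = 1.0  # days until next review
    ease: float = 2.5           # interval growth factor

def review(card: Card, correct: bool) -> Card:
    """Update a card's schedule after one review."""
    if correct:
        card.interval_days *= card.ease
        card.ease = min(card.ease + 0.1, 3.0)   # reward consistent recall
    else:
        card.interval_days = 1.0                # reset: review again soon
        card.ease = max(card.ease - 0.2, 1.3)   # harder card, slower growth
    return card
```

Each conversation turn that surfaces a vocabulary item would feed a correct/incorrect signal into an updater like this, so well-known words drift out of rotation while shaky ones resurface quickly.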
Send an email to nolanpozzobon[at]uchicago.edu or connect on LinkedIn.