Research

Building AI with educational instincts

Our peer-reviewed research explores how fine-tuned language models can serve as genuine pedagogical tools in computing education — explaining errors, guiding debugging, and fostering deeper understanding.

Publications

7 papers

Peer-reviewed and preprint research from our team.

2026 · Fine-Tuning · Open-Source · Compiler Messages

Fine-Tuning Open-Source Models as a Viable Alternative to Proprietary LLMs for Explaining Compiler Messages

Forthcoming

SIGCSE 2025 · Compiler Integration · Conversational AI · Debugging

Compiler-Integrated, Conversational AI for Debugging CS1 Programs

Proceedings of the 56th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2025)

Presents a compiler-integrated conversational AI system that helps CS1 students debug their programs through pedagogically grounded dialogue.

Koli Calling 2024 · Fine-Tuning · Error Explanations

Fine-Tuning Large Language Models for Better Programming Error Explanations

Proceedings of the 24th Koli Calling International Conference on Computing Education Research

Explores fine-tuning large language models to produce better, more pedagogically appropriate explanations for programming errors encountered by novice programmers.

SIGCSE 2024 · DCC · Compiler Errors · LLMs

dcc --help: Transforming the Role of the Compiler by Generating Context-Aware Error Explanations with Large Language Models

Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2024)

Introduces dcc --help, a system that transforms the compiler into a pedagogical tool by using LLMs to generate context-aware explanations for error messages.