Research

Peer-reviewed and preprint research from our team. Our peer-reviewed research explores how fine-tuned language models can serve as genuine pedagogical tools in computing education: explaining errors, guiding debugging, and fostering deeper understanding.

Fine-Tuning Open-Source Models as a Viable Alternative to Proprietary LLMs for Explaining Compiler Messages
Forthcoming
arXiv preprint arXiv:2507.05305
Read paper

arXiv preprint arXiv:2502.20527
Read paper

Proceedings of the 56th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2025)
Presents a compiler-integrated conversational AI system that helps CS1 students debug their programs through pedagogically grounded dialogue.
arXiv preprint arXiv:2411.01765
Read paper

Proceedings of the 24th Koli Calling International Conference on Computing Education Research
Explores fine-tuning large language models to produce better, more pedagogically appropriate explanations for programming errors encountered by novice programmers.
Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2024)
Introduces dcc --help, a system that transforms the compiler into a pedagogical tool by using LLMs to generate context-aware explanations for error messages.