Transforming GenAI Policy to Prompting Instruction: An RCT of Scalable Prompting Interventions in a CS1 Course
Ruiwei Xiao, Runlong Ye, Xinying Hou, Jessica Wen, Harsh Kumar, Michael Liut, John Stamper
Convert your GenAI policy into structured prompting instruction. Use the ICAP engagement hierarchy (Interactive > Constructive > Active > Passive): more cognitive engagement produces better prompting skills. Don't expect policy alone to change behavior.
Students use GenAI as a solution vending machine, not a learning tool, leading to worse exam performance. Telling them to 'use AI responsibly' doesn't teach them how to prompt for learning instead of answers.
Method: A semester-long RCT with 979 CS1 students tested four ICAP-based interventions of increasing cognitive engagement. All conditions significantly improved prompting skills, with gains increasing progressively from Condition 1 to Condition 4, validating ICAP's cognitive engagement hierarchy. Students with higher learning gains in immediate post-tests scored higher on final exams, though no direct between-group differences emerged.
Caveats: Learning gains predicted exam scores within groups, but between-group exam differences didn't materialize. Prompting skill ≠ automatic grade boost.
Reflections:
· What's the minimum intervention intensity needed to shift student behavior from solution-seeking to learning-oriented prompting?
· Do prompting skills transfer across courses, or must each instructor re-teach them?
· Why didn't between-group exam differences emerge despite improved prompting skills and within-group correlations?