by Jarek Janio, Ph.D.
In my last post, I explored the idea sparked by Dr. Gavin Henning’s Friday SLO Talk that AI can be used not just to assess students, but to analyze how instructors shape learning through feedback. This “meta-assessment” flips the script: instead of asking only how students performed, we start asking how teaching practices reinforce or obscure learning outcomes.
The intent of this post is to show how to do exactly that by walking through specific ChatGPT prompts that instructors, assessment coordinators, or faculty developers can use to analyze rubric comments, LMS feedback, or assignment notes.
Why Analyze Feedback?
Instructor feedback isn’t just a side note; it’s part of the instructional environment. In fact, in asynchronous or hybrid classes, feedback may be the primary instructional interaction a student receives.
Analyzing patterns in that feedback can help identify:
- Which skills are being consistently reinforced
- Which areas of the rubric are neglected or unclear
- Whether feedback tone encourages revision or shuts it down
- How closely faculty comments align with stated learning outcomes
By processing these patterns at scale, AI becomes a tool for reflective instructional design, not just a timesaver.
Before You Begin: What You’ll Need
You don’t need coding skills or fancy analytics platforms. All you need is:
- A free or paid ChatGPT account (GPT-4 recommended)
- A text file or spreadsheet of instructor feedback comments (e.g., copy from your LMS gradebook or Canvas SpeedGrader)
- One or more of your rubrics or SLOs, to use as reference for alignment
Important Note:
⚠️ Never upload student names or identifiable data into ChatGPT unless you’re using a secure, institutionally approved version. Strip comments of names or identifiers before analysis.
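If your feedback comments live in a spreadsheet export, a few lines of Python can scrub obvious identifiers before anything gets pasted into ChatGPT. This is only a rough sketch: the roster names, the sample comments, and the seven-digit ID pattern are all assumptions you would replace with your own institution’s details.

```python
import re

def redact(comment, names, placeholder="[student]"):
    """Replace listed student names (case-insensitive) with a placeholder."""
    for name in names:
        comment = re.sub(re.escape(name), placeholder, comment, flags=re.IGNORECASE)
    # Also scrub common ID patterns, e.g. 7-digit student numbers (adjust to your campus).
    comment = re.sub(r"\b\d{7}\b", "[id]", comment)
    return comment

# Hypothetical roster and comments; in practice, load these from your LMS export.
roster = ["Maria", "Devon"]
comments = [
    "Maria, your thesis is clearer this draft.",
    "Good sources, Devon (ID 1234567), but cite page numbers.",
]
cleaned = [redact(c, roster) for c in comments]
for line in cleaned:
    print(line)
```

A scrubbed list like `cleaned` is what you would actually paste into the prompts below.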
Prompt 1: Identify Patterns in Feedback Content
Goal: See which topics dominate instructor feedback.
Prompt:
“I’m going to share a set of instructor comments from a writing course. Please categorize them into themes based on what skills or issues they address. List the most frequently mentioned themes.”
Example add-on:
“Also, note how often the instructor comments on higher-order skills (e.g., argument structure, critical thinking) vs. surface-level skills (e.g., grammar, citation formatting).”
Prompt 2: Analyze Alignment with Rubric Criteria or SLOs
Goal: Check whether feedback reflects the intended learning goals.
Prompt:
“Here are the course learning outcomes for an introductory writing course:
- Construct a coherent argument with supporting evidence
- Integrate source material using proper citation
- Revise based on instructor and peer feedback
- Demonstrate command of grammar and mechanics
Now, here are 50 instructor feedback comments from student essays.
Please analyze whether the feedback addresses each outcome equally. If not, explain where there are gaps.”
Bonus Tip: Ask ChatGPT to rate alignment on a 1–5 scale and summarize missing coverage.
Prompt 3: Evaluate Tone and Motivational Impact
Goal: Explore how the tone of feedback may affect student motivation and revision behavior.
Prompt:
“Analyze the following 25 instructor comments for tone. Classify each as supportive, neutral, or discouraging. Indicate whether the tone would likely increase or decrease a student’s willingness to revise and reengage.”
You can also ask:
“Provide suggestions for making any discouraging comments more constructive without losing specificity.”
This helps faculty improve feedback efficacy, which is especially important in high-anxiety disciplines like math, writing, or STEM labs.
Prompt 4: Highlight Missed Opportunities for Reinforcement
Goal: Spot where feedback stops short of prompting next steps.
Prompt:
“Here are instructor comments on student work. For any feedback that lacks a clear next step, add a sentence that gives a specific suggestion for improvement or practice. Focus on observable student behaviors.”
This is where AI excels: pattern completion. It helps turn vague feedback like “Good effort!” into “Good effort! Next time, try organizing your main points before writing: start with a simple outline.”
Prompt 5: Compare Feedback Across Instructors or Sections
Goal: Detect differences in reinforcement between faculty.
Prompt:
“Here are rubric comments from three instructors teaching the same course. Compare their feedback patterns. Do any instructors focus more on certain criteria than others? Are there differences in tone, length, or specificity?”
This is great for department chairs, SLO coordinators, or program leads who want to foster shared instructional priorities while honoring faculty autonomy.
Prompt 6: Generate a Feedback Practice Report
Goal: Summarize results for reflection or PD sessions.
Prompt:
“Based on your analysis, summarize the instructor’s feedback style. List:
- The most frequently reinforced skills
- Under-addressed rubric areas
- Tone profile (supportive vs. corrective)
- Recommendations for improvement
Format the result as a one-page report for professional reflection.”
This could be a faculty development goldmine: a printable snapshot to guide peer review, mentoring, or coaching.
How to Use These Insights
Once you’ve gathered AI-generated observations, here’s how to apply them:
- Adjust rubrics to include underemphasized learning behaviors
- Modify assignments to scaffold skills students frequently miss
- Create feedback templates based on recurring student needs
- Develop faculty PD sessions around themes like “Commenting for Transfer” or “Feedback That Promotes Revision”
- Track improvements by comparing feedback patterns semester to semester
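To make that last point concrete: if you save the theme labels ChatGPT assigns each semester, a quick tally shows whether your feedback emphasis is actually shifting. The theme names below are hypothetical examples, not output from any real analysis.

```python
from collections import Counter

# Hypothetical theme labels pulled from two semesters of ChatGPT analyses.
fall = ["grammar", "grammar", "citations", "argument", "grammar"]
spring = ["argument", "argument", "citations", "grammar", "revision"]

fall_counts, spring_counts = Counter(fall), Counter(spring)
for theme in sorted(set(fall) | set(spring)):
    print(f"{theme:10s} fall={fall_counts[theme]}  spring={spring_counts[theme]}")
```

Even a simple count like this can reveal, for example, a drift from surface-level comments toward higher-order skills.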
What AI Can’t Do (And That’s Okay)
AI won’t tell you how your students felt about your feedback.
It won’t know your intention behind a comment or your relationship with a student.
But it can show you:
- What your feedback tends to reinforce
- What behaviors you’re prompting or ignoring
- Whether your assessment practices align with your learning goals
In short, it holds up a mirror, not to judge, but to help you teach more intentionally.
Final Thought: From Grading to Guiding
Feedback is where your voice meets your students’ minds. And too often, that voice goes unexamined, locked inside LMS comment boxes and quickly forgotten.
Using ChatGPT to analyze your own instructional language is one of the most powerful ways to grow as an educator. Not because AI is smarter, but because it helps us see what we do, and do it better.
In a world of rubrics, grades, and deadlines, feedback is the place where teaching becomes human. And that’s where the inquiry into an excellent teaching practice begins.