Can Grammarly predict an "A" paper?

The Verge: Grammarly says its AI agent can predict an A paper

Grammarly is probably right about its predictions, and people in the comments are understandably suspicious.

For example, this comment:

“As a teacher, can I just say that not only will this NOT be useful, but it will be actively counterproductive on many different levels. Can’t wait for students to try to use this to argue with me about grades. I’m so sick of genAI making my job harder.”

and this one:

“[…] this is going to swiftly devolve into AI-written student papers graded by instructors’ AI agents.”

both capture the sentiment well…which is, “AI is bad.”

But this is NOT an “AI problem.”

I would counter that it’s a “grading is too predictable” problem.

Or, in my personal opinion, what if it’s a “we are grading the wrong thing” problem?

Anything that can be systematized almost certainly WILL be systematized. This is great from the standpoint of one teacher working with hundreds of students. But, it also works in favor of the students; work that can be easily graded is also work that can be easily “faked.”

This seems like it’s been true for at least the last few decades (cough Cliff’s Notes cough), but I’d argue it’s been that way since we started a formal factory model of “schooling.” Technology might make some forms of cheating easier, but it didn’t create the concept. It’s sort of how teenage minds work: they’re much better/faster/smarter at spotting shortcuts than we are.

AI is a red herring in these discussions. I think these problems were always here; now they’re just more obvious. We’re overdue for some difficult questions about what we’re ACTUALLY trying to grade/assess. In a world where everyone can generate “A” papers with a few prompts, maybe “A” papers are the wrong litmus test?