The reason it’s so hard to figure out who’s affected by AI grading is that there isn’t just one program in use. But they’re all built in basically the same way: first, an automated scoring company studies how human graders score essays. Then it trains an algorithm on that data to predict how a human grader would score a new essay. Depending on the program, those predictions can be consistently wrong in the same way. In other words, they can be biased. And once those algorithms are built, explains Reset host Arielle Duhaime-Ross, they can reproduce those biases at a huge scale.
https://www.vox.com/recode/2019/10/20/20921354/ai-algorithms-essay-writing
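To make that general pattern concrete, here is a minimal sketch in Python of how such a scoring model could be built. Everything in it is an illustrative assumption, not any vendor’s actual system: the scikit-learn pipeline, the TF-IDF features, the ridge regression, and the toy essays and scores are all stand-ins for the kind of training the article describes.

```python
# A minimal sketch (not any vendor's actual system) of automated essay
# scoring: fit a model to human-assigned scores, then use it to predict
# scores for new essays. All data below is toy data for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Step 1: collect essays that human graders have already scored.
essays = [
    "The author argues that renewable energy creates jobs...",
    "climate change is bad and we should stop it",
    "A nuanced thesis supported by three pieces of evidence...",
    "i think the essay was good because it was good",
]
human_scores = [5, 2, 6, 1]  # hypothetical human scores on a 1-6 scale

# Step 2: train a model to predict the human score from essay features.
# Any systematic pattern in the human graders' scores -- including bias
# against certain styles or dialects -- is learned as part of the mapping.
scorer = make_pipeline(TfidfVectorizer(), Ridge())
scorer.fit(essays, human_scores)

# Step 3: the trained model now scores new essays automatically,
# reproducing whatever it learned, bias included, at scale.
new_essay = ["Renewable energy adoption has accelerated because..."]
print(scorer.predict(new_essay))
```

The key point the sketch illustrates: the model has no independent notion of essay quality. It only learns to imitate the human scores it was trained on, so any consistent skew in those scores becomes a consistent skew in every prediction it makes.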