Imagine: you wrote an essay for class or for your blog. You researched the topic and carefully constructed your argument. You submit your essay online and receive your grade within seconds. But who could read, understand, and judge your essay that quickly?
The answer is: no one. Your essay was marked by a computer. Do you trust the grade you received? Will you approach your next essay with the same amount of effort and care?
These are the questions parents, teachers, and unions are raising about automated essay scoring (AES). The Australian Curriculum, Assessment and Reporting Authority (ACARA) has recommended using it to score students' responses to the persuasive-writing questions in NAPLAN, the standardised testing program for primary and secondary schools.
ACARA has defended the decision, suggesting that computer marking matches or even exceeds the consistency of human markers.
In my view, this misses the point. A computer cannot actually read or understand a text. A well-constructed argument counts for nothing when essays are judged by structural comparison with other texts rather than by the merit of their ideas.
More importantly, we risk encouraging scripted writing that is essentially worthless. In other words, writing "nonsense".
How does algorithmic marking work?
Exactly how the AES works has not been fully disclosed, but based on previous announcements we can assume it uses some form of machine learning.
Here is how that works: machine-learning algorithms "learn" from a training dataset – in this case, reportedly more than 1,000 NAPLAN writing tests already scored by human markers.
But the algorithm does not usually learn the criteria a human marker applies to an essay. Instead, a machine-learning model consists of layers of so-called "artificial neurons": statistical values that are gradually adjusted during training so that certain inputs (patterns in the text such as structure, word choice, keywords, semantics, paragraphing, and sentence length) become associated with certain outputs (a high or low score).
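The gradual weight adjustment described above can be sketched in miniature. This is an illustrative toy, not the NAPLAN marker: a single "artificial neuron" (a linear unit) whose weights are nudged, example by example, until its output tracks the human-assigned scores. The feature vectors, learning rate, and training loop are all assumptions made for the sketch.

```python
# Toy sketch of training one "artificial neuron": its statistical
# weights are adjusted so feature inputs come to predict the
# human-given score. The real system's architecture is not public.

def train(samples, lr=0.01, epochs=2000):
    """samples: list of (feature_vector, human_score) pairs."""
    n = len(samples[0][0])
    w = [0.0] * n   # the adjustable "neuron" weights
    b = 0.0         # bias term
    for _ in range(epochs):
        for feats, target in samples:
            pred = b + sum(wi * fi for wi, fi in zip(w, feats))
            err = pred - target
            # gradient step: nudge each weight toward the human score
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
            b -= lr * err
    return w, b
```

Note that the trained weights are just numbers: nothing in them records *why* a human marker liked an essay, only which input patterns co-occurred with which scores.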
When marking a new essay, the algorithm makes a statistical inference: it compares the text against the patterns it has learned and matches it to a score. The algorithm cannot, however, explain why it reached that conclusion.
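To make the inference step concrete, here is a minimal sketch of scoring by surface features alone. The features and weights are purely hypothetical (the actual features used by any real AES have not been published here); the point is that the score is computed from text patterns, with no understanding of the argument.

```python
import re

# Hypothetical surface features an AES might use; illustrative only.
def extract_features(essay: str) -> list[float]:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    connectives = {"however", "therefore", "furthermore", "moreover"}
    return [
        len(words) / 100.0,                   # essay length (scaled)
        len(words) / max(len(sentences), 1),  # mean sentence length
        sum(w.lower().strip(",.;") in connectives for w in words),
    ]

# Stand-in weights playing the role of values fitted during training.
WEIGHTS = [0.5, 0.1, 1.0]
BIAS = 0.0

def score(essay: str) -> float:
    # Statistical inference: match the new text's feature pattern
    # against the learned weights. No meaning is ever examined.
    return BIAS + sum(w * f for w, f in zip(WEIGHTS, extract_features(essay)))
```

Under this toy model, a string of meaningless sentences stitched together with "however" and "therefore" can outscore a short coherent claim, because only the patterns are weighed.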
Importantly, the essays that receive the highest marks for persuasive writing are arguably those that most closely follow the textbook structural script of persuasion.
Is ACARA's claim that algorithmic marking matches the consistency of human markers mistaken? Probably not – but consistency is not the problem.
It is entirely possible that machine learning can reliably reward essays that follow the structural script of persuasive writing, and do so more consistently than human markers. Examples from other fields, such as image classification in medical diagnosis, show this is plausible. It would certainly be faster and cheaper.
What the text actually says, however, does not matter: whether the argument is ethical, offensive, or entirely meaningless; whether it expresses a coherent view; or whether it speaks effectively to its intended audience.
All that matters is that the text exhibits the right structural patterns. In essence, algorithmic marking may reward "nonsense" writing: essays that barely engage with the topic, constructed only to satisfy the algorithm's criteria.