ChatGPT vs. students: Study reveals who writes better

Credit: Unsplash/CC0 Public Domain

AI-generated essays don’t yet live up to the efforts of real students, according to new research from the University of East Anglia (UK).

A new study published in Written Communication compared the work of 145 real students with essays generated by ChatGPT. The paper is titled “Does ChatGPT write like a student? Engagement markers in argumentative essays.”

While the AI essays were found to be impressively coherent and grammatically sound, they fell short in one crucial area—they lacked a personal touch.

As the line between human and machine writing continues to blur, the study underlines the importance of fostering critical literacy and ethical awareness in the digital age.

It is hoped that the findings could help educators spot cheating in schools, colleges and universities worldwide by recognizing machine-generated essays.

Prof Ken Hyland, from UEA’s School of Education and Lifelong Learning, said, “Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments.

“The fear is that ChatGPT and other AI writing tools potentially facilitate cheating and may weaken core literacy and critical thinking skills. This is especially the case as we don’t yet have tools to reliably detect AI-created texts.

“In response to these concerns, we wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers.”

The research team analyzed 145 essays written by real university students and another 145 generated by ChatGPT.

“We were particularly interested in looking at what we called ‘engagement markers’ like questions and personal commentary,” said Prof Hyland.

“We found that the essays written by real students consistently featured a rich array of engagement strategies, making them more interactive and persuasive.

“They were full of rhetorical questions, personal asides, and direct appeals to the reader—all techniques that enhance clarity, build connection, and produce a strong argument.

“The ChatGPT essays, on the other hand, while linguistically fluent, were more impersonal. The AI essays mimicked academic writing conventions, but they were unable to inject the text with a personal touch or to demonstrate a clear stance.

“They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and offered no strong perspective on the topic.

“This reflects the nature of its training data and statistical learning methods, which prioritize coherence over conversational nuance,” he added.
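The study's engagement-marker analysis was a detailed linguistic coding of the essays, but the basic idea can be illustrated with a toy script. The Python sketch below counts a few crude proxies for the markers Hyland mentions (questions, reader-directed pronouns, aside-style phrases); the marker lists and the sentence-splitting heuristic are illustrative assumptions, not the study's actual coding scheme.

```python
# Toy engagement-marker counter (illustrative only; the marker lists here are
# assumptions, not the coding scheme used in the Written Communication study).
import re

READER_PRONOUNS = {"you", "your", "we", "us", "our"}       # direct appeals to the reader
ASIDE_PHRASES = ("of course", "admittedly", "to be fair")  # rough proxies for personal asides

def engagement_profile(text: str) -> dict:
    """Count a few simple engagement markers in an essay."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "questions": sum(s.endswith("?") for s in sentences),
        "reader_pronouns": sum(w in READER_PRONOUNS for w in words),
        "asides": sum(any(p in s.lower() for p in ASIDE_PHRASES) for s in sentences),
        "sentences": len(sentences),
    }

essay = "Why does this matter? Consider, of course, what you already know. We all write."
print(engagement_profile(essay))
# {'questions': 1, 'reader_pronouns': 2, 'asides': 1, 'sentences': 3}
```

A real analysis of this kind would be far more elaborate (validated marker inventories, per-1,000-word normalization, statistical comparison across the two essay corpora), but even a crude count like this shows how questions and reader pronouns can be tallied mechanically across a set of texts.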

Despite these shortcomings, the study does not dismiss the role of AI in the classroom.

Instead, the researchers say that tools like ChatGPT should be used as teaching aids rather than shortcuts.

“When students come to school, college or university, we’re not just teaching them how to write, we’re teaching them how to think—and that’s something no algorithm can replicate,” added Prof Hyland.

This study was led by UEA in collaboration with Prof Kevin Jiang of Jilin University, China.

More information:
Jiang et al, Does ChatGPT write like a student? Engagement markers in argumentative essays, Written Communication (2025). ueaeprints.uea.ac.uk/id/eprint/97952/

Provided by
University of East Anglia

Citation:
ChatGPT vs. students: Study reveals who writes better (2025, April 30)
retrieved 30 April 2025
from https://phys.org/news/2025-04-chatgpt-students-reveals.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.




