Assessment in the AI Era: Grade the thinking, not just the text


When it comes to assessing student work in the age of AI, process and critical thinking are the keys.

Artificial intelligence is prompting educators to rethink how they assess student work. In this article, Xiao Hu, associate professor in the College of Information Science, writes that the process and thinking behind student work are just as important as the final product in a "post-plagiarism era."


Plagiarism has long been the quiet battle of higher education. But today, with generative AI capable of producing original-seeming work in seconds, that battle has transformed into something far more complex. At Digital Universities Asia this July, I was invited to tackle this very issue in a fireside chat with Times Higher Education. The question posed to me was bold: "Are we truly living in a post-plagiarism era?"

My answer – yes – surprised many in the audience. Yet I believe this era is less a threat than an opportunity to rethink what it means to learn, and how we measure it.

From outputs to process

When asked what alternative assessment methods we should explore, my response was clear: process and critical thinking are the keys.

For too long, higher education has focused almost exclusively on outcomes – polished essays, final exams and products that AI can now imitate convincingly, at least on the surface. But Bloom's taxonomy reminds us that higher levels of learning are analysis, synthesis, evaluation and creation. These can hardly be measured by outputs alone; they require us to evaluate how students think, not just what they submit.


That is why I advocate for assessments that emphasize process, such as:

  • Project-based assignments grounded in real-world contexts
  • Maker activities that link directly to students' experiences and environments
  • Collaborative work that includes peer critique and individual accountability
  • Personal reflections that surface metacognition – how students understand their own learning
  • Oral defenses and game-based assessments that test adaptability in real time

With the support of learning analytics, these approaches make visible what has traditionally been hidden: the evolution of ideas, self-regulation over time and the authentic voice of the learner.

These ideas come from over a decade of research and hands-on experience. One example is our work with maker activities, where my team developed a unique pedagogy and an AI-powered platform called LAVR. This tool helps students create immersive virtual reality stories that highlight their personal lives, communities and cultural or natural heritage. Their learning journey is captured through version histories and enriched by peer feedback, making learning visible, reflective and process-oriented.


To support collaborative learning, we created Wikiglass, a platform that uses a learning analytics dashboard to make students' progress and group interactions more visible. By highlighting how team members engage and contribute over time, Wikiglass prompts group awareness and socially shared self-regulated learning – key elements in assessing not just outcomes, but the learning process itself.

Tools like LAVR and Wikiglass show how thoughtful technology design can support deep learning and progress-oriented assessment. As we continue exploring new possibilities, the goal remains focused: to find evidence of higher-order thinking and measure it – not just in final products, but throughout the learning journey.

Supporting responsible AI use

Another question raised during the fireside chat was: How can universities support the ethical use of AI while ensuring accountability?

My answer comes in three key aspects:

First is, of course, education. Students must learn the difference between good and bad uses of AI, and how to critically evaluate AI outputs for accuracy, bias and relevance. This is easier said than done. AI literacy is quickly becoming a core skill – not just for the next-generation workforce, but for everyone. That's why many researchers, including myself, are working to help people learn how to collaborate effectively with AI.

I demonstrated several recent projects of mine on AI education. One is GAILA (GenAI and Learning Analytics), a platform that captures how students prompt AI, how much time they spend using AI tools and how they revise work – turning AI interactions into evidence of learning.


Another is Mariscope, a generative AI-supported creative learning tool for digital museum education. Students produce art, but they are assessed on their reflection and iteration, not just the final product.

And then there is WekiMusic, a music version of Teachable Machine that encourages learners to explore machine learning by manipulating their favorite music. Again, student interactions with the system over time are considered as formative assessment, helping educators understand how learners engage, experiment and grow.

Beyond the tools, we also need agreement. At the start of a course, instructors should co-create policies with students. When students have a say in what is permitted and what counts as plagiarism, they take greater ownership. Policies should be announced early and clearly, with requirements such as AI-use declarations and process reflections.

Finally, there is reflection. A declaration alone is not enough. Students should articulate how AI shaped their learning – whether it empowered them, misled them or challenged them to think differently. These reflections are authentic, deeply personal and cannot be easily replicated by machines. 

Frameworks for academic integrity

The final question at the session asked what frameworks and guidelines we need in this new landscape. 

My response is that universities must shift from punitive detection to proactive education. First and foremost, we must embrace AI – not ban it. Instead, we should integrate it into learning with transparency.

Second, we must invest in faculty development. Teachers need support to design new assessments and use learning analytics effectively. This leads to my third point: invest in technology. A growing number of tools – many powered by AI – are being developed to help educators understand students' learning progress and infer whether they have encountered difficulties and need help.

Our recent work takes this further with a self-service dashboard powered by large language models. This tool allows users – including teachers, students and school administrators – to generate customized visualizations of student learning behaviors using simple text prompts. It's a flexible, scalable way to turn data into insight.

Finally, any framework of academic integrity must include institutional commitment. Integrity should be supported at all levels and embedded in a culture of trust. Ultimately, integrity cannot be sustained by rules alone. It must be cultivated as a shared ethos between students, faculty and institutions.

Looking ahead

The "post-plagiarism era" will not be defined by machines outsmarting educators. It will be defined by whether we, as a community, reimagine assessment to reflect what truly matters. If we continue grading only the final answer, we may find ourselves grading the “ghost” of learning. But if we assess the journey – the prompts, the iterations, the judgments, the reflections – we preserve what makes education human.

At the University of Arizona, I am proud to be part of a community leading this shift through research, dialogue and innovation. I believe that, with evidence from the research described above and with thoughtful design that integrates learning principles and technologies, AI can illuminate integrity rather than undermine it.

AI is not the end of originality. It is our invitation to reimagine it.


This story originally appeared on the College of Information Sciences website.
