April 16, 2026
The Evolving Landscape of University Assessments in the Age of Generative AI

The academic world is grappling with a profound shift in educational assessment as the capabilities of generative artificial intelligence (Gen-AI) become increasingly apparent. In response, universities globally are re-evaluating their traditional methods, with many instructors pivoting back to in-person assessments to mitigate the risks associated with AI-assisted academic dishonesty. This movement towards "secure assessments," a term encompassing a range of methods designed to ensure academic integrity, is detailed in a December 2025 Nature article authored by Vitomir Kovanović, Abhinava Barthakur, Srećko Joksimović, and George Siemens. Their research highlights a trend in which institutions are implementing "short-term fixes such as ‘stress-testing’ written assessments and replacing them with oral examinations, handwritten tests or reflective formats." These approaches, though a seemingly direct response to the challenges posed by Gen-AI, carry their own complexities and limitations.

Defining Secure Assessments and the Drive for Authenticity

Secure assessments are fundamentally designed to verify that a student’s submitted work genuinely reflects their own understanding and abilities, free from unauthorized external assistance. This assistance can take various forms, including collaboration with peers, reliance on external resources, or, most pertinently in the current climate, the use of sophisticated Gen-AI tools. The impetus for employing these stringent assessment methods often stems from external mandates, such as program accreditation requirements or institutional policies that stipulate the inclusion of specific assessment types, like final examinations. While in-person, proctored exams are a prevalent example of a secure assessment, they represent just one facet of a broader strategy to ensure academic authenticity. This article examines the challenges of traditional secure assessment methods in higher education and the potential alternatives to them.

The Growing Pains of Secure Assessments: Logistics, Security, and Validity

The widespread adoption of secure assessments, particularly in-person, proctored examinations, presents a tripartite challenge encompassing logistical hurdles, evolving security concerns, and fundamental questions about validity.

Logistical Complexities in a Diverse Learning Environment

The practical implementation of secure assessments is becoming increasingly demanding. As the Gwenna Moss Centre for Teaching and Learning (GMCTL) at the University of Saskatchewan points out, "assessments that need security often require more planning (e.g., technical configuration, physical materials, and space) and time restrictions." This increased planning burden extends to scheduling, resource allocation, and the management of a growing number of student accommodation requests. Many universities are experiencing a significant rise in students approved for accommodations, which can necessitate specialized testing environments and additional staff support. This surge in demand strains available secure testing spaces and adds to the workload of instructors and administrative staff.

The shift towards online learning has introduced its own set of logistical complexities for secure assessments. The GMCTL further notes that online environments require robust measures for monitoring student activity during exams, the implementation of browser lockdown software to restrict access to other applications, and contingency plans for exam interruptions caused by student internet connectivity issues. These technical requirements and potential points of failure add layers of complexity to the administration of remote secure assessments.

The Ever-Evolving Threat to Exam Security

While traditional in-person proctored exams were once considered a robust defense against academic dishonesty, the advent of advanced technologies has created new vulnerabilities. Measures such as requiring photo identification, restricting movement within the exam room, and prohibiting personal items at desks are standard practices. However, the emergence of sophisticated wearable devices has introduced unprecedented challenges.

University of Calgary professor Sarah Elaine Eaton and her coauthors, in their research on AI glasses, highlight a critical concern: "Models that feature a heads-up display project information onto the inside of the lens, which is not visible to an external proctor." This capability effectively grants students access to the entire internet and powerful AI assistants, even within the confines of a closed-book, high-stakes examination. Traditional proctoring methods, designed to detect overt forms of cheating, are rendered obsolete by such discreet technologies. Eaton and her colleagues emphasize that "The use of AI glasses lies beyond the scope of conventional proctoring methods, which are not designed to identify or regulate such discreet technology."

This technological arms race is further exacerbated by the rapid evolution of Gen-AI. As University of Florida professor Sidney I. Dobrin observes, "trying to develop assignments for which GenAI platforms cannot provide viable responses may be impractical – if not impossible – given the velocity of AI evolution." The continuous improvement and adaptation of AI tools mean that any attempt to design assessments that are completely AI-proof may be a Sisyphean task.

Questioning the Validity: Does "Secure" Mean "Accurate"?

Beyond the logistical and security challenges, a more fundamental critique of secure assessments centers on their validity – their ability to accurately measure what they are intended to measure. Sean McMinn, director of the Center for Education Innovation at the Hong Kong University of Science and Technology, advocates for a critical self-reflection among educators, posing two essential questions: "What is the assessed task meant to prove?" and "Does this task still assess what I think it does?"

Traditionally, secure assessments aim to gauge a student’s mastery of knowledge and skills in the absence of external aid. However, the efficacy of this approach in truly assessing mastery is increasingly debated. Critics argue that the artificial conditions of high-stakes, timed exams may not be a reliable indicator of genuine understanding. Sarah Aiono, CEO of Longworth Education, posits that "Cognitive science tells us that knowledge retrieval is important for learning, but timed, high-stakes retrieval under exam conditions is a poor proxy for true understanding." Students might excel at memorizing and reproducing information under pressure, yet struggle to apply or transfer that knowledge in practical contexts. Conversely, a student with a deep conceptual grasp might falter under the stress of an exam, unable to recall information or articulate their understanding in the required format. The skill of exam writing itself, while important in some academic contexts, may not always align with broader learning objectives that prioritize critical thinking, problem-solving, and application.

Exploring Alternatives to Traditional Secure Assessments

The limitations of traditional secure assessments necessitate a broader exploration of alternative methods that can still uphold academic integrity while offering a more authentic measure of student learning.

The Power of Oral Assessments

Winona State University professor Steve M. Baule champions oral assessments as a valuable alternative. He suggests that "Short, low-stakes oral defenses, whether one-on-one, in small groups, or recorded, create powerful validation opportunities." These interactions can take various forms, such as asking students to summarize their key arguments, respond to clarifying questions, explain specific data interpretations, or justify design decisions. The low-pressure, brief nature of these exchanges can effectively confirm a student’s comprehension of the material without the inherent stresses and potential security vulnerabilities of written exams. Such methods allow for direct engagement and immediate feedback, providing a more nuanced understanding of a student’s learning process.

Practical and Performance-Based Assessments

Practical assessments offer another promising avenue for evaluating student mastery. This approach involves students applying their knowledge and skills to real-world tasks within practical settings. Examples include hands-on laboratory demonstrations, simulated scenarios, role-playing exercises, oral pitch or briefing presentations, live performances, or even having students teach material to their peers. Like proctored exams, these assessments can motivate students to achieve a deep understanding of the material, and they provide valuable feedback for further development. However, practical assessments come with their own logistical and security considerations that must be carefully managed.

Reconsidering the Definition of "Unassisted" Learning in a Connected World

As educational institutions navigate the evolving landscape shaped by Gen-AI, a critical re-examination of the concept of "unassisted" mastery is warranted. In a world where collaboration and the integration of advanced tools are increasingly the norm in professional environments, the exclusive emphasis on unassisted performance may become less relevant. The question arises: when is it truly imperative for students to demonstrate mastery without any form of assistance, human or technological?

The intention is not to eliminate all unassisted work or traditional exams, but rather to foster a more deliberate and pedagogical approach to assessment design. This moment presents an opportunity to move beyond default assessment choices and thoughtfully consider how evaluation methods can best align with intended learning outcomes and prepare students for future success in a complex, interconnected world. By embracing a diverse range of assessment strategies, educators can create a more robust, equitable, and valid system of academic evaluation that acknowledges the realities of technological advancement and the future of work. The ongoing dialogue surrounding Gen-AI in education is not merely about preventing cheating; it is about fundamentally rethinking what we value in learning and how we best measure its attainment.
