The increasing sophistication and widespread availability of generative artificial intelligence (GenAI) tools have precipitated a significant shift in academic assessment strategies at universities worldwide. Many institutions, grappling with the implications of AI for academic integrity, are returning to traditional in-person evaluation methods. A report published in Nature in December 2025 by researchers Vitomir Kovanović, Abhinava Barthakur, Srećko Joksimović, and George Siemens highlighted this trend, noting that universities have resorted to "short-term fixes such as ‘stress-testing’ written assessments and replacing them with oral examinations, hand written tests or reflective formats." These methods fall under the umbrella of "secure assessments": assessments designed to ensure that academic work is solely the product of the student’s own effort, free from external assistance, including AI.
The drive towards secure assessments is often fueled by external pressures, such as program accreditation requirements or institutional policies that prescribe specific assessment formats, such as mandatory final exams. While in-person, proctored examinations are a prevalent form of secure assessment, this article invites a critical re-examination of their role and efficacy in contemporary higher education.
The Multifaceted Challenges of Secure Assessments
The implementation and effectiveness of secure assessments, particularly traditional in-person proctored exams, face significant challenges. These fall into three broad areas: logistics, security, and validity.
Logistical Hurdles in a Growing Educational Ecosystem
The logistical demands of administering secure assessments are escalating, particularly at institutions experiencing enrollment growth and an increasing number of approved academic accommodations. As detailed by the Gwenna Moss Centre for Teaching and Learning (GMCTL) at the University of Saskatchewan, assessments requiring stringent security protocols necessitate extensive pre-planning. This includes intricate technical configurations for digital environments, the procurement and management of physical materials for paper-based exams, and the allocation of suitable, secure physical spaces. Furthermore, the conditions that secure assessments typically impose, such as time constraints and controlled environments, increase the demand for student accommodations.
The surge in students requiring accommodations, a trend observed across numerous universities, exacerbates these logistical pressures. This translates into increased workloads for instructors and administrative staff responsible for coordinating these assessments. Moreover, it places considerable strain on the availability of dedicated spaces equipped to handle specialized assessment needs.
The shift towards online learning, while offering flexibility, introduces its own set of logistical complexities for secure assessments. The GMCTL also points out that ensuring security in digital assessments involves constant monitoring of student activity to maintain academic integrity. This often requires the deployment of specialized browser lockdown software and robust protocols to manage disruptions, such as unexpected student internet connectivity issues. The technical infrastructure and human oversight required for these digital safeguards are substantial and prone to failure.
Evolving Security Threats in the Digital Age
Exam security has traditionally involved a suite of measures aimed at preventing cheating, such as requiring students to present photo identification for identity verification, restricting movement within the exam venue, prohibiting personal belongings at exam stations, and conducting physical searches for unauthorized materials. However, the advent of sophisticated wearable technology has introduced new vulnerabilities, even within the seemingly secure environment of a proctored exam room.
While traditional proctoring methods were designed to counter human collusion or the use of pre-written notes, they are ill-equipped to detect AI-powered assistance delivered through discreet devices. A study by Sarah Elaine Eaton and her colleagues at the University of Calgary examined the implications of AI-enabled eyewear. Their research highlighted how "models that feature a heads-up display project information onto the inside of the lens, which is not visible to an external proctor." This technology effectively grants students unfettered access to the internet and advanced AI assistants during high-stakes, closed-book examinations, leaving conventional proctoring techniques ineffective against it. The authors concluded that "the use of AI glasses lies beyond the scope of conventional proctoring methods, which are not designed to identify or regulate such discreet technology."
The rapid evolution of AI further complicates efforts to maintain security. Sidney I. Dobrin, a professor at the University of Florida, posits that "trying to develop assignments for which GenAI platforms cannot provide viable responses may be impractical – if not impossible – given the velocity of AI evolution." This sentiment underscores the perpetual arms race between educational institutions and AI developers, suggesting that security measures focused solely on preventing AI use may prove to be a futile endeavor in the long run.
Questioning the Validity and Purpose of Secure Assessments
Beyond logistical and security concerns, a fundamental question arises regarding the validity of secure assessments and whether they truly measure what they are intended to assess. Sean McMinn, Director of the Center for Education Innovation at the Hong Kong University of Science and Technology, advocates for instructors to critically evaluate their assessment practices by posing two key questions: "What is the assessed task meant to prove?" and "Does this task still assess what I think it does?"
Secure assessments are typically designed to gauge a student’s mastery of knowledge and skills without the aid of external resources, including other individuals, reference materials, or AI. However, the inherent limitations of these assessments, even in the absence of technological circumvention, can compromise their validity. Critics of traditional exams, such as Sarah Aiono, CEO of Longworth Education, argue that the efficacy of exams as a measure of true understanding is questionable. She points to cognitive science research suggesting that while knowledge retrieval is crucial for learning, the high-stakes, timed retrieval demanded by exams is a poor proxy for genuine comprehension. Students may excel at memorization and regurgitation without the ability to apply or transfer that knowledge. Conversely, students with a deep conceptual grasp might falter under pressure or struggle to articulate their understanding rapidly in a written format.
The skill of exam writing itself, while a component of academic performance, is often not a prioritized learning outcome. This raises questions about whether the current emphasis on secure, exam-based assessments aligns with broader educational goals of fostering critical thinking, problem-solving, and the ability to apply knowledge in diverse contexts.
Exploring Alternative Approaches to Assessing Unassisted Learning
While proctored exams remain a popular choice for secure assessment, educators are increasingly exploring alternative methods that can offer more authentic and valid measures of student learning while still addressing academic integrity concerns.
The Power of Oral Assessments
Steve M. Baule, a professor at Winona State University, champions oral assessments as a valuable alternative. He suggests that "short, low-stakes oral defenses, whether one-on-one, in small groups, or recorded, create powerful validation opportunities." These interactions allow students to demonstrate their understanding by summarizing key arguments, responding to clarifying questions, explaining data interpretations, or justifying design choices. Baule emphasizes that these conversations do not need to be high-pressure or time-intensive; even brief exchanges can effectively confirm a student’s grasp of the material. This approach not only assesses understanding but also develops students’ communication and argumentation skills, which are highly transferable.
Practical and Performance-Based Assessments
Practical assessments offer another avenue for evaluating student mastery. This approach involves students applying their acquired knowledge and skills to real-world tasks in simulated or authentic settings. Examples include hands-on laboratory demonstrations, scenario-based simulations, role-playing exercises, oral pitch or briefing presentations, live performances, or even having students teach concepts to their peers.
Similar to proctored exams, these practical assessments can motivate students to achieve mastery and provide valuable feedback for further development. However, they present their own logistical and security challenges that require careful consideration and planning. For instance, designing authentic simulated scenarios and preventing unauthorized assistance during live, performance-based evaluations both demand creative planning from educators.
Reconsidering the Imperative of "Unassisted" Mastery
As higher education institutions navigate the transformative impact of generative AI, a fundamental re-evaluation of assessment objectives is paramount. The question arises: when is it truly essential for students to demonstrate unassisted mastery of knowledge and skills? In a professional landscape increasingly characterized by collaborative environments and sophisticated AI integration, the necessity of purely unassisted competence warrants critical examination.
The objective is not to eliminate all forms of unassisted work or traditional examinations, but to cultivate a more thoughtful and deliberate approach to pedagogical design. This is an opportune moment for educators to critically assess their default assessment choices and consider whether those choices align with the evolving demands of the 21st-century workforce and the broader goals of intellectual development. By embracing a diverse range of assessment strategies, institutions can foster deeper learning, promote academic integrity, and better prepare students for a future in which collaboration and the judicious use of technology are key to success. The ongoing dialogue around AI in education necessitates a parallel evolution in how we measure learning, and that recalibration of assessment practices will be crucial to the continued value and integrity of higher education in the years to come.