The discourse surrounding artificial intelligence (AI) in higher education has largely coalesced around the practicalities of its integration. Universities grapple with how to deploy AI in alignment with universal design principles, ensuring equitable access across lines of gender, race, and socioeconomic status, and the long-term implications for the professoriate are under intense scrutiny. While these implementation-focused conversations are crucial and demand careful consideration, they often bypass a more fundamental question: is the widespread adoption of AI in the classroom desirable at all? The oversight rests on two prevalent assumptions: that technological progress is inevitable, and that technology is apolitical until the moment of its implementation.
The assertion of technological inevitability is historically problematic. Human history is not a predetermined path but a complex tapestry of evolving likelihoods, shaped by human choices and actions within prevailing social and structural forces. To view history as inevitable is to diminish individual agency and render concepts of responsibility and justice meaningless. Significant historical shifts, from the Industrial Revolution to the digital age, have occurred not because they were destined, but because individuals and collectives made deliberate decisions, pursued innovations, and navigated unforeseen consequences.
Similarly, the notion that technology is apolitical, a mere tool whose ethical valence is determined solely by the user's intent, has long been challenged by scholars of technology. This "tool view" likens AI to a hammer: inherently neutral, acquiring moral significance only through its application. But while a hammer's politics may indeed be limited, advanced technologies like AI are not simple instruments, and the analogy fails to account for the profound societal impacts of complex technological systems.
The Networked Nature of AI: Beyond the "Tool View"
Historians and philosophers of technology have long argued against this simplistic "tool view" when examining large-scale technological systems. Political scientist Langdon Winner, in his seminal 1980 essay "Do Artifacts Have Politics?", forcefully contended that technologies possess inherent political qualities. His example of nuclear power illustrates the point: adopting nuclear technology necessitates a fundamental restructuring of society, because its inherent dangers and complex infrastructure drive increased surveillance and a concentration of power. Once made, this commitment is exceedingly difficult to reverse, effectively embedding a particular social and political arrangement into the fabric of a nation.
Similarly, the integration of AI into higher education represents more than just the deployment of a sophisticated tool; it signifies a societal contract with far-reaching consequences. Unlike legislative acts, which typically undergo rigorous democratic debate and scrutiny, the adoption of powerful technologies like AI often proceeds with less public deliberation, precisely because they are often perceived as neutral instruments. The political and ethical dimensions are then relegated to the user’s intentions, a perspective that current AI discourse largely mirrors.
Scholars now advocate for a "network view" of technology, which recognizes that modern technologies are intricate webs of materials, geographical locations, practices, human actors, institutions, political ideologies, and ethical considerations that can span the globe. These technologies profoundly shape our experience of the world and our actions in ways that are often unanticipated and enduring. The electrification of urban landscapes in the 19th century, for instance, not only provided light but fundamentally altered urban navigation, social interaction, and artistic expression. Applying this network perspective to AI allows for a more critical examination of the long-term, binding social and political commitments involved.
Quantifying the Costs: Environmental and Societal Burdens of AI
While AI offers potential benefits to higher education, the associated costs are substantial and often underemphasized. AI systems, particularly large language models and complex analytical tools, rely on massive data centers that consume vast amounts of energy. Projections from the International Energy Agency indicate that global data center electricity consumption could reach approximately 1,050 terawatt-hours by 2026, placing it among the top energy consumers globally, comparable to the total consumption of entire nations like Japan or Russia. This immense energy demand has led to the continued operation of coal-fired power plants that were slated for retirement, directly contributing to carbon emissions and environmental degradation.
The production of AI hardware, including the specialized chips required for AI processing, and the ongoing cooling of data centers also consume significant quantities of water, exacerbating water scarcity in many regions. The environmental toll extends to pollution and greenhouse gas emissions associated with the entire lifecycle of AI technologies, from manufacturing to operation and eventual disposal.
Beyond environmental concerns, AI has proven to be a potent source of misinformation, disinformation, and sophisticated "deepfakes," posing significant challenges to truth and trust in the digital age. The supply chains underpinning AI device production are often characterized by exploitative labor practices, as meticulously documented in "Anatomy of an AI System" by Kate Crawford and Vladan Joler. This research highlights the human cost embedded in the creation of the technologies we increasingly rely upon.
The application of AI in critical sectors such as insurance coverage determination and judicial sentencing has already sparked widespread human rights concerns. Automated decision-making systems have been linked to a decrease in accountability, as responsibility is diffused through complex algorithmic processes. Moreover, the rapid expansion of the AI industry has demonstrably contributed to a further concentration of wealth and political power in the hands of a select few, exacerbating existing inequalities.
Higher Education’s Unique Vulnerabilities to AI
Within the specific context of higher education, the under-discussed and often undisclosed risks of AI adoption are particularly concerning. An over-reliance on AI tools could erode critical thinking and creativity among both students and faculty, as the ability to delegate complex tasks to AI diminishes the incentive for deep intellectual engagement and original thought.
Furthermore, the pervasive use of AI tools raises significant questions about student and faculty privacy and data security. The collection and analysis of vast amounts of personal and academic data by AI systems present new vulnerabilities, and informed consent is frequently sidestepped, buried within routine software updates or terms-of-service agreements that students and faculty may not fully comprehend. This dynamic also poses challenging questions regarding the working conditions and rights of academic labour, a concern recently highlighted by Hannah Johnston in a Canadian Association of University Teachers (CAUT) feature article on delivering guardrails in the era of AI.
A Call for Critical Inquiry: Prioritizing Desirability Over Unquestioned Adoption
If we are to move beyond the disempowering assumptions of technological inevitability and the naive belief in technology’s apolitical nature, a fundamental shift in our approach is required. Instead of a premature leap to implementation, the critical question must be: Are the potential benefits of wholesale AI adoption truly worth the substantial environmental, social, political, and educational costs?
This is not to suggest that AI holds no value. It may indeed prove beneficial in specific domains, such as accelerating research in healthcare or assisting with certain administrative tasks in higher education. However, such judgments demand nuanced consideration, focusing on the specific forms of AI being proposed and a thorough assessment of their associated costs. A blind adherence to technological boosterism is insufficient and potentially detrimental.
Just as the adoption of a significant piece of legislation binds a society to a particular future, so too does the widespread integration of AI. It forecloses other possibilities and shapes our collective trajectory. Therefore, it is imperative that we rigorously evaluate whether the potential environmental, political, and social costs are commensurate with any perceived benefits. The act of adopting technology, for any purpose, including education, is inherently a political and ethical undertaking, not a neutral one. The time for critical inquiry into the desirability of AI in higher education, prior to any widespread implementation, is now.