The way students and instructors create and share ideas has shifted dramatically.
Generative artificial intelligence (AI) can draft essays, solve problems, code, and suggest research sources.
This technology excites many while stirring concerns about learning and integrity.
As someone who spoke on AI integration in assessment at Nust’s recent 18th Great Teachers Workshop, I saw both excitement and concern in equal measure.
This raises the question: how might AI serve higher education, and how might it challenge the foundations of scholarship?
A LEARNING ALLY
AI can help students expand ideas and overcome starting points, offering outlines, probing questions and step-by-step explanations of complex concepts. It acts as a personal tutor available anytime.
Instructors can use AI to generate practice problems, draft quizzes, or summarise lengthy readings, saving preparation time.
For large classes, AI can detect common errors in writing or calculations, allowing instructors to address key themes efficiently.
This frees educators to focus on creative tasks, such as designing projects or providing individual mentorship, enhancing both teaching and learning experiences.
AI might help narrow gaps in access to support.
In regions where tutoring services are scarce or costly, students can still receive feedback on draft essays or practise foreign languages. This could empower those who might otherwise fall behind.
It could also give nontraditional students flexible options when they juggle studies with work or family commitments.
In this sense, AI can support greater participation and retention across diverse populations.
ACADEMIC INTEGRITY
At the same time, when AI does more than tutor, it can generate entire assignments.
Students might submit AI-created essays or code without understanding the content.
This weakens the connection between effort and learning.
When assessments no longer reflect what a student can think through on their own, grades become less meaningful.
This matters because education is meant to develop critical thinking skills and personal accountability, not merely produce pleasing papers.
Traditional assessment methods face a test. AI can answer essay questions in seconds.
Take-home exams lose meaning when AI provides correct solutions.
The spectre of AI-generated cheating undermines trust.
Instructors and institutions must rethink how they measure learning in a world where AI is widely available.
AI-RESISTANT ASSESSMENTS
One option is to design assessments that AI cannot complete without human insight.
For example, instructors might ask students to reflect on personal experiences or apply theory to local contexts.
AI can’t easily share a student’s personal journey in overcoming a challenge or describe the culture of a particular community.
Group projects that require coordination and face-to-face interaction can also raise the barrier for AI-generated work.
Oral examinations or live presentations compel students to think on their feet and demonstrate genuine understanding.
Instead of a single high-stakes paper (summative), instructors might consider using a series of low-stakes reflections where students respond to prompts over time (formative).
AI may help draft those reflections, but instructors can monitor drafts, ask follow-up questions, and observe growth.
The process becomes a conversation rather than a product that can be handed to a machine.
ETHICS AND LEGALITIES
Educational institutions must also adopt policies that clarify the appropriate use of AI.
Should students cite AI like a secondary source?
Must students acknowledge if AI suggests an idea or provides phrasing? Clear guidelines help students understand boundaries and encourage honesty.
Institutions might require honour pledges that include AI use alongside plagiarism and collusion.
There is also the question of data privacy.
Many AI tools collect user input and might store or share it. Students working on sensitive research or personal reflections could expose private details.
Universities should vet AI platforms for compliance with data protection regulations.
They might negotiate institutional agreements that limit data retention or guarantee confidentiality before adopting such platforms.
PREPARING EDUCATORS
Instructors cannot be expected to master every AI tool alone.
Professional development must include training on how AI works, its limitations, and how to integrate it responsibly.
Teachers need forums to share best practices, sample assessments, and case studies of both successful and failed implementations.
Institutions should allocate time and resources for faculty to experiment with AI in low-risk settings before rolling out new policies.
Generative AI will remain part of higher education’s future.
While it offers benefits in access, efficiency, and personalised guidance, it also threatens the integrity of assessment and the very purpose of learning.
If we embrace AI without caution, assessments will become hollow.
If we reject AI outright, students lose valuable support, and institutions risk falling behind.
The choice is not black and white.
We can treat AI as a tool for growth while protecting the human core of education. We can guide students to use AI ethically as an aid rather than a crutch.
In that scenario, AI becomes a friendly force in higher education, one that demands new approaches but never replaces the need for human thinking.
So, is AI a friend or a foe? Or is it a friendly foe? You be the judge!
– Oluibukun Ajayi is an associate professor of geoinformation technology at the Department of Land and Spatial Sciences, Namibia University of Science and Technology. The views expressed here are his own and not those of Nust; oajayi@nust.na