Opinions of Tuesday, 15 April 2025

Columnist: Dr Godfred Yaw Koi-Akrofi

The use of generative AI tools in our tertiary institutions: Policy direction and the way forward

Generative artificial intelligence is a branch of AI capable of producing new content

Balancing Innovation with Integrity

The advent of generative artificial intelligence (AI) is reshaping education in ways once confined to science fiction. Today, in many classrooms, AI assistants can draft essays in seconds, refine grammar, and even cite relevant sources with a single click.

However, these remarkable capabilities have sparked pressing questions: Will they erode critical thinking? Could they compromise academic integrity?

In Ghana and around the world, universities are embracing AI applications like ChatGPT, Bard (now known as Gemini), DeepSeek, and Midjourney, each offering new capabilities for research, creativity, and classroom engagement. However, their rapid adoption also triggers debates about ethical usage, equitable access, and regulatory oversight.

This article aims to inform policy decisions, share best practices, and invite robust dialogue among educators, policymakers, and technologists seeking to balance innovation with responsible governance.

But what exactly is generative AI? Generative artificial intelligence, commonly referred to as generative AI or GenAI, is a branch of artificial intelligence capable of producing new content such as conversations, stories, images, videos, and music by learning from vast amounts of data.

Rather than simply following pre-programmed rules, these systems adapt to new inputs, opening up unprecedented possibilities for teaching, learning, and creative expression. As these tools mature, their potential to transform tertiary education grows, prompting a deeper look at the opportunities and the challenges they present.

A BRIEF HISTORY OF GENERATIVE AI

Generative AI did not emerge overnight in its current form; rather, it has evolved through several distinct phases, often known by different names within specialist circles.

Early systems were rule-based, functioning strictly within human-defined parameters to produce limited outcomes. Because they could not learn or adapt, these systems remained confined to their original programming.

From the 1990s to the 2000s, machine learning techniques enabled AI to move beyond rigid rules by learning from data. This breakthrough allowed AI to recognize complex patterns, such as identifying different animals in photographs through extensive training on large datasets. However, it also raised concerns about data privacy and the reliability of insights derived from potentially biased or incomplete sources.

With the rise of deep learning in the 2010s, AI underwent another significant leap. Neural networks, modelled after the human brain, enabled machines to process and generate far more complex output, such as transforming thousands of images into entirely new ones.

At this point, AI could create a lifelike depiction of a tiger after analyzing millions of relevant images. Yet, while this rapid advancement revolutionized content creation, it also sparked heated debates over deepfakes and misinformation, issues that continue to challenge researchers and policymakers.

In 2014, the introduction of Generative Adversarial Networks (GANs) marked yet another notable milestone. By pairing one AI system that creates content with another that evaluates its authenticity, GANs greatly enhanced the realism of AI-generated images, videos, and sounds.

Despite unlocking new creative possibilities, the technology fueled ethical controversies over the potential misuse of hyper-realistic media for deceptive purposes.

The 2020s brought Large Language Models (LLMs) like GPT-3 and GPT-4, further broadening AI’s capabilities. Trained on massive datasets drawn from books, websites, and other sources, these models could produce human-like text, engage in natural conversations, compose essays, and generate code.

Despite their remarkable benefits, these systems have sparked critical concerns about algorithmic bias and their propensity to generate content that may sometimes be inaccurate.

Today, multimodal generative AI processes text, images, audio, and video in tandem, enabling more immersive content creation, such as turning a written description into an animated video or musical composition.

As these sophisticated models weave themselves into our daily lives, conversations surrounding the need for robust ethical frameworks and regulatory policies have intensified, emphasizing the responsible deployment of AI technologies.

POTENTIALS AND PITFALLS

Varied Forms of Generative AI

Generative AI can produce a wide range of outputs, depending on the task. For instance, ChatGPT excels at text-to-text generation, drafting emails, essays, translations, or even poetry from user prompts. Imagine typing a few notes and letting ChatGPT shape them into a coherent document.

Text-to-image systems such as DALL-E 2 turn written descriptions into realistic visuals. Picture typing “a tranquil beach with palm trees at sunset” and moments later receiving a custom digital illustration suitable for a travel brochure or social media post. This technology spares professionals and everyday users from tedious graphic work by automatically generating eye-catching imagery.

Other AI models handle image-to-image modifications, enhancing or restyling photos. A simple snapshot of a daytime cityscape can be turned into a vivid nighttime skyline, or a dull portrait can become a colourful pop-art piece. Image-to-text AI describes what appears in a photo, an invaluable feature for visually impaired individuals who rely on textual explanations to interpret images.

Audio-related innovations have also surged. Speech-to-text solutions accurately transcribe spoken language, powering everything from virtual assistants to automated subtitling. Conversely, text-to-audio models can convert written prompts into music or voice narrations, providing new frontiers for podcasters, radio producers, or students looking to add flair to presentations.

Then there is text-to-video, where a short narrative, for example, a concept for an advertisement, can be turned into an animated clip with characters and transitions. This ability drastically cuts the time and cost needed for video production, especially in marketing and education.

Some of the most exciting developments come under the umbrella of multimodal AI, which blends text, images, audio, and video into a single environment. Envision a virtual classroom where students ask questions in plain text and receive layered responses, including diagrams, voice explanations, and short animations.

Artists can combine music, visuals, and dialogue with minimal technical constraints. This fusion of multiple formats in one platform points to the next frontier of digital creation, where different mediums intersect in genuinely innovative ways.

A Partner Rather Than a Replacement

In many respects, generative AI works best alongside human ingenuity, functioning like an ever-available colleague rather than a substitute. Teachers can enlist AI to break down dense concepts; imagine automatically generated summaries or interactive quizzes, allowing them more time to address nuanced student questions.

Researchers might feed preliminary data into an AI tool, obtaining suggestions for new angles or hypotheses they hadn’t considered. Meanwhile, students benefit from instant feedback on their essays or projects, helping them refine ideas before finalizing their work.

This synergy can invigorate creativity. Take, for instance, a writer wrestling with a persistent case of writer’s block: a generative AI program can generate a handful of prompts, plot twists, or lines of dialogue to reignite their inspiration. Similarly, a business owner with limited design skills can rely on AI-generated logos or marketing templates to present a professional image without the costs of hiring a full design team.

Yet the ultimate goal remains to amplify rather than replace the human element. Teachers still guide discussions, researchers still interpret findings, and students still craft their own arguments. By letting AI handle certain repetitive or time-consuming tasks, people can focus on what they do best: critical thinking, empathy, and imaginative problem-solving.

Balancing the Upside with Caution

Despite the clear advantages of generative AI, skeptics caution that its very efficiency can become a double-edged sword. One major concern is “deskilling,” the idea that overreliance on AI tools might weaken the foundational skills students and professionals need—particularly critical thinking and problem-solving. If users let AI answer their toughest questions or craft entire essays, they risk missing out on the intellectual wrestling that sparks real learning.

There’s also the persistent threat of misinformation. While AI can generate impressively coherent text and strikingly real images, some of this output might be riddled with inaccuracies or biased assumptions. In academic contexts, this raises red flags for both students and instructors.

A paper that appears credible on the surface could be built on faulty AI-generated data. Moreover, if students rely on AI to “shortcut” their assignments, they may never develop the analytical and research skills that underpin genuine scholarship.

The key is to use AI tools with a discerning eye. Educators should guide students to question AI-generated content, cross-check facts, and use AI as a steppingstone rather than a crutch. By instilling digital literacy skills and a healthy dose of skepticism, institutions can help learners harness AI’s benefits without forfeiting their own intellectual growth.

Early Successes and Warning Signs

Real-world stories show both the promise and the pitfalls of integrating generative AI into higher education. Some universities have successfully deployed AI-driven tutoring systems that adapt to each student’s learning pace and provide immediate feedback, making large lecture courses more personalized. In these cases, students often report greater engagement and a clearer understanding of difficult material.

Conversely, several institutions have attempted to use automated grading software to handle large volumes of student work, only to encounter a backlash. Students complained of receiving impersonal feedback, and the software sometimes missed the nuances of their writing or problem-solving approach. Beyond eroding trust between students and faculty, such automated systems risk devaluing the mentorship aspect of education if they are treated as a complete replacement for human evaluation.

These successes and setbacks underscore why thoughtful policies are so essential. Colleges and universities can draft guidelines on the appropriate use of AI in coursework and assessments, specifying when AI assistance is allowed, encouraging transparency about its use, and maintaining safeguards against academic dishonesty.

By striking this balance, institutions stand to gain the efficiency and innovation AI provides without compromising the very human core of teaching and learning.

INSIGHTS FROM RECENT DISCUSSIONS

Diverse Discourses on AI

Bearman et al. (2023), in their analysis titled Discourses of Artificial Intelligence in Higher Education: A Critical Literature Review, argue that AI is often defined vaguely in educational contexts. They identify two main discourses shaping how universities respond to AI.

The first, the “Discourse of Imperative Response,” views AI as a transformative force requiring urgent adaptation, often framed as a choice between a dystopian future (if AI is resisted) and a utopian one (if it is embraced). The second, the “Altering Authority” discourse, sees AI as transferring control from human educators to machines, corporations, and data systems, ultimately altering how universities create and govern knowledge.

In this perspective, AI can empower teaching and learning but also risk diminishing educator autonomy. These dual narratives highlight the need for clearer frameworks to address accountability, governance, and the evolving role of students and faculty.

Curriculum Design and Ethical Challenges

Zawacki-Richter et al. (2019) illustrate how generative AI can streamline curriculum development by leveraging data-driven insights and tailoring content to individual learning pathways, which aligns with Sustainable Development Goal (SDG) #4 (Quality Education).

However, they cautioned that fulfilling these aspirations requires careful attention to bias, data protection laws, and collaborative efforts between AI and human stakeholders.

In ChatGPT and Beyond: How Generative AI May Affect Research, Teaching, and Practice, Peres et al. (2023) note that journals prohibit listing AI as a co-author due to accountability concerns. While AI can boost creativity in the ideation phase of research, it raises several concerns, including bias, misinformation, and the need to safeguard academic integrity. They also encourage further studies on AI’s influence in shaping important research questions.

Across these discussions, ethical and legal ambiguities, such as potential misinformation, bias, and uncharted territory in copyright ownership, loom large. AI developers and academic institutions are urged to build clear ethical frameworks, while further investigations are essential for clarifying the legal status of AI-generated works.

POLICY DIRECTIONS AND BEST PRACTICES

To address the challenges posed by generative AI in tertiary education, institutions need clear guidelines that balance innovation with ethical responsibility. Below are key areas where well-defined policies can make a significant difference:

1. Academic Integrity and Plagiarism: Institutions should expand their academic integrity rules to include AI-generated content. By requiring students to disclose any AI assistance, universities can reduce the risk of plagiarism and maintain transparency (Black & Chaput, 2024).

2. AI Literacy Initiatives: Comprehensive training programs on how AI works, its capabilities, and its ethical implications empower students and faculty to engage with AI thoughtfully and ensure responsible use. This kind of literacy helps them critically evaluate AI outputs rather than accept them at face value (Dwivedi et al., 2023).

3. Regulated Use in Assessments: Exam and coursework policies must explicitly define where and when AI tools are permissible. Some institutions ban AI for high-stakes tests but allow it for creative projects, ensuring students still hone their independent problem-solving skills (Kasneci et al., 2023).

4. Faculty Training and Integration: Educators and administrators should receive structured guidance on integrating AI into teaching. Responsible use of AI can save time and enhance learning, provided instructors understand both its benefits and limitations (Zhai, 2023).

5. Data Privacy and Security: Because AI tools often process large amounts of information, safeguarding student data is crucial. Institutions must implement robust privacy measures that comply with national and international laws to protect sensitive information (Kasneci et al., 2023).

Crafting and enforcing these policies should be an ongoing process. As AI technology evolves, so must the regulations, ensuring that universities can harness its promise without compromising academic standards or student well-being.

FINAL REFLECTIONS, SUGGESTIONS, AND PATH FORWARD

Adopting generative AI in higher education demands a nuanced perspective. Although these tools can greatly enhance teaching and learning, they also present significant challenges that call for proactive policy responses.

Educational institutions, in collaboration with policymakers, technology developers, and educators, should work to establish comprehensive frameworks that ensure responsible AI use. Additionally, ongoing research into the long-term effects of AI in education is crucial to refine best practices.

As AI continues to progress, its impact on academia will only grow. The key lies in harnessing its potential while mitigating risks through well-defined policies and agile responses that evolve with technological advancements. The core considerations include:

- Balancing Benefits and Risks: AI reshapes education, research, and marketing, yet concerns around academic integrity, bias, and public trust must be addressed.

- Policy Development and Student Guidance: Universities should formulate clear guidelines to govern AI’s role in academic activities and support students in responsibly incorporating AI into their work.

- Areas for Future Research:

1. Long-Term Effects on Learning and Critical Thinking: Investigating how AI influences student development over extended periods.

2. Ethical Governance Models: Exploring frameworks that promote responsible AI use within universities and industry settings.

FOR FURTHER READING

Bearman, M., Ryan, J., & Ajjawi, R. (2023). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86, 369–385. https://doi.org/10.1007/s10734-022-00937-2

Black, J., & Chaput, T. (2024). A discussion of artificial intelligence in visual art education. Journal of Computer and Communications, 12, 71–85. https://doi.org/10.4236/jcc.2024.125005

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., … & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Peres, R., Schreier, M., Schweidel, D., & Sorescu, A. (2023). ChatGPT and beyond: How generative artificial intelligence may affect research, teaching, and practice. International Journal of Research in Marketing, 40(2), 269–275. https://doi.org/10.1016/j.ijresmar.2023.03.001

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0

Zhai, X. (2023). ChatGPT: Reforming education on five aspects. Shanghai Education, 16–17. Available at SSRN: https://ssrn.com/abstract=4389098