
28 October

Opinion: “A call for consensus: AI's place in education”

Roemer Wage graduated from the University of Amsterdam with a degree in Information Science and is currently pursuing a Master's in Information Sciences at the Vrije Universiteit. His Bachelor's thesis, "Regulating Generative AI: An Analysis of the EU Approach," was graded an 8.5. Drawing on that research and on insights gained throughout his studies, he shares the following opinion on the role of AI in education.

Educational institutions are facing a growing challenge: the lack of clear guidelines on whether and how generative artificial intelligence (AI) tools like ChatGPT should be used in academic work. Without a unified approach, universities have implemented their own, often inconsistent, policies. It's time for a consensus to provide a clear direction moving forward.


At the core of the issue is the ambiguity surrounding tools like ChatGPT. Should students be allowed to use AI language models in their academic work? The question remains collectively unanswered, and universities have adopted wildly different policies. Some permit AI use as long as it is properly referenced (whether these tools can even be referenced in a meaningful way is questionable, but that is a topic for another time), while others prohibit it completely. The result? Students and educators are caught in a maze of uncertainty, often unsure of what is permissible.


This ambiguity extends to the lack of consensus on what qualifies as fraudulent AI use at universities that do prohibit AI in academic work. At what point should a piece of work be considered fraudulent? When 20, 50, or 80% of the content is "possibly generated by AI"? The absence of clear standards is further complicated by the tools used to detect AI-generated content. AI detection systems, such as Turnitin, rely on probabilistic algorithms, meaning their results are not definitive but express the likelihood of AI involvement. Research has shown that these systems are neither robust nor reliable, producing false positives that flag genuine student work as AI-generated and false negatives that let AI-generated text pass as human-written. The consequences can be severe, with students facing accusations of academic dishonesty based on flawed technology.
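
To see how little a percentage score actually settles, consider a minimal sketch (in Python, with an invented score, a hypothetical `classify_submission` helper, and arbitrary thresholds; real detectors such as Turnitin are proprietary and far more complex) of how a probabilistic score is turned into a verdict:

```python
# Illustrative sketch only: the score and thresholds below are invented,
# and no real detector works exactly like this.

def classify_submission(ai_probability: float, threshold: float) -> str:
    """Turn a probabilistic score into a binary verdict.

    The verdict depends entirely on where the cutoff is placed:
    move the threshold, and the same essay flips between outcomes.
    """
    if ai_probability >= threshold:
        return "flagged as possibly AI-generated"
    return "cleared"

# One piece of genuine student work, scored by a hypothetical detector:
score = 0.55  # "55% possibly generated by AI"

for threshold in (0.2, 0.5, 0.8):
    print(f"threshold {threshold:.0%}: {classify_submission(score, threshold)}")

# threshold 20%: flagged as possibly AI-generated
# threshold 50%: flagged as possibly AI-generated
# threshold 80%: cleared
```

The same essay is flagged or cleared depending solely on where the cutoff sits, and that cutoff is precisely the standard universities have never collectively agreed on.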


To make matters worse, the bare output of AI detection tools gives both parties little to work with. When a tool simply states that "X percent is possibly generated by AI," it leaves teachers with no concrete evidence to support an accusation beyond "because the machine says so." At the same time, students have limited ways to defend themselves, as the output offers no explanation of why their work was flagged as AI.


Adding to the confusion is the inconsistency in how different AI tools are treated. Machine translation tools like Google Translate and DeepL are generally accepted, even though machine-translated text frequently triggers false positives in AI detectors, while ChatGPT is not. Educators are also struggling to adapt. Many courses, particularly those focused on academic writing, coding, or other tasks that language models can easily perform, now face fundamental questions about their relevance and design. Should these courses be redesigned to teach students how to use AI tools effectively and ethically, or should universities enforce offline "AI-free" environments to ensure students develop essential skills without becoming dependent on AI? Without formal guidance, institutions are handling this challenge in completely different ways.


The current uncertainty calls for a living framework that evolves as our understanding of AI deepens. This approach can provide much-needed clarity and consistency. However, these guidelines should not be rigid or binding; universities and, more importantly, individual professors should still retain the flexibility to design their own courses and set their own policies based on the needs of their subjects. A living framework would offer structure while still allowing room for academic freedom, enabling educators to integrate or limit AI in ways that best support their teaching objectives.


If we continue without clear guidelines, the academic system risks falling behind the technological advancements shaping our future. Students must still develop fundamental skills without becoming dependent on AI, but we cannot ignore its growing presence. AI is here to stay and will play a significant role in their professional lives. Beyond the foundational skills and knowledge tied to their discipline, students should also learn to use AI responsibly and effectively, with techniques such as prompt engineering as one possible starting point.
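
To make the term concrete, here is a hypothetical illustration of prompt engineering (the task, template, and wording are invented for demonstration and not tied to any particular tool or curriculum):

```python
# Hypothetical illustration of prompt engineering: the same task, asked
# naively versus with an explicit role, constraints, and a learning goal.
task = "Explain the main risk categories of the EU AI Act"

# A bare request: length, depth, and sourcing are left to chance.
naive_prompt = task

# A deliberately structured request that keeps the student in charge.
engineered_prompt = (
    "You are a tutor supporting a bachelor's student.\n"
    f"Task: {task}.\n"
    "Constraints: at most 150 words, name the source of each claim,\n"
    "and end with one open question for the student to research independently."
)

print(engineered_prompt)
```

Teaching students to write the second kind of prompt rather than the first is the sort of skill a course on responsible AI use could address.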


As a student, I believe we need to strike the right balance by thoughtfully integrating AI into education, ensuring we get the best of both worlds: preserving essential competencies while embracing AI’s potential. If we don’t, we risk either losing touch with foundational knowledge or falling behind in technological advancements—neither of which will serve us well in the future.







Similar news items

De Amsterdamse Visie op AI: A Realistic View of Artificial Intelligence

14 November 2024


The new policy, De Amsterdamse Visie op AI, describes what role artificial intelligence (AI) may play in Amsterdam and how, according to its residents, this technology may influence life in the city. The vision came about after a months-long process of conversations and dialogue in which a broad range of Amsterdammers, from festival-goers to schoolchildren and from experts to the digitally inexperienced, shared their views on the future of AI in their city.


Interview: KPN Responsible AI Lab with Gianluigi Bardelloni and Eric Postma

14 November 2024


In this edition of ICAI's Interview, Gianluigi Bardelloni and Eric Postma talk about the developments in their ICAI Lab.


AI pilots TLC Science: Generative AI in science education

14 November


The UvA has started a new project in which the Teaching & Learning Centre Science investigates how generative AI, specifically ChatGPT, can contribute to improving academic teaching. Within this pilot programme at the Faculty of Science, various applications of GenAI in higher education are being tested and evaluated.
