Recently, there was yet another national stir over Generative AI (GenAI) in education, which, upon reflection, is entirely on-brand for a system hardwired more for control and closure than for cultivating risk-taking or innovation. The issue this time? Students are allegedly using ChatGPT to ‘cheat’ in their paper submissions.
From the report above: The professor’s briefing slides to students said: “The use of ChatGPT and other AI tools are not allowed in the development or generation of the essay proposal or the long essay.”
This is not the first such case; in fact, how universities react will set a precedent for the cases that will surely follow.
Even at the Junior College level, I encourage my students to use ChatGPT. I’ve had students who confessed to using ChatGPT to refine or improve their answers. Sometimes they don’t read the articles I issue to them, so they use ChatGPT as a summariser. Sometimes they use ChatGPT to craft full paragraphs, or even full essays, as homework submissions. This ‘learning strategy’ has echoed through my classes more than once. Here’s the uncomfortable truth: they’re not necessarily trying to cheat; many of them are just trying to cope with the increasing deluge of content and material covered in lessons—on more than one occasion, I’ve given up lessons to my Science colleagues who valiantly scramble to finish the curriculum, and the gratitude is mutual.
I question whether many students actually learn by adopting such a strategy. I’m all for using and being adept at the latest technological tools available to us, but I’m also pulled in the other direction, because I value authentic learning in the classroom. In leveraging GenAI, many of them are cosplaying as students rather than embodying the ‘essence’ of being one, and many more use GenAI to dispense with the ‘busywork’ of actually being a student: doing tedious, repetitive questions, actually sitting down and struggling through a difficult piece of text, writing plans which are purely drafts—many simply want the outcome without the struggle. In that sense, many students have become victims of a zeitgeist which has denounced ‘slow-pamine’, or Slow Dopamine, as the arch-enemy. Inundated by social media and the Endless Scroll, many of them have willingly ceded authentic learning in favour of instant knowledge.
Being able to produce paragraphs and essays of decent (if not great) quality is in no way an indicator that students are learning, and this is further compounded by how many schools today focus heavily on using EdTech tools in the classroom to support student learning. In that sense, I see GenAI more as a ‘leveller’ of skill than a tool that actually improves students’ learning outcomes. It democratises access to knowledge production, and students who excel at such production are sure to succeed in a future where familiarity with GenAI tools is the ambrosia of the economy. For the majority of them, however, as far as academic output is concerned, they have simply checked the box—and if there’s one thing I know, Singaporean students love checking off boxes and to-do lists, because it’s how many of them earn some form of validation. It takes maturity (typically in short supply in your average 17/18-year-old) and a little stepping outside themselves to recognise that while such a levelling tool is indeed useful, its real utility lies in the skill of understanding and distinguishing how and why the GenAI improved on, or responded to, the provided prompt—the process of writing. The unfortunate reality is that many of them are singularly focused on the output—what comes out of the Thinking Machine.
This is one of the core tensions confronting much of education today: an increasingly ossified, industrial model (manufacturing ‘students’ who will eventually become ‘workers’) that is still struggling to hold on to the Old Ways. Schools cannot have it both ways: they cannot adopt the convenience, efficiency, and reduced friction that technology offers to teaching and learning in the classroom without also accepting that students will push the use-cases of such tools to their breaking point and exploit those same tools for their own benefit.
I see this phenomenon as emblematic of a broader narrative: it is young people who push the boundaries, and it is the role of the institutions they grow up in to reinforce those boundaries. This is why most revolutions were historically fomented by the young—they are the ones willing to risk the status quo precisely because the stakes for them are exceptionally low. On my end, my stance has been quite liminal: this year, I implemented a no-phones policy in class, and I take comfort in one potential dividend: final standardised exams are wholly handwritten, without any digital tools. This, too, will eventually change as Singapore adopts a more digitalised academic infrastructure where exams are attempted wholly on laptops. At the same time, I do see these tools as helping everyone reach a ‘certain’ level of competence—where that door opens to is anyone’s guess, and I’m betting that it lands us in a more culturally impoverished place than we’re comfortable admitting (but that’s another post).
There’s a growing tech arms race in schools: AI detection software, anti-plagiarism checkers, and tools which ‘monitor’ or track student achievement, effort, and whether they have incorporated elements of GenAI into their work. To my knowledge, many of these detection tools are unreliable at best and outright dishonest at worst. But policing GenAI in a world where it has already proliferated in the bloodstream of student life is like banning calculators in an algebra class today.
The (AI) cat is already out of the bag.
While the changes are difficult, a lot of them can be initiated in individual classrooms without massive school-wide implementation. I recognise that there are tech-resistant teachers who might not be ready for such changes, yet who also feel helpless as they are outpaced by GenAI’s insidious creep into our classrooms. Some basic strategies include:
Making students submit AI prompt logs alongside assignments (more work, I know, but that is the point).
Requiring reflection on why they used AI, what they changed, and what they rejected (perhaps a reflection log).
Having them defend AI outputs orally, akin to PhD defences.
Writing, as cognitive psychologist Daniel Willingham reminds us, isn’t about transcription. It’s a process of grappling with complexity, of organising thought, and, more importantly, of ‘building context’. This context building is especially important as younger generations become increasingly reliant on explicit contexts rather than metaphorical or implicit ones—if the kids need to be told whether something’s funny, then they are certainly not alright. Case in point: a lot of Gen Alpha’s slang originates from an absurdist vision where the internet cannibalises itself to produce even more absurdist ideas (for example, ‘fanum tax’ comes from a streamer known as Fanum, ‘mewing’ and ‘maxxing’ were popularised on TikTok, while ‘skibidi’ came from a YouTube series). When students subcontract all of that context building to a bot, they rob themselves of the very practice that makes learning stick.
That said, I’m not advocating for what some are calling for: a reversion to a purely analog style of teaching. As tempting as it is, we cannot stuff the genie back into the bottle, especially not after it has gorged so much on the anhedonia of many young minds. Some nuance is necessary: in the early years, we should stay analog. Children should write by hand, read out loud, doodle in the margins, and daydream (and for that matter, so should pre-teens). When did parents equipping children with phones to obtain some modicum of safety and control become the norm? Let them think slow. Because GenAI speeds everything up, and childhood shouldn’t be an academic arms race.
Many of my students occupy brief pauses in their day with weapons of mass distraction: to them, their phones promise not only a convenient escape route where anonymity is prized, but also a good excuse not to connect with each other in person, even though in-person connection is a very necessary skill in the world of GenAI. ‘Silence,’ one student told me, ‘is scarier than homework’.
But we cannot unplug forever. The world students are entering is most certainly not analog, and it will not hesitate to leave them behind: AI won’t replace humans, but humans with AI will replace those without. So, what can schools do? If we believe that schools are a microcosm of society, then we have to build scaffolds. And as much as I would prefer a whole-school approach, it has to be done progressively. Let’s start with curriculum: it should teach students to use GenAI thoughtfully, not instinctively. Assessment comes next: instead of always ‘meta-testing’ and policing whether students have used GenAI, why not test how well they can push such technologies toward a productive outcome? At the same time, let’s train our teachers to model straddling the digital and the analog. Then—and only then—can we hope that the learning will slowly occur.
In the rush to adapt, we’ve mistaken compliance for creativity. But the best students aren’t those who use GenAI the most. They’re the ones who stop to ask if the answer makes sense in the broader context they’re using it for. ChatGPT can only ever be an assistant, never the author.
Thank you for reading this far. And as always, keep being the brightest star for others.