AI in language learning: making lemonade

I was recently asked to participate in a panel on Artificial Intelligence (AI) in Academic Writing at Michigan State University.

In my talking points, I drew some connections to the challenges that machine translation posed for language educators in the late 2000s. Google Translate launched in 2006, and by 2008 I was teaching intensive English to international students in Chicago. We English teachers panicked. For a minute there, we thought we would never again get a valid measurement of vocabulary learning outcomes. It’s true that we saw lots of plagiarism and other, often unintentional, forms of academic dishonesty. Ultimately, our profession took a collective deep breath and realized that the platform wasn’t going anywhere, so we needed to dig in and see what we could do with it instead.

I’m not entirely sure how we will deal with AI in the years to come, but language educators are notorious for making lemonade from lemons (deriving benefits from something initially challenging), and so we shall again. We will find ways to deal with ChatGPT just as we did with machine translation over a decade ago.

Much like the dictionary and thesaurus use that preceded widespread adoption of Google Translate, it became important for us English teachers to teach appropriate use of translation tools. Many of us rightly felt that total prohibition would have done our language learners a communicative disservice: translation is a really useful, sometimes critical skill, particularly when you’re living in the country of the language you’re learning. One thing we were able to do with not-quite-right translations in the early days was to show students the pitfalls of relying on translations wholesale; demonstrating the risk of saying something accidentally hilarious or catastrophically wrong was one way we made explicit the dangers of not relying on your own voice.

We also explored ways to reduce perfectionism and anxiety related to high-stakes assessment so that students could feel (more) comfortable using their own language and not worry (too much) about the effect it would have on their grade. But they worry about grades anyway, and rightfully so, when so much rides on the success of their assessment in English. It would be out of touch, at least in English language teaching in the United States, to focus on pure learning for learning’s sake when a student’s whole life trajectory depends on their next score on the TOEFL (Test of English as a Foreign Language), which determines (or at least did when I was teaching intensive English) where they could go to college in the U.S. It’s no wonder they feel pressure to submit mistake-free writing.

When I was approached about contributing to this panel, I checked in with fellow language educators who have dealt with ChatGPT firsthand. I also searched one of my favorite publishing venues, the Free Language Technology Magazine published online and open access by the International Association for Language Learning Technology (IALLT), to see what language educators have written about AI so far. I found two hits, both unsurprisingly optimistic. The authors of the first article, Robots vs. Humans: Does ChatGPT Pose a Challenge to Second Language Writing?, did two really interesting things. First, the language educators who wrote the article actually outsourced whole chunks of it to ChatGPT, to test its mettle, so to speak. I found that rhetorical move really dynamic. The authors then shrewdly annotated the chatbot’s responses, highlighting in particular the possible motivations behind its answer when asked about its effects on foreign language learning. Second, the authors conducted an experiment, having teachers of Spanish, French, and Italian grade ChatGPT-completed assignments to see if they could tell the difference between AI-generated work and student-generated responses. (Yes, instructors could still tell the difference, at least with what the current version of the platform can deliver, but it was a slow and unfortunate process for teachers to undertake.) Like me, the authors drew connections to the early days of widespread machine translation and put a positive spin on how foreign language teachers might use ChatGPT to their own and their students’ advantage. Lemonade out of lemons.

The other hit I found at FLTmag was written by a colleague of mine, Fred Poole, who teaches in the Master’s in Foreign Language Teaching program housed in my unit. In his article, Using ChatGPT to Design Language Material and Exercises, he discusses how educators can use ChatGPT to design materials for the language classroom. Lemonade out of lemons.

Beyond the scope of language learning, I think we need to be mindful that, like many technologies, AI like ChatGPT has huge potential as assistive technology for disabled students and educators alike. Thus, it would be unethical and inequitable to prohibit its use entirely in learning spaces. I’ll highlight a brief example from an article in The Conversation, Will AI tech like ChatGPT improve inclusion for people with communication disability?: “People using speech generating devices are often limited to laboriously entering a mere 10 words per minute with word prediction only increasing that to 12-18 words per minute…ChatGPT looks like it will be more inclusive of diversity by being able to understand poorly written commands, or sentences with several grammar or spelling errors. It can reportedly “read” poorly structured input, re-write and improve imperfect writing, and simplify complex texts into simpler summaries for early-stage readers. ChatGPT could be considered an “assistive technology” if it assists people with communication disability to get their message across more efficiently or effectively.”

Important to my own work at the intersection of language learning and Disability Justice, some of the assistive ChatGPT features listed above seem like they could be incredibly helpful for both language learners and disabled learners, and especially meaningful for learners who find themselves at the intersection of both identities.

Near the end of the discussion, each panelist was asked to name their hopes and fears related to AI. At this point, my greatest fear is less about cheating or a loss of instructional control resulting from AI bots like ChatGPT. Instead, my concerns center on the potential for knee-jerk reactions: what we educators might do pedagogically to try to regain some control in response to this new and unpredictable thing. I would hate for educators to allow this admittedly complex and potentially problematic tool to push us toward more rigidity, which I think might encourage students to be even more fearful of making mistakes.

So, what can we do? Make lemonade from lemons, of course. We learned that translation platforms weren’t actually lemons, and I predict the same of AI platforms like ChatGPT. Universities need some policies, sure. Students need very clear guidance on how to use, and how not to use, this technology. As more robust university policies related to AI emerge in the coming weeks and months, I urge language educators to remain as responsive as possible to students’ needs and their underlying motivations for using AI platforms. We also need to be vigilant about consistently returning to the alignment between assessment and learning outcomes, making sure we evaluate students only on what they need in order to achieve those outcomes. Otherwise, we risk needlessly and inequitably penalizing them for performance that isn’t tied to learning essentials.
