LLMs/ChatGPT for Language Teachers

Last month, I was invited to talk on Using AI Tools in Post-entry English and Academic Language Contexts for the English Australia Post-entry English and Academic Language (PEAL) SIG. This post describes a few ideas I presented, and I hope it will help you in your explorations. 

Following Murphy’s Law, ‘When any instrument is dropped, it will roll into the least accessible corner,’ a potentially powerful EdTech tool may roll into the corner with the lowest value for teaching and be used to generate unnecessary or subpar materials, which we already have in abundance.

We have seen many examples of this, especially during the COVID outbreak, when educational technology was used for emergency remote teaching. We quickly adopted new tools not originally designed for education, built platforms on top of them to mimic what we could already do without them, and just as quickly abandoned them with the verdict ‘they ain’t no good.’ However, sometimes it takes just one more step to unlock a tool’s potential.

This step often starts with thinking about how the tool can help you do things that you wouldn’t be able to do or that would be extremely challenging to do without it.

Something similar is happening with AI and GPT-3/4, aka large language models (LLMs), at the moment. Various AI-powered apps and platforms have appeared to help teachers plan lessons and activities and give feedback, yet they do not add much value to the process beyond saving time – and in many cases those time savings have to be spent improving the output anyway. Useful? In a way, yes, but this is low-hanging fruit.

There are a few areas in language teaching and learning where LLMs may genuinely add value. One that I am currently exploring is using LLMs to work with input modifications for language teaching and learning, which is particularly relevant in the contexts of EAP, ESP and EMI/CLIL.

For language learners, reading academic and specialised texts in a foreign language can be challenging: the complex syntax, technical vocabulary and specialised jargon typical of these texts can ultimately hinder the learning process. This may be addressed by text modification.

To explore how LLMs can assist teachers in generating and elaborating on input, I’m following the classification of input outlined by Michael Long: genuine, simplified, elaborated, modified elaborated input, and bimodal input.  

Genuine Input

Genuine input refers to authentic materials that are used for their original purpose and have not been specifically designed for pedagogical use.

However, ‘…except when employed with very advanced students, genuine texts will be linguistically overwhelming and usually constitute linguistically inappropriate input for language learning.’ [2, p.170]. One of the ways to tackle this may be to find or create quasi-authentic or linguistically simple texts.

Generating linguistically simple input using ChatGPT/GPT-3

Large language models (LLMs), such as GPT-3.5, can generate human-like texts in various genres and styles, including parallel texts, for specific target readers. This means we can use ChatGPT/LLMs to generate texts at a level of complexity and comprehensibility that is appropriate for a particular group of students. 

While the question of whether such texts can be considered authentic remains open, this approach may be a solution to the challenge of creating accessible texts for language learners, and the resulting texts can be used further to create bimodal/multimodal input with AI tools.

For this purpose, I will use a prompt that could be used by a blogger or writer to generate content for their blog or newspaper/magazine for international readership or a particular target group. 

The elements of the prompt include: Input – the initial information we’d like the LLM to use (to avoid hallucinations, bias or falsehoods); Role – the persona and context; Task – specifying the length* and the target readers’ level on the CEFR; and Constraints – format and style/tone parameters.

Prompt: Act as a machine learning expert. Your task is to write an informal blog post for young adults and a formal news article on [artificial intelligence/large language models]. Both texts should be short (around 300 words). Make the text suitable for [B2 level on the CEFR]. Write it in the style of Simon Sinek. Be engaging and interesting. Add nuance.
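To reuse a prompt like this across topics and levels, its structure can be captured in a small template function. This is a minimal sketch with hypothetical names (`build_prompt` is not part of any library), just to show the Role/Task/Level/Constraints pieces slotting together:

```python
def build_prompt(role: str, task: str, level: str, constraints: list[str]) -> str:
    """Assemble a prompt from a Role, a Task, a CEFR level, and Constraints."""
    lines = [
        f"Act as {role}.",
        f"Your task is to {task}.",
        f"Make the text suitable for {level} level on the CEFR.",
    ]
    lines.extend(constraints)  # e.g. style, tone, and format requirements
    return " ".join(lines)

prompt = build_prompt(
    role="a machine learning expert",
    task="write an informal blog post for young adults on large language models",
    level="B2",
    constraints=[
        "Write it in the style of Simon Sinek.",
        "Be engaging and interesting.",
        "Add nuance.",
    ],
)
print(prompt)
```

Changing the topic, level or style then means changing one argument rather than rewriting the whole prompt by hand.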

To avoid any bias or hallucination, I will also provide the baseline text on machine learning from a credible source to constrain the output generated by the LLM.

Note: *You’re unlikely to get exactly 300 words as GPT-3/ChatGPT doesn’t ‘see’ words but uses a token system, where one token generally corresponds to roughly ¾ of a word, i.e. 100 tokens ~= 75 words. 
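The ¾-word rule of thumb above is enough to budget tokens for a target word count. A quick sketch (the 0.75 ratio is the approximation from the note, not an exact tokenizer):

```python
def words_to_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Estimate tokens needed for a word count (~100 tokens ≈ 75 words)."""
    return round(word_count / words_per_token)

# A 300-word text needs roughly 400 tokens, so set the token limit a bit above that.
print(words_to_tokens(300))  # → 400
```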

Simplified Input

‘A teacher who takes on the task of adapting texts becomes, effectively, a materials writer.’

Simplification involves deletion, substitution, expansion, and movement of text elements to make a text more accessible to students. However, this is a complex task that poses a significant challenge for teachers. While some teachers may have received training in how to simplify or elaborate texts, many others approach the task intuitively, relying on personal experience and background knowledge (read more here: Approaches to simplifying academic texts in English: English teachers’ views and practices). By and large, the process is time-consuming and relies on intuitive decision-making.

An LLM can be a useful tool to facilitate the process and help balance simplification against the authenticity of the text. Unlike existing text-simplification platforms, it offers a high level of flexibility and iteration.

Let’s take the following paragraph as an example.

‘Constantly recycling the same grammatical patterns and limited set of vocabulary items results in impoverished input, which is counter-productive from an acquisition perspective. Acquisition potential is sacrificed for comprehensibility thus excluding many new opportunities for learning. Comprehensibility is needed, but language acquisition from the input should not be sacrificed.’

Source: Materials for ELT – and Noticing by Geoff Jordan

Now let’s use GPT-3 to simplify it, without setting any constraints.

Prompt: Make the following text easy to understand: [your text]

We can also introduce particular constraints to make the text more suitable for our purposes, and more engaging for students. 

Prompt: Act as an experienced writer and editor. Your task is to improve the comprehensibility of the following text for students whose first language is not English. Simplify complex language without sacrificing accuracy or depth. Clarify confusing or unclear concepts using a metaphor or analogy. [‘your text’]

You can also set more specific constraints.

*See some more examples of text modifications in ChatGPT Prompts for Language Teachers

Elaborated Input

Elaborated input is more effective than simplified input because it does not dilute the content of the original text.

To work with this type of input, use the LLM in the Playground, where you can set the temperature – a parameter that controls the level of randomness in the generated text. If you want the model to follow your prompt strictly, set the temperature to 0.

In addition to the role and task, it is also a good idea to add a few examples for the model to follow.

Prompt: Explain all words or phrases or concepts that a 5-year-old would not understand in the sentence. Keep the sentence intact. Do not remove or replace any parts in the sentence. Only add explanations or synonyms within the sentence, after the words you explain. Follow the example below:

[Example: AI models that generate stunning imagery from simple phrases are evolving into powerful creative and commercial tools.

Elaborated input: AI models, computer programs, that generate stunning imagery, pictures that are beautiful and attractive, from simple phrases, words that are easy to understand, are evolving, becoming more complex, into powerful creative and commercial tools, products that can be used to make money and express ideas.]
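Instruction plus worked example is essentially a one-shot prompt. Assembling it programmatically keeps the pieces separate, so the example can be swapped out per text type. A minimal sketch with a hypothetical helper (not a library function):

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], sentence: str) -> str:
    """Join an instruction, (source, elaborated) example pairs, and the new sentence."""
    parts = [instruction]
    for source, elaborated in examples:
        parts.append(f"Example: {source}\nElaborated input: {elaborated}")
    # End with the new sentence and an open slot for the model to complete.
    parts.append(f"Sentence: {sentence}\nElaborated input:")
    return "\n\n".join(parts)

p = few_shot_prompt(
    "Explain all words or phrases or concepts that a 5-year-old would not understand "
    "in the sentence. Keep the sentence intact.",
    [("AI models generate stunning imagery from simple phrases.",
      "AI models, computer programs, generate stunning imagery, beautiful pictures, "
      "from simple phrases, words that are easy to understand.")],
    "Large language models are evolving into powerful creative tools.",
)
```

Adding a second or third example pair to the list is all it takes to turn this into a few-shot prompt.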

You can modify the elaborated input further and prompt the LLM to break down long sentences into shorter ones and incorporate linking words to restore normal sentence length.

When working with an LLM, the same prompt can generate different results each time. If you are not satisfied with the output or would like to see other options, click ‘Regenerate,’ and the LLM will produce more versions to choose from and elaborate further.

You can also specify particular requirements, such as the use of specific linking words, or request additional clarifications to ensure the LLM generates the desired output. 

Multimodal Input

The resulting text may be used further to create bimodal/multimodal input with the help of text-to-speech, text-to-image and text-to-video software.

Image: Midjourney; Video: D-ID; Voice: ElevenLabs

