First off, a disclaimer: this is a rough transcription of a talk I gave at the 2023 Upper Peninsula Teaching and Learning Conference. I really think that AI in academia is the most important contemporary issue for educators to address, and I chose the topic so I could formulate my own thoughts and opinions on the matter.
A survey of Stanford students in Fall 2022 showed that 17% of students used ChatGPT to help them in some way on exams. And this was just before the tool got a ton of media attention. I did a quick poll in my Microcontrollers class in Spring 2023 and found that ~33% of my students were using ChatGPT to assist with assignments in some way. So the big question being asked is: “Is this a good thing?”
Large Language Models (LLMs) in Short
Before we answer that question, it may be wise to take a step back and look at what ChatGPT really is. ChatGPT is considered a large language model (LLM), which is largely based on the idea of artificial neural networks. Given your prompt, the neural network simply predicts, one word at a time, whichever word is most statistically likely to come next.
You know that feature on your phone, where you type a word and it suggests the next word? I’m not sure if that little feature is based on a neural net, but it is certainly trained on your previous texts. Well, ChatGPT was trained on 570 GB of text, and that’s a lot of data when we’re talking about raw text.
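To make “predict the next word” concrete, here is a minimal sketch in Python using a toy bigram-count model of my own. This is not how ChatGPT is actually built (a real LLM uses a neural network over sub-word tokens and vastly more data), but the predict-the-most-likely-next-word idea is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in some training text.
training_text = "the cat sat on the mat the cat ate the food"
words = training_text.split()

next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', since 'cat' followed 'the' more often than 'mat' or 'food'
```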
Something that will be important to consider later in this post is the limitations of LLMs:
- They cannot generate large (multi-page) reports. They can try, but the results become pretty easy to spot: they tend to lose their structure or become repetitive. That’s not to say that you can’t use them to build up long reports piece by piece.
- Their knowledge is not infinite.
  - You can find very niche / recent topics that they have no knowledge of.
  - LLMs can straight up be incorrect in their assertions (you’ll sometimes see this called “hallucinations”).
- They have limited mathematical and analytical ability.
- LLMs only work with text.
Note that those limitations will likely not be around forever, and it is unwise to formulate lasting plans around them. In my opinion we are at the bottom of the curve for this technology, and are only at the beginning of this wave of AI tools.
How LLMs can be used
There are several different models that students use to solve problems, but one of the most common is something I call “The Google Loop”. In essence, a student Googles a problem, reviews the material, and if it doesn’t make sense, heads back to Google.
The operative part of that loop is the decision “Does it make sense?” That’s where the real thinking happens: students have to reflect and make sure that the information they’ve found will meet their needs.
Now let’s see how that loop changes when we throw an LLM into the mix:
The loop is pretty much the same, and the key is still the validation step. Of course, this loop only works if the student is aiming to learn from the problem, not just entering answers without any thought as to whether they’re right. I envision this loop being used for problems like writing programs, or other engineering problems.
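As a rough illustration, here is a hypothetical Python sketch of what that LLM loop might look like for a programming problem. `ask_llm` and `looks_correct` are placeholders I made up for illustration, not a real API; the point is that the student’s “does it make sense?” check stays in the loop:

```python
# Hypothetical sketch of the LLM-assisted problem-solving loop.
# ask_llm() and looks_correct() are stand-ins, not real APIs.

def ask_llm(prompt):
    """Placeholder for asking a chat model (e.g. pasting the prompt into ChatGPT)."""
    return f"<candidate solution for: {prompt}>"

def looks_correct(candidate):
    """Placeholder for the student's own validation: run it, test it, reason about it."""
    return True  # in reality, this is where the real thinking happens

def llm_loop(problem, max_attempts=5):
    prompt = problem
    for _ in range(max_attempts):
        candidate = ask_llm(prompt)
        if looks_correct(candidate):  # the "Does it make sense?" step
            return candidate
        prompt = problem + " (the previous attempt didn't work; please revise)"
    return None  # fall back to other resources, e.g. the Google loop

print(llm_loop("Blink an LED every 500 ms on a microcontroller"))
```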
Now let’s look at a student’s report writing process:
Once again I’ve highlighted the key parts of this cycle in red. A potentially contentious point is my choice to exclude “Expand each outline point” from the key parts. At least in the context of my teaching, the expansion step is just busy work. In most cases the students know what they need to say, and this step just takes up time. Now let’s see how this model works when we throw an LLM into the mix:
I would argue that in a way LLMs can actually enhance the learning experience for report writing. If you want a good output from an LLM, you have to give it a good input. Sometimes this can result in a sort-of dialog with the tool to explain and correct things. This is still a student exercising their communication & comprehension skills, and in the end is that not the point of a writing assignment?
Now if the purpose of your class is composition or argumentative writing (or things along that vein), this whole loop can somewhat go out the window.
And now time for a spicy topic. Imagine this situation: you’re applying for a job, and you lean on an LLM to help you write a cover letter. The employer receives your application, and they use an LLM to reduce your letter to a summary.
What was the point of the cover letter at all?
Perhaps LLMs could usher an era of brevity?
LLMs for Ideation
I have used ChatGPT with decent success as an ideation companion, and I think it would be wise to teach students how to use it for this purpose as well. The two main ways I’ve incorporated ChatGPT into my workflow are:
- Writer’s block prevention: Whenever I find that I’m having trouble getting the words on paper, I’ve found it helpful to explain to ChatGPT what I’m thinking of saying, just to get inspired. Personally, I don’t often use an LLM’s material directly in my work.
- Strawman creation: Once in a while I come up with crackpot ideas that I don’t even know how to start on, or what technology to use. I’ve engaged in dialogs with ChatGPT, explaining my idea and asking what technology or methods to use, or for ideas to enhance the concept. In my experience this has worked well.
Thwarting LLMs
It may be your desire to create assignments that thwart the usage of LLMs in your class. Here are six recommendations:
- Increase reliance on visual figures: LLMs require text inputs, so using images in your problems creates a roadblock for students. And if a student transcribes the image into a verbal description, I would say there is at least some good educational value in that.
- Require a rigorous bibliography: LLMs are not able to give accurate citations. However, validating citations can be a somewhat laborious endeavor when grading.
- Increase the difficulty: If a problem is of sufficient complexity, it cannot be solved by ChatGPT. In some cases the problem can be broken up into smaller pieces and solved with LLMs, but that still requires an understanding of how to deconstruct the problem.
- Make the problem “more niche”: LLMs are not all-knowing. If your problem relies on uncommon or novel knowledge, LLMs won’t have a leg to stand on.
- Require in-person work / exams: Removing the option / temptation to use LLMs is the only foolproof way to thwart them. However, this may not be feasible for large classes.
- Project work: Having students develop substantial designs requires them to demonstrate their mastery of the engineering and/or scientific method, something that LLMs can only emulate.
All of these proposed methods are stopgap solutions. We are at the bottom of the curve for the implementation and complexity of these tools, so within a short period the limitations that educators could exploit may be gone.
Closing
ChatGPT is a pretty young technology, and students did not have it available during their upbringing. They’re still learning how to use these tools, so I feel it is my responsibility to educate students and show them how to use LLMs responsibly and efficiently. After all, we face a very real probability that they’ll be using these tools when they enter the workforce.