Is that why you're here?
As I was idly browsing Bluesky this evening, I spotted some terrific words of advice on the role of LLMs within study.
They were written by Robert McNees, associate professor of Physics at Loyola University Chicago, who has kindly given me permission to share them here. Robert stressed it shouldn't be taken as the college's policy, but rather as a discussion starter for his own students.
The paragraph in bold is particularly important, I think, and it offers a fine framework for how journalists should be considering their own AI use at work, too. You can find Robert on Bluesky here.
Should I Use ChatGPT Or Another LLM To Study?
I wouldn't recommend it. I try to keep up with the capabilities of the major LLMs. They can do some things really well, if you use them the right way. However, they frequently make mistakes when generating responses to questions about physics. Sometimes these mistakes are obvious, sometimes they are subtle and hard to spot. The fact that you cannot trust the output of LLMs should be reason enough not to rely on these systems when you are trying to learn a new subject.
But that's not the only problem. Interactions with LLMs feel like a dialog, so it's natural to think the usual rules of conversation apply. You ask a question and expect the response will be an answer to that question. It's important to understand that this is not what's happening. An LLM is designed to generate statistically likely responses to the question "What would an answer to this query sound like?" This is not the same thing as answering the question. It might produce what you are looking for, or it might not. This is one reason why output from an LLM will sound authoritative even when it's wrong, and apologetic when mistakes are pointed out. It isn't authoritative or apologetic, and it isn't "thinking" about your question. These are just the sorts of responses that best fit a very complicated set of likelihood criteria.
A bigger problem is that using an LLM short circuits the process of thinking through questions and developing strategies to answer them. It's not that an LLM never gets things right; they often produce correct output. But correct outputs are limited to material in the model's training data: questions we already know how to answer. Is that why you're here? To answer questions we already know how to answer? Whether you are studying Physics or English or Business, all your instructors are trying to help you learn how to answer questions for yourself. Part of that training involves questions we already understand, because that's an effective way of learning processes that can be applied to questions we don't understand. This is one of the most important aspects of your college education and it takes practice. Asking an LLM may or may not generate a correct answer, but either way it prevents you from practicing and learning these processes.
To make matters worse, there is now research claiming that frequent use of LLMs has neurological and behavioral consequences. One recent study finds significant cognitive debt and consistent underperformance compared to peers who do not rely on these systems. That is a steep price for a momentary convenience. So I can't stop you from using an LLM, but I would urge you to consider the long-term cost.