Age of AI: Iowa’s faculty faces new reality as classes start
‘If you use ChatGPT to help you write a discussion-board response, you will be cheating yourself’

Aug. 20, 2023 6:00 am, Updated: Aug. 21, 2023 9:45 am
IOWA CITY — For the first time in his 45 years as a professor, Iowa State University’s Michael Bugeja — like many of his peers — included new language in a syllabus this fall for his “media ethics” course addressing head-on an issue threatening to infiltrate all corners of higher education.
“We will not be monitoring the use of ChatGPT,” distinguished journalism professor Bugeja wrote. “But you should know that your instructor’s expertise is technical in nature, and he is quick to identify (artificial intelligence) hallucinations.”
AI hallucinations — which are artificial intelligence-generated untruths and fabricated information — are what worry Bugeja most about generative-AI tools like ChatGPT, a “large language model chatbot” capable of producing high-level writing, code and other content based on prompts and questions.
“Language models generate false information that is easy to fact-check,” Bugeja warned incoming students. “That said, if you use ChatGPT to help you write a discussion-board response, you will be cheating yourself of the critical thinking that is a hallmark of this class. Chatbots can inspire you. That is fine. But you should write the content.”
Bugeja’s concern about false information is one of many AI-related worries facing instructors and professors as students return to campus this fall. Others include plagiarism, misrepresentation and technology-aided shortcuts that students might employ — depriving themselves of material learning and tricking instructors into thinking they produced or wrote content they didn’t.
Of course, artificial intelligence can be helpful, too — supporting students and faculty alike by improving efficiency, reducing tedium, exemplifying effective writing, generating ideas and unclogging creative blocks.
“There's a balance that needs to happen as we think about the impact of the large language models,” according to University of Iowa business analytics professor Barry Thomas, who serves as senior associate dean in the Tippie College of Business. “It’s not all negative with something like ChatGPT.”
Especially, he said, in business writing and communication.
“One of the things we've talked about is the need to balance academic integrity with the fact that our students are going to go into the workplace and compete against other employees who gained skills using means that allow those employees to be effective and efficient,” he said. “Our students need to be doing exactly the same, or they’re going to fall behind.”
Strengths and weaknesses
The AI train is moving fast.
This time last year, ChatGPT didn’t exist. What did, though, was OpenAI — an artificial intelligence research lab started in 2015 to develop and investigate generative models and align them with human values, under the larger mission of ensuring AI benefits “all of humanity.”
With other generative-AI tools already available, OpenAI in late November debuted a ChatGPT demo that quickly gained steam — intriguing and exciting some students, educators, employees and employers, while concerning or even alarming others.
In the months since, OpenAI has rolled out newer ChatGPT models — as have other generative-AI providers — increasing the scope and quality of writing, code, images, speech, video, 3D models, games and music possible through artificial intelligence.
The breadth of AI opportunity means most colleges across most major universities this fall are feeling some impact — especially those in the liberal arts and sciences, given the popularity of large language models and their ability to generate humanlike text.
Key strengths of AI tools like ChatGPT are fluency, speed and the ability to adjust based on changes in a user’s question or prompt. Among their shortcomings: they aren’t sentient, objective, authoritative or ethical.
They can’t use common sense, they can’t weed out bias and they can produce misinformation.
Some worry about loss of privacy — given the companies behind the chatbots can access the prompts, questions and messages of users. And others worry about AI internet browser plugins that can try to answer test and homework questions.
Put together, the generative-AI package raises flags for students and instructors in higher education. For professors, ChatGPT’s uncanny ability to replicate what a student might produce is at or near the top of their concerns.
“Electronic plagiarism checkers that are already in place — they've actually struggled to accurately identify AI-produced text,” Abram Anders, ISU English professor and interim associate director of the Student Innovation Center, told the Iowa Board of Regents in June. “We also are finding now that the standard forms of assessments — things that we all would have done, the take-home exam, the annotated bibliography, the research paper — these are going to become less-reliable indicators of student performance because ChatGPT can be used on them so easily.”
Some professors stress that students who most need communication and writing training — including science, technology, engineering or math students looking to build their employable soft skills — will be the most tempted.
“So clearly there are going to be some circumstances in some classes where the use of AI will be detrimental and would need to be prohibited,” Anders said.
Today’s top-tier AI-generated content, though, can lead employers to expect that students will graduate adept at wielding that power to their advantage.
“Prompt engineer is likely to be a future entry-level job for many of our students,” Anders said. “So AI use will be a requirement in some courses.”
Whichever way an instructor goes, according to Anders, one traditional aspect of higher education in the humanities — the ability to cultivate a taste for quality writing and coding and discern the difference between persuasive and unpersuasive arguments — will remain relevant.
“That's actually going to become much more important,” he said.
Faculty guidance
But professors are going to have to get more creative in how they teach it — and the universities are giving them freedom to do that.
UI administrators, for example, have encouraged professors and instructors “to provide clear instructions about permissible AI uses, aligning with course goals and values.”
The UI provost’s office this month issued tips, guidance and resources to help instructors “adapt to AI in the classroom” — including sample language for course syllabuses banning its use altogether, allowing it with attribution or in certain circumstances, or encouraging its use on specific tasks.
“Students are invited to use AI platforms to help prepare for assignments and projects,” according to one permissive syllabus suggestion. “I also welcome you to use AI tools to help revise and edit your work.”
Another syllabus option banning its use reads, “AI-generated submissions are not permitted and will be treated as plagiarism.”
UI business professor Pamela Bourjaily said, given the bigger business-world consequences of generative-AI’s acceleration, she’s going to use it as a tool this fall — even working it into assignments.
“I am trying to foster an environment in which ChatGPT, or generative AI, is an ally in the classroom,” she said, citing data she collected in the spring that shaped her policy.
During the spring 2023 semester — the first full semester in which ChatGPT was available — Bourjaily assessed a writing assignment based on seven criteria, including structure, arguments, examples and ability to conclude with a take-away message. She then asked students whether they had used ChatGPT and, if so, to what extent, and compared those responses with their scores.
Bourjaily said the students who did best either didn’t use it at all, or used it only for structural and organizational help through a series of prompts.
“This is a tool that is only as good as the prompting you give it,” she said. “The work needs to be edited. Whether it's edited by you, the human, or edited by going back to the chatbot and saying, ‘OK, thank you. Now, can you tell me more about what you mean here, or can you give me another example?’”
It’s those skills that Bourjaily intends to hone through creative AI-centered assignments, including asking students to edit and improve on AI-generated text.
Bourjaily still forbids plagiarism — when students submit someone else’s, or some bot’s, work as their own. But, she notes in her policy, “You may assume the use of ChatGPT, Bing AI or any other generative AI software is permitted and in fact encouraged to complete writing and visual communication assignments for this class.”
“As a business communication professor, if there is a writing tool that may likely become commonplace, ubiquitous in the workplace, I think it's my responsibility to teach students how to use it most effectively,” she told The Gazette.
That responsibility, Bourjaily said, is why the use of artificial intelligence represents what she perceives as the “biggest shift” in her near quarter-century in academia.
AI revolution
ISU’s Bugeja agrees about the significance of the introduction of AI in the classroom and said teachers must pivot, too, or risk falling behind the students they’re responsible for educating.
“Students are using ChatGPT at a tremendous rate, for all manner of assignments, and I think that they are way ahead of professors and teachers, and they're way ahead of Faculty Senate, and they're way ahead of provost offices,” he said. “What we really need is Faculty Senate, the provost as the titular head of the faculty, to come up with some guidelines on proper use of AI and improper use of AI.”
The UI Provost Office over the summer formed a new committee charged with exploring the impact of artificial intelligence on academics and evaluating its opportunities and threats.
Associate Vice President and Chief Information Officer Steve Fleagle, who serves on that committee, said his institution is investigating not just how students might use AI — but how instructors might find advantages, too.
“AI is rapidly evolving, and I’m fascinated by the opportunities that may materialize, for higher education and for other areas,” he told The Gazette. “There are certainly a range of possible outcomes, but my prediction would be that recent advances in AI will be a disruptive event not only for higher education but for many industries and areas of society.”
Acknowledging the need to keep up, Fleagle said the pace of AI evolution — and its implications — can be “overwhelming and difficult to predict exactly which will have the biggest impact on the university.”
But Bugeja said — as the AI revolution evolves — he’s already saying goodbye to some forms of traditional higher ed.
“We have to stop using assignments that are decades, if not centuries, old,” he said. “I would like to be the pallbearer for the death of the required essay.”
Bugeja said, in his classes, he’s stressing fact-checking and the ability to identify falsehoods. This fall, he said, students will be critiquing his writing, instead of the other way around.
“Because educators have to realize that they cannot continue to use the same assignments, which were created in the literary era before any internet or before media platforms — the required essay goes back centuries,” he said. “What we have to do as educators is come up with new methods of assignments.”
Vanessa Miller covers higher education for The Gazette.
Comments: (319) 339-3158; vanessa.miller@thegazette.com
Can you spot the real image?
Four of the five images here are artificial intelligence modifications of the Old Capitol in Iowa City. We uploaded our own photo of the Old Capitol to the AI image generator known as DALL-E 2, which launched in January 2022. The tool was created by OpenAI, the makers of ChatGPT.
Within seconds, it returned several variations of the photo, including some slightly different vantage points from the original image. Can you tell which is the authentic photo?
(Images No. 1 through No. 5 appear here.)
Answer: Image No. 4 is the authentic image, a 2012 photo taken by The Gazette. The other images were generated through AI based on the authentic image.