ChatGPT

ChatGPT (Chat Generative Pre-trained Transformer) is a chatbot released by OpenAI in November 2022. In addition to holding a conversation, ChatGPT uses natural-language prompts to perform complex tasks such as writing computer code and stories, composing music, and answering questions. While rudimentary chatbots have existed for several decades, ChatGPT represents a step forward in AI-driven computing: users can rate the responses it gives to their prompts, which allows the model that generates those responses to be refined further with continued use.

GPT models are trained to generate human-like text by predicting the next word in a sequence based on the words that come before it. ChatGPT is specifically designed to be used in chatbots and conversational systems, and it is trained on a large dataset of human conversations to learn how to generate appropriate responses in a variety of contexts.
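As a rough, purely illustrative sketch of next-word prediction, the short Python example below builds a table of word-frequency counts from a tiny made-up sample text and always picks the most frequent next word. Real GPT models use large neural networks and far richer context, but the underlying objective is the same.

# Toy next-word predictor: count which word most often follows each word
# in a small made-up sample text. Real GPT models use large neural networks,
# but the basic objective -- predict the next token from the preceding
# context -- is the same.
from collections import Counter, defaultdict

sample_text = (
    "chatgpt is a chatbot . chatgpt is trained on text . "
    "the model predicts the next word in a sequence ."
)

follow_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation one predicted word at a time.
word = "chatgpt"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))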

This webpage aims to be a resource guide to the evolving conversations around the use of ChatGPT. Resources (scroll down) will be updated as the landscape changes with new releases of ChatGPT. This webpage also includes pedagogical approaches to help instructors redesign assignments, addressing both the opportunities ChatGPT offers to enhance the educational experience and the challenges it poses. The Faculty Center hopes to engage the campus community with forums on the subject in the near future.

Here are some quotes from Baruch faculty in the Management Department:

“ChatGPT is good at definitions, and produces very well written boilerplate / generic text. It is not as good at specifics, but if you prompt it correctly it can weave in the specifics.”

“Because of the way it’s trained on documents, ChatGPT is good at repeating basic knowledge (completing the sentence) but struggles with multi-step reasoning and particularly with common misconceptions (it is most likely, in fact, to repeat the common misconception as fact).”

From Furman University Philosophy Professor Darren Hick, referenced in The New York Times “Tech Fix” column:

“To someone familiar with the material, it raised any number of flags. . . This is good news for upper-level courses in philosophy, where the material is pretty complex and obscure. But for freshman-level classes (to say nothing of assignments in other disciplines, where one might be asked to explain the dominant themes of Moby Dick, or the causes of the war in Ukraine — both prompts I tested), this is a game-changer.”

A faculty member in Accounting tried a range of questions in ChatGPT and concluded that it could provide

  • Explanations – “Explain lease accounting to a 5 year old”
  • Questions
    • Multiple Choice, True/False, Fill in the Blank, Short answer
    • Some numerical questions
  • Open Ended Writing
    • Expository: Write an essay explaining the impact of machine learning on auditing
    • Creative: Write a country song about using ChatGPT in Accounting
  • Coding
    • Write Python code to calculate abnormal returns (a short sketch follows this list)
    • Find the bug in the following piece of code
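
For reference, here is a minimal sketch of the kind of code the abnormal-returns prompt asks for, using the market model (abnormal return = actual return minus the return predicted by an OLS fit on the market). The return series below are made-up placeholders, not real data.

# Market-model abnormal returns: abnormal = actual - (alpha + beta * market).
# The daily return series below are hypothetical placeholders.
import numpy as np
import pandas as pd

stock = pd.Series([0.012, -0.004, 0.007, 0.003, -0.010, 0.015])
market = pd.Series([0.010, -0.002, 0.005, 0.004, -0.008, 0.011])

# Fit the market model by ordinary least squares (degree-1 polynomial fit).
beta, alpha = np.polyfit(market, stock, 1)

# Abnormal return is the residual: actual return minus model-expected return.
expected = alpha + beta * market
abnormal = stock - expected

print(abnormal.round(4))
print("Cumulative abnormal return:", round(abnormal.sum(), 4))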

When he tried a mix of exam questions, some from the test bank and some custom written, ChatGPT answered over 84% of the test-bank questions correctly, versus about 48% of the custom questions.

Limitations of ChatGPT

1. Repeated Responses

In the simplest example, ChatGPT repeatedly gives the same response to the same question. University of North Carolina Wilmington professor Ray Pastore explained in a YouTube video that he gave ChatGPT the same prompt multiple times over multiple days and always got the exact same answer. This gives instructors a tool to combat cheating: simply by entering their own essay prompts into ChatGPT, they will know what to look for. This shortcoming also means multiple students using ChatGPT will turn in very similar assignments. (“ChatGPT for Educators – K12 and Higher Education,” December 20, 2022.)
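
Because multiple ChatGPT-generated answers to the same prompt can end up very similar, a quick pairwise similarity check can flag submissions worth a closer look. The sketch below uses only Python's standard library; the submissions are hypothetical placeholders, and a high score is a cue for review, not proof of anything.

# Pairwise similarity check for near-duplicate submissions (standard library only).
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical submissions -- in practice, read these from student files.
submissions = {
    "student_a": "The causes of the war in Ukraine are complex and multifaceted.",
    "student_b": "The causes of the war in Ukraine are complex and have many facets.",
    "student_c": "Historians point to several long-running tensions in the region.",
}

for (name_1, text_1), (name_2, text_2) in combinations(submissions.items(), 2):
    ratio = SequenceMatcher(None, text_1.lower(), text_2.lower()).ratio()
    print(f"{name_1} vs {name_2}: similarity {ratio:.2f}")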

2. Citations

ChatGPT also struggles to cite sources, or entirely fabricates them. Since it works by piecing together related words and phrases, rather than searching the internet for answers in the way a human would, it is never pulling information from any one source and cannot create accurate citations. Instead, when it’s asked to provide citations, it seems to fabricate them entirely.

A data scientist in Switzerland told NPR that she made up a fake physical phenomenon to test ChatGPT, and the application responded with surprisingly plausible sounding information, including sources. However, the information and the sources were fabricated: Names of real physics experts were used, but the publications cited did not exist. (“A new AI chatbot might do your homework for you. But it’s still not an A+ student,” December 19, 2022.)
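
One practical response is to spot-check suspicious citations against a bibliographic index. The sketch below queries the public Crossref REST API (a third-party service, mentioned here only as one option) for the closest matches to a citation string; it assumes the requests package is installed, and the citation shown is a deliberately fake placeholder.

# Spot-check whether a citation resembles any indexed publication via Crossref.
# No match is not definitive proof a source is fabricated, but it is a cue to
# verify the reference by hand.
import requests

def crossref_candidates(citation, rows=3):
    """Return the titles of the closest bibliographic matches Crossref finds."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# Deliberately fake citation used only to illustrate the check.
suspect = "Doe, J. (2020). A phenomenon that does not exist. Journal of Imaginary Physics, 12(3)."
for title in crossref_candidates(suspect):
    print("Closest indexed match:", title)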

3. Current Events

The text examples that make up ChatGPT’s dataset only go through 2021, so questions about current events will return inaccurate, generic, or outdated answers. For example, if prompted to answer a question about the causes of the Ukraine War, ChatGPT would generate an answer based upon a dataset that predates the current war which started in 2022.

If ChatGPT thinks that a user is asking about something that occurred in 2022 or later, it will return a message saying its “knowledge cutoff is 2021,” and it will sometimes give this error even if the question is about an event that happened in or before 2021.

4. Distinct Writing Style

Writing generated by ChatGPT has a distinct tone and style that is often described as “upspeaking” or “stilted prose.” Some people describe it as “writing too well,” while others call it “bland” or “generic.”

A Baruch Management Department faculty member notes that if the communication style of a student in class or in previous assignments is quite different from what is turned in on a written assignment, it can be “a natural red flag.”

Another observation is that while ChatGPT’s output is not always accurate, its writing style appears quite confident.

ChatGPT also follows certain patterns, such as rephrasing the question at the beginning and signposting throughout the response, e.g. opening with “The causes of the war in Ukraine are…” Though we may encourage students to respond to our questions and prompts in ways that incorporate some of their language, ChatGPT tends to produce stilted repetitions of the questions asked of it.
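
As a purely illustrative heuristic (not a reliable detector), one could measure how much of a prompt's wording is echoed in the opening sentence of a response; the prompt and response below are made-up examples.

# Illustrative heuristic only: share of a response's opening words that also
# appear in the prompt. High overlap reflects the "rephrase the question"
# pattern described above, but it proves nothing on its own.
import string

def words_in(text):
    return {w.strip(string.punctuation) for w in text.lower().split()}

def prompt_echo_share(prompt, response):
    opening = response.split(".")[0]          # first sentence of the response
    opening_words = words_in(opening)
    if not opening_words:
        return 0.0
    return len(words_in(prompt) & opening_words) / len(opening_words)

prompt = "What are the causes of the war in Ukraine?"
response = "The causes of the war in Ukraine are complex and rooted in history. They include..."
print(f"Share of opening words echoed from the prompt: {prompt_echo_share(prompt, response):.2f}")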

5. Certain Math Problems & Images

While ChatGPT can solve basic math problems correctly, one Baruch faculty member noted that it struggles with multi-step problems:

"For example, in one of my questions, it reasoned that 5 AM to 10 PM was a 15-hour period. In trying to calculate process capability, it decided the range between USL=2.2 and LSL=2.0 was somehow 0.22 rather than 0.2.”

Another Baruch faculty member notes that “ChatGPT cannot answer questions involving images or separate datasets, although ChatGPT can read a limited amount of tabular information. It may struggle with complex questions involving complicated calculations or the integration of many concepts.”

Following are some actions faculty may take. Some of them, such as reviewing your test questions, might be part of your regular course preparation. Others, such as a critical review and revision of your academic integrity statement, might be something you do occasionally. If you want to incorporate many of these suggestions this semester and can do so, great! It’s also good to try a few of them, see how they go, and plan to do more for later semesters.

Emphasize and Clarify Your Academic Integrity Statement
It’s always a good idea to look at how you discuss academic integrity with fresh eyes to make sure it’s clearly communicating your approach and expectations. You can add some language that outlines your expectations with ChatGPT and other technologies.

Discuss Why Learning Matters
It might sound silly, but you should ask yourself the following questions: Why do you find students using ChatGPT problematic? Why might you consider it a violation of academic integrity?

Use ChatGPT to inform the design of your assignments and assessments
A range of reasons and options opens up to you after running your work through ChatGPT. By letting students know you have done this, you might deter some of them from using it.

Encourage students to practice using AI
As the technology evolves, we hold a responsibility to teach students how to engage productively and responsibly with it, which will include using it and reflecting on its utility and limits.

Make sure to teach your students to acknowledge their use of AI such as ChatGPT in footnotes or references. While no universally accepted way to cite ChatGPT exists yet, we should do our best to model signposting our use of AI-generated content. This practice may have the hidden benefit of normalizing ChatGPT as “yet another tech” with its own pros and cons rather than a source of temptation for potential plagiarists.

This is a good moment to take a look at your course learning goals, activities, and assessments. If you have flexibility to change your syllabus, reconsider whether your current approach is still the best choice. It’s easy over time to get comfortable using a particular assignment or set of test questions, and revising a curriculum plan can be a lot of work; sometimes we need to learn a new technological skill or tool. Yet perhaps there is another way to reach that learning goal, or an opportunity to update the mode of an assignment or its instructions.

Consider where and how the work is done
There are a growing number of suggestions for faculty to consider. Here are a few:

  • Scaffolding longer assignments, so that students turn in an outline and then a rough draft before the final draft, does not eliminate cheating but requires students to be consistent across multiple submissions in multiple formats.
  • Incorporating an annotated bibliography as one of the deliverables would also require a student to conduct a level of research that ChatGPT isn’t currently capable of doing. If not using a scaffolded approach, instructors could also have students submit a list of sources in advance and require that those sources be incorporated in their final draft, or simply require a list of citations along with the final and then check those sources for validity.
  • A suggestion we’ve seen frequently since the introduction of ChatGPT is to reduce or eliminate out-of-class graded assignments and increase in-class work. Another idea, if you can allot the time in your class schedule, is to return to (or continue using) paper-and-pencil tests and exercises in class.

Technological Approaches
Even though ChatGPT is relatively new, there is already a rush to create ChatGPT detector software. This technology has limited impact, however, which is why we recommend an approach that focuses on pedagogy rather than one that relies on technology.

As technology evolves, many of the norms and assumptions about plagiarism, revision, and cheating require interrogation. Some questions that come to mind:

  • What’s the line for using ChatGPT before it’s plagiarism or cheating?
  • Can you use it for ideas for your paper as long as you do the research and write it yourself?
  • Can you use it to break through a bout of writer’s block?

It’s important to remember that there were concerns with the introduction of calculators, Wikipedia, Google Search, laptops, and Grammarly. We have largely navigated these introductions as a society in a process that has required labor and reflection, but arguably enriched the way we think about the evolving potential and limitations of technology.

Like ChatGPT, Wikipedia is valuable as a way to engage with and point to information, but it does not work as a source itself. And while plagiarism from Wikipedia is easier to catch than content generated by ChatGPT, we hope that the information in this document about ChatGPT’s limitations and potential uses will provide a framework for thinking critically about its impact on teaching and learning.

Our teaching environment and context are always evolving. New technologies have at some moments been problematic and at others have opened up opportunities for deep learning and innovation. ChatGPT is another such technology, and it will take some time for us to get acclimated to its broad introduction.

We thank and acknowledge the Center for Teaching & Learning at Baruch College for this content.

Contributors:  
The following people were part of the conversations that helped develop this document and/or have pedagogical ideas included. As in any good process where there is healthy debate, the resulting document does not necessarily reflect the opinions of everyone who was part of the discussion. Yet, we think it’s important to acknowledge that this issue is important to many people in our community and many people are engaging in its exploration.

Lauren Aydinliyim, Stefan Bathe, Shiraz Biggie, Donal Byard, Christopher Campbell, Lukasz Chelminski, Raquel Benbunan-Fich, Julia Goldstein, Seth Graves, Maria Halbinger, Diana Hamilton, Catherine Kawalek, Romi Kher, Marios Koufaris, Arthur Lewin, Brandon Lock, Alex Mills, Kannan Mohan, Scott Newbert, Harmony Osei, Glenn Petersen, Rachel Rhys, Allison Lehr Samuels, Christopher Silsby, Dennis Slavin, Craig Stone, Pamela Thielman, Katherine Tsan, John Wahlert

Editors: Allison Lehr Samuels, Craig Stone