ChatGPT is a relatively new AI system from OpenAI, originally developed as a language model designed to engage in conversation. It can answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate questions. By recognizing and understanding human conversation patterns, ChatGPT responds to questions in a realistic way that doesn’t look much different from a response you might get from a colleague. The system bridges the gap between human and computer communication, allowing people to obtain and relay information in a less systematic manner, and it has the potential to eliminate the need for Google. While ChatGPT seems innocuous, it poses a real threat to students who pay copious amounts of money to attend school and write academic papers to fulfill their educational requirements.
In an everyday sense, ChatGPT can be incredibly practical and has often been compared to Alexa or Siri in its ability to make lists and complete certain tasks like any virtual assistant might. People have even used ChatGPT for comedic purposes, asking it to create poetry or write songs. However, while the primary use of ChatGPT has been to ask it questions you might otherwise search on Google, people can also insert essay prompts and ask ChatGPT to write entire essays on any given topic. This doesn’t seem inherently negative; however, it can have serious implications for academia. Not only does this affect students in higher education, it is incredibly worrisome for students in elementary, middle and high school, who are missing out on developing the important skills that come with thinking critically and writing thoughtful papers.
There is a fine line between an AI system that can answer questions in the blink of an eye and one that can conjure up entire dissertations. One must not overlook the value of the hard work students do to explain philosophical concepts in the essays they write. In New York, schools have already acknowledged the dangers of ChatGPT and taken steps to restrict access to it. The tool does not benefit students in the long run, and it places a higher burden on educators, who must review content carefully, analyzing every sentence to determine whether it was written by a human or by a computer.
New AI like ChatGPT takes the hard work out of writing an academic paper, and there have been instances where the program gives a seemingly convincing and intelligent answer yet fails to provide accurate facts or data to support it. OpenAI CEO Sam Altman acknowledged as much in a statement on Twitter: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.” ChatGPT’s deceptively intelligent answers mean that fact checking and greater attentiveness to its claims are essential. This further contributes to another pressing issue: the spread of misinformation. Since ChatGPT pulls from a variety of sources, it could incorporate information from an untrustworthy site into its responses, and unless that information is fact checked, it is easy for anyone to take it at face value. It is easier than ever before to spread misinformation, and ChatGPT is not helping to eliminate the problem.
Even if AI such as ChatGPT isn’t yet accurate enough to compete with actual writers, it remains an impressive and rapidly improving tool. The world of academia will soon have to figure out how to respond to this technology as it becomes more advanced, and how to protect academic integrity in the process.