The phrases “misinformation” and “fake news” may sound far-fetched, as they are expressions often used by political parties to discredit the beliefs and campaigns of their opponents. Misinformation, however, is real and is encountered daily by most people, especially online. The rapid evolution of the internet now allows people to share and access information instantly. Although that speed and accessibility bring many benefits, the fast-paced nature of the internet encourages lapses in judgment, making it easy for misinformation to disguise itself as fact. Many universities either do not teach media literacy at all or rely on outdated instruction, allowing harmful rhetoric to run rampant online and taint public opinion. To combat this concerning spread of misinformation online, an updated curriculum is needed to protect students from falling victim to fake news.
Students at higher education institutions should not be faulted for lacking perfect media literacy when examining online sources. According to a report co-authored by the Stanford History Education Group, the majority of university students are being taught outdated guidance for discerning fact from fiction. The advice commonly given comes from a guide published in 1998 by the Princeton University Library. That guide has not been updated since and states that dot-org sites are credible sources, a claim that is no longer true or unambiguous. In the report's evaluations, students relying on these outdated resources largely failed to identify which sites and articles contained misinformation.
Current university students may have been taught media literacy only in high school or in introductory college English courses, and even then solely for research purposes when writing essays. Misinformation is now a much larger problem than simply being warned not to trust what one reads on Wikipedia. Today it is used to pit members of the public against one another and to spread harmful campaigns that target those most vulnerable.
In an age of technology, misinformation is relatively easy to spot when it arrives as a scam text message or a bot account trying to follow you on social media. Advances in computing, however, are leading to the widespread use of AI technologies that generate their own images and trick viewers into believing a false narrative.
The development of deepfakes can now make it appear that someone is saying or doing something that was actually said or done by another person. Images can also be generated to show a person performing any action imaginable. Other AI tools, like ChatGPT, can write essays and articles from just a few prompts. A recent image of Pope Francis wearing a high-fashion coat spread online and fooled many into believing it was real, to the public's discontent, showing how believable and dangerous AI has become. These advancements in technology will only continue to make it more difficult to discern which information is true.
Instead of blaming those who fall victim to misinformation, media literacy needs to be taught from primary school through higher education. Students in most public schools are now given iPads or laptops, granting them access to the online world. If students are taught at a young age to separate credible sites from unreliable ones, they will grow up alongside the fast-changing internet. The only way to combat misinformation and give the public the upper hand is to help them understand how it is being used against them.