
With the recent surge of companies racing to release their own beta-version AI chatbots, a few have produced some unexpected surprises. Some bots designed to speak like certain people end up completely out of character, some aren’t very good at remembering message history and some come up with very awkwardly worded responses. Others, however, can speak very fluent ‘human.’ The only problem is that much of what they say is completely inaccurate. One of the most recently released beta-version AI chatbots is Google’s Bard, which appears to have been coded as a compulsive liar.

The problem was addressed in the New York Times article ‘When A.I. Chatbots Hallucinate,’ in which reporters tested the legitimacy of answers given by several conversational AIs, including Google Bard. Bard speaks eloquently and clearly, but what it says is only about half true, at best. It is entertaining, but the problem it poses goes beyond misremembering a New York Times article that doesn’t exist or inventing stories about historical figures who never actually met.

More than ChatGPT and other well-known AI chatbots, Google’s Bard poses a very real risk of misinformation on a grand scale, largely because users already trust similar-looking tools. One of the oldest and most well-known examples is Siri, the artificial assistant included on Apple devices since 2011. Since then, ‘AI assistants’ have proliferated, including Siri, Cortana, Google Assistant and Alexa. The resemblance between these assistants and chatbots such as Bard invites users to treat Bard’s answers as reliable, even though chatbots like Bard were never advertised to fulfill that function.

Another thing to consider is that Google has already built assistant software resembling artificial intelligence that draws on its parent search engine to find answers for the user. This has become second nature to many who are used to having such devices in their homes as a way of finding information quickly and accurately. Given that Bard is a product of the same company, one has to wonder why Google didn’t link it to the search engine in a similar fashion. Still, the hope remains that users can have fun with the bot without taking anything it says too seriously. In that case, Bard fulfills its purpose perfectly.