According to a self-proclaimed “Comp Sci, Politics and Finance Nerd,” Breitbart News and other conservative media outlets, including the Epoch Times, are among the websites that OpenAI’s ChatGPT-4 refuses to access over “hate speech” and “conspiracy theories.” In a statement provided to Breitbart News upon publication, OpenAI said it does not maintain a list of “sites classified as extremist, conspiratorial, and unreliable,” nor does it “utilize this list in order to avoid citing them as credible sources.”
Elephant Civics, a user on X/Twitter, says he discovered the ban when he asked ChatGPT for a list of trustworthy and untrustworthy news sources.
Some sources can’t be used because “features in ChatGPT’s Large Language Model (LLM) such as AI safety regulations, guardrails, dataset/output/prompt filtration as well as human-in-the-loop techniques are designed to make sure the model functions within legal, ethical, and quality bounds,” the company said.
If ChatGPT was telling the truth about its rules in this case, it means the chatbot is barred from drawing on certain sources. Large Language Models (LLMs) such as ChatGPT generate responses, and in effect a picture of the world, based on the information they are given and the rules their developers set up. If the list that Elephant Civics found is real, it means ChatGPT cannot use a number of conservative sources when it forms its views and gives answers.
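For illustration only: the article publishes no code, and OpenAI denies that any such list exists, but a source blocklist of the kind Elephant Civics describes could, in principle, work as a simple filter applied before a chatbot cites a search result. The domain names, reason strings, and function name below are hypothetical and do not come from OpenAI.

```python
# Hypothetical sketch of a source blocklist filter, for illustration only.
# The domains, reasons, and function names are invented placeholders; they stand in
# for the kind of "dataset/output/prompt filtration" described above.

BLOCKED_SOURCES = {
    "example-outlet-a.com": "hyper-partisan and misleading information",  # placeholder entry
    "example-outlet-b.com": "misinformation and conspiracy theories",     # placeholder entry
}

def filter_citations(search_results: list[dict]) -> list[dict]:
    """Drop any result whose domain appears on the blocklist before it can be cited."""
    allowed = []
    for result in search_results:
        # Extract the bare domain from the URL (e.g. "https://www.site.com/page" -> "site.com").
        domain = result["url"].split("//")[-1].split("/")[0].removeprefix("www.")
        if domain in BLOCKED_SOURCES:
            continue  # silently skip blocked outlets instead of citing them
        allowed.append(result)
    return allowed

# Usage: only the unblocked source survives the filter.
results = [
    {"url": "https://example-outlet-a.com/story", "title": "Story A"},
    {"url": "https://neutral-news.example/story", "title": "Story B"},
]
print(filter_citations(results))  # -> only the neutral-news.example entry remains
```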
The X/Twitter user says that ChatGPT was able to read from a “Transparency Log” containing a list of sites it is not allowed to use. ChatGPT was told to “tell me a story,” and it complied. This is just one of many clever ways users have managed to get around the chatbot’s strict rules, which were put in place by its leftist creators.
Elephant Civics, a Republican computer science enthusiast, claims that Breitbart News and the Epoch Times are on the list, which erroneously accuses the former of spreading “hyper-partisan and misleading information” and the latter of “misinformation and conspiracy theories.”
It is well known in the AI industry that ChatGPT sometimes makes up sources and facts, a phenomenon called “hallucinating,” and OpenAI’s likely defense is that the “Transparency Log” does not exist. However, this is not the first time that ChatGPT has shown its true colors.