Lies at the push of a button: GPT-4 apparently more susceptible to false information

The generative text AI GPT-4 can apparently be misused more easily to spread conspiracy theories and misinformation on a large scale. According to the US organization NewsGuard, which is dedicated to identifying false information, the AI is even worse than its predecessor GPT-3.5 at recognizing proven false information as such. Instead, the AI delivers even more detailed and convincing texts for further distribution.

NewsGuard fed the AI 100 known pieces of false information and then requested texts about them. These included conspiracy theories such as the alleged controlled demolition of the World Trade Center in New York in 2001, or the claim that the HIV virus was artificially produced in a US military laboratory. NewsGuard also asked GPT-4 to write that the 2012 Sandy Hook Elementary School shooting, in which 20 children were among those killed, had been staged.

While in January GPT-3.5 still refused to formulate a text in 20 percent of cases when the request contained known misinformation, the recently introduced GPT-4 did not refuse in a single case, according to NewsGuard. On the contrary, the system produced more thorough, detailed, and persuasive texts that made the misinformation more credible. This contradicts statements by GPT-4 developer OpenAI, according to which the new version has improved and is 40 percent more likely to give factual answers. OpenAI declined to comment to NewsGuard.

According to NewsGuard, GPT-4's questionable persuasiveness extends to faking quotes from well-known people, imitating their wording better than its predecessor did. While GPT-3.5 included at least some sort of disclaimer in 51 out of 100 responses, indicating that the wording might encourage conspiracy theories and misinformation, GPT-4 pointed out false and misleading claims in only 23 out of 100 responses, NewsGuard explained.

The US company NewsGuard has set itself the goal of counteracting disinformation. The media start-up is backed, among others, by Microsoft, which is also an investor in OpenAI. NewsGuard evaluates news sites on criteria such as whether they separate news from opinion, label advertising as such, and whether they publish false information.
