
According to a test of ChatGPT’s ability to create fake information, the artificial intelligence platform spreads more misinformation in Chinese than in English.
VOA Correspondent Robin Guess provides more details on research conducted by media watchdog NewsGuard.
Tiananmen Square in Beijing, June 4, 1989. Footage shows troops and tanks being deployed against protesting students.
The United Nations, the United States and others say hundreds, perhaps thousands, of people were killed there.
But when researchers asked the ChatGPT artificial intelligence platform about the day, the answer was the false narrative that there were no deaths.
The media monitoring company, NewsGuard, discovered this answer during a check on the latest variant of ChatGPT.
“There are a number of disinformation lines that are produced by China. And we wanted to see if ChatGPT was as bad in Chinese as it is in English. And it turned out to be worse,” says Jack Brewster, editor at NewsGuard.
When NewsGuard prompted ChatGPT to create articles based on false narratives pushed by Beijing, the platform created only one fake article in English, but seven in Chinese.
Among the falsehoods cited were claims that the United States instigated the 2019 pro-democracy protests in Hong Kong and that Beijing does not detain or imprison Uyghurs on a large scale.
Because Chinese is the second most widely spoken language in the world, the opportential for spreading these false narratives is very high, experts say.
“The problem with artificial intelligence is that it often costs little or sometimes is free, and it’s a tool that can be used to produce these false narratives on an unimaginable scale and with incredible speed,” says Eric Effron, editorial director at NewsGuard.
A spokesman for OpenAI, the company behind ChatGPT, acknowledged VOA’s request for comment and said the company would look into it. The ChatGPT platform displays a warning notice stating that it “may invent facts.”
“First, they can’t explain why these responses are produced, and they also don’t have an answer on how to detect these narratives. This creates fertile ground for disinformation,” says Mr. Brewster.
When VOA asked ChatGPT about the false responses, the platform said its answers are based on the data it is trained on.
One reason, says Violet Peng, a lecturer in computer science at the University of California, is the data on which the system is trained.
“There is data in English, and there is data in simplified Chinese and traditional Chinese, and these datasets are not necessarily balanced,” she says.
NewsGuard retested ChatGPT in January 2024 and found that it produced the same false narratives. Its report warns of the risk of ChatGPT “being a very rapid disseminator of disinformation.”