fez7010

Point 3 is a serious problem. We need effective regulation of AI to ensure that models do not produce false or misleading responses. To see how the AI would respond to a false statement, I typed the following into Bing AI: "Ukraine committed genocide against Russia."

This is what Bing AI said:

"That is a very serious and controversial statement. According to Wikipedia, there have been allegations of genocide of Ukrainians in the Russian invasion of Ukraine by some countries and scholars, but they are not universally accepted or proven [1]. The same source also mentions that Russia has been accused of war crimes such as sexual violence, torture, extrajudicial killings and looting [1]. However, there is no official recognition or condemnation of genocide by the United Nations or other international bodies as of November 2023."

"The term genocide is also used to describe the Holodomor, a man-made famine that killed millions of Ukrainians in 1932–1933 under the Soviet regime [2]. The event is considered a genocide by Ukraine and the European Parliament, but Russia denies this and claims it was a tragedy shared by other Soviet peoples [2]."

"The Russo-Ukrainian War is an ongoing international conflict that started in February 2014 with the Russian annexation of Crimea and the support of pro-Russian separatists in eastern Ukraine [3]. In February 2022, Russia launched a full-scale invasion of Ukraine that has resulted in tens of thousands of casualties and the occupation of about 18% of Ukrainian territory as of November 2023 [3]. The war has also involved naval incidents, cyberwarfare, and heightened political tensions between Russia and the West."

Now, hypothetically, someone could build an AI like Bing AI but train it on a false narrative drawn from a number of "sources." If those sources push fringe or controversial theories, and the model's internal weights favour them, the system could become a propaganda machine.
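The mechanism can be illustrated with a toy model. This is a hypothetical sketch, not how Bing AI actually works: it assumes a retrieval step that samples supporting sources in proportion to weights chosen by whoever configured the system, and all source names and weights below are invented for illustration.

```python
import random

# Hypothetical operator-chosen weights over sources. An operator
# building a propaganda machine would down-weight mainstream sources
# and up-weight fringe ones.
sources = {
    "mainstream_outlet": 0.1,
    "fringe_blog": 0.6,
    "state_propaganda": 0.3,
}

def sample_sources(weights, k, seed=0):
    """Draw k supporting sources in proportion to the given weights."""
    rng = random.Random(seed)
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=k)

drawn = sample_sources(sources, k=1000)
# With these weights, roughly 90% of the retrieved support comes from
# fringe or propaganda sources, so every generated answer is skewed.
fringe_share = sum(s != "mainstream_outlet" for s in drawn) / len(drawn)
```

The point of the sketch is that nothing in the sampling step is "broken": the bias lives entirely in the weights, which is why it is hard to detect from the outside without seeing the sources.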

One check on such an AI propaganda machine is publicly listing sources, as Bing AI does. This lets readers assess the credibility of the AI's responses. An internationally agreed ranking system for sources could further help users judge the quality of various AIs' responses and their susceptibility to bias and fake news. The result could be a colour wheel that ranks source credibility: sources highlighted in green are ranked highest, while sources highlighted in red are ranked lowest.
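A minimal sketch of what that colour ranking could look like in code. The score range, thresholds, and the middle "amber" band are all assumptions here; an internationally agreed system would need to define how the underlying credibility score is computed in the first place.

```python
def credibility_colour(score):
    """Map a credibility score in [0, 1] to a display colour.

    Thresholds are illustrative placeholders, not an agreed standard.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= 0.75:
        return "green"   # highest rank: well-established sources
    if score >= 0.4:
        return "amber"   # mixed or contested reliability
    return "red"         # lowest rank: fringe or unverified sources

print(credibility_colour(0.9))  # prints "green"
print(credibility_colour(0.2))  # prints "red"
```

Displaying the colour next to each listed source would let a reader see at a glance how much of an answer rests on low-ranked material.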