As artificial intelligence proliferates, so do concerns about AI-generated misinformation. AI systems like ChatGPT learn from the text and data they're fed. If the input data is bad, so is the output—hence the aphorism "garbage in, garbage out."
The Washington Post recently published a report analyzing the sources of Google's C4 data set, a large collection of text used to train many AI models. There was bad news and some supposedly good news. The bad news: many sources in Google's data set ranked low on trustworthiness scales and promoted conspiracy theories, feeding misinformation and propaganda into AI models.