Despite widespread speculation that AI chatbots could replace journalists after the launch of ChatGPT, human reporters have so far held their ground. One of the main problems with AI chatbots is that they often make up facts. This might explain why AI-written news has yet to take hold – but Google is apparently working on it.
According to a new report from The New York Times, Google is testing a product that uses AI technology to produce news stories. The tool, known internally by the code name Genesis, can take in information about current events and generate news content based on it. Google has already pitched the tool to several news organisations, including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to people familiar with the matter.
Some of the people who saw Google’s pitch reportedly described it as unsettling, saying that Google did not seem to appreciate the effort that went into producing accurate and artful stories.
Google apparently wants to position the tool as a way of steering the media industry away from the pitfalls of generative AI, presenting it as a responsible technology that would automate some tasks and free up time for others.
The pitfalls of generative AI currently include its propensity to go off the rails, producing nonsensical or inappropriate responses. This was first evident when Bing Chat came out and then when Snapchat’s My AI debuted. However, the bigger concern is the tendency of AI chatbots to confuse or invent facts – an issue that none of the existing chatbots has been able to overcome, and one that could worsen the fake news problem.
However, Google seems to be trying a different approach. The company wants to work with news publishers to develop and refine the tool. Jenn Crider, a Google spokeswoman, said in a statement that “in partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide A.I.-enabled tools to help their journalists with their work.”
Crider added that the tool is not meant to and cannot replace journalists. Instead, it’s intended to assist them in reporting, writing, and fact-checking their articles.
Nonetheless, there is an irony in Google pitching its AI tool to news publishers. Publishers and other content creators have already criticised Google and other major AI companies for using decades of their articles and posts to train these AI systems without compensating them. Moreover, Google’s own publicly available chatbot, Bard, has been known to make false factual claims and cite unreliable sources.
The tool could also be dangerous. If it becomes popular and its output is not carefully monitored and verified, it could undermine the credibility not only of the tool itself but also of the news organisations that use it.