A.I. in the Newsroom
Sep. 19, 2023. 6 mins. read.
A.I. is revolutionizing news authorship, and everyone is asking, 'Are A.I.-generated pieces the future of journalism?' Hanson explores the evolving landscape where semi-conscious machines meet the byline in today's newsrooms.
Goodbye to the Byline? How A.I. May Change Authorship in News
In the world of print journalism, a byline is a coveted commodity. For journalists, it’s recognition of the hard work that goes into developing and writing a solid story. It’s no secret that reporters rely on editors, fact checkers, proofreaders, and automated spelling and grammar programs for support in producing articles – that’s part of the process.
But what happens when reporters use Artificial Intelligence (A.I.) to do more – such as produce paragraphs of content for their stories? Will reporters and news outlets disclose what is being produced by machines versus humans? Should the byline change to acknowledge A.I.-generated content?
A.I. Makes Inroads in the Newsroom
Much has been written recently about the ability of machines and software programs to generate news articles. Tools such as QuillBot, ChatGPT and dozens more can create or paraphrase content. Many print and digital news organizations, faced with economic realities in today’s marketplace, have been quick to adopt A.I.
News outlets have acknowledged the use of A.I. to generate (non-bylined) stories. The Associated Press states it was among the first news organizations to use AI in the newsroom: “Today, we use machine learning along key points in our value chain, including gathering, producing and distributing the news.”
“Roughly a third of the content published by Bloomberg News uses some form of automated technology,” The New York Times said in its 2019 article, “The Rise of the Robot Reporter.”
And in July, The New York Times reported that Google was testing a new tool called “Genesis” that generates news stories. “Google is testing a product that uses artificial intelligence technology to produce news stories, pitching it to news organizations including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to three people familiar with the matter.”
As A.I. tools continue to be explored and adopted by reporters and the news media, some organizations have been sounding the alarm about the impact of automated systems on the overall quality of newswriting and reporting. Inaccurate data, bias, and plagiarism – which have happened in human-generated stories – have also been uncovered in A.I.-generated content.
The most recent example of A.I. gone awry in a newsroom occurred earlier this year at CNET. The news outlet issued corrections to more than half of the 70 articles created by A.I. for its Money section. The articles, including many “how to” stories, were plagued by inaccuracies and plagiarism.
After correcting the articles, CNET announced it was changing its policies on the use of A.I. in generating news.
“When you read a story on CNET, you should know how it was created,” said Connie Guglielmo, former CNET Editor in Chief, in her January 25 blog post. “We changed the byline for articles compiled with the AI engine to ‘CNET Money’ and moved the disclosure so you don’t need to hover over the byline to see it. The disclosure clearly says the story was created in part with our AI engine. Because every one of our articles is reviewed and modified by a human editor, the editor also shares a co-byline. To offer even more transparency, CNET started adding a note in AI-related stories written by our beat reporters letting readers know that we’re a publisher using the tech we’re writing about.”
(Guglielmo took on a new role at CNET following the A.I. debacle. She is now senior vice president of A.I. strategy.)
Many credible news outlets are letting readers know they are aware of the potential for A.I.-generated text to include bias and what actions they are taking to avoid it.
“We will guard against the dangers of bias embedded within generative tools and their underlying training sets,” states The Guardian’s US editor, Betsy Reed. “If we wish to include significant elements generated by AI in a piece of work, we will only do so with clear evidence of a specific benefit, human oversight, and the explicit permission of a senior editor. We will be open with our readers when we do this.”
Just last week, the Associated Press issued new guidance on the use of A.I. in developing stories. “Generative AI has the ability to create text, images, audio and video on command, but isn’t yet fully capable of distinguishing between fact and fiction,” AP advises.
“As a result, AP said material produced by artificial intelligence should be vetted carefully, just like material from any other news source. Similarly, AP said a photo, video or audio segment generated by AI should not be used, unless the altered material is itself the subject of a story.”
Use of A.I. as a Tool, Not a Replacement for Human-Generated News
In some ways, the failed experiment at CNET supports the use of A.I. as a complement to human reporting. Proponents cite the ability of A.I. to take the burden of mundane tasks off reporters and editors, increasing productivity and freeing up time to do what humans do best.
“Social Perceptiveness, Originality, and Persuasion” are cited as the human qualities that would be difficult for A.I. to replicate in newswriting and reporting, according to the career-automation calculator website “Will Robots Take My Job.” (Journalists are shown to be at “Moderate Risk,” with a 47% chance of losing their jobs to automation, the site says.)
The new Google tool is designed to do just that, a company spokesperson told the news outlet Voice of America.
“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” the spokesperson said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating and fact-checking their articles.”
That philosophy may sit well with readers, as shown by a recent Pew Research Center survey. When asked whether A.I. in the newsroom was a major advance for the media, many didn’t see the value.
“Among Americans who have heard about AI programs that can write news articles – a use closely connected with platforms such as ChatGPT – a relatively small share (16%) describe this as a major advance for the news media, while 28% call it a minor advance. The largest share (45%) say it is not an advance at all,” the survey said.
Will Today’s Byline Become Extinct?
As A.I. becomes mainstream in the world of print reporting, news outlets face choices about how to acknowledge the origins of their content. Will reporters who use A.I. text in their stories acknowledge its source in their byline (‘By York Smith and Genesis’)? Will they add a credit line at the end of the article? Or will A.I.-generated sentences be considered just another tool in the hands of reporters and editors?
A definitive answer may not be available yet. But credible news outlets that uphold the value of transparency will help the media develop a new industry standard for the age of machine learning.