AI is making it harder to differentiate between real journalism and fake news. How will the relationship between AI and journalism affect the ways we consume information?
The advanced capabilities of artificial intelligence are adding to continued concern about misinformation, especially as the relationship between AI and journalism grows closer.
The potential for chatbots to disseminate convincing but unverified information makes it harder to differentiate between real journalism and fake news.
People want to know that what they are reading in the media presents facts and researched accounts of events.
It’s more important now than ever before for media outlets to keep developing ways to ensure and demonstrate their trustworthiness.
Public concern about fake news
Various polls highlight the public’s concern about fake news and their desire to understand which sources they can trust.
In a 2022 YouGov survey commissioned by the digital subscription app Readly, two-thirds of British people said they were worried about the spread of fake news, making the UK one of the three most concerned countries in Europe.
46% said they believed they were exposed to fake news daily, and nearly three-quarters expected to encounter increasing amounts of it over the next few years. And that was before the launch of ChatGPT.
ChatGPT and the power of AI fakes
When it comes to written work, there is currently much discussion in expert and artistic communities about how to detect which content is AI-generated and which is human-generated.
ChatGPT is proving itself to be an accomplished content creator, driving several UK universities to ban it and implement detectors to identify its use by students.
Singer-songwriter Nick Cave is not impressed, calling attempts by ChatGPT to write in his style “a grotesque mockery of what it is to be human”.
Even AI’s creators are concerned
The speed of AI’s development has put the wind up its very creators: the so-called “Godfather of AI”, Geoffrey Hinton, left Google to voice his concerns over the dangers of misinformation.
His warning that chatbots could be exploited by bad actors is supported by Toby Walsh, chief scientist at the University of New South Wales AI Institute, who said: “When it comes to any digital data you see – audio or video – you have to entertain the idea that someone has spoofed it.”
What is journalism’s responsibility in the face of the rising use of AI?
For the general public, misinformation, whether AI- or human-generated, can be hard to spot.
While the manipulation of information is no new phenomenon, social media has undoubtedly increased the proliferation of disinformation, conspiracy theories and fake news.
The viral nature of Facebook, Twitter and other platforms has enabled unsubstantiated stories to travel at an alarming rate.
The increasing number of citizen journalists and online sources puts pressure on traditional mainstream media to differentiate themselves and to demonstrate that their sources are 100% verified and authenticated.
It’s important that the public can recognise media outlets that provide trustworthy journalism. Newsback’s partner, the Journalism Trust Initiative (JTI) from Reporters Without Borders (RSF), has developed a standard that enables media outlets to distinguish themselves from the myriad other sources sharing information on the internet and social media.
This entails a three-step process:
- An internal self-assessment of journalistic policies
- Public disclosure of that assessment in a transparency report
- An independent audit from a licensed, certified body

The idea is that readers will recognise the JTI stamp and know that the outlet adheres to this standard.
AI and Journalism: Technology is not just the problem
As well as being able to identify trustworthy media, people want it to act as the first line of defence against fake news. A survey from the newspaper industry marketing body, Newsworks, found that almost 70% of respondents rely on journalists to lead the fight against misinformation about climate change.
In response, the number of journalists dedicated to checking and reporting on misinformation has risen sharply in recent years.
Most large news organisations are also investing in increasingly sophisticated fact-checking services to verify every story.
The BBC, for example, implements fact-checking tools and promotes discourse around the globe on how media organisations can use technology to ensure the authenticity of content and detect misinformation.
It strongly endorsed the importance of provenance at its March 2023 global Trust In News Forum, with experts discussing tech tools to prevent the manipulation of content and authenticate the origin of work.
Tracking the source of information
Tracking the source of information and assessing its authenticity is paramount. Our technology at Newsback can detect editing, deletions, distortion, or misuse that may have occurred during a piece of content’s dissemination.
It is also possible to identify content that is pulled from multiple sources and may have been manipulated along the way. Journalists are often required to pull stories together at speed to satisfy a 24/7 news agenda, so the ability to find the source and journey of a piece of content quickly is critical.
Media outlets can also receive alerts if their content is re-used or altered in any way.
Separating fact from fiction
Separating fact from fiction has never been more difficult, given the increasing level of sophistication from those spreading misinformation. The speed at which AI has developed is unnerving and adds to the problem.
This, coupled with the role social media plays in spreading fake news, paints a negative picture of technology. But it’s important to remember that while technology has enabled the viral spread of misinformation, it will also play a vital role in bringing it to a halt.
This piece was written and provided by Delphine Gatignol, Director of Newsback.