This article was not written by a robot – on the JournalismAI Festival 2021

By Lara Wiebecke

Illustrated by Vaneeza Jawad

If we are to believe films like The Matrix or Ex Machina, and influential figures including Bill Gates, Elon Musk, and Stephen Hawking, we should be worried about the future of AI. They make it seem only a matter of time until artificial intelligence overtakes human intelligence and either traps us all in a virtual simulation or, perhaps more realistically, takes our jobs away. Even journalism, a field that might seem immune to automation, could be destabilised by artificial intelligence. However, according to research conducted by Polis, LSE’s media think tank, this might not be a bad thing. In fact, AI might take journalism to the next level and unearth stories that could not have been told without technological assistance.

At the JournalismAI Festival 2021, practitioners and academics shared their ideas on how newsrooms might use AI in the future. The panel discussion “Automating the Analysis of Documents, Images and Language: AI in Service of Investigative Journalism” explored possible uses for AI in investigative journalism. In collaboration with Polis, members of seven newsrooms in Latin America and the US worked on projects exploring the innovations AI could bring to their investigative work. The journalists were split into different teams, one of which used artificial intelligence to identify gender-based violence on social networks. The team was represented by Bárbara Libório, a journalist working for Azmina magazine in Brazil, and Fernanda Aguirre, a data analyst working for DataCrítica in Mexico. By feeding an algorithm misogynistic tweets, their team taught it to identify and map misogyny online, letting journalists comb through volumes of data that would be impossible to review by hand.
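The panel did not detail the team’s actual model, but the workflow it describes – hand-label example tweets, train a classifier, then score new posts at scale – looks roughly like this minimal sketch in Python. The example tweets, labels, and model choice here are illustrative assumptions, not the team’s pipeline.

```python
# A minimal sketch of a supervised text classifier of the kind described:
# learn from hand-labelled tweets, then score unseen ones for human review.
# Data and model choice are hypothetical, not the Azmina/DataCrítica system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: tweets labelled misogynistic (1) or not (0)
tweets = ["example abusive tweet ...", "example neutral tweet ...",
          "another abusive tweet ...", "another neutral tweet ..."]
labels = [1, 0, 1, 0]

# Turn words into numeric features, then fit a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# Probabilities near 1 flag posts for a journalist to review by hand
new_tweets = ["an unseen tweet to check ..."]
print(model.predict_proba(new_tweets)[:, 1])
```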

Similarly, the team represented by Shreya Vaidyanathan (project manager at Bloomberg) and Gibrán Mena (co-founder of DataCrítica) used AI to analyse satellite imagery. Their goal was to detect environmental damage, for instance by detecting deforestation. The artificial intelligence analysed satellite images of forests, extracting information invisible to the human eye and predicting patterns. However, the journalists admitted that accessing high-definition images for the AI was difficult and that all findings ultimately had to be verified by humans. AI can take over certain tasks in the newsroom, but it is far from replacing human journalists.
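The speakers did not explain their exact method, but one standard way to surface “information invisible to the human eye” in satellite imagery is to compute a vegetation index per pixel and compare it over time. The sketch below assumes hypothetical band data and an arbitrary threshold; it shows why flagged pixels are only candidates that still need human verification.

```python
# A minimal sketch of one common technique for spotting vegetation loss:
# compare a vegetation index (NDVI) for the same area across two dates.
# The random arrays stand in for real red/near-infrared satellite bands,
# and the 0.3 threshold is an illustrative assumption.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: high over healthy vegetation."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

# Hypothetical band data for the same area, captured a year apart
red_before, nir_before = np.random.rand(256, 256), np.random.rand(256, 256)
red_after, nir_after = np.random.rand(256, 256), np.random.rand(256, 256)

# Pixels whose NDVI dropped sharply are deforestation candidates; they are
# candidates only, which is why journalists still verify findings by hand
loss = ndvi(red_before, nir_before) - ndvi(red_after, nir_after)
suspect_pixels = loss > 0.3
print(f"{suspect_pixels.mean():.1%} of pixels flagged for human review")
```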

The Festival also saw discussion of whether using artificial intelligence in journalism is ethical, in the talk “Bridging AI Ethics and Journalistic Standards”. Tess Jeffers, the Director of Data Science at the Wall Street Journal, explained that the WSJ was using AI in its investigative work, as well as for online comment moderation and to determine user preferences for content recommendation. Jeffers said the WSJ was aware of the possible ethical problems of using AI and had made it a priority to be transparent with readers about them. Whenever the paper considered new ways AI could be used to collect user data, it first asked whether readers had been sufficiently informed about what was happening with their data. The newsroom had also reflected on which areas it was comfortable using AI in and established standards to guide its decisions. It was important to Jeffers to emphasise that the mission of her paper remained the same even as the tools changed: “Our readers always come first before profits.”

Jonathan Stray, a researcher at the Berkeley Center for Human-Compatible AI, deals with ethical questions around online recommender systems in his work. “The main ethical concern there is how do you choose the stories, how do you decide which [stories] someone should see. The modern practice is generally to choose stories that produce certain types of user interaction, so engagement, right? So I want to show you stories that you are going to click on, but… that does not necessarily align with our values and our ethics.”
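Stray’s point becomes concrete when you look at how such a ranking is computed. The toy sketch below, with all scores invented for illustration, contrasts a purely engagement-driven ordering with one that also weighs an editorial value score; it is one possible formulation, not how any real newsroom’s recommender works.

```python
# A toy illustration of the engagement-versus-values tension in recommender
# systems. Every number here is invented; the 0.5/0.5 blend is arbitrary.

# Hypothetical stories with a predicted click probability and an editorial
# "news value" score a newsroom might assign
stories = [
    {"title": "Celebrity gossip", "p_click": 0.9, "news_value": 0.2},
    {"title": "Local budget investigation", "p_click": 0.3, "news_value": 0.9},
    {"title": "Sports recap", "p_click": 0.6, "news_value": 0.5},
]

# Engagement-only ranking: optimise clicks and nothing else
by_engagement = sorted(stories, key=lambda s: s["p_click"], reverse=True)

# One possible values-aware ranking: blend engagement with editorial value
by_values = sorted(stories,
                   key=lambda s: 0.5 * s["p_click"] + 0.5 * s["news_value"],
                   reverse=True)

print([s["title"] for s in by_engagement])  # gossip ranks first
print([s["title"] for s in by_values])      # the investigation rises to the top
```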

In his previous role, Stray built software for investigative journalists, where the primary concern was story selection. As he pointed out, journalists constantly have to decide which stories to report on. Car crashes, for instance, may only be considered newsworthy when they involve celebrities or a large number of victims. While the factors that make a story newsworthy may have been ambiguous or hidden in the past, “when you are working with computers you have to be extremely explicit about what you define as news”. This makes it necessary for newsrooms to reflect on their values and processes.

AI is already shaping journalism in important ways, assisting journalists in processing huge amounts of data and bringing new stories to light. In the meantime, researchers, programmers and journalists continue to wrestle with ethical questions around transparency, bias, story selection, and recommendation. As of now, AI cannot autonomously produce well-researched, authentic-sounding content. But in a few years, who knows: maybe even The Beaver will be written by robots.
