AI is everywhere, and the media industry is no exception. With hot takes abounding on how this technology will evolve, much of the hype continues to breed confusion among journalists and audiences alike. As automation and generative AI become increasingly interwoven into our society, journalists need guidance now more than ever on how best to approach AI from an investigative standpoint. This is especially important given the bias already at work in these systems.
NYU associate professor Meredith Broussard gave a compelling talk on this very issue at this year's DataJ Conference, held in Zurich. With her research focusing on AI in investigative journalism, her opening keynote highlighted findings from her recently published book “More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech”. With the European Journalism Centre in attendance, here are some takeaways from her talk on how journalists should approach AI in their investigative reporting:
1. Tone down the AI anxiety - As journalists, Broussard argues, we must demystify AI for our readers to help relieve their anxiety. She reminds us that AI is beautiful, complicated math, not The Terminator.
2. Focus on how AI harms real people - Instead of focusing your reporting on imaginary scenarios of how AI might evolve in the future, Broussard suggests journalists apply a dose of healthy scepticism and investigate how AI is affecting people's daily lives right now.
3. Assume AI discriminates by default - Pointing to Ruha Benjamin’s book “Race After Technology”, Broussard recommends journalists operate from the assumption that AI is discriminatory rather than unbiased, neutral or objective. The problems of the past continue to be replicated in AI systems, particularly those that rely on algorithmic decision-making, which disproportionately harms people of colour.
4. Push back against technochauvinism - Technochauvinism, a term Broussard coined in her book “Artificial Unintelligence: How Computers Misunderstand the World”, is the assumption that computers are superior to people, or that a technological solution is superior to any other. Its time has come and gone, she says. Instead, she argues, we should think about the right tool for the task and push back against this concept. After all, technology is not always the best solution.
5. Investigate how AI systems are built - Journalists should delve into the training data sources to understand what is under the hood of machine learning systems. Broussard points to The Markup’s article “The Secret Bias Hidden in Mortgage-Approval Algorithms”, an investigation that found loan applicants of colour in the United States were 40–80% more likely to be denied a loan than their White counterparts. Why? The US has a long history of discrimination against people of colour, and unsurprisingly, that bias shows up in mortgage-approval algorithms. She also points to The Washington Post’s article revealing the data sources behind ChatGPT and other generative AI tools.
6. Make narrative change happen - By investigating these AI systems, journalists can show the reality of AI through algorithmic auditing and accountability reporting. Another aspect to explore is the accessibility of these technological systems. Broussard also makes the case for engaging in public interest technology through this type of reporting. If you are new to algorithmic investigations, she recommends examining how others conducted them: read the methodologies from news organisations like Lighthouse Reports, The Markup and The Washington Post to understand how best to replicate them.
7. Collaboration is key - Algorithmic investigations are expensive and require a large team with a mix of technical and journalistic skills. Broussard advises teaming up with the right people and ensuring you have the budget and time to investigate your story fully. Funding is available through the Pulitzer Center’s AI Accountability Network.
Want to watch the full talk from the conference? View it here.