Introduction

Artificial intelligence is everywhere — internet searches optimized based on your browser history, refrigerators that let you check what’s inside from the grocery store, cars that can drive themselves.

All of these are the result of algorithms meant to make our lives better. But what happens when those algorithms aren’t fair? And what happens when it’s government agencies that are using artificial intelligence to conduct the people’s business?

At first glance, government use of artificial intelligence might sound like a good thing. Relying on data-driven algorithms can remove the potential for human bias in critical decisions and resource allocation. And computers can improve efficiency over the long run at a time when local, state and federal agencies are under pressure to cut costs.

But is artificial intelligence error-proof?

Algorithms are more than the data that goes in and the answers that come out. Assumptions are made in deciding what information to include in a formula and how those inputs are weighted. Biases can thus be baked into algorithms, producing unfair outcomes.

Algorithms can predict the likelihood that a person will commit another crime, a critical factor in sentencing after a criminal conviction. Those formulas can be biased against African Americans.

As algorithms continue to play an ever-larger role in our society, we hope this guide proves helpful to journalists reporting on the issue. If you have feedback, questions or suggestions on how to improve the guide, please contact SPJ Freedom of Information Committee chair Haisten Willis.

This guide was compiled by the Society of Professional Journalists’ Freedom of Information Committee:

Israel Balderas
Danielle McLean
Hilary Niles
Steve Reilly
Michael Savino
Haisten Willis

Algorithms are also increasingly helping government agencies decide how to deploy resources more efficiently, from police officers to social workers. Yet choices made by software programmers can produce outcomes that public employees struggle to defend.

How are public officials evaluating algorithms to make sure we’re getting the desired outcomes? How are we even defining what a desired outcome looks like? When biases are spotted, what is being done to fix the problems? And was it even necessary to rely on artificial intelligence instead of humans in the first place?

These are all questions we as journalists need to ask whenever we find bias within an algorithm. Artificial intelligence itself is inanimate, emotionless and inert. It is incapable of recognizing when algorithms are harming people. More importantly, it cannot fix itself without human action.

But public officials are human, and they’re elected or appointed to find problems in society and fix them. Are they trying to hide behind artificial intelligence, saying algorithms are incapable of bias? Or are they willing to have the conversations needed to correct problems? Will they abandon algorithms if the failures are too deep? Are they properly weighing the pros and cons before using new technologies?

As members of the Society of Professional Journalists, we “believe that public enlightenment is the forerunner of justice and the foundation of democracy.” Nothing is more important to democracy than enlightening the public on how the government makes its decisions and why. In the 21st century, this includes reporting on the ways the government uses artificial intelligence and whether those outcomes are fair, equitable and just.

This project will highlight examples of reporting on artificial intelligence and tips for how you can find similar stories on your beats.

Next: Suggested Story Ideas