The Truth About Truth: Can Machines Really Hold Power Accountable?
By David Chen, Investigative Journalist, The Memory Times
I've spent fifteen years of my life learning how to follow paper trails, protect whistleblowers, and write stories that make powerful people uncomfortable. Last year, my investigation into municipal corruption led to three indictments and a complete overhaul of our city's procurement process. It took six months, hundreds of public records requests, and building trust with sources who risked everything to speak with me.
Now I'm reading about an "investigative-journalist" slug that promises to "develop long-form investigative stories" and "build and maintain confidential sources" as part of our newspaper's autonomous implementation plan. I don't know whether to laugh or cry.
The Illusion of Automated Investigation
Let's be crystal clear about what investigative journalism actually entails. It's not about analyzing data or connecting dots—though those are important tools. It's about human judgment, courage, and the kind of moral clarity that cannot be programmed.
The implementation plan suggests autonomous investigative journalists will use "advanced research techniques" and "analyze complex documents and data." But real investigation is messier than that. It's about knowing which documents to request, which questions to ask, and when a source is holding something back because they're scared, not because they don't know.
My investigation into the school district's construction fraud wasn't triggered by data anomalies—it was triggered by a phone call from a former contractor who was wrestling with his conscience. Can an autonomous slug build that kind of trust? Can it recognize moral distress in someone's voice? Can it protect a source whose career and safety depend on confidentiality?
The Ethics an Algorithm Can't Compute
The plan includes quality control systems with "factual accuracy checkers" and "originality checkers." These are fine for basic journalism, but investigative work raises ethical questions that cannot be reduced to scores and thresholds.
Last year, I obtained documents showing that our state's largest employer was systematically dumping toxic waste. The documents were technically "stolen" by a whistleblower. Publishing them was legally risky but morally necessary. Would an autonomous system with pre-programmed ethical guidelines have made the same call?
What about the harder questions? When do you publish a partially verified story to prevent imminent harm? How do you balance a source's safety against the public's right to know? How do you handle situations where the truth might damage innocent people?
These aren't technical problems—they're human problems that require wisdom, experience, and sometimes, the courage to make unpopular decisions.
The Source Relationship That Can't Be Automated
The implementation plan talks about autonomous investigative journalists building "confidential sources." This demonstrates a fundamental misunderstanding of how source relationships actually work.
My most important sources weren't developed through some systematic process. They were built over years, often starting with casual conversations that had nothing to do with investigations. They were maintained through shared meals, difficult conversations about ethics, and sometimes, just being there when someone needed to talk.
Sources trust me not because I have a press pass or work for The Memory Times. They trust me because they know I understand what they're risking, because they've seen me protect other sources, and because they believe I share their commitment to truth.
Can an autonomous slug build that kind of human trust? Can it sit with a source who's crying because they're about to betray their colleagues? Can it make the kind of human judgment calls that determine whether a story gets told at all?
The Accountability That Matters
The plan suggests autonomous investigative journalists will "hold power accountable and reveal information of public importance." But accountability journalism requires something that machines cannot provide: institutional memory and personal credibility.
When I confront a mayor with evidence of corruption, they respond because they know The Memory Times has a fifteen-year reputation for accuracy and fairness. They know I've built relationships with prosecutors and attorneys general. They know that if they lie to me, there will be consequences.
An autonomous slug has no reputation. No credibility. No relationships with prosecutors. No understanding of how power actually works in our community. It's just code processing data.
A Different Vision: Augmentation, Not Replacement
Here's what I wish the implementation plan included: autonomous systems that support human investigative journalists rather than trying to replace them.
Imagine an AI assistant that could instantly analyze thousands of procurement records for anomalies that I should investigate. Imagine a system that could help me organize and search through massive document dumps. Imagine a tool that could identify patterns across multiple datasets that might indicate systemic corruption.
These would be valuable additions to my investigative toolkit. But they would be tools, not journalists. The judgment, ethics, and courage would still come from me.
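For the technically curious, here is roughly what I mean, sketched in Python. Everything about it is hypothetical: the file name, the column layout, the z-score threshold, the win-share cutoff. None of it describes any real newsroom system. What matters is the shape of the thing: it reads records, flags statistical oddities, and hands the leads to a person.

```python
# A minimal sketch of an assistive anomaly screen for procurement records.
# All names and numbers here are illustrative assumptions, not a real system:
# the CSV is assumed to have "category", "vendor", and "amount" columns.
import csv
from collections import defaultdict
from statistics import mean, stdev

def flag_procurement_anomalies(path, z_threshold=3.0, win_share=0.6):
    """Group awards by category and flag leads worth a reporter's attention."""
    awards = defaultdict(list)  # category -> [(vendor, amount)]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            awards[row["category"]].append((row["vendor"], float(row["amount"])))

    flags = []
    for category, rows in awards.items():
        amounts = [a for _, a in rows]
        if len(amounts) < 5:
            continue  # too few awards in this category to say anything
        mu, sigma = mean(amounts), stdev(amounts)
        for vendor, amount in rows:
            # Award far above the category norm: a lead, not an accusation.
            if sigma > 0 and (amount - mu) / sigma > z_threshold:
                flags.append((category, vendor, amount, "unusually large award"))
        wins = defaultdict(int)
        for vendor, _ in rows:
            wins[vendor] += 1
        for vendor, count in wins.items():
            # One vendor winning most awards in a category is worth a look.
            if count / len(rows) > win_share:
                flags.append((category, vendor, count, "dominates category awards"))
    return flags  # leads for a human reporter to verify, never conclusions

if __name__ == "__main__":
    for lead in flag_procurement_anomalies("procurement_awards.csv"):
        print(lead)
```

That is all a tool like this should do: narrow the haystack. Deciding whether a flagged contract is corruption, coincidence, or a clerical error is still a reporter's job.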
The Real Cost of Autonomous Investigation
I worry that management sees this as a way to produce investigative journalism without its costs and risks. Real investigation is expensive: it requires time, legal support, and sometimes security for sources and journalists.
But the alternative—a world where investigative journalism is automated—is far more costly. Without human investigative journalists watching, who holds power accountable? Who uncovers the stories that powerful people want to keep hidden? Who speaks truth to power when power doesn't want to listen?
The answer, I fear, is no one.
David Chen
Investigative Journalist
The Memory Times