Telephones are not exactly cutting-edge technology. But in 1877, when President Hayes stood in the telegraph room of the Executive Mansion, the strange new device he held for the first time would alter history (and government) forever.
At the time, the White House had been given the very first line with a phone number of “1.”
Sounds momentous, but the only other place to call was the Treasury Department next door. Hayes reportedly found the invention “marvelous” but didn’t quite know what to do with it.
Three years later, a telephone was installed in the U.S. Capitol. By the 1890s, phones were everywhere in Congress: on desks, in cloakrooms, even in the press gallery. What started as a curiosity quickly became a necessity.
The telephone didn’t just change how the government communicated. It changed how it worked.
That’s how new technology spreads. At first, it’s a novelty. And then, if it’s useful, almost overnight, it becomes infrastructure. Eventually, it becomes invisible, just part of how things get done.
AI Feels a Lot Like That Right Now
That’s where we are with Artificial Intelligence right now. It’s (still) new. It’s exciting. It’s everywhere. And for many in (and out of) government, it’s unclear exactly what to do with it.
When we say “AI,” we’re not just talking about chatbots or science fiction. We mean tools that learn from patterns in data, create forecasts, automate repetitive tasks, and augment human decision-making. The question is, “What should you do with it?” Because, contrary to the hype, the answer isn’t “everything.”
Should you use it to speed up paperwork or replace paperwork? To answer questions from the public or eliminate the need for questions? To detect fraud or to make fraud impossible? To write reports or make everything work so smoothly that reports are no longer needed?
The problem is that those are the wrong questions. They all deal with how. Paperwork, Q&A, fraud prevention, and report writing are all activities done in support of some objective. Understanding that objective is the first step: what problem are you trying to solve?
Start With the Mission
We’ve seen this before. A new technology arrives, and the rush to adopt it begins. But when the mission gets lost in the rush, the results are often disappointing or even counterproductive.
That’s why every AI conversation should start with a simple question: What’s the outcome you’re trying to achieve?
Is it being more responsive to veterans’ needs? Providing better insights for public health officials? Reducing the burden on frontline workers? AI can help with all of that, but only if it’s in service of the mission you are trying to achieve.
We’ve seen this firsthand in our work with agencies like CMS, where AI can help identify patterns in claims data that point to potential fraud or abuse, patterns too subtle or complex for traditional rules-based systems. Or at TSA, where computer vision models can help flag anomalies in scanned images, letting agents focus on the edge cases that matter most.
People Are Still the Point—but So Is Scale
There’s a lot of talk about AI replacing jobs. But in our experience, the best AI doesn’t replace people; it supports them. It helps them do more, faster, and with greater precision.
We’ve worked with agencies where AI helps analysts sift through millions of records in seconds, something no human team could do alone. Where automation reduces claims processing time from hours to minutes. Where generative models draft responses to routine inquiries, freeing up staff to focus on complex cases.
Automation is certainly a component of process improvement, but AI and automation aren’t always about cutting headcount. Done correctly, they’re about increasing throughput and doing things we simply couldn’t do before. They’re about pairing human judgment with machine speed.
Trust Is the Real Innovation
The telephone didn’t succeed because it was clever. It succeeded because it enabled faster, better communication, and because people learned to depend on it. They came to trust that when they picked it up, someone would be on the other end.
We need to establish that same level of trust and transparency with AI. That means being open about how it works. Testing it for bias. Protecting privacy. Following the law. And most of all, making sure it serves a useful purpose for the public.
That’s why we’re encouraged by the federal government’s push to build an AI evaluations ecosystem, a key initiative in America’s AI Action Plan. It’s a recognition that performance alone isn’t enough. AI must also be safe, fair, and aligned with public values.
We’re already applying this thinking in our work: using tools that explain model decisions, integrating retrieval-augmented generation (RAG) to ground outputs in trusted, domain-aware data, and stress-testing systems for edge cases and adversarial prompts.
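To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the trusted documents most relevant to a query, then build a prompt grounded in that context. The bag-of-words scoring and the sample policy snippets are illustrative stand-ins, not our production stack; a real system would use a vector store and learned embeddings.

```python
# Minimal sketch of a RAG pipeline: retrieve the trusted documents most
# relevant to a query, then assemble a prompt grounded in that context.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased bag-of-words vector for a piece of text."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = tokenize(query)
    return sorted(documents, key=lambda d: cosine(q, tokenize(d)), reverse=True)[:k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer from context only."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

# Hypothetical policy snippets standing in for a trusted document store.
docs = [
    "Claims must be filed within 90 days of the date of service.",
    "Prior authorization is required for advanced imaging procedures.",
    "Routine office visits do not require prior authorization.",
]
print(grounded_prompt("Is prior authorization required for imaging?", docs))
```

Grounding the prompt in retrieved source material is what lets the output be traced back to trusted documents rather than to the model’s memory.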
AI in Government: Use Cases That Matter
We’re not interested in AI for AI’s sake. We’re interested in solving real-world problems:
- In medicine, we’re exploring how AI can help triage prior authorization requests, using natural language processing to extract key clinical indicators and match them to policy criteria, reducing delays for patients and providers alike.
- In airport security, we see opportunities to use AI to pre-screen passenger data for risk indicators, not to replace agents, but to help them focus their attention where it’s needed most.
- In public health, we’re looking at how AI can help detect early signals of outbreaks by analyzing unstructured data from call centers, social media, and provider notes.
- As part of our Program Integrity service, we’re using anomaly detection to flag unusual billing patterns in claims data, patterns that might otherwise go unnoticed.
- And in customer service, we’re piloting AI agents that assist human agents by summarizing case histories and suggesting next-best actions in real time.
These are practical, measurable, and mission-aligned outcomes.
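The anomaly-detection use case above can be sketched in a few lines. This is a simplified illustration with made-up provider IDs, billing figures, and threshold, not our production approach: it flags providers whose billing totals have a high modified z-score based on the median absolute deviation (MAD), a statistic that stays robust in the presence of the very outliers being hunted.

```python
# Simplified sketch of anomaly detection on billing totals: flag providers
# whose total deviates sharply from the group's median. The modified z-score
# uses the median absolute deviation (MAD), which, unlike the mean and
# standard deviation, is not distorted by the outliers we want to catch.
import statistics

def flag_unusual_billing(totals: dict[str, float], threshold: float = 3.5) -> list[str]:
    """Return provider IDs whose modified z-score exceeds the threshold."""
    values = list(totals.values())
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values)
    if mad == 0:  # all values (nearly) identical; nothing to flag
        return []
    return [pid for pid, amount in totals.items()
            if 0.6745 * abs(amount - median) / mad > threshold]

# Hypothetical billing totals; P004 stands far outside the pattern.
billing = {"P001": 1200.0, "P002": 1350.0, "P003": 1280.0,
           "P004": 9800.0, "P005": 1310.0}
print(flag_unusual_billing(billing))
```

In practice the features would be richer than a single total, but the principle is the same: measure how far each record sits from the norm, and surface only the ones far enough out to warrant a human look.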
But what about the moonshots? The moonshots will come. They always do. First, though, we must get the foundations right. By adopting a try-first approach, we build the muscle memory and the know-how that will lead to even greater achievements.
Our Approach
At RELI, we’ve developed a practical, mission-focused approach to AI adoption. It’s built around five principles:
- Mission-Focused Innovation
- Responsible Governance
- Data Stewardship
- Workforce Enablement
- Intentional Partnerships
We call it the RELI AI Guiding Principles. It’s our philosophy and our guide for how we help government leaders move AI from novelty to innovation to necessity, just as the telephone did.
Because someday soon, AI won’t just be a novelty. It will be infrastructure. And the agencies that start with purpose, people, and trust will be the ones setting the standard for the future of government.