This post comes from Sarjoo Shah, RELI Group’s Client Executive Director for Citizen Services. With more than 35 years of executive experience, Sarjoo leads RELI’s growth in the state health and entitlement markets. He draws on deep expertise in digital transformation, enterprise architecture and operational leadership to improve service delivery and drive measurable impact.
This post kicks off a two-part series on how artificial intelligence (AI) is reshaping fraud, waste and abuse (FWA) prevention within state and local government agencies. Part 1 examines today’s FWA landscape – what’s driving the risks, the operational pressures agencies face, and the innovations already changing how program integrity is managed. Part 2 explores how AI can help agencies move from detection to prevention and how RELI Group is helping our government partners lead that shift.
State and local government programs collectively manage hundreds of billions of dollars in taxpayer funds each year. From human services to grants, procurement and payment programs, ensuring that those funds are used as intended is an operational imperative – and, increasingly, a strategic one – supported by strong program integrity practices. Yet the risks of FWA remain stubbornly high, and traditional approaches to program integrity are under pressure from stretched budgets, legacy systems and escalating complexity.
Enter AI. In the federal space, AI and advanced analytics are already proving to be game-changers in detecting improper payments, automating reviews and surfacing hidden patterns. For state and local governments, the opportunity is similarly substantial – but so are the obstacles. This piece reviews current marketplace trends, highlights innovations shaping program integrity, and proposes how lessons from the federal FWA world can be adapted to the local government context.
Federal, State and Local Trends: Forces Shaping Program Integrity and FWA Risk
Understanding the current marketplace is essential to addressing today’s FWA challenges. Across every level of government, agencies are balancing rising oversight demands with limited resources and evolving risks. The trends below highlight the key forces shaping how program integrity is managed and where innovation has the most potential to make an impact.
1. Increasing Scale of Improper Payments and FWA Risk
- According to the Government Accountability Office (GAO), annual government losses resulting from fraud are estimated between $233 billion and $521 billion.
- In the healthcare space alone, improper payment volumes continue to balloon: the Centers for Medicare & Medicaid Services (CMS) estimates for FY 2023 show more than $100 billion in improper payments for benefit programs.
- On the federal payment integrity front, the United States Department of the Treasury recently reported that implementing AI-driven fraud detection recouped more than $375 million in a single year.
- For state and local governments, the combination of increased program complexity (grant awards, emergency relief and hybrid funding streams), legacy IT systems, and H.R. 1 – also known as the Big Beautiful Bill – means the risk surface is growing.
2. Operational Constraints and Resource Pressure
- Agencies are facing tight budgets, staffing constraints and increasingly large volumes of data. That combination leaves fewer resources per program dollar to dedicate to oversight, audits and investigations.
- Legacy systems, data silos and limited data-sharing across agencies hinder timely detection of anomalies or emerging fraud schemes.
- The shift toward hybrid funding models (federal/state/local blends), value-based service models and performance-based contracting adds complexity to program integrity. Agencies must adapt oversight to new flows and structures.
3. From Reactive to Proactive/Pre-Pay Models
- Historically, many oversight functions operated in a “pay-and-chase” posture: payments are made, then audits and investigations catch improper payments after the fact. Increasingly, the emphasis is shifting toward prevention: using analytics to detect risk before payment (or very early in the lifecycle), focusing more on “gray zone” improper payments (errors and waste versus intentional fraud), and optimizing limited investigative resources.
- As public expectations around transparency and accountability rise, agencies are under pressure to show measurable improvements in program integrity – not just audit results, but demonstrable reductions in improper payments.
4. AI Adoption Accelerating but Still Emerging
- Federal agencies are moving ahead: AI, machine learning (ML), natural-language processing (NLP) and network/graph analytics are increasingly used in FWA detection. CMS recently launched a competition to solicit ideas from the vendor community on using AI for FWA detection.
- For many state/local governments, full deployment is still emerging: challenges around data accessibility, IT infrastructure, workforce capability, governance and legal/regulatory frameworks remain.
- Fraud schemes are growing more sophisticated, from collusion networks and emerging techniques like deepfakes and synthetic identity fraud to cross-jurisdictional schemes. The “arms race” between oversight agencies and fraud actors has never been more pronounced.
Innovations Shaping Program Integrity
In response to rising risks and operational pressures, agencies are rethinking how program integrity is managed, moving from manual, retrospective reviews toward integrated, data-driven oversight. Emerging technologies are enabling faster detection, smarter prevention and more efficient coordination across programs. Here are some of the more promising innovations that agencies can tap as part of the future of program integrity.
1. Anomaly Detection and Machine Learning at Scale
AI systems can comb large datasets – financial transactions, claims, provider networks or grant awards, for example – to flag outliers, unusual patterns or deviations from expected norms. In the federal space, this approach is already in use. Machine learning models that address class imbalance (fraud is rare relative to “normal” activity) and adaptive learning (models that evolve as fraud schemes evolve) are also emerging in research. For state/local agencies, this means shifting from static, rules-based systems (e.g., “flag provider X if bill > $10,000”) to dynamic systems that learn and highlight new types of risk.
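To make that shift concrete, here is a minimal sketch of what anomaly detection on payment data could look like, using an off-the-shelf isolation forest model from scikit-learn. Everything in it – the column names, values and contamination setting – is purely illustrative rather than drawn from any agency system.

```python
# A minimal sketch of ML-based anomaly detection on payment data, using
# scikit-learn's IsolationForest. Column names, values and the contamination
# setting are illustrative only, not drawn from any agency system.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical extract of claim/payment records
payments = pd.DataFrame({
    "amount": [120.0, 95.5, 15000.0, 88.0, 132.5, 9800.0],
    "claims_per_day": [3, 2, 41, 4, 3, 38],
    "distinct_beneficiaries": [3, 2, 5, 4, 3, 4],
})

# In production, contamination would be set low, reflecting how rare fraud is
# relative to "normal" activity; it is higher here so the toy data yields flags.
model = IsolationForest(contamination=0.3, random_state=42)
payments["risk_flag"] = model.fit_predict(payments)  # -1 = outlier, 1 = inlier

# Route flagged records to human reviewers rather than auto-denying them
for_review = payments[payments["risk_flag"] == -1]
print(for_review)
```

In practice, flagged records would feed an investigator’s queue or a case-management system rather than being printed, and the features would reflect the agency’s own programs and data.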
2. NLP and Unstructured Data Analysis
A significant share of risk and review data is unstructured, such as contract language, case notes, provider documentation and emails. NLP converts this material into structured signals. For example, NLP can surface inconsistencies in documentation, missing justifications, or providers and vendors referenced in emails that don’t appear on approved vendor lists.
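As a simple illustration, the sketch below uses the open-source spaCy library to pull organization names out of a free-text case note and compare them against an approved vendor list. The note, vendor names and matching logic are invented for illustration; a production pipeline would need far more robust entity resolution.

```python
# A minimal sketch of using NLP to surface vendors mentioned in unstructured
# text that do not appear on an approved vendor list. Uses spaCy's named-entity
# recognizer; the note text and vendor names are invented examples.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

approved_vendors = {"Acme Medical Supply", "Lakeside Transport"}

case_note = (
    "Invoice approved for Acme Medical Supply. Follow-up services were "
    "subcontracted to Northgate Billing Partners per the provider's email."
)

doc = nlp(case_note)
mentioned_orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}

# Organizations referenced in the text but missing from the approved list
unmatched = mentioned_orgs - approved_vendors
if unmatched:
    print("Flag for review - unapproved vendors referenced:", unmatched)
```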
3. Network/Graph Analytics and Relationship Mapping
Fraud networks often involve collusive groups, shell companies or layered transactions. Graph analytics allows mapping of relationships – vendors to providers, providers to claims, vendors to payments – to uncover hidden networks. This is especially important in environments with state grants or program assistance, where indirect vendors or sub-grantees may be involved.
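The basic idea can be illustrated with the open-source networkx library: vendors linked to the same owner or address fall into the same cluster, which can then be prioritized for review. The entities and relationships below are invented examples.

```python
# A minimal sketch of relationship mapping with graph analytics, using the
# open-source networkx library. Entities and relationships are invented.
import networkx as nx

G = nx.Graph()

# Edges link vendors to the owners or addresses behind them
relationships = [
    ("Vendor A", "Owner: J. Smith"),
    ("Vendor B", "Owner: J. Smith"),      # two vendors share one owner
    ("Vendor B", "Address: 12 Elm St"),
    ("Vendor C", "Address: 12 Elm St"),   # a shared address links a third vendor
    ("Vendor D", "Owner: R. Lopez"),
]
G.add_edges_from(relationships)

# Connected components group entities that are directly or indirectly linked;
# clusters of nominally unrelated vendors are candidates for closer review.
for component in nx.connected_components(G):
    vendors = {n for n in component if n.startswith("Vendor")}
    if len(vendors) > 1:
        print("Possible collusive cluster:", sorted(vendors))
```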
4. Pre-Payment (or Near-Real-Time) Controls
Rather than conducting only post-payment audits, new systems aim to intervene before payment is finalized: for example, flagging a payment for further review, withholding payment until validation or automating review workflows. For state/local agencies, this means building workflows that integrate analytics into the payment lifecycle, not just at the end.
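As a simplified illustration, the sketch below shows how a risk score might gate a payment before it is released. The scoring function and threshold are placeholders for an agency’s own models and policies.

```python
# A minimal sketch of a pre-payment control: payments above a risk threshold
# are held for validation instead of being released automatically. The scoring
# function and threshold are placeholders for an agency's own model and policy.
from dataclasses import dataclass

RISK_THRESHOLD = 0.8  # illustrative policy value

@dataclass
class PaymentRequest:
    payment_id: str
    payee: str
    amount: float

def score_risk(payment: PaymentRequest) -> float:
    """Placeholder for a trained model or rules engine returning 0.0-1.0."""
    return 0.9 if payment.amount > 10_000 else 0.1

def process(payment: PaymentRequest) -> str:
    score = score_risk(payment)
    if score >= RISK_THRESHOLD:
        # Intervene before the payment is finalized: route to a reviewer
        return f"{payment.payment_id}: HELD for validation (risk={score:.2f})"
    return f"{payment.payment_id}: released (risk={score:.2f})"

print(process(PaymentRequest("P-1001", "Vendor A", 24_500.00)))
print(process(PaymentRequest("P-1002", "Vendor B", 340.00)))
```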
5. Data Sharing, Interoperability and Clearing Houses
Analytics are only as good as the data. Key enablers for program integrity are combining data across agencies, matching vendor lists, sharing investigation outcomes, and creating standardized data pipelines. In the federal space, there is discussion of algorithm/data “clearinghouses” to share models, risk scores and provider/vendor risk profiles across agencies. State/local agencies can replicate this via multi-agency consortia or inter-jurisdictional partnerships.
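A small illustration of the data-matching piece: the sketch below normalizes and joins two hypothetical agencies’ vendor lists to surface vendors that are active in one program but flagged in another. The data, column names and normalization rule are invented for illustration.

```python
# A minimal sketch of cross-agency data matching: normalizing and joining two
# hypothetical agencies' vendor lists to spot vendors active in one program but
# flagged in another. Data, column names and the normalization rule are invented.
import pandas as pd

agency_a_vendors = pd.DataFrame({
    "vendor_name": ["Acme Medical Supply", "Lakeside Transport "],
    "status": ["active", "active"],
})
agency_b_flags = pd.DataFrame({
    "vendor_name": ["ACME MEDICAL SUPPLY", "Northgate Billing Partners"],
    "flag": ["debarred", "under investigation"],
})

def normalize(names: pd.Series) -> pd.Series:
    """Simple cleanup so the same vendor matches across agencies."""
    return names.str.strip().str.upper()

agency_a_vendors["key"] = normalize(agency_a_vendors["vendor_name"])
agency_b_flags["key"] = normalize(agency_b_flags["vendor_name"])

# Vendors that appear in agency A's active list and agency B's flag list
matches = agency_a_vendors.merge(agency_b_flags, on="key", suffixes=("_a", "_b"))
print(matches[["vendor_name_a", "status", "flag"]])
```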
6. Governance, Ethical AI and Model Transparency
As agencies adopt AI/ML, questions of model bias, explainability, auditability and trust become ever more important. A recent survey on federal AI practice underscores the need for governance frameworks and accountability mechanisms. For state/local government contexts, this means balancing innovation with transparency, ensuring that models are auditable, decisions are explainable, and privacy/regulatory obligations are met.
The shift from reactive oversight to proactive, data-driven integrity is already underway. Applied AI and advanced analytics aren’t just enhancing audits; they’re redefining how governments think about risk, compliance and trust. Part 2 of this series will explore how AI can directly help agencies detect, prevent and even predict FWA, and how RELI Group is helping government partners translate those possibilities into measurable outcomes.