Fraud, Waste, and Abuse in Benefit Programs

… a core concept used in Implementation and Delivery and Atlas107

Concept description

Although the phrase “fraud, waste, and abuse” is often invoked in the United States when criticising government generally, it is perhaps most commonly applied in critiques of benefit programs.

In 2016 Deloitte Consulting published a paper (reference below) entitled “Shutting down fraud, waste, and abuse – Moving from rhetoric to real solutions in government benefit programs.”

The report opens with:

“For decades, our political leaders have promised to cut fraud, waste, and abuse from government spending, but somehow the problems persist, draining billions – some estimates would say trillions – of taxpayer dollars.

In the 2015–2016 election season alone, several presidential candidates have made cutting fraud, waste, and abuse a key part of their platforms. Blue-ribbon commissions and bipartisan panels from California to Wisconsin have vowed to tackle the problem. None of these, however, have managed to cool the hot rhetoric around the topic.”

Improper payments

For government benefit programs, the term of art is improper payments. The US government website PaymentAccuracy.gov (reference below) defines improper payments as “payments made by the government to the wrong person, in the wrong amount, or for the wrong reason” and notes that:

“Although not all improper payments are fraud, and not all improper payments represent a loss to the government, all improper payments degrade the integrity of government programs and compromise citizens’ trust in government.” (from What is an Improper Payment? on the site referenced below)

“Contrary to common perception, not all improper payments are fraud (i.e., an intentional misuse of funds). In fact, the vast majority of improper payments are due to unintentional errors. For example, an error may occur because a program does not have documentation to support a beneficiary’s eligibility for a benefit, or an eligible beneficiary receives a payment that is too high – or too low – due to a data entry mistake.

“Also, many of the overpayments are payments that may have been proper, but were labeled improper due to a lack of documentation confirming payment accuracy.  We believe that if agencies had this documentation, it would show that many of these overpayments were actually proper and the amount of improper payments actually lost by the government would be even lower than the estimated net loss discussed above.” (from Are all Improper Payments fraud? on the site referenced below)

The Office of Management and Budget (OMB) estimates the rate of improper payments in benefit programs and finds that several of the larger programs have estimated improper payment rates of 10 percent or higher. Deloitte summarizes these rates in a table drawn from the OMB dataset.

Deloitte’s strategies for reducing fraud, waste, and abuse in benefit programs

The Deloitte paper’s authors write:

“There’s no single solution to the problem of fraud, waste, and abuse. Because the problems are complex and evolving quickly, any effective solution must be both multifaceted and agile.

“Fortunately, 20 years of successful fraud reduction in the private sector has shown that program vulnerabilities can be mitigated with an enterprise approach that combines retrospective and prospective approaches, predictive analytics, and adaptive techniques such as machine learning and randomized controlled trials. … Five strategies in particular are critical:

  • Make data collection central to anti-fraud and waste strategies
  • Create a learning system to respond to ever-changing threats
  • Emphasize prevention to get the best return on effort
  • Use “choice architecture” to encourage compliance
  • Share intelligence to reduce intentional fraud”

The problem of false positives

The Deloitte paper includes a useful reminder of the problem of false positives in fraud detection (p. 12):

“Impressive accuracy in a predictive model doesn’t always lead to actionable intelligence. To illustrate, consider a hypothetical type of fraud with a 2 percent prevalence – or “base rate” – in the overall population. In other words, about 20 out of each 1,000 cases sampled at random are expected to involve this type of fraud.

“Next, suppose a data scientist – call him Dr. Keyes – has built a statistical fraud detection algorithm (or “fraud classifier”) that is 95 percent accurate. With this level of accuracy, he would be the envy of his peers. Finally, suppose this algorithm has flagged Mr. Neff as a suspected fraudster. What’s the probability that Neff is actually a fraudster? Perhaps surprisingly, the answer is considerably lower than 95 percent.

“To understand this, let’s return to our hypothetical expectation of 20 fraudsters in a population of 1,000. Keyes’s algorithm’s 95 percent accuracy rate implies that the model could correctly identify 19 of 20 cases of fraud. But it also implies that the model will flag an expected 49 of the remaining 980 cases as fraudulent (0.05 x 980 = 49). Neff therefore could be either one of the 19 true positives or one of the 49 false positives. Thus the so-called “posterior probability” that Neff is in fact a fraudster is only 28 percent.

“The model does provide useful intelligence: One would sooner investigate Neff than an individual not flagged by the model. But in practical terms, his flagging remains an ambiguous indicator of wrongdoing.

“This ambiguity becomes a bigger problem when fraud detection is scaled to larger samples. Consider, for example, California’s Medicaid program, Medi-Cal. In 2011, Medi-Cal’s fee-for-service program processed 26,472,513 claims. Medi-Cal reported that 4.1 percent (49 of 1,168) of sampled claims were potentially fraudulent in 2011, the latest year for which data were available at the time of publication. Extrapolated to the 26 million claims processed during that quarter, more than 1 million of those claims are likely to show indications of potential fraud. If California had a classifier that could detect fraudulent Medicaid claims with 95 percent accuracy, it would still be expected to generate more than 1.2 million false positives.”
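
To make the arithmetic in this passage concrete, the short Python sketch below reproduces the numbers using Bayes’ rule. It assumes, as the quoted example implies, that the 95 percent “accuracy” figure applies symmetrically to fraudulent and legitimate cases (i.e., sensitivity and specificity are both 0.95); the function and variable names are illustrative only.

```python
# A minimal sketch of the false-positive arithmetic above, using Bayes' rule.
# Assumption: the 95% "accuracy" applies symmetrically, i.e. the classifier
# catches 95% of real fraud (sensitivity) and clears 95% of legitimate
# cases (specificity), as the Deloitte example implies.

def posterior_fraud_probability(base_rate, sensitivity, specificity):
    """Probability that a case flagged by the classifier is actually fraud."""
    flagged_fraud = base_rate * sensitivity              # true positives
    flagged_legit = (1 - base_rate) * (1 - specificity)  # false positives
    return flagged_fraud / (flagged_fraud + flagged_legit)

# Deloitte's hypothetical: 2% base rate, 95% accuracy
p = posterior_fraud_probability(base_rate=0.02, sensitivity=0.95, specificity=0.95)
print(f"Chance a flagged case (e.g., Mr. Neff) is really fraud: {p:.0%}")  # ~28%

# Scaling up: Medi-Cal's 2011 fee-for-service volume with a 4.1% base rate
claims = 26_472_513
expected_false_positives = claims * (1 - 0.041) * (1 - 0.95)
print(f"Expected false positives: {expected_false_positives:,.0f}")  # > 1.2 million
```

With these inputs the posterior works out to roughly 28 percent, and the Medi-Cal extrapolation to roughly 1.27 million false positives, matching the figures in the quoted passage.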

Atlas topic, subject, and course

Controlling Fraud, Waste, and Abuse (core topic) in Implementation and Delivery and Atlas107.

Sources

Deloitte Consulting (2016), Shutting down fraud, waste, and abuse – Moving from rhetoric to real solutions in government benefit programs, at https://dupress.deloitte.com/dup-us-en/industry/public-sector/fraud-waste-and-abuse.html, accessed 15 October 2017.

PaymentAccuracy.gov, Frequently Asked Questions, at https://paymentaccuracy.gov/faq/, accessed 15 October 2017.

Page created by: Ian Clark, last modified 15 October 2017.
