
Politicians such as Donald Trump have suggested there is “tremendous fraud” in government welfare programs, sowing distrust and spurring states to employ AI to predict and detect fraud more efficiently and accurately. In reality, recipient fraud in these programs is exceedingly rare.

The Supplemental Nutrition Assistance Program (SNAP) provides food stamps and support to 42 million low-income families nationwide at an annual cost of $68 billion. Less than 1% of benefits go to ineligible households, and when they do, it is usually because recipients, state workers, or computer programmers made an honest mistake while navigating the program’s complex regulations and eligibility requirements, not because anyone intended to defraud it. Similarly, the majority of fraud in Medicaid, which provides healthcare for low-income individuals, is committed by healthcare providers, not by the people receiving benefits.

Some states have recently implemented advanced data mining systems to try to root out fraud in SNAP. Similarly, 20 states are employing AI and data analytics to predict and identify fraud in unemployment insurance more efficiently. The federal government is also investing in new Medicaid information technology.

Although these algorithms are intended to direct support to the people who need it and to mitigate fraud, they harm low-income communities, trapping them in a so-called “digital poorhouse.”

Unemployment Fraud Detection in Michigan

From October 2013 to September 2015, Michigan used the Michigan Integrated Data Automated System (MiDAS), an automated unemployment fraud detection system that wrongfully accused over 34,000 individuals of fraud.

Streamlining State Welfare Systems in Indiana

In Indiana, the governor signed a $1.4 billion contract with large technology companies, including IBM and ACS, to automate and privatize eligibility determinations for state welfare programs and reduce the workload of individual caseworkers. In the three years the system was in use, one million applications were denied, a 54% increase over the prior three years. Although it was meant to increase efficiency and streamline the welfare system, it erected new barriers to accessing benefits and treatment and eliminated crucial relationships between families and caseworkers. In one case, Omega Young, an African American woman in Evansville, Indiana, missed a telephone recertification appointment because she was in the hospital battling ovarian cancer, and her Medicaid benefits were revoked as a result.

Homelessness in California

Los Angeles, California has a homeless population of close to 58,000, 75% of whom are unsheltered and living on the streets. Officials are turning to AI to make difficult decisions about resource allocation rather than leaving them to individual caseworkers. However, existing inequalities, such as discrimination against black and brown communities in public assistance systems, are now perpetuated through automation. To collect the data needed to run the algorithm, houseless individuals are subjected to invasive questions about their mental health, family situation, and other personal issues that wealthier residents would never have to answer.
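To make the allocation mechanism concrete, here is a minimal, purely hypothetical sketch of how a vulnerability-scoring system of this kind might rank people for scarce housing resources. The survey fields, weights, and ranking rule below are assumptions for illustration only; they are not the actual tool used in Los Angeles.

```python
# Hypothetical sketch: ranking unhoused applicants by a "vulnerability score"
# computed from intake-survey answers. All fields and weights are invented.
from dataclasses import dataclass


@dataclass
class IntakeSurvey:
    # Answering requires disclosing sensitive personal information that
    # housed residents applying for other services are never asked for.
    years_homeless: float
    er_visits_last_year: int
    mental_health_diagnosis: bool
    fleeing_domestic_violence: bool


def vulnerability_score(s: IntakeSurvey) -> float:
    """Collapse a person's disclosures into a single number used for ranking."""
    return (
        2.0 * s.years_homeless
        + 1.0 * s.er_visits_last_year
        + 3.0 * s.mental_health_diagnosis
        + 3.0 * s.fleeing_domestic_violence
    )


# Scarce housing slots go to the highest scorers; everyone else waits,
# with their sensitive answers retained in a shared database.
applicants = [
    IntakeSurvey(0.5, 1, False, False),
    IntakeSurvey(4.0, 6, True, False),
    IntakeSurvey(2.0, 2, False, True),
]
ranked = sorted(applicants, key=vulnerability_score, reverse=True)
print([vulnerability_score(a) for a in ranked])
```

The point of the sketch is that the score is only as fair as the questions and weights behind it, and producing it requires exactly the kind of invasive data collection described above.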

Child Welfare Risk Model in Pennsylvania

In Allegheny County, Pennsylvania, the Department of Human Services implemented a predictive algorithm to determine which children are at high risk of becoming victims of abuse. However, it disproportionately discriminates against people of color and low-income families, producing both false positives and false negatives: falsely flagging families for maltreatment while failing to catch unsafe situations for other children.

In that case, one of the hidden biases is that it uses proxies instead of actual measures of maltreatment. And one of the proxies it uses is called call re-referral. And the problem with this is that anonymous reporters and mandated reporters report black and biracial families for abuse and neglect three and a half times more often than they report white families.

Virginia Eubanks
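
To see why that proxy matters, here is a minimal, purely illustrative simulation; it is not the Allegheny County model. It assumes an identical underlying maltreatment rate in both groups and applies the roughly 3.5-to-1 reporting disparity described above; the population size, rates, and structure are invented for illustration.

```python
# Illustrative simulation: when "was the family referred?" stands in for
# "did maltreatment occur?", a racially skewed reporting rate becomes a
# racially skewed training label. All numbers besides the 3.5x disparity
# quoted above are invented for illustration.
import random

random.seed(0)

TRUE_MALTREATMENT_RATE = 0.05  # identical for both groups by construction
REPORTING_RATE = {"white": 0.10, "black or biracial": 0.35}  # 3.5x disparity


def simulate_group(group: str, n: int = 100_000) -> tuple[float, float]:
    """Return (observed maltreatment rate, referral rate) for a group."""
    maltreatment_cases = 0
    referral_cases = 0
    for _ in range(n):
        maltreatment = random.random() < TRUE_MALTREATMENT_RATE
        # In this toy model, being reported depends on who is being watched
        # (the group's reporting rate), not on whether maltreatment occurred.
        reported = random.random() < REPORTING_RATE[group]
        maltreatment_cases += maltreatment
        referral_cases += reported
    return maltreatment_cases / n, referral_cases / n


for group in REPORTING_RATE:
    true_rate, referral_rate = simulate_group(group)
    print(f"{group}: maltreatment ~{true_rate:.3f}, referrals ~{referral_rate:.3f}")
```

A model trained to predict the referral label would learn that the label fires about three and a half times as often for black and biracial families, even though the underlying maltreatment rate was identical in this simulation; the bias of the reporters becomes the “ground truth” of the algorithm.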

 

The Bottom Line

Algorithms intended to detect welfare fraud and ensure access to support for low-income and vulnerable families end up discriminating against the very people they are meant to help: the poor and underserved. Federal, state, and local governments need to be more transparent, accountable, and equitable in their use of algorithms for social welfare, and to stop perpetuating systemic inequalities against communities of color and low-income families through automation.
