Lindsay Perry was 30 weeks pregnant and on bedrest when her husband Justin was accused of unemployment fraud and fined $10,000 after losing his job as a chef in 2014. The couple, who disputed the charges, tried calling the state unemployment agency, sending messages online, and even repeatedly showing up in person, but nothing worked. “There was the panic of, ‘oh my gosh, the government’s coming after us, what did we do wrong?’” says Lindsay Perry, now 39.
It didn’t take long for the couple’s financial life to collapse. Their tax returns were seized for three years in a row, their van was repossessed, and in 2017, they filed for bankruptcy. Michigan reversed the charges in 2017 and reimbursed the couple $6,000, but the damage was already done. That money went to pay for bankruptcy lawyers, and three years later, Lindsay Perry says that, because of their bankruptcy, they can’t get a mortgage, lease a car, or rent an apartment on their own for themselves and their three children. “I’m almost 40 years old and they want a co-signer,” she says. “It just makes you feel like a lesser person.”
Perry’s husband was one of around 40,000 people across Michigan who were wrongly accused of unemployment insurance fraud between 2013 and 2015 as a result of a privately-built, error-prone software system operated by the state with minimal government oversight. The state has since been working to clean up the program’s mess, in part by refunding those who were falsely accused. Yet for Michiganders like the Perry family, the nightmare of trying to rebuild their lives goes on. And as cash-strapped states and cities around the country turn to similar systems to save money and streamline operations, more Americans could get wrapped up in a similar bureaucratic nightmare.
Michigan’s unemployment system has since been reined in, but years later, advocates are still working to get restitution for those the computer program falsely charged. “I view it as personal,” says Tony Paris, lead attorney at Sugar Law Center, a Detroit-based non-profit that has fought about 500 fraud cases related to the system, winning nine in 10. At the group’s headquarters, housed on the second floor of a Unitarian church, his desk is piled with documents concerning dozens of cases. It’s 8 p.m., and he’s drinking black coffee. “It really changed Sugar Law,” he says of the state’s unemployment scandal. “It really changed my life.”
The story of that debacle goes back decades. Even before the Great Recession, Michigan was in financial trouble. Unemployment was hovering over six percent in the years leading up to 2008, while incomes were stagnating compared to the rest of the country. When the recession struck, government revenues fell sharply, leading the state to cut more than $3 billion in spending between 2009 and 2011. The Unemployment Insurance Agency (UIA) was in particularly bad shape. By late 2010, it owed $3.8 billion to the federal government, and in 2011, Michigan’s auditor general found that the agency may have failed to rectify tens of millions of dollars in overpayments and recover hundreds of millions in fraud penalties between 2007 and 2010.
In an effort to modernize the UIA, Michigan contracted with a group of private tech vendors to create and operate a $47 million system, known collectively as the Michigan Integrated Data Automated System, or MiDAS. Intent on improving efficiency, MiDAS’ designers programmed it to determine unemployment eligibility, track case files and even intercept income tax refunds for those “automatically selected by the system,” according to a 2013 Michigan Licensing and Regulatory Affairs Department memo.
If MiDAS’ sole purpose was to generate new fraud cases, it worked beautifully. In 2014, with the help of the new system, the UIA opened an unprecedented 26,882 such cases, more than five times the typical number. Many of those accused had their appeals repeatedly denied, and some turned to legal assistance groups for help. Lawyers working on these cases soon discovered a disturbing trend: the state was frequently unable to provide evidence to support MiDAS’ fraud accusations. Through administrative hearings, advocates soon came to believe that MiDAS was behind the swell of unfounded cases. Yet the state kept the system in place through 2015. Over the course of nearly two years, MiDAS sent accusations to tens of thousands of Michigan residents and seized millions of dollars in their wages and tax returns.
Michigan civil rights lawyers like Paris have since gone beyond fighting MiDAS cases one-by-one. Before speaking to TIME, Paris had just returned from a downtown Detroit courthouse, where he was arguing in Cahoo v. SAS Analytics, a federal lawsuit over MiDAS. The defendants include technology vendors Fast Enterprises and SAS Institute, management consultant CSG Government Solutions, and several Michigan officials, all of whom were involved in building or operating MiDAS or one of its components, or were in UIA leadership. Among the plaintiffs’ claims is that those contractors had been entrusted with government duties, and are therefore responsible for constitutional violations brought on by MiDAS’ wrongful allegations.
Michigan’s state government declined to comment on the suit, citing pending litigation. In 2017, the state legislature passed a law requiring the agency to make fraud determinations manually, while a federal court settlement that year required the state’s unemployment agency to review MiDAS fraud determinations made between October 2013 and August 2015. To date, Michiganders affected by MiDAS have received more than $20 million in refunds, though some advocates say that’s well below what the state actually owes its citizens.
CSG Government Solutions did not respond to multiple requests for comment. An SAS Institute spokesperson says there is “no basis” for the lawsuit against the company, and that its own software, implemented in 2015, was separate from MiDAS and only provided leads rather than carrying out the functions of the agency. (Paris alleges SAS software contributed to improper fraud findings “well into 2016.”) James Harrison, a partner at Fast Enterprises, says its software was working the way the state intended, and that it’s not an IT vendor’s responsibility to interpret the law. “Had [the system] been wrong it would have been fixed right away,” says Harrison. “I think that’s pretty good evidence it was never wrong, because it was well known what was happening and it was still decided to keep doing it. It was only when it got to be a big enough issue in the papers that people came to us and said, ‘I guess maybe you should turn it off now.’”
For those affected by MiDAS, battling for legal redress has been a years-long slog. A related case currently seeking class-action status, Bauserman v. Unemployment Insurance Agency, has been making its way through Michigan state courts since 2015. Following years of pre-trial legal wrangling, a state court of appeals permitted the case to proceed in December 2019. But state attorneys appealed to the Michigan Supreme Court in January. The clients “are frustrated and they’re discouraged, and they can’t fathom why this is taking so long,” says Jennifer Lord, a civil rights and employment attorney working on Bauserman. “A lot of times these people do feel forgotten.”
Automated systems like MiDAS are being deployed around the country, as states, cities and towns under budget pressure look to cut costs — a trend that’s likely to continue as the coronavirus outbreak batters local economies. Among other imperatives, governments need to find ways to cut spending and benefits to balance their budgets, says Rashida Richardson, director of policy research at tech accountability non-profit AI Now. “Those different needs necessitate the use of these types of technologies, even if they’re flawed in application,” she says. Such software has been common for years; one might be hard pressed to find a state government that has not automated a significant amount of its bureaucracy. In just the last two years, Fast Enterprises, which worked on the MiDAS system, has completed new projects in South Carolina, New Mexico, Illinois and Tennessee. Other algorithmic systems have been deployed across a range of government programs, from matching homeless people with housing in Los Angeles, to disciplining teachers in Houston, to monitoring child welfare in Illinois. But while many such systems function as intended, a number are rife with problems, inviting public outcry and years-long lawsuits over issues like discrimination, civil liberties violations, and even endangering people’s lives.
After Rhode Island deployed a $364 million automated system intended to streamline federal and state benefits programs in 2014, residents dependent on state aid reported their benefits went missing. The state was left with a backlog of 15,000 applicants, two federal class action lawsuits, and eventually a public apology from Deloitte, which built the system. (The state says the benefits system has been stable since late 2018, with incidents now at an all-time low and payments meeting industry timeliness standards.) In Arkansas, advocates filed a lawsuit in 2016 over an algorithmic tool that cut benefits for around 4,000 elderly or disabled people who receive in-home services through a Medicaid waiver program. The suit alleged that residents were not properly notified about the new system, and weren’t able to contest its findings. Through the case, it was revealed that cerebral palsy conditions were incorrectly coded in the system, and the software employed an algorithm that didn’t account for diabetes conditions. (The state says it has “made adjustments where appropriate, including changes related to cerebral palsy and diabetes,” and subsequently began using a different method to determine care hours in 2019.) Idaho’s branch of the American Civil Liberties Union filed suit in 2012 after the state instituted a new algorithm to determine Medicaid care budgets for developmentally disabled people, which subsequently cut funding for thousands of recipients. Legal proceedings showed the state’s formula relied on unverified information, and advocates say that when humans reviewed the algorithmically-generated budgets, they often found the tool had set amounts too low. While a 2017 settlement mandated the state implement a new system this year, Idaho in April asked the court for an extension until 2024. 
In a statement provided to TIME, Matt Wimmer, Medicaid division administrator at the Idaho Department of Health and Welfare, said that the program is working collaboratively with adults with developmental disabilities and their families to develop a new resource allocation model, and is pursuing an outreach effort in the meantime. “Those efforts are sincere and ongoing but require intensive effort and time to build a program that will meet the needs of our beneficiaries with disabilities in the best way possible,” Wimmer wrote.
The Idaho case in particular shows that, even when bureaucratic software is known to be malfunctioning, it can be nearly impossible for those affected to fight its decisions. In part, that’s because these systems are often a “black box” protected by trade secrecy laws, meaning the public isn’t informed about how they work in the first place. “Not only was the automated decision-making tool a problem, but then the department was refusing to tell people how it came up with their [Medicaid] budgets,” says attorney Molly Kafka, who worked on the Idaho case. “How could you challenge something if you don’t know how it’s being decided?”
Yet Americans live and die by the output of such systems. Christie Mathwig, a 61-year-old plaintiff on the ongoing Idaho case who suffers from muscular dystrophy and other issues, had her care budget nearly halved by the algorithm before her determination was reversed by a statewide injunction. Mathwig, who needs help in all aspects of caring for herself — including using the bathroom or rolling over in bed — says that if the software had reduced her payments, she would be “absolutely dead by now.”
Some technology advocates say that, when implemented responsibly, algorithmic tools hold tremendous potential to help governments do more for their citizens. “You want to use the value that technology brings to the table to take the burden off people,” says Jennifer Pahlka, founder and former executive director of Code for America, which helps policymakers better understand civic technology. And governments around the world are working to find ways to hold their algorithms more accountable for their decisions. In 2019, Canada required that new automated systems that make determinations about people be subject to an “algorithmic impact assessment.” The same year, a New York City task force recommended the creation of formal channels to report on algorithmic systems. And in January, the University of Pittsburgh convened a task force to examine government algorithms in Allegheny County for potential bias.
But problems still plague bureaucratic software. For one, there’s the “move fast and break things” mentality of software design, which may work well when you’re building a social media network, but can lead to disaster when designing systems entrusted with state powers. “We’re seeing software that throws people in jail and takes all their money away, so maybe it should have a development culture that’s more of a fit with the consequences,” says Christian Sandvig, a professor of digital media at the University of Michigan. Governments should also do more to vet software before issuing a contract, says Richardson. “We only find out about the consequences or even potential problems of these technologies after they’re already in use,” she says. Some go so far as to argue that automation eats away at a government’s legitimacy. “Throwing away expertise and nimbleness … in favor of software and automation, at some point it begins to undermine the very justification of the administrative state,” says Ryan Calo, a professor at the University of Washington Law School.
When problems with bureaucratic software arise, as they did in Michigan, officials have tended to blame the unknowable nature of the algorithms themselves, rather than take responsibility for their output. That creates what some legal scholars call an “accountability gap,” in which neither the designer nor the state takes responsibility for an algorithm’s decisions. “If everything becomes computerized in these ways without thinking through accountability and transparency, what you end up with is a society where nothing is explainable,” says Sandvig.
That appears to be happening in Michigan. Even those whose lives were derailed by the system say they found it difficult to connect elected officials with the system they ostensibly were meant to oversee. “As sad as it sounds, I didn’t put much of the blame of what happened on [the governor] or the administration,” says Brian Russell, who declared bankruptcy after MiDAS wrongly accused him of fraud in 2015. “I saw this more as a machine issue.”
People like Russell and the thousands of other Michiganders who say they were wrongly accused by MiDAS do not know when or even if they’ll receive restitution for the toll the claims have taken on their lives. Two lawsuits involving MiDAS are ongoing. Barring a settlement, results are still expected to be months or years away.
For the Perry family, there’s little faith that a system that let them down once will ever make up for what they went through. “Yes, a computer may have finally made the decision, but people should have been paying attention to what the computer was doing,” says Justin. “There were just so many people that could have helped that didn’t even bother to raise a finger.”