ANALYSIS-US justice system chatbots raise bias and privacy concerns

* US Department of Justice is exploring chatbots

* Some courts are experimenting with automated bots

* Civil liberties groups warn of privacy and risk of bias

By Avi Asher-Schapiro and David Sherfinski

When the US state of New Jersey lifted a COVID-19 ban on foreclosures last year, court officials hatched a plan to handle the influx of cases: train a chatbot to answer queries. The program – dubbed JIA – is one of many bots deployed by US justice systems, with advocates saying they improve access to services while critics warn automation opens the door to error, bias and privacy violations.

“The advantage of the chatbot is that you teach it once and it knows the answer,” said Jack McCarthy, chief information officer for the New Jersey Judiciary. “(With) a help desk or staff, you tell one person and now you have to train all the other staff.”

The trend towards such chatbots may accelerate in the near future – the US Department of Justice (DOJ) closed a public call last month asking for examples of “successful implementation” of the technology in the criminal justice system. “This signals that the DOJ is going to move toward increased funding for automation,” said Ben Winters, an attorney with advocacy group the Electronic Privacy Information Center (EPIC), which submitted a cautionary comment https://epic.org/documents/epic-comments-doj-chatbot-market-survey to the DOJ.

He urged the government to study “the very limited usefulness of chatbots, the potential dangers of overreliance and the collateral consequences of widespread adoption”. The National Institute of Justice (NIJ), the research arm of the DOJ, said it was simply collecting data to respond to developments in the criminal justice space and to create “informative content” on emerging technological issues.

A 2021 NIJ report https://nij.ojp.gov/library/publications/chatbots-criminal-justice-system identified four types of criminal justice chatbots: those used by police, justice systems, prisons, and victim services. So far, most work as glorified menus that don’t use artificial intelligence (AI).
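To illustrate the distinction the report draws, here is a minimal sketch of what a menu-style bot looks like in practice – a fixed script with no machine learning, so it cannot misinterpret a free-text question because it never accepts one. The prompts and options below are hypothetical, not drawn from any court’s actual system.

```python
# Minimal sketch of a menu-style "chatbot": a fixed decision tree with no AI.
# All menu text and options here are invented for illustration only.

MENU = {
    "start": {
        "prompt": "What do you need help with?\n1) Jury duty\n2) Filing fees\n3) Court dates",
        "options": {"1": "jury", "2": "fees", "3": "dates"},
    },
    "jury": {"prompt": "Jury duty: see the summons mailed to you for your reporting date.", "options": {}},
    "fees": {"prompt": "Filing fees: fee schedules are posted at the clerk's office.", "options": {}},
    "dates": {"prompt": "Court dates: have your case number ready and contact the clerk.", "options": {}},
}

def run_menu():
    node = MENU["start"]
    print(node["prompt"])
    while node["options"]:
        choice = input("> ").strip()
        key = node["options"].get(choice)
        if key is None:
            # No free-text understanding: anything off-menu is simply rejected.
            print("Sorry, please pick one of the listed numbers.")
            continue
        node = MENU[key]
        print(node["prompt"])

if __name__ == "__main__":
    run_menu()
```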

But the report predicts that much more advanced chatbots, including ones that measure emotion and mimic empathy, are likely to be introduced into the criminal justice system. JIA, for its part, was trained using machine learning on court documents and can handle 20,000 variants of questions and answers, from queries about expunging criminal records to child custody rules.
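That description suggests a retrieval problem: match a user’s free-text question to the closest of many pre-written question variants and return the attached answer. The sketch below is a rough, hypothetical illustration of that idea – it is not the actual JIA system, and the questions, answers and similarity threshold are invented.

```python
# Illustrative-only sketch: match a free-text query to the nearest stored FAQ
# question using TF-IDF similarity. Data and threshold are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("How do I expunge my criminal record?",
     "Expungement petitions are filed with the county court; forms are on the self-help portal."),
    ("How is child custody decided?",
     "Custody decisions weigh the best interests of the child; see the family division guide."),
    ("When is my court date?",
     "Court dates are listed on your summons or in the online case portal."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_query: str, threshold: float = 0.3) -> str:
    """Return the answer whose stored question is most similar to the query."""
    query_vec = vectorizer.transform([user_query])
    scores = cosine_similarity(query_vec, question_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        # Fall back to a human rather than guessing on a low-confidence match.
        return "Sorry, I don't know that one; please contact the help desk."
    return faq[best][1]

print(answer("how can I get my record expunged"))
```

The fallback to a human help desk when no stored question scores above the threshold reflects the kind of guardrail the critics quoted in this story argue such systems need.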

Its developers are trying to create more personalized services, allowing people to request personal information such as their court dates. But the bot is not involved in decision-making or adjudication – “a thick line” that the justice system has no intention of crossing, said Sivakumar Appavoo, a program manager working on AI and robotic automation.

HIGH STAKES

Snorri Ogata, the Los Angeles Courts’ chief information officer, said his staff attempted to create a JIA-style chatbot, trained using years of data from live agents dealing with questions about jury selection.

But the system struggled to give accurate answers and was often confused by queries, he said. The court therefore opted for a series of simpler menus that do not allow open-ended questions. “In justice and in court, the stakes are higher, and we were stressed about directing people incorrectly,” he said.

Last year, the Identity Theft Resource Center – a nonprofit that helps victims of identity theft – attempted to train a chatbot to respond to victims during off-hours when staff were not available. But the system – backed by DOJ funding – was not able to provide consistently accurate information or respond with appropriate nuance, said Mona Terry, the nonprofit’s chief victims officer.

In particular, it could not adapt to new impersonation schemes that emerged during the COVID-19 pandemic, which produced new jargon and demands that the system had not been trained for. “There’s so much subtlety and emotion in there – I’m not sure a chatbot could take over,” Terry said.

Emily Bender, a University of Washington professor who studies ethical issues in automated language models, said carefully designed interfaces to help citizens interact with government documents can be empowering. But trying to create chatbots that mimic human interaction in a criminal justice setting carries significant risks, she said.

“We have to keep in mind that anyone interacting with the justice system is in a vulnerable position,” Bender told the Thomson Reuters Foundation. Chatbots should not be relied upon to provide urgent advice to those at risk, she said, while systems must also have strong privacy protections and offer people a way to opt out in order to avoid unwanted data collection.

The DOJ did not immediately respond to a request for comment. The 2021 government report on chatbots noted “numerous benefits to implementing chatbots”, including efficiency and improved access to services, while flagging risks arising from dataset bias, incorrect answers and privacy implications.

EPIC, the digital rights group, has urged the government to push the emerging market to produce bots that are transparent about their algorithms and respect user privacy.

The group called on the DOJ to strengthen regulation in the space, from requiring bot licenses to conducting regular audits and impact assessments to hold creators accountable. Albert Fox Cahn, the founder of the Surveillance Technology Oversight Project, said it was unclear why the DOJ should encourage automation.

“We don’t want AI to be a gatekeeper for access to the justice system,” he said. But more and more advanced tools are already being deployed elsewhere https://news.trust.org/item/20220411160005-k1a5o.

Andrew Wilkins, the co-founder of British startup Futr, said the company has already built bots for police to handle reports of crime, from domestic violence to breaches of COVID-19 rules. “There was hesitation about ‘what if the answer was wrong,’” he said, but those worries were overcome by ensuring humans closely monitored bot interactions and could step in to respond to growing demands.

The company is rolling out analytics to try to detect the emotional tone of its chatbots’ conversations and is developing services that not only work on police websites but also on WhatsApp and Facebook, he said. “It’s a way to democratize access to services,” he said.

But for Fox Cahn, such tools are too risky to be reliable. “For me, it’s pretty simple: just don’t build this fucking thing,” he said.
