Cellebrite Is Using AI to Summarize Chat Logs and Audio from Seized Mobile Phones


Cellebrite, the company that makes near-ubiquitous phone hacking and forensics technology used by police around the world, has introduced artificial intelligence capabilities into its products, including tools that summarize chat logs or audio messages from seized mobile phones, according to an announcement the company made last month.

The introduction of AI into a tool that essentially governs how evidence against criminal defendants is analyzed already has civil liberties experts concerned.

“When you have results from an AI, they are not transparent. Often you cannot trace back where a conclusion came from, or what information it is based on. AIs hallucinate. If you always train it on data from cases where there are convictions, it will never understand cases where indictments should not be brought,” Jennifer Granick, surveillance and cybersecurity counsel at the American Civil Liberties Union’s (ACLU) Speech, Privacy, and Technology Project, told 404 Media in an email.

Cellebrite’s products are very common across U.S. local, state, and federal law enforcement, as well as among agencies overseas. Those products include the UFED, a device that bypasses the passcode protection on some phones and extracts data from them. 404 Media previously verified leaked documents that showed which phones Cellebrite could, and could not, unlock as of April 2024. Cellebrite has introduced AI specifically into Guardian, a software-as-a-service “evidence management solution,” as the company describes it. In practical terms, Guardian is a piece of software for analyzing evidence already in a police officer’s possession.

According to Cellebrite’s February 6 announcement, the company’s generative AI capabilities can summarize chat threads “to help prioritize which threads may be most relevant,” contextualize someone’s browsing history to show what was searched for, and build “relationship insight.”

💡
Do you know anything else about how police are using AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +44 20 8133 5190. Otherwise, send me an email at joseph@404media.co.

The announcement included a quote from Detective Sergeant Aaron Osman with Susquehanna Township, Pennsylvania Police Department, who Cellebrite says piloted the AI capabilities. “It is impossible to calculate the hours it would have taken to link a series of porch package thefts to an international organized crime ring,” he says. “The GenAI capabilities within Guardian helped us translate and summarize the chats between suspects, which gave us immediate insights into the large criminal network we were dealing with.”

Responding specifically to that case, ACLU’s Granick said “The Fourth Amendment does not permit law enforcement to rummage through data, but only to review information for which there is probable cause. To use an example from the press release, if you have some porch robberies, but no reason to suspect that they are part of a criminal ring, you are not allowed to fish through the data on a hunch, in the hopes of finding something, or ‘just in case.’ In a series of cases, we have been advocating for time and data-category limitations on searches, and a number of courts have held that those limits are necessary when searching a cell phone, computer, or social media account.”

Granick added “there could be a tendency to believe that an AI tool will successfully identify patterns revealing criminal behavior that a human reviewer would not. The company gets rewarded for finding patterns; police say the tool is useful. But with a data set of any size, you can always find a pattern, even if that pattern is incomplete, misleading or false. All you have to do is ignore other information.”

Victor Cooper, senior director of global corporate communications at Cellebrite, told 404 Media in an email that “Gen-AI is primarily about achieving more with less. Our respective Gen-AI capabilities are designed to enhance productivity and significantly reduce the burden of manual labor. We ensure that all Gen-AI results are clearly marked, allowing users to trace back to the original data that appeared as the indicator. Additionally, there is always a human-in-the-loop (HITL) to review, accept, or disregard the AI-generated suggestions. Every report we produce is firmly grounded in robust evidence,” he said.

“While we deliver efficiency, it remains the user’s responsibility to thoroughly check any AI-generated suggestions or indicators to avoid hallucinations or incomplete results. Terms like ‘partial’ or ‘incomplete’ could equally apply to scenarios where investigators, without the aid of AI, fail to examine all potential evidence. Our Gen-AI tools are intended to automate certain manual tasks, but they do not absolve users of their responsibility to base conclusions on solid evidence and strive for thoroughness and completeness in their work,” he added.

Cellebrite’s newly announced capabilities sound somewhat similar to Draft One, a tool from police contracting giant Axon that uses OpenAI’s technology to automatically generate police reports from body camera audio. In its demonstration videos, Axon likewise stresses the need for officers to proofread anything Draft One generates.
