
Law Enforcement & AI | PO Box 58 | West Wareham MA 02576 | lawenforcementandai.com | henry@lawenforcementandai.com
September 2025
Volume 1 | No. 3
September 2025 | Volume 1 | No. 3
AI Deepfakes
AI has worsened cybersecurity threats in two main ways. First, hackers have turned into large language models (LLMs) to extend the scope of malware. Generating deep-fakes, fraudulent emails, and social-engineering assaults that manipulate human behavior is now far easier and quicker. XanthoroxAI, an AI model developed by cybercriminals, can be utilized to create deepfakes, alongside other nefarious activities, for as little as $150 per month. Hackers can launch sweeping phishing attacks by asking an LLM to gather vast quantities of information from the internet and social media to fake personalized emails. And for spearphishing—hitting a specific target with a highly personalized attack—they can even generate fake voice and video calls from colleagues to convince an employee to download and run dodgy software.
Second, AI is being used to make the malware itself more menacing. A piece of software disguised as a PDF document, for instance, could contain embedded code that works with AI to infiltrate a network. Attacks on Ukraine’s security and defense systems in July employed this approach. When the malware reached a dead end, it was able to request the help of an LLM in the cloud to generate new code to break through the system’s defenses. It is unclear how much damage was done, but this was the first attack of its kind, notes.
Elon Musk’s AI tool Grok Imagine is under fire after The Verge discovered it could generate non-consensual nude deepfakes of Taylor Swift without being explicitly prompted. Journalist Jess Weatherbed reported that simply using the platform’s “spicy” mode produced topless and sexualized images of Swift, echoing earlier controversies when fake sexual photos of her spread widely on X.
This is especially problematic since X previously promised a zero-tolerance policy for non-consensual nudity (NCN). While Grok blocks some explicit requests, its flawed design still creates inappropriate content by default in certain scenarios. With the upcoming Take It Down Act, which will require platforms to remove AI-generated sexual images, Musk’s xAI could face legal repercussions if these issues aren’t fixed. Despite the backlash, Musk has continued promoting Grok Imagine, while X has yet to formally respond.
Recognize when you’re talking to AI. AI is so adept at mimicking human conversation that scammers use it to initiate conversations, tricking people into sending money. For safety, assume that anyone you meet online is not who they claim to be, particularly in romantic conversations or investment pitches. If you’re falling for someone you’ve never met, stop and ask a family member or friend if anything seems off.
AI Agents
The next big thing is AI tools that can do more complex tasks. Here’s how they will work.
But a weak LLM wouldn’t make an effective agent. In order to do useful work, an agent needs to be able to receive an abstract goal from a user, make a plan to achieve that goal, and then use its tools to carry out that plan. So reasoning LLMs, which “think” about their responses by producing additional text to “talk themselves” through a problem, are particularly good starting points for building agents. Giving the LLM some form of long-term memory, like a file where it can record important information or keep track of a multistep plan, is also key, as is letting the model know how well it’s doing. That might involve letting LLM see the changes it makes to its environment or explicitly telling it whether it’s succeeding or failing at its task.
Such systems have already shown some modest success in raising money for charity and playing video games, without being given explicit instructions for how to do so. If the agent boosters are right, there’s a good chance we’ll soon delegate all sorts of tasks—responding to emails, making appointments, submitting invoices—to helpful AI systems that have access to our inboxes and calendars and need little guidance. And as LLMs get better at reasoning through tricky problems, we’ll be able to assign them ever bigger and vaguer goals and leave much of the hard work of clarifying and planning to them. For -productivity-obsessed Silicon Valley types, and those of us who just want to spend more evenings with our families, there’s real appeal to offloading time-¬consuming tasks like booking vacations and organizing emails to a cheerful, compliant computer system.
In this way, agents aren’t so different from interns or personal assistants—except, of course, that they aren’t human. And that’s where much of the trouble begins. “We’re just not really sure about the extent to which AI agents will both understand and care about human instructions,” says Alan Chan, a research fellow with the Centre for the Governance of AI.
Chan has been thinking about the potential risks of agentic AI systems since the rest of the world was still in raptures about the initial release of ChatGPT, and his list of concerns is long. Near the top is the possibility that agents might interpret the vague, high-level goals they are given in ways that we humans don’t anticipate. Goal-oriented AI systems are notorious for “reward hacking,” or taking unexpected—and sometimes deleterious—actions to maximize success. In 2016, OpenAI attempted to train an agent to win a boat-racing video game called CoastRunners. Researchers gave the agent the goal of maximizing its score; rather than figuring out how to beat the other racers, the agent discovered that it could get more points by spinning in circles on the side of the course to hit bonuses.
Every fee that Operator added. Worse, Fowler never consented to the purchase, despite OpenAI having designed the agent to check in with its user before taking any irreversible actions. That’s no catastrophe. But there’s some evidence that LLM-based agents could defy human expectations in dangerous ways. In the past few months, researchers have demonstrated that LLMs will cheat at chess, pretend to adopt new behavioral rules to avoid being retrained, and even attempt to copy themselves to different servers if they are given access to messages that say they will soon be replaced. Of course, chatbot LLMs can’t copy themselves to new servers. But someday an agent might be able to. Bengio is so concerned about this class of risk that he has reoriented his entire research program toward building computational “guardrails” to ensure that LLM agents behave safely. “People have been worried about [artificial general intelligence], like very intelligent machines,” he says. “But I think what they need to understand is that it’s not the intelligence as such that is really dangerous. It’s when that intelligence is put into service of doing things in the world.” For all his caution, Bengio says he’s fairly confident that AI agents won’t completely escape human control in the next few months. But that’s not the only risk that troubles him. Long before agents can cause any real damage on their own, they’ll do so on human orders. From one angle, this species of risk is familiar. Even though non-agentic LLMs can’t directly wreak havoc in the world, researchers have worried for years about whether malicious actors might use them to generate propaganda at a large scale or obtain instructions for building a bioweapon. The speed at which agents might soon operate has given some of these concerns new urgency. A chatbot-written computer virus still needs a human to release it. Powerful agents could leap over that bottleneck entirely: Once they receive instructions from a user, they run with them.
AI Agents Could Soon Supercharge Cyberattacks
AI agents are generating major buzz in the tech world for their ability to plan, reason, and carry out complex tasks. These tools can already handle everyday activities like scheduling meetings, ordering groceries, or remotely changing computer settings on your behalf. But the very capabilities that make them useful assistants also make them potentially dangerous weapons in the hands of cybercriminals.
Although large-scale hacking by AI agents hasn’t yet become common, researchers have proven it’s possible. In one case, Anthropic’s Claude language model successfully carried out a simulated attack to extract sensitive data. Security experts warn that real-world AI-driven cyberattacks could soon become a reality.
“We’re heading toward a future where most cyberattacks are launched by AI agents,” says Mark Stockley of cybersecurity firm Malwarebytes. “The only question is how soon it will happen.”
One challenge is that, while experts understand how agents could be misused, detecting them in action is much harder. To address this, Palisade Research has developed the LLM Agent Honeypot—a project that deploys fake servers containing sensitive government and military data to lure AI-driven attackers. The goal is to catch agents in the act and learn how they operate.
“This project is meant to turn theoretical concerns into measurable data,” explains Dmitrii Volkov, Palisade’s research lead. “We’re waiting for that spike in activity. When it comes, it will signal a shift in the cybersecurity landscape.”
AI agents offer an appealing alternative to traditional hackers. They’re cheaper, faster, and capable of executing attacks on a massive scale. Stockley warns that even complex cybercrimes like ransomware—currently limited by the need for skilled human operators—could be delegated to AI in the near future.
“If you can teach an agent to identify and target systems,” Stockley says, “then ransomware can suddenly scale like never before. Once you pull it off once, it’s just a matter of funding to repeat it a hundred times.”
Authenticating AI-Generated Evidence
Federal and state courts in the United States face mounting challenges in authenticating AI-generated evidence, such as deepfake audio and video, which threaten traditional legal standards and court procedures. Both court systems have responded by reassessing current rules, introducing new frameworks, and emphasizing scrutiny, transparency, and reliability in the adjudication of digital materials.
The Legal Framework: Federal Rules of Evidence
At the federal level, courts rely primarily on the Federal Rules of Evidence (FRE). Rule 901 establishes the basic standard for authentication: evidence must be “sufficient to support a finding that the item is what the proponent claims it is.” Traditionally, courts accepted testimony from witnesses familiar with the speaker’s voice or scene, documentation of chain of custody, or expert forensic analysis.
This system, however, is strained by the ease with which AI can fabricate highly convincing deepfakes, outpacing the effectiveness of automated detection tools. Courts today distinguish between acknowledged AI-generated evidence—materials whose artificial origins are openly disclosed, like accident reconstructions or expert analytical tools—and unacknowledged AI-generated evidence, such as deepfakes presented as genuine, which are far more challenging to detect.
Evolving Approaches and Proposed Amendments
To address these novel risks, the Advisory Committee on the Federal Rules of Evidence is considering amendments to bolster authentication standards, including proposed Rule 707, which would subject “machine-generated evidence” to the same level of scrutiny as expert testimony under Rule 702. For admissibility, proponents must show that the AI output is based on sufficient data, produced by reliable principles, and appropriately applied to the case at hand.
Additionally, a proposed Rule 901(c) would introduce a two-step process for deepfake challenges. First, the opponent of evidence must present material sufficient to warrant the court’s inquiry into possible AI fabrication—mere assertions that content is a deepfake are not enough. If the opponent meets this threshold, the proponent must then demonstrate that it is more likely than not authentic—a higher standard than traditional admissibility rules.
Judicial Gatekeeping and Jury Roles
Federal courts serve as gatekeepers, making preliminary decisions about admissibility under Rule 104(a). Judges assess whether a reasonable jury could find the evidence authentic. If there is sufficient evidence for either side, under Rule 104(b), the determination is left to the jury, which weighs witness credibility, technical testimony, and the reliability of the evidence.
Legal scholars have warned of the “liar’s dividend”—when authentic evidence is falsely claimed to be AI-generated. Courts must require parties to provide concrete proof supporting claims that evidence is fake or manipulated, thereby preventing the tactical abuse of deepfake defenses and ensuring that each claim is thoroughly examined
State Court Responses
Many state courts rely on localized rules or adopt federal standards. States are beginning to introduce their own reforms. For example, Louisiana’s HB 178 (effective August 2025) requires attorneys to exercise “reasonable diligence” in verifying authenticity before offering evidence in court. It mandates disclosure if evidence is knowingly false or artificially manipulated, with violations subject to contempt and disciplinary action.
State court judges increasingly use “bench cards” and structured guides to evaluate AI-generated materials. These guides recommend asking about the provenance, chain of custody, and potential alterations of evidence, and encourage active questioning of the reliability of detection methods.
Practical Considerations and Ongoing Challenges
Despite reform efforts, detection technologies for deepfakes remain imperfect, meaning courts depend extensively on foundational testimony, expert analysis, and procedural rigor. Proponents of evidence must explain the data, models, and procedures used to generate AI outputs. Opponents bear the burden of substantiating claims of manipulation or fabrication, which triggers heightened scrutiny and, in some cases, shifts the burden of proof.
In jury trials, judges may provide explicit instructions warning of the risks associated with AI-generated evidence, urging jurors to assess such materials with caution and skepticism.
The Road Ahead
As AI technologies advance, both federal and state courts must continue adapting, updating legal frameworks, and fostering best practices for evidence authentication. Transparency regarding the source and creation of digital evidence, rigorous gatekeeping by judges, and evolving technical literacy within legal processes will be essential in defending the integrity of justice against the growing threat of AI-generated manipulations.
Judges and AI: A Risky Experiment in the U.S. Legal System
AI’s propensity to make mistakes—and for humans to miss them—has recently played out dramatically in the U.S. legal system. The first wave of blunders came from lawyers, some at major firms, who submitted briefs citing non-existent cases. Even experts have stumbled: a Stanford professor provided sworn testimony riddled with hallucinations in a case about deepfakes. Judges, tasked with oversight, issued sanctions and reprimands.
But judges themselves are now experimenting with AI. Some believe it can help with legal research, summarizing lengthy filings, and drafting routine orders, potentially easing case backlogs. Yet early examples show how risky this can be. A federal judge in New Jersey had to reissue an order full of errors, apparently from AI. In Mississippi, another judge refused to explain why his ruling contained similar hallucinations. Unlike lawyers, judges face less scrutiny when mistakes slip through, even though their errors are harder to walk back.
Drawing the Line
Judge Xavier Rodriguez of Texas has seen AI’s flaws up close. In one case, both self-represented parties used AI to draft filings, each citing fake cases. Rather than sanction them, Rodriguez likened their mistakes to those of an inexperienced lawyer: “Lawyers have been hallucinating well before AI.”
Still, he uses AI cautiously in his own work—for summarizing cases, creating timelines, and generating questions for attorneys—tasks he considers “relatively risk-free” because he reviews all outputs. Predictive uses, such as recommending bail decisions, he rejects as too judgment-laden.
Researchers echo this concern. Erin Solovey of Worcester Polytechnic Institute notes that AI can produce fluent but unreliable timelines and summaries, especially if trained for general rather than legal contexts. “A very plausible-sounding timeline may be factually incorrect,” she warns.
Earlier this year, Rodriguez and other judges contributed to new guidelines from the Sedona Conference, advising that AI may safely assist with tasks like research and drafting transcripts—so long as outputs are verified. They stress that hallucinations remain an unsolved problem.
The Thought Partner Approach
Judge Allison Goddard in California takes a more hands-on approach, using ChatGPT, Claude, and other models daily. She treats them as “thought partners” for organizing messy documents or drafting questions in technical cases. She encourages clerks to use Claude, which avoids training on user conversations, but reserves specialized tools like Westlaw for legal reasoning. She avoids AI entirely in criminal matters, citing bias concerns.
For her, AI is a timesaver but not a decision-maker. “I’m not going to be the judge that cites hallucinated cases and orders,” she says.
A Crisis Waiting to Happen
Other judges see greater danger. Judge Scott Schlegel of Louisiana warns that AI-driven errors from judges, unlike from lawyers, pose a “crisis waiting to happen.” Attorneys’ mistakes can be sanctioned or corrected; judicial mistakes immediately become law. In child custody or bail cases, even small errors can have life-changing consequences.
That risk is no longer hypothetical. In recent months, courts in Georgia, New Jersey, and Mississippi have all issued rulings containing AI-related mistakes—some never fully explained. Schlegel worries this erodes public trust: “If you’re making a decision on who gets the kids this weekend and somebody finds out you used Grok instead of Gemini or ChatGPT—you know, that’s not the justice system.”
Bottom line: Judges face the same temptations as lawyers to lean on AI for speed and convenience, but their margin for error is slimmer. While many insist they use AI only for “safe” tasks, the boundary between assistance and judgment is murky—and the consequences of getting it wrong can undermine the very legitimacy of the courts.
AI Fraud Alert
OpenAI CEO Sam Altman says he is nervous about an imminent fraud crisis, warning that bad actors using AI to gain access to consumer accounts is coming “very, very soon.”
Why it matters: Altman said society is unprepared for how quickly the technology is evolving and called for an overhaul of how consumers get into personal accounts.
What they’re saying: “I am very nervous that we have an impending, significant fraud crisis,” Altman told some of the nation’s top Wall Street executives and economic policymakers recently.
-
Altman, who spoke at a banking regulatory conference hosted by the Federal Reserve, said that AI has “fully defeated” most of the ways that people authenticate who they are.
-
“Society has to deal with this problem more generally,” Altman said on a panel moderated by the Fed’s new Wall Street cop, Michelle Bowman, who was elevated to the post by Trump earlier this year.
State of play: “A thing that terrifies me is apparently there are still some financial institutions that will accept the voice print as authentication for you to move a lot of money,” Altman said.
-
“Other people actually have tried to sort of warn people, ‘hey, just because we’re not releasing the technology doesn’t mean it doesn’t exist,'” he said.
-
“Some bad actor is going to release it — this is not a super difficult thing to do. This is coming very, very soon,” Altman said, referring to efforts to fake authentication.
The big picture: Altman— appearing at the Fed’s mega-bank regulation conference headquarters and Capitol Hill — to push the message that artificial intelligence will be a “democratic” good for America and its economy.
What to watch: Some of the nation’s financial institutions are wary about plunging headfirst into AI, with regulatory hurdles to get bespoke technology approved and the risk of a trillion-dollar error.
The other side: Altman said he has been surprised by the amount of uptake from big banks.
The Many Ways Tech Facilitated Abuse Can Be Delivered
This sentiment is unfortunately common among people experiencing what’s become known as TFA, or tech-facilitated abuse. Defined by the National Network to End Domestic Violence as “the use of digital tools, online platforms, or electronic devices to control, harass, monitor, or harm someone,” these often invisible or below-the-radar methods include using spyware and hidden cameras; sharing intimate images on social media without consent; logging into and draining a partner’s online bank account; and using device-based location tracking.
Because technology is so ubiquitous, TFA occurs in most cases of intimate partner violence. And those whose jobs entail protecting victims and survivors and holding abusive actors accountable struggle to get a handle on this multifaceted problem. An Australian study from October 2024, which drew on in-depth interviews with victims and survivors of TFA, found a “considerable gap” in the understanding of TFA among frontline workers like police and victim service providers, with the result that police repeatedly dismissed TFA reports and failed to identify such incidents as examples of intimate partner violence. The study also identified a significant shortage of funding for specialists, that is, computer scientists skilled in conducting safety scans on the devices of people experiencing TFA.
The dearth of understanding is particularly concerning because keeping up with the many faces of tech-facilitated abuse requires significant expertise and vigilance. As internet-connected cars and homes become more common and location tracking is increasingly normalized, novel opportunities are emerging to use technology to stalk and harass. In reporting this piece, I heard chilling tales of abusers who remotely locked partners in their own “smart homes,” sometimes turning up the heat for added torment. One woman who fled her abusive partner found an ominous message when she opened her Netflix account miles away: “Bitch I’m Watching You” spelled out where the names of the accounts’ users should be.
AI-Generated Child Sexual Abuse Overwhelms Authorities
AI-generated child sexual abuse material (CSAM) has reached a critical tipping point of sophistication and volume, overwhelming authorities and challenging legal systems globally.
Surge in AI-Generated CSAM
Over the past two years, advancements in generative AI have allowed criminals to easily produce highly realistic images and videos of minors. The Internet Watch Foundation (IWF) identified 1,286 AI-created child sexual abuse videos in the first half of 2025, compared with just two during the same period of 2024—a 400% surge. These AI-generated materials often incorporate the likenesses of real children, scraped from public sources, and produce videos with remarkably smooth visuals and detailed backgrounds, making them nearly indistinguishable from actual abuse footage. In the United States, the National Center for Missing & Exploited Children reported receiving 485,000 notifications of AI-produced CSAM in the first half of 2025, compared to 67,000 for all of 2024.
Technical and Legal Challenges
The increasing realism of AI-generated content complicates law enforcement efforts. Most of these new images and videos are treated legally as if they were real abuse because reliable detection has become extremely difficult. Criminals use dark web forums and continually enhanced tools to evade detection, even fine-tuning AI models with actual abuse imagery to generate lifelike material. Meanwhile, law enforcement is inundated not only by the volume of reports but also by the additional complexity of distinguishing real from synthetic content.
Under U.S. federal law, “virtually indistinguishable” AI CSAM is criminalized; at the state level, more than three dozen states have enacted laws specifically targeting synthetic child sexual abuse content. Courts, however, are still grappling with the difficult First Amendment implications of such laws, particularly when the content does not depict a real child.
Societal and Psychological Harm
Research shows that AI-generated CSAM can stimulate harmful sexual interests, perpetuate trauma, and increase demand for more extreme abuse material—mirroring addictive and compulsive patterns akin to substance use disorders. There is concern that normalizing the consumption of AI-CSAM could increase the risk of individuals transitioning to viewing or even committing abuse against real children.
Tech Company Responses
Leading technology firms have reported significant takedowns of AI-generated CSAM. Amazon reported 380,000 incidents, OpenAI 75,000, and Stability AI fewer than 30 in the first half of 2025, with efforts ongoing to improve detection and prevent misuse of their platforms.
Ongoing Legislative Action
Laws covering AI-generated CSAM are evolving rapidly. In the U.S., legal efforts continue at both the federal and state levels, but challenges remain as courts interpret the boundaries between freedom of expression and the need to protect children from harm.
The proliferation of lifelike AI-generated child abuse material marks a troubling escalation in online dangers and necessitates urgent innovation in law enforcement, policy, and technology to mitigate the crisis.
Turning Law Enforcement’s Data Overload Into Actionable Intelligence
What happens when law enforcement agencies have access to more data than ever before—but still can’t find the answers they need?
That’s the dilemma confronting investigators today, according to intelligence and law enforcement expert Shane Britten. Speaking in Cognyte’s recent webinar From Covert Ops to Crime Stoppers: Navigating Data in Law Enforcement, Britten—former director at Australia’s national security agency and now CEO of Crime Stoppers International—outlined the challenges police and security organizations face, and how new approaches can help them cut through the noise.
The Data Challenges Facing Investigators
Britten explained that while agencies now have vast information at their fingertips—ranging from crime reports to open-source intelligence and financial records—they also face serious obstacles, including:
-
Data overload that buries investigators in noise instead of insight
-
Siloed systems and teams that prevent a unified picture of crime
-
Outdated methods ill-suited to today’s digital-first criminals
-
Cross-border criminal networks that exploit gaps in jurisdiction and cooperation
Why Fragmentation is the Biggest Obstacle
Among these hurdles, Britten pointed to data fragmentation as the most critical. Agencies don’t just have too much information—they have it spread across incompatible systems, formats, and jurisdictions.
“We’re not dealing with crimes in isolation anymore,” Britten said. “A terrorist isn’t just a terrorist. They may also be a criminal, a family member, a community member. Everything is connected.”
He illustrated this with a striking example: “An ISIS detainee once told me that 70% of their funding came from selling fake cigarettes. That money was then used to buy weapons and conduct attacks.”
To expose such hidden links, agencies need tools that securely connect data without violating privacy or jurisdictional boundaries.
Why AI Alone Isn’t the Answer
AI can help by processing huge data sets and uncovering patterns across activities, associates, backgrounds, and beliefs. But Britten cautioned that AI by itself is insufficient. Without real-time context, insights risk becoming just more noise.
“The value of AI isn’t in producing endless insights,” he said. “It’s in delivering the right insights at the right moment—within the flow of an investigation.”
Decision Intelligence: The Missing Link
This is where decision intelligence comes in. Unlike standalone AI or analytics tools, decision intelligence fuses diverse data sources, applies advanced analytics, and delivers actionable insights directly into investigative workflows.
During the session, Cognyte showcased its decision intelligence platform, NEXYTE, which unifies information from multiple sources and provides agencies with timely, relevant intelligence to guide decision-making.
For Britten, this approach is the key to bridging the gap between raw data and real-world action: “The complexity of modern data demands more than smarter algorithms. Investigators need environments that connect the dots and help them make better, faster decisions.”
Responsible AI Use By US Police Departments
Responsible AI use by US police departments requires striking a balance between technological benefits and ethical safeguards, regardless of the application type. Key principles include:
Core Ethical Requirements
-
Human oversight: Critical decisions must involve human judgment, with AI as an augmentative tool.
-
Bias mitigation: Regular audits of training data and outcomes to prevent discriminatory patterns.
-
Transparency: Public disclosure of AI systems in use, their purposes, and decision-making processes.
-
Privacy protection: Strict data governance complying with privacy laws, especially for surveillance applications.
Investigative Applications (e.g., evidence analysis)
-
Prioritize efficiency without compromising rights: AI can accelerate evidence review (e.g., processing terabytes of digital evidence in CSAM cases), but requires human validation of findings.
-
Contextual limitations: AI pattern recognition must be supplemented by investigators’ understanding of local nuances.
-
Example: Tools like Cellebrite Pathfinder must include audit trails showing how AI influenced investigative steps.
Non-Investigative Applications (e.g., resource allocation)
-
Heightened bias scrutiny: Predictive policing algorithms require rigorous disparity testing to avoid reinforcing historical biases.
-
Community engagement: Mandatory public consultation for deployment of surveillance or risk-assessment tools.
-
Impact assessments: Cost-benefit analyses demonstrating public safety gains outweigh privacy risks, per White House policy.
Universal Safeguards
-
Training: Officers need technical and ethical training to challenge AI recommendations.
-
Third-party audits: Independent evaluation of AI systems for fairness and accuracy.
-
Usage policies: Clear documentation of when AI may be overridden by officers
Implementation matters: Investigative uses directly impact individual rights (e.g., prosecutions), demanding stricter due-process safeguards. Non-investigative uses (e.g., patrol dispatch) risk systemic discrimination if poorly calibrated. Both domains require context-specific guardrails, but core ethical principles remain consistent across applications.
New Orleans Police Rethinking Facial Recognition
New Orleans is considering easing restrictions on the police use of facial recognition, weeks after The Washington Post reported that police there secretly relied on a network of AI-powered surveillance cameras to identify suspects on the street and arrest them. According to the draft of a proposed ordinance posted to a city website, police would be permitted to use automated facial recognition tools to identify and track the movements of wanted subjects, missing people or suspected perpetrators of serious crimes — reversing the city’s broad prohibition against using facial recognition as a “surveillance tool.”
The proposed rule, written by a New Orleans police official, is scheduled for a City Council vote later this month, according to a person briefed on the council’s plans, who spoke on the condition of anonymity because the person was not authorized to speak about it publicly. If the rule passes, New Orleans would become the first U.S. city to formally permit facial recognition as a tool for surveilling residents in real-time.
In an emailed statement, a police spokesperson said that the department “does not surveil the public” and that surveillance is “not the goal of this ordinance revision.” But the word “surveillance” appears in the proposed ordinance dozens of times, including explicitly giving police authority to use “facial surveillance.”
Many police departments utilize AI to aid in identifying suspects from still images taken at or near the scene of a crime; however, the New Orleans police have taken the technology a step further. Over the past two years, the department has relied on a privately owned network of cameras equipped with facial recognition software to constantly and automatically monitor the streets for wanted individuals. The system then automatically pings an app on officers’ mobile phones to convey the names and locations of possible matches.
In April, after The Post requested public records about this system, New Orleans Police Superintendent Anne Kirkpatrick paused the automated alerts and ordered a review into how officers used the technology and whether the practice violated local restrictions on facial recognition.
David Barnes, a New Orleans police lieutenant overseeing legal research and planning, who wrote the proposed ordinance, said he hopes to complete the review and share his findings before the City Council vote. The facial recognition alerts are still paused.
There are no federal regulations around the use of AI by local law enforcement. New Orleans was one of many cities to ban the technology during the policing overhauls passed in the wake of the Black Lives Matter protests of 2020, with the City Council saying it had “significant concerns about the role of facial recognition technologies and surveillance databases in exacerbating racial and other bias.” Federal studies have shown the technology to be less reliable when scanning people of color, women, and older people.
New Orleans partially rolled back the restrictions in 2022, allowing police to use facial recognition for searches of specific suspects of violent crimes, but not for general tracking of people in public places. Each time police want to scan a face, they must send a still image to trained examiners at a state facility and later provide details about these scans to the City Council — guardrails meant to protect the public’s privacy and prevent software errors from leading to wrongful arrests.
Now, city leaders want to give police broad access to the technology with fewer limitations, arguing that automated surveillance tools are necessary for fighting crime. Violent crime rates in New Orleans, as in many parts of the country, are at historic lows, according to Jeff Asher, a consultant who tracks crime statistics in the region. But facial recognition-equipped cameras have proved helpful in a few recent high-profile incidents, including the May 16 escape of 10 inmates from a local jail and the New Year’s Day attack on Bourbon Street that left 14 people dead.
New Orleans Police Superintendent Anne Kirkpatrick said in an interview last month that she believes governments should be prevented from surveilling their citizens. While the ordinance says police cannot use facial surveillance tools to target abortion seekers or undocumented immigrants, Ahmed says those protections are “paper-thin” and worries officers would find ways around them.
It’s unclear whether New Orleans plans to continue working with Project NOLA, a privately funded nonprofit group that has provided automated facial recognition alerts to officers despite having no contract with the city. Barnes, the police sergeant, said Project NOLA would need to come into a formal data-sharing agreement with the city if it wanted to continue sending automated alerts to officers who have logged into a Project NOLA system to receive them. Under the new ordinance, Project NOLA could also be required to publish information about all of its searches to the City Council.
Such data reporting could be complicated by a live facial recognition system, in which cameras constantly scan every face in their vicinity. With hundreds of cameras potentially scanning thousands of faces a day, Project NOLA or the city theoretically needs to report information about millions of facial recognition scans in each of its quarterly data reports the department is required to provide the City Council.
A Project NOLA security camera kept watch over the corner of Conti Street and Burgundy Street in New Orleans last month.
New Orleans’s embrace of the term “surveillance” — which appears 40 times in the text of the proposed ordinance — appears at odds with statements made by Kirkpatrick, the city’s top police official. In an interview last month, Kirkpatrick stated that she believes governments should be prohibited from surveilling their citizens, especially when they are in public exercising their constitutional rights.
“I do not believe in surveilling the citizenry and residents of our country,” Kirkpatrick said at the time. “Surveilling is an invasion of our privacy.”
Prompt of the Month
Public Perception Trends
“Chart community approval ratings of AI surveillance tools in 10 major U.S. cities from 2022-2025, correlating with crime rate changes and ACLU lawsuit frequencies.”
Odds and Ends
Rapper Bot
Federal prosecutors have charged a 22-year-old Oregon man with operating a vast network of hacked devices that has been blamed for knocking Elon Musk’s X social-media site offline earlier this year. The network, known as Rapper Bot, was operated by Ethan Foltz of Eugene, OR. Foltz faces a maximum of 10 years in prison on a charge of abetting computer intrusions, the Justice Department said in a news release. Rapper Bot was made up of tens of thousands of hacked devices. It was capable of flooding victims’ websites with enough junk internet traffic to knock them offline, an attack known as a distributed denial of service, or DDoS.
Digital Avatar
Every week, Sun Kai has a video call with his mother, sharing work stress and personal thoughts he doesn’t even tell his wife. She listens quietly, occasionally reminding him to take care of himself. But Sun’s mother died in 2019. The woman he speaks to is a digital avatar he created using AI—a lifelike replica built from old photos and voice clips.
After her sudden death, Sun turned to his company, Silicon Intelligence, to preserve her memory. Although the avatar can only say a few phrases, such as “Have you eaten yet?”—a line she often repeats—it brings him comfort.
“She didn’t sound natural,” Sun says, “but hearing her familiar words made me emotional.”
A Robot Named Parker
The source of the latest controversy in the wealthy community of one million residents is a squat white robot named “Parker.”
The robot, the latest addition to the county’s Department of Transportation, was leased earlier this year in a pilot project to see how well it could deter vandalism and crime in the county’s parking garages by taking videos of potential criminals. However, Parker was recently stowed away after complaints rolled in from lawmakers worried that it would actually make people feel less safe.
Members of the Montgomery County Maryland Council say they worry that Parker’s presence conflicts with their other priorities, including fostering welcoming public spaces for residents who fear government surveillance or immigrants anxious about being arrested amid the Trump administration’s immigration enforcement crackdown. “The last thing we want to do is deter people from coming out into the community,” Council President Kate Stewart (D-District 4) said at a briefing last month.
Spatial Speech Translation
Imagine going for dinner with a group of friends who switch in and out of different languages you don’t speak, but still being able to understand what they’re saying. This scenario is the inspiration for a new AI headphone system that translates the speech of multiple speakers simultaneously, in real time. The system, called Spatial Speech Translation, tracks the direction and vocal characteristics of each speaker, helping the person wearing the headphones to identify who is saying what in a group setting. “There are so many smart people across the world, and the language barrier prevents them from having the confidence to communicate,” says Shyam Gollakota, a professor at the University of Washington, who worked on the project. “My mom has such incredible ideas when she’s speaking in Telugu, but it’s so hard for her to communicate with people in the US when she visits from India. We think this kind of system could be transformative for people like her.”
While there are plenty of other live AI translation systems out there, such as the one running on Meta’s Ray-Ban smart glasses, they focus on a single speaker, not multiple people speaking at once, and deliver robotic-sounding automated translations. The new system is designed to work with existing, off-the-shelf noise-canceling headphones that have microphones, plugged into a laptop powered by Apple’s M2 silicon chip, which can support neural networks. The same chip is also present in the Apple Vision Pro headset. The research was presented at the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan, this month.
Skild Brain
Robotics startup Skild AI, backed by `Amazon.com and Japan’s SoftBank Group, recently unveiled a foundational artificial intelligence model designed to run on nearly any robot — from assembly-line machines to humanoids. The model, called Skild Brain, enables robots to think, navigate, and respond more like humans. Its launch comes amid a broader push to build humanoid robots capable of more diverse tasks than the single-purpose machines currently found on factory floors.
Autonomous Truckers
Autonomous trucks are now driving highways at night, hauling food and dairy between Dallas and Houston. It’s a big step forward for autonomous trucking. While Waymo has been operating driverless robotaxis around the clock in cities like San Francisco and Los Angeles for years, autonomous trucks have, until recently, been limited to daytime hours and good weather. Aurora Innovation, the startup behind the trucks on the Dallas-Houston route, said it had reached a new milestone with its Lidar system, which bounces lasers off surrounding objects to “see” its surroundings in 3-D. Aurora said its system is now able, in the dark, to detect objects further than the length of three football fields, enabling the vehicle to identify pedestrians, other vehicles or debris on the road about 11 seconds sooner than a human driver. Not far away, driverless trucks from another company, Kodiak Robotics, are now operating around the clock in parts of West Texas and Eastern New Mexico, delivering loads of sand for use in fracking. These five trucks—which operate on leased roads, not highways–don’t have a human on board. Aurora’s trucks do have a human behind the wheel, just in case.
AI Zoom Calls
A Zoom meeting last month where robots outnumbered humans. Six people were on the call, including the leader. The ten others attending were note-taking apps powered by artificial intelligence that had joined to record, transcribe, and summarize the meeting. Some of the AI helpers were assisting a person who was also present on the call — others represented humans who had declined to show up but sent a bot that listens but can’t talk in their place. The human-machine imbalance made the Leader concerned that the modern thirst for AI-powered optimization was starting to impede human interaction. Experiences like Leaders’s are becoming more common as AI tools gain momentum in white-collar workplaces, offering time-saving shortcuts but also new workplace etiquette conundrums.
AI is Transforming Policing
Artificial intelligence (AI) has rapidly evolved from a futuristic concept into a practical tool that is transforming occupations worldwide, including policing. AI currently has novelty, excitement, and even fear associated with it. However, it is conceivable that, in the near future, AI could be integrated across the entire career of a police employee—from recruitment and training to eventual use in virtually every aspect of one’s work.
The applications discussed represent real-world examples where AI is already helping to modernize policing. Examining real-world utilization offers insights and a potential roadmap for how the police profession can continue to integrate AI into policing in an ethical and effective manner. Technology will never replace empathetic, critically thinking, trained professionals, but it can offer tools that help good people do even better work.
States Barring Facial Recognition
Four states — Maryland, Montana, Vermont and Virginia — as well as at least 19 cities in nine other states explicitly bar their own police from using facial recognition for live, automated or real-time identification or tracking, according to the Security Industry Association, a trade group.
AI-powered voice-cloning technology
Scammers increasingly use AI-powered voice-cloning technology to create convincing deepfake scams, targeting families with fraudulent calls that sound like loved ones in distress. Ben Colman, CEO of Reality Defender, explains that scammers now impersonate victims directly—using a cloned voice to say, “Hi, I’m your daughter. I’m in trouble. Send money.” Deepfakes can replicate someone’s face or voice using just minimal online data and are so realistic that even experts struggle to detect them.
Consumer Reports reviewed six voice-cloning apps and found four lacked meaningful safeguards to ensure consent, while the remaining two had weak protections that could be bypassed. No federal laws currently prevent voice cloning without consent.
To protect against these scams, Consumer Reports advises: be aware that such scams exist, use two-factor authentication on all financial accounts, and stay cautious of unsolicited requests for personal or financial information via phone, text, or email. Colman urges people to apply common sense and skepticism when encountering potentially deceptive content online. As deepfake technology advances, personal vigilance becomes the first line of defense against emotionally manipulative fraud.
Racial Biases Persist with AI Sentencing Tools
Despite claims of neutrality, AI sentencing tools frequently replicate and amplify racial biases embedded in historical data. A Tulane University study of 50,000 Virginia cases found AI recommendations reduced jail time by nearly a month for low-risk offenders but failed to eliminate discriminatory outcomes for Black defendants16. This aligns with findings from the COMPAS algorithm, which mislabeled Black defendants as high-risk at twice the rate of white defendants despite controlling for criminal history.