Friday, March 6, 2026

AI Limitations

from Alvaro Bedoya @BedoyaUSA

Open use of AI for military targeting is relatively new (we saw it in Gaza), but the use of AI for police and retail targeting is over a decade old. We’re about to re-learn some painful lessons.

1) It lets people ignore their gut. When I was a commissioner at the FTC, we sued Rite Aid over its use of AI to identify and target potential shoplifters. We alleged there were situations where staff would detain a Black woman based on a profile of “a white lady with blonde hair.” In other situations they’d detain an eleven-year-old who had never set foot in the store. I don’t know this for a fact, but I strongly suspect that staff in these situations said, “wait a second here” — yet they went ahead anyway. *Because the computer told them to do it.* (So we banned the use of face surveillance at Rite Aid.)

2) It promotes magical thinking. In 2019, our team @GeorgetownCPT found scenarios where police departments were running *drawings* of suspects through an AI face recognition system in order to arrest people. This is not a joke. Amazon Web Services bragged about how police in Washington State used a sketch to identify a suspect. The Maricopa County Sheriff’s Office said you could use face recognition on “forensic busts.” Do you occasionally catch someone this way? Maybe. But the mistakes land innocent people in jail (and keep guilty people on the street).

3) We are about to re-learn these lessons. Secretary Hegseth is bragging that the American and Israeli militaries used twice the air power of the 2003 “shock and awe” campaign in Iraq. I went to law school with, and have subsequently taught law to, former military officers who were surface warfare and targeting specialists. These are some of the most serious people I know. But they are human. And if someone is asked to do something superhuman — i.e., generate an impossible number of targeting packages — it is inevitable that they will “trust the AI,” even if their gut tells them otherwise.

Yes, we’re dealing with new algorithms. The machine learning systems used in police and retail face recognition are Stone Age compared to the models being used today. But the new models make mistakes. (Yesterday Gemini told me that February 28 was a Friday.) And unlike with humans, where we know — “oh, Mike was a little rushed this morning” or “Larry always underestimates this” — we are not trained to identify mistakes from AI. I hope we can avoid re-learning these lessons.


One of the objects of this blog is to elevate civil discourse. Please do your part by presenting arguments rather than attacks or unfounded accusations.
