Thursday, May 26, 2022

What Good is Artificial Intelligence If We Don't Use the Non-Artificial Kind?

(c) by Mark Dempsey

"Hey! He's naked!" - from a child on the parade route of the emperor displaying his new clothes.

 


The optimism over artificial intelligence (AI) rivals the optimism over tulip bulbs in Holland, railroads in the Gilded Age, and, more recently, subprime mortgages and derivatives. But the automation of important decisions still has significant problems.

For those unfamiliar with the concept of artificial intelligence, think of it as computers learning from examples to, in effect, program themselves. Tesla's automated driving "teaches" itself to recognize obstacles and routes. It's not that great, but it's young yet. We Americans are optimistic! ("Americans are a primitive people, disguised by the latest inventions" - George Santayana)

And that's just one problem. Buggy software plagues even the most primitive of applications. For example, if programmers divide by zero without anticipating and handling it, the result is usually an outright crash--and a naive division routine that works by repeatedly subtracting the divisor will never finish when that divisor is zero, spinning forever and consuming all the computing power available.
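
A minimal sketch in Python (purely hypothetical, not code from any real product) of the kind of unguarded division described above--with a divisor of zero, the remainder never shrinks, so the loop never ends:

    def naive_divide(dividend, divisor):
        # Divide by repeated subtraction: count how many times the divisor
        # fits into the dividend.
        quotient = 0
        remainder = dividend
        while remainder >= divisor:     # never becomes False when divisor == 0
            remainder -= divisor        # subtracting zero changes nothing
            quotient += 1
        return quotient

    # naive_divide(10, 2) returns 5.
    # naive_divide(10, 0) loops forever. The fix is the "anticipating" step:
    #     if divisor == 0:
    #         raise ZeroDivisionError("divisor must be nonzero")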

Similarly, some programs keep claiming memory they never release as they process data, until the computer runs out and crashes. This is called a "memory leak."
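
Another hypothetical sketch: here every record processed gets stashed in a list that is never emptied, so memory use grows for as long as the program runs:

    processed = []   # grows without bound; nothing ever removes old entries

    def handle_record(record):
        result = record.upper()      # stand-in for the real work
        processed.append(result)     # the leak: results pile up indefinitely
        return result

    # Feed this a long enough stream of records and the program eventually
    # exhausts available memory:
    # for record in stream_of_records():
    #     handle_record(record)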

Oddly enough, there are (non-artificial) "bugs" in natural intelligence that are similar. One category is "supernormal stimuli." These are flaws in human and animal software that make those species susceptible to some fairly bizarre mistakes.

For example: Peahens simply cannot get enough of peacocks' tails. For peahens, size matters. To test this, some zoologists built artificial, clearly not-alive peacock models with tails so enormous they could not exist in nature. The peahens preferred the models to live peacocks.

Humans experience something similar with sugar. We can suck on one of those gigantic sugar-filled drinks all day, and our digestive software will never say "Hey! You have consumed enough calories!"

There's some indication that wisdom literature like the Ten Commandments, or the Seven Deadly Sins, warned people off these supernormal stimuli, saying it's healthier to avoid them. Deirdre Barrett's book, Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose, spends a lot of time explaining how restaurants seek to program their customers to want ever more sugar, salt, and grease, ignoring the health effects in pursuit of profit. The epidemic of obesity in America is the result of this Gresham's dynamic in food (bad food drives out good).

The political class appeals to the electorate with these "can't get enough" stimuli, too. Who can get enough safety, justice, or fairness? In pursuit of those goals, between 1982 and 2017 the U.S. population increased 42%, but spending on police increased 187%.

Unlike "Law and Order," or "Perry Mason," police don't solve all the crimes--only about 15% in California--despite that massive surge in police spending. Those TV shows amount to revenge porn, and one result is that, with 5% of the planet's population, the U.S. has 25% of its prisoners. That's hugely expensive, and ineffective at reducing crime. Per-capita, the Canadians incarcerate about one seventh as many people, yet their crime rates are about the same as the U.S.

For one thing, Canada has single-payer healthcare, which means people don't need to resort to a life of crime to pay for their spouse's cancer treatment--the plot outline of Breaking Bad.

And speaking of justice, the Guardian even reports that artificial intelligence ("AI") is learning all our worst impulses. One risk-scoring program, "Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%), according to the investigative journalism organisation ProPublica."
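
To make the 45%-versus-24% comparison concrete, here is a rough sketch of the arithmetic behind a per-group false-positive rate: among defendants who did not reoffend, what share did the tool wrongly flag as high risk? (The field names below are invented for illustration; this is not ProPublica's actual data or code.)

    def false_positive_rate(rows, group):
        # rows: dicts like {"race": ..., "flagged_high_risk": bool, "reoffended": bool}
        non_reoffenders = [r for r in rows
                           if r["race"] == group and not r["reoffended"]]
        if not non_reoffenders:
            return None
        flagged = sum(1 for r in non_reoffenders if r["flagged_high_risk"])
        return flagged / len(non_reoffenders)

    # Comparing false_positive_rate(rows, "black") with
    # false_positive_rate(rows, "white") is the comparison behind the
    # "almost twice the rate" finding.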

As attractive (and slothful) as it is to automate decisions like the difficult ones in our justice system, automation is as prone to error as any human, sometimes more so. Sloth, if you missed it, is one of the Seven Deadly Sins, and one possible supernormal stimulus. We simply can't get enough large peacock tails...er, I mean relaxation.

"It's tough to make predictions, especially about the future" - Yogi Berra

People expect AI to predict the future--for example, whether prisoners will reoffend. Yet it's widely acknowledged that AI simply reflects the prejudices baked into the data it learns from and the assumptions of those who built it. The problem is that AI is a "black box" since it teaches itself, so those prejudices surface only after the damage is done. Can't get a loan because you're a person of color? Sorry, our algorithms are opaque, and just happen to have been built by white engineers who are unaware how their programming ultimately results in bias.
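
A toy sketch of how that happens--the neighborhoods, numbers, and "model" below are invented for illustration. Nothing in the code mentions race, yet a system trained on skewed historical approvals simply repeats them:

    historical_loans = [
        {"zip": "95814", "approved": True},   # neighborhood A: mostly approved
        {"zip": "95814", "approved": True},
        {"zip": "95814", "approved": False},
        {"zip": "95838", "approved": False},  # neighborhood B: mostly denied
        {"zip": "95838", "approved": False},
        {"zip": "95838", "approved": True},
    ]

    def train(history):
        # "Learn" an approval rate per zip code from past decisions.
        counts = {}
        for row in history:
            seen, approved = counts.get(row["zip"], (0, 0))
            counts[row["zip"]] = (seen + 1, approved + int(row["approved"]))
        return {z: approved / seen for z, (seen, approved) in counts.items()}

    def predict(model, zip_code):
        # New applicants inherit the historical pattern, whatever caused it.
        return model.get(zip_code, 0.0) >= 0.5

    model = train(historical_loans)
    print(predict(model, "95814"))   # True: approved
    print(predict(model, "95838"))   # False: denied, echoing past bias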

The bottom line: no matter how sophisticated they become, we can't count on computers to automate our thinking for us. Just because we have achieved some momentum in this direction does not mean we can ignore the facts. It has taken millions of years for our natural software to evolve--granted, a far slower process than writing today's programs--but despite all that trial and error, we still have "bugs" in our software like supernormal stimuli.

And we still have the problem of ego: the emperor could have set aside his pride, saying "Thanks, child, for pointing out my mistake. I'll go home and put some clothes on." He would have been more comfortable, too. Unfortunately, the fairy tale says he "grimly continued the parade," reminding us that it's still easier to fool people than to persuade them they've been fooled.

