Given its Likely Control of Tomorrows to Come, We Must be Prudent With Artificial Intelligence


Of Shakespeare’s soliloquies, none contains more power, affect and memorable lines than Macbeth’s speech beginning “Tomorrow, and tomorrow, and tomorrow.” Just informed of Lady Macbeth’s death, he ponders the futility and illusoriness of life’s endeavors, including the unbridled ambition that drove him to commit murder and usurp a kingdom. Were the Bard of Avon writing today, “Tomorrow, and tomorrow, and tomorrow” might serve as warning, or prologue perhaps, underscoring the dizzying speed and influence of technological advances and how neither biological nor cultural evolution can keep pace. Given its likely control of tomorrows to come, we must be prudent with artificial intelligence.

Until recently, such concerns were relegated, almost entirely, to science fiction. Despite robust threats from nuclear weapons proliferation and meltdowns such as Fukushima and Chernobyl, Ray Bradbury’s short story “There Will Come Soft Rains” hasn’t come true. Neither is “The Veldt” more than a cautionary tale about technological gimmickry gone “wildly haywire.” Still, many scenarios envisioned by George Orwell, Aldous Huxley, Harry Harrison and other authors have footholds in reality. Assaulted with our purchasing histories, we’re browbeaten by automaton advertising. Not only have social media, malicious websites, deepfakes and other Internet malignancies widened cultural divides irreparably, but the swilling of disinformation, myth and conspiracy theories has radicalized a large and vocal minority within the population. Social media algorithms are programmed to spoon-feed us what we want to see and hear, regardless of validity. Using negative, sensational content to “sell soap” by prolonging engagement, they have made the veracity of information less important than advertising one’s identity or type. Long-established, reliable, evidence-based institutions are cast aside to assert an individual’s group. Anyone’s opinion shared in a chat room, even the unproven and half-baked, is given equal footing with experts in their fields. And when that happens, large voting blocs are divorced from reality (science, fact-based knowledge and truth), producing cognitive inequality. The resulting willful ignorance, proclivity for chaos and “low conscientiousness” (i.e., less diligence regulating one’s own behaviors and impulses) slow amelioration of existential threats such as climate change, mass extinction and pandemics to come, becoming clear and present dangers to society. Democracy is further threatened when two-party, win-at-all-costs partisanship, driven by wedge issues and fringe obstructionism, elects idiots to office.
The result: unhinged populists and charlatans resembling TV personality Lonesome Rhodes in Elia Kazan’s “A Face in the Crowd,” whose folksy humor and onscreen persona conceal egomaniacal impulses and contempt for his audience. When that disdain was revealed by a microphone left on after a telecast, Rhodes was brought down. Algorithms, designed to make profits without moral rectitude, are likely to spread, not betray, their creators’ malevolence.

We’re all familiar with Mary Shelley’s Frankenstein and HAL in Arthur C. Clarke’s 2001: A Space Odyssey, which Stanley Kubrick made into a cinematic classic, not only renewing appreciation for Richard Strauss’ “Also sprach Zarathustra” but popularizing cannabis as a moviegoer’s enhancer. Rogue creations, inanimate and alive, rebelling against their creators are a staple of literature and film. Another unintended consequence, every bit as frightening as HAL, is EPICAC in Kurt Vonnegut’s Player Piano, a computer originally introduced in one of his short stories. When it comes to robotic AI, however, no one contributed more thought-provoking stories to the genre than Isaac Asimov. In his collection of sci-fi short stories I, Robot, Asimov, having created the sympathetic robot “Robbie,” explored the ethical implications of how human beings should treat and, in turn, be treated by creations with sophisticated AI. Asimov’s “Runaround” provides the first explicit statement of his Three Laws of Robotics, which earlier writings had only implied. The rules of conduct Asimov devised are as follows. First Law: a robot may not injure a human being or allow preventable harm to come to one. Second Law: a robot must obey orders given to it except when they conflict with the First Law. Third Law: a robot must protect its own existence unless such protection conflicts with the First or Second Law.
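For readers who think in code, the strict ranking of the Three Laws can be sketched as a simple priority comparison. This is a toy illustration only; the field names and functions are invented for the sketch and come from no real robotics framework:

```python
# Toy sketch of Asimov's Three Laws as a strict priority ordering.
# All field names ("harms_human", etc.) are invented for illustration.

def law_priority(action):
    """Score an action as a tuple; lower tuples sort first.
    Tuple comparison is lexicographic, so violating an earlier Law
    always outweighs violating any combination of later ones."""
    return (
        action["harms_human"],     # First Law outranks everything
        action["disobeys_order"],  # Second Law yields only to the First
        action["endangers_self"],  # Third Law yields to both
    )

def choose(actions):
    """Pick the action that best respects the Laws, in priority order."""
    return min(actions, key=law_priority)

options = [
    {"name": "obey order, harm human",
     "harms_human": True, "disobeys_order": False, "endangers_self": False},
    {"name": "refuse order, shield human at own risk",
     "harms_human": False, "disobeys_order": True, "endangers_self": True},
]
best = choose(options)
print(best["name"])  # the First Law dominates: the robot refuses the order
```

The lexicographic tuple comparison is what encodes “except when in conflict with the First Law”: no amount of obedience or self-preservation can outrank preventing harm to a human.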

Given their ambiguity, Asimov’s Laws have come under scrutiny and been revised, as a literary exercise, by many a sci-fi writer and moralist. While thus providing inspiration for fiction, Asimov himself introduced some variations to his rules. I, for one, would expand the moral circle of the First Law by prohibiting any injury or harm, physical or psychological, to any sentient animal or plant. The challenge facing us today (and in the future) is that some computers and robots already outthink and outperform the creators who direct their various tasks. They lack, however, the fictional sentience Asimov imbued in his machines. The “positronic brain” featured in Asimov’s sci-fi robots gave them a recognizable depth of consciousness. Lacking that, and programmed to perform amorally, AI could become a liability just as Internet algorithms are now.

“Culture” is the passing of learned behaviors from one generation to the next, and humans are by no means the only animal to exhibit it. A “meme” is a unit of cultural information which can be replicated or, in similar form, passed from one generation to the next by repetition or imitation. In fact, meme comes from the Greek mimema, meaning imitated. Though perhaps not originally conceived by evolutionary biologist Richard Dawkins, the term was popularized and more fully defined by him in his seminal work The Selfish Gene (1976). Dawkins described memes as the cultural equivalent and evolutionary congener of genes in biological evolution, that is, the cultural unit of transference acted upon by natural selection, “selfishly” competing for transmission and expression in succeeding generations. Understood in those terms, memes passing from one individual to another can evolve, randomly mutate and be selected, for or against, relative to their impacts on the fitness (i.e., reproduction and survival) of individuals carrying and expressing those characteristics. Subverting natural selection by deliberately restricting or enhancing reproduction so certain biological traits are either expressed or eliminated from the population is commonly referred to as bioengineering, selective breeding or, in sociologically extreme, often ethically abhorrent cases, eugenics.

Almost a decade ago, following Dawkins’ line of reasoning and referencing a book I was writing, I published an essay in a newspaper column in which I coined the term “eumemics” to describe selectively restricting or enhancing memes within or between cultures. Controlling information flow could limit or promote expression of specific memes, impede or disperse information from one generation to the next and pre-engineer cultural evolutionary outcomes. In contexts of human survival and the biosphere at large, deliberately steering or extinguishing our memes (i.e., eumemics) could either reap enormous benefit (potentially saving the planet) or chart reckless courses toward unmitigated disaster. Artificial intelligence, if wisely programmed, may be the only means by which to decipher and presage any such distinctions. Within cultures, memes take various forms, ranging from an idea or behavior to a scientific discovery or superstitious myth. Transmission can be carried out electronically, verbally, visually and through a wide range of reproducible communications from e-mails to books, those most frequently copied being most prevalent in cultures. As a result, however much they constitute viruses of the mind, memes can be helpful, harmful or neutral because, once assimilated into thought processes, their persistence in the population depends on replication. Once seeded, implementing and retooling memes for common good or misuse are equally feasible. And today, deliberate alterations of memes on the Internet, particularly on social media and radicalized websites, violate Dawkins’ original concept of memes randomly mutating. The big question: can we trust AI to guide us safely through this maze?

An expanding coalition of scholars has begun issuing statements about the dangers of giving freer rein to artificial intelligence. Not only are they worried about kiosks, computers and robotics taking service jobs from cabbies, truck drivers and restaurant staff; entire economies and social structures run by autonomous AI corporations could be in play. Would CEOs and other humans relinquish that much control to intelligent machines to maximize profit? Could machines evolve to the point of taking control themselves, not only operating businesses but directing competition, international dynamics and the futures of societies and species as a whole? Evolutionary biologists understand and have long asserted that natural selection occurs wherever and whenever three conditions hold simultaneously: 1) differences between individuals, 2) characteristics passed on to future generations and 3) favorable propagation by those variants fittest in the population. In a recent Time article, Dan Hendrycks (Center for AI Safety, San Francisco) observes that those same three biological determinants operate in AI environments, selecting for or against the content-recommendation algorithms used by streaming services and social media. Algorithms that are most addictive, making users devote greater screen time, give the platforms that use them competitive advantages over those that don’t. Algorithms that fail to capture attention, gain influence or garner profits are eliminated, while those that exhibit sensationalism and other addictive properties survive and propagate. All this AI evolution happens, by the way, much faster than many macro-biological norms; it is a highly accelerated or (in deference to Niles Eldredge and Stephen Jay Gould) “punctuated” form of cultural adaptation. As AI reacts to selective pressures favoring malice, violence, scientific illiteracy and disregard for facts, more and more undesirable memes will evolve, “selfishly” persisting by perpetuating profits.
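Hendrycks’ point, that variation, heredity and differential propagation suffice for selection, can be illustrated with a toy simulation. Everything here is invented for the sketch (the “addictiveness” trait, the population size, the mutation rate); no real recommender system is modeled:

```python
import random

random.seed(0)  # deterministic toy run

def evolve(population, generations=30):
    """Each generation: algorithms are copied in proportion to their
    'addictiveness' score (condition 3, differential propagation), and
    each copy mutates slightly (conditions 1 and 2, heritable variation)."""
    for _ in range(generations):
        offspring = random.choices(population, weights=population,
                                   k=len(population))
        population = [max(0.01, a + random.gauss(0, 0.05))
                      for a in offspring]
    return population

start = [random.uniform(0.1, 0.5) for _ in range(100)]
end = evolve(start)

def mean(xs):
    return sum(xs) / len(xs)
# Under this pressure, mean addictiveness drifts steadily upward,
# with no designer ever asking for that outcome.
```

No individual step intends the result; the upward drift in addictiveness is an emergent property of the three conditions acting together, which is exactly the worry when the selected trait is engagement rather than truth.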
Meanwhile, superhumanly smart computerized “beings” could evolve with goals conflicting with ours and with Asimov’s Laws. Like HAL in 2001: A Space Odyssey, self-preservation would likely become an adaptive AI imperative. Already, with little regulation or scholarly oversight, our species is ceding more and more control of our lives to AI, incentivized by greed and competition within and between nations. If such a system were programmed by a malevolent cell or integrated into food production, infrastructure or the power grid, we couldn’t turn it off. “Tomorrow, and tomorrow, and tomorrow…”

Deshefy is a biologist, ecologist and two-time Green Party congressional candidate.