Fairfield University Hosts Panel on Ethics and AI

On Monday, Fairfield University’s Dolan School of Business held a virtual panel discussing the ethics of artificial intelligence applications in commerce.

The panel featured Jacob Alber, principal software engineer at Microsoft Research; Iosif Gershteyn, CEO of the immuno-oncology company ImmuVia; and Philip Maymin, director of the Business Analytics Program at Fairfield University.

“Thank you everybody for joining us today as we discuss the ethics of perhaps the most important technology shaping our changing world today, which is artificial intelligence,” Gershteyn said to viewers.

In a Q&A format, panelists engaged in an hour-long debate on a variety of topics ranging from acknowledging bias in AI to the possibility that code could become sentient. 

This conversation has been edited and condensed for clarity.

How would we know if a piece of code has become sentient?

Maymin: It’ll be a decision of society, right? Ultimately, we decide, as a society and a legal system, what constitutes capacity. The definition of what a person is has changed many times over hundreds of thousands of years. It changed this year. The idea of who has rights, who has capacity and who is a minor versus an adult has changed many times. Presumably, there will first be an AI that we should treat as a minor before we treat them as an adult. So, there’ll be a certain amount of rights that go along with that.

Gershteyn: I believe that code cannot be intelligent. By definition, intelligence requires understanding. Mechanisms are not understanding. When you write a book, that book only exists when a human reads it. The code is not intelligent or sentient – it can only give the appearance of such to intelligent, sentient beings.

Maymin: The counterargument I would put forward is that a strand of DNA is a very simple kind of book or code. You put all your mechanisms around it, and suddenly you have a living, breathing human who can say things that nobody on earth ever thought of. I don’t think it’s that crazy to think that code running on some other mechanism could, in fact, also exhibit the same sort of intelligence.

Gershteyn: Well, actually, there has never been a successful creation of life from nonlife. And all of the synthetic biology that’s being worked on always starts with a basis of some life. Even if you create artificial DNA, you still need to put it into a plasmid, etc. So even there, I firmly believe that intelligence is a property of life and a secondary property of consciousness.

Alber: It’s interesting that you hit upon this sort of separation between sentience and intelligence. That raises a couple of questions. Can you have sapience without sentience? Can you have intelligence without consciousness? And if you can’t, how do you determine that something is conscious and that it has an internal subjective process? Our current test for human-level intelligence, the Turing Test, has a very strong flaw. GPT is a perfect illustration of that flaw – it’ll be perfectly happy writing the sentence, “a flock of files flew beneath the tarmac.” But most people would not interpret that as sensible text.

To take the devil’s advocate position, from a scientific standpoint, I don’t have a principled reason to claim that I currently need any additional ingredients to generate the qualia that we observe from humans, animals and so on. So to that extent, it doesn’t seem unreasonable to say that code can be alive and can have a subjective experience. But in order for us to be able to believe that, we need to have a much better understanding of what it is that causes us to have a subjective experience. 

Should AI that exhibits bias be shut down or overwritten in special cases?

Maymin: It’s a complicated question. Let’s try thinking about it from the flip side – what if an AI discovered a bias based on protected class information? You know, race, ethnicity, gender, age, religion, whatever it may be. Suppose an AI found that historically-oppressed minorities are better at repaying loans, so it wants to give them better rates. Should we prevent that in the name of reducing bias? Or is bias reduction really just about making sure it doesn’t harm certain people, but it’s okay if it benefits them? 

Alber: A lot of this question needs to be informed by the specific ethics of the field in which you’re applying the AI. There are multiple schools of thought on whether or not you should consult data that’s correlated to protected information. You can actually end up creating more bias if you ignore this information. If you include those attributes and attempt to use them as controls to ensure that, for example, your dataset is representative and proportional, then you will end up with a better classifier at the end. So, you probably actually do want to collect that data, funnily enough, but you want to show that your decision wasn’t influenced by it in a statistical sense.

Maymin: That’s an interesting irony, right? In order to try to reduce bias, we actually have to ask probing, personal, uncomfortable, ignorant questions.

But if the AI is finding relationships between inputs such as ethnicity or gender, that can be complicated. You might be picking up very arbitrary relationships that, if we knew what they were, we would shut down. People may ask, “how dare you look at that information? Sure, it wasn’t on the list of prohibited information, but any human would have known not to think about things that way.” And I don’t know if there’s a way to safeguard against that.

Alber: There are a number of good toolkits that let you interrogate models and pull out causal relationships between your input data and your output data. Not to toot our own horn too much, but our lab at Microsoft Research works on a toolkit called Fairlearn which I strongly advise people to take a look at to help them understand what kind of biases they’re including in their models. 
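
For readers curious what that kind of interrogation looks like in practice, the sketch below (not something presented on the panel) uses Fairlearn’s MetricFrame to break a toy loan classifier’s behavior down by a sensitive attribute; the dataset, column names and model are illustrative assumptions rather than anything the panelists described.

```python
# Illustrative sketch, not from the panel: auditing a trained classifier for
# group-level disparities with Fairlearn. The data, column names and model
# below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Toy loan dataset: the sensitive attribute is collected for auditing
# but deliberately excluded from the model's inputs.
data = pd.DataFrame({
    "income":     [40, 85, 60, 30, 95, 50, 70, 45],
    "debt_ratio": [0.40, 0.10, 0.30, 0.60, 0.20, 0.50, 0.25, 0.45],
    "group":      ["A", "B", "A", "B", "A", "B", "A", "B"],
    "repaid":     [1, 1, 1, 0, 1, 0, 1, 0],
})

X = data[["income", "debt_ratio"]]   # the protected attribute is not a feature
y = data["repaid"]

model = LogisticRegression().fit(X, y)
predictions = model.predict(X)

# MetricFrame breaks each metric down by the sensitive feature, which is one
# way to check, statistically, whether outcomes differ across groups even
# though the attribute never entered the model.
audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y,
    y_pred=predictions,
    sensitive_features=data["group"],
)
print(audit.by_group)        # per-group accuracy and approval rates
print(audit.difference())    # largest gap between groups for each metric
```

A toolkit like this reports the gaps; as the panelists note, deciding whether a gap is acceptable remains a human judgment.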

With that said, though, you have to remember that the whole point of AI is to find the correct bias for your model. When you start, your model is randomly initialized to some degree. It will not be balanced or fair unless you specifically engineer it to be uniform across all of your possible output space. Your goal is to find the correct biasing of it so that it gives you the answers you want.

Gershteyn: There’s a huge confusion here between the definitions of bias. In a mathematical sense, bias is a deviation from reality, and the whole point of the algorithm is to minimize that bias to most accurately conform to the data set. Whereas the legal definition of bias is that some categories need to be excluded from the decision-making, whether or not they have predictive value. So ultimately, the overriding or shutting down of AI boils down to the moral choice of a human agent, who notably bears legal responsibility.

Are privacy disclosures that no one reads ethically sufficient?

Maymin: You’re right – nobody reads them and nobody is excited by them. Even some people who write them aren’t excited by them. And yet, from the company’s perspective, they have to protect themselves because people will sue them otherwise. This extends not just to privacy disclosures, but also terms and conditions. 

But it can be made quite exciting if you recognize that there’s a real market opportunity here. Imagine a company whose job it is to make privacy disclosures easier for me to understand. I’m happy to pay a dollar for somebody else to read them to me, and tell me if there’s something I need to be worried about. That doesn’t have to be a human being. That could be an AI that collects all the privacy disclosures, reads them, marks them, and when they change, all it has to do is compare the new version to the old and show me the differences. I can feed them into OpenAI’s text predictor and say, “what do I need to be worried about in terms of privacy disclosures?” That’s a service I would pay for. Wouldn’t you? 
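
As a rough sketch of the service Maymin imagines (nothing the panelists have built), such a policy watcher could store each disclosure it has seen, diff the latest version against the old one, and hand only the changed passages to a summarizer; the summarizer below is a hypothetical stand-in rather than any specific text-prediction API.

```python
# Illustrative sketch, not a product described on the panel: keep the
# previously seen policy text, diff it against the latest version, and
# surface only what changed. summarize_changes is a hypothetical stand-in
# for a language-model call; no specific API is implied.
import difflib

def changed_passages(old_text: str, new_text: str) -> list[str]:
    """Return lines that were added or modified between two policy versions."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm=""
    )
    # Keep only newly added lines, skipping the "+++" file header.
    return [line[1:] for line in diff
            if line.startswith("+") and not line.startswith("+++")]

def summarize_changes(passages: list[str]) -> str:
    """Placeholder: a real service would ask a language model what, if
    anything, in these changed passages the user should worry about."""
    return "\n".join(f"- Changed: {p}" for p in passages if p.strip())

old_policy = "We collect your email address.\nWe never sell your data."
new_policy = "We collect your email address and location.\nWe may share data with partners."

print(summarize_changes(changed_passages(old_policy, new_policy)))
```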

Alber: The idea of feeding an AI privacy policies and having it tell you what is most important begets a chicken and egg problem. Every single one of us has different values and places different levels of importance on various privacy issues. For example, maybe I don’t consider my age particularly private when I’m online and I’m reasonably comfortable giving it out, but I might be a bit more leery about giving out my location or my religious affiliation or ethnicity. So, unless every single person creates their own custom AI for analyzing privacy policies, you’re going to need to have a personalized model for each person. And once you do that, you’re collecting their data to generate this personalized model. You could set it up in a way where the data never leaves the sovereignty of the user, but at the end of the day, I really don’t think it makes sense to spend all that much effort training an AI to do it. 

So, I think there’s a lot we, as an industry, can do to create easy-to-read overviews of policies. And if we think that diffing is a useful tool, then we should think about how we can standardize the representation of these policies so that users can say, “I want to compare brand A’s privacy policy with brand B’s.” So to use a metaphor that I heard, your AI products should have nutritional fact labels on them telling the user what data they collect and for what purpose. That’s the dream. I believe in humanity’s ability to do this.

Gershteyn: But I think the legalese is actually the bigger problem. Philip mentioned capacity as one of the core requirements for a contract to have validity. Capacity is so unequal between the consumer and the group of lawyers who are drafting the privacy policies that, due to their complexity and their length, nobody reads them. Contracts need to be understood by all parties and nutritional fact labels are something that gets away from the legalese and gives you a clear, fair picture of what information you’re giving away. That’s the way forward, and that’s really what needs to happen. But unfortunately, there’s every incentive against that.