AI, Robots, & Deception

May 27, 2019 LH3_Admin

New developments in Artificial Intelligence (AI) are pushing the ethical boundaries of what is or isn’t acceptable use of AI and raise many questions about its potential to do significant damage. To stay on top of the ethical issues in AI, Mia Dand, CEO of the research and strategic consulting firm Lighthouse3 and author of “100 Women in AI Ethics,” hosts a monthly chat with experts to examine the potential impact of these developments on society and humanity.

For our May 2019 Twitter Chat, we invited Dr. Julie Carpenter (Twitter: @JGCarpenter), Research Scientist and Author of “Culture & Human-Robot Interaction in Militarized Spaces: A War Story” to discuss “AI, Robots & Deception”. Here are the key highlights from that chat.

Mia Dand: Should (ro)bots be required to disclose they are not human during their interactions with humans?

Dr. Julie Carpenter: Let’s give the word “robot” a definition for the purpose of at least this question. A “traditional” robot, the sort most people think of when they hear the word, could be defined as embodied AI with physicality in the world, a machine that can automate certain actions and learn from its environment. That’s an imperfect definition whose points (e.g., embodiment) people can argue endlessly; there is no ONE accepted definition of the word “robot.” For the sake of a Twitter Q&A, though, let’s keep it simple. People commonly know about software robots, or bots, too. Our ideas about robot “deception” will change over time. We’re in many ways still negotiating how to interact with robots, whether over the phone or in person. People do not like to feel deceived, and people designing these experiences with AI and robots need to respect that.

MD:  Is it ethical to develop AI to the point of sentience or consciousness?

JC: Can we or should we? It depends on how you regard sentience: will it look like human sentience? In my opinion, a sense of self in AI may occur without our planning for it or even recognizing it, because we are so focused on the human model of sentience. AI will know the world in a different way than we do. Does it need sentience? Not in all cases, but perhaps some robots or AI that interact with humans will need a framework for reflecting on and understanding themselves and their true impact on the world around them. The question absolutely hasn’t been answered in terms of consensus among the many groups involved in AI and robotics. This is an ongoing discussion as the technology and society change.

MD: What is the danger in us feeling humanlike love and affection for AI? 

JC: Assuming the person understands it is an AI-based thing (and we are not talking about deception), many of the same “dangers” people generally face in loving relationships apply: frustration if love is not returned in the ways the person desires, sadness if the object of their love is sick/broken, anger or embarrassment if their social circle doesn’t accept or judges the object of their love. And then, I’m sure, there are risks unique to human-AI relationships that we have not yet foreseen.

MD: Okay, let’s switch gears from love/relationships to war: is it ethical to have robots fight our wars?

JC: That question is phrased in an interesting way. Robots don’t “fight” in the sense that they don’t have humanlike anger or internal humanlike motivations, like establishing territory or the other reasons humans fight wars. They can obviously be weapons, though. The technology certainly is not there now, and even in the future, AI will have a really difficult time understanding context (e.g., of situations and people). But even if that were somehow established, I would find it problematic in many ways. Robots as tools in war spaces? That makes sense; militaries have always sought new technologies to aid them in their goals. Furthermore, if a robot can help keep people safe (e.g., an explosive ordnance disposal, or EOD, robot), that is a great tool.

MD: How can we keep the AI/Robot-Human interactions beneficial for humanity?

JC: That is a huge question and I wish I had a simple answer. We have to iteratively and constantly ask this question in every aspect of research, design, development, and deployment. GLOBAL, interdisciplinary, difficult, long discussions.

MD: Here’s a question from our audience. How useful and realistic is the demand for an international ban on autonomous weapons?

JC: I think it is absolutely necessary to have this point of view represented at all levels of discussion. People like Noel Sharkey are making great strides towards a ban at the UN level. Enforcement of any ban is challenging, but we need the legislation.

If you’d like to learn more about Dr. Julie Carpenter’s work, check out http://jgcarpenter.com