6 Reasons AI Ethics in Corporations is All Talk and No Action

September 28, 2018 LH3_Admin

Artificial Intelligence (AI) has permeated every industry, from finance to automotive and even fashion. The fear of terrifying human-like sentient machines perpetuated by Hollywood, combined with dire warnings from experts, has led to a flurry of well-deserved buzz around the ethics of AI.

The corporate world has responded with a slow trickle of announcements touting brand-new AI ethics codes and guidelines. But a recent interview with Neil Raden, the founder of Hired Brains, highlighted that while having AI ethics on the discussion agenda is a good start, getting companies to adopt them in any meaningful way continues to be a challenge.

So let’s take a closer look at why concrete action on AI ethics still lags far behind all the talk.

6 REASONS AI ETHICS IN CORPORATIONS IS (MOSTLY) ALL TALK & NO ACTION

1) Ethics is not sexy.

Let’s get real. In the corporate world, bright shiny objects and initiatives with a direct link to revenue generation are more likely to get visibility and funding. Ethics is neither glamorous nor sexy. No one gets an award or promotion for saving the company (and possibly the human race) from a potential ethical crisis many decades into the future. AI ethics discussions are typically relegated to some “expert committee” that meets semi-regularly. It’s anyone’s guess how much of their input and feedback is considered for implementation.

2) Speed to market is everything.

As a newbie employee at a leading tech company, the first piece of advice I received from a top leader was to “Run as fast as you can.” The corporate world is very Darwinian, and there are no consolation prizes for slowing down or being fastidious in a highly competitive market. Unless ethics are integrated into the company’s processes or are required, most employees will choose the path of least resistance and skip right past them.

3) AI may be forever but most CEOs are not.

According to a recent study, CEO turnover has risen over the past years, and median tenure at large-cap companies is five years. In this short time period, CEOs are typically focused on keeping Wall Street happy, which makes it challenging to get their attention for issues that don’t contribute to the bottom line. Also, questionable practices that may cause problems for a successor down the road are unlikely to get prioritized because of this short-term focus.

4) Oversimplification vs. paranoia.

When it comes to AI, there seem to be two extreme schools of thought. On one side are those who believe all AI issues can be solved by well-intentioned technologists. On the other side, risks are over-hyped to such an extent that no solution is good enough. The challenge is convincing companies to take a more balanced approach that weighs all benefits and risks, while keeping humans at the center of this very important discussion.

5) Carrots or sticks.

To convince human beings (CEOs included) to change behavior, there needs to be an incentive or consequence. Today, the primary incentive to drive adoption of AI ethics is the warm, fuzzy feeling of doing the right thing. Government/regulatory agencies can be effective in “nudging” companies to adopt ethical policies and some like the U.K. have stepped up. However, in the current political climate, ethics have become a matter of opinion and vary wildly based on political affiliation so any meaningful regulation is unlikely to garner bipartisan support.

6) Talk is easy, action is hard.

In a global executive survey on AI adoption, Rumman Chowdhury, lead for Responsible AI at Accenture, shared that AI ethics codes in many companies “are more directional than prescriptive.” Even in companies with the right leadership, there is a huge gap in skills and expertise to fully understand all the risks of AI, let alone figuring out how to address them.

WHAT CAN WE DO ABOUT IT?

Setting the hand-wringing and navel-gazing aside, let’s look at some ways to effectively increase the adoption of AI ethics in the corporate world.

Influence at the top.

A recent SAS survey shows that a majority of companies with successful AI implementations have an AI ethics training program in place. Organizations with enlightened leadership that believes in the importance of AI ethics are already set up for success. For others, an executive-level briefing is a good way to familiarize management with the risks and implications of AI.

New doesn’t mean reinventing the wheel.

Companies don’t need to start from scratch or create their AI principles in a vacuum. Existing values and mission statements are a great starting point for any AI ethics code, as long as it covers, at a minimum, three core areas: fairness, accountability, and transparency.

Start at the beginning.

Irina Raicu, Director of Internet Ethics at the Markkula Center for Applied Ethics, recommends including training on AI ethics in your new-employee onboarding process. Early exposure will help set the right tone for employees and ensure ethics are tightly integrated into the company culture.

Integrate checks and balances.

Regular training and feedback loops are essential so that ethics don’t become an afterthought. A leading financial institution has adopted protocols whereby testing and checking of AI algorithms is done by a different team than the one building them. This helps the organization catch unconscious bias introduced into the algorithms by the developers.
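What might that independent check look like in practice? Here is a minimal sketch of one common audit an outside team could run on a model’s held-out predictions: comparing positive-outcome rates across demographic groups against the “four-fifths rule.” The function names, group labels, and the 0.8 threshold are illustrative assumptions, not the institution’s actual protocol.

```python
# Hypothetical bias audit run by a team that only sees the model's
# predictions, not its training code. Names and threshold are illustrative.

def selection_rates(groups, predictions):
    """Positive-prediction rate for each demographic group."""
    totals, positives = {}, {}
    for g, p in zip(groups, predictions):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if p else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(groups, predictions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the best-served group's rate (the four-fifths rule)."""
    rates = selection_rates(groups, predictions)
    highest = max(rates.values())
    return {g: r / highest < threshold for g, r in rates.items()}

# Auditors feed in predictions on a held-out sample:
groups = ["A", "A", "A", "B", "B", "B"]
preds  = [1,   1,   1,   1,   0,   0]
print(disparate_impact_flags(groups, preds))  # {'A': False, 'B': True}
```

The point is less the specific metric than the separation of duties: the auditing team can run checks like this without any stake in shipping the model.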

Include diverse perspectives.

AI has traditionally been the realm of technologists, but it requires a more collaborative and inclusive approach. Ethicists, philosophers, privacy advocates, and end users should be included in any AI ethics discussion to make sure solutions and outcomes are human-centric, not purely technology-centric.

Support the good cause.

Last but not least, here is a list of 12+ noteworthy organizations dedicated to tackling the dark side of AI and actively shaping the future of responsible AI. Learn from them, support their work, and implement their expert recommendations wherever possible.

Share your experience on adoption of ethical and responsible AI in your organizations in the comments below or tweet them to @MiaD.