“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness…”
No matter how you choose to describe our current age, there’s little doubt we are living in very consequential times, and nowhere is that more apparent than in the rapid proliferation of Artificial Intelligence (AI).
According to Harvard Business Review (HBR), AI will be around for a while and it’s a worthwhile long-term investment.
“In our view AI will become a permanent aspect of the business landscape and AI capabilities need to be sustainable over time in order to develop and support potential new business models and capabilities.”
Experts in AI don’t agree on much, but many concur on the potential dangers of unchecked AI and that the decisions we make about AI today may well decide the future fate of humanity.
For consumers faced with the insidious pervasiveness of discriminatory AI algorithms, and with autonomous “killer” robots looming on the horizon, there is an urgent need for open discussion and immediate action to address the perils of unchecked AI.
And yet, one after another, we’ve seen leading tech companies falling all over each other trying to get it right. Many of their efforts are thinly veiled PR attempts at damage control, while others resort to knee-jerk attempts to quash any dissent. There’s been a steady stream of high-profile councils and boards, but without a clear mission and mandate, some have floundered and failed.
There is some glimmer of hope as the EU recently published their first comprehensive ethical AI guidelines, global organizations like IEEE introduced their guidance for Ethically Aligned Design, and forward-thinking corporations like Salesforce made Ethical AI their central focus.
Despite this steady progress, it’s becoming increasingly clear that ethical AI initiatives built on small tactics and gestures can’t match the pace and big budgets of AI innovation. Big problems demand bold action, and for global organizations this means launching a Center of Excellence for Ethical AI.
What is a Center of Excellence?
“If you want to go fast, go alone. If you want to go far, go together.”
A Center of Excellence (COE) is not a new concept. It is a central group that typically sits at the core of a hub-and-spoke model in a large global organization and coordinates efforts across the different spokes, i.e., divisions or business units.
As the HBR article goes on to say,
“The idea of establishing a CC (Competence Center) or COE (Center of Excellence) in AI is not particularly radical. In one recent survey of U.S. executives from large firms using AI, 37% said they had already established such an organization.”
To accelerate the adoption of ethical AI, global companies need a centralized approach. COEs are especially effective during times of significant (technological) changes in the industry or when there’s a sustained need to coordinate diverse time-critical activities across a global landscape.
Why do companies need a COE for Ethical AI?
While companies are setting up centralized organizations to rapidly scale their AI innovation, there needs to be a similar emphasis on providing ethical guidance for those initiatives through a dedicated COE focused on that mission. In addition to keeping up with AI innovation, there are many other strategic reasons for setting up a COE for ethical AI.
Ethical AI requires a broader lens.
There is a misguided tendency to force-fit ethical AI into the data/analytics or IT/engineering functions because AI has primarily been the domain of technologists. That is slowly changing as the significant impact of AI on non-technical functions becomes apparent. Ethical AI is no longer limited to the development of machine learning (ML) models; it also extends to non-technical domains such as customer experience, through new AI-powered engagement channels, and HR/training, which has to deal with job displacement due to automation. It’s especially critical that the focus of ethics is not just the bottom line but is also aligned with the company’s broader ethical, governance, and Corporate Social Responsibility mission.
Ethics needs more bridges not moats.
The ethical AI debate has become a battleground for “us” (ethicists, social scientists, other non-technical functions) vs. “them” (AI/ML developers, data scientists, engineers). Time and again, I’ve heard from data scientists and engineers working on AI/ML projects about the disconnect between ethical requirements and the business goals they’re responsible for. According to many, their only goal is to deliver the project on time, while ethics is someone else’s problem. A COE can help build bridges between diverse groups, making it easier for them to communicate and succeed in their roles through a shared objective rather than creating new hurdles for them to overcome.
Staying on top of a rapidly changing ethical landscape.
The AI landscape is constantly evolving, and it’s challenging for global organizations to stay on top of the constant changes in AI developments, let alone manage the ethical and regulatory implications. Over a decade ago, during the nascent days of social/digital media, I joined HP’s web COE. Building their first centralized social media organization gave me a deep appreciation for COEs and why they are critical for staying on top of a rapidly changing regulatory and ethical landscape. A COE allows an organization to gather, organize, and analyze emerging ethical research trends and threats and to respond efficiently, responsibly, and consistently with one voice.
Sharing information quickly and efficiently.
Information hoarding is a big challenge in traditionally bottle-necked organizations, especially when there aren’t many incentives or easy processes for sharing information. Even companies that sell efficiency in the form of automation are not immune to the struggles of managing the ethical considerations of AI, as Facebook has demonstrated many times over. COEs boost productivity by empowering employees with best practices, consistent standards for ethics, and integrated processes, so ethics isn’t just an afterthought or a superficial PR exercise.
COEs have a mixed track record
Despite their advantages, some COEs fail, which is not surprising to anyone who understands how large companies operate and is familiar with the challenges of setting up a new organization.
Organizationally, companies cycle through periods of centralization followed by decentralization. COEs in decentralized companies need a massive cultural change and buy-in from all levels of management to be successful. It can take a long time to get people comfortable with the idea of pooling resources and talent for the benefit of the entire organization. COEs are an especially hard sell in overachieving cultures where control over budgets, teams, and resources is highly coveted. Change management should be part of the planning process for any COE and included in the broader effort to make ethics an integral part of the organizational DNA.
Lack of diversity.
When organizations fail, it’s often because of a failure in leadership rather than a lack of talent. It’s a manifestation of a harmful tendency among management teams to hire people just like themselves. The lack of diversity among the faculty members of the recently launched Stanford Institute for Human-Centered AI (HAI) serves as an ironic example of an organization that doesn’t reflect the values it promotes, namely diversity. A competent and diverse leadership team can go a long way toward ensuring credibility and building a strong organization with fewer racial, gender, and expertise blind spots.
Lack of right skill set.
Building and leading global COEs requires a unique skill set, which includes building, collaborating, negotiating, influencing, and managing expectations across global boundaries. The right leader will develop efficient processes for gathering emerging trends in ethical AI, understand their ethical implications, translate those into meaningful insights, and communicate and implement best practices across the global organization.
That said, the concept of a centralized organization may seem alien to organizations still trying to figure out how to get started in AI and understand the ethical implications of their projects. But setting up the right structure is essential for the long-term success of any strategic technology, especially one as critical as AI, which, as the experts concur, is here to stay.
Author: Mia Dand is a strategic digital marketing leader and passionate diversity-in-tech advocate with extensive experience building customer-centric programs at global companies like Google, HP, eBay, Symantec, and others. As the CEO of Lighthouse3, an emerging tech research and advisory firm based in Oakland, California, Mia excels at identifying key industry trends and guiding F5000 companies on the responsible adoption of new and emerging technologies like AI for successful business outcomes. Mia is also the author of “100 Brilliant Women in AI Ethics,” a definitive guide to help global organizations recruit more talented women in this space. She is the organizer of the SF AI, Berkeley AI, and SF AR/VR meetup groups, with over 3.5K members in the San Francisco Bay Area, and hosts monthly AI Ethics chats on Twitter (@MiaD).