Four ways nontechnical leaders can foster a culture that values ethical AI.
May 10, 2024
Researchers engaged with organizations across a variety of industries, each at a different stage of implementing responsible AI. They found that although data engineers and data scientists typically bear most of the responsibility across the AI development lifecycle, from conception to production, nontechnical leaders can play a key role in ensuring that responsible AI takes hold. They identified four key moves (translate, integrate, calibrate, and proliferate) that leaders can make to ensure that responsible AI practices are fully embedded into broader operational standards.
When the EU Parliament approved the Artificial Intelligence (AI) Act in early 2024, Deutsche Telekom, a leading German telecommunications provider, felt confident and prepared. Since establishing its responsible AI principles in 2018, the company had worked to embed these principles into the development cycle of its AI-based products and services. “We anticipated that AI regulations were on the horizon and encouraged our development teams to integrate the principles into their operations upfront to avoid disruptive adjustments later on. Responsible AI has now become part of our operations,” explained Maike Scholz of Group Compliance and Business Ethics at Deutsche Telekom.
Tomoko Yokoi is a researcher and advisor in digital transformations at IMD Business School and ETH Zurich. She is the co-author of Hacking Digital: Best Practices to Implement and Accelerate Your Business Transformations.