Leading a team or organization in the era of artificial intelligence doesn’t just call for technical chops; it also demands business sense, empathy, foresight, and ethics.
The key is to understand what AI means, and that doesn’t mean AI-powered solutions dropped into organizations with expectations of overnight miracles. AI is “not humans plus machines, but humans multiplied by machines, augmenting and assisting our capabilities across the existing and emergent tasks performed in our organizations,” according to Paul McDonagh-Smith, senior lecturer at MIT, quoted in MIT Sloan Management Review.
People outside the AI technical bubble need a solid understanding of how AI works and what is required to make it responsible and productive. That requires training and a forward-looking culture. People need to be brought up to speed on the latest technologies that are, in one way or another, redefining their jobs and their organizations. “AI skills are not overly abundant, so organizations need programs to upskill and train employees on technical skills and technical decision-making,” says McDonagh-Smith. “Culture is a big part of the equation: organizations will need to create silo-busting cross-functional teams, make failure permissible to encourage creativity, and encourage innovative ways to combine human and machine capabilities in complementary systems.”
Industry leaders from across the spectrum echo the sentiment highlighted in the Management Review article. Along with business savvy and a culture of innovation, ethics also needs to be top of mind. This calls for more diverse leadership of AI initiatives — by business strategists, users, technologists, and people from the humanities side.
Essentially, we are experiencing a “once-in-a-decade moment,” and the way we lead people to embrace this moment will shape the AI revolution for years to come, says Mark Surman, president and executive director of Mozilla Foundation.
Currently, the people running AI tend to be “project and product managers running delivery, data engineers and scientists building data pipelines and models, or DevOps and software engineers building digital infrastructure and dashboards,” says Mike Krause, an AI startup founder and former director of data science at Beyond Limits.
The risks of confining AI to the technology side “include compromising the AI tools’ ability to adapt to new scenarios, alignment with the companies’ objectives and goals, accuracy of responses due to data hallucinations as well as ethical concerns including privacy,” says Pari Natarajan, CEO of Zinnov. “But the risks go beyond data hallucinations or a lack of nuanced understanding. Technologist-only led AI initiatives risk optimizing for the wrong objectives, failing to align with human values like empathy and compassion, lacking checks and balances, and exacerbating bias.”
Everyone’s goal should be to develop and ensure nothing short of trustworthy AI, Surman urges. This means creating a system or product “that prioritizes human well-being and gives users an understanding of how it works. For example, imagine a personal assistant that kept all your personal data locally, getting better as you use it but keeping you private. Or a social media app whose algorithms you could tune to help you improve your mental health rather than erode it. These things are possible.”
This means collaborating closely with end users, customers, and anyone else who will be relying on the output of these systems. “Build out AI in a way that puts humans in the driving seat,” Surman says. “This is paramount for circumventing the risks we are already seeing – such as consumers receiving bad medical advice from a chatbot, or algorithms sending technology job openings to men but not women.”
Bringing non-technical, humanities-oriented players into the AI management mix should be organic and encouraged by the culture, not forced by management edict. In other words, it is a balancing act, and for organizations with rigid, hierarchical cultures, it could be an uphill climb.
Users, both inside and outside the organization, need to be leading this effort. Dr. Bruce Lieberthal, vice president and chief innovation officer at Henry Schein, sees the risk of inadequate user collaboration in the healthcare sector. “Creators of technology for use by health care professionals and their patients sometimes work in a black box, making decisions that don’t factor in the user adequately,” he warns.
“AI is best developed collaboratively – software teams need to meet with users and those most affected by the product to stay honest,” Lieberthal adds. “The software should constantly be vetted with the same people to make sure that what was imagined by users hits the mark when developed and deployed.”
On the humanities side, there are efforts to “bring ethicists into the conversation at some level,” says Krause. “But it’s often not clear what authority or role they actually play.” The challenge, he continues, is that “imposing a philosopher into a technical delivery team is not going to be well received or lead to a net positive outcome for the business or user.”
Such humanities-oriented roles would need to be advisory rather than decision-making, Krause suggests.
Still, he points out, highly diverse AI teams add safeguards that may help organizations avoid headaches or wasted investments. “Analyzing a problem from an ethical or philosophical perspective would bring a lot of questions a purely technical team would likely not have asked,” he says. “More importantly, technical teams are not empowered, asked, or expected to think of the implications of what they’re building, rather they are expected to focus on and execute what they’re tasked with. There is risk in not exploring unintended side effects of how AI is ultimately used.”
The growing democratization of AI, thanks to offerings such as GPT and Google, will help open AI decision-making to more diverse leadership. “We must be wary of dilettantism – interdisciplinary and multidisciplinary collaboration is needed,” says Natarajan. “People with backgrounds in ethics, cognition, sociology, decision theory and other humanities or social sciences fields can provide vital diverse perspectives, along with subject matter experts to ground AI in practical, real-world context.”
Such potential new roles “signify a new maturity and caution in integrating AI, learning from problems like algorithmic bias,” Natarajan observes. “They help retain human oversight and control as AI grows more capable and autonomous.” These roles may include chief AI ethics officers, AI ombudsmen, AI compliance officers, AI auditors, and AI UX designers or curators.