Blog | January 5, 2022
4 min read

Consulting with Integrity: ‘Responsible AI’ Principles for Consultants

Third-party AI consulting firms engaged in multiple stages of AI development must point out any ethical red flags to their clients at the right time. This article delves into the importance of a structured ethical AI development process.

AI goes rogue and enslaves or wipes out humanity: fiction is full of such doomsday plots. That existential risk may be far-fetched, but even today's Narrow AI can have a profound impact on people's lives. AI developers and leaders around the world have an ethical obligation toward society: a responsibility to build systems that benefit the people and the environments they touch.

AI can go wrong in many ways, with unintended consequences in both the short and long term. In one widely reported case, an algorithm used to prioritize patients for care was found to reinforce racial bias by assigning lower health-risk scores to Black patients. The algorithm used patients' historical healthcare spending as a proxy for future health needs, and because less had historically been spent on their care, it systematically underestimated their risk. Left unchecked in operation, such a bias becomes a self-fulfilling prophecy that widens healthcare disparities.

In another incident, Microsoft bore the brunt when Tay, its millennial-targeted chatbot, was goaded into posting offensive content on social media and had to be taken offline within 16 hours of going live.

Only the juiciest stories make the front page, but the ethical conundrum runs deep for any organization building AI-driven applications. Leading organizations have converged on a core set of principles for the ethical development of AI: Fairness, Safety, Privacy, Security, Interpretability, and Inclusiveness. Numerous product-led companies champion responsible, human-centric AI. But these products are rarely built by a single team. Often, a combination of pre-packaged software brings the AI use case to fruition; in other cases, specialized AI consulting companies bring bespoke solutions, capabilities, datasets, or skill sets to match the speed and scale of AI development.

Because third-party AI consulting firms are involved in the various phases of AI development (data gathering and wrangling, model training and building, and finally model deployment and adoption), it is crucial for them to understand the reputational implications for their clients of even a mildly rogue AI. Without the right systems in place, AI development teams scramble to solve issues as they arise, brewing a regulatory and humanitarian storm. It is therefore imperative for these consulting or vendor organizations to follow a structured process for ethical AI development. The salient points of such a process include:

1. Recognize and flag an AI ethical issue early.

We can solve ethical dilemmas only if we have mechanisms to recognize them. A key first step in any AI ethical quandary is locating and isolating the ethical aspects of the issue. This means educating employees and consultants alike in AI ethics sensitivity. Experienced data modelers on the team should have the trained eye to spot violations of the core ethical principles in their custom-built solutions, and a simple statistical check, like the sketch below, can serve as an early tripwire.
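For instance, a selection-rate comparison across a protected attribute is one of the cheapest flags a modeler can raise. The sketch below uses pandas with invented column names; the 80% threshold is the common "four-fifths" heuristic, not a legal test.

```python
import pandas as pd

# Hypothetical scored dataset: each row records a model decision ("approved")
# and a protected attribute ("group"). Column names and values are invented.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Selection rate per group: the share of positive decisions each group receives.
rates = df.groupby("group")["approved"].mean()

# Four-fifths heuristic: flag for ethics review if the lowest selection
# rate falls below 80% of the highest.
disparate_impact = rates.min() / rates.max()
if disparate_impact < 0.8:
    print(f"Flag for ethics review: disparate impact ratio = {disparate_impact:.2f}")
```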

2. Documentation helps you trace unethical behavior.

Documenting how key AI services operate, how they are trained, their performance metrics, their fairness and robustness, and their systemic biases goes a long way toward avoiding ethical digressions. The devil is in the details, and details are best captured in documentation. Model cards are one popular, lightweight format for this, as sketched below.
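Below is a minimal model-card sketch in Python, kept next to the model artifact so the record travels with it. The fields and values are illustrative, not a formal standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal model card capturing the facts an ethics review needs.
    The fields here are illustrative, not an exhaustive standard."""
    name: str
    intended_use: str
    training_data: str
    performance_metrics: dict
    fairness_checks: dict
    known_limitations: list = field(default_factory=list)

# All entries below are invented for illustration.
card = ModelCard(
    name="claims-risk-scorer-v2",
    intended_use="Rank insurance claims for manual review; not for auto-denial.",
    training_data="2019-2021 claims, US only; provenance in the data sheet.",
    performance_metrics={"auc": 0.87, "precision_at_10pct": 0.62},
    fairness_checks={"disparate_impact_by_age_band": 0.91},
    known_limitations=["Under-represents rural claimants"],
)

# Persist alongside the model artifact so documentation and model stay in sync.
print(json.dumps(asdict(card), indent=2))
```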

3. Work in tandem with the client’s team to understand business-specific ethical risks within AI.

Similar industries share common themes across their AI risks. A healthcare or banking company must build extra guardrails against probable violations of privacy and security. E-commerce companies, pioneers of state-of-the-art recommendation engines, must keep their ears and eyes open to mitigate associative bias that reinforces stereotypes about certain populations. Identifying such risks narrows the search for probable violations.

4. Use an ethical framework like the Consequentialist Framework for an objective assessment of ethical decision-making.

A consequentialist framework evaluates an AI project by its outcomes. Such frameworks help teams think through probable ethical implications before launch. For example, a self-driving system with even a remote possibility of failing to recognize pedestrians wearing face masks could be fatal and should never make it to market. A toy calculation like the one below can make that veto logic concrete.
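As a toy illustration of that outcome-based reasoning, consider scoring foreseeable outcomes by probability and harm, with a hard veto on catastrophic ones. All numbers below are invented.

```python
# Toy consequentialist screening: weigh each foreseeable outcome's harm
# by its probability. Probabilities and harm scores are purely invented.
outcomes = [
    # (description, probability, harm score 0-10)
    ("works as intended",                      0.970, 0),
    ("fails to detect a masked pedestrian",    0.001, 10),
    ("phantom braking causes a rear-end bump", 0.029, 3),
]

expected_harm = sum(p * harm for _, p, harm in outcomes)
print(f"Expected harm per trip: {expected_harm:.3f}")

# The gate the article argues for: a catastrophic outcome, however
# unlikely, vetoes the launch regardless of the expected value.
if any(harm >= 10 for _, _, harm in outcomes):
    print("Catastrophic outcome possible: do not ship.")
```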

5. Understand the trade-off between accuracy, privacy, and bias at different stages of model evaluation.

Data scientists must be cognizant that their ML models should be optimized not only for high accuracy but also for low (unwanted) bias. Like any other non-binary decision, leaders should be aware of this trade-off too. Fairness metrics and bias-mitigation toolkits such as IBM's AI Fairness 360 can be used to measure and mitigate unwanted bias in datasets and models, as the sketch below illustrates.
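As a rough sketch of how such a toolkit fits into the workflow, the snippet below measures disparate impact on a tiny invented dataset and applies AI Fairness 360's Reweighing pre-processor. A real project would run this on its actual training data, with its own protected attributes.

```python
# Sketch using IBM's open-source AI Fairness 360 toolkit (pip install aif360).
# The tiny dataset and the "sex" attribute here are purely illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],          # 1 = privileged group
    "score": [0.9, 0.4, 0.8, 0.7, 0.3, 0.2, 0.6, 0.1],
    "label": [1, 1, 1, 0, 0, 0, 1, 0],          # 1 = favorable outcome
})
train = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# A disparate-impact ratio near 1.0 means both groups receive the
# favorable label at similar rates; on this toy data it starts at 0.33.
metric = BinaryLabelDatasetMetric(
    train, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights to decorrelate group and label,
# trading a little raw accuracy for lower unwanted bias downstream.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
train_rw = rw.fit_transform(train)
metric_rw = BinaryLabelDatasetMetric(
    train_rw, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact after: ", metric_rw.disparate_impact())
```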

6. Incentivize open-source and white-box approaches.

An open-source, explainable AI approach is crucial to establishing trust between vendors and clients. It helps verify that the system is working as expected, and it lets anomalies be traced back to the precise code or data item that introduced them. The ease of regulatory compliance that open approaches afford makes them a favorite in the financial services and healthcare sectors. Explainability tooling, sketched below, is a practical starting point.
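Here is a minimal sketch of that white-box idea using the open-source SHAP library on an illustrative scikit-learn model: each prediction is attributed back to individual input features, so a surprising score can be traced to the data that drove it. The model and data are stand-ins; any tree-based model trained on a client's data would slot in the same way.

```python
# Minimal explainability sketch (pip install shap scikit-learn).
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a client dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes a prediction to each input feature, so an
# anomalous score can be traced back to the features that produced it.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for feature, contribution in zip(range(X.shape[1]), shap_values[0]):
    print(f"feature_{feature}: {contribution:+.3f}")
```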

7. Run organizational awareness initiatives.

An excellent data scientist may not be aware of the ethical implications of autonomous systems. Organizational awareness, adequate training, and a robust mechanism for surfacing AI risks should be embedded in company culture and values. Employees should be incentivized to escalate even the smallest of such situations, and an AI ethics committee should be formed to give on-the-ground teams broader guidance on grey areas.

Final Thoughts

Each of these steps rests on smooth coordination between vendor and client teams united by a shared, responsible vision. Vendors should not hesitate to surface any AI ethical risks they may be running on behalf of their clients. Clients, meanwhile, should involve their strategic vendors in such discussions and training. The whistleblowers for AI ethical risks will often be analysts and data scientists, but they cannot flag those issues unless a top-down culture encourages them to look for them.
