3 steps to prioritize responsible AI

Artificial intelligence (AI) continues to be a major growth driver for companies of all sizes undergoing digital transformation, with IDC predicting that AI spending will reach $500 billion by 2023. AI is helping organizations identify new business models, increase revenue, and leverage competitive advantage.

But along with these opportunities, AI also brings great responsibilities.

AI systems and pipelines are complex. They have many potential failure points, from underlying data problems to errors in tooling that adversely affect people and processes. As AI becomes ubiquitous, prioritizing its responsible use can mean both increased revenue streams and assurance that individuals and communities are protected from risk and harm.

In response to these heightened risks, lawmakers around the world are stepping up with guidelines for AI systems and their appropriate use. The latest comes from New York City, where employers who use AI hiring tools will face penalties if their systems produce biased decisions. As a result of such mandates, technology developers and governments must strike a delicate balance.

[ Also read AI ethics: 5 key pillars. ]

By following this simple three-step process, organizations can better understand responsible AI and how to prioritize it as their AI investments evolve:

Step 1: Identify common failure points and think holistically

The first step is to realize that most AI challenges start with data. We can blame the AI model for incorrect, inconsistent, or biased decisions, but ultimately, it is data pipelines and machine learning pipelines that guide and transform data into something actionable.


When we take this broader view, stress testing can cover multiple points across raw data, AI models, and predictions. To better evaluate an AI model, ask these key questions (a short code sketch of the first few checks follows the list):

  • Was an under-sampled data set used?
  • Was it representative of the class being trained against?
  • What validation data set was used?
  • Was the data labeled correctly?
  • When the model encounters new runtime data, does it continue to make correct decisions even though that data differs slightly from what it was trained on?
  • Was the data scientist equipped with appropriate training guidelines to ensure features were appropriately selected for training and retraining the model?
  • Does my business rule take the appropriate next step, and is there a feedback loop?
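
As a rough illustration of the first few checks above, the following Python sketch flags under-sampled classes and holds out a stratified validation set. The file name, label column, and 10 percent threshold are assumptions made for this example, not recommendations from the article:

    # Minimal pre-training checks: class balance and a stratified holdout set.
    # Dataset, column names, and thresholds below are illustrative assumptions.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    def check_class_balance(df: pd.DataFrame, label_col: str, min_share: float = 0.10) -> None:
        """Warn about classes that are under-sampled relative to a chosen threshold."""
        shares = df[label_col].value_counts(normalize=True)
        for cls, share in shares.items():
            if share < min_share:
                print(f"Warning: class '{cls}' is only {share:.1%} of the data; "
                      "consider resampling or collecting more examples.")

    def split_with_holdout(df: pd.DataFrame, label_col: str):
        """Keep a stratified validation set so evaluation reflects the real class mix."""
        return train_test_split(
            df.drop(columns=[label_col]), df[label_col],
            test_size=0.2, stratify=df[label_col], random_state=42,
        )

    # Usage (hypothetical file and column names):
    # df = pd.read_csv("training_data.csv")
    # check_class_balance(df, label_col="outcome")
    # X_train, X_val, y_train, y_val = split_with_holdout(df, label_col="outcome")

Checks like these do not replace human review of labeling quality or feature selection, but they make the questions above part of the pipeline rather than an afterthought.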

By taking this broader perspective, organizations can think about safeguards across people, processes, and tools. Only then can they build a robust, adaptable framework to address the failure points in the system.

Step 2: Create a multidisciplinary framework

When it comes to creating a framework based on people, processes and tools, how do we apply it to complex data and AI lifecycles?


Data lives everywhere: on premises, in the cloud, and at the edge. It is consumed in both batch and real time across hybrid infrastructures, allowing organizations to deliver insights closer to where that data resides. And with low-code and automated tooling, users of varying skill levels can now build sophisticated AI models.

Meeting the evolving needs of AI workloads requires a strong technology foundation, as well as the AI technologies themselves: libraries, open source tools, and platforms. Common capabilities found in this foundation include, but are not limited to, the following (a drift-detection sketch follows the list):

  • Bias detection in both AI model training and deployment
  • Detecting drift in model deployment
  • Explainability in both model training and deployment
  • Anomaly and intrusion detection
  • Governance from data through policy management
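
To make the drift item above concrete, here is a minimal Python sketch that compares a feature's training distribution against a window of recent runtime values using a two-sample Kolmogorov-Smirnov test. The feature name, window, and significance threshold are illustrative assumptions; the platforms described above typically provide this capability, and far more, out of the box:

    # Minimal drift check on one numeric input feature of a deployed model.
    # Feature name, window size, and alpha are illustrative assumptions.
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_drift(train_values: np.ndarray, live_values: np.ndarray, alpha: float = 0.01) -> bool:
        """Return True if the live distribution differs significantly from training."""
        stat, p_value = ks_2samp(train_values, live_values)
        if p_value < alpha:
            print(f"Drift detected: KS statistic={stat:.3f}, p-value={p_value:.4f}")
            return True
        return False

    # Usage (hypothetical feature and data sources):
    # train_ages = np.load("training_feature_age.npy")
    # live_ages = np.array(recent_request_age_window)
    # if detect_drift(train_ages, live_ages):
    #     flag_model_for_retraining_review()

A detection like this is only a starting point; as noted below, stronger platforms pair it with a remediation path such as retraining or rollback.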

More advanced platforms offer remediation paths for many of the above capabilities, ensuring you have a path beyond detection alone.

However, a technology-based approach is not enough. Responsible AI requires a culture that embraces the technology. It starts with executive management teams setting company-wide objectives to raise awareness and establish priorities. Steering committees and authorities dedicated to the responsible AI mission are critical to its success. It is also becoming common for organizations to appoint a chief AI ethicist, who oversees the company's internal practices and often acts as a conduit for managing its external social mandates and responsibilities.

Step 3: Invest, build skills, and educate

In the final step, we need to have a bias (no pun intended) for action.

Tackling the technology challenges associated with AI requires investing in a diverse skill set, which is not an easy task. Educating internal teams using content developed by large technology companies such as Google, IBM, and Microsoft can help bootstrap those skill sets.

Educating lawmakers and government officials is also important. Regular briefing and coordination with local and federal agencies helps policy keep pace with the rate of technological innovation.

As AI becomes more pervasive, setting up guardrails for both technology and society is more important than ever. Organizations looking to understand, build, or use AI should prioritize responsible AI. These three steps will equip leaders to promote responsible AI within their organizations, build the foundation for it, and sustain it.

[ Want best practices for AI workloads? Get the eBook: Top considerations for building a production-ready AI/ML environment. ]
