Implementing AI: 6 Things Government Executives Need To Know
January 12, 2021

3. Don't let complexity hurt user confidence.
AI systems are intrinsically complex, particularly in human-involved domains like precision medicine, and that complexity cannot be engineered away. Models that are functionally opaque, however, keep users from fully trusting the results they generate. This erodes user confidence and buy-in, especially as the risks and consequences of a potential AI system failure grow more severe.
Explainable AI (XAI) streamlines user acceptance and improves decision-making by providing rationales for why an AI system came to a particular result. Two XAI approaches are feature optimization/visualization (generating emblematic, synthetic data that illustrates what individual components of a network do) and attribution (employing techniques like saliency maps to indicate which components contribute most to a model’s output in a given example).
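To make attribution concrete, here is a minimal sketch of a gradient-based saliency map, one of the simplest attribution techniques. It assumes a PyTorch image classifier; the model and input below are illustrative placeholders, not a recommended production setup.

```python
# Gradient saliency: how much does each input pixel influence the top score?
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in classifier (untrained here)
model.eval()

# Stand-in for a real preprocessed image batch of shape (N, C, H, W).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
scores[0].max().backward()  # gradient of the top class score w.r.t. the input

# Saliency: per-pixel gradient magnitude, taking the max over color channels.
# Larger values mark input regions that most influence the model's output.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

The resulting map highlights the input regions where small changes most affect the predicted score, which is the kind of rationale an analyst can inspect alongside the model's output.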
While XAI can be useful to analysts and decision-makers, it is not always necessary or even desirable. Adding XAI methods may cost model performance and may encourage users to over-trust models. Stakeholders should also consider related concepts such as interpretable machine learning and assured fairness (designing systems to optimize objectives while ensuring equitable outcomes). Ultimately, the desired state of operation is high-performing models that enable users to make informed decisions with confidence.
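To illustrate the interpretable-machine-learning alternative, the sketch below trains an inherently transparent model whose decision rules can be printed and audited directly, in contrast to post-hoc explanation of a black box. It assumes scikit-learn; the dataset and depth limit are illustrative choices.

```python
# An interpretable model: the learned rules *are* the explanation.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow tree trades some accuracy for rules a reviewer can audit.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Print the full decision logic as human-readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Here a reviewer can trace any individual prediction without a separate explanation layer, though this transparency can trade away accuracy on harder problems.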
4. Organizational buy-in matters. Slower adoption may be better.
Even the most effective AI system can fail to benefit an organization when its user base, the workforce, shies away. This is probably the most common roadblock to successful AI implementation. Leaders who are understandably eager to share new, game-changing AI applications often strive to deploy them across the entire organization as soon as possible. That approach is almost certain to fail.
Incremental adoption by select departments, or even a handful of employees, may serve the organization better in the long run. Slower deployments build internal support and reduce the risk of mission-disrupting errors that could halt the initiative instantly, and perhaps permanently. When dealing with a skeptical workforce, organizations may be well served by prioritizing XAI over black-box applications, which produce results without explanation. A system that can justify its outputs is harder for employees to brush off as a useless machine; taking away some of the mystery around what AI actually is and how it will deliver better results is a proven way to bring a workforce on board.
6. Data science is a team sport.
Implementing AI is an iterative discovery process, and understanding and organizing around trade-offs is key. Your AI teams likely employ highly talented data scientists, statisticians, programmers, and developers, but data science cannot operate in a vacuum. Because AI implementation is tightly coupled to design constraints that flow from an understanding of the relevant trade-offs, it demands frequent collaboration and communication with the end users and decision-makers who will rely on the final AI solution. The AI development lifecycle of design, development, implementation, and sustainment should include iterative collaboration loops to catch problems and concerns as early as possible, and build in exit ramps for users to reexamine, strategize, redefine, and revector progress.

The potential benefits of AI appear boundless, but it is important for government to learn lessons from the private sector as well as from early federal adopters, and to weigh the trade-offs involved in deploying AI at scale. Taking advantage of the road laid by those who have already headed down this path will offer a smoother, more effective, and more impactful experience for agencies and their workforces.