Explainable Artificial Intelligence (XAI):
Machine learning has seen dramatic success, leading to an explosion of Artificial Intelligence (AI) applications. Continued advances promise autonomous systems that perceive, learn, decide, and act on their own. However, these systems are limited in their ability to explain their decisions and actions to human users (Figure 1). DoD faces challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
The Explainable AI (XAI) program aims to develop a suite of machine learning techniques that:
- Produce more explainable models while maintaining a high level of learning performance (prediction accuracy);
- Enable human users to understand, appropriately trust, and effectively manage the new generation of artificially intelligent partners.
New machine learning systems will be able to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. To achieve this goal, new or modified machine learning techniques will be developed that produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for end users (Figure 2). Our strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.
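To make the idea of pairing a model with an explanation dialogue concrete, the sketch below shows a toy "explainable by construction" classifier: every prediction is returned together with the rule that produced it, so an interface could display the rationale directly. The features, thresholds, and labels here are hypothetical illustrations, not part of the XAI program itself.

```python
# Minimal sketch of a rule-based classifier that explains itself.
# Each prediction carries a human-readable statement of the rule
# that fired, which a user interface could surface as an
# explanation dialogue. All rules and values are hypothetical.

def explainable_classify(sample):
    """Classify a sighting and report which rule produced the label."""
    if sample["speed_kmh"] > 200:
        return "aircraft", "rule 1: speed above 200 km/h implies aircraft"
    if sample["on_road"]:
        return "ground vehicle", "rule 2: object is travelling on a road"
    return "unknown", "no rule matched; confidence is low"

label, rationale = explainable_classify({"speed_kmh": 350, "on_road": False})
print(label)      # aircraft
print(rationale)  # rule 1: speed above 200 km/h implies aircraft
```

A transparent rule set like this sits at one end of the performance-versus-explainability trade space: the rationale is trivially available, but predictive accuracy is usually lower than that of an opaque learned model.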
XAI is one of a handful of current DARPA programs expected to enable "third-wave AI systems", in which machines understand the context and environment in which they operate and, over time, build explanatory models that allow them to characterize real-world phenomena.
The XAI program focuses on the development of multiple systems by addressing two challenge problem areas: (1) machine learning problems to classify events of interest in heterogeneous multimedia data and (2) machine learning problems to construct decision policies for an autonomous system performing a variety of simulated missions. These two areas represent two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for DoD (intelligence analysis and autonomous systems).
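As one illustration of the classification challenge area, an example-based explanation justifies a prediction by citing the most similar known case, a common technique in explainable machine learning. The sketch below implements this with a 1-nearest-neighbour model; the training data and feature values are hypothetical.

```python
import math

# Sketch of an example-based explanation for a classifier:
# a 1-nearest-neighbour model justifies each prediction by
# pointing at the most similar training example.
# All feature vectors and labels below are hypothetical.

TRAINING_SET = [
    # (feature vector, label)
    ((0.9, 0.1), "tank"),
    ((0.2, 0.8), "truck"),
]

def classify_with_explanation(features):
    """Return (label, explanation) based on the nearest training example."""
    nearest, label = min(
        TRAINING_SET,
        key=lambda example: math.dist(features, example[0]),  # Euclidean
    )
    return label, f"most similar known example {nearest} was labelled '{label}'"

label, rationale = classify_with_explanation((0.85, 0.2))
print(label, "-", rationale)  # tank - most similar known example ...
```

Explanations that point at concrete precedents are one of several explanation styles a portfolio of XAI methods might offer; others attach saliency maps or learned rationales to less transparent models.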
Researchers are also studying the psychology of explanation.
XAI research prototypes are tested and continually evaluated throughout the course of the program. In May 2018, XAI researchers demonstrated initial implementations of their explainable learning systems and presented results of initial Phase 1 pilot studies. Full Phase 1 system evaluations are expected in November 2018.
The final delivery of the program will be a toolkit library of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems. After the program is complete, these toolkits will be available for further refinement and transition into commercial or defense applications.