Artificial Intelligence & Critical Systems
Artificial intelligence will be deployed in an increasing number of systems that affect the health, safety and welfare of the public. These systems will make better use of scarce resources, prevent disasters and increase safety, reliability, comfort and convenience. Despite the technological challenges and public fears, these systems will improve the quality of life of millions of people worldwide.
Prediction
The use of artificial intelligence (AI) in critical infrastructure systems will increase significantly over the next five years. Critical infrastructure systems or, more simply, “critical systems” are those that directly affect the health, safety and welfare of the public and in which failure could cause loss of life, serious injury or significant loss of assets or privacy. Critical systems include power generation and distribution, telecommunications, road and rail transportation, healthcare, banking and more.
AI and Software
AI plays an important role in some of humanity’s most complex systems, especially safety-critical ones. In critical systems, software is generally involved in controlling the behavior of electromechanical components and monitoring their interactions [Wong], but it is also used in many other ways. AI in critical systems can involve pattern matching, decision making, prognostics and predictive analytics, anomaly detection and more. In a simple scenario, AI can deliver significant benefit by automating many of the mundane tasks that in the past required humans (e.g. analysts) to sift through massive amounts of data to derive the information on which decisions are based; if properly trained, the AI can also make many of those decisions itself. While AI can be implemented in hardware, firmware or software, the design, implementation and testing must all meet very high safety, security and reliability margins. Ultimately, AI for critical systems must combine real-time analysis with robust network communications so that it can continually adapt to changing circumstances.
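To make the anomaly-detection use case concrete, the sketch below trains an unsupervised detector on simulated telemetry and flags out-of-pattern readings for an analyst. It is illustrative only: the scikit-learn IsolationForest detector, the sensor features and the thresholds are assumptions, not details drawn from any particular critical system.

```python
# Minimal anomaly-detection sketch for sensor telemetry.
# Detector choice, features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated historical telemetry: [temperature, vibration, current_draw]
normal_data = rng.normal(loc=[70.0, 0.02, 5.0],
                         scale=[2.0, 0.005, 0.3],
                         size=(10_000, 3))

# Train an unsupervised detector on known-good operating data.
detector = IsolationForest(contamination=0.001, random_state=0)
detector.fit(normal_data)

# New readings: one nominal, one with abnormal vibration.
readings = np.array([[70.5, 0.021, 5.1],
                     [70.4, 0.150, 5.0]])
flags = detector.predict(readings)  # +1 = normal, -1 = anomaly

for reading, flag in zip(readings, flags):
    status = "ANOMALY - route to analyst" if flag == -1 else "nominal"
    print(reading, status)
```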
Today’s AI differs from general software in several ways. Current AI algorithms must be trained, with a possible evolution toward self-learning and understanding. The outcome of this training is used as a black box, leading to a lack of “explainability” in trained algorithms. Such training can also introduce bias (a vulnerability), because training is only as good as the data used for it. Finally, compared with traditional software, AI raises a more pressing need for ethical considerations.
In critical systems, internal and external interactions, timing, and general processing errors can lead the software to an unsafe state or lead to a system failure. AI can be employed to help avoid or recover from these unsafe states. When the AI in critical systems does not operate as intended (including failing to prevent, or even contributing to, system failure), there can be serious consequences. The consequences can range from minor performance anomalies to a catastrophic failure leading to significant loss of money and property, injury and loss of human life, perhaps on a large scale [Wong]. For these reasons, AI software for critical systems must provide explainable results. Its recommendations need to be predictable and repeatable across a wide variety of inputs, in terms of timing, bias and results.
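As a minimal sketch of how software might steer a system back toward a safe state, the following wraps an AI controller in a runtime monitor that falls back to a neutral command on a processing error, a missed deadline or an out-of-range output. The model interface, bounds and 10 ms budget are hypothetical assumptions, not a prescribed design.

```python
# Sketch of a runtime safety monitor around an AI controller.
# Bounds, fallback value and deadline are hypothetical assumptions.
import time

SAFE_FALLBACK = 0.0          # e.g., return the actuator to neutral
OUTPUT_LIMITS = (-1.0, 1.0)  # valid command range
DEADLINE_S = 0.010           # hard real-time budget: 10 ms

def monitored_step(model, sensor_input):
    """Run one control step; revert to the safe fallback on any violation."""
    start = time.perf_counter()
    try:
        command = model.predict(sensor_input)
    except Exception:
        return SAFE_FALLBACK          # processing error -> safe state
    if time.perf_counter() - start > DEADLINE_S:
        return SAFE_FALLBACK          # timing violation -> safe state
    low, high = OUTPUT_LIMITS
    if not (low <= command <= high):
        return SAFE_FALLBACK          # out-of-range output -> safe state
    return command

class StubModel:
    """Placeholder controller standing in for a trained model."""
    def predict(self, x):
        return 0.5 * x

print(monitored_step(StubModel(), 0.8))  # within limits -> 0.4
print(monitored_step(StubModel(), 5.0))  # out of range  -> 0.0
```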
AI Advances and Challenges
Advances in data analytics, machine intelligence, deep learning and related artificial intelligence (AI) technologies have motivated, and will continue to motivate, critical systems design. These technologies, combined with more accurate image recognition and pattern matching, the Internet of Things (IoT), edge computing and security technologies such as advanced encryption and hardware accelerators, will drive the increase in deployments and in public confidence and trust in these systems. These systems will exhibit high levels of connectivity, intelligence and automation, coupled with AI/machine-learning-enriched cybersecurity. IoT devices integrated with robotic process automation (RPA) allow secure, robust communication among sensors, actuators and power sources [Lange].
AI is being deployed on wide-ranging systems, from data centers to edge devices. Systems are becoming more capable of perceiving, reasoning and acting within real-time performance constraints. Designers are more confident in applying multiple technology advancements to solve volatile, uncertain, complex and ambiguous challenges.
A number of AI challenges need to be addressed in order to successfully apply AI models in critical systems. Models are only as good as the data used to train them, which requires that the following aspects be addressed.
Model bias: Overrepresentation of some examples and underrepresentation of others (unbalanced data) biases a model toward the majority class or classes. In social domains such as healthcare or finance, this may result in unfair and unethical decisions. It is not uncommon to have much more unlabeled data than labeled data, so some mechanism that automatically validates AI models is required.
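A minimal sketch of detecting such imbalance and countering it with inverse-frequency class weights (the same scheme scikit-learn calls “balanced”) is shown below; the labels and counts are invented for illustration.

```python
# Sketch: measure class imbalance and derive counterweights.
# The label names and counts are illustrative assumptions.
from collections import Counter

labels = ["no_failure"] * 9_500 + ["failure"] * 500  # heavily skewed
counts = Counter(labels)
total = len(labels)

print("class distribution:", {k: v / total for k, v in counts.items()})

# Inverse-frequency weights: rare classes get proportionally more weight,
# so training is not dominated by the majority class.
n_classes = len(counts)
weights = {cls: total / (n_classes * n) for cls, n in counts.items()}
print("class weights:", weights)  # failure ~ 10.0, no_failure ~ 0.53
```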
Adversarial attacks: With the rise of deep learning models, a new trend has emerged in security known as adversarial attacks. This type of attack uses data that at a macro level looks like real data (for instance, a road sign) but has been modified at a micro level so that it has a dramatic impact on a model’s decision. AI models either need to be robust to tampered inputs or need to be accompanied by other AI models that check whether input data comes from the set of expected inputs.
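The sketch below illustrates the idea with a fast-gradient-sign-style attack on a toy linear classifier: a change of 0.01 per feature, negligible at the macro level, flips the decision. The model and numbers are assumptions chosen for clarity, not a real vision model.

```python
# Sketch of a fast-gradient-sign (FGSM-style) attack on a toy
# linear classifier. Dimensions, weights and epsilon are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 784                         # e.g., a flattened 28x28 image
w = rng.normal(size=d)          # toy model weights
x = 0.002 * w                   # clean input with a clearly positive score

def predict(v):
    return 1 if w @ v > 0 else 0

# For a linear score w.x the input gradient is w itself, so stepping
# epsilon against sign(w) on every feature maximally lowers the score.
epsilon = 0.01                  # tiny per-feature change
x_adv = x - epsilon * np.sign(w)

print("clean:      ", predict(x),     "score:", round(float(w @ x), 2))
print("adversarial:", predict(x_adv), "score:", round(float(w @ x_adv), 2))
print("max per-feature change:", epsilon)
```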
Data security: AI models depend on datasets that constantly grow in size. An attacker may modify a dataset by changing existing examples or introducing new ones so that a model learns that adverse behavior. Special security protocols and frameworks need to be introduced to ensure the validity of datasets.
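One simple building block for such a protocol is a tamper-evident manifest of cryptographic digests over the dataset files, sketched below. The paths are hypothetical, and a real framework would also sign the manifest itself (for example, with an offline key).

```python
# Sketch: tamper-evident SHA-256 manifest for a dataset directory.
# Paths are hypothetical; a real protocol would also sign the manifest.
import hashlib
import json
from pathlib import Path

def file_digest(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: str) -> dict:
    return {str(p): file_digest(p)
            for p in sorted(Path(data_dir).rglob("*")) if p.is_file()}

def verify(data_dir: str, manifest: dict) -> bool:
    return build_manifest(data_dir) == manifest

# Usage (hypothetical directory):
# manifest = build_manifest("training_data/")
# Path("manifest.json").write_text(json.dumps(manifest, indent=2))
# assert verify("training_data/", manifest), "dataset was modified"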
Model security: In future systems, AI models will be deployed everywhere, from data centers to edge and wearable devices. This is significantly different from current deployments, where devices are assumed to be located in secure facilities. Systems hosting AI models need to be able to verify the validity of models, identify attempts to modify them and redeploy them if they are compromised.
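As an illustration, a host could refuse to load a model whose HMAC-SHA256 tag does not verify, as in the sketch below; the key handling and file layout are assumptions, not a prescribed protocol.

```python
# Sketch: verify an HMAC-SHA256 tag over a serialized model before
# loading it on a device. Key handling and file names are hypothetical.
import hashlib
import hmac
from pathlib import Path

def sign_model(model_bytes: bytes, key: bytes) -> bytes:
    return hmac.new(key, model_bytes, hashlib.sha256).digest()

def load_if_valid(model_path: str, tag_path: str, key: bytes) -> bytes:
    model_bytes = Path(model_path).read_bytes()
    expected_tag = Path(tag_path).read_bytes()
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(sign_model(model_bytes, key), expected_tag):
        raise RuntimeError("model failed integrity check - request redeploy")
    return model_bytes
```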
Trust: Using systems with a high level of automation (which is not common today), such as self-driving vehicles, means that people need to learn to trust those systems. AI, in particular deep and reinforcement learning techniques, has had and will continue to have a significant impact on society in almost every area, making public trust essential.
Explainability and self-assessment: An AI-based model or control system needs to be able to continuously defend and explain its decisions. These models need to be able to identify situations in which they are not confident of making the right decision and inform human operators that they need to take control. Hence, AI models must be explainable, which is a challenge because many models are currently used as black boxes whose outputs are hard or impossible to explain.
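A minimal sketch of the self-assessment part, assuming a classifier that outputs calibrated class probabilities: if the top-class confidence falls below a threshold, the system defers to a human operator instead of acting. The threshold and class names are illustrative assumptions.

```python
# Sketch: self-assessment via confidence thresholding.
# Threshold and class names are illustrative assumptions.
import numpy as np

CONFIDENCE_THRESHOLD = 0.90
CLASSES = ["proceed", "slow_down", "stop"]

def decide(probabilities: np.ndarray) -> str:
    top = int(np.argmax(probabilities))
    if probabilities[top] < CONFIDENCE_THRESHOLD:
        return "DEFER: hand control to human operator"
    return CLASSES[top]

print(decide(np.array([0.02, 0.03, 0.95])))  # confident -> "stop"
print(decide(np.array([0.40, 0.35, 0.25])))  # uncertain -> defer
```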
Risks to Prediction
The main risks to rapid AI deployment are slow realization of benefits and societal and regulatory pushback. Exaggeration and falsification of real capabilities and risks, fueled by science fiction movies and books and by social media posts, can create unwarranted fear and uncertainty. Fictional depictions of sentient, self-aware AI systems gone awry (for example, HAL from the movie 2001: A Space Odyssey) give the public an unrealistic perception of AI. We may be one tragic disaster away from calls to severely restrict the use of AI in critical systems.
Another problem will be finding suitable training data, such as biometrics, behavioral information and patterns of use (e.g. for utilities), for these enhanced AI systems. Privacy issues, for example those involving the training data (behavioral patterns, facial recognition and other biometrics) needed to make the AI work, could thwart progress and deployment.
Finally, legacy-system integration problems, standards overload and confusion may slow progress. AI for critical systems will also require focused coordination between industry and regulatory authorities. And for the true potential to be realized, there will need to be increased attention to government investment; to community, university and industry partnerships; and to professional responsibility, with assurance that those who build these systems can be trusted.