NSA and GTRI Collaborate to Assure Trustworthiness of AI for National Security Uses

03.11.2024

Artificial intelligence, particularly applications in machine learning, is attracting attention for uses in a broad range of areas – including national security – where the ability to understand complex patterns could be extremely helpful. In the civilian world, generative AI based on large language models is helping produce written documents, while testing is already underway on self-driving vehicles facilitated by AI.
 

But “hallucination” errors in AI-produced documents and well-publicized accidents in autonomous vehicles raise doubts about using this type of AI in national security and other applications where a single error could have catastrophic results. To address these concerns, researchers at the Georgia Tech Research Institute (GTRI), in collaboration with the National Security Agency’s (NSA’s) Laboratory for Advanced Cybersecurity Research (LACR), are developing metrics, tools, and techniques to improve the robustness and trustworthiness of AI for such high-stakes applications.
 

Jessica Inman is a GTRI senior research scientist who focuses on trustworthy and assured AI systems. (Credit: Sean McNeil, GTRI)

 

Producing AI systems suitable for national security requires development and training approaches that differ from those used in the commercial and academic communities, approaches that build trust considerations into every phase of the process so that risk is reduced before deployment.
 

“We are much more risk-averse than many commercial and academic entities, so the way we develop machine learning and AI solutions for national security is different from commercial approaches to the problem,” said Jessica Inman, a GTRI senior research scientist who focuses on trustworthy and assured AI systems. “To meet those needs, we have developed a set of tools that we can use at all the different phases in our development pipeline, affecting such components as the training set, assessment and augmentation of the model, and quantifying not just model performance but also robustness and fairness to ensure trust.”
 

That requires knowing much more about the neural network models behind the AI and being able to show mathematically that they will produce the expected outputs within a certain range of input conditions. To be trusted by national security users, these models must be verified throughout their development and meet strict performance specifications, she said.
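
One established way to make such mathematical statements is interval bound propagation, which computes provable output ranges for any input inside a given perturbation range. The sketch below illustrates the general idea on a tiny, randomly weighted two-layer network; it is a minimal illustration rather than the GTRI/LACR verification tooling, and the network size, weights, and perturbation radius are assumptions.

```python
# Minimal sketch of interval bound propagation (illustrative only, not the
# GTRI/LACR tooling): compute guaranteed output bounds for every input
# within a distance eps of a nominal input x.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # toy two-layer network
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def interval_linear(lo, hi, W, b):
    """Propagate the interval [lo, hi] through y = W @ x + b."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certified_bounds(x, eps):
    lo, hi = x - eps, x + eps                        # all inputs within eps of x
    lo, hi = interval_linear(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)    # ReLU preserves the bounds
    return interval_linear(lo, hi, W2, b2)

x = rng.normal(size=4)
lo, hi = certified_bounds(x, eps=0.1)
print("outputs guaranteed within:", np.round(lo, 3), "to", np.round(hi, 3))
```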
 

Rather than retrofitting trust after models have been developed, GTRI researchers, together with LACR, are building specific trust considerations in from the start. “We need to integrate trustworthiness from the ground up during the model development phase by considering design choices and methodologies that are going to help us make more trustworthy and secure algorithms from the get-go instead of tacking on security after these algorithms exist,” Inman said.
 

AI applications depend on the ability of models to ingest training sets about certain known conditions from which they can draw conclusions about conditions they may encounter in the future. Sometimes, however, these models appear to be especially sensitive to certain features in the training set that could skew their results in real operation.
 

One example from past image analysis work involved a picture of a fisherman holding a trout. Fish-identification algorithms that appeared to perform well on the image turned out to be keying in on the person’s hands and the water in the background rather than on the fish itself.
 

“For these features to which the model seems unduly sensitive, we are working on a method for dampening the effects of those features so we can keep their influence without allowing them to be red herrings,” said Inman. 
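
As a rough sketch of that idea, the code below attenuates flagged feature columns rather than removing them, so a model trained on the result can still draw on those features without over-relying on them. The flagged indices and dampening factor are hypothetical, and this stands in for, rather than reproduces, the method Inman’s team is developing.

```python
# Hypothetical feature-dampening sketch: flagged features are attenuated,
# not removed, so they keep some influence without dominating the model.
import numpy as np

def dampen_features(X, flagged_idx, factor=0.25):
    """Scale down the columns the model appears unduly sensitive to."""
    X = X.astype(float).copy()
    X[:, flagged_idx] *= factor
    return X

X = np.random.default_rng(1).normal(size=(5, 4))
X_dampened = dampen_features(X, flagged_idx=[1, 3])   # e.g., "hands", "water"
print(X_dampened)
```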
 

With the stakes for the operation of these machine learning systems so high, their accuracy must also be as close to 100 percent as possible. To help attain that goal, the GTRI researchers are exploring options for using targeted synthetic data and reducing unnecessary features in the inputs. Further improvements can come from incorporating evaluation techniques that assess the quality of data before it goes into the AI’s training pipeline.
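
A data-quality gate of that kind might look something like the sketch below, which screens a dataset for missing values, duplicate rows, and severe class imbalance before it enters training. The specific checks and thresholds are illustrative assumptions, not the evaluation techniques GTRI has adopted.

```python
# Hypothetical pre-training data-quality gate; checks and thresholds are
# illustrative assumptions.
import numpy as np

def data_quality_report(X, y, min_class_fraction=0.05):
    report = {
        "n_samples": len(X),
        "nan_fraction": float(np.isnan(X).mean()),
        "duplicate_rows": int(len(X) - len(np.unique(X, axis=0))),
    }
    _, counts = np.unique(y, return_counts=True)
    report["min_class_fraction"] = float(counts.min() / counts.sum())
    report["passes"] = (
        report["nan_fraction"] == 0.0
        and report["min_class_fraction"] >= min_class_fraction
    )
    return report

rng = np.random.default_rng(2)
X, y = rng.normal(size=(200, 6)), rng.integers(0, 3, size=200)
print(data_quality_report(X, y))
```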
 

Incorporating interpretability and generalizability in the system – and using those paradigms in system development – improves the robustness of the development pipeline, she noted.
 

A challenge for many AI systems is that they are “black boxes,” operating on data and producing results in ways that users cannot understand – and therefore cannot assess and decide to trust. Being able to explain and verify what these models are doing is essential to building trust for properly using the results that they produce. This is particularly important when the developer and user of such systems are not the same.
 

“We consider three approaches to model explanation: uncertainty classification, deep learning introspection, and counterfactual demonstrations,” Inman said. “These approaches help us understand when the model is a good model to use in specific applications and when it may not be appropriate.”
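
For the first of those approaches, uncertainty classification, one common signal is the entropy of a model’s predicted class probabilities: high entropy means the model is effectively guessing and should not be relied on for that input. The sketch below shows the idea with made-up logits and an assumed decision threshold; it is not presented as the specific method GTRI and LACR use.

```python
# Illustrative uncertainty check: predictive entropy over softmax scores
# flags inputs where the model should defer. Logits and threshold are
# invented for the example.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(logits):
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

logits = np.array([[6.0, 0.5, 0.2],    # confident prediction
                   [1.1, 1.0, 0.9]])   # nearly uniform: the model is guessing
threshold = 0.8                        # assumed cutoff
print(["use model" if h < threshold else "defer"
       for h in predictive_entropy(logits)])
```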
 

Identifying when a specific model should perform well allows users to screen inputs and redirect those that fall outside the model’s appropriate operational space to other available models. Simply applying the right model can reduce false positives dramatically.
 

“This technique can allow us to develop a system of models where we can have low false positive rates for each of the models in our system,” she said. “Those different models are well suited to different subclasses of our problems.”
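
In outline, such a system of models amounts to a router that checks which specialist model’s operational envelope an input falls inside and defers when none applies. The sketch below is a hypothetical illustration; the model names and scoping rules are invented.

```python
# Hypothetical "system of models" router: each input goes to the specialist
# model whose operational envelope it falls inside, or to a human when none
# applies. Model names and envelopes are invented for illustration.
MODELS = {
    "daytime_model":   lambda x: x["sun_elevation"] > 10,
    "nighttime_model": lambda x: x["sun_elevation"] < -10,
}

def route(sample):
    for name, in_scope in MODELS.items():
        if in_scope(sample):
            return name
    return "defer_to_human"

print(route({"sun_elevation": 35.0}))   # -> daytime_model
print(route({"sun_elevation": 0.0}))    # twilight -> defer_to_human
```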
 

Deep learning introspection allows users to look at how data is passing through a model to understand better why decisions are being made. Understanding rule generation inside the neural network provides insights into the decisions that the system is making.
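
One common way to watch data pass through a model is to attach forward hooks that record each layer’s activations. The sketch below shows that pattern on a tiny PyTorch network; it illustrates the general technique rather than GTRI’s specific introspection tooling.

```python
# Introspection sketch: forward hooks record each layer's activations so a
# reviewer can see how an input moves through the network. The tiny network
# and the summary statistics are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
activations = {}

def capture(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if name:                                  # skip the top-level container
        module.register_forward_hook(capture(name))

model(torch.randn(1, 4))
for name, act in activations.items():
    print(f"layer {name}: mean={act.mean():.3f}, "
          f"active fraction={(act > 0).float().mean():.2f}")
```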
 

Finally, observing the actions the neural network did not take – the paths the agent didn’t choose – also provides insight into its operation. The AI may have a large set of possible trajectories, and understanding which ones it chose allows humans to compare and contrast those decisions, which may run counter to what a human would have chosen.
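
A simple way to surface those paths not taken is to list, for a single decision point, the action the agent chose alongside the alternatives and their estimated values, so a reviewer can judge whether the tradeoff looks reasonable. The sketch below does this with invented actions and value estimates; it stands in for, rather than reproduces, the counterfactual demonstrations described above.

```python
# Hypothetical counterfactual readout for one decision point: the chosen
# action is shown next to the actions not taken and their estimated values.
import numpy as np

actions = ["hold course", "reroute north", "reroute south", "abort"]
estimated_values = np.array([0.62, 0.71, 0.35, 0.10])   # invented agent estimates

chosen = int(np.argmax(estimated_values))
print(f"chosen: {actions[chosen]} (value {estimated_values[chosen]:.2f})")
for i, (action, value) in enumerate(zip(actions, estimated_values)):
    if i != chosen:
        gap = estimated_values[chosen] - value
        print(f"  not taken: {action} (value {value:.2f}, gap {gap:.2f})")
```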
 

Beyond the development of the AI itself are human factors: In many cases, it will be up to operators utilizing the output of these machine learning systems to make critical decisions based on machine output. Educating them about how to properly use the information they are receiving is therefore an important part of developing a successful AI/ML system.
 

“We don’t want operators to just blindly trust the algorithm. We want them to understand when they should or should not trust a particular algorithm,” Inman said. “We also want to make sure we’re being really cognizant that our operator’s time is very valuable. We don’t want to be asking them to do more than we have to or taking up more of their time than we absolutely need.”
 

Development of metrics, tools, and techniques to demonstrate trust in AI systems is one of GTRI's key research initiatives. Such initiatives highlight how LACR and GTRI collaborate to promote the secure development, integration, and adoption of AI capabilities today. They also lay the groundwork to inform the work of NSA's AI Security Center in securing AI for U.S. national security systems and the defense industrial base in the not-too-distant future.
 

“Artificial intelligence and machine learning show incredible promise in a large number of domains, but in order for us to be able to use them in our systems, they have to be trustworthy,” Inman said. “Fortunately, there are lots of tools and techniques to make our machine learning algorithms more trustworthy and viable for national security applications.”
 

Writer: John Toon (john.toon@gtri.gatech.edu)  
GTRI Communications  
Georgia Tech Research Institute  
Atlanta, Georgia

 

The Georgia Tech Research Institute (GTRI) is the nonprofit, applied research division of the Georgia Institute of Technology (Georgia Tech). Founded in 1934 as the Engineering Experiment Station, GTRI has grown to more than 2,900 employees, supporting eight laboratories in over 20 locations around the country and performing more than $940 million of problem-solving research annually for government and industry. GTRI's renowned researchers combine science, engineering, economics, policy, and technical expertise to solve complex problems for the U.S. federal government, state, and industry.
 
