Pentagon’s Artificial Intelligence Efforts Restricted by ‘Mom Test’
In a move to ensure the integrity of its employee vetting process, the Defense Counterintelligence and Security Agency (DCSA), the Department of Defense agency that handles security clearance vetting, has decided to limit its use of artificial intelligence (AI) tools. The agency will use AI only when it can explain how a tool reaches its conclusions and when the tool passes what Director David Cattler calls “the Mom Test.”
Before allowing the agency’s 13,000-plus employees to access information about American citizens, Cattler checks that the process aligns with this moral compass. The test applies not only to AI systems but to any government tool used for similar purposes.
DCSA’s decision to restrict its AI use stems from concerns that automated tools could introduce bias into security clearance vetting or expose private data. The agency recognizes that transparency in its process is essential to maintaining public trust.
The agency’s approach is a significant departure from some of the more headline-grabbing applications of AI, such as generative models like OpenAI’s ChatGPT or Google’s Bard. Instead, DCSA has chosen to focus on tools that deliver tangible benefits without raising concerns about bias or data breaches.
In an interview, Cattler explained that the agency prioritizes systems where it can “prove why they are credible” and show how they function consistently, emphasizing that transparency must extend into the AI’s decision-making itself.
Cattler highlighted one potential application of AI in the agency’s security work: a tool that would generate real-time heat maps of the facilities under DCSA’s jurisdiction, plotting potential threats as they emerge.
Source: www.forbes.com