Booz Allen Hamilton Secures Pentagon Artificial Intelligence Contract
We take a closer look at Booz Allen Hamilton's massive artificial intelligence contract with the Pentagon and what it entails for both parties.
Who is Booz Allen Hamilton?
Booz Allen Hamilton is a global firm of approximately 26,300 diverse, passionate, and exceptional people driven to excel, do right, and realise positive change in everything that they do.
They bring bold thinking and a desire to be the best in consulting, analytics, digital solutions, engineering, and cyber, serving industries ranging from defence to health to energy to international development.
They celebrate and value diversity in all its forms; it’s something that they truly value as a multicultural community of problem solvers. They believe in corporate and individual citizenship that makes communities better places for all.
Booz Allen Hamilton has one guiding purpose—to empower people to change the world. Its founder, Edwin Booz, said it best: “Start with character… and fear not the future.” They bring a ferocious integrity not only to helping their clients tackle the problems they face today but also to helping them change the status quo for tomorrow. Each day, they imagine, invent, and deliver new ways to better serve their employees, their clients, and the world.
About the contract:
Booz Allen Hamilton won a five-year, $800 million task order to provide artificial intelligence services to the Department of Defense’s Joint Artificial Intelligence Center (JAIC).
Under the contract award, announced by the General Services Administration and the JAIC on May 18, Booz Allen Hamilton will provide a “wide mix of technical services and products” to support the JAIC, a DoD entity dedicated to advancing the use of artificial intelligence across the department.
The contracting giant will provide the JAIC with “data labeling, data management, data conditioning, AI product development, and the transition of AI products into new and existing fielded programs,” according to the GSA news release.
“The delivered AI products will leverage the power of DoD data to enable a transformational shift across the Department that will give the U.S. a definitive information advantage to prepare for future warfare operations,” the release said.
The contract will support the JAIC’s new joint warfighting mission initiative, launched earlier this year. The initiative includes “Joint All-Domain Command and Control; autonomous ground reconnaissance and surveillance; accelerated sensor-to-shooter timelines; operations center workflows; and deliberate and dynamic targeting solutions,” JAIC spokesperson Arlo Abrahamson told C4ISRNET in January.
The joint warfighting initiative is looking for “AI solutions that help manage information so humans can make decisions safely and quickly in battle,” Abrahamson said. The award to Booz Allen Hamilton will push that effort forward, Lt. Gen. Jack Shanahan, the center’s director, said in a statement.
“The Joint Warfighting mission initiative will provide the Joint Force with AI-enabled solutions vital to improving operational effectiveness in all domains. This contract will be an important element as the JAIC increasingly focuses on fielding AI-enabled capabilities that meet the needs of the warfighter and decision-makers at every level,” Shanahan said.
The award to Booz Allen Hamilton was made by the GSA through its Alliant 2 Government-wide Acquisition Contract, a vehicle for procuring information technology services and solutions across the federal government. The GSA and JAIC have been partners since last September, when the pair announced that they were teaming up as part of the GSA’s Centers of Excellence initiative, a program meant to accelerate modernization at agencies across government.
“The CoE and the JAIC continue to learn from each other and identify lessons that can be shared broadly across the federal space,” said Anil Cheriyan, director of the GSA’s Technology Transformation Services office, which administers the Centers of Excellence program. “It is important to work closely with our customers to acquire the best in digital adoption to meet their needs.”
AI Shows its Value; Governments Must Unleash its Potential
2020 has revealed just how far AI technology has come as it achieves fresh milestones in the fight against Covid-19. Google’s DeepMind helped predict the protein structure of the virus; AI-driven infectious disease tracker BlueDot spotted the novel coronavirus nine days before the World Health Organisation (WHO) first sounded the alarm. Just a decade ago, these feats were unfathomable.
Yet, we have only just scratched the surface of AI’s full potential. And it can’t be left to develop on its own. Governments must do more to put structures in place to advance the responsible growth of AI. They have a dual responsibility: fostering environments that enable innovation while ensuring the wider ethical and social implications are considered.
It is this balance that we are trying to achieve in the United Arab Emirates (UAE) to ensure government accelerates, rather than hinders, the development of AI. Just as every economy is transitioning at the moment, we see innovation as being vital to realising our vision for a post-oil economy. Our work in this space has highlighted three barriers in the government approach when it comes to realising AI’s potential.
First, addressing the issue of ignorance
While much time is dedicated to talking about the importance of AI, there simply isn’t enough understanding of where it’s useful and where it isn’t. There are a lot of challenges to rolling out AI technologies, both practically and ethically. However, those enacting the policies too often don’t fully understand the technology and its implications.
The Emirates is not exempt from this ignorance, but it is an issue we have been trying to address. Over the last few years, we have been running an AI diploma in partnership with Oxford University, teaching government officials the ethical implications of AI deployment. Our ambition is for every government ministry to have a diploma graduate, as it is essential to ensure policy decision-making is informed.
Second, moving away from the theoretical
While this grounding in the moral implications of AI is critical, it is important to go beyond the theoretical. It is vital that experimentation in AI is allowed to happen, rather than letting hypothetical ethical problems stymie innovations that don’t yet exist. Indeed, many of these concerns – while well-founded – are only borne out in the practical deployment of specific end-use cases and can’t be meaningfully debated on paper.
If you take facial recognition as an example, looking at the issue in the abstract quickly leads to discussions over privacy concerns about potential surveillance and intrusion by private companies or authoritarian regimes.
But what about the broader field of computer vision, of which facial recognition is just one branch? The same moral quandaries do not arise across the whole field, and the technology is already bearing fruit. In 2018, we developed an algorithmic solution that can be used in the detection and diagnosis of tuberculosis from chest X-rays. You can upload any image of a chest X-ray, and the system will identify whether a person has the disease. Laws and regulations must be tailored to unique use-cases of AI, rather than lumping disparate fields together.
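To make the shape of such a system concrete: tools like the one described take an image in and return a probability and a yes/no call. The sketch below is purely illustrative and not the UAE system – a real diagnostic tool uses a deep convolutional network trained on labelled X-rays, whereas `predict_tb` here is a made-up one-parameter model over average pixel intensity, with invented `weight`, `bias`, and `threshold` values, included only to show the inference interface.

```python
import math

def predict_tb(image, weight=4.0, bias=-2.0, threshold=0.5):
    """Return (probability, positive) for a toy intensity-based model.

    `image` is a 2D list of grayscale pixel values in [0, 1].
    `weight`, `bias`, and `threshold` are illustrative constants;
    a trained model would learn millions of parameters instead.
    """
    pixels = [p for row in image for p in row]
    mean_intensity = sum(pixels) / len(pixels)
    # Logistic function squashes the score into a probability.
    probability = 1.0 / (1.0 + math.exp(-(weight * mean_intensity + bias)))
    return probability, probability >= threshold

# Two tiny toy "X-rays": one bright, one dark.
bright = [[0.9, 0.8], [0.85, 0.95]]
dark = [[0.05, 0.1], [0.0, 0.1]]

p_bright, flag_bright = predict_tb(bright)   # high probability -> flagged
p_dark, flag_dark = predict_tb(dark)         # low probability -> not flagged
```

The point of the sketch is the contract, not the model: image in, calibrated probability and decision out, which is also the level at which use-case-specific regulation can sensibly attach (for example, mandating a human review of any positive call).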
To create this culture that encourages experimentation, we launched the RegLab. It provides a safe and flexible legislative ecosystem that supports the utilisation of future technologies. This means we can actually see AI in practice before determining appropriate regulation, not the other way around. Regulation is vital to cap any unintended negative consequences of AI, but it should never come at the expense of innovation.
Finally, understanding the knock-on effects of AI
There needs to be a deeper, more nuanced understanding of AI’s wider impact. It is too easy to think the economic benefits and efficiency gains of AI must also come with negative social implications, particularly concern over job loss.
But with the right long-term government planning, it’s possible to have one without the other; to maximise the benefits and mitigate potential downsides. If people are appropriately trained in how to use or understand AI, the result is a future workforce capable of working alongside these technologies for the better – just as computers complement most people’s work today.
In the Emirates, we have started this training as early as possible. Through our Ministry of Education, we have rolled out an education programme to start teaching children about AI from as young as five years old. This includes coding skills and ethics, and we are carrying this right through to higher education with the Mohamed bin Zayed University of Artificial Intelligence set to welcome its first cohort in January. We hope to create future generations of talent that can work in harmony with AI for the betterment of society, not its detriment.
AI will inevitably become more pervasive in society, digitisation will continue in the wake of the pandemic, and in time we will see AI’s prominence grow. But governments have a responsibility to society to ensure that this growth is matched with the appropriate understanding of AI’s impacts. We must separate the hype from the practical solutions, and we must rigorously interrogate AI deployment and ensure that it is used to enhance our existence. If governments can overcome these challenges and create the environments for AI to flourish, then we have a very exciting future ahead of us.