The US of AI for Good

This is an opportunity to meet the moment and crystallize a national response to safely, securely, and responsibly harness the promise of artificial intelligence.

John Larson, EVP and AI Practice Lead

November 13, 2023

As I write this, I am sitting with a team of our interdisciplinary experts convened to receive, review, reconcile, and react to the new Executive Order on Artificial Intelligence. We are seeing a groundswell of interest and excitement in government to responsibly capitalize on the promise of safe, secure, and trustworthy AI to meet missions -- federal and civilian -- and we are making strides to truly bring AI out of the lab and into the everyday for the betterment of the world. 

While this is not the first federal action on this powerful technology, we consider last month’s directives to be the “starting gun” for federal agencies to truly unlock the potential of AI for their missions, with the OMB implementation guidance as the “race course,” providing additional fidelity and practical, actionable guidance for achieving the visionary EO goals. Collaborative industry partners are ready to answer the call. But where do we start?

We’ve Already Established a Strong Foundation

It’s been a virtual AI-palooza since ChatGPT stormed into the mainstream less than a year ago, and while we’ve only just entered the age of generative AI, everything has already changed: the technology has come out from behind the veil of programmers and algorithmic experts and become accessible and usable by virtually anyone. The good news is that we’re prepared to meet the moment. Today, the country is at a critical juncture: various AI principles, frameworks, and approaches are finally beginning to converge. We’re still in the early days, but no longer the earliest days. The EO presents a timely call to leverage the goodwill built among private industry, academia, policymakers, and government around what could be considered a truly nonpartisan premise: that AI is powerful and can do incredible things when harnessed for good, but that it carries inherent risks if not managed responsibly. 

The AI groundswell has already achieved an impressive feat: getting the nation to care, deeply and humanly, about AI and the promise it holds to advance equity and fairness, spur innovation at a time of geopolitical hyper-competition, and invigorate an AI-ready workforce that will be essential to any future success. But questions remain about the necessary levels, and ownership, of oversight and enforcement in our AI-enabled future. 

In addition to the EO, multiple roadmaps issued this week serve as enablers to develop, adopt, and deploy AI that is safe, secure, repeatable, explainable, and trustworthy, building on industry best practices, recommendations, and lessons learned. Now is the “when” we have been seeking: the moment to turn the page and deploy AI artfully, effectively, and securely at grand scale for the benefit of all, building on decades of partnership and learning with government.

So, Who’s on First?

We are all being called to act and participate in the shepherding of AI. Our own team has been using AI tools to aid and speed the synthesis of the 80-page EO text and other supporting documents, leveraging a bevy of world-class and experimental platforms -- including Claude -- to distill the exact requirements the directives place on specific federal agencies. From there, our interdisciplinary team of experts can assess how best to take action immediately, not only to comply with the EO but also to work toward fostering a world where AI can help us all thrive. 
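For readers curious what that kind of workflow can look like in practice, here is a minimal, hypothetical sketch of using an LLM API to distill agency-specific requirements from a long policy document such as the EO. The SDK call shown, the model ID, the prompt wording, and the file names are illustrative assumptions, not a description of the team’s actual pipeline.

```python
# Hypothetical illustration only: one way to use an LLM API (here, Anthropic's
# Python SDK) to pull agency-specific requirements out of a long policy text.
# Model ID, prompt wording, and helper names are assumptions for this sketch.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def distill_requirements(eo_text: str, agency: str) -> str:
    """Ask the model to list the directives that apply to a specific agency."""
    prompt = (
        f"From the executive order text below, list every requirement, deadline, "
        f"and reporting obligation that applies to {agency}, citing the relevant "
        f"section numbers.\n\n{eo_text}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; any capable model works
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


# Example usage (file name is illustrative):
# with open("eo_text.txt") as f:
#     print(distill_requirements(f.read(), "Department of Energy"))
```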

There are massive obligations and tremendous potential benefits from AI in the civilian government sector -- with profound impact on everyday Americans -- as we focus on the ways this transformative technology can underpin positive change in healthcare, strengthen cybersecurity for critical infrastructure, reduce fraud, waste, and abuse, increase veterans’ access to benefits, turn intelligence into action on climate change, and more. AI isn’t just tech to be used by humans; it’s a critical tool that can actually help build the safer, more prosperous, more equitable world we strive to achieve. 

People all over the country already encounter AI in their day-to-day lives, and as we uncover more AI promise, we uncover more questions and considerations about the technology itself -- both what it can achieve, and how it will achieve that -- as well as who is in charge of managing that lifecycle. 

We also expect standards and enforcement mechanisms to continue to emerge and evolve as we learn and as the capability expands, and we need to stay open to progress in our own thinking and learning throughout this process. The key is to maintain communication with federal partners, industry leaders, and those deploying AI in the field so that we keep experimenting, learning, and building best practices within a responsible, safe framework. 

What Comes Next?

As AI optimists, we view the future as bright if we make specific choices to harness AI’s power for the benefit of our society. 

The EO challenges us to march collectively and collaboratively with industry, academia, civil society, and government toward an AI-empowered future. We hold ambitious views of how AI will impact all aspects of society, from healthcare, immigration, and workforce disruption to the global economy, consumer protection, and climate change. 

This moment in time is the catalyst we need to collaboratively usher in the age of AI for every citizen and to aid in meeting our grandest of challenges across industries -- in lockstep with the federal government and its partners. This effort requires AI optimists, critical thinkers, and technologists alike to capitalize on the immense promise we know AI holds.

Will you meet the moment with us?

About the Author(s)

John Larson

EVP and AI Practice Lead, Booz Allen

John Larson is EVP and Leader of Booz Allen Hamilton’s artificial intelligence (AI) practice. He directs a world-class team of AI and machine learning engineers, innovators, and technologists who provide award-winning AI solutions supporting the most critical missions for national defense, civilian agencies, and intelligence communities. He is also a passionate advocate for AI education, serving on both the General Assembly AI & Data Science advisory board and the AI Education Project advisory board. John holds a master’s in public policy and a double B.A. in economics and history, both from The College of William and Mary.
