The AI National Security Memo and the broader evolution of AI usage

AI National Security Memo

followed by a McKinsey article on the evolution of gen AI usage in corporate functions

On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, marking a significant development in U.S. AI policy and national security strategy[1][3]. This memorandum outlines a comprehensive approach to harnessing AI for national security objectives while addressing associated risks and challenges.

Key Aspects of the AI National Security Memo

AI as a National Security Priority

The NSM identifies AI leadership as a critical national security priority for the United States[1]. It acknowledges that competitors have engaged in economic and technological espionage to steal U.S. AI technology, prompting a series of directives to maintain the U.S. advantage in AI:

  • The National Economic Council is tasked with assessing the U.S.’s competitive position in areas such as semiconductor design and manufacturing, computational resources, and access to highly skilled AI workers[1].
  • The Intelligence Community must prioritize gathering intelligence on competitors’ operations against the U.S. AI sector[1].
  • The Department of Energy will launch a pilot project to evaluate AI training and data sources[1].

AI Safety and Security Practices

The memorandum establishes several key initiatives to ensure AI safety and security:

  • The AI Safety Institute at the National Institute of Standards and Technology becomes the primary U.S. Government point of contact for private sector AI developers[1].
  • Testing requirements are established for frontier AI models, along with guidance for assessing AI systems’ safety and security risks[1].
  • The Department of Energy’s National Nuclear Security Administration will lead classified testing for nuclear and radiological risks[1].
  • The National Security Agency will evaluate cyber threats through its AI Security Center[1].

Governance and Oversight

The memo outlines governance structures and practices for AI in national security contexts:

  • National security agencies must appoint Chief AI Officers and establish AI Governance Boards[1].
  • Agencies are required to maintain annual inventories of high-impact AI systems and implement risk management practices[1].
  • The memo prohibits government AI use for certain sensitive applications, such as deploying nuclear weapons without human oversight or making final immigration determinations[1][4].

International Implications and Reactions

The NSM has significant implications for international relations, particularly regarding U.S.-China competition in AI:

  • The memo positions AI as key to countering Beijing and securing the AI supply chain against foreign meddling[3].
  • It has sparked concerns about an escalating tech war with China, with observers expecting Beijing to place greater emphasis on self-reliance in AI[2].
  • The U.S. aims to attract top AI experts globally, reminiscent of post-World War II strategies to draw scientific talent[4].

Challenges and Criticisms

Despite its comprehensive approach, the memo faces several challenges:

  • Civil rights groups have expressed concerns about national security agencies potentially monitoring themselves in AI implementation[3].
  • The policy may be politically vulnerable, with potential changes if there’s a shift in administration[3].
  • Balancing human oversight in non-nuclear weapons systems while maintaining operational effectiveness poses a significant challenge[4].

In conclusion, the AI National Security Memo represents a landmark policy initiative that aims to position the United States at the forefront of AI development and application in national security contexts. While it addresses critical areas of AI governance and safety, it also raises important questions about international competition, civil liberties, and the future of AI in warfare and intelligence.

Citations:
[1] https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house
[2] https://www.scmp.com/news/china/diplomacy/article/3284118/first-us-national-security-memo-ai-sparks-fear-escalating-tech-war-china
[3] https://www.politico.com/news/2024/10/24/biden-ai-policy-national-security-00185407
[4] https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html
[5] https://www.washingtonpost.com/technology/2024/10/24/white-house-ai-nation-security-memo/
[6] https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
[7] https://cset.georgetown.edu/article/the-national-security-memorandum-on-artificial-intelligence-cset-experts-react/
[8] https://www.reuters.com/world/us/white-house-presses-govt-ai-use-with-eye-security-guardrails-2024-10-24/
[9] https://www.infosecurity-magazine.com/news/white-house-ai-national-security/

-----

Based on the National Security Memorandum (NSM) on Artificial Intelligence, the United States is trying to protect and maintain its advantage in several key AI technologies and capabilities:

  1. Frontier AI models: The NSM focuses heavily on “frontier models,” which are defined as cutting-edge, general-purpose AI systems near the top of performance benchmarks[1][3]. These likely include large language models and other advanced AI systems developed by leading U.S. companies.
  2. Semiconductor design and manufacturing: The memo directs an assessment of the U.S. competitive position in semiconductor technology, which is crucial for AI development[1][3].
  3. Computational resources: The availability of high-performance computing and specialized AI hardware is highlighted as a key area to protect and advance[1][3].
  4. AI talent: The NSM emphasizes attracting and retaining highly skilled AI workers, indicating this human capital is seen as a critical resource to safeguard[1][3].
  5. AI training data and sources: The Department of Energy is tasked with evaluating AI training data, suggesting this is another area the U.S. aims to protect[3].
  6. AI safety and security practices: The memo outlines initiatives to develop and protect U.S. capabilities in AI safety testing, risk assessment, and security evaluation[3].
  7. AI applications in national security: While not specifying exact technologies, the NSM indicates a focus on protecting AI developments that could be applied to military, intelligence, and other national security functions[1][3].

The NSM also directs the Intelligence Community to prioritize gathering intelligence on competitors’ operations against the U.S. AI sector, suggesting a broad effort to protect U.S. AI technologies and know-how from economic and technological espionage[3]. This comprehensive approach indicates that the U.S. is trying to protect its entire AI ecosystem, from fundamental research and development through to applications and deployment, with a particular emphasis on maintaining leadership in the most advanced AI capabilities.

Citations:
[1] https://www.itpro.com/technology/artificial-intelligence/three-things-you-need-to-know-about-the-us-national-security-memorandum-on-ai
[2] https://www.scmp.com/news/china/diplomacy/article/3284118/first-us-national-security-memo-ai-sparks-fear-escalating-tech-war-china
[3] https://newsletter.safe.ai/p/ai-safety-newsletter-43-white-house
[4] https://cset.georgetown.edu/article/the-national-security-memorandum-on-artificial-intelligence-cset-experts-react/
[5] https://www.nytimes.com/2024/10/24/us/politics/biden-government-guidelines-ai.html
[6] https://www.whitehouse.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/
[7] https://apnews.com/article/artificial-intelligence-national-security-spy-agencies-abuses-a542119faf6c9f5e77c2e554463bff5a


McKinsey

Gen AI in corporate functions: Looking beyond efficiency gains

October 23, 2024 | Article

By Heiko Heimes
with Abhishek Shirali, Edward Woodcock, and Shilpa Goswami

Generative AI is already adding value for corporate and business functions. Here’s how it could add more.

In less than two years, generative artificial intelligence (gen AI) has become a mainstream tool with applications across almost every area of the economy. New McKinsey research shows that corporate and business functions—including finance, human resources, and customer care, among others—are ramping up their investment in gen AI technologies. A year ago, early adopters were experimenting with pilot projects based on “minimum viable product” gen AI tools. Now a significant minority have deployed gen AI use cases across their organizations.

Those users are broadly satisfied with their gen AI efforts. In our latest survey of senior business leaders (see sidebar, About our research), more than 75 percent of those who have deployed gen AI tools at scale say that those systems have met or exceeded expectations. Yet the data also suggests that companies are only scratching the surface of gen AI in corporate and business functions. While companies are experimenting with multiple use cases for the technology, even the fastest movers have only rolled out one or two full-scale gen AI applications.

The potential of generative AI is too great, and the risks too significant, for today’s approach to continue. Our research provides some clear hints about ways companies can generate more value, more quickly from their gen AI efforts. Specifically:

  1. Most companies are pursuing efficiency gains with gen AI, but leaders believe the real value of the technology will accrue from applications that transform the effectiveness of business functions.
  2. Overcoming barriers to gen AI development, deployment, and adoption requires a structured and systematic approach. Leading players tend to be those who centralize the management of gen AI technologies. That helps them take a holistic perspective on value creation through gen AI, and implement effective governance structures to accelerate deployment while managing the associated risks. 

Where we are now

Comparing this year’s survey with its counterpart from 2023, we see a dramatic acceleration in engagement with gen AI technologies. The proportion of organizations that are actively using (as opposed to just experimenting with) gen AI in their corporate functions has increased by a factor of five: the share of CXOs reporting that their function has rolled out the technology for at least one use case rose from 4 percent to 22 percent. Furthermore, of the organizations with successful deployments, more than half are using gen AI daily, and less than 5 percent of respondents report intermittent usage of once a month or less.

While uptake of gen AI has increased across the board, our survey reveals significant differences in the pace of evolution within different functions (Exhibit 1). The IT function has the highest maturity among those in our survey, with 36 percent of respondents saying they have deployed gen AI use cases. Customer care, HR, and legal are in the middle of the pack, with around a quarter of respondents actively using the technology. Only 6 percent of finance leaders say that they have rolled out gen AI applications.

Exhibit 1

IT is furthest along the road of active gen AI adoption, while finance may be in pilot purgatory.

That interfunctional difference can be attributed to several factors. IT teams may have better access to the skills needed to develop and integrate gen AI tools, for example. And the faster-moving functions have identified clear use cases, such as coding support in IT, customer-facing chatbots in customer service, and gen AI tools that can review and summarize documents for HR and legal teams.
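
To make the last of those use cases concrete, the sketch below shows roughly what a document review and summarization helper for HR or legal teams might look like. This is an illustrative example only, not something described in the article: it assumes the OpenAI Python SDK, an OPENAI_API_KEY environment variable, and a placeholder model name, prompt, and input file.

```python
# Minimal sketch of a gen AI document-summarization helper of the kind the
# article attributes to HR and legal teams. All specifics (SDK, model name,
# prompt wording, file name) are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_document(text: str, max_words: int = 200) -> str:
    """Return a short plain-language summary of an internal document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "You summarize internal documents for HR and legal reviewers.",
            },
            {
                "role": "user",
                "content": (
                    f"Summarize the following document in at most {max_words} words, "
                    f"highlighting obligations, deadlines, and risks:\n\n{text}"
                ),
            },
        ],
        temperature=0.2,  # keep summaries consistent across runs
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("sample_contract.txt") as f:  # hypothetical input file
        print(summarize_document(f.read()))
```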

Efficiency versus effectiveness

Our survey found that the most frequently adopted use cases for gen AI tended to favor applications that focused on automating tasks to improve efficiency: reducing the time and employee effort required to complete certain tasks. Effectively, these organizations are using gen AI to supercharge or even replace conventional digitization approaches such as robotic process automation (RPA). A smaller number of respondents are targeting greater effectiveness with gen AI: using the new technology to enhance service levels, improve business outcomes, or add new capabilities (Exhibit 2).

Exhibit 2

Corporate function leaders have tended to focus on opportunities that drive efficiency vs. more sophisticated use cases that improve insights and drive business performance.

Today’s focus on efficiency is probably because simpler, tactical uses of gen AI have been easier to deploy. In the medium term, we expect companies to be more ambitious about the application of the technology. In part, that’s because it can be hard to turn small time savings into meaningful cost savings. Today’s gen AI systems are capable of automating parts of roles, rather than whole jobs, making it difficult for companies to redeploy people freed up by automation. It’s also because effectiveness offers much more potential value. Improving forecast accuracy, understanding markets more deeply, or optimizing capital allocation might unlock truly significant cost savings and growth opportunities.

Leaders recognize the potential to apply gen AI in more strategic ways. For example, in the finance function the most frequently pursued use cases that had either been piloted or fully deployed include cost analytics (by 47 percent of CFOs responding to our survey), optimizing accounts payable approvals (44 percent), and fraud prevention checks (also 44 percent). These more tactical applications contrast with the most popular use cases CFOs plan to pursue in the future, such as cashflow optimization (which 46 percent plan to pursue next year) and revenue forecasting (which 30 percent plan to pursue), where the focus is on improving the effectiveness of the finance organization.

Building a gen AI engine

On average, the CXOs we surveyed believe it will take another three to five years to capture significant value from gen AI deployments in their functions (with between 55 and 80 percent of respondents, depending on the specific function, falling in this range). Leaders cited a wide range of issues they believe are hampering their gen AI ambitions, with inaccuracy and security risks at the top of the list, followed by challenges in selecting appropriate use cases for investment (Exhibit 3).

Exhibit 3

The main issues CXOs have with gen AI are concerns about accuracy, security, and understanding where it should be deployed.

These are valid concerns, but we believe they are exacerbated by the way many companies are approaching gen AI development. Often that means a highly distributed, bottom-up approach, with individual functions and business units conducting their own gen AI experiments and pilots with limited oversight or central coordination. 

In our survey, 35 percent of organizations that followed an enterprise-wide approach to gen AI investments have already successfully deployed at least one use case, while functions that pursued gen AI in a single business unit or region had only a 24 percent success rate (Exhibit 4). Companies were also ten percentage points more likely to be actively using gen AI when the underlying data was owned by the organization’s global business services (GBS) unit, rather than its IT function.

Exhibit 4

Organizations with an enterprise-wide approach to deployment had the greatest success in active use of gen AI in corporate functions.

These findings add weight to the idea that gen AI is too important a technology to leave to chance. Centralizing the oversight of gen AI development within the organization is a key enabling step that allows companies to make better decisions about where, how, and when they deploy gen AI in their corporate functions. Specifically, we believe that companies should establish structures and processes that enable them to do three things:

  1. Develop a robust perspective on where gen AI adds value. Ensure the organization is prioritizing investments with a clear sense of how and where gen AI can add value and what this value is.