On October 24th, the Biden White House released a new U.S. National Security Memorandum on Artificial Intelligence, and as you might expect, there’s a lot of language in it.
It’s worth writing about because we’re at a tipping point where new models are evolving very quickly.
Looking at this document and its annex, the Framework for Advancing AI Governance and Risk Management in National Security, there are several things to note about this directive.
Difficult language
The first thing many readers will notice is that both of these documents are primarily written in legalese.
Here’s how the drafters describe the scope of the framework document itself:
“The Framework for Advancing AI Governance and Risk Management in National Security (the “AI Framework”) is built on and meets the requirements in Section 4.2 of the National Security Memorandum on Advancing U.S. Leadership in AI, Leveraging AI to Achieve National Security Objectives, and Improving AI Safety, Security, and Trustworthiness (the “AI NSM”). It directs designated department heads to issue guidance to their respective components and sub-agencies to promote governance and risk management practices regarding the use of AI as a component of a National Security System (NSS). This AI Framework is intended to support and enable the U.S. government to continue to take proactive steps to preserve human rights, civil rights, freedoms, privacy, and security, and to ensure that AI is used in a manner consistent with the president’s authority as commander-in-chief to decide when to order military operations for the national defense. …The use of AI in a military context shall comply with the principles and measures enshrined in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, published by the United States on November 9, 2023.”
ChatGPT may have an easy time parsing it, but most human readers will struggle at least a little. As with any legal document, though, plain language is not a top priority.
Here’s how the drafters address the previous regulation, the Office of Management and Budget’s Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence memorandum, also known as OMB M-24-10.
“The AI Framework supplements, but does not replace or modify, OMB Memorandum M-24-10. This AI Framework shall cover AI used as a component of an NSS by or on behalf of the U.S. Government. It applies to both new and existing AI developed, used, or procured by or on behalf of the government, and applies to system functions that implement or rely on AI, rather than to the entire information system that incorporates AI.”
Suffice it to say, we’ll leave the rest of the fine print aside for now; read at your own risk.
Four pillars of national security for dealing with AI
Very clearly spelled out in the framework document are four key elements of a strategy for managing AI.
The first is to consider prohibited or “high-impact” use cases and associated risks related to artificial intelligence.
The second is to commit to “sufficiently robust minimum risk management practices.” This includes testing, as we will point out later.
The third is cataloging the AI use cases that the government deems “high impact,” essentially a documentation approach.
Fourth, it cites another element of the regulation: “effective training and accountability mechanisms.”
Protecting the nuclear football
Some parts of the memo are particularly interesting to foreign policy buffs, starting with the suggestion that “Chinese and Russian analysts will definitely study the NSM closely.”
The accompanying document includes a requirement to keep humans “in the loop” for decisions related to nuclear weapons. Experts point to the US-China AI safety talks held in Geneva in May of this year as a likely contributor to this important safeguard, which officials said should “reduce misunderstandings” between the two rivals.
Robust testing: part of the deal
Returning to the second pillar identified in the document, additional safeguards are required for high-impact AI use cases.
There are testing, notification, and impact-assessment requirements, all of which you can read in the memo’s own language. Essentially, these systems will be scrutinized to ensure they work as intended and do not have harmful side effects for society as a whole.
Appointing a chief AI officer
Another seemingly important aspect of this memo: the document indicates that the government plans to appoint a chief AI officer with the authority to waive high-impact requirements in specific cases.
Whatever the title ends up being, this reflects the private-sector practice of appointing someone responsible for overseeing the use of AI in general.
Those are five big takeaways from the unprecedented government document, but let’s dig a little deeper and hear what AI itself has to say.
Just for fun, we asked ChatGPT: “What does the document say?”
“The new National Security Memorandum (NSM) on Artificial Intelligence (AI) issued by the Biden Administration on October 24, 2024, focuses on leveraging AI to advance U.S. national security while ensuring responsible use,” ChatGPT responded. “The memorandum outlines several key priorities…”
This is ChatGPT’s list: Accelerate AI adoption, protect value/rights, strengthen AI infrastructure, and counter “adversarial AI.”
So we asked: “What do you think about this memorandum?”
In response, ChatGPT wrote at length:
“The Biden Administration’s National Security Memorandum on AI reflects a timely and comprehensive approach to managing both the opportunities and risks that AI poses in the national security context. It seeks to strike a balance between leveraging AI to maintain a technological advantage in competition with adversaries like China, and ensuring that its use is in line with democratic values and protects civil liberties.
One of the memorandum’s strengths is its focus on securing AI infrastructure, including strengthening chip supply chains and prioritizing AI in supercomputing. This is consistent with broader strategies seen in initiatives like the CHIPS Act that address the importance of semiconductor production to AI leadership. A focus on private sector collaboration really recognizes where much of the AI innovation is happening and will help U.S. companies remain globally competitive.
The memorandum also brings an ethical dimension by stressing that the deployment of AI must protect civil rights and privacy. This is critical given public concerns about surveillance and bias in AI systems. However, ensuring transparency and accountability in AI deployment by intelligence and defense agencies is difficult, especially given the sensitive nature of some applications.
One potential criticism is whether the U.S. can strike a realistic balance between the need for rapid adoption of AI and the need for strict ethical safeguards, especially when global competitors may not impose similar constraints. There are also concerns from civil society organizations that introducing AI in the security sector could perpetuate bias and violations of civil liberties.
Overall, this memorandum sets a solid framework, but its effectiveness will depend on the practical implementation of these guidelines across various federal agencies, and on how well the U.S. coordinates with international partners on the responsible use of AI.”
These answers demonstrate the power of AI to shape the discussion and contribute to a meaningful conversation about what we are seeing as we move into a year of fast-paced innovation and development.