
Framework for State, Local, Tribal, and Territorial Use of Artificial Intelligence for Public Benefit Administration

Date of Publication: April 29, 2024

1. OVERVIEW

Artificial intelligence (AI) is a powerful technology that presents both opportunities and risks for the delivery of public benefits. This framework outlines USDA’s principles and approach to support states, localities, tribes, and territories in responsibly using AI in the implementation and administration of USDA’s nutrition benefits and services.

This framework is in response to Section 7.2(b)(ii) of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: 1

(ii) The Secretary of Agriculture shall, within 180 days of the date of this order and as informed by the guidance issued pursuant to section 10.1(b) of this order, issue guidance to state, local, tribal, and territorial public-benefits administrators on the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs administered by the Secretary, to ensure that programs using those systems:

(A) maximize program access for eligible recipients;
(B) employ automated or algorithmic systems in a manner consistent with any requirements for using merit systems personnel in public-benefits programs;
(C) identify instances in which reliance on automated or algorithmic systems would require notification by the state, local, tribal, or territorial government to the Secretary;
(D) identify instances when applicants and participants can appeal benefit determinations to a human reviewer for reconsideration and can receive other customer support from a human being;
(E) enable auditing and, if necessary, remediation of the logic used to arrive at an individual decision or determination to facilitate the evaluation of appeals; and
(F) enable the analysis of whether algorithmic systems in use by benefit programs achieve equitable outcomes.

1.1. USDA Programs in Relation to the Framework

USDA’s Food and Nutrition Service (FNS) operates 16 federal nutrition programs that affect one in four Americans each year. These programs serve a variety of populations, from infants and children to the elderly. All USDA nutrition programs are federally funded and administered by state, local, tribal, or territorial (SLTT) agencies in partnership with FNS and are covered by this framework, including:

  • Child and Adult Care Food Program
  • Commodity Supplemental Food Program
  • Farmers Market Nutrition Program
  • Food Distribution Program on Indian Reservations
  • Fresh Fruit and Vegetable Program
  • National School Lunch Program
  • School Breakfast Program
  • Senior Farmers Market Nutrition Program
  • Special Milk Program
  • Special Supplemental Nutrition Program for Women, Infants, and Children (WIC)
  • Summer Electronic Benefit Transfer (EBT) Program
  • Summer Food Service Program
    • Group meal sites (congregate option)
    • To-go or delivered meals (non-congregate option)
  • Supplemental Nutrition Assistance Program (SNAP)
    • Nutrition Assistance Program (NAP) block grants for American Samoa, the Commonwealth of the Northern Mariana Islands, and the Commonwealth of Puerto Rico in lieu of SNAP
  • The Emergency Food Assistance Program
  • The Patrick Leahy Farm to School Program
  • USDA Foods in Schools

1.2. Scope and Applicability

This framework provides recommendations for balancing the opportunities and risks of AI in SLTT systems and technology used to administer FNS nutrition programs. It supports FNS’s role in overseeing the administration of USDA nutrition programs and ensuring the responsible use of technology and innovation that produces accurate and equitable outcomes and a customer experience that engenders public trust. This framework also describes how FNS will prepare to support program and SLTT agency use of AI technologies.

This framework applies to SLTT government agencies in their administration of the 16 federal nutrition programs listed in Section 1.1, with “local agencies” being inclusive of local program operators of child nutrition programs. USDA recognizes that tribal nations and Indian Tribal Organizations administering applicable FNS programs in Section 1.1 may require flexibility and will include consultation on this framework to ensure that it is consistent with any related federal or tribal policies and supports tribal sovereignty. While this document references states, localities, tribes, and territories as a group, USDA recognizes that tribes are sovereign nations, and as policies are developed at the program-specific level, USDA will differentiate between these entities as appropriate.

This framework applies to all AI technology solutions used by SLTT agencies to support program administration and service delivery including, but not limited to, management information systems, case management systems, online application forms, online portals for participant management of benefits, reporting and analysis tools, and mobile applications. These recommendations apply to all new and existing uses of AI that are developed, used, or procured by SLTT agencies in the administration of FNS programs. This framework applies to system functionality that implements or is reliant on AI, and not to the entirety of an information system that incorporates AI.

This framework applies to all forms of AI-enabled tools and capabilities that meet the definition of artificial intelligence in Section 1.3. It does not include automation tools or capabilities whose behavior consists only of executing human-defined rules or that are trained solely to repeat an observed practice exactly as it was conducted. Examples of automation that do not use AI include:

  • Interactive voice response systems that act solely on buttons pressed by callers;
  • Interactive forms based on dynamic, human-defined logic that skip fields that are not relevant to applicants or flag inputs that may affect eligibility;
  • Auto-population of data into forms to facilitate renewals or enrollment in another benefit program, with the opportunity for an applicant to review and edit entries before submission; and
  • Using robotic process automation to enter data into a case file from a trusted data source and to flag the new information to a case worker for evaluation.
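
The examples above rely entirely on human-defined rules. For illustration only, the following minimal sketch shows what such rules-only logic can look like in code; the field names, thresholds, and functions are hypothetical and are not drawn from any FNS or SLTT system. Because every branch is explicitly authored by a person, behavior like this falls outside the definition of AI in Section 1.3.

```python
# Minimal sketch of human-defined form logic (not AI). Every rule below is
# explicitly authored, so behavior is fully determined by these conditions.
# Field names and thresholds are hypothetical, for illustration only.

def fields_to_skip(application: dict) -> set:
    """Return form fields that are not relevant to this applicant."""
    skip = set()
    if not application.get("has_dependents"):
        skip.add("dependent_care_expenses")
    if application.get("housing_status") == "homeless":
        skip.add("shelter_costs")
    return skip

def flags_for_review(application: dict) -> list:
    """Flag inputs that may affect eligibility for a case worker to review."""
    flags = []
    if application.get("monthly_income", 0) > 5000:
        flags.append("income above screening threshold")
    if application.get("household_size", 1) > 10:
        flags.append("unusually large household size")
    return flags

if __name__ == "__main__":
    sample = {"has_dependents": False, "housing_status": "renter",
              "monthly_income": 5200, "household_size": 3}
    print(fields_to_skip(sample))    # {'dependent_care_expenses'}
    print(flags_for_review(sample))  # ['income above screening threshold']
```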

While this framework does not indicate any change in FNS’s approach to overseeing SLTT agencies’ use of automated systems that do not use AI, SLTT agencies are strongly encouraged to apply these principles and best practices where applicable to non-AI enabled automated systems. SLTT agencies are particularly encouraged to identify non-AI enabled automated systems with rights-impacting or safety-impacting uses as described in Section 2.2, evaluate those systems for bias or risks to rights or safety, and mitigate potential risks or harms. All SLTT systems used to administer USDA nutrition programs should protect rights and safety, advance equity, uphold accountability, and engender public trust, no matter what technology is used. FNS welcomes SLTT agencies, as well as beneficiaries and their advocates, to proactively engage with us to assess how to best support implementation of these principles and best practices, including with respect to automated systems that do not use AI.

This framework does not supersede or displace existing legal requirements for FNS nutrition programs, such as those regarding merit system personnel, appeal rights, and state systems advance planning. This framework does not address issues that are present regardless of the use of AI and does not supersede other, more general federal or USDA policies, including those policies that apply to AI but are not focused specifically on AI, such as policies related to privacy or cybersecurity. This framework does not apply to systems used by SLTT agencies for purposes outside of their administration of FNS programs.

This framework was informed by the following goals:

  • Support tailoring for specific programs, technologies, and uses – FNS understands each nutrition program and application of AI presents different opportunities and risks. This framework seeks to set high-level principles and guardrails that should apply to all programs without creating “one size fits all” processes. Programs have the flexibility to calibrate governance as appropriate for the needs and risks of the program and can evolve and iterate their guidance within the framework as new risks, opportunities, or technologies emerge.
  • Provide flexibilities for states, localities, tribes, and territories – SLTT agencies that administer FNS nutrition programs have latitude to tailor service delivery to meet the needs of the populations they serve, within the bounds of program requirements. FNS encourages responsible innovation by SLTT agencies administering FNS programs and the exploration and use of technologies that improve program administration while protecting vulnerable populations. This framework seeks to define principles and recommendations for using AI in SLTT agency administration of FNS programs, while enabling flexibility in ways SLTT agencies administer programs in the context of a rapidly changing technology environment.
  • Align with existing requirements and frameworks – This framework was informed by and seeks to align with federal AI guidance and frameworks including, but not limited to, Executive Order 14110, 2 OMB Memorandum M-24-10, 3 and the NIST AI Risk Management Framework 4 to leverage identified best practices and to facilitate USDA’s compliance with federal requirements and standards.

This framework will focus on high-level principles and recommendations consistent for AI usage across all FNS programs and AI technologies. FNS programs will issue regulations and/or guidance aligned with this framework and consistent with applicable law. Program-specific regulations and/or guidance will include required processes and practices for governance, risk management, and reporting and may define requirements for specific technologies, such as generative AI.

1.3. Definitions

For the purposes of this framework, the following definitions are applicable:

Artificial Intelligence (AI): The term “artificial intelligence” has the meaning provided in Section 6 of OMB Memorandum M-24-10:

The term “artificial intelligence” has the meaning provided in Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019, 5 which states that “the term ‘artificial intelligence’ includes the following”:

1. Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
2. An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
3. An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
4. A set of techniques, including machine learning, that is designed to approximate a cognitive task.
5. An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.

For the purposes of this memorandum, the following technical context should guide interpretation of the definition above:

1. This definition of AI encompasses, but is not limited to, the AI technical subfields of machine learning (including deep learning as well as supervised, unsupervised, and semi-supervised approaches), reinforcement learning, transfer learning, and generative AI.
2. This definition of AI does not include robotic process automation or other systems whose behavior is defined only by human-defined rules or that learn solely by repeating an observed practice exactly as it was conducted.
3. For this definition, no system should be considered too simple to qualify as covered AI due to a lack of technical complexity (e.g., the smaller number of parameters in a model, the type of model, or the amount of data used for training purposes).
4. This definition includes systems that are fully autonomous, partially autonomous, and not autonomous, and it includes systems that operate both with and without human oversight.

Automation Bias: The term “Automation Bias” has the meaning provided in Section 6 of OMB Memorandum M-24-10:

The term “automation bias” refers to the propensity for humans to inordinately favor suggestions from automated decision-making systems and to ignore or fail to seek out contradictory information made without automation.

Equity: The term “equity” has the meaning provided in Section 10(a) of Executive Order 14091. 6

Generative AI: The term “generative AI” has the meaning provided in Section 3(p) of Executive Order 14110.

Human-In-The-Loop: The term “human-in-the-loop” has the meaning provided in NIST’s “The Language of Trustworthy AI: An In-Depth Glossary of Terms:” 7

An AI system that requires human interaction.

Machine Learning: The term “machine learning” has the meaning provided in Section 3(t) of Executive Order 14110.

Public Benefits Program: The term “public benefits program” has the meaning provided in footnote 6 of OMB Memorandum M-22-10: 8

“[Public benefits programs]” should be construed widely to include social welfare programs; social insurance programs; tax credits; and other cash, loan, or in-kind assistance programs, particularly those intended to support in-need individuals or communities.

Rights-Impacting AI: The term “Rights-Impacting AI” has the meaning provided in Section 6 of OMB Memorandum M-24-10:

The term “rights-impacting AI” refers to AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s:

1. Civil rights, civil liberties, or privacy, including but not limited to freedom of speech, voting, human autonomy, and protections from discrimination, excessive punishment, and unlawful surveillance;
2. Equal opportunities, including equitable access to education, housing, insurance, credit, employment, and other programs where civil rights and equal opportunity protections apply; or
3. Access to or the ability to apply for critical government resources or services, including healthcare, financial services, public housing, social services, transportation, and essential goods and services.

Risks from the Use of AI: The term “Risks from the Use of AI” has the meaning provided in Section 6 of OMB Memorandum M-24-10:

The term “risks from the use of AI” refers to risks related to efficacy, safety, equity, fairness, transparency, accountability, appropriateness, or lawfulness of a decision or action resulting from the use of AI to inform, influence, decide, or execute that decision or action. This includes such risks regardless of whether:

1. the AI merely informs the decision or action, partially automates it, or fully automates it;
2. there is or is not human oversight for the decision or action;
3. it is or is not easily apparent that a decision or action took place, such as when an AI application performs a background task or silently declines to take an action; or
4. the humans involved in making the decision or action or that are affected by it are or are not aware of how or to what extent the AI influenced or automated the decision or action.

While the particular forms of these risks continue to evolve, at least the following factors can create, contribute to, or exacerbate these risks:

1. AI outputs that are inaccurate or misleading;
2. AI outputs that are unreliable, ineffective, or not robust;
3. AI outputs that are discriminatory or have a discriminatory effect;
4. AI outputs that contribute to actions or decisions resulting in harmful or unsafe outcomes, including AI outputs that lower the barrier for people to take intentional and harmful actions;
5. AI being used for tasks to which it is poorly suited or being inappropriately repurposed in a context for which it was not intended;
6. AI being used in a context in which affected people have a reasonable expectation that a human is or should be primarily responsible for a decision or action; and
7. the adversarial evasion or manipulation of AI, such as an entity purposefully inducing AI to misclassify an input.

This definition applies to risks specifically arising from using AI and that affect the outcomes of decisions or actions. It does not include all risks associated with AI, such as risks related to the privacy, security, and confidentiality of the data used to train AI or used as inputs to AI models.

Safety-Impacting AI: The term “Safety-Impacting AI” has the meaning provided in Section 6 of OMB Memorandum M-24-10:

The term “safety-impacting AI” refers to AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of:

1. Human life or well-being, including loss of life, serious injury, bodily harm, biological or chemical harms, occupational hazards, harassment or abuse, or mental health, including both individual and community aspects of these harms;
2. Climate or environment, including irreversible or significant environmental damage;
3. Critical infrastructure, including the critical infrastructure sectors defined in Presidential Policy Directive 21 or any successor directive and the infrastructure for voting and protecting the integrity of elections; or,
4. Strategic assets or resources, including high-value property and information marked as sensitive or classified by the federal government.

Vital Document: The term “vital document” has the meaning provided in the Food and Nutrition Service Language Access Plan: 9

Paper or electronic written material that contains information that is critical for accessing an agency/office’s programs or activities or is required by law. Translation of vital documents is required if requested.

1.4. Principles for AI Use in Benefit Administration

USDA’s approach to AI and innovation in SLTT systems is guided by the following principles:

  • Protecting rights and safety and advancing equity – AI must be employed in ways that protect applicant and participant rights, opportunities, access, and receipt of critical services. AI technologies used in the administration of public benefits must include safeguards against bias, discrimination, infringements on privacy, and other harms from AI and must employ safeguards that ensure they do not create inequities or barriers to accessing vital nutrition programs.
  • Upholding accountability for program decisions and operations – AI must be employed such that SLTT agencies remain accountable for compliance with federal requirements, including those regarding program decisions (e.g., eligibility determinations, penalties), program performance (e.g., accuracy, timeliness), and operations (e.g., availability of service delivery channels that provide alternatives to AI). Affected and interested parties should know how AI is being used and be able to provide feedback on public-facing systems.
  • Promoting responsible innovation that engenders public trust – AI should be employed only when it is well-suited for a given task and after risks from the use of AI have been mitigated. SLTT agencies are encouraged to prioritize AI uses that reduce barriers or burden to accessing benefits and are neither rights-impacting nor safety-impacting.

2. PROTECTING RIGHTS AND SAFETY AND ADVANCING EQUITY

2.1. Protecting Civil Rights

This framework centers protection of civil rights and civil liberties in the use of AI stemming from its assignment in Section 7.2 of Executive Order 14110, “Protecting Civil Rights Related to Government Benefits and Programs.” Section 7.2(a) directs federal agencies to “use their respective civil rights and civil liberties offices and authorities—as appropriate and consistent with applicable law—to prevent and address unlawful discrimination and other harms that result from uses of AI in federal government programs and benefits administration.”

AI holds enormous potential to streamline access to nutrition benefits, reduce administrative burden, and improve the customer experience. That potential must be realized on a foundation of preserving the rights and safety for all populations affected by a program. The public should not need to trade or risk their rights or safety to benefit from AI, and AI must not be used in ways that exacerbate bias or inequality. AI uses must comply with nondiscrimination laws, rules, and regulations, as applicable.

2.2. Managing Risks to Rights and Safety

FNS will work with SLTT agencies to manage risks proportionate to the risk presented by a use of AI. This section describes four categorical groupings for AI, from uses that typically present the highest risk to uses that typically present the lowest risk when used in FNS programs. These groupings are not meant to be comprehensive but provide an example framework for categorization and proportionate actions.

Uses of AI Presumed to Impact Rights or Safety

Uses of AI that are rights-impacting or safety-impacting present the most risk and may require FNS notification, review, and governance. Uses of AI considered rights-impacting or safety-impacting include both AI systems that directly control outcomes and AI systems that influence outcomes or human decision making. For example, an AI tool that directly makes and enacts program eligibility decisions (if allowed under applicable law) would be considered a rights-impacting use of AI, as would an AI tool that recommends eligibility decisions for a case worker to review and act on.

The following table lists uses of AI that are presumed to be rights-impacting and/or safety-impacting in OMB Memorandum M-24-10 and provides examples of potential uses of AI relevant to administration of FNS nutrition programs that would, by extension, be considered rights-impacting or safety-impacting. 10 Please note that not all examples are applicable to all FNS programs, as individual programs may have laws prohibiting such uses.

FNS programs may designate other uses of AI as presumed to be rights-impacting or safety-impacting. These uses may or may not mirror uses of AI enumerated as presumed to be rights-impacting or safety-impacting in OMB Memorandum M-24-10, as its lists are not to be considered exhaustive.

Purpose for which AI is Presumed to Impact Rights or Safety | Related Nutrition Program Use of AI
Rights-impacting – Appendix I(2)(l)
Making decisions regarding access to, eligibility for, or revocation of critical government resources or services; allowing or denying access—through biometrics or other means (e.g., signature matching)—to IT systems for accessing services for benefits; detecting fraudulent use or attempted use of government services; assigning penalties in the context of government benefits;

Benefit administration – Determining eligibility for a benefit at initial application for benefits or redetermination, determining benefit amounts, processing changes that affect eligibility or benefit levels, terminating benefits, appeals of eligibility determinations.

Integrity and enforcement – Analyzing data for program violations or fraud; assessing sanctions or penalties to retailers, program operators, households, or individuals; authorizing or disqualifying vendors, markets, or service providers.

Access and use of benefit systems – Allowing or denying access to online applications, participant portals, or other SLTT systems for accessing benefit services.

Rights-impacting – Appendix I(2)(i)
Determining the terms or conditions of employment, including pre-employment screening, reasonable accommodation, pay or promotion, performance management, hiring or termination, or recommending disciplinary action; performing time-on-task tracking; or conducting workplace surveillance or automated personnel management;

Workforce management – Automated assignment of cases to case workers based on the predicted complexity of the case, with impacts on the terms or conditions of employment (e.g., performance management).

Employment and Training evaluation – Assessing program participants’ suitability for employment and training opportunities; matching participants to work opportunities.

Safety-impacting – Appendix I(1)(j)
Rights-impacting – Appendix I(2)(j)
Carrying out the medically relevant functions of medical devices; providing medical diagnoses; determining medical treatments; providing medical or insurance health-risk assessments; providing drug-addiction risk assessments or determining access to medication; conducting risk assessments for suicide or other violence; detecting or preventing mental-health issues; flagging patients for interventions; allocating care in the context of public insurance; or controlling health-insurance costs and underwriting;

Health screening or risk assessment – Food insecurity screening, nutrition risk assessment, assessment or screening for referral to additional services or for interventions (e.g., mental health services, domestic violence support).

Nutrition tailoring – Modifying food packages, benefit amounts, or medically tailored meals for an individual’s specific nutrition needs.

Rights-impacting – Appendix I(2)(m)
Translating between languages for the purpose of official communication to an individual where the responses are legally binding; providing live language interpretation or translation, without a competent interpreter or translator present, for an interaction that directly informs an agency decision or action;

Translation of program materials – Translation of vital documents without validation by and accountability of a human translator, including online and/or paper applications and application guides; public notifications and outreach that explain eligibility, program rules, or appropriate use and disqualifying conduct; discrimination complaint process and other feedback mechanisms; and materials critical to equitable program participation by eligible people with limited English proficiency (LEP).

Live translation – Live translation without a competent interpreter or translator present for an interaction that directly informs an agency decision or action, such as an eligibility interview, nutrition screening, appeal, or administrative hearing.

SLTT agencies are responsible for notifying FNS and/or obtaining FNS approval for the AI uses outlined in OMB Memorandum M-24-10 if those AI uses fall under an existing requirement to notify and/or obtain FNS approval. SLTT agencies must determine what program system changes and functions incorporating AI—particularly rights-impacting or safety-impacting AI—trigger notice to and/or approval by FNS per existing FNS program requirements. FNS programs should seek to regulate and/or issue guidance to expand notification and approval requirements for rights-impacting and safety-impacting uses of AI to the fullest extent consistent with applicable law.

Uses of AI That May Impact Rights or Safety

Some uses of AI are not presumptively rights-impacting or safety-impacting but could impact rights or safety depending on how they are applied. These uses of AI do not patently influence outcomes or decision-making but do inform outcomes or decision-making in ways that could impact rights or safety. Examples include, but are not limited to:

  • Staff support tools – Internal AI-powered chatbots or policy summarization tools used by call center staff, eligibility workers, or other program administration staff to answer questions in real time. These tools may be rights-impacting if they lead to increased rates of incorrect information being provided or adverse outcomes.
  • Customer service tools – Public-facing AI-powered chatbots or phone support may become rights-impacting depending on how they are used to support service delivery. FNS programs should work with SLTT agencies to consider the impact of risks from the use of AI, such as increasing errors or adverse outcomes, leading eligible populations to believe they are ineligible for public benefits, providing incorrect program information, or creating barriers to accessing public benefits.

These uses will not be presumed to be rights-impacting or safety-impacting, but they should be reported to FNS and assessed for whether they present rights-impacting or safety-impacting risks. If a use of AI is determined to be rights-impacting or safety-impacting, it should be held to the same reporting, review, and governance processes as a presumptive use.

AI with Human Oversight

Uses of AI-enabled technologies that include human oversight or review can also present lower risks if best practices for human-in-the-loop processes are followed. 11 Examples of these uses include:

  • Creating, summarizing, or transforming (such as rewriting in plain language) public-facing program materials with validation by and accountability of a human with program expertise;
  • Translating vital documents and/or public-facing program materials with validation by and accountability of a skilled human translator;
  • Providing auto-captioning in addition to having a live American Sign Language interpreter;
  • Converting scans of physical documents into machine-readable formats for further analysis and/or improved accessibility after validation by a human with program expertise; and
  • Creating draft meal plans to be reviewed and updated by a child nutrition program operator to ensure meals meet program requirements, such as nutrition and reimbursement requirements.

In these cases, although the risk is lower when there is sufficient human review of the output, there may be substantial risk if the review does not occur or is done by someone without proper expertise to evaluate the AI output. Therefore, it will be important for SLTT agencies to establish clear, effective human oversight protocols, document those processes, train staff, and conduct periodic oversight to ensure adherence. FNS programs may seek to review such protocols and oversight mechanisms and require reporting on the use of AI. Lower risk uses of AI may still require FNS notice or approval, depending on existing legal requirements.

Enabling Uses of AI

Uses of AI-enabled technologies that are not rights-impacting or safety-impacting and do not patently influence outcomes typically present lower risks. Examples of these uses include:

  • Interactive Voice Response (IVR) technology for call centers that uses voice recognition to assist callers in navigating menus and routing calls;
  • Optical Character Recognition (OCR) to transcribe information from uploaded documents or paper forms;
  • Chatbots using natural language processing to better understand user questions, with human-coded, logic-based preset outputs, not generative AI responses;
  • Sentiment analysis/natural language processing to categorize major themes and trends in unstructured text for customer experience and customer satisfaction surveys, helpdesk tickets, or social media posts referencing a benefits program;
  • Creation of synthetic data for testing information technology systems; and
  • AI-enabled search tools that answer questions about program requirements or policies by directing caseworkers to the relevant section of an official policy manual or other primary source.

FNS programs may require reporting of these uses of AI.

2.3. Protecting Against Bias and Advancing Equity

Bias can occur when employing AI even in the absence of prejudice, partiality, or discriminatory intent. 12 SLTT agencies employing AI should proactively assess and mitigate factors that contribute to bias, algorithmic discrimination, or that create inequitable outcomes for protected classes or underserved communities. These factors include, but are not limited to, tools, datasets, system flows, and business processes. These assessments should not be a one-time activity; they should occur before any procurements and also be integrated into processes for design, development, implementation, testing, training, and ongoing monitoring. 13 Assessments should also consider the risks of AI beyond its intended use.

Processes to assess and mitigate bias should include all three categories of AI bias identified by NIST: 14

  1. Systemic bias – Procedures and practices that operate in ways that result in certain groups being advantaged or favored and others being disadvantaged or devalued. Systemic bias can be present in AI datasets, organizational norms, practices, and processes, and the public interacting with AI systems.
  2. Statistical and computational bias – Errors introduced when an AI is trained on data that is not representative of the population.
  3. Human-cognitive bias – How an individual or group perceives and uses AI information to make a decision or fill in missing information, or how humans think about the purposes and functions of an AI system.
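
For illustration only, the following minimal sketch shows one way an agency might begin checking for statistical and computational bias by comparing the composition of a training dataset to reference population shares. The group labels, counts, reference shares, and tolerance are hypothetical assumptions, not FNS standards, and such a check is only one part of a broader bias assessment.

```python
# Minimal sketch: compare training-data composition to a reference population.
# Group labels, counts, shares, and the tolerance are hypothetical illustrations.

TRAINING_COUNTS = {"group_a": 7200, "group_b": 1800, "group_c": 1000}
POPULATION_SHARES = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}
TOLERANCE = 0.05  # flag groups over- or under-represented by more than 5 points

def representation_gaps(counts, reference):
    """Return observed-minus-reference share for each group."""
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in reference.items()}

if __name__ == "__main__":
    for group, gap in representation_gaps(TRAINING_COUNTS, POPULATION_SHARES).items():
        status = "REVIEW" if abs(gap) > TOLERANCE else "ok"
        print(f"{group}: observed minus reference = {gap:+.2%} [{status}]")
```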

It is also important that processes to identify and mitigate bias in AI systems include consultation with SLTT civil rights or civil liberties offices and affected groups and communities, including recipients of benefits. 15 Consultations should be initiated early enough in system design and development that feedback can be meaningfully incorporated into decision-making, and should be repeated throughout design, development, and use of the AI, to account for new issues that may emerge.

Rights-impacting and safety-impacting uses of AI should be evaluated for biased or inequitable outcomes under conditions that mirror real-world use before they are used in program administration. SLTT agencies should capture and retain system information needed to support system evaluations, such as logs and relevant system records. Rights-impacting and safety-impacting uses of AI should be reevaluated for bias on at least an annual basis and after any significant modification to the AI or the conditions or context in which the AI is used. Periodic reevaluation is important to detect any emergent biases as participant demographics and program rules change over time. If an evaluation determines that inaccurate, biased, or disparate outcomes are produced, the AI cannot be used. Evaluations should assess disparate impacts on classes protected by federal nondiscrimination laws and should also assess impacts on underserved communities that are particularly vulnerable to additional burden, barriers, or disruptions in benefit access. 16
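
For illustration only, the sketch below shows one simple starting point for such an evaluation: computing outcome rates by group from retained decision logs and comparing them. The records, group labels, and the 0.8 reference ratio are hypothetical assumptions, not FNS criteria, and a real evaluation would be conducted with appropriate legal and statistical guidance.

```python
# Minimal sketch: compare approval rates across groups using retained decision logs.
# Records, group labels, and the 0.8 reference ratio are illustrative only.
from collections import defaultdict

decisions = [  # hypothetical log records retained to support evaluation
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": False},
    {"group": "group_b", "approved": True},
    {"group": "group_b", "approved": False},
    {"group": "group_b", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: approvals[group] / totals[group] for group in totals}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest if highest else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.0%}, ratio vs. highest {ratio:.2f} [{flag}]")
```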

2.4. Preserving Options to Opt-Out of AI

Even with the above protections in place, some individuals and groups will not want to use AI-enabled systems. SLTT agencies should provide options to opt-out of the use of public-facing AI for a human alternative, wherever practicable. Where a human alternative is not feasible, an automated system that does not use AI should be provided. For example, if an applicant contacts a call center during business hours, they should be able to opt out of interacting with an AI-powered virtual agent to talk with a human representative. If the applicant calls after hours, when call center staff are not on duty, they should be able to opt out of the virtual agent to access a multilingual phone tree of common functions, such as checking benefit balances or hearing application status.
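
For illustration only, the following minimal sketch shows routing logic consistent with that example: callers who opt out reach a human representative during staffed hours and a rules-only phone tree after hours. The business hours, option names, and routing targets are hypothetical assumptions.

```python
# Minimal sketch of call routing that honors an opt-out from an AI virtual agent.
# Business hours, option names, and routing targets are hypothetical.
from datetime import time

BUSINESS_START, BUSINESS_END = time(8, 0), time(17, 0)

def route_call(now: time, caller_opts_out: bool) -> str:
    if not caller_opts_out:
        return "ai_virtual_agent"
    if BUSINESS_START <= now <= BUSINESS_END:
        # Opt-out during staffed hours goes to a human representative.
        return "human_representative"
    # After hours, fall back to a rules-only multilingual phone tree
    # (e.g., check benefit balance, hear application status), not the AI agent.
    return "non_ai_phone_tree"

if __name__ == "__main__":
    print(route_call(time(10, 30), caller_opts_out=True))   # human_representative
    print(route_call(time(22, 15), caller_opts_out=True))   # non_ai_phone_tree
    print(route_call(time(22, 15), caller_opts_out=False))  # ai_virtual_agent
```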

Opt-out options should be easy to find and access and be provided in all languages required to be supported. Opt-out options must be broadly accessible to people with disabilities and provide a level of service (e.g., wait time, administrative burden) that is not disproportionate to AI alternatives. When designing opt-out options and services, SLTT agencies should consider that people with disabilities; individuals with LEP; people experiencing disasters, financial shock, or trauma; and people from underserved communities may be more likely to opt out of AI for human alternatives.

Opt-out options are not required for back-end processes, employee-facing functions, or for enabling functions, such as scanning forms using AI-enabled OCR. Opt-out alternatives are also not required when AI is solely used for the prevention, detection, or investigation of fraud.

3. UPHOLDING ACCOUNTABILITY

3.1. Transparency and Accountability

SLTT agencies should be transparent about how they use AI in the administration of FNS programs. Public-facing systems using AI should provide a prominent notice that AI is being used and a plain-language description of how AI contributes to outcomes. SLTT agencies using AI for back-end program administration should provide notice in program materials, such as on benefit applications and program websites. Any notices or messages must meet language access requirements, including for individuals with LEP, and must be available in formats that are accessible for individuals with disabilities.

As described in Section 2.3, AI systems should be developed in consultation with affected and interested groups, including program participants, with their feedback incorporated into system design, development, and use. Public-facing AI systems should also include channels for users to provide feedback on the customer experience and to report inaccurate information or issues. This feedback should be regularly reviewed and used to inform system updates. Programs may require that customer experience metrics and a summary of feedback are published on a public site on a regular basis, or programs may recommend this as a best practice.

Both public-facing AI systems and internal AI systems should include channels for SLTT agency employees, contractors, advocates, local operators and partners to report inaccurate information or adverse actions taken by AI systems to inform system updates. Programs may require that summarized reports or adverse event metrics are published on a public site on a regular basis, or programs may recommend this as a best practice.

3.2. Human Oversight and Human-in-the-Loop Processes

Some risks from the use of AI can be mitigated by requiring a “human in the loop,” with human oversight of AI functions or a requirement that a human approve AI recommendations before they are enacted. It is important to carefully design these “human in the loop” processes to ensure that human oversight provides the intended validation of AI outputs and does not result in human actions becoming a “rubber stamp” without the expected scrutiny. Refactoring AI business processes to add human oversight should not be used as an alternative to addressing a root cause of bias or errors in an AI system.

SLTT agencies should ensure that staff providing human oversight understand how the AI system functions, what an accurate decision looks like, and how to evaluate a system’s decisions. Staff should understand the types of errors their role is meant to detect and have a workload appropriate for providing the expected level of oversight. SLTT agencies should ensure staff have the authority to override or alter the decision under review and should be able to escalate patterns of errors they have observed for further analysis and remediation.

Staff who provide human oversight for AI-enabled functions should be trained on relevant AI topics and should receive sufficient training in business processes to assess outputs of AI functions or models for accuracy. For example, staff who use an AI-enabled tool to advise them in benefit calculations should be trained in how to calculate benefit amounts without the assistance of the AI tool, so they can evaluate the AI’s recommendation.

SLTT agencies should regularly evaluate business processes that interact with AI systems by observing their execution under real-world conditions to determine the effect of automation bias. If AI outcomes that are intended to have human review or that are intended to influence, but not direct, decision-making are being accepted without appropriate scrutiny, SLTT agencies should stop using the AI until business processes can be refactored and staff can be retrained.
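
One observable signal of automation bias is an acceptance rate near 100 percent across many reviews. For illustration only, the sketch below computes per-reviewer acceptance rates from hypothetical review logs and flags patterns that may warrant a closer look; the minimum sample size and threshold are illustrative assumptions, and such a metric supplements, rather than replaces, direct observation of business processes.

```python
# Minimal sketch: flag possible rubber-stamping of AI recommendations.
# Review records, the minimum sample size, and the threshold are illustrative.
from collections import defaultdict

reviews = [  # hypothetical records: reviewer and whether the AI output was accepted
    {"reviewer": "worker_1", "accepted": True},
    {"reviewer": "worker_1", "accepted": True},
    {"reviewer": "worker_1", "accepted": False},
    {"reviewer": "worker_2", "accepted": True},
    {"reviewer": "worker_2", "accepted": True},
    {"reviewer": "worker_2", "accepted": True},
]

MIN_REVIEWS, ACCEPT_THRESHOLD = 3, 0.98

counts, accepted = defaultdict(int), defaultdict(int)
for review in reviews:
    counts[review["reviewer"]] += 1
    accepted[review["reviewer"]] += review["accepted"]

for reviewer, n in counts.items():
    rate = accepted[reviewer] / n
    note = " -- review for automation bias" if n >= MIN_REVIEWS and rate >= ACCEPT_THRESHOLD else ""
    print(f"{reviewer}: accepted {rate:.0%} of {n} recommendations{note}")
```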

3.3. Decision-Making and Appeals

AI systems should not directly make adverse decisions, such as denying eligibility for benefits, reducing benefit amounts, terminating benefits, disqualifying vendors, or assessing penalties. Where AI technologies inform adverse actions, a human with appropriate training and authority to override or alter the decision under review should validate each recommendation and be accountable for the adverse action. 17 For example, a case worker with appropriate training and credentials should review, validate, and approve each recommendation for denial of benefits. It is not sufficient for a system administrator or case worker to monitor that a batch process for assessing eligibility was executed.

SLTT agencies must provide explanations to an affected party when adverse actions impact them, including those where AI influences the outcome. Explanations should state why the system produced the given result for the specific instance; be meaningful, useful, and as simply stated as possible; accurately indicate the principal reason(s) for the adverse action; meet language access requirements; and include information on how to remedy or appeal the adverse action. 18 These requirements apply both to instances where an explicit adverse action is taken and to implicit actions, such as procedural denials.
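
For illustration only, the sketch below assembles the core elements of such an explanation from structured decision data: the action, the principal reason(s), the accountable human reviewer, and how to appeal. The field names and sample text are hypothetical placeholders; any real notice must satisfy the applicable program, language access, and accessibility requirements.

```python
# Minimal sketch: assemble a plain-language adverse-action notice that states the
# principal reason(s) and how to appeal. Field names and text are placeholders.

def adverse_action_notice(decision: dict) -> str:
    reasons = "; ".join(decision["principal_reasons"])
    return (
        f"Action taken: {decision['action']}.\n"
        f"Why: {reasons}.\n"
        f"Reviewed and approved by: {decision['reviewing_worker']}.\n"
        f"To appeal or ask questions, contact {decision['appeal_contact']} "
        f"within {decision['appeal_window_days']} days."
    )

if __name__ == "__main__":
    print(adverse_action_notice({
        "action": "Benefit amount reduced",
        "principal_reasons": ["reported household income is above the program limit"],
        "reviewing_worker": "Case Worker ID 104",
        "appeal_contact": "your local benefits office",
        "appeal_window_days": 90,
    }))
```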

SLTT agencies must provide processes to appeal adverse decisions or actions consistent with applicable program requirements, regardless of whether AI played a part in decision making. Appeal processes should employ human consideration as a first line of appeal. Individuals wanting to appeal an adverse decision or action should not need to opt out of an automated or AI-based appeal process for human consideration. Appeals must be timely, equitable, and accessible to people with disabilities and to people with LEP, and should not impose an unreasonable burden. The process to appeal AI-informed decisions should be easy to find and use, and SLTT agencies should consider that people with disabilities, individuals with LEP, and underserved communities may be more likely to need access to appeals processes.

SLTT agencies should assess AI-informed decisions that are overturned by appeal for patterns of errors. Mitigations, if appropriate, should be put in place to prevent recurrence of those errors. Programs may require that information about appeals of AI-informed decisions (such as the timeliness, frequency, and disposition of such appeals) be published publicly on a regular basis or may recommend that as a best practice. 19

4. ADVANCING RESPONSIBLE INNOVATION

4.1. Suitability of AI

FNS encourages SLTT agencies to make training and educational opportunities related to AI available widely to SLTT agency staff to advance innovation and promote responsible use. SLTT agencies should ensure adequate training in relevant AI topics for staff who procure, design, develop, enhance, and/or maintain AI-enabled technologies.

Before using AI for a given purpose, it will be critical for SLTT agencies to identify the expected benefit of the AI (e.g., improved timeliness, an improved customer experience) and seek to estimate the anticipated gain from using AI. SLTT agencies should identify quantitative or qualitative measures to validate that expected benefits or gains were realized after the AI is in use. AI should not be used with the objective of achieving unspecified or unknown potential gains. SLTT agencies should work with FNS to identify acceptable thresholds for reliability, accuracy, and trustworthiness before implementing AI and should incorporate processes to identify risks to using AI throughout system development, from design to ongoing operations. AI systems should be tested and monitored before, during, and after implementation to validate that expected benefits have been realized, risks have been mitigated, and acceptable thresholds are being met. If those objectives have not been achieved, the AI should not be used.
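
For illustration only, the sketch below shows one lightweight way to make acceptance thresholds concrete: declare them before implementation and compare monitored values against them during testing and ongoing operations. The metric names, values, and thresholds are hypothetical assumptions; actual thresholds should be identified with FNS as described above.

```python
# Minimal sketch: declare acceptance thresholds up front and check monitored
# metrics against them. Metric names and values are hypothetical.

THRESHOLDS = {
    # metric name: minimum acceptable value (all metrics here are "higher is better")
    "top_intent_accuracy": 0.90,
    "correct_answer_rate": 0.95,
    "service_uptime": 0.99,
}

def thresholds_met(observed: dict) -> bool:
    """Return True only if every monitored metric meets its declared threshold."""
    all_met = True
    for metric, minimum in THRESHOLDS.items():
        value = observed.get(metric)
        met = value is not None and value >= minimum
        print(f"{metric}: observed={value}, required>={minimum} -> {'pass' if met else 'FAIL'}")
        all_met = all_met and met
    return all_met

if __name__ == "__main__":
    monitored = {"top_intent_accuracy": 0.93, "correct_answer_rate": 0.91, "service_uptime": 0.995}
    if not thresholds_met(monitored):
        print("Thresholds not met: the AI function should not be used (or should be disabled).")
```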

AI should be used for business functions that are well understood and where staff have the knowledge and skills to evaluate performance. AI should not be used for immature business functions with a goal of an AI discovering new approaches or efficiencies. SLTT agencies should consider the direct risks from AI, risks that come from misuse of AI for an unintended purpose, and the risk of AI outputs that lower barriers for people to take harmful actions.

SLTT agencies should analyze the goals of using AI and compare other technology or automation approaches to validate whether AI is the technology best suited to the task, balancing benefits and risks. When performing this analysis, the benefits and risks of AI features should not be considered in isolation but should be evaluated in context of the broader system, business processes, and workforce, and considered alongside other risks, such as cybersecurity. 20

All AI must be used in compliance with program requirements for the use of merit systems personnel, such as those applicable to SNAP. Programs may specify requirements or restrictions governing how AI may be used to influence or inform decisions by merit systems staff.

4.2. Acquisition

SLTT agencies should use the acquisition process, if applicable, to gather information from vendors that will assist with identifying and mitigating risks from the use of AI, including but not limited to:

  • Intended use – Vendors should identify uses that are suitable for their AI, as well as purposes, contexts, and uses for which their AI is not suitable;
  • Training data – Vendors should provide details about the model’s training data, including the source of the data, the time periods the data describes, and the extent to which the data reflects real-world contexts. For training data that reflects populations, vendors should provide summary statistics of the population whose data it was trained on that are relevant to a public benefits context (e.g., race/ethnicity, age, gender, income brackets).
  • Testing results – Vendors should provide information about how they have tested their system, what contexts they tested it in, findings from their testing, and changes they made to address any issues discovered during the testing process.
  • Audit results – Vendors should disclose whether the system and data have been assessed for bias and/or other risks, the results of any assessments, and any actions taken to mitigate findings.

SLTT agencies should only acquire and use AI if a vendor provides all necessary information to evaluate suitability and risk. Purchasing from federal or SLTT acquisition schedules may simplify the process of gathering information if disclosure is required by the schedule’s terms and conditions.
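
For illustration only, the sketch below records vendor disclosures against the items listed above and checks for completeness before acquisition proceeds. The structure, field names, and sample content are hypothetical assumptions, not a required format.

```python
# Minimal sketch: track vendor AI disclosures and check completeness before acquisition.
# Field names and the sample content are hypothetical.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class VendorAIDisclosure:
    intended_uses: Optional[str] = None              # suitable and unsuitable uses
    training_data_description: Optional[str] = None  # sources, time periods, populations
    testing_results: Optional[str] = None            # contexts tested, findings, fixes
    audit_results: Optional[str] = None              # bias/risk assessments and mitigations

    def missing_items(self):
        return [f.name for f in fields(self) if not getattr(self, f.name)]

if __name__ == "__main__":
    disclosure = VendorAIDisclosure(
        intended_uses="Document triage for uploaded verifications; not for eligibility decisions.",
        training_data_description="2019-2023 synthetic case documents; no participant PII.",
    )
    missing = disclosure.missing_items()
    if missing:
        print("Hold acquisition; missing disclosures:", ", ".join(missing))
    else:
        print("All requested disclosures provided.")
```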

SLTT agencies should consider acquisitions that enable the agency to try AI technologies before committing to expensive or lengthy contracts. Agencies could consider testing the technology in a controlled, isolated environment known as a sandbox or conducting a limited pilot to evaluate suitability, risks, and gains from using AI.

As AI becomes more commonplace, SLTT agencies should anticipate that vendors may respond to proposals with solutions containing AI, even if AI features were not requested or anticipated for that acquisition.

4.3. Responsible Data Use and System Design

FNS encourages SLTT agencies to voluntarily share successful uses of AI and best practices with FNS and peer agencies to promote and advance the responsible use of AI. FNS also encourages SLTT agencies to share validated resources, such as model weights or code and anonymized or synthetic training data.

SLTT agencies should use high-quality, representative datasets when training AI. SLTT agencies must apply appropriate protections to sensitive information and/or personally identifiable information (PII) used in AI training datasets and assess and mitigate the risks associated with such data being used for that purpose, including the risk that the models can be induced to reveal sensitive information and/or PII to unauthorized or unintended users.

SLTT agencies should continue to limit data collection to data needed to effectively administer benefit programs. SLTT agencies should continue to establish timelines for data retention, with data deleted as soon as possible in accordance with legal or policy limitations. SLTT agencies should not expand data collection or lengthen data retention for the purpose of employing AI. SLTT agencies should follow privacy and security best practices designed to ensure data and metadata do not leak beyond their intended use, such as using privacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with conventional system security protocols.

Despite best efforts to mitigate errors, protect rights, and ensure equity, situations will arise where AI does not perform as expected and creates unacceptable risk or harm, such as an AI chatbot that provides incorrect information about program eligibility to the public or an AI-enabled routine in a management information system that begins to produce biased outcomes. AI-enabled functions should be able to be disabled without creating unacceptable disruption to service delivery. SLTT agencies should be able to disable AI functions quickly in response to an identified unacceptable risk or harm through the use of system settings, feature flags, or other actions that do not require new development or redeployment of a system. Staff should be trained on fallback processes that do not rely on AI-enabled functions. 21
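
One common pattern for meeting this expectation is a runtime feature flag, read at request time, that lets administrators turn off an AI-enabled function and route to a non-AI fallback without new development or redeployment. The sketch below is illustrative only; the flag store, flag name, and both handlers are hypothetical.

```python
# Minimal sketch of a runtime kill switch for an AI-enabled function.
# The flag store, flag name, and both handlers are hypothetical.
import json
from pathlib import Path

FLAGS_FILE = Path("feature_flags.json")  # e.g., {"ai_chatbot_enabled": false}

def ai_chatbot_enabled() -> bool:
    """Read the flag at request time so it can be flipped without redeployment."""
    try:
        return json.loads(FLAGS_FILE.read_text()).get("ai_chatbot_enabled", False)
    except FileNotFoundError:
        return False  # fail closed: no flag file means the AI path stays off

def answer_question(question: str) -> str:
    if ai_chatbot_enabled():
        return ai_generated_answer(question)      # AI-enabled path
    return scripted_answer_or_handoff(question)   # non-AI fallback path

def ai_generated_answer(question: str) -> str:
    return f"[AI-drafted answer to: {question}]"

def scripted_answer_or_handoff(question: str) -> str:
    return "Please see the program FAQ or hold for a representative."

if __name__ == "__main__":
    print(answer_question("When will my application be processed?"))
```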

4.4. AI Governance

FNS programs must establish AI governance processes that align with this framework, are appropriate for program requirements and risks, and are consistent with applicable law. Programs should seek to apply best practices for AI governance processes, such as those in the NIST AI Risk Management Framework. 22 Governance processes should, at a minimum, seek to catalog, review, and approve rights-impacting and safety-impacting uses of AI prior to their use.

Programs should require SLTT agencies to report all uses of AI for program administration at least annually, and to identify rights-impacting and safety-impacting uses of AI. These reports should include a plain-language description of how each use of AI contributes to program outcomes and their identified risks and mitigations, and should be made available to the public, if possible. To minimize burden on SLTT agencies, programs should seek to align data reporting requirements across FNS nutrition programs and other SLTT-administered benefit programs, such as Medicaid and Temporary Assistance for Needy Families.
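
For illustration only, the sketch below shows one possible structure for an inventory record behind such reporting, pairing each use of AI with a plain-language description, its risk designation, and mitigations. The field names and sample entry are hypothetical assumptions, not a prescribed FNS format.

```python
# Minimal sketch of an AI use-case inventory record to support annual reporting.
# Field names and the sample entry are hypothetical, not a prescribed format.
import json

inventory_entry = {
    "use_case": "Document intake OCR",
    "program": "SNAP",
    "plain_language_description": (
        "Reads uploaded verification documents into text so case workers can "
        "review them faster; a case worker checks every extracted value."
    ),
    "rights_impacting": False,
    "safety_impacting": False,
    "identified_risks": ["misread values on low-quality scans"],
    "mitigations": ["human validation of every extracted field", "quarterly accuracy review"],
    "last_evaluated": "2024-04-01",
}

print(json.dumps(inventory_entry, indent=2))
```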

Many tribes and tribal organizations are implementing their own data sovereignty and data governance strategies regarding how tribal data is collected, used, and owned. USDA and tribes should collaborate and coordinate to ensure tribal input is considered and incorporated to the extent practicable within applicable law.

SLTT agencies must notify FNS programs of their intent to use AI for purposes that are presumed to be rights-impacting or safety-impacting when such purposes trigger existing notification and approval requirements. Programs must seek to expand notification requirements and preapproval to include all rights-impacting or safety-impacting uses of AI to the fullest extent consistent with applicable law. Governance processes should include ongoing monitoring and evaluation to identify and mitigate any errors, biases, disparate impacts, or new risks to the use of AI. Programs should consider whether to require that evaluations be conducted by an independent third party.

5. FNS’S SUPPORT FOR RESPONSIBLE INNOVATION

FNS plans to be a partner to SLTT agencies in the responsible use of AI and will prepare for this support by identifying the necessary people, training, and tools to effectively fulfill this role. FNS nutrition programs and other organizational units, including the Office of Management and Technology, the Civil Rights Division, and Regional Operations and Support, should consider the personnel requirements, workforce skillsets, training, tools, and resources needed to effectively support responsible innovation and manage the risks of AI. Organizations should work with the FNS Administrator’s Office to request resources via the budget process to support the responsibilities identified in this framework and future regulations and/or guidance, subject to available funds.


1 Executive Order 14110, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.
2 Executive Order 14110, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf.
3 OMB M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
4 Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Publication AI 100-1, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
5 PL No. 115-232, § 238(g), https://www.govinfo.gov/content/pkg/PLAW-115publ232/pdf/PLAW-115publ232.pdf.
6 Executive Order 14091, Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, https://www.govinfo.gov/content/pkg/FR-2023-02-22/pdf/2023-03779.pdf.
7 NIST, The Language of Trustworthy AI: An In-Depth Glossary of Terms, https://airc.nist.gov/AI_RMF_Knowledge_Base/Glossary.
8 OMB M-22-10, Improving Access to Public Benefits Programs Through the Paperwork Reduction Act (April 13, 2022), https://www.whitehouse.gov/wp-content/uploads/2022/04/M-22-10.pdf.
9 USDA Food and Nutrition Service, Food and Nutrition Service Language Access Plan (Feb. 9, 2024), https://www.usda.gov/sites/default/files/documents/fns-language-access-plan.pdf.
10 For the full list of purposes for which AI is presumed to be rights-impacting and/or safety-impacting, see Appendix I of OMB M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
11 See Section 3.2: Human Oversight and Human-in-the-Loop Processes for more information.
12 Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Publication AI 100-1, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
13 General Services Administration, AI Guide for Government, https://coe.gsa.gov/coe/ai-guide-for-government/introduction/index.html.
14 Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (March 2022), NIST Special Publication 1270, https://nvlpubs.nist.gov/NISTpubs/SpecialPublications/NIST.SP.1270.pdf.
15 Recommended practices for community engagement can be found in OMB M-23-22, Delivering a Digital-First Public Experience (Sept. 22, 2023), https://www.whitehouse.gov/wp-content/uploads/2023/09/M-23-22-Delivering-a-Digital-First-Public-Experience.pdf.
16 USDA FNS Civil Rights Division can assist SLTT agencies with disparate outcome analyses but cannot provide legal advice.
17 A human should also review and take accountability for decisions that are not adverse actions but would result in harms if there is an error, such as determining someone is eligible for benefits that require repayment if incorrectly granted.
18 OMB M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf.
19 Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
20 Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Publication AI 100-1, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
21 Blueprint for an AI Bill of Rights, White House Office of Science and Technology Policy, https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
22 Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Publication AI 100-1, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
