Section 4121 of the Food, Conservation, and Energy Act of 2008 (also known as the Farm Bill) required that states conduct adequate system testing and a pilot of a new eligibility or issuance system before rollout, as well as before implementation of major system changes. On 01/02/2014, a Final Rule entitled “Automated Data Processing and Information Retrieval System Requirements: System Testing” was published in the Federal Register. This rule amends the Supplemental Nutrition Assistance Program (SNAP) regulations to implement this requirement.
Copies of this Final Rule and the Proposed Rule which preceded it, as well as all regulations applicable to the Advance Planning Document process, can be found here: Statutes and Regulations.
Revisions to reflect changes from the Final Rule have been made to FNS Handbook 901 - Advance Planning Documents, which provides guidance to facilitate successful approval of state agency information system projects.
Answers to Frequently Asked Questions (FAQs) have been developed to help explain this Final Rule. Following are the FAQs regarding the testing provisions; additional FAQs on the non-testing provisions may be found at: Non-Testing FAQs.
First, the name of section 277.18 has been changed from “Establishment of an Automated Data Processing (ADP) and Information Retrieval System” to “State Systems Advance Planning Document (APD) process,” to better reflect the content of the section.
The major change calls for state agencies to submit a detailed test plan for FNS review which describes how all system testing will be conducted to verify that the system complies with SNAP requirements and system design specifications. Language has been added to describe FNS’ expectations for the test plan, including User Acceptance Testing (UAT) and pilot testing. The plan should include opportunities for state agency and/or federal reviews prior to UAT, as well as after the system is fully implemented. The level of detail specified in section 277.18(g), Basis for continued federal financial participation, is to be provided to FNS before the state agency begins its testing of the system. At a minimum, the test plan should address:
- the types of testing to be performed;
- the organization of the test team and associated responsibilities;
- test database generation;
- test case development;
- the test schedule;
- documentation of test results;
- acceptance testing; and
- decision criteria.
The decision criteria should be specific and measurable. The evaluation should include a summary of any outstanding issues/defects with the system and any other pertinent readiness issues.
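The minimum components above amount to a checklist a state can apply to its own draft before submission. As a purely hypothetical sketch (the section names, plan contents, and data structure are illustrative inventions, not an FNS-prescribed format), a completeness check might look like this:

```python
# Hypothetical sketch: a completeness check for a draft test plan
# against the minimum components named in the Final Rule. The section
# names and plan contents are illustrative, not an FNS format.

REQUIRED_SECTIONS = {
    "types_of_testing",
    "test_team_organization",
    "test_database_generation",
    "test_case_development",
    "test_schedule",
    "results_documentation",
    "acceptance_testing",
    "decision_criteria",
}

def missing_sections(plan: dict) -> set:
    """Return required sections that are absent or empty in the plan."""
    return {s for s in REQUIRED_SECTIONS if not plan.get(s)}

draft_plan = {
    "types_of_testing": ["unit", "integration", "performance",
                         "end-to-end", "UAT", "regression"],
    "test_team_organization": "roles and responsibilities ...",
    "test_database_generation": "anonymized converted case data ...",
    "test_case_development": "traceable to functional requirements ...",
    "test_schedule": "milestones with durations ...",
    "results_documentation": "defect log and summary reports ...",
    "acceptance_testing": "UAT entry/exit conditions ...",
    # "decision_criteria" intentionally omitted to show detection
}

print(sorted(missing_sections(draft_plan)))  # → ['decision_criteria']
```

A draft that reports no missing sections is only complete in structure, of course; the substance of each section still has to satisfy FNS review.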
In addition, the test plan is to include a contingency plan component which identifies alternative strategies that may be used if specific risk events occur, such as a failure of test results to support a decision to proceed to the next phase of the project. Examples include alternative schedule activities, staffing plans, and emergency responses to reduce the impact of risk events.
- The state must submit its Test Plan, not for FNS approval, but to be sure that when FNS evaluates test results later, it has already agreed that the process and methodology are adequate to produce valid results. The Test Plan will often be provided in draft form as part of the IAPD, to allow early assessment, and in final form at least 60 days before testing begins.
- The state must submit a Testing Evaluation Report, including detailed test results, which demonstrates readiness to proceed to pilot. FNS concurrence with the state’s decision to proceed to the next step is required as a condition of continued FFP. If test results have been shared and discussed with FNS throughout the test phase, this final report need not be lengthy, as FNS will already be anticipating its content.
- The state must submit a Pilot Evaluation Report, which demonstrates readiness to proceed to Rollout. The state must have concurrence from FNS for a “Go” decision, as a condition of continued FFP. If performance has been shared and discussed with FNS throughout the pilot phase, the final evaluation report need not be lengthy, as FNS will already be anticipating its content. In the case of both the Testing Evaluation and the Pilot Evaluation, a separate report need not be prepared for FNS if one already being used by state decision-makers can be shared.
FNS expects the testing requirements to have minimal impact on state workload. Most documents now required during the Advance Planning Document process are already prepared during the customary System Development Life Cycle process. System testing is part of the project management and risk management processes. When implementing new systems or enhancements to existing systems, most states already include testing, pilot projects and some form of graduated rollout.
If the test plan is sound, the testing is conducted according to the plan, and FNS is kept apprised of results throughout the testing process, the state should not experience any unexpected delays caused by FNS. Objective decision criteria established before the testing phase begins should guide the state in determining whether and when to proceed to the next step. As a reminder, a written evaluation which justifies the state’s decision to move forward or delay the project must be provided to FNS for approval to move from UAT to Pilot, and from Pilot to Rollout. However, it need not be lengthy if FNS has been apprised of test or pilot results throughout that phase and can already anticipate the evaluation’s content.
The testing requirements in the final rule are effective for active projects 60 days after publication in the Federal Register. FNS believes the Food, Conservation, and Energy Act of 2008 intended adequate system testing be applied to all projects in active development of a new state information system. Some states have expressed concerns that imposing this rule retroactively on existing projects and contracts would require rewriting schedules to allow sufficient time for FNS involvement and/or approval of a test plan prior to system implementation. However, FNS believes that current projects should already have sufficient time built into their timelines to test and pilot new systems.
All aspects of system functionality must be tested to ensure accurate eligibility determinations are made and system functionality meets required functional specifications. Testing must include, but is not limited to: unit testing, integration testing, performance testing, end-to-end testing, User Acceptance Testing (UAT), and regression testing. The results of UAT must be provided before the system is piloted. There must be a Pilot Test of the fully operational system in a live production environment prior to statewide rollout.
State agencies will likely be reporting activity to FNS for the duration of the UAT and Pilot Test, which will provide FNS with an opportunity to monitor testing activities, corroborate the findings of the state agency, anticipate the success of the test, and determine if rollout may occur. The state agency must allow sufficient time after the test period to evaluate test results and secure FNS concurrence for moving forward. As a reminder, a written evaluation which justifies the state’s decision to move forward (or delay the project) must be provided to FNS for approval to move from UAT to Pilot, and from Pilot to Rollout.
Yes, there are several reasons each state should test its system. Some modifications will probably be made to the transferred system to fit your state. It is also unlikely that both states use exactly the same platform, environment, or telecommunications infrastructure, which could affect system functionality and performance. User Acceptance Testing and Pilot Testing should be conducted to ensure that the system performs as your state agency expects in your environment. Lastly, because testing is not a perfect process and errors do occur, it is in each state agency’s best interest to perform its own testing. The level of testing may vary, based on how many modifications have been made and how many differences exist between the two environments.
A schedule could be considered aggressive if it skips key tasks, events, or deliverables, or does not provide reasonably sufficient time for each scheduled item. Factors in determining the reasonableness of an implementation schedule include whether key milestones have been included and whether their durations are realistic. The schedule should include all relevant major milestones and set timeframes based on the project’s needs and the state’s ability to meet the demands of the schedule, given available resources. Systems which are more complex, contain several unique features, and are less mature should have lengthier schedules than those which are simpler and involve fewer modifications or customizations. A systematic approach to testing, including the necessary documentation, is critical to a successful outcome. Sufficient time should be allowed to test functionality, data conversion, performance, and interfaces. A measured and orderly transition to the new system should provide sufficient periods for testing and training, with go/no-go points set at a minimum for the User Acceptance Test and Pilot Test milestones.
The purpose of the Pilot Test (Pilot) is to provide the state agency with a smaller scale shakedown test prior to expansion. Most state agencies recognize the need for Pilot project operations and first implement systems on a small scale. Pilot tests may also be necessary in limited areas for major system changes.
A Pilot is important for more than just providing a dry run for the computer system. It is also an opportunity for state agencies to ensure that: all parties (e.g., recipients and state/local staffs) are comfortable with the system, the state agency’s approach to training is effective, and any program and system interfaces are effective.
The FNS rule does not remove the latitude provided to state agencies in choosing Pilot sites. FNS will interpret the “limited area” for the Pilot not as necessarily synonymous with a geographic area, but rather as referring to the limited scale or scope of the Pilot Test. State agencies should, however, take into consideration how well the Pilot’s caseload represents the demands on the fully operational system.
The Pilot Test is a key milestone in project development and occurs when a fully functional prototype system is available for testing, but before statewide implementation. The Pilot needs to include operating all components of the system in a live environment. The state agency should define its own “go/no-go” criteria prior to the start of the Pilot Test and evaluate the results of the Pilot based on the established criteria. FNS approval of federal funds for system implementation will be conditional on the result of the Pilot. FNS may also establish additional “go/no-go” criteria and decision points for continuing rollout of the project.
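Because the criteria are supposed to be specific and measurable, each one can in principle be written as a metric with a threshold and checked mechanically. The sketch below is a hypothetical illustration only: the metric names, thresholds, and results are invented examples, not figures from the Final Rule or any state plan.

```python
# Hypothetical sketch: "go/no-go" criteria expressed as measurable
# thresholds. All metric names and values are illustrative examples,
# not requirements from the Final Rule.

# Each criterion pairs a metric with the bound it must satisfy:
# ("min", x) means the value must be >= x; ("max", x) means <= x.
criteria = {
    "eligibility_determination_accuracy_pct": ("min", 98.0),
    "open_severity1_defects":                 ("max", 0),
    "open_severity2_defects":                 ("max", 3),
    "avg_case_processing_seconds":            ("max", 5.0),
}

def evaluate(results: dict):
    """Compare results against each criterion; return (go, failures)."""
    failures = []
    for metric, (kind, threshold) in criteria.items():
        value = results[metric]
        ok = value >= threshold if kind == "min" else value <= threshold
        if not ok:
            failures.append(f"{metric}: {value} (limit {kind} {threshold})")
    return (not failures, failures)

# Example Pilot results that happen to satisfy every criterion.
go, failures = evaluate({
    "eligibility_determination_accuracy_pct": 99.1,
    "open_severity1_defects": 0,
    "open_severity2_defects": 2,
    "avg_case_processing_seconds": 3.4,
})
print("GO" if go else "NO-GO", failures)
```

The design point is simply that criteria established before the Pilot begins leave no room for after-the-fact argument: either the measured results meet the pre-agreed thresholds or they do not.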
State agencies will likely be reporting activity to FNS for the duration of the Pilot, which will provide FNS with an opportunity to monitor Pilot activities, corroborate the findings of the state agency, anticipate the success of the Pilot, and determine if rollout may occur. The state agency must allow sufficient time after the Pilot period to evaluate Pilot results and secure FNS concurrence for rollout.
The Pilot is a key milestone in project development and occurs when a fully functional prototype system is available for testing, but before statewide implementation. The Pilot period is when the state has the best opportunity to identify defects in either the system or the implementation approach before they become costly large-scale problems. State agencies must operate Pilot projects until a state of routine operation is reached with the full caseload in the Pilot area. If the Pilot is going well early on, the process of evaluation and FNS approval can begin during the Pilot period, lessening or eliminating any delay.
The length of the Pilot would need to be agreed upon by the state agency and FNS. Factors to be taken into consideration include the size of the Pilot; the rate of phase-in of the Pilot caseload; and the track record, if any, of the system being implemented. FNS has always recommended that there be sufficient time in the Pilot to thoroughly test all system functionality, including time for evaluation, prior to beginning wider implementation of the system. FNS believes that a minimum duration of three months for the Pilot would permit the system to work through all functions and potential system problems.
Upon completion of a successful Pilot project, the state agency needs to receive written approval from FNS before expanding beyond the Pilot. The state agency and/or FNS may establish additional “go/no-go” decision points during the rollout schedule to assess project status and determine if continuing the expansion is in the best interest of the project. A reasonable rollout should be scheduled in phases to provide the opportunity for making course corrections and adjustments along the way. A phased approach is more desirable than the higher-risk “big bang” approach.
To “certify” a system generally means that the certifying entity verifies through independent evaluation that a fixed set of standardized tests has been passed or that the criteria on a standard checklist have been met. The certifying agency issues some form of statement or document attesting to the certification, which may have legal implications. FNS does not certify systems or system testing.
FNS may, however, conduct pre- and/or post-implementation reviews, either onsite or by examining relevant documents provided by the state agency.
The need for these reviews will be determined on a case-by-case basis, based on the risk of the project. FNS reviews would be intended to: evaluate system performance and accuracy; verify that functional requirements were met; ensure that the policy to be administered is accurate; analyze data capture, integrity edits, and calculations; verify that the User Acceptance Test was thorough and successfully completed; and ensure that the system interfaces successfully with other programs and external entities, including EBT.
FNS staff may participate to a limited extent in the functional demonstrations and acceptance testing. The state agency should verify that all aspects of program eligibility are tested to ensure that the system makes accurate eligibility determinations in accordance with federal statutes and regulations and approved state policies, and that the system functionality meets the required functional specifications.
Post-implementation reviews may be conducted once the system is fully operational statewide. These system reviews encompass technical and security components as well as program and financial aspects. Reviews by FNS are a function of its regulatory oversight authority. Resolution of any issues identified or completion of corrective action required by FNS, and subsequent closure of a report, review, or project, does not constitute “certification.”
FNS has developed additional guidance material to assist state agencies in implementing the provisions of the Final Rule. These materials may be found on the FNS website under the Advance Planning Documents (APD) homepage. Besides copies of the 08/23/2011 Proposed Rule and the 01/02/2014 Final Rule, state agencies may find an updated version of FNS Handbook 901 – Advance Planning Documents there. Other APD training and technical assistance materials will be made available in the future. State agencies may also contact their FNS State Systems Office representative should they have further questions: Contacts and stakeholders.