They’re Heeeeeeere!
Yesterday was the big day many of us have been anticipating for over a year: the formal announcement and delivery of the new Head Start Federal Performance Standards, their first formal revision in close to 20 years.
At my workstation, where I serve as Project Director for a data science and evaluation firm serving Head Start organizations, I pulled up the 621-page (!) Final Rule document on monitor one and joined the OHS Performance Standards Webinar at noon on monitor two. As I delved into the content and listened to the presentation, the questions came quickly: “What do they specifically say?” “Did they listen to the comments and respond?” “How will this affect our clients and their Head Start programs?”
If you haven’t seen them yet, you can access all 621 pages of the new Performance Standards as published in the Federal Register here:
https://www.federalregister.gov/articles/2016/09/06/2016-19748/head-start-performance-standards
You can also access the simpler, four-page “fact sheet” here:
https://eclkc.ohs.acf.hhs.gov/hslc/hs/docs/hs-prog-pstandards-final-rule-factsheet.pdf
Here are some of my immediate reactions to yesterday’s historic release, from the perspective of an external evaluator and data partner:
1) You spoke, they listened. This much is clear. In fact, pages 30-290 of the 621-page document, roughly 260 pages or about 42% of the total (this is what data people do; see the quick calculation after this list), are dedicated to painstakingly summarizing the 1,000+ comments received during the rulemaking period and explaining how that feedback was incorporated into the final document. My brief analysis broke the comments into three categories:
- Helpful: comments that clarified, strengthened, or otherwise provided valuable changes to the proposed rules, from the perspective of the authors. Most of these are identifiable by the words “We agree” that preface the response section immediately following the comments.
- Self-interested: comments that reflect the viewpoint of a particular respondent, typically within the context of his or her own program. Many of these are acknowledged in the responses, along with the basic rationale for not acting on them: they didn’t apply universally to the issue at hand.
- Policy-Passionate: comments that reflect the belief that an issue or rule should be handled in a precise, policy-oriented fashion, presumably for the good of everyone. These appear to be the trickiest comments for the authors to handle.
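For the curious, the page math above is simple enough to script. A throwaway sketch in Python, using only the figures already cited:

```python
# Back-of-the-envelope: how much of the Final Rule is comment-and-response?
total_pages = 621
comment_pages = 290 - 30      # pages 30 through 290 of the document

share = comment_pages / total_pages
print(f"{comment_pages} pages, or {share:.0%} of the document")
# -> 260 pages, or 42% of the document
```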
2) There is a new, remarkably flexible focus on outcomes instead of processes and plans. Between how they handle program duration for both Head Start and Early Head Start and how they articulate expectations for using program data for continuous quality improvement and evidence-based outcomes, Head Start programs will likely benefit from, rather than be burdened by, the new standards.
3) They definitely are simpler and more effectively organized. Even though they released a 621-page, non-navigable PDF accompanied by a narrative of “paperwork reduction” (sorry, had to be said), moving from 1,400 provisions in 11 sections to only 4 sections really is a nice improvement for grantees who, encouraged by Dr. Enriquez, will soon be “immersing themselves” in the new standards. The preamble states that they “…significantly reduced the number of regulatory requirements without compromising quality,” and that appears to be evident throughout the new regulations.
4) Programs are going to need support and technical assistance in using their existing data to meet the new standards. Expectations around using data analytics and research to ensure high program quality are woven throughout the entire document. Here’s a sample from page 518:
“(c) Using data for continuous improvement. (1) A program must implement a process for using data to identify program strengths and needs, develop and implement plans that address program needs, and continually evaluate compliance with program performance standards and progress towards achieving program goals described in paragraph (a) of this section. (2) This process must: (i) Ensure data is aggregated, analyzed and compared in such a way to assist agencies in identifying risks and informing strategies for continuous improvement in all program service areas.”
It would be a mistake to read this section and conclude that programs are already doing this. Some veteran administrators might respond by thinking of the many static reports they currently run from their data system, or the meetings they hold with managers and directors to track PIR targets or the completion of 45- and 90-day mandated program components. But that is not what the government is now asking for.
The use of data to demonstrate continuous quality improvement and evidence-based outcomes depends on several factors (a minimal code sketch follows this list):
- the gathering of data that are both timely and meet a defined threshold of quality and fidelity.
- the “tidying” of data by a trained analyst who can apply business rules, data quality standards, and data science techniques to ensure data are accurate and useful in practice.
- the analysis of data using research methods that are rigorous and sound.
- the visualization of those data so that inputs and outcomes can actually be accessed and used by administrators, teachers, family service staff, parents, board members, and any other stakeholders in the community the program serves.
- the creation of quality assurance processes, managerial protocols, and other policies to ensure that continuous quality improvement and evidence-based outcomes are sustainable and successful.
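To make the first three factors concrete, here is a minimal sketch of what one tidy-analyze-compare pass might look like, assuming a hypothetical CSV export from a program’s data system. The file name, column names, and the 85% internal target are invented purely for illustration; only the 45-day screening window echoes the kind of deadline mentioned above.

```python
import pandas as pd

# 1. Gather: load the raw export (file and column names are hypothetical).
raw = pd.read_csv("enrollment.csv",
                  parse_dates=["enrollment_date", "screening_date"])

# 2. Tidy: apply business rules and basic data-quality checks.
tidy = (raw.dropna(subset=["child_id", "center"])   # drop incomplete rows
           .drop_duplicates(subset="child_id"))     # one record per child
tidy["days_to_screening"] = (tidy["screening_date"]
                             - tidy["enrollment_date"]).dt.days
tidy["screened_on_time"] = tidy["days_to_screening"] <= 45  # 45-day window

# 3. Analyze and compare: aggregate by center and flag potential risk.
by_center = tidy.groupby("center")["screened_on_time"].mean()
target = 0.85  # illustrative internal target, not a regulatory number
at_risk = by_center[by_center < target]

print(by_center.round(2))
print("Centers needing follow-up:", list(at_risk.index))
```

The remaining two factors build on that same tidy table: visualizations turn the center-level rates into something stakeholders can actually use, and documented quality-assurance protocols ensure the whole loop runs reliably every reporting cycle.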
While Fortune 500 companies often have entire data science departments responsible for these jobs, most Head Start organizations do not. An alternative is to contract with a data science and evaluation partner. This is often more cost-effective than hiring a new team of employees or expecting your existing “data people” to suddenly expand their workload and skill set. Additionally, best practice in grant-funded social service and educational organizations calls for an external perspective on your operations; many funders similar to Head Start in both target population and programmatic scope require an external evaluator to comply with reporting and outcomes mandates.
5) Time is of the essence. Many of the new standards take effect within 60 days, and some of the specific standards associated with data must be met by August 2017. Your program will have to move quickly!
Those are my initial takeaways on the big day, along with some kudos to the Office of Head Start for immediately posting all the resources related to the announcement (including the webinar) here:
https://eclkc.ohs.acf.hhs.gov/hslc/hs/new-policy
Good luck everyone!