Unlocking Better Data by Measuring What Matters: A Practical Guide to Effective School Surveys

Why Surveys Matter in Education

Educators constantly talk about “data-driven decision making” and “evidence-based practices” to improve instruction, develop teachers, or plan strategies. Districts generate and collect huge amounts of data. They measure and assess student learning, behavior, and achievement. They gather qualitative feedback from students, staff, and families about their experience with the education system.

The problem is, more data doesn’t always mean better data. School leaders make big decisions—like where to invest in new programs or how to train teachers—so the quality of the data they use is crucial.

Common data sources like test scores and attendance are often used to shape decisions around student support and teaching. But surveys can also be a powerful tool for measuring the success of school programs from the perspective of every stakeholder group. Many districts use standard surveys developed by external research agencies, which can be useful for benchmarking, but customized surveys designed for specific programs offer deeper insights.

Well-designed surveys are essential and require specific training in question design. Poorly designed surveys can lead to bad data, and bad data leads to bad decisions. This is why ALP recommends the following tips to craft a well-designed survey that generates useful data for evaluating and scaling success within schools and districts.

Tip 1: Clearly Define Your Survey’s Purpose

Before you start writing survey questions, you need to know exactly what you’re trying to measure and how you define success within the program or initiative.

Are you evaluating a new teaching method, a professional development program, or student engagement?

What are the intended, measurable outcomes of the program or initiative? Specifically, what do you want to measure: knowledge gained, skills developed, or shifts in attitudes?

Think about both the short-term (immediate) effects of your program and the long-term (lasting) impacts. Guskey’s framework for educational program evaluation breaks this continuum of effects into five levels.

  • Level 1 evaluates participants’ reactions to a program.
  • Level 2 captures participants’ understanding and learning of the program content.
  • Level 3 captures how the school or district enables change, translating learning into actions.
  • Level 4 assesses the extent to which educators have applied learning to their daily work.
  • Level 5 focuses on how the program impacts student learning outcomes.

Understanding these levels will help you define and articulate the key outcomes that need to be measured in the survey, so that you collect the most relevant data aligned to your definition of success.
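If it helps to make this planning step concrete, here is a minimal sketch of a blueprint that maps each Guskey level to an intended outcome and the audience who can report on it. The program outcomes and audiences shown are hypothetical examples, not a prescribed template.

```python
# A minimal sketch (hypothetical program and outcomes) of a survey blueprint
# that maps each Guskey level to an intended outcome and audience before any
# survey questions are written.
from dataclasses import dataclass

@dataclass
class PlannedOutcome:
    guskey_level: int   # 1-5, following Guskey's framework
    outcome: str        # what "success" looks like at this level
    audience: str       # who can report on it

blueprint = [
    PlannedOutcome(1, "Teachers find the PLC sessions relevant and well run", "teachers"),
    PlannedOutcome(2, "Teachers can describe the protocol for analyzing student work", "teachers"),
    PlannedOutcome(3, "Schools protect weekly time for PLC meetings", "principals"),
    PlannedOutcome(4, "Teachers use the protocol in their own planning", "teachers, coaches"),
    PlannedOutcome(5, "Student performance on unit assessments improves", "district data"),
]

for item in blueprint:
    print(f"Level {item.guskey_level} ({item.audience}): {item.outcome}")
```

A blueprint like this keeps every survey item tied to a named outcome, so questions that do not map to any level can be cut before the survey goes out.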

Tip 2: Know Your Audience

Once you’ve defined your survey’s purpose, consider who will be taking it. Are you surveying teachers, education leaders, students, or parents? If your program involves different groups—like teachers, principals, and coaches—you may need separate surveys for each one.

Knowing your audience helps you design questions that are clear and easy to understand. For example, if you’re surveying parents in a district with a high number of non-English speakers, you’ll need to offer the survey in multiple languages. A well-designed survey that speaks to your audience will optimize the survey takers’ experience and will also take less time to complete, which can increase response rates.

Tip 3: Use Science-Backed Questions & Prompts

It’s not just about what you’re measuring—it’s about how you’re measuring it. Your survey prompts need to accurately capture the outcomes you want to assess. This is called “construct validity,” meaning the questions truly measure what they’re supposed to.

For example, if you’re asking teachers about their experiences with technology in the classroom, a question like “Does technology make teaching more effective?” might seem straightforward, but it’s too broad. Different teachers might interpret “effective” in different ways, leading to inconsistent responses. One teacher might define “effective” as engaging students, while another defines it as efficient assessment.

If you’re designing survey prompts from scratch, follow best practices and avoid the following pitfalls.

Avoid leading questions that suggest or direct participants toward specific responses.

❌ “Do you think the newly implemented PLCs are a great improvement in the district?” This question about professional learning communities (PLCs) leads participants toward positive responses.

✅ Use survey questions that have been previously used and validated. For example, this paper on teachers’ PLCs and instructional practices published in ECNU Review of Education provides survey questions that capture three dimensions of PLC engagement. An example prompt for assessing reflective dialogue during PLCs is, “Teachers share with one another their evidence-based approach to improve practice.” Teachers rate their agreement with this statement on a scale from 1 (strongly disagree) to 5 (strongly agree).

Avoid double-barreled questions, which ask about two things at once.

❌ “How satisfied are you with the collaboration opportunities and meeting frequency of the newly implemented PLC?” This packs two different concepts into one question.

✅ Create two separate survey questions: “How satisfied are you with the collaboration opportunities during the newly implemented PLC?” and “How satisfied are you with the meeting frequency of the newly implemented PLC?”

Avoid using analogy, metaphor, or language that may trigger strong associations (or implicit biases). Consider how your audience will interpret the language, and be aware that different segments of your audience may interpret it differently based on culture, education level, and/or life experiences.

❌ “To what extent does your PLC feel like a whole squad of superheroes working together to improve student learning?” The analogy may be misleading to some participants, and it also gives the PLC a positive connotation that may lead them toward positive responses.

✅ Remove the biased language: “On a scale of 1 (not satisfied at all) to 5 (very satisfied), how satisfied are you with the collaboration opportunities of the newly implemented PLC?”

Education leaders can also partner with an expert in program evaluation who can provide valuable guidance on designing surveys. ALP has partnered with several districts to conduct Program Impact Evaluations of specific programs (e.g., a technology integration program, leadership development coaching).

In a partnership with a school district in Georgia, ALP consultants designed and implemented an evaluation plan for their learning innovation program. The program included components of challenge-based learning, personalized learning, and instructional technology integration.

After reviewing artifacts and materials from the district as well as the scholarly literature, ALP consultants implemented a longitudinal survey design using validated survey questions. This program evaluation included a scale to assess teachers’ capacity to integrate technology to deepen student learning. Instead of a general survey prompt, we included a validated scale covering multiple dimensions of technology integration, such as teachers’ confidence, beliefs (perceived benefits), and risk-taking ability.
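To illustrate what scoring a multi-dimensional scale can look like, here is a minimal sketch that averages 1-to-5 agreement ratings into one score per dimension. The dimension names, item IDs, and responses are illustrative assumptions, not the validated instrument used in this evaluation.

```python
# A minimal sketch of subscale scoring for a multi-dimensional survey scale.
# Dimension names, item IDs, and responses are hypothetical examples, not the
# validated instrument referenced in the case study.
from statistics import mean

# Map each survey item to the dimension it belongs to.
DIMENSIONS = {
    "confidence": ["q1", "q2", "q3"],
    "perceived_benefits": ["q4", "q5"],
    "risk_taking": ["q6", "q7"],
}

# One respondent's 1-5 agreement ratings (1 = strongly disagree, 5 = strongly agree).
responses = {"q1": 4, "q2": 5, "q3": 4, "q4": 3, "q5": 4, "q6": 2, "q7": 3}

def subscale_scores(resp: dict[str, int]) -> dict[str, float]:
    """Average the items within each dimension to get one score per subscale."""
    return {dim: round(mean(resp[item] for item in items), 2)
            for dim, items in DIMENSIONS.items()}

print(subscale_scores(responses))
# {'confidence': 4.33, 'perceived_benefits': 3.5, 'risk_taking': 2.5}
```

Reporting one score per dimension, rather than a single overall number, shows program designers where teachers feel confident and where they still perceive risk.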

Tip 4: Track Progress Over Time

A single survey can give you valuable insights, but collecting data over time shows you the bigger picture: progress, change or improvement in learning, and specific feedback for program designers or professional development facilitators. Surveys administered before, during, and after a program allow you to see not only immediate results but also long-term impact.

For example, if you’re rolling out a new project-based learning program, it might take months to see the full effect on student outcomes. Tracking data at different stages will give you a better idea of how successful the program really is.
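As a rough illustration of this kind of tracking, the sketch below compares the same 1-to-5 survey item across pre-, mid-, and post-program waves; the wave labels and ratings are made-up numbers.

```python
# A minimal sketch of tracking one survey item across program waves.
# The wave labels and ratings below are made-up illustrations.
from statistics import mean

# 1-5 agreement ratings for the same item, collected before, during, and after the program.
waves = {
    "pre":  [2, 3, 3, 2, 4, 3],
    "mid":  [3, 3, 4, 3, 4, 3],
    "post": [4, 4, 4, 3, 5, 4],
}

baseline = mean(waves["pre"])
for wave, ratings in waves.items():
    avg = mean(ratings)
    print(f"{wave:>4}: mean = {avg:.2f}  (change from pre: {avg - baseline:+.2f})")
```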

Alternatively, pulse surveys can be designed and implemented to quickly capture feedback at many touch points throughout the program. These much shorter surveys allow frequent collection of feedback, which can help program designers make just-in-time adjustments. Keep in mind that tracking progress over time requires careful planning and may involve more complex survey designs and analysis.

Tip 5: Prioritize Ethical Data Collection

It’s essential to create a safe, respectful environment for your survey participants. This means being clear about the survey’s purpose, ensuring responses are confidential, and protecting participant data.

Specifically, we want participants to know that their responses won’t be used against them and that their data will be reported in aggregate and used only for research or evaluation purposes. If you’re collecting sensitive information, it is critical to use secure methods for storing and accessing the data. In every survey we design at ALP, we strive to communicate this information about data ethics and confidentiality to participants to minimize potential concerns.
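One simple way to operationalize aggregate-only reporting is to suppress any group too small to keep individuals unidentifiable. The sketch below assumes a minimum group size of 10, which is an illustrative threshold rather than a fixed rule; the school names and ratings are also hypothetical.

```python
# A minimal sketch of aggregate-only reporting with small-group suppression.
# School names, ratings, and the minimum group size are hypothetical choices.
from collections import defaultdict
from statistics import mean

MIN_GROUP_SIZE = 10  # assumed threshold; districts set their own policy

# (school, rating) pairs from individual respondents; never reported one by one.
responses = [("North HS", 4)] * 14 + [("North HS", 3)] * 6 + [("South MS", 5)] * 4

by_school = defaultdict(list)
for school, rating in responses:
    by_school[school].append(rating)

for school, ratings in by_school.items():
    if len(ratings) < MIN_GROUP_SIZE:
        print(f"{school}: suppressed (fewer than {MIN_GROUP_SIZE} responses)")
    else:
        print(f"{school}: n = {len(ratings)}, mean rating = {mean(ratings):.2f}")
```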

Final Thoughts

Designing a high-quality survey isn’t easy, but it’s worth the effort. Surveys provide valuable insights that can help school leaders make better decisions, but only if the survey is well designed. These five tips—defining your purpose, knowing your audience, crafting valid questions, tracking progress, and prioritizing ethics—will help you create surveys that generate the useful data you need.

Do these tips make you think about your own program evaluations? If you have questions or want help with survey design, feel free to reach out to ALP. Check out more about our Program Impact service. We’re here to help!

