What type of evaluation should be conducted?
The evaluation questions and priorities should inform the type of evaluation conducted. In a simplified sense, there are two primary types of evaluation: formative evaluation and summative evaluation.
As the table below demonstrates, if you are using the evaluation to examine the needs of participants, the adequacy of resources, or the quality of implementation to aid learning and program improvement, a formative evaluation is recommended. If you are interested in documenting the benefits of the program to determine its effectiveness and impact, a summative evaluation is recommended. Evaluations can, and often do, include components of both types depending on the evaluation questions and priorities. In fact, it is useful to explore implementation (as is common in formative evaluations) even when you are primarily interested in program effectiveness (summative evaluation), because information about the quality of services helps explain why a particular project or intervention was or was not successful. When evaluations include both formative and summative priorities, best practices in implementation can be identified to promote participant outcomes most effectively. The table below summarizes these types of evaluations and the designs most commonly employed with each.
| Evaluation Purpose | To aid learning and continuous improvement | To demonstrate effectiveness and impact |
| --- | --- | --- |
| Type of Evaluation | Implementation/Formative Evaluation: assess needs of participants; assess adequacy of resources, materials and inputs; identify challenges, issues and barriers to high-quality implementation; understand the quality and fidelity of services provided; document and maximize strengths | Outcome/Summative Evaluation: understand actual and perceived benefits associated with services; determine the effectiveness and impact of the intervention; answer questions about what works and for whom; assess cost-effectiveness |
| Program Types | Appropriate for projects in early implementation stages and throughout the project lifespan | Appropriate for mature programs or those later in implementation stages |
| Evaluation Foci | Focused on resources, activities and outputs | Focused on outcomes (short-term, intermediate and/or long-term) and impact |
| Evaluation Design | Descriptive; correlational | Experimental; quasi-experimental |
| Data Collection Methods | Needs assessment; observations; focus groups/interviews; surveys; document review; dosage/attendance | Surveys; tests or assessments; focus groups/interviews |
| Who should we collect data from? | Program participants; program staff/instructors | Program participants and a comparison or control group of nonparticipants |
A good evaluation design addresses the evaluation questions, is appropriate for the evaluation context (e.g., time, resources), and provides sufficient and critical data. The table above briefly summarizes the designs and common methods typically involved in formative and summative evaluation. This list of methods and designs is not exhaustive, but it provides a starting place for selecting a design and methods aligned with the evaluation questions and priorities. As with the other decisions made during the evaluation, the educator should consider several factors before selecting data collection methods, including available resources, data sources that already exist or are being collected, practicality, feasibility (particularly given the expertise of the evaluation team), and funding.
As the table below demonstrates, there are three primary evaluation designs, each suited for distinct evaluation goals and contexts. Evaluations conducted by financial educators will likely employ descriptive or correlational designs, given that more sophisticated evaluation designs require funding, resources and experimental control that are commonly lacking at this level. Unless the educator possesses a high level of knowledge about experimental and quasi-experimental designs, it is recommended that educators seek assistance from a professional external evaluator to conduct these types of evaluations.
| Nonexperimental Designs (Descriptive, Correlational) | Experimental Designs | Quasi-Experimental Designs |
| --- | --- | --- |
| Explores relationships among participation, implementation and outcomes; does not provide evidence that the outcomes were the result of the program (attribution) or that the program caused the outcomes (directionality); examples: post-test only, pre-post change; most common for formative evaluations | Uses random assignment to create equivalent groups; compares outcomes across treatment and control groups; appropriate for impact/attribution questions; eliminates threats to internal validity | Develops a matched comparison sample instead of using random assignment; compares outcomes between the group that receives services and the comparison group; reduces threats to internal validity |
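To make the contrast concrete, the sketch below shows how an experimental (or quasi-experimental) evaluation might compare post-test outcomes between participants and a control or comparison group. This is an illustrative sketch only, not a prescribed method: the scores and group sizes are hypothetical placeholders, and it assumes the freely available scipy library.

```python
# Illustrative sketch only: comparing post-test outcomes between a
# randomly assigned treatment group and a control group (experimental
# design). All scores below are hypothetical placeholders for data an
# evaluator would actually collect.
from scipy import stats

treatment_scores = [72, 85, 78, 90, 66, 81, 88, 74]  # program participants
control_scores = [70, 68, 75, 72, 64, 71, 69, 73]    # nonparticipants

# An independent-samples t-test asks whether the difference in mean
# outcomes between the two groups is larger than chance would explain.
result = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

With random assignment, a difference detected this way can be attributed to the program; with a matched comparison sample (quasi-experimental), the same computation applies but attribution claims are weaker.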
Another consideration is when and how often to collect data. Evaluation designs that collect data at only one time point are called cross-sectional designs. The most common cross-sectional approach is to collect data only at the end of the program or intervention (post-test only). These designs limit your ability to examine changes in participant outcomes over time; however, a comparison sample can still be used to explore differences between participants and nonparticipants after the program or intervention. The alternative is a longitudinal design, meaning that evaluation data are collected at two or more time points during the evaluation. The benefit of longitudinal data collection is the ability to explore how outcomes change over time, such as comparing outcomes before the intervention (pre-test or baseline) to outcomes after the intervention (post-test).
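As a companion to the sketch above, the following hedged example illustrates a simple longitudinal (pre-post) comparison: each participant's baseline score is paired with their post-test score, and a paired t-test examines whether scores changed over time. Again, all values are hypothetical placeholders.

```python
# Illustrative sketch only: a longitudinal (pre-post) comparison for a
# single group. Scores at the same index belong to the same participant;
# all values are hypothetical placeholders.
from scipy import stats

pre_scores = [55, 60, 48, 70, 62, 58]   # baseline, before the intervention
post_scores = [68, 71, 55, 78, 70, 66]  # after the intervention

# Average pre-post change across participants.
mean_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

# A paired-samples t-test asks whether the change over time is larger
# than chance would explain.
result = stats.ttest_rel(post_scores, pre_scores)
print(f"mean change = {mean_change:.1f}, t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

Note that a pre-post change alone is still a nonexperimental result: it shows that outcomes changed, but not that the program caused the change. Pairing longitudinal data collection with a comparison group strengthens any causal interpretation.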