Evaluation Manual

Appendix A: Glossary of Evaluation Terms 

  • Census: The complete population of intervention recipients.
  • Continuous Quality Improvement: The systematic process of improving programs and services through an ongoing cycle of planning an intervention, implementing it, evaluating its implementation and effectiveness, and acting on the evaluation findings to make improvements.
  • Cross-sectional designs: A type of evaluation design that involves the collection and analysis of data at a single point in time.
  • Evaluation capacity: An individual’s or organization’s ability to understand evaluation concepts, meaningfully engage in evaluation, and use the evaluation findings to improve services provided.
  • Evaluation stakeholder: Individuals or organizations that are interested or invested in the program and the findings from the evaluation. Stakeholders typically include those involved in implementing the program, those served by the program, and intended users of the evaluation findings.
  • Experimental designs: A type of evaluation design that uses random assignment to create equivalent treatment and control groups, allowing the evaluator to assess impact or effect. Potential participants are randomly assigned to either the treatment group (those receiving services) or the control group (those not exposed to the program or treatment). Random assignment enhances the likelihood that the groups are equivalent at baseline, so that post-intervention differences between the treatment and control groups can be attributed to the intervention itself (a minimal randomization sketch follows this glossary).
  • Formative evaluation: A type of evaluation in which the educator explores the operations or processes of a financial education intervention as it is implemented; also referred to as process evaluation. As opposed to a summative evaluation, which explores outcomes, a formative evaluation is conducted to help educators decide whether the program is meeting the needs of program recipients, whether the activities implemented are of high quality, and whether any improvements are required.
  • Inferential statistics: Statistics used to make statements about a population based on a sample of participants and/or to judge whether statistical findings are due to chance or to actual differences between groups (see the t-test sketch after this glossary).
  • Institutional review board (IRB): A committee, appointed by the university administration and composed of community and legal experts as well as scientists from across departments, that evaluates, approves, and monitors all research projects in the institution with respect to ethical requirements and practices (less formally known as the “human participants committee”). No research involving human participants can be performed prior to IRB approval.
  • Logic model (or theory of change model): A visual representation of the logical relationships among the resources invested, the activities that take place, and the benefits or changes that result. The logic model depicts the programming process in graphical form to help clarify what should be implemented to create the intended changes in financial education outcomes for participants.
  • Longitudinal designs: A type of evaluation design that involves the collection and analysis of data at multiple points in time (repeated observations).
  • Nonexperimental designs: An evaluation design focusing on describing the program or intervention and its associated outcomes, and on exploring correlational relationships between variables of interest. These designs are most useful when experimental designs are not feasible or possible.
  • Qualitative data: Verbal information or descriptions that are categorical rather than numerical, and often include attitudes and perceptions.
  • Quantitative data: Countable, numerical data.
  • Quasi-experimental designs: An evaluation design that employs a matched comparison group rather than randomization to create treatment and control groups as in an experimental evaluation. Instead of a control group, a comparison group is developed by matching participants and nonparticipants on critical variables of interest (see the matching sketch after this glossary). These designs can approximate the findings of experimental designs, although there is less confidence that the findings can be attributed to the program or intervention.
  • Reliability: The consistency and stability with which a measure assesses a given construct.
  • Request for proposal (RFP): A document outlining the pertinent information about a desired future evaluation to request proposals from evaluators interested in conducting the evaluation.
  • Sample: A part of a larger population of intervention recipients.
  • Summative evaluation: A type of evaluation exploring the learner outcomes or achievements that result from participation in the intervention. Summative evaluation is conducted to help the educator document the participant outcomes associated with or attributed to financial education programs. It provides data to understand whether a program is effective in promoting learning about financial education concepts, including the actual and perceived benefits associated with services.
  • Triangulation: Using multiple sources of data or information that corroborate or complement one another to confirm the evaluation findings.
  • Validity: The degree to which an instrument measures the construct it is intended to measure.
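
The random assignment described under Experimental designs can be illustrated with a minimal sketch in Python. The participant identifiers and group sizes below are hypothetical; a real evaluation would randomize the full roster of potential participants.

    import random

    # Hypothetical roster of potential participants.
    participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

    # Shuffle so every participant has an equal chance of landing in either
    # group, then split the roster: the first half receives the intervention
    # (treatment group), the second half does not (control group).
    random.seed(42)  # fixed seed only so the assignment can be reproduced
    shuffled = random.sample(participants, k=len(participants))
    midpoint = len(shuffled) // 2
    treatment_group = shuffled[:midpoint]
    control_group = shuffled[midpoint:]

    print("Treatment:", treatment_group)
    print("Control:  ", control_group)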
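
The judgment about chance mentioned under Inferential statistics is often made with a significance test. The sketch below applies an independent-samples t-test to two groups of post-test scores; the scores are invented purely for illustration, and the 0.05 threshold is a common convention rather than a rule.

    from scipy import stats

    # Invented post-test financial knowledge scores, for illustration only.
    treatment_scores = [78, 85, 90, 72, 88, 81, 93, 76]
    control_scores = [70, 74, 68, 77, 71, 73, 69, 75]

    # Independent-samples t-test: a small p-value suggests the observed
    # difference between group means is unlikely to be due to chance alone.
    t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:  # conventional, but not universal, threshold
        print("Difference unlikely to be due to chance alone.")
    else:
        print("Difference could plausibly be due to chance.")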
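
The matching step described under Quasi-experimental designs can be sketched as simple nearest-neighbor matching. The records and matching variables (age and annual income) are hypothetical and the distance measure is deliberately crude; real evaluations typically rely on more principled methods, such as propensity score matching.

    # Match each program participant to the most similar nonparticipant.
    participants = [
        {"id": "T1", "age": 34, "income": 42_000},
        {"id": "T2", "age": 51, "income": 58_000},
    ]
    nonparticipants = [
        {"id": "C1", "age": 33, "income": 40_000},
        {"id": "C2", "age": 49, "income": 61_000},
        {"id": "C3", "age": 27, "income": 35_000},
    ]

    def distance(a, b):
        # Crude similarity measure: one year of age counts roughly as
        # much as $1,000 of annual income.
        return abs(a["age"] - b["age"]) + abs(a["income"] - b["income"]) / 1000

    comparison_group = []
    available = list(nonparticipants)
    for person in participants:
        match = min(available, key=lambda c: distance(person, c))
        available.remove(match)  # each nonparticipant is matched at most once
        comparison_group.append((person["id"], match["id"]))

    print(comparison_group)  # [('T1', 'C1'), ('T2', 'C2')]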