The main purpose of this evaluation is learning: generating knowledge and improving the program's organization. However, the first way this evaluation can be used is "symbolic" (Weiss, 1998). A purely symbolic evaluation might have little to no value on its own, but if the program is working as intended, the positive effects will be evident in the collected data.
Positive results can help sustain or increase funding and resources from outside sources and can demonstrate effective program design strategies to other, similar programs. After receiving funding from the Ontario government (Government of Ontario, 2021), further evaluation reporting could "legitimate" (Alkin & Taut, 2003) that funding to stakeholders and "upstream impactees" (in this case, taxpayers) (Scriven, 2007).
The second use is conceptual: changing stakeholders' understanding of their own program and, by gathering data and drawing conclusions, improving the overall understanding of 2SLGBTQI student issues and of the field of evaluation (Alkin & Taut, 2003).
The third, instrumental use (Alkin & Taut, 2003) is more complicated. Because this is primarily an outcome evaluation, some of the evaluation questions yield a simple "yes" or "no" with further implications. Consider "Did students' mental health improve over the course of the program implementation?" If the answer is "yes," the follow-up needs to determine (1) to what extent students benefited, (2) whether they benefited as a direct result of program implementation, and (3) whether resources should be allocated differently based on success on specific questions. The second follow-up poses a problem, as data collection did not focus on the actual day-to-day actions taken by program implementers.
After reviewing my outcome evaluation plan, my sense is that this evaluation needs to take, ideally, the holistic approach discussed by Chen (2005): one that considers context and environmental factors but also evaluates the program design (formative), the implementation of activities (process), the achievement of objectives (outcome), and finally the long-term effects (impact) of the program (Centers for Disease Control and Prevention, n.d.). My outcome evaluation design could have direct symbolic and conceptual uses, but its concrete, instrumental use is limited to pointing toward process issues or suggesting design flaws without proving a direct link from inputs to outcomes or from design to impact (Bhasin, 2021).
Alkin, M. C., & Taut, S. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29(1), 1–12.
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–33.
Government of Ontario. (2021, June 15). Ontario supporting 2SLGBTQI+ students. Retrieved May 14, 2022, from https://news.ontario.ca/en/release/1000346/ontario-supporting-2slgbtqi-students
Scriven, M. (2007). Key evaluation checklist [PDF]. Retrieved June 10, 2022, from https://wmich.edu/sites/default/files/attachments/u350/2014/key%20evaluation%20checklist.pdf
Bhasin, H. (2021, January 30). Outcome evaluation – Meaning, strategies, characteristics, advantages & limitations. Marketing91. https://www.marketing91.com/outcome-evaluation/
Chen, H.-T. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage Publications.
Centers for Disease Control and Prevention. (n.d.). Types of evaluation. https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf