First, qualitative data analysis will be conducted in collaboration with intended users, drawing on their experience designing the program and their previous reporting. Involving them promotes conceptual use of this evaluation (Weiss, 1998) and supports collaborative evaluation and actual use (Patton, 2013). The analysis will cover the open-ended questions given to students as part of surveys, classroom observations, and teacher interviews.
Second, the evaluation will follow the four steps laid out by the Centers for Disease Control and Prevention (2018): review, organize, code, and interpret. It is essential that intended users be engaged in reviewing the data, as their responses can give a general impression of differences or similarities between this evaluation, previous evaluations (if the process is repeated cyclically), and the two reports produced by the Egale organization.
Organization of the data will be based first on the type of data collection (interview or survey), then on answers to individual questions, and finally on patterns that emerge while reviewing the data. An example appears in Figure 16 of the Egale 2021 report (Peter et al., 2021). The chart contains quantitative, self-reported data (the number of incidents not reported by CH and 2SLGBTQI students), categorized by form of physical harassment. The paragraph following the figure indicates that student responses varied and gave more specific, personalized information, which was then also categorized to show trends in responses.
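The layered organization described here (collection method, then question, then emergent pattern) can be sketched as a nested structure. All question labels, pattern names, and responses below are hypothetical placeholders, not part of the evaluation plan:

```python
# Illustrative sketch of organizing raw responses first by collection
# method, then by question, then by emergent pattern.
# Every label and response here is an invented placeholder.
organized_data = {
    "survey": {
        "q3_feelings_about_school": {
            "feels_safer": ["Response A...", "Response B..."],
            "mentions_cyberbullying": ["Response C..."],
        },
    },
    "interview": {
        "teacher_q1_classroom_climate": {
            "notes_fewer_incidents": ["Transcript excerpt..."],
        },
    },
}

# Once organized this way, responses under a given pattern can be
# counted or compared across collection methods.
safer_count = len(
    organized_data["survey"]["q3_feelings_about_school"]["feels_safer"]
)
```

The nesting mirrors the order described above, so reviewers can drill down from method to question to pattern without losing track of where a response came from.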
In terms of coding the data, the most basic approach will be “tagging” the survey results with the evaluation question(s) they are tied to. A rough version of this can be drafted before the surveys take place, although open-ended answers may shift tags or apply to multiple questions. For example, a survey question asking 2SLGBTQI students about their changing feelings towards school could easily be tagged under the evaluation question “Do 2SLGBTQI students report changes to their mental health, feeling of belonging, or expectation of future success?” But, depending on specific answers that address cyberbullying, it could also be tagged as qualitative data related to “Are there fewer incidences of anti-2SLGBTQI cyberbullying?”
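The tagging approach described here can be sketched as a simple keyword pass. The question IDs and keyword lists are hypothetical assumptions; in practice, human coders would assign and refine the tags after reviewing actual answers:

```python
# Minimal sketch of "tagging" open-ended survey responses with the
# evaluation question(s) they relate to. Question IDs and keyword
# lists are hypothetical placeholders drafted before the survey runs.
EVAL_QUESTION_KEYWORDS = {
    "EQ_wellbeing": ["mental health", "belonging", "future", "feel"],
    "EQ_cyberbullying": ["cyberbullying", "online", "social media"],
}

def tag_response(response: str) -> list[str]:
    """Return every evaluation-question tag whose keywords appear in
    the response; a single answer may carry multiple tags."""
    text = response.lower()
    return [
        question_id
        for question_id, keywords in EVAL_QUESTION_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

# A response about feelings toward school that also mentions
# cyberbullying is tagged under both evaluation questions.
tags = tag_response(
    "I feel safer at school now, but I still see cyberbullying online."
)
```

The point of the sketch is only that one answer can legitimately carry more than one tag, which is why coders should revisit the rough pre-survey tags once real responses are in.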
Then, quantitative and qualitative data can be synthesized into overall, generalized impressions tied to specific evaluation questions. Comparisons between the initial impressions from the review, previous reports, and categories within this evaluation can lead to conclusions about program success. Detailed explanations beneath each interpretation of quantitative data can enrich the findings (BetterEvaluation, 2018), explain outliers, and provide specific qualitative responses from students who might represent a typical attitude towards the topic. Ideally, the report will take shape from a shared understanding between evaluators and intended users, grounded in program clients' responses in surveys and interviews.
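The synthesis step can be sketched as pairing a quantitative summary with representative qualitative quotes for one evaluation question. All counts and quotes below are invented placeholders, not real findings:

```python
# Sketch of synthesizing quantitative counts with representative
# qualitative quotes for a single evaluation question.
# The numbers and quotes are invented placeholders.
incident_reports = [1, 0, 0, 2, 0, 1]  # self-reported incidents per student
quotes_by_tag = {
    "EQ_cyberbullying": [
        "Someone posted about me online.",
        "I haven't seen much online bullying this year.",
    ],
}

summary = {
    "question": "Are there fewer incidences of anti-2SLGBTQI cyberbullying?",
    "students_reporting_any_incident": sum(1 for n in incident_reports if n > 0),
    "total_students": len(incident_reports),
    # Representative quotes enrich the count and can explain outliers.
    "representative_quotes": quotes_by_tag["EQ_cyberbullying"][:2],
}
```

Keeping the count and the quotes in one record per evaluation question makes it easier to write the "detailed explanation below each interpretation" that the report format calls for.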
One important factor to consider is what it implies when there is a disparity between quantitative and qualitative data. It could be an issue with the evaluation, with the interpretation of responses, or – in the case of this evaluation – a result of an issue the program is dealing with directly. If archival data shows that school bullying incidents are going down, but students report feeling less safe, are both conclusions true? They could be: students could feel less supported or included even while actual bullying becomes less common for other reasons. Or, as was the case in the Egale report (Peter et al., 2021), students may not feel comfortable reporting bullying to staff for a variety of reasons that this program is trying to rectify. The evaluation needs to consider those possibilities while reviewing the data, and if qualitative and quantitative findings contradict each other, future evaluation questions should be improved to account for such conflicts.
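A simple consistency check between the two data streams can flag such divergences for closer review. The series, direction labels, and the "contradiction" rule below are illustrative assumptions only:

```python
# Sketch of flagging a quantitative/qualitative divergence for review.
# The data and the flagging rule are invented placeholders.
archival_incidents = [42, 35, 28]          # bullying incidents per term
perceived_safety_scores = [3.4, 3.1, 2.9]  # mean survey rating per term

def trend(series: list[float]) -> str:
    """Crude direction label comparing the last value to the first."""
    return "down" if series[-1] < series[0] else "up_or_flat"

# Fewer incidents (positive) alongside falling perceived safety
# (negative) contradicts expectations, so the pair is flagged for
# human interpretation rather than treated as an error automatically.
flag_for_review = (
    trend(archival_incidents) == "down"
    and trend(perceived_safety_scores) == "down"
)
```

The flag only routes the contradiction to the evaluators and intended users; deciding whether it reflects a measurement problem or a real program issue remains an interpretive judgment.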
Centers for Disease Control and Prevention. (2018, August). Analyzing qualitative data for evaluation (Evaluation Briefs No. 19). Retrieved June 11, 2022, from https://www.cdc.gov/healthyyouth/evaluation/pdf/brief19.pdf
BetterEvaluation. (2018, August 29). Combine qualitative and quantitative data. https://www.betterevaluation.org/en/rainbow_framework/describe/combining_qualitative_and_quantitative_data
Bamberger, M. (2012). Introduction to mixed methods in impact evaluation (No. 3). InterAction. Retrieved June 14, 2022, from https://www.interaction.org/wp-content/uploads/2019/03/Mixed-Methods-in-Impact-Evaluation-English.pdf
Patton, M. Q. (2013). Utilization-focused evaluation for equity-focused and gender-responsive evaluations. [Video]. YouTube. https://www.youtube.com/watch?v=jQP1FGhxloY
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–33.
Peter, T., Campbell, C.P., & Taylor, C. (2021). Still every class in every school: Final report on the second climate survey on homophobia, biphobia, and transphobia in Canadian schools. Toronto, ON: Egale Canada Human Rights Trust.