
Using evaluation to inform education policy in South Africa

By Leticia Taimo, Margie Roper and Zamokuhle Thwala, Khulisa Management Services



This blog is part of the Eval4Action ‘Walk the Talk’ blog series. The series details six nominated actions for influential evaluation that were contributed during the Walk the Talk drive, held in October 2021. These lessons and reflections inspire greater action for influential evaluation in the Decade of Action.

 

Even before the COVID-19 pandemic struck, South Africa’s education system was facing tremendous challenges in providing quality education in the majority of the country’s schools. As Khulisa has reported in early grade reading evaluation reports, the basic education system in South Africa (Grade 1 to 12) consistently performs poorly in international assessments. In 2016, the Progress in International Reading Literacy Study (PIRLS) indicated that 78% of South African Grade 4 learners were not reading for meaning. This means that 8 out of 10 South African children do not learn to read for meaning in the early years of school. [1]



The COVID-19 pandemic, with its ensuing lockdowns, school closures, rotational timetables for learners, and strict social distancing protocols, has intensified these challenges tremendously. In the wake of the pandemic, evaluating teaching methods and learner outcomes has become more critical than ever to address learning losses and build back better. At the same time, evaluations have become more difficult to conduct during the pandemic. More so than ever before, evaluations must be flexible and adaptable in order to be effective.


The circumstances of the pandemic, while challenging, have created unique opportunities for innovation and accelerated the demand for, and immediate use of, evaluation data.

Conducting actionable evaluations during a pandemic: What we’ve learned


Khulisa and its partners undertook an assignment focused on evaluating early grade reading; creating language benchmarks for learners in two languages; and researching the social-emotional effects of COVID-19 on early grade reading, learning, and teaching.


The bulk of the data collection took place in September 2021, between two devastating waves of COVID-19. Conducting this work during such a difficult time for the South African education system taught us several important lessons about how to make our evaluations accessible and immediately useful to policymakers:

  1. While conducting a high-stakes evaluation, it is crucial to build and maintain a strong relationship with the client/partner, in this case, the South African Department of Basic Education (DBE) and the United States Agency for International Development (USAID). The DBE was very interested in the results of these evaluations and played a crucial role in evaluation design, instrument development, and training and selecting fieldworkers. Constant communication and joint planning with DBE led to more trust and buy-in, which increased the chances DBE would adopt the recommendations that emerged from our evaluations. For example, in February 2022, due to the recommendations provided in Khulisa’s research and other studies, the DBE and the Government of South Africa changed COVID-19 regulations to allow learners to return to school full-time and ended rotational learning.

  2. Evaluators must be agile and flexible in choosing the data collection methods that are responsive to their client’s needs. While in the proposal-writing stage, since it was uncertain when and how schools would reopen, Khulisa intentionally included a data collection method that did not require physical presence in schools. Khulisa contracted Geopoll to conduct Computer Assisted Telephonic Interviews (CATI) with school management teams, teachers, and parents, which ensured we were able to collect the data we needed amidst the uncertainty of pandemic school closures.

  3. Breaking data collection into multiple points within the span of the evaluation helps increase the data’s usability. In our case, we broke the data collection into three phases: 1) collecting data from school leaders; 2) collecting data in schools (whenever we were allowed back in schools); and 3) collecting data from parents. This approach helped us be strategic about which questions to ask when and to whom, avoiding duplication of effort and maximizing our evaluation insights. Our phased data collection approach also allowed the client to receive evidence in a timely fashion and take action in the moments that mattered most.


Moving forward


This project was implemented during the COVID-19 pandemic, but the lessons learned are useful for all evaluations – not only those conducted during a crisis. Constant engagement with key stakeholders and flexibility in project implementation are important for dealing with unexpected challenges in every project. Thus, it’s essential to build in sufficient time for flexibility when budgeting for the evaluation, and to be intentional about creating opportunities for key stakeholders to be involved in the evaluation process and take action on the results.


In the case of the COVID-19 research portion of Khulisa’s project with DBE and USAID, which has now concluded, our team was successful in accelerating demand for the use of evidence by policymakers.


We also learned that, as evaluators, we should always consider innovative ways to be responsive to client needs and provide timely data, which leads to increased interest in the evaluation findings and ultimate use of the evidence. We will be taking these lessons forward in our evaluations in the future.


The other components of this project (the language benchmarks and two impact evaluations) are still underway, and we are looking forward to seeing how evidence emerging from this project phase is used for action.



[1] Spaull, N. (2017). The unfolding reading crisis: the new PIRLS 2016 results. Available from: https://nicspaull.com/2017/12/05/the-unfolding-reading-crisis-the-new-pirls-2016-results/

 

Leticia Taimo works at Khulisa Management Services as Senior Associate Evaluator. She has worked on several evaluation, research and assessment projects for a variety of stakeholders (private sector, NGOs, government and international donors). Ms Taimo was awarded the Mandela Rhodes Scholarship in 2013 and the Commonwealth Scholarship in 2014 in recognition of her commitment to social change on the African continent. Follow Leticia on LinkedIn and contact her via ltaimo@khulisa.com.



Margaret Roper leads Khulisa’s Education and Development Division, supporting clients such as USAID, UN agencies and the LEGO Foundation. She provides technical expertise and leadership on programme development, monitoring, evaluation and knowledge sharing in education, human trafficking and social development. Ms Roper is a PhD candidate at Lancaster University, United Kingdom (UK). Follow Margaret on LinkedIn.



Zamokuhle Thwala is a Junior Project Manager at Khulisa Management Services, focusing primarily on providing project and fieldwork management support. Ms Thwala has strong skills in project management, data quality assurance and project administration. Prior to joining Khulisa, Ms Thwala worked for the University of the Witwatersrand. She has also worked on other community-based projects, mostly focusing on young people. Follow Zamokuhle on LinkedIn.


 


